It’s almost two years since the launch of ChatGPT, and the proliferation of AI tools in marketing has accelerated since, to the point that there are genuine questions as to whether automation can outright replace human roles in the near future. This is a potentially provocative, and even uncomfortable, question for strategists like myself, but the pace at which things are moving means it’s one we can’t ignore.
Rise of the Machines
It’s all too easy to forget that AI isn’t exactly new; from Google Ads automated bidding to the influence of social media algorithms on content development, ever more sophisticated systems have revolutionised almost every aspect of marketing and gradually automated away whole swathes of human input.
In many cases, this is almost certainly a good thing. For instance, AI excels in analysing and processing data far more accurately and efficiently than humans, which in turn allows for optimisation tweaks to be made in real-time. If we’re to take Google’s case studies at face value, this has given advertisers a more than 20% increase in conversion volume and reduced account management time by around 80%.
There’s also plenty to suggest that AI isn’t limited to delivering efficiency at a tactical level. A 2024 study by researchers at the University of Cambridge found that AI outperformed human executives at a corporate strategy decision-making game, including product design, cost control, and responding to market signals. Generative AI is therefore already capable of leading strategy when given high-quality data, and it seems to get better results too.
I’m sorry, Dave. I’m afraid I can’t do that
So does this spell the end of the marketing strategist? Will generative AI kill us off? Well, there are still a fair few limits to what it can do in terms of strategy, with two areas in particular where it falls short of a human: forward thinking and genuine innovation.
By their very nature, Large Language Models (LLMs) base their decisions on historic data, and they’re hard-wired to focus all of their energies on the task in front of them. Of course, anyone who’s seen 2001: A Space Odyssey knows that means they’ll achieve that task with ruthless efficiency, but it will come at the expense of any broader, longer-term thinking. This was reflected in the Cambridge study, where the generative AI locked into a short-term mindset and failed to change course when ‘black swan’ events were sent its way. And it doesn’t even have to be a seismic, unpredictable event: an LLM can only make decisions based on the quantity and quality of its training data, so where this is lacking (which, if we’re being honest, is the case for the vast majority of companies), the strategy will likely fall flat.
This “driving by only looking through the rearview mirror” approach is also at the heart of why generative AI can’t truly innovate. It is the epitome of being ‘data-led’ which, as I’ve written before, isn’t actually as desirable as it first sounds. Truly effective strategy is more nuanced and intuitive than a simple black-and-white interpretation of facts and figures; it’s a blend of art and science to find creative solutions which others haven’t. An LLM will give you a brilliantly reasoned and logical answer based on all of the data available, but it won’t be able to make the odd leaps and connections that humans make. And it is precisely because the human mind is easily distracted from the task at hand that it goes down these rabbit holes, so generative AI’s immense power of focus is also its greatest weakness in this regard.
Sir, I can provide a distraction
The good news, at least for those of us who would need to seek a new livelihood otherwise, is that generative AI can’t replace strategists entirely. In many respects, it’s just another tool in the toolbox. It can take over the heavy lifting when it comes to data and help to identify opportunities based on predictive analysis which may not be apparent to a human. That frees up strategists to focus on longer-term goals, playful ideation, and solving complex problems.
Why is this important?
This best-of-both-worlds scenario depends on building a robust data infrastructure that will allow the LLM to be a genuinely valuable strategic assistant, and this is likely the first step for most businesses right now. However, I’d be willing to bet that many will forgo this step and fall into the tactification trap. Those who take it, providing the environment for a collaborative process between AI and human strategists, will be the ones who produce the most successful and sustainable strategies of the future.