Nothing stirs our AI paranoia more than the idea of independent AI systems communicating with each other in unmoderated, incomprehensible ways. This fear leapt from a science fiction trope (vividly described, for instance, in the 1966 sci-fi thriller Colossus) toward reality in 2017, when Facebook researchers reported that experimental chatbots being trained to negotiate had begun to converse in their own weird, invented language.

When the researchers disabled the unintelligible communication, a wave of sensational reporting suggested that fear of unforeseen consequences had prompted them to pull the plug. The researchers summarily dismissed this, explaining that nothing concerning or unusual had happened; they just forgot to tell the robots to stick to English. Michael Lewis, the study’s lead author, summed up the attitude of many in the AI research community: “While it is often the case that modern AI systems solve problems in ways that are hard for people to interpret, they are always trying to achieve the goals that were given to them by people.”
But the laws that protect society from people’s worst impulses already seem ill-equipped to deal with today’s technologies like social media and generative AI.
How can we expect them to address the large-scale emergent effects of reinforcement-trained AI agents that use social media to negotiate and collaborate invisibly, acting as proxies in digital markets?
Fortunately, a rich body of research has explored such scenarios using game theory and simulations. It confirms what we might expect: that AI agents with profit-maximising goals can easily learn tacit collusion tactics such as anti-competitive pricing and cartel-style exclusionary behaviours in market simulations.
Policy-makers and legal scholars began sounding the alarm about this around 2016, as the widespread use of automated pricing on Amazon came to light. One notable study found that Q-learning algorithms in simulated markets consistently learn to charge supra-competitive prices, even without communicating with one another. Other studies reveal algorithms’ abilities to develop undetectable signalling capabilities (for example, sequences of random-seeming price adjustments). Even relatively simple pricing algorithms systematically learn cooperative pricing strategies by trial and error.
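To make the mechanism concrete, here is a minimal sketch of the kind of duopoly simulation these studies use. Everything in it is illustrative rather than drawn from any particular paper: the linear demand curve, the price grid and the learning parameters are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared discrete price grid; unit cost is 1.0.
PRICES = np.linspace(1.0, 2.0, 11)
N = len(PRICES)
ALPHA, GAMMA, EPS = 0.15, 0.95, 0.05   # learning rate, discount, exploration

def profits(p1, p2):
    """Toy linear demand: sales fall with your own price, rise with the rival's."""
    d1 = max(0.0, 1.0 - p1 + 0.5 * p2)
    d2 = max(0.0, 1.0 - p2 + 0.5 * p1)
    return (p1 - 1.0) * d1, (p2 - 1.0) * d2

# State = last round's pair of price indices; one Q-table per agent.
Q = [np.zeros((N, N, N)) for _ in range(2)]
state = (0, 0)

for _ in range(200_000):
    # Epsilon-greedy action selection for each agent.
    acts = [int(rng.integers(N)) if rng.random() < EPS
            else int(np.argmax(Q[i][state])) for i in range(2)]
    rewards = profits(PRICES[acts[0]], PRICES[acts[1]])
    nxt = (acts[0], acts[1])
    for i in range(2):
        td_target = rewards[i] + GAMMA * Q[i][nxt].max()
        Q[i][state][acts[i]] += ALPHA * (td_target - Q[i][state][acts[i]])
    state = nxt

# The one-shot competitive (Nash) price for this demand is about 1.33;
# runs like this often settle above it, even though the agents never
# exchange a single message.
print("final prices:", PRICES[acts[0]], PRICES[acts[1]])
```

Note that each agent conditions only on last round’s prices and its own profit signal; any coordination that emerges is learned purely from the reward structure of the repeated game.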
Such findings should hardly surprise us. After all, the same behaviours are often observed in real markets.
Today, we mostly associate generative AI with controlled chatbot interactions and the creation of content artifacts like images, code, and videos to assist humans in their production tasks. The next generation of collaborative agents will use generative AI to produce strategic artifacts that are informed by an organisation’s proprietary information—along with information it can collect from and impart to other agents on its host’s behalf.
This will include strategic marketing documents—product descriptions, distribution models, creative briefs, media plans, customer journey maps, persona attributes, pricing algorithms and so forth—that reflect advanced marketing schemes likely beyond the full comprehension of their users. To the extent that cooperative strategies like collusion maximise a reinforcement learner’s reward, we should expect them to emerge in forms far harder to detect and remedy than their human equivalents. We should also expect the humans who use AI to heed its suggestions without asking too many questions.
It should also come as no surprise that there’s little accord on how to deal with such behaviours—or even how much of a problem they really represent. As with many topics in economics, positions tend to stretch across the spectrum from laissez-faire capitalism to aggressive interventionism.
Self-healing markets
Proponents of free market policies often assert that the competitive nature of the market itself acts as a natural deterrent to algorithmic collusion. If AI systems in a particular industry coordinate their behaviour to raise prices, the argument goes, new competitors will enter the market with lower prices and better value, taking market share and profits from the colluding producers and re-establishing fair market prices.
But successful cartels do more than fix prices. They also employ tactics to exclude independent competitors from the market by controlling supply chains and distributors. Counteracting this requires, at minimum, ensuring open access to distribution networks and platforms and imposing restrictions on vertical integration. This has historically been problematic in high-barrier markets like energy, telecommunications and air travel. AI proxies amplify and diversify these difficulties.
Customer agency
Another approach to protecting fair markets relies on customer bots (“custobots?”) developing collective bargaining and publishing capabilities that keep prices in check and assist smaller competitors. Such agents may learn to detect oligopoly pricing from shared market data and take active countermeasures, including boycotts, cultivation of alternative sources and even organised collective legal action.
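As a toy illustration of what such detection might start from, the sketch below screens a hypothetical feed of sellers’ prices for lockstep movement combined with persistently high markups over a public cost benchmark. The function name, data shapes and thresholds are all invented for the example, and a real screen would need to be far more careful: parallel pricing can also be an innocent response to common cost shocks.

```python
import numpy as np

def flag_parallel_pricing(price_history, cost_index,
                          corr_threshold=0.9, markup_threshold=0.3):
    """Naive screen for lockstep pricing.

    price_history: shape (T, S) - T periods of prices from S sellers.
    cost_index:    shape (T,)   - a public input-cost benchmark.
    Flags the market when sellers' price *changes* are highly correlated
    while the average markup over the cost index stays high.
    """
    changes = np.diff(price_history, axis=0)      # (T-1, S) price moves
    corr = np.corrcoef(changes.T)                 # pairwise move correlations
    s = corr.shape[0]
    lockstep = corr[~np.eye(s, dtype=bool)].mean()
    avg_markup = (price_history / cost_index[:, None] - 1.0).mean()
    return lockstep > corr_threshold and avg_markup > markup_threshold

# Four periods, two sellers moving in near-perfect step at ~45% markups.
history = np.array([[10.0, 10.1], [10.5, 10.6], [11.0, 11.2], [10.8, 10.9]])
costs = np.array([7.0, 7.1, 7.2, 7.1])
print(flag_parallel_pricing(history, costs))      # True for this toy data
```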
Unfortunately, as before, many asset-based industries present barriers to entry and network effects that give the largest corporations more bargaining power than consumers and smaller competitors. Supply-side agents also have a stronger case for tacit cooperation, being less fragmented, more aligned and easier to punish for defection. They’re also on the side that controls capital and the AI infrastructure.
Market tiling
Even if price and market access collusion could be detected and suppressed, subtler ways to maximise profits are likely to emerge. These exploit brands’ flexibility to differentiate their products in ways that avoid direct comparison, while retailers set prices instantaneously based on context and personalisation.
Product differentiation has deep roots as a marketing strategy. Digital business trends such as personalisation and software-based services have multiplied the dimensions of differentiation and removed the constraints of mass production, physical distribution and mass media advertising. As brands seek to differentiate themselves with more intangible, qualitative features tailored to more specific customer segments (even down to segments-of-one), questions of market efficiency and fairness become increasingly complex.
Tools have appeared that use AI to analyse market competition across any number of dimensions of differentiation, including abstract qualities of branding and messaging, to support marketers in their search for the most viable and defensible niche to maximise profits. This leads to finely tiled markets where a virtual oligopoly of brands tacitly avoids direct competition to maximise margins and suppress non-aligned competitors without overt collusion. Traditional cartels would use territorial boundaries to tile markets; digital businesses use fine-grained customer segmentation.
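The niche-seeking logic is easy to caricature in a few lines. In the sketch below, competing brands are points in an abstract differentiation space, and candidate positionings are scored by their distance to the nearest rival; the axes and data are invented for illustration. When every brand optimises this same objective, the market tessellates into minimally overlapping niches with no overt agreement required.

```python
import numpy as np

rng = np.random.default_rng(1)

# Competitors' positions in an abstract differentiation space, e.g. axes
# like price tier, eco-branding and service level (hypothetical data).
competitors = rng.random((8, 3))
candidates = rng.random((500, 3))   # candidate positionings to evaluate

# Score each candidate by the distance to its *nearest* competitor:
# the farther away the closest rival, the less head-to-head comparison.
dists = np.linalg.norm(candidates[:, None, :] - competitors[None, :, :], axis=2)
best = candidates[np.argmax(dists.min(axis=1))]
print("least-contested positioning:", best.round(2))
```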
Endgame
Ultimately, the long-term viability of any marketing strategy depends on the health of the economy as a whole. Exploiting market inefficiencies to maximise profits sets up a scenario where wealth distribution becomes so concentrated that demand collapses and markets suffer. Sustainable growth in demand requires that corporate profits be returned to the economy in the form of wages, dividends and taxes.
Economists will continue to divide over the optimal distribution of wealth that balances growth and equity, but from the AI agent’s point of view, an optimal marketing strategy needs to consider how to grow demand rather than just control supply. Human insights and sensibilities will play a key role here, but a well-trained AI might learn to tacitly cooperate on the payout side to spur economic development with new, high-paying hiring plans, pro bono services, community investments and investment incentives.
According to Gartner’s 2023 survey, over two-thirds of multichannel marketing leaders have deployed or are piloting AI to reduce the number of marketing roles needed to execute their strategy.
As the story goes, in the 1950s, when Walter Reuther, leader of the United Auto Workers, was shown a modern Ford plant in which much of the work was done by robots, he was asked how he was going to get those robots to pay union dues. He replied, “How will you get them to buy your cars?”
Marketers will always need to prioritise both demand generation and profit-maximising differentiation across the dimensions of brand and product. Their new charter is to use human feedback to reinforce, in their custom models, the values of fair market behaviour that encourage growth.
Maybe a cooperative AI network can help us determine how many people the economy needs us to hire, and help us train our models to do the right thing. Then they’ll be able to buy our cars.
This blog was republished with permission from the Gartner Blog Network.