The Art of the Pivot & Adaptation

What Established Tech Giants Are Learning About AI and Their Customers: The measure of a company is not whether it stumbles in the age of artificial intelligence, but how swiftly and honestly it finds its footing again

There is an old adage in boardrooms from Silicon Valley to the City: markets punish failure, but they punish arrogance more. Nowhere has this truth been more vividly demonstrated in recent years than in the scramble by established software companies to graft artificial intelligence onto decades-old business models, and, in several notable cases, to reckon with the consequences of getting it badly wrong.

Adobe, Microsoft, Salesforce, Google, Amazon, IBM, OpenAI and Apple have each, in their own fashion, misjudged the pace, appetite or ethics of AI adoption among their paying customers. The failures are varied in character: some are ethical, some technical, some commercial, and at least one is the quieter failure of excessive caution. But the question that now separates those destined to thrive from those sliding toward irrelevance is deceptively simple: are you actually listening?

Microsoft: The Copilot Credibility Gap

Microsoft’s AI difficulties run considerably deeper than the Recall privacy debacle that attracted headlines in 2024. That episode, in which a feature designed to create a searchable visual history of everything a user had ever done on their computer was found to store sensitive data with inadequate protection, and was swiftly pulled pending a redesign, was in many respects the more forgivable stumble. A technical misjudgement, recognised quickly, corrected with reasonable speed and genuine engineering rigour. The climbdown was handled with grace, and the subsequent rebuild, featuring encryption and opt-in defaults, demonstrated that the organisation could move when it needed to.

The larger and more consequential failure is Copilot. Microsoft’s flagship AI proposition, woven across Microsoft 365, Windows, Teams and a proliferating family of related products, was positioned as nothing less than a transformation of the modern workplace. At thirty dollars per user per month, layered on top of existing Microsoft 365 licences, the commercial ambition was explicit and the bar for demonstrable return on investment was always going to be formidably high.

What enterprise customers found in practice bore insufficient resemblance to what they had seen in the demonstrations. Meeting summaries that missed the point. Documents generated with enough plausibility to require careful checking but enough error to undermine trust. Spreadsheet assistance that hallucinated references. Organisations that had rolled out Copilot to hundreds or thousands of employees during pilot programmes found themselves quietly scaling back, their IT departments fielding complaints from staff who had been promised a productivity revolution and received instead a moderately useful autocomplete function with an expensive price tag.

Analyst firms noted the gap between expectation and delivery with unusual directness. Microsoft’s response has been a restructuring of the Copilot offering into tiers and a revision of its pricing model, adjustments that are commercially sensible but which constitute, in effect, an implicit acknowledgment that the original proposition was not landing as intended.

There is also a fragmentation problem that no pricing adjustment can resolve on its own. Copilot now exists in so many forms (Copilot for Microsoft 365, Copilot in Windows, Copilot Studio, GitHub Copilot, and further variants besides) that customers and their IT procurement teams have struggled to understand precisely what they are buying, for whom, and to what end. A brand intended to signify coherent, embedded AI intelligence has become, in the perception of many enterprise buyers, a label applied promiscuously across products of varying quality and relevance. Satya Nadella’s organisation has demonstrated before that it can reinvent itself under pressure. The ingredients for a credible recovery are present. What is needed now is the discipline to narrow the promise until it matches the product.

Adobe: When Integration Is Not Enough

Adobe’s AI difficulties are, in some respects, more structurally revealing than those of its peers, because the technology, unusually, is not really the problem. The company’s Firefly generative AI engine is genuinely capable. The Generative Fill feature in Photoshop attracted broadly positive reviews on its release and demonstrated that Adobe could build AI tools that professional creatives would actually want to use. The initial excitement was real.

What followed, however, exposed a tension that sits at the very heart of Adobe’s business model. In mid-2024, proposed changes to its terms of service provoked a furious response from the creative community. Artists, photographers and designers read the new language and concluded that their work might be used to train Adobe’s AI models without adequate consent or compensation. Adobe moved quickly to revise the terms and issue reassurances, but the damage was instructive: it revealed how fragile the trust relationship had become, and how quickly that community could mobilise against a perceived betrayal.

The deeper problem, though, is not the terms-of-service misstep. It is the structure of how Adobe has chosen to deploy AI across its product ecosystem. Firefly works, but it works in silos. The generative tools embedded in Photoshop do not flow naturally into Illustrator or Premiere Pro in the manner that a working creative professional actually operates. Rather than building AI as a flexible, cross-suite creative accelerator, Adobe has bolted AI capability onto an existing architecture designed for a different era and a different workflow.

The Generative Credits system, under which meaningful AI usage is metered and charged separately from the underlying subscription, has deepened the frustration. Professionals who pay substantial monthly fees for Creative Cloud find themselves effectively billed twice for AI features that competitors offer with fewer restrictions. The perception, fair or otherwise, is of a company using artificial scarcity to protect a revenue model rather than genuinely committing to AI as a creative tool.

This points to something more uncomfortable than a product flaw. Adobe’s business model has long depended on the depth of its ecosystem lock-in. Truly flexible, interoperable AI tools would reduce that lock-in. The incentive to make Firefly genuinely open and fluid runs directly against the incentive to keep customers inside the Adobe walls. Adobe’s leadership faces a more fundamental question: is the company willing to redesign its AI proposition around what professional creatives actually need, even if the answer requires loosening the grip of the subscription model that has made it so profitable?

Google: The Fastest Turnaround in the Room

If speed of recovery is the measure, Google’s response to its Gemini crisis deserves particular attention. The launch of Gemini’s image generation feature in early 2024 was one of the most publicly humiliating product failures in recent memory. The model produced historically absurd images in a manner that struck users across the political spectrum as both offensive and faintly comic. The internet, never slow to amplify a corporate misfire, made short work of it.

Google’s response was notable for its directness. Sundar Pichai described the errors as unacceptable in an internal memo that swiftly became public, and the image generation feature was suspended within days. Rather than retreating into corporate defensiveness, the company committed to a substantive rebuild of its testing and evaluation pipelines, the unglamorous engineering work that determines whether an AI model behaves sensibly before it meets the public.

The subsequent releases under the Gemini 1.5 and 2.0 families have been broadly well received. Google has not entirely escaped the shadow of that early stumble (first impressions in consumer technology have a long half-life), but its recovery has been among the more credible in the industry. The lesson it demonstrated is one that rivals would do well to study: when you are wrong, say so plainly, move quickly, and let the product do the rehabilitating.

Salesforce and the Autonomy Question

Salesforce presents a subtler case study. Its Agentforce platform, which allows businesses to deploy autonomous AI agents across customer service, sales and operations, represents a genuine bet on the future of enterprise software. But the rollout has not been without friction.

Several corporate customers have raised concerns about accountability: specifically, who bears responsibility when an autonomous AI agent gives a customer incorrect information or takes an action that has financial or legal implications. Salesforce’s answer, that the platform provides the tools and the enterprise assumes the liability, has satisfied some clients and unsettled others.

The company has responded by investing heavily in what it calls ‘trusted AI’ frameworks, offering customers more granular control over agent behaviour and clearer audit trails. Whether the product reality matches the rhetoric is something enterprise procurement teams are now scrutinising with considerably more rigour than they applied to earlier waves of cloud software.

IBM: The Long Shadow of Watson

No survey of established technology companies and their AI reckoning would be complete without IBM, whose experience represents perhaps the most instructive cautionary tale of the past decade, and the slowest, most arduous recovery.

Watson was, in its heyday, one of the most aggressively marketed AI products in corporate history. IBM positioned it as a transformative intelligence capable of revolutionising industries from finance to oncology. The partnership with MD Anderson Cancer Center, under which Watson was tasked with assisting in cancer diagnosis and treatment recommendations, became the emblem of this ambition. It was also, after years of work and tens of millions of dollars, quietly discontinued, the system having failed to demonstrate the clinical reliability that had been so boldly promised.

IBM’s recovery has been deliberate and notably humble in register. The watsonx brand, relaunched with measured language and a focus on specific, demonstrable enterprise use cases, represents a studied retreat from grand claims. The company no longer promises to cure cancer. It promises to help organisations manage and deploy AI models within governed, auditable frameworks. It is a less exciting proposition. It is also a considerably more honest one, and, in consequence, one that enterprise clients are beginning to take seriously again.

The IBM arc illustrates something important: recovery from a trust failure of that magnitude is measured not in quarters but in years. It begins not with a new product launch or a rebranding exercise, but with the harder discipline of bringing your promises into alignment with what you can actually deliver.

Amazon: The Consumer-Enterprise Divide

Amazon’s AI journey reveals a striking divergence between its enterprise and consumer ambitions. On the enterprise side, AWS has built a credible and commercially successful AI infrastructure business. Here, Amazon has been largely sure-footed.

The consumer story is rather more complicated. Alexa, launched to considerable fanfare in 2014, enjoyed years as the defining voice assistant of its era. The rise of large language models exposed its limitations with uncomfortable clarity. Users who had grown accustomed to the fluid, contextually aware conversations offered by ChatGPT and its successors found Alexa’s responses stilted and constrained by comparison. Amazon has invested heavily in a generative AI rebuild of Alexa, but the rollout has been protracted and the competitive gap remains visible.

Apple: The Perils of Caution

Apple represents a different category of failure, one that is easy to overlook precisely because it generates no scandal, no privacy breach, no historically inaccurate imagery. Apple’s stumble in the AI era is the stumble of the overly careful: a company so protective of its reputation for quality and its customers’ privacy that it arrived late, underdelivered on its promises, and found itself, for perhaps the first time in a generation, trailing rather than setting the pace.

Apple Intelligence, announced with characteristic theatrical confidence at WWDC 2024, launched to reviews that ranged from underwhelmed to mildly scathing. Features were delayed. Siri’s long-promised overhaul arrived in increments that felt more like patches than transformation. The integration with third-party AI services, including a partnership with OpenAI, struck some observers as an acknowledgment that the company’s own models were not yet where they needed to be.

Apple’s challenge is distinct from its peers. It has not betrayed its customers’ trust; if anything, its caution around data privacy has been a genuine competitive differentiator. But it has disappointed them, which in the context of Apple’s extraordinarily high self-imposed standards amounts to a form of failure. The recovery here demands not a public apology or a product rollback but something perhaps more difficult: genuine acceleration without sacrificing the quality and privacy commitments that define the brand. The company has now, wisely, turned to Google’s Gemini to help close that gap.

OpenAI: The First Mover Fallacy

History offers a sobering corrective to the cult of the first mover. The companies that start something rarely finish it. AltaVista and Ask Jeeves mapped the terrain that Google conquered. WordPerfect and Lotus defined the desktop productivity era that Microsoft ultimately owned. Palm invented the smartphone category that Apple perfected. Netscape opened the browser wars it would not survive. Uber upended urban transport and remains, more than a decade later, a company still searching for the profitability its disruption was supposed to guarantee.

The pattern is consistent enough to constitute a law: in technology, it is not the pioneer who plants the flag that tends to profit most, but the company that arrives second or third with a clearer understanding of what the customer actually needs and the operational discipline to deliver it. OpenAI now finds itself at precisely this juncture. It ignited the generative AI era, captured the public imagination and set the terms of a global conversation, and then discovered that being first is an expensive and precarious position to defend.

The investment demands have been extraordinary, the path to sustainable revenue uncertain, and the moat narrower than its early dominance suggested. Competitors are better resourced, more focused, and unencumbered by the organisational tensions that have characterised OpenAI’s brief but turbulent history. They are closing the gap with models that match or exceed OpenAI’s own. The evidence available suggests that the probability of OpenAI, in its current form, remaining the defining AI company of the next decade is considerably lower than the breathless coverage of its early years implied. In technology, as in nature, the creature that adapts best usually survives, not the one that evolves first.

The Common Thread

Across these cases a pattern emerges with uncomfortable consistency. Established technology companies, emboldened by vast research budgets and the existential pressure of competition from AI-native challengers, have repeatedly made the mistake of prioritising capability over consent, speed over reliability, and marketing over engineering reality.

They have announced features before stress-testing them in the real world. They have buried consequential policy changes in terms-of-service updates that no ordinary customer reads. They have promised transformations that their products could not yet deliver. They have metered and monetised AI features in ways that felt extractive rather than generous. And they have underestimated the sophistication, and the patience, of the customers on the receiving end.

The companies now recovering most credibly are those that have done something disarmingly unfashionable: they have gone back to their customers, acknowledged what went wrong, and changed course accordingly. The customer, long taken for granted in the gold rush of AI capability, has reasserted leverage, and the companies that recognise this earliest will hold a significant structural advantage over those still hoping the next product cycle will quietly bury the failures of the last.

Recovery as Strategy

The most instructive aspect of the current moment is not the failures themselves, all technology transitions produce them, but the quality of the recoveries. Companies that have treated their missteps as genuine learning opportunities, rather than communications problems to be managed, are rebuilding trust more durably than those still in the business of damage limitation.

There is a management principle, sometimes attributed to the quality movement of the late twentieth century, that a customer who has experienced a problem and seen it resolved well often ends up more loyal than one who never encountered a problem at all. The principle is contested, and no executive would voluntarily engineer a product failure to test it. But it contains a truth about the nature of trust that the software industry would do well to internalise rather than merely cite in investor presentations.

Technology companies are not judged solely on the elegance of their AI models or the ambition of their product roadmaps. They are judged on how they behave when things go wrong, and whether the people on the other side of the relationship, the ones who have integrated their professional and creative lives around these tools, feel seen, respected and genuinely served.

Failure, in an industry moving at this pace, is close to inevitable. The companies that will define the next decade are not those that avoid it. They are those that respond to it with honesty, speed and, above all, a genuine and demonstrable orientation towards the people they serve.

The rest is noise.