The measure of a company is not whether it stumbles in the age of artificial intelligence, but how swiftly and honestly it finds its footing again.
There is an old adage in boardrooms from Silicon Valley to the City: markets punish failure, but they punish arrogance more. Nowhere has this truth been more vividly demonstrated in recent years than in the scramble by established software companies to graft artificial intelligence onto decades-old business models — and, in several notable cases, to reckon with the consequences of getting it badly wrong.
Adobe, Microsoft, Salesforce and a cohort of enterprise software stalwarts have each, in their own fashion, misjudged the pace, appetite or ethics of AI adoption among their paying customers. The question that now separates those destined to thrive from those sliding toward irrelevance is deceptively simple: are you listening?
Adobe’s Creative Reckoning
When Adobe unveiled proposed changes to its terms of service in mid-2024, the backlash from its creative community was swift and savage. Artists, photographers and designers, the very professionals who had built careers using Photoshop, Illustrator and Premiere Pro, read the new language and concluded, not unreasonably, that their work might be used to train Adobe’s generative AI models without adequate consent or compensation.
Adobe moved quickly to clarify its position, issuing revised terms and a public letter from its chief executive, Shantanu Narayen, insisting that the company had no intention of training AI on customer content without permission. The damage, however, had already been done. Rival tools gained trial users almost overnight. Trust, once fractured in a creative community bound by word of mouth and professional reputation, does not heal on the timeline of a quarterly earnings call.
The episode exposed a fault line that runs beneath the entire enterprise software industry: the assumption that customers who have spent years, sometimes decades, inside a product ecosystem will accept whatever the vendor decides to do next. In the age of generative AI, where intellectual property sits at the very heart of a creator’s livelihood, that assumption has become untenable.
Adobe’s recovery, such as it is, has been methodical. The company has invested in clearer communication around its Firefly AI tools, explicitly positioning them as trained on licensed and public-domain content. It has offered customers clear opt-outs and launched transparency reports. Whether this is sufficient remains, in the view of many industry observers, an open question. But the direction of travel, towards the customer rather than away, is correct.
Microsoft’s Recall: A Lesson in Reading the Room
Microsoft, for its part, suffered a more technical embarrassment with Recall, the AI-powered feature for its Copilot+ PCs that promised to give users a searchable, visual history of everything they had ever done on their machine. The pitch was seductive: a perfect memory for your computer. The reality, when security researchers examined it closely, was rather more alarming.
Critics pointed out that Recall, in its original form, stored vast quantities of sensitive data, including passwords, banking details and private messages, in a local database that was insufficiently protected. The feature, which had been scheduled to roll out to millions of Windows users, was pulled back and delayed within days of the backlash. Microsoft subsequently redesigned Recall with encryption, opt-in defaults rather than opt-out, and tighter access controls.
The climbdown was handled with reasonable grace. The company acknowledged the concerns, moved with unusual speed for an organisation of its scale, and emerged with a clearer sense of what its enterprise and consumer customers actually demand from AI features embedded in their most personal devices: above all, control and transparency.
There is a broader lesson here that Microsoft’s leadership appears to have absorbed. Satya Nadella’s organisation has, over the past decade, cultivated a reputation for cultural reinvention: the company that learned from its failures in mobile and search, and fought its way back in cloud. Recall suggests that even reformed companies can lapse into product hubris when the excitement of a new technology overwhelms the discipline of customer empathy.
Salesforce and the Autonomy Question
Salesforce presents a subtler case study. Its Agentforce platform, which allows businesses to deploy autonomous AI agents across customer service, sales and operations, represents a genuine bet on the future of enterprise software. But the rollout has not been without friction.
Several corporate customers have raised concerns about accountability: specifically, who bears responsibility when an autonomous AI agent gives a customer incorrect information, makes an erroneous booking or, in more consequential deployments, takes an action that has financial or legal implications. Salesforce’s answer, that the platform provides the tools and the enterprise assumes the liability, has satisfied some clients and unsettled others.
The company has responded by investing heavily in what it calls “trusted AI” frameworks, offering customers more granular control over agent behaviour and clearer audit trails. Chief executive Marc Benioff, never shy of a platform, has made trust a centrepiece of his public messaging. Whether the product reality matches the rhetoric is something enterprise procurement teams are now scrutinising with considerably more rigour than they applied to earlier waves of cloud software.