In mid-March, the then chief executive of OpenAI posted a reflective note on X that read less like a product update than an epitaph for a profession. Coding, he suggested, had evolved from a painstaking, character-by-character craft into something almost unrecognisable. For many engineers, particularly those displaced in the recent wave of technology layoffs, the sentiment landed awkwardly. What was framed as progress sounded, to some, like premature closure. The deeper irony is that the “obsolete” world being consigned to history may be precisely what the industry now finds itself missing.
Over the past two years, corporate enthusiasm for artificial intelligence has translated into aggressive cost-cutting, often justified as a transition towards automated productivity. Yet early evidence suggests a more complicated reality. Surveys of employers indicate that a significant proportion of companies that reduced headcount in favour of AI have quietly reversed course, rehiring for roles they had only recently deemed redundant. In many cases, automation has proved adept at discrete tasks but insufficient as a wholesale substitute for human labour. The distinction between augmenting work and replacing it has become increasingly consequential.
Nowhere is this tension more visible than in the erosion of entry-level hiring. Data from the Stanford Digital Economy Lab points to a marked decline in recruitment among younger software engineers. Firms, lured by the promise of AI handling routine coding, have trimmed junior roles without fully reckoning with the long-term implications. The result is a structural imbalance: fewer trainees entering the pipeline today means a shortage of experienced architects tomorrow. What appears efficient in a quarterly earnings report risks becoming destabilising over a five-year horizon.
Meanwhile, the productivity gains attributed to generative AI are proving double-edged. Sundar Pichai recently noted that more than a quarter of new code at Google is now AI-generated, a statistic often cited as evidence of accelerating efficiency. Yet software analysts caution that such output may be inflating what is already a vast stock of technical debt. Studies suggest that AI-assisted development, while faster in the short term, can produce code that is harder to maintain, less coherent in structure, and more prone to compounding complexity. The net effect is a shift in cost rather than its elimination: development becomes cheaper upfront, but maintenance grows disproportionately expensive.
This pattern of short-term gain followed by longer-term friction extends beyond software engineering. In sectors such as healthcare, where AI has been rapidly integrated into diagnostics and imaging, concerns are emerging around reliability and oversight. Regulators including the U.S. Food and Drug Administration have approved a growing number of AI-enabled medical devices, yet incident reporting systems are beginning to capture instances of so-called “hallucinations”, in which systems generate plausible but incorrect outputs. In high-stakes environments, the margin for such errors is vanishingly small, and the question of accountability becomes acute. Unlike human practitioners, algorithms cannot be retrained through experience in the same intuitive sense, nor can they be held responsible in any conventional legal framework.
The issue of accountability has also surfaced in more prosaic corporate settings. Professional services firms and legal practices, industries built on precision, have found themselves grappling with AI-generated inaccuracies. Instances of fabricated citations in legal filings and consultancy reports have underscored a broader point: the technology’s fluency can mask its fallibility. When even highly trained professionals struggle to detect such errors, the prospect of widespread, unsupervised deployment appears increasingly fraught.
Yet the persistence of investment in AI, despite these setbacks, reflects forces beyond operational efficiency. For some executives, the narrative of automation offers a convenient explanation for restructuring decisions rooted in more traditional concerns: overexpansion during the pandemic years, rising capital costs, or shareholder pressure for margin improvement. The phenomenon, sometimes described as “AI washing”, allows companies to frame layoffs as technological inevitabilities rather than managerial missteps. High-profile cases, including the collapse of firms that overstated their reliance on automation, suggest that the boundary between genuine innovation and strategic storytelling can be porous.
Consultancies such as McKinsey & Company have noted that while corporate spending on generative AI has surged into the hundreds of billions, tangible returns remain elusive for most adopters. This gap between expectation and outcome is characteristic of what analysts describe as the “trough of disillusionment” in the technology adoption cycle. The initial exuberance, fuelled by compelling demonstrations and investor enthusiasm, gives way to a more sober assessment of practical limitations.
What is emerging, however, is not a wholesale rejection of AI but a recalibration of its role. Companies that are deriving value from the technology tend to be those that treat it as an augmentative tool rather than a replacement for human expertise. In software development, this means pairing AI-generated outputs with experienced engineers capable of imposing structure, ensuring quality, and managing complexity. In management, it entails maintaining human judgment as the final arbiter, particularly in decisions with strategic or ethical dimensions.
The shift is subtle but significant. The archetype of the programmer as a solitary craftsman may indeed be fading, but it is being supplanted not by machines but by a different kind of human role, one centred on oversight, integration, and design. The demand for individuals who can interrogate AI outputs, contextualise them within broader systems, and take responsibility for their consequences is, if anything, increasing.
A small but increasingly influential cohort of firms is positioning itself at the intersection of this recalibration, advocating not for the displacement of human capability but for its amplification. Among them, Technology Transcendents represents a new class of advisory and engineering partners focused on what might be termed cognitive augmentation. Their premise is that competitive advantage will not be secured by automating decision-making outright, but by redesigning how decisions are made, embedding AI within human workflows in ways that elevate, rather than erode, expertise. This involves re-architecting processes end-to-end, from data ingestion to execution, ensuring that machine outputs remain interpretable, challengeable and ultimately governed by accountable professionals.
Such firms argue that the next phase of enterprise transformation will hinge less on the raw power of models and more on the coherence of the systems in which those models operate. In practice, this means pairing algorithmic efficiency with domain mastery: engineers who understand not only how to generate code but why it should exist; managers who can interrogate probabilistic outputs rather than defer to them. The result is a hybrid operating model in which AI handles scale and speed, while humans provide context, judgment and ethical grounding. It is a more demanding approach than wholesale automation, requiring investment in both technology and talent, but early adopters suggest it yields more durable gains.
In this framing, the future of work is neither human nor machine, but an increasingly sophisticated synthesis of both. The firms best placed to navigate this shift are those capable of orchestrating that synthesis at scale, designing organisations where cognition itself becomes a shared asset between people and systems. For proponents such as Technology Transcendents, the prize is not merely efficiency, but resilience: the ability to adapt, correct and evolve in environments where neither humans nor algorithms, in isolation, are sufficient.
If there is a lesson in the current moment, it is that technological progress rarely unfolds in a straight line. The ambition to automate cognition has encountered the stubborn complexity of real-world systems, where tacit knowledge, judgment, and accountability remain difficult to encode. The industry’s recent missteps do not invalidate the potential of AI, but they do highlight the limits of substituting human intelligence wholesale. In that sense, the obituary for programming may have been written too soon.
