2025 sucked.
It sucked for a great many people, for a great many reasons, and I'm acutely aware that my particular flavour of suffering ranks relatively low on any objective scale of hardship. But my 2025 sucked in some specific ways that I think some people might find illuminating, perhaps even cathartic, and I've been sitting with these experiences long enough now that I feel compelled to write them down before they calcify into something less useful than honest reflection.
Over the past twelve months, I've worked with three companies navigating what everyone insists on calling 'the AI transformation': a global mobile phone manufacturer launching a flagship device with AI as its core selling point; a developer tools brand attempting to build their own suite of AI tools; and, more briefly, a global hotel chain trying to make sense of what any of this means for their business, their workforce, and their future. Each engagement was different in its specifics, but all three shared a quality I've come to recognise as the defining characteristic of this moment: a desperate, almost frantic need to be seen doing something about AI, paired with a profound uncertainty about what that something should actually be.
What follows is less a summation of misery (though there will be some of that) and more an attempt to corral everything I've learned into something digestible, something that might help others who find themselves in similar positions. Consider it field notes from the front lines of a bubble, written by someone who spent the year feeling increasingly like Cassandra at the gates of Troy; cursed to see what was coming, and equally cursed to be unable to stop it.
*
I need to say something at the outset: I am not against technology. I am not a nostalgist pining for some imagined pre-digital golden age, nor am I simply old and frightened of change. I've spent the better part of twenty years helping organisations understand the forces shaping human behaviour online, and I've built a career on the premise that understanding technological transformation is essential for navigating contemporary life. I've worked on radical, award-winning creative campaigns. I've been early to emerging platforms and trends more times than I've been late. My instinct, when confronted with something new, is curiosity rather than dismissal.
And yet I've become what I never expected to be: a reluctant Luddite, someone who finds themselves increasingly aligned with the sceptics and the critics and the people asking uncomfortable questions at exactly the moment when such questions are most unwelcome.
The original Luddites were skilled textile workers in early nineteenth-century England who understood exactly what the new machinery could and couldn't do, and who objected to its deployment under conditions that served capital at the expense of labour and craft. Their concern wasn't with technology per se; it was with the social relations of technology, with questions about who benefits and who suffers, who decides and who endures the consequences of those decisions. That's a distinction worth holding onto, because the contemporary discourse around AI tends to collapse any criticism into a binary: you're either a wide-eyed accelerationist who believes we're on the cusp of transformative abundance, or you're a frightened reactionary who doesn't understand what's happening.
The reality, as anyone actually working in this space can tell you, is considerably more complicated.
*
Let's begin with slop. It is undoubtedly one of the most significant cultural developments of the past few years, and it's not getting nearly the attention it deserves.
'Slop' has become the term of art for AI-generated content that floods our information environment; technically competent, semantically empty, designed to fill space and capture attention without offering anything of genuine substance. It's the grey goo of the digital ecosystem, an ever-expanding mass of text and images and video that looks, at first glance, like real content but dissolves into nothing when you try to engage with it meaningfully. Slop is the AI-generated article that ranks highly in search results but says nothing you couldn't find in a Wikipedia summary. Slop is the LinkedIn post that hits all the right notes of professional insight while communicating precisely zero actionable ideas. Slop is the synthetic image that populates your social feed, technically impressive and spiritually vacant.
The slop crisis is fundamentally epistemological; that is, it forces us to think about exactly how we know what we know, and how we distinguish signal from noise in an environment where noise is being generated at industrial scale. It's not that AI-generated content is always bad (it's not; some of it is genuinely useful, and I'd be a hypocrite to deny that I've used these tools myself in contexts where they're appropriate); it's that there's an awful lot of it: serviceable-but-empty content produced so cheaply and in such quantities that it's drowning out everything else, making it harder and harder to find the work that actually matters.
We're experiencing a kind of Gresham's Law of information, where bad content drives out good because bad content is infinitely cheaper to produce. If you can generate a thousand mediocre articles for the cost of one excellent one, and if the distribution systems (search engines, social algorithms, content aggregators) can't reliably distinguish between them, then the economics favour mediocrity at scale. The people doing careful, thoughtful, genuinely insightful work find themselves competing for attention against an ocean of slop, and the ocean is rising faster than anyone can swim.
The bitter irony is that many of the same agencies loudly decrying AI-generated content are simultaneously, eagerly, racing to produce what they cheerfully term 'brainrot': human-made content deliberately engineered for minimum cognitive engagement and maximum algorithmic reach, ie scroll-stopping nonsense or attention-hacking ephemera. Content designed not to communicate but to metabolise, to pass through the viewer leaving nothing behind except a vague compulsion to keep scrolling. In a sense, the slop crisis isn't really about AI; AI simply industrialised a degradation that was already well underway. The economic logic that makes AI slop profitable is the same logic that made brainrot a viable creative strategy: when attention is the only metric that matters, and attention is captured most efficiently by the lowest common denominator, quality becomes a competitive disadvantage. We are watching a race to the bottom conducted by people who know exactly what they're doing and have decided the rewards outweigh the costs (costs which, conveniently, are borne by everyone except themselves).
For those of us who work in research and cultural analysis, this is catastrophic in ways that are difficult to overstate. The signals we rely on (what people are saying, how they're saying it, where new ideas emerge, how discourse evolves) are being obscured by noise. The internet, which was supposed to be the great library of human knowledge, is becoming illegible; not because the information isn't there, but because it's buried under so much synthetic filler that finding it requires more effort than most people can afford to invest.
The same contradiction plays out in software development, where 'vibe coding' has become the term of art for programming-by-prompting: describing what you want to an AI, accepting whatever code it generates, and moving on without understanding why it works (or, more consequentially, why it might eventually stop working). The practice has its evangelists, naturally; Andrej Karpathy, one of the founding figures of the current AI moment, coined the term with evident affection, describing a workflow where you 'fully give in to the vibes' and 'just see stuff that works.'
What Karpathy frames as liberation might be better described as the outsourcing of comprehension itself. Vibe-coded software is code that functions until it doesn't, maintained by developers who lack the knowledge to diagnose its failures, secured by teams who never understood its architecture in the first place. It is, in essence, slop for the stack: technically functional, semantically hollow, accumulating as technical debt that will come due at the worst possible moment.
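To make that less abstract, here's a contrived sketch of the kind of thing I mean (entirely hypothetical, not drawn from any client codebase): code that passes the vibe check, runs fine on a quick manual test, and carries a failure that waits patiently for production.

```python
# Hypothetical illustration of plausible-looking, prompt-generated Python.
# The happy path works; the bug only shows up once the code is reused.
def add_tag(tag: str, tags: list[str] = []) -> list[str]:
    """Append a tag and return the updated list."""
    tags.append(tag)
    return tags

print(add_tag("launch"))  # ['launch'] -- looks right
print(add_tag("ai"))      # ['launch', 'ai'] -- the default list is shared
                          # between calls, so unrelated items leak into each
                          # other: a classic latent bug that a reviewer who
                          # actually understood the code would catch
```

The point isn't that an AI wrote this particular bug; humans write it all the time. The point is that in a vibe-coding workflow, nobody in the loop is positioned to notice.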
The irony of my own situation was not lost on me. I spent months embedded with a development tools company whose public communications consistently emphasised rigour, craftsmanship, and the irreplaceable value of developers who truly understand their systems. Their internal roadmap, meanwhile, was dominated by features designed to make vibe coding faster, easier, and more frictionless. When I raised this contradiction (gently, diplomatically, in the language of strategic risk), I was told that the market demanded these tools, that competitors were already shipping them, that failing to build them would mean ceding ground to less scrupulous players. The logic was impeccable and utterly circular: we must accelerate the degradation because everyone else is accelerating the degradation.
This is how races to the bottom work. Nobody thinks they're the problem. Everyone believes they're simply responding rationally to conditions created by others. And so the floor keeps dropping, and the people doing the dropping keep expressing concern about how far we've fallen.
I don't know what the solution is. I'm not sure anyone does. But I know that we need to start talking about this more honestly, because pretending it's not happening isn't making it go away.
*
And speaking of illegibility: search is broken, and we need to stop pretending otherwise.
I don't mean slightly degraded, or somewhat less useful than it used to be, or in need of minor improvements. I mean fundamentally, structurally broken in ways that are affecting our collective ability to know things about the world. Google's results are now dominated by SEO-optimised slop, by AI-generated summaries that hallucinate citations and present fabricated information with the same confident formatting as genuine facts, by sponsored content masquerading as organic results, by content farms gaming every metric the algorithm uses to assess quality. Finding primary sources (the actual documents, the original research, the firsthand accounts) requires skills that most users don't have and patience that most users can't afford.
Everyone in the industry knows this. If you talk to people who work in search, in SEO, or in digital marketing more broadly, they will tell you (usually off the record, because their livelihoods depend on not saying this too loudly) that the system is broken and nobody knows how to fix it. They will tell you that the incentives are all wrong: the platforms are optimising for engagement rather than accuracy, and the economics of digital advertising have created a landscape where truth is less profitable than attention.
And yet we keep pretending it's fine. We keep acting as if Google is still a reliable way to find information, as if the first page of search results still represents something like the best available knowledge on a topic. We keep building businesses and making decisions and forming beliefs based on information we found through systems that we know, if we're honest with ourselves, are no longer trustworthy.
If you want a glimpse of where this might be heading, consider the case of Grok and Grokipedia. Elon Musk's contributions to the epistemic landscape function less as information tools than as monuments to one man's extraordinary (and entirely unwarranted) capacity for self-regard.
When users asked Grok to compare Musk's fitness to LeBron James, the chatbot declared that the tech billionaire 'edges out in holistic fitness'. When asked who would win in a fight between Musk and Mike Tyson, it chose Musk. When Sam Altman asked who should guide AI if humanity's fate was on the line, Grok dutifully selected its owner. These responses reveal something pathological about the relationship between the technology and the man who controls it.
Grokipedia, launched as Musk's answer to what his supporters call 'Wokepedia,' takes this dynamic further. The historian Sir Richard Evans, one of the world's foremost experts on the Third Reich, checked his own entry and found it riddled with fabrications: trials he wasn't involved in, supervisors he never met, positions he never held. 'Chatroom contributions are given equal status with serious academic work,' Evans observed. 'AI just hoovers up everything.' The political slant is equally pronounced: the entry on the invasion of Ukraine cites the Kremlin and reproduces Russian terminology about 'denazifying'; Britain First is described as a 'patriotic political party'; the January 6th attack becomes a 'riot.'
This is nothing less than the weaponisation of epistemic infrastructure. This is AI systems deployed to construct an alternative reality serving the political and psychological needs of whoever controls them. Musk has spoken of etching Grokipedia into stable oxide and placing copies on the Moon and Mars 'to preserve it for the future.' The man saying this controls a major social media platform, has the ear of the American president, and possesses more wealth than most nation-states.
The deeper problem, as cultural historian Peter Burke notes, is that encyclopedia entries carry authority precisely because they appear anonymous and objective. When that authority is captured by systems designed to advance particular interests, we stop arguing about interpretations of shared facts; we start inhabiting different factual universes entirely.
For researchers, strategists, analysts (and indeed anyone whose work depends on finding true things about the world and distinguishing them from false things) this is an existential problem. Our tools are failing us, and there's no clear replacement on the horizon. The irony is bitter: we're in the middle of what's supposed to be an AI revolution that will transform how we access and process information, and the actual result, so far, has been to make information harder to find and less reliable when we find it.
*
I want to set aside my frustration with agencies and clients for a moment to acknowledge something that I think gets lost in critical accounts of the current AI frenzy: I understand the pressure. I understand it viscerally, because I've sat in the rooms where it operates, and I've watched intelligent, thoughtful people make decisions they knew were questionable because the alternative (being seen to do nothing, falling behind competitors, failing to satisfy AI-pilled executives and boards) felt worse.
The pressure on companies to DO SOMETHING about AI is immense, almost physical in its intensity. Competitor spend is staggering; you can watch the numbers climb in real time through industry publications and earnings calls, each announcement ratcheting up the anxiety for everyone else. CEOs who've drunk deeply from the AI hype trough are demanding transformation on timelines that bear no relationship to technical or organisational reality. TED-talking futurists and LinkedIn thought leaders are promising obsolescence for the slow adopters, painting vivid pictures of a near future in which those who hesitated are simply swept away. The FOMO is institutional, structural, and ultimately existential.
I get it. I genuinely do.
But this is noise, and the tragedy of the current moment is that so few people feel able to say so. There's an enormous disconnect between boardroom urgency and consumer reality, between the breathless rhetoric of transformation and the actual lived experience of people encountering these tools. What I keep encountering, over and over, in research and in strategy and in my own daily life, is generative AI products as solutions in search of problems. They're products nobody asked for, solving problems nobody has, at prices nobody will pay. The people who most need to hear this (the executives making billion-dollar bets, the investors pricing in exponential growth, the strategists building careers on AI expertise) are precisely the people least positioned to listen.
*
That fundamental disconnect between hype and reality should, on its own, tell us we're in a bubble. But we should take a moment to actually examine the numbers.
HSBC recently built a model to determine whether OpenAI (the company that has, more than any other, come to represent the promise and peril of this technological moment) can actually pay for all the compute it's contracted. The answer, once you work through the projections, is a resounding no.
OpenAI has committed to $250 billion in cloud compute from Microsoft and $38 billion from Amazon, bringing contracted compute to 36 gigawatts. Based on total deal value of up to $1.8 trillion, HSBC estimates OpenAI is heading for data centre rental bills of approximately $620 billion annually, though only about a third of that capacity comes online by 2030. Cumulative rental costs through 2030 reach $792 billion, rising to $1.4 trillion by 2033.
Against those obligations, HSBC estimates cumulative free cash flow of $282 billion, plus $26 billion from Nvidia's cash injections and AMD share sales, $24 billion in undrawn debt facilities, and $17.5 billion in current liquidity. Add it all up and there's a $207 billion funding hole, plus another $10 billion buffer HSBC thinks the company would need for operational safety.
It's important to note that HSBC's model already assumes extraordinary success. It assumes OpenAI reaches 3 billion users by 2030, which represents 44% of the world's entire adult population outside China. It assumes 10% of those users become paying customers, double the current conversion rate. It assumes OpenAI captures 2% of global digital advertising. It assumes enterprise AI generates $386 billion annually.
Even with every single one of those absurdly heroic assumptions going right, with everything breaking in OpenAI's favour, the company still cannot pay its bills.
HSBC's suggested solution is remarkable in its candour: OpenAI might need to 'walk away from data centre commitments' and hope the big players show 'flexibility' because 'less capacity would always be better than a liquidity crisis.' That's a polite way of saying the business model doesn't work, and that everyone involved might need to pretend the contracts don't exist so that the whole edifice doesn't collapse under the weight of its own obligations.
This is the company anchoring a $500 billion Stargate project, the company driving hundreds of billions in infrastructure spending across the industry, and the company whose valuation and growth trajectory serve as the benchmark against which every other AI venture is measured. The centre cannot hold because there's simply nothing at the centre; just projections built on projections, assumptions stacked on assumptions, and a collective determination not to be the first to say that this is all folly.
*
The AI literacy deficit is enormous at every level of every business I've encountered, and I want to be clear that I don't just mean the technical kind. Technical illiteracy is widespread, certainly (most executives couldn't explain the difference between a transformer architecture and a recurrent neural network, and frankly that's fine; they don't need to). But the more profound literacy deficit is conceptual, strategic, and cultural.
I've sat in rooms where senior strategists couldn't articulate the difference between generative AI and machine learning, conflating fundamentally different technologies because both fall under the capacious umbrella of 'AI.' I've watched agencies sell 'AI-powered insights' that were, functionally, search queries with extra steps and a markup for the buzzword. I've seen executives conflate large language models with AGI because they read a breathless LinkedIn post from someone with 'futurist' in their bio and didn't have the framework to evaluate the claims being made.
AGI (Artificial General Intelligence, for the uninitiated; the hypothetical point at which artificial intelligence matches or exceeds human cognitive abilities across all domains) has become a kind of religious horizon for the industry. It structures investment decisions, corporate strategy, talent acquisition, and career anxiety despite being, at best, a theoretical possibility whose timeline remains genuinely uncertain and, more likely, a convenient fiction that justifies present spending on future promises that will never materialise. The belief in imminent AGI functions socially much like beliefs in the rapture or the singularity: it creates urgency, it forecloses certain kinds of questions, and it makes the present moment feel like a threshold that must be crossed at any cost.
The problem isn't ignorance, exactly; it's the wrong kind of confidence. These are intelligent people, often highly accomplished in their own domains, who've been told by a chorus of seemingly authoritative voices that AI is the most important technological development since electricity, that it will reshape every industry within five years, that failure to act now will mean irrelevance by the end of the decade. And because the technology itself is genuinely complex and even experts disagree about its capabilities and trajectory, there's no easy way for a non-specialist to distinguish between credible analysis and wishful thinking dressed up in technical vocabulary. The result is a kind of borrowed certainty; executives pattern-matching on confidence rather than evidence, trusting the people who sound most sure of themselves rather than the people who are actually most informed. I've watched this dynamic play out dozens of times: the careful, qualified assessment from someone who genuinely understands the technology gets steamrolled by the bold, unhedged prediction from someone who's mastered the performance of expertise without the substance. In an environment where admitting uncertainty reads as weakness and scepticism reads as being 'not a team player,' the incentives all favour overconfidence; and overconfidence, compounded across thousands of boardrooms and strategy sessions and investment committees, is how bubbles form.
This literacy deficit extends far beyond the technical and the financial; it encompasses a profound failure to grapple with the social and cultural implications of these technologies, the ways they're already reshaping labour, creativity, trust, and the very texture of public life. I've sat in meetings where the entire discussion centred on efficiency gains and cost savings, where the question of what happens to the people whose work is being automated was treated as, at best, an afterthought and, at worst, a distasteful topic to be avoided. I've watched organisations deploy AI systems with no consideration of how those systems might entrench existing biases, erode worker autonomy, or undermine the conditions that make creative and intellectual work meaningful.
The discourse around AI in most corporate contexts is almost wilfully blind to decades of scholarship (from critical labour studies, from science and technology studies, from media theory and cultural criticism) that could help us understand what's actually at stake. We know, for instance, that automation doesn't simply eliminate jobs; it transforms them, often in ways that intensify surveillance, fragment tasks into meaningless micro-units, and shift power decisively toward employers and platforms. We know that algorithmic systems tend to reproduce and amplify the inequalities present in their training data, that they create new forms of opacity and accountability gaps, that they change how we relate to each other and to institutions in ways that are difficult to perceive and even more difficult to reverse. We know that the concentration of AI development in a handful of enormously wealthy corporations raises profound questions about democratic governance, about who gets to shape the technologies that increasingly mediate every aspect of our lives. But this knowledge rarely makes it into the rooms where decisions are being made; it's treated as academic, as political, as somehow separate from the serious business of implementation and return on investment. The result is a kind of engineered ignorance, where the people building and deploying these systems have been carefully insulated from any framework that might cause them to question whether they should.
Working with an agency that fundamentally doesn't understand AI, while embedded in client departments that also don't understand AI, while surrounded by a media ecosystem that actively rewards confident nonsense over careful analysis, creates a peculiar hall of mirrors. Everyone is performing confidence. Everyone is hoping someone else in the room actually knows what's happening. Everyone is afraid to be the one who asks the basic question, because asking basic questions marks you as behind, as not getting it, as insufficiently bullish on the transformative potential of this technology.
The result is a kind of collective hallucination, a shared pretence that we're all working toward something coherent when in fact we're mostly just responding to each other's performances of certainty.
*
There's a specific exhaustion that comes from being the person in the room who has to say 'that's not how it works' repeatedly, and I've been thinking a lot this year about what that exhaustion actually consists of, because naming it feels important. There is emotional labour to being Cassandra.
Part of it is simply the repetition; the grinding, Sisyphean quality of making the same points in meeting after meeting, watching them land and then dissolve, knowing you'll make them again next week to a slightly different configuration of faces. You explain that large language models don't 'understand' anything in the way humans understand; they predict statistically likely next tokens based on training data. You explain that 'AI-generated insights' are only as good as the prompts and the data and the human judgment applied to the outputs. You explain that the timeline for profitability looks nothing like the projections. And people nod, and they take notes, and then the conversation continues as if you hadn't spoken, because the institutional momentum is too strong, and the incentives all point in the same direction, and nobody wants to be the one who slowed things down.
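For what it's worth, when I say 'predict statistically likely next tokens', this is the whole trick in miniature; a deliberately crude toy (a bigram counter over a made-up corpus, nothing like the scale or sophistication of a real model) that nonetheless captures the shape of the mechanism:

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram 'model' trained on an invented corpus.
# Real LLMs operate on sub-word tokens with billions of parameters, but the
# core move is the same: score possible continuations, pick a likely one.
corpus = "the strategy is sound the strategy is bold the strategy is sound".split()

next_counts: dict[str, Counter] = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_counts[current][following] += 1  # count which word follows which

def predict_next(word: str) -> str:
    # Return the most frequently observed continuation in the training data.
    return next_counts[word].most_common(1)[0][0]

print(predict_next("strategy"))  # 'is'
print(predict_next("is"))        # 'sound' (seen twice) beats 'bold' (seen once)
```

Nothing in that loop understands strategy, soundness, or boldness. It has simply counted. Scale the counting up by twelve orders of magnitude and you get fluency; you do not, on its own, get comprehension.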
In Greek mythology, Cassandra was a Trojan priestess cursed by Apollo to speak true prophecies that no one would believe. The contemporary version of this curse is less dramatic but more grinding: you're believed, technically (people don't think you're lying), but your warnings are filed under 'concerns noted' and the project proceeds anyway. The meeting ends. The Slack messages continue. The launch happens. The results are exactly what you predicted, sometimes down to the specific failure modes you identified months earlier. And nobody mentions this; there's no moment of reckoning, no acknowledgment that perhaps the sceptic had a point. The institutional memory is short, and besides, there's already a new initiative to discuss, a new AI-powered something that needs to be evaluated, and the cycle begins again.
The emotional labour is substantial, and it operates on multiple levels simultaneously. You're not being paid to say no; you're being paid to help, and sometimes helping means being the voice of doom in rooms full of people who've already committed psychologically and financially to the thing you're questioning. This creates a constant low-grade tension between what you're seeing and what you're supposed to be facilitating. You want to be useful. You want to be collaborative. You want to be the person who helps organisations navigate complexity rather than the person who just points out problems. But the problems are real, and ignoring them doesn't make them go away; it just delays the reckoning and makes it worse when it finally arrives.
There's also the isolation, which I hadn't fully anticipated. Being the sceptic in an ecosystem of believers is lonely, even when you know you're right (perhaps especially when you know you're right, because the certainty doesn't make the loneliness any easier to bear). You start to doubt yourself. You wonder if you've become the kind of person who's simply against things, reflexively negative, constitutionally unable to see possibility or embrace change. You lie awake at night running through your reasoning, looking for the flaw, trying to figure out what everyone else is seeing that you're missing. And then you read another earnings report, or another technical paper, or another piece of cultural research, and you realise you're not missing anything; you're just willing to say what others won't.
*
There's a weird grief in all of this, and I've struggled to find language for it because it doesn't map neatly onto the conventional categories of loss. It's the grief of watching your expertise become either suddenly invaluable or suddenly irrelevant, sometimes both simultaneously, in ways that feel arbitrary and destabilising.
On one hand, everyone wants to talk to the person who understands culture, who can decode what's actually happening with these technologies versus what the press releases and the keynote speeches claim. The demand for genuine insight has never been higher, precisely because the supply of confident bullshit has grown so overwhelming. I've had more inbound enquiries this year than any previous year, more requests for strategic counsel, more invitations to speak and advise and consult. In that sense, the skills I've spent nearly two decades developing (deep research, cultural synthesis, strategic thinking, the ability to read between the lines of corporate rhetoric and technological hype) have become more valuable than ever.
On the other hand, those same skills are precisely the skills most frequently claimed to be automatable, replaceable, soon-to-be-obsolete. Every LinkedIn thought leader with access to ChatGPT now considers themselves a strategist, capable of generating 'insights' and 'frameworks' and 'cultural analysis' at the push of a button. The actual quality of this output is, in my experience, somewhere between mediocre and actively misleading; but quality isn't the point. The point is volume, and speed, and the appearance of expertise, and in a market that struggles to distinguish genuine insight from fluent-sounding nonsense, the people doing the genuine work find themselves competing against an infinite supply of cheap simulation.
This creates a peculiar double consciousness: you're simultaneously more needed and more threatened, more valued by those who understand what you do and more vulnerable to those who don't. It's exhausting in a way that's hard to articulate, because you're not just doing the work; you're constantly justifying why the work can't be done by a machine, constantly demonstrating value to audiences who've been told that your entire domain of expertise is about to be disrupted into irrelevance.
The discourse around 'AI will take your job' misses the more mundane reality of what's actually happening right now, in offices and agencies and consulting practices around the world: AI is making your job slightly different and significantly more annoying. You spend more time correcting AI-generated first drafts that sound confident but say nothing, that have the cadence of insight without any of the substance. You spend more time explaining why the 'AI-powered insights' your client is excited about are neither insights nor, in any meaningful sense, intelligence. You spend more time managing the expectations of people who've been promised magic and received autocomplete, who can't quite understand why the tool that writes fluent paragraphs can't seem to produce anything actually useful.
And underneath all of this, there's a grief that I think many knowledge workers are feeling but few are naming: the grief of watching something to which you've dedicated your life become both more necessary and more precarious, of having your expertise simultaneously validated and undermined, of knowing that you're good at what you do while also knowing that the market's ability to recognise and reward that goodness is eroding in real time.
*
So what do I take from this year? What's worth carrying forward?
First: the gap between AI hype and AI reality is wider than I understood, even as someone who started the year sceptical. The financial structures are precarious; the technical limitations are often fundamental rather than temporary; the attempts to force adoption through bundling and corporate mandates are creating resentment rather than enthusiasm.
Second: literacy matters more than anything else right now. Technical literacy, certainly, but also financial and cultural literacy; the ability to read between the lines of corporate announcements and understand what's actually being said versus what's being performed. The organisations that navigate this moment successfully will be those willing to invest in genuine understanding rather than chasing headlines.
Third: to be a reluctant Luddite is to insist on asking better questions. For whom is this technology being developed? Who benefits, and who bears the costs? What's being lost in the rush to automate, and is that loss justified by what's gained?
Those questions were worth asking in 1811, when skilled weavers watched their livelihoods destroyed by machines that produced inferior cloth more cheaply. They're worth asking now, when knowledge workers watch their expertise simultaneously valued and undermined by systems that produce inferior thinking more cheaply.
The answers won't always lead us to reject the technology. Sometimes they'll lead us to embrace it, to find uses that genuinely expand human capability. But we won't find those uses by treating scepticism as a character flaw rather than a necessary intellectual virtue.
*
After everything I've written, you might wonder why I'm still doing this work, why I haven't walked away from an industry that seems increasingly captured by hype and populated by people who don't want to hear what I have to say.
The honest answer is that I don't know where else I would go. This is what I do; it's what I've done for all of my adult life, and despite everything, I still believe it matters. Understanding culture, decoding the forces that shape how people think and feel and act, helping organisations navigate complexity without losing their souls: these feel like worthwhile things to spend a life on. And if the current moment is making that work harder, it's also making it more necessary. The need for genuine insight, for people who can cut through the noise and say something true, has never been greater.
So I'll keep doing it. I'll keep being the reluctant Luddite, the Cassandra at the gates, the person who says 'that's not how it works' in rooms full of people who'd rather not hear it. Not because I enjoy the role (I don't; it's exhausting and isolating and often thankless), but because someone has to, and I seem to have ended up being someone.
If you're in a similar position (if you've spent this year feeling increasingly like the only sane person in rooms full of true believers, if you've burned out trying to explain things that should be obvious, if you've felt the weird grief of watching your expertise become simultaneously more valuable and more threatened) I want you to know that you're not alone. That feeling of isolation is part of what makes this moment so grinding, and knowing there are others out there helps, even if we can't always find each other.
Here's to 2026. May it suck less than 2025, though I'm not holding my breath.