While tech leaders warn about super-intelligent AI destroying humanity, they're quietly using these systems to reshape labour markets and concentrate power. The real threat isn't artificial general intelligence - it's how generative AI is being weaponized to serve capital at the expense of workers.
The Platform Capitalism Playbook
Generative models did not emerge in a vacuum; rather, they are the by-product of the specific dynamics of data extractive accumulation central to firms such as Meta and Google, what Nick Srnicek refers to as Platform Capitalism.
The extraction of personal data through surveillance practices has been enormously profitable for firms; the profiling, targeting, management, control, and exploitation of datafied subjects give data both a use-value and an exchange-value, and as such data is subject to market forces and traded as a commodity. As data scientist Jathan Sadowski notes, these companies have created a “data imperative” - an obsession with collecting information from every possible source, even when its immediate value isn't clear.
Platforms arose out of a need for more efficient systems of data extraction at scale; by creating digital spaces for interaction, acting as intermediaries that own both the software (the code) and the hardware (the data centres), platforms are perfectly situated to extract data from their users.
This singular focus on data extraction, and the hardware required to deliver it, is what led to the emergence of Generative AI. AlexNet demonstrated that model performance directly correlates with network depth, requiring both large datasets and distributed hardware; platforms have both of these resources in abundance, ideally positioning platform firms to develop very deep neural networks. The logics of personal data extractivism are now applied to the data that internet and platform users actively create; the enormous stores of written text, both public and private, our photos, drawings, and videos are now the data commodity with the highest use-value. When viewed in these terms, it is clear that the data commodity is not mined or gathered, but manufactured: Platform users are free labour working in the data mines of generative AI.
Generative Foundation Models (GFMs) didn't emerge from innovation as we typically understand it. The dramatic transformations in deep learning that led from AlexNet to GFMs were not the result of scientific or technical innovations, but of capital throwing ever-more compute and data at the problem of generative models in an escalating computational arms race, open only to platforms that already had the requisite data and technical infrastructure.
This arms race was necessitated by the lack of proprietary ownership over the models themselves; the major tech platforms provide the infrastructure, control the data centres, and own the code. But crucially, they don't own the underlying models - these are based on publicly available research. This creates intense pressure to scale hardware ever larger to stay competitive. It's not innovation driving progress, but the brute-force application of capital.
The Hidden Labour Crisis
The monopoly power of platforms and GFMs is built on highly exploitative labour practices, both from the perspective of users, who give their labour for free and neither own nor benefit from the commodity that labour produces, and in the formation of an increasingly large, increasingly hidden underclass of data labourers. Extractivism alone is not enough to meet the data needs of platform capital; this data must be cleaned and tagged, ready for ingestion into models, to be useful. Platforms have employed several strategies in an attempt to acquire this labour for free; Google’s use of the CAPTCHA system to crowdsource the digitisation of images of text, and then to tag images for classifiers, was one such attempt, promoted as a public service and cause for good.
In general, however, this work requires more time and attention than firms can easily acquire without paying for it. Platforms again provide the solution: an amorphous underclass of highly casualised, highly precarious, and extremely low-paid workers, enlisted via crowdwork platforms such as Amazon’s Mechanical Turk, delivers ‘automation’s last mile’ while remaining largely unacknowledged behind platforms and datasets. This has been the case for some time; the ImageNet database used by AlexNet required an enormous human effort to tag and classify the millions of images in its library and was the biggest project on Mechanical Turk for months. The size and scope of datasets used today dwarf ImageNet, and demand is far higher; although GFMs are initially unsupervised, they still require vast and costly human intervention in the form of reinforcement learning and other semi-supervised approaches applied during training’s last mile.
Even after training, these systems need constant human oversight. GPT-4 reportedly costs $700,000 per day to run and required over $100 million to develop. Yet when OpenAI transitioned from non-profit to for-profit status, they did so knowing investors would expect returns. This raises a crucial question: how do you make money from such expensive technology?
The Real Business Model
GFMs have two principal commodity forms. The first is as a consumer product, delivered directly to consumers through the application layer. ChatGPT is a direct-to-consumer subscription product, as is Claude, Anthropic’s virtual assistant. It is too early to predict the extent to which this commodity form is meaningfully revenue generating; currently, GFMs are cheap and universally accessible, subsidised by massive capital investment, following the ‘growth before profit’ approach born of platform capital. This subsidisation facilitates low-stakes experimentation and exploration, generating a surge of temporary interest from casual users who are likely to cool off if prices increase substantially.
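The scale of that subsidy can be made concrete with back-of-the-envelope arithmetic. A minimal sketch, using the reported $700,000/day running cost above and assuming a $20/month consumer subscription (an assumed price point, not a figure from this piece; both numbers are press estimates, not audited accounts):

```python
# Sketch: how many paying subscribers would it take just to cover
# the *reported* daily inference cost of a frontier GFM?
# All figures are press-reported estimates / assumptions, not audited data.

DAILY_RUN_COST = 700_000                 # USD/day, reported figure
ANNUAL_RUN_COST = DAILY_RUN_COST * 365   # annualised running cost

SUB_PRICE_PER_MONTH = 20                 # USD, assumed consumer price point
ANNUAL_REVENUE_PER_SUB = SUB_PRICE_PER_MONTH * 12

subs_to_break_even = ANNUAL_RUN_COST / ANNUAL_REVENUE_PER_SUB

print(f"Annualised running cost: ${ANNUAL_RUN_COST:,}")
print(f"Subscribers needed to cover inference alone: {subs_to_break_even:,.0f}")
```

On these assumptions, inference alone implies an annualised cost in the hundreds of millions of dollars and an audience of over a million continuously paying subscribers before training and development costs are even touched - which is why the consumer commodity form reads as loss-leading subsidy rather than a business model.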
The real money is in business applications - selling GFMs as fixed capital to replace human workers. Tech leaders carefully frame this as “improving productivity” rather than eliminating jobs. OpenAI's Sam Altman talks about making people “dramatically more efficient." Microsoft promotes its GFM services as ways to “accelerate productivity.” McKinsey estimates productivity gains worth up to $7.9 trillion annually. Perhaps most egregiously, Forbes magazine proclaims that “Generative AI will give us back 40% of our free time."
But “productivity” here is a euphemism. The idea that labour-time would be returned to workers goes against every enduring principle of capital observed since Marx. The overwhelming structural tendency is for increases in productivity to lead either to unemployment, or to greater output without a reduction in labour-time; indeed, capitalism demands that it be so (Harvey, 2023).
To put it another way – the entire industry is predicated on the expectation that companies will pay to use GFMs to reduce workforce requirements. In a very real way, worker displacement is fundamental to the business model.
The Coming Labour Crisis

Speculative predictions about the impact of these technologies on the labour market are rife. One study found that “300mn full-time jobs will be lost to automation”; another estimated that “80% of the U.S. workforce could have at least 10% of their work tasks affected… 19% of workers may see at least 50% of their tasks impacted”. Such rampant speculation has not been tempered by the reality that jobs have already been lost to GFMs; Google cut almost an entire ad sales division and replaced them with Gemini, its own GFM, while language learning app Duolingo has shed contractors in favour of using GFMs to produce lessons. Finance company Klarna recently announced that it had replaced 700 customer service employees with a downstream implementation based on the GPT-4 GFM, claiming a saving of $40 million a year.
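The Klarna claim can be restated as simple arithmetic - a sketch on the company's own announced figures (the GFM's running and licensing costs are ignored here, so this is the claimed gross saving, not a verified net one):

```python
# Klarna's announced figures: 700 customer-service roles replaced,
# claimed saving of $40mn/year. What does that imply per displaced role?
# (Ignores the cost of the GFM implementation itself.)

claimed_annual_saving = 40_000_000  # USD/year, per Klarna's announcement
roles_replaced = 700

implied_cost_per_role = claimed_annual_saving / roles_replaced
print(f"Implied annual cost per displaced role: ${implied_cost_per_role:,.0f}")
```

The implied figure - roughly $57,000 per role per year - is an ordinary fully loaded service-sector salary, which is precisely the point: the commodity being sold is not superhuman capability but a substitute for wage labour at the median of the labour market.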
While apocalyptic predictions of total workforce replacement are overblown, economists widely agree these technologies pose significant risks of worker displacement and deskilling. The immediate threats are to low-stakes, fault-tolerant sectors like customer service and sales. But even traditionally automation-resistant fields are vulnerable - from computer programming to creative industries like advertising and publishing.
Beyond worker displacement and deskilling, GFMs risk deepening the casualisation of labour that typifies work under platform capitalism. Platforms emerged at a moment of profound crisis; the austerity that followed the 2008 recession accelerated the erosion of the welfare state, subordinating the social safety net to neoliberal market imperatives. Wages had stagnated, full-time salaried work was often not enough to support a family, and government and other social institutions could not be relied upon to pick up the slack. These market imperatives proliferated, first as the primary driver of economic activity, and then as the primary driver of all forms of human society and decision-making. Platform-mediated gig work is a direct reflection of these market logics; what began as a need for supplemental income ended with the transformation of an entire class of worker into the precariat, workers who are defined on the one hand by distinct relations of production (i.e. flexible, casual, temporary part-time labour contracts) and on the other, by distinct relations of distribution (i.e. a reliance on money wages in place of non-wage benefits such as pensions, holidays, and medical coverage). The benefits for capital are enormous; the gig economy has enabled platform owners to evade worker and consumer protections while also suppressing labour power.
If we accept the claim that GFMs are inherently deskilling, then this is the logical conclusion of their introduction to firms; they allow casual workers to perform the tasks of salaried employees, on demand and as needed, and the structural convergence with platforms will produce new ways for firms to access new pools of workers.
Given how flawed Generative AI seems to be, you would be forgiven for thinking that, if these platforms don’t work the way we are told they will, then all will be well. All of these impending crises, whether the ‘Slow Tsunami’ of displacement or the precariousness of casualisation, will be avoided so long as Generative AI remains incapable of performing as well as human labour.
The Irrelevance of Viability
Here's the darkest truth: it may not matter whether GFMs actually work as promised. The logics of Growth before Profit come to a point in the venture capital that birthed the platforms, expressed as exit-value; that is, the value placed on the sale of equity accrued by early investors during a later liquidity event, either through the private sale of the shares as part of a merger or acquisition, or when those shares are issued on an exchange following an IPO. Investors play an outsized role in valuations, especially within the tech sector, generating exit-value by heightening market expectations prior to public trading.
To that end, VC funded firms are incentivised to pursue irrational loss-making strategies in the near-term, using influxes of capital to obscure the reality of the loss-making and drive overvaluation, anticipating greater returns during the liquidity event. This irrationality deepens the monopolising effect; large capital reserves give firms the freedom to rapidly outcompete or acquire competitors, swallowing up the competition.
We've seen this before. Uber lost billions undercutting taxi services to dominate the market. The initial predatory pricing strategy that led to early market dominance, squeezing out traditional taxis and their underfunded rivals, was never intended to generate near-term P&L gains, but rather to maximise share value once the company went public. In the first year after the IPO, Uber’s share price halved, and in the years that followed the firm doubled rider fares and halved driver pay in an effort to increase margins. These efforts have largely failed, and in the end, the consumer has gained nothing from Uber’s devastation of an entire global industry; prices are now equal to or above the taxi fares initially undercut by the firm’s predatory practices, while consumer studies strongly suggest that the service is perceived to be markedly worse. All the while, driver wages have been slashed, worker protections have been hollowed out across the entire sector, and a job that was once comparatively secure is now casualised and precarious. This story is repeated time and time again, from Airbnb to WeWork, across almost every industry in every country in the world.
This is the essence of Silicon Valley’s ‘move fast and break things’ approach to disruption. It is not new technology that reduces costs; this form of disruption discards the creative potentiality of technology in favour of the rapid establishment of monopolies, built on the exclusion of incumbents using predatory strategies, underwritten by vast hoards of capital, and sustained under the cover of the techno-optimist machinery of irrational exuberance. It is Schumpeter's creative destruction, yet entirely without creativity.
In a very real sense, the technology is completely irrelevant; the long-term viability of the business model (or the ability of the app or service to truly lower costs) is subordinate to the violently destructive form of capitalism that attempts to embed the technology into our lives.
The AI Safety Smokescreen
This brings us to the current moral panic around AI safety. In 1996, as the Dot-com bubble inflated, Alan Greenspan, then Chairman of the Federal Reserve, gave a speech admonishing the way in which the “Irrational Exuberance” of investors had unduly escalated asset prices in tech companies. Four years later, the bubble burst; the asset prices to which he was referring collapsed, ultimately falling 75% from their highs and wiping out $1.755 trillion in value. Greenspan was speaking of a fundamental psychology, an ebullience that drives investors to buy assets at inflated prices and so inflate market bubbles; economists who study bubble dynamics connect this idea to behavioural psychologies such as herding and a fear of missing out that reinforce the price surge, or observe the relationship to media hype, speculative narratives, and a collective belief in new economic paradigms that justify abnormal price increases. Under these conditions, Venture Capitalists are considered victims of irrational exuberance, caught up in the excitement and furore around the assets that drive the bubble, be they dot-coms, unicorn start-ups, or AI.
This analysis fails to grasp the transformative, totalising power of capital to reshape reality based on technofuturist imaginings; Nick Land and the work of the CCRU recognised this in their formulation of the notion of Hyperstition.
“Hyperstition is a positive feedback circuit including culture as a component. It can be defined as the experimental (techno-)science of self-fulfilling prophecies. Superstitions are merely false beliefs, but hyperstitions — by their very existence as ideas — function causally to bring about their own reality.”
The idea of feedback loops that will reality into existence is potent; Land is right when he notes that “Capitalist economics is extremely sensitive to hyperstition, where confidence acts as an effective tonic, and inversely.” Isabella Weber’s work on inflation in the wake of the pandemic demonstrates that, contrary to the usual explanation of supply-side shocks, inflation rose because public discourse about inflation rose, giving firms cover to raise prices even when unaffected by costs, which in turn drove more discourse and more price inflation. Land suggests that this logic applies to technology itself; as he notes, drawing from William Gibson’s Neuromancer: “The (fictional) idea of Cyberspace contributed to the influx of investment that rapidly converted it into a technosocial reality.”
But, as Uber, WeWork and now Generative AI show us, the power of the feedback loop lies not in its object but in its application; what capital wills into existence is not the form of technology (cyberspace, websites, platforms) but rather the economic and socio-cultural conditions that technology creates. Capital isn't willing human-like AI into existence - it is using that narrative to reshape economic and social conditions. The promise of technological revolution is a fig-leaf for the violent reconfiguration of those conditions in the service of capitalist imperatives: consolidation, monopolisation, and the disruption of labour power.
UK Policy: A Case Study in Misdirection
The UK government's approach to AI regulation perfectly illustrates this dynamic. Despite investing more in AI safety than any other nation and hosting summits about existential risks, Britain has yet to implement a single meaningful regulation of GFMs.
Even obvious issues like intellectual property rights are being ignored. GFMs are trained on massive amounts of copyrighted material without permission or compensation. Rather than protect creators' rights, the government is likely to shield platforms from liability while placing responsibility on end users - just as Section 230 protections in the US enabled social media platforms to profit from harmful content while avoiding accountability.
The recent replacement of the Centre for Data Ethics and Innovation with the AI Safety Institute shows how policy is being shaped by technocapitalist interests. Rather than addressing immediate concerns about labour rights, data privacy, and algorithmic bias, resources are being directed toward speculative fears about super-intelligent AI.
Time to Redirect the Conversation
The discourse around AI safety isn't just wrong - it's actively harmful. By focusing on imaginary future threats, we're ignoring how these technologies are being used right now to concentrate power and wealth while making work more precarious for millions of people.
We need to strip away the science fiction and engage with the reality: GFMs are tools of capital accumulation, designed and deployed to reshape labour markets and social relations in ways that benefit their corporate owners. The existential threat isn't from the technology itself, but from how it's being weaponized against workers and society at large.
Until we recognise this and redirect our attention to the real issues at stake, all the summits and safety institutes in the world won't protect us from the actual damage being done in the name of artificial intelligence.