I can't believe I have to say this, but don't use ChatGPT to set trade policy

Apr 4, 2025

It's not often that we all get to witness the catastrophic collision of technological hubris and political incompetence playing out in real time. Yet here we are, watching global markets haemorrhage trillions in value because someone in the White House apparently decided that setting international trade policy — a domain requiring sophisticated understanding of economic theory, geopolitical relationships, and complex supply chains — was a suitable task for ChatGPT.

Welcome to Vibe Governing

To understand the magnitude of this debacle, we must first grasp 'vibe coding' — a practice that has migrated from amateur developers to, apparently, the highest echelons of government. Vibe coding occurs when individuals use AI to generate code without understanding the underlying principles, essentially asking the AI to "make something that does X" without supervising or comprehending its output. The user simply "vibes" with the result, trusting the AI implicitly.

What we're witnessing now is its evolution into 'vibe governing' — the application of this same reckless methodology to matters of state. Rather than consulting economists or trade experts, someone seemingly prompted an AI for a formula to calculate retaliatory tariffs, received an absurdly simplistic response (trade deficit divided by imports), and proceeded to implement it as national policy affecting billions of people worldwide.

The White House's attempts to disguise this formula by adorning it with Greek symbols — only to reveal precisely the same mathematically illiterate approach identified by economists — would be comedic if the stakes weren't so devastating. When challenged, they published a formula that confirmed exactly what critics had suspected, while apparently believing the fancy notation would obscure their intellectual negligence.
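For the mathematically inclined, here is a minimal sketch of the arithmetic in question. The parameter names and default values below (an import-demand elasticity and a pass-through rate whose product is one) are assumptions based on public reporting of the published formula, not a reproduction of the White House's actual spreadsheet:

```python
# Sketch of the reported "reciprocal tariff" arithmetic (assumed, not official).
# The published version scales the deficit-over-imports ratio by two parameters
# whose reported values multiply to one, so the scaling cancels out entirely.

def naive_tariff(exports: float, imports: float) -> float:
    """Trade deficit divided by imports: the version critics identified."""
    return (imports - exports) / imports

def greek_symbol_tariff(exports: float, imports: float,
                        epsilon: float = 4.0, phi: float = 0.25) -> float:
    """The same ratio dressed up with two parameters; since epsilon * phi == 1,
    it reduces to naive_tariff exactly."""
    return (imports - exports) / (epsilon * phi * imports)

# Toy numbers (not real trade data): $100bn imported from a country, $60bn exported to it.
exports, imports = 60.0, 100.0
print(naive_tariff(exports, imports))         # 0.4 -> a 40% "reciprocal" rate
print(greek_symbol_tariff(exports, imports))  # 0.4 -> identical
```

Whatever the notation, the output is the same number; the fancy dress changes nothing about the underlying arithmetic.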

The Confabulation Catastrophe

What makes this situation particularly alarming is that it exemplifies AI's tendency toward confabulation (commonly called 'hallucination'). Large Language Models like ChatGPT don't 'know' economics — they generate responses based on statistical patterns in their training data, with no understanding of accuracy or consequence.

The term 'confabulation' is actually more appropriate than 'hallucination' because it better captures what's happening: the system isn't experiencing false perceptions but rather generating plausible-sounding narratives to fill knowledge gaps. This psychological term describes a memory disorder where patients create false memories they genuinely believe, much like how ChatGPT generates convincing but entirely incorrect economic formulae with supreme confidence.

Consider a simple example: if you ask ChatGPT about the shape of the Earth, it confidently answers "oblate spheroid" — not because it grasps this truth, but because those words appear together frequently in its training data. Since its dataset includes most of the internet, it's reasonable to assume flat Earth content made it in there too. In a sense, ChatGPT simultaneously "believes" the Earth is both round and flat, just with different statistical weights assigned to each answer. There's no ground truth — just pattern recognition across vast amounts of text data.
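To make the 'different statistical weights' point concrete, here is a toy sketch of sampling from a weighted distribution over possible answers. The answers and weights are invented for illustration; a real model samples tokens from a learned distribution over a vocabulary of tens of thousands, but the mechanism is the same in spirit:

```python
import random

# Toy distribution over answers to "What shape is the Earth?".
# The weights are invented for illustration; the point is that every answer
# present in the training data gets *some* probability mass.
answer_weights = {
    "an oblate spheroid": 0.96,
    "flat": 0.03,
    "hollow": 0.01,
}

def sample_answer(weights: dict[str, float]) -> str:
    """Pick an answer in proportion to its weight; no notion of truth is involved."""
    answers = list(weights)
    return random.choices(answers, weights=list(weights.values()), k=1)[0]

print(sample_answer(answer_weights))  # usually correct, occasionally not
```

The mechanism that produces the right answer is identical to the one that produces the wrong answer; only the weights differ, and nothing in the process checks either against reality.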

The White House got bad advice, yes, but they also received fundamentally incorrect information presented with the veneer of authority that these systems excel at producing. This highlights the incalculable danger of using AI tools in contexts where precision and domain expertise are essential.

The Doomsday We Really Should Have Expected

For years, AI doomers have hysterically warned us about superintelligent systems outsmarting humanity and bringing about our downfall; this, of course, is nonsensical. The reality has proven far more banal yet far more destructive than they ever anticipated: it wasn't a hyper-sophisticated AGI converting the galaxy into paperclips that triggered economic catastrophe, but rather the combination of a decidedly mediocre language model and the astonishing gullibility of those who (you'd hope) really should know better.

At the very least, I hope this makes both the E/acc techno-utopianist wing and the doomer, apocalyptic AI safety camp have a good long think about their respective positions on AI. The threat isn't some future superintelligence; it's the present-day deployment of half-baked systems by individuals lacking both technical understanding and domain expertise. The real damage comes not from machines taking over, but from humans abdicating critical thought too readily.

The Human-In-The-Loop Fallacy

The supposed safeguard against AI mishaps has long been the 'human-in-the-loop' concept — the reassuring notion that human judgment would act as the final arbiter of AI suggestions. But this trade policy disaster exposes the fatal flaw in this reasoning: humans consistently demonstrate what psychologists call 'automation bias', the tendency to trust computer-generated information over their own judgment.

Research consistently shows that people are remarkably uncritical of AI outputs, particularly when those outputs are formatted to appear authoritative. A 2023 study from the University of Cambridge found that participants accepted AI-generated explanations 71% of the time, even when those explanations contained obvious logical fallacies or factual errors.

In this case, the 'humans in the loop' clearly lacked both the expertise to evaluate the AI's suggestions and the intellectual humility to seek verification. The Greek-symbol-adorned formula debacle reveals the true nature of our relationship with these systems: rather than acting as critical overseers, humans often function as uncritical amplifiers of AI-generated nonsense.

What Should We Do About This?

1. Establish AI No-Go Zones

First and foremost, we must acknowledge that LLMs have no place in contexts where their output truly matters. Government policy, healthcare, judicial decisions, and other high-stakes domains should be explicitly designated as AI no-go zones — at least until these systems demonstrate dramatically lower confabulation rates and higher reliability. (There's a solid argument to be made that challenges the idea that there is any context where LLMs have a place, but that is another essay for another day.)

The £1.9 trillion ($2.5 trillion) market wipeout triggered by this tariff debacle should serve as a stark warning: the economic cost of AI mistakes can dwarf any efficiency gains these tools purportedly offer. We wouldn't entrust nuclear launch codes to a system with even a 5% error rate; why are we comfortable using systems that are dramatically more fallible to craft policies affecting millions of livelihoods?
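To put a hypothetical 5% error rate in perspective, consider how quickly independent errors compound across a batch of decisions. The figures below are illustrative arithmetic only, not measured failure rates for any particular system:

```python
# If each AI-assisted decision carries an independent 5% chance of being wrong,
# the chance that at least one decision in a batch is wrong grows very quickly.
error_rate = 0.05  # hypothetical per-decision error rate, for illustration

for n_decisions in (1, 10, 50, 100):
    p_at_least_one = 1 - (1 - error_rate) ** n_decisions
    print(f"{n_decisions:>3} decisions -> {p_at_least_one:.1%} chance of at least one error")

# Output:
#   1 decisions -> 5.0% chance of at least one error
#  10 decisions -> 40.1% chance of at least one error
#  50 decisions -> 92.3% chance of at least one error
# 100 decisions -> 99.4% chance of at least one error
```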

2. Demand Domain Expertise

If AI tools are to be used at all in professional contexts, they must be wielded exclusively by individuals with deep domain expertise. The White House spokesperson who proudly shared the underlying tariff formula with a critical economist on Twitter, believing that its Greek symbols supported rather than obliterated the argument he was trying to make, is a perfect case study. Understanding which economic models apply to which situations, recognising when a formula is nonsensical, and identifying when an AI has confabulated all require tacit knowledge that the AI itself cannot provide. The inability of White House staff to recognise that their formula was mathematically equivalent to the one they were attempting to refute shows they lacked the most basic comprehension of the subject matter they were handling; presumably the same goes for the policy team who prompted ChatGPT in the first place. This is tantamount to medical malpractice, but on a global economic scale. As one STEM researcher put it on Twitter: "Now economists know how 'just inject bleach' felt."

3. Institutionalise Radical Scepticism

Finally, organisations using AI must institutionalise radical scepticism toward AI outputs. This means implementing rigorous verification processes, requiring multiple independent human evaluations, and establishing clear accountability structures for AI-informed decisions.
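One way to operationalise 'multiple independent human evaluations' is a simple sign-off gate: an AI-derived recommendation cannot be acted on until a minimum number of named domain experts have independently reviewed it. The sketch below is illustrative only; the class name, the threshold of three, and the record-keeping are all assumptions rather than any established standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIRecommendation:
    """An AI-derived proposal plus an auditable record of who has reviewed it."""
    summary: str
    approvals: set[str] = field(default_factory=set)
    required_approvals: int = 3  # assumed threshold, not an established standard

    def approve(self, expert_name: str) -> None:
        """Record an independent expert sign-off."""
        self.approvals.add(expert_name)

    def may_proceed(self) -> bool:
        """Only act on the output once enough independent experts have signed off."""
        return len(self.approvals) >= self.required_approvals

# A model-suggested tariff formula would sit here until properly reviewed.
proposal = AIRecommendation(summary="Set tariff = trade deficit / imports")
proposal.approve("economist_a")
proposal.approve("economist_b")
print(proposal.may_proceed())  # False: still needs a third independent review
```

The overhead is the point: if a recommendation cannot survive a handful of independent expert reviews, it has no business becoming policy.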

If the additional overhead of these safeguards negates the supposed efficiency benefits of using AI, that's not a flaw in the safeguards but rather an honest accounting of AI's current limitations. The promise of AI has always been that it would free humans from menial tasks to focus on more complex work. Perhaps the most complex work of all is determining when and whether to trust AI outputs in the first place.

Conclusion

The tariff fiasco serves as a cautionary tale of what happens when technological solutionism meets political expediency. It reveals not just the limitations of current AI systems, but also the dangerous overconfidence they can inspire in their users. This is a technical failure, yes, but it's also a sociotechnical one — a breakdown at the interface between human judgment and machine capability.

What's particularly alarming is how predictable this disaster was. Scholars in science and technology studies have long warned about the dangers of black-boxed technological decision-making in governance. Any automation theorist working anywhere near ML has been discussing these problems for literally decades. It's taught to undergrads; why isn't it taught to presidents?

As markets continue to reel and international relationships fray, we must confront an uncomfortable truth: we are entering an era where the most significant danger is not that AI will become too intelligent, but that we will become too credulous. The solution isn't to abandon technological progress, but to reestablish the primacy of human expertise, judgment, and responsibility.

Perhaps most concerning is that this incident represents not an endpoint but a beginning — the first major international crisis triggered by AI-driven policymaking. Unless we establish robust guardrails now, it certainly won't be the last. The question is whether we can learn from this £1.9 trillion lesson before the next, potentially larger, catastrophe strikes.
