Donald Trump's election victory signals a seismic shift in how artificial intelligence will be governed and developed in the United States. His promised dismantling of Biden's AI Executive Order, combined with the growing influence of techno-libertarian ideologies in Silicon Valley, points toward a dramatic deregulatory turn that could reshape the future of this transformative technology. The implications extend far beyond simple policy changes - they cut to the heart of how we think about AI development, safety, and the role of government oversight in technological progress.
The End of Biden's Regulatory Framework
The Biden administration's October 2023 Executive Order represented the most comprehensive attempt yet to establish federal oversight of AI development. Its core provisions - mandatory reporting of safety test results for the most powerful models, vulnerability assessments, and the creation of the US AI Safety Institute - aimed to create guardrails for an industry moving at breakneck speed. Trump's pledge to immediately dismantle this framework leaves a vacuum in AI governance at a critical moment.
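To make "most powerful" concrete, the Order set an interim reporting trigger of 10^26 training operations. Here is a minimal Python sketch of how such a compute-threshold rule might be encoded; the names and example figures are illustrative, not any agency's actual schema:

```python
# Minimal sketch of a compute-threshold reporting trigger, loosely
# modeled on the Executive Order's interim 10^26-operation threshold.
# Class and function names are illustrative, not any agency's schema.

from dataclasses import dataclass

REPORTING_THRESHOLD_OPS = 1e26  # training compute that triggers reporting

@dataclass
class TrainingRun:
    model_name: str
    total_training_ops: float  # integer/floating-point operations used

def requires_safety_reporting(run: TrainingRun) -> bool:
    """Return True if the run crosses the compute threshold for
    mandatory reporting of safety-test results."""
    return run.total_training_ops >= REPORTING_THRESHOLD_OPS

for run in [TrainingRun("small-fine-tune", 3e21),
            TrainingRun("frontier-pretrain", 2e26)]:
    flag = "report safety tests" if requires_safety_reporting(run) else "below threshold"
    print(f"{run.model_name}: {flag}")
```

A bright-line compute threshold like this is crude - capability doesn't track training compute exactly - but it is cheap to administer, which is why the Order reached for it.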
This isn't happening in isolation. The likely "streamlining" or outright repeal of the CHIPS Act under Speaker Mike Johnson would further erode attempts to build domestic semiconductor manufacturing capacity - capacity that's essential for training advanced AI models. Combine that with Trump's proposed tariffs - 10% on all imports and 60% on Chinese goods - and we're looking at a perfect storm that could severely disrupt the AI industry's hardware supply chains.
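To see how quickly flat tariffs compound at AI-infrastructure scale, consider a rough back-of-envelope sketch. Every figure below - cluster size, per-accelerator price, and the share of hardware value sourced from China - is a hypothetical placeholder, not a sourced estimate:

```python
# Back-of-envelope estimate of tariff impact on AI hardware capital
# expenditure. All inputs are hypothetical placeholders.

def tariff_impact(num_accelerators: int, unit_price_usd: float,
                  china_sourced_share: float,
                  general_tariff: float = 0.10,  # proposed 10% on all imports
                  china_tariff: float = 0.60):   # proposed 60% on Chinese goods
    base_capex = num_accelerators * unit_price_usd
    china_portion = base_capex * china_sourced_share
    other_portion = base_capex - china_portion
    added = china_portion * china_tariff + other_portion * general_tariff
    return base_capex, added, 100 * added / base_capex

# Hypothetical 10,000-accelerator cluster at $30,000 per unit, with 20%
# of hardware value sourced from China.
base, added, pct = tariff_impact(10_000, 30_000, china_sourced_share=0.20)
print(f"base capex ${base/1e6:.0f}M, tariff cost ${added/1e6:.0f}M (+{pct:.0f}%)")
# -> base capex $300M, tariff cost $60M (+20%)
```

Even under these conservative assumptions, flat tariffs add tens of millions of dollars to a single training cluster - before accounting for supply-chain rerouting or retaliation.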
The Rise of Techno-Libertarian AI
But focusing solely on what regulations will be stripped away misses the deeper ideological shift at play. Trump's return to power, heavily backed by Elon Musk's millions, represents the ascendancy of what we might call techno-libertarian AI development - an approach that views government oversight as an impediment to innovation rather than a safeguard for public interest.
This ideology aligns closely with what scholars have termed the "TESCREAL bundle" - a cluster of techno-futuristic worldviews including transhumanism, singularitarianism, and longtermism that has become increasingly influential in Silicon Valley - and with the adjacent "effective accelerationism" (e/acc) movement. These perspectives share a common thread: the belief that rapid, unfettered technological development is not just desirable but morally imperative.
The e/acc movement in particular, which gained prominence during the recent OpenAI leadership crisis, provides a window into this worldview. E/acc proponents argue that accelerating AI development is essential for human progress and that concerns about AI safety are overblown. This stands in stark contrast to the "effective altruist" (EA) perspective that influenced much of Biden's regulatory approach.
Musk's Growing Influence
Elon Musk's substantial financial support for Trump's campaign wasn't just about political alignment - it was about securing regulatory influence. With xAI's Grok-2 model competing against established players like OpenAI and Anthropic, Musk has a vested interest in shaping the regulatory environment. His companies, from Tesla to SpaceX to xAI, stand to benefit significantly from a deregulatory agenda.
The irony here is striking. Musk has repeatedly warned about the existential risks of artificial intelligence, yet he's now backing an administration likely to remove the few guardrails we have. This apparent contradiction makes more sense when viewed through the lens of market competition - Musk may be less concerned about government oversight of AI safety than about regulations that could advantage his competitors.
The State-Level Response
As federal oversight recedes, state governments are likely to step into the breach. We're already seeing this with California's AI transparency requirements, Tennessee's voice cloning protections, and Colorado's tiered oversight system. This could create a complex patchwork of regulations that companies will need to navigate - potentially more burdensome than a single federal framework.
However, these state-level efforts may face challenges from a Trump administration hostile to tech regulation. Federal statutes enacted under the Commerce Clause can preempt state regulations that affect interstate commerce, and courts can strike down state laws that unduly burden it even in the absence of such statutes. A Trump Justice Department could actively support challenges to state AI regulations deemed too restrictive.
Military AI and the New "Manhattan Projects"
While Trump's allies talk about "deregulation whenever possible," they're simultaneously advocating for a series of "Manhattan Projects" to advance military AI capabilities. This suggests a split approach: minimal oversight of commercial AI development coupled with aggressive government investment in military applications.
This mirrors broader patterns in Trump's approach to technology policy - skepticism of regulation except where it serves national security or military interests. The risk here is creating a two-track system where military AI development proceeds with massive government funding while commercial AI development occurs in a regulatory vacuum.
The Global Impact
The implications extend well beyond U.S. borders. Because the United States is the world's leading AI developer, its regulatory policies have outsized global influence. A dramatic deregulatory turn under Trump could undermine international efforts to establish AI governance frameworks, potentially triggering a race to the bottom as countries compete for AI investment and talent.
The proposed tariffs on Chinese goods could also accelerate the bifurcation of global AI development into U.S. and Chinese spheres, with other countries forced to choose sides. This could hamper international collaboration on AI safety and ethics at precisely the moment such cooperation is most needed.
The Safety Paradox
Perhaps the most concerning aspect of this deregulatory turn is its timing. We're entering what many experts consider a critical period in AI development, with models becoming increasingly powerful and their societal impacts more profound. Removing oversight now, when the technology is advancing so rapidly, creates significant risks.
The safety paradox is this: while a deregulatory environment might accelerate AI development in the short term, it could ultimately slow progress by eroding public trust or producing accidents that prompt a harsh regulatory backlash. The history of other transformative technologies - from nuclear power to biotechnology - suggests that high-profile failures under weak oversight breed public fear and resistance that can stall a field for decades.
Alternative Futures
What might an alternative approach look like? Rather than wholesale deregulation or heavy-handed government control, we could pursue what some scholars call "adaptive regulation" - frameworks that evolve with the technology while maintaining basic safety standards. This would preserve innovation while protecting public interests.
Several key elements could form the basis of such an approach:
Mandatory safety testing for the most powerful AI models
Transparency requirements for training data and methodologies
Clear liability frameworks for AI-related harms
International coordination on standards and oversight
Public-private partnerships for research and development
However, given Trump's stated positions and the influence of his techno-libertarian supporters, such a balanced approach seems unlikely.
Looking Ahead
As we look toward Trump's return to office, several key indicators will signal the direction of AI policy:
The speed and scope of the Biden Executive Order's dismantling
Appointments to key technology policy positions
Treatment of state-level AI regulations
Changes to AI export controls and chip policies
Military AI funding priorities
The most likely scenario is a period of significant deregulation followed by reactive policy-making in response to crises or accidents. This pattern - deregulate first, regulate later - has characterized many technological transitions, often with substantial costs.
Conclusion
Trump's victory represents more than just a change in administration - it signals a fundamental shift in how America approaches AI development and oversight. The ascendancy of techno-libertarian ideologies, combined with Trump's deregulatory instincts, points toward a period of minimal oversight at precisely the moment when thoughtful governance is most needed.
The risks are significant: potential safety issues, erosion of public trust, international tension, and the possibility of accidents that could set AI development back years. Yet there are also opportunities - the chance to experiment with new governance models at the state level, to develop international frameworks that don't depend on U.S. leadership, and to build public-private partnerships that can fill some of the oversight gap.
What's clear is that we're entering uncharted territory. The next four years will likely shape not just the future of AI development but our broader relationship with transformative technologies. Whether this represents a triumph of innovation or a dangerous experiment in deregulation remains to be seen. What's certain is that the impacts will resonate far beyond Trump's term in office.
For those concerned about responsible AI development, the focus must now shift to building alternative frameworks - through state governments, international cooperation, and private sector initiatives - that can maintain some oversight in an era of federal deregulation. The future of AI is too important to be left entirely to market forces, no matter what the new administration believes.