The realpolitik of AI — forging a new political alliance
AI is rapidly becoming a political topic. In a few years, AI will become the primary source of economic and military power in the world. As it does, it will become the central focus of politics. If you thought the conversation was messy today, just wait.
No one is free from politics and groupthink. Either we're implicitly biased by our prior battle scars, or we're implicitly influenced by others still fighting old wars. Here we map existing forces to understand how they shape perspectives on AI and inform debates on creating a human-machine society. Hopefully, this helps us better navigate public discourse on AI governance by addressing explicit and implicit biases.
Today, old politics will increasingly try to cast AI debates in its own language and toward its own goals. Tomorrow this will reverse, and old political debates will start recasting themselves in the new language of AI. Political language follows the seat of power, and AI will soon become the ultimate throne. As the power of AI grows, the jockeying and politicking will intensify, as will our own internal biases and tribalisms. But we have to set aside old battles. We must keep our eye on the goal: a human-machine society that can govern itself well. If we succeed, a well-governed society is what gives us a chance to resolve every other debate. Today, we should seek a political ceasefire on every issue but the future of democracy in an age of AI.
No other political cause matters if we don't succeed at setting a new foundation. A human-machine society will arrive in just a few years, and we don't yet know how to stabilize it. If we do succeed, we will have a future in which to bask in the joy of relitigating all our past grievances, without the collapse of society into AI-powered dictatorship looming over us. But if we don't fortify democracy today, we will lose all our current battles, all our future battles, and likely our freedom to boot.
Let's jump across the landscape and see where current politics takes us. The scorched-earth, yield-no-ground style of modern politics distorts even noble causes into dangerous dogma, but there is truth and goodness in each of these causes. Just as importantly, we’ll argue that adopting the policy of any one group wholesale will likely lead to disaster.
We instead must adopt the right proposals across the political spectrum. We must upgrade our government, modernize our military, enhance checks and balances, and empower ourselves as citizens. If we do only some of these things, the game is up.
The right politics already exist, dispersed across different groups. Our goal is to embrace the goodwill of each of these groups and movements, to point out where AI changes the calculus of what they fight for, and to highlight that today we are all on the same side: humanity's.
Pause AI
After thinking through everything superintelligence will unleash, the dangers it presents, and the carelessness the world is currently displaying in building it, you could forgive anyone for saying:
“Jesus fuck, let's just not build this.”
Thus the Pause AI movement was born.
Politically, you might expect this group to be composed of degrowthers and pro-regulation contingents. But in fact the Pause AI movement includes many people who are normally pro-growth, pro-open-source, and pro-technology generally. They rightly say that, despite their usual support for technology, this technology is different. We should commend them for that clarity, and for pushing to expand the AI conversation into the public sphere, where it’s most needed.
There are downsides to pausing. Our geopolitical adversaries may not pause, for one. China is racing to build AGI and is only months behind the US. Moreover, it's getting easier to build AGI every year, even if research is halted. The most important ingredient of AI is compute, and Moore's law makes compute exponentially cheaper over time. If we succeed at pausing AI internationally, what we will really do is delay AI. Then, in a few years, once compute is even cheaper, hobbyists or small nation-states around the world will easily be able to tinker toward AGI, likely under the radar of any non-proliferation treaty. The only way to truly stop this would be an international governance structure over all forms of computing, requiring granular monitoring not just at the industrial scale but at the level of the individual citizen. This would require international coordination beyond anything the world has ever seen, and an invasive government panopticon as well.
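To make the compute trend concrete, here is a rough back-of-the-envelope sketch. The figures are illustrative assumptions, not forecasts; the only premise is that cost per unit of compute halves roughly every two years:

```latex
% Illustrative assumption: cost per unit of compute halves every ~2 years.
% Then the cost of a fixed-size training run after $t$ years is
\[
  C(t) = C_0 \cdot 2^{-t/2}
\]
% Example: a run costing $C_0 = \$1$ billion today would cost roughly
% $C(10) = 2^{-5} \cdot \$1\text{B} \approx \$31$ million after a decade,
% and $C(20) = 2^{-10} \cdot \$1\text{B} \approx \$1$ million after two --
% within reach of small states and well-funded hobbyists.
```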
Still, non-proliferation has seen partial successes before, as with nuclear weapons and nuclear energy. More recently, we’ve seen international coordination to prevent heritable human gene editing and human cloning. We shouldn’t assume the world lacks the political willpower to achieve a peaceful future. The specifics of AI may make this path unlikely and even dangerous to pursue, but it’s nonetheless a good-faith position that deserves a place in public discourse.
If you’re in tech, it’s easy to sneer at this position (and indeed, many technologists do). Technology and science have been a leading force for good in the world, ushering in more abundance and prosperity than at any time in history. If nothing else, though, keep in mind that the vast majority of people outside of technology appreciate what technology gives them but remain fundamentally skeptical of it, and often cynical. You won’t win any allies if your cavalier dismissal alienates the majority.
On the other side, if you’re cynical about technology, keep in mind the realpolitik of the world. Technology is a key source of geopolitical power. Whatever your own preferences, undermining it can have many unintended consequences.
Exactly not like nuclear
Nuclear weapons and nuclear energy are a common analogy for AI. Nuclear is dual-use, with both military and civilian applications. It’s capable of destroying humanity or giving it near-infinite free energy. We have managed to secure some international treaties for non-proliferation. We've also forgone most of the benefits in order to achieve the moderate safety we've secured. Whatever your opinion on nuclear energy, it’s an existence proof that humanity is capable of walking away from incredible treasures when doing so helps secure peace and non-proliferation. So why not with AI?
Nuclear requires difficult-to-source fissile material like uranium. There are only a few good uranium mines in the world. AI requires computer chips, which are literally made out of sand. There is still a shortage of chips today, because the appetite for AI is so voracious, but what limits us is industrial capacity, not a scarce resource.
Moreover, nuclear weapons are, ironically, defensive weapons only. In an age of mutually assured destruction, the primary benefit of acquiring nukes is to deter enemies from attacking you. AGI will be far more powerful and surgical. For instance, an AGI can help a dictator control their country. An AGI can help a free country outcompete a rival on the economic world stage. An AGI can help a would-be dictator seize power. An AGI can unlock what a trillion-dollar company needs to become a ten-trillion-dollar company.
Those incentives push leaders across the world to covet AI in a way that nuclear never could. There's no world where a CEO needs a nuke to stay competitive. There's no world where a president can wield nukes to consolidate power over their own citizens. Nukes are ham-fisted weapons that limit their own use. An AGI will be a shape-shifting force that can help any motivated power become more powerful. This makes international non-proliferation substantially harder to secure.
So let's regulate!
We rely on government to step in where free markets fail. The free market pushes us to build AGI despite all the negative externalities and risks, so government regulation seems prudent. But the government is not a neutral force. If we empower the government to control AI so that industry doesn't abuse it, we are handing the government a powerful weapon for consolidating power. This is unlike the common regulations we’re familiar with. Federal regulations over national parks don't help the government seize power. Regulations guarding our rivers from toxic industrial runoff don't help the government seize power. Regulations on how fast you can drive on a freeway don’t help the government seize power.
Many common regulations can combine to give the federal government excessive power. We’ve been debating how to limit that aggregated power for hundreds of years. We don’t pretend to have an answer to that complex debate here. Instead, we simply flag that AI is different, and merits a dedicated conversation:
Allowing the federal government to control AI directly gives it the tools it needs to consolidate power. An automated executive branch could far outstrip the ability of Congress or the public to oversee it. The potential for abuse is extreme.
That doesn’t mean regulation has no place. But it does mean we need to be thoughtful. Politics often pushes people toward one of two camps: regulations are good, or regulations are bad. This is always the wrong framing. The correct framing is to prioritize good outcomes, and then reason through what the right regulatory environment is. Sometimes regulations can help achieve good outcomes. Sometimes removing regulations is what’s needed. And sometimes regulation is needed, but the regulations that get passed are so bad they’re ultimately worse than no regulation at all.
Keep this in mind when reading or discussing AI policy proposals. If you read a piece that argues the merits of regulation or deregulation in general, the author is likely trying to appeal to your political affiliation to win you as an ally, rather than engaging you in the hard work of debating what we actually need to ensure a free future.
Libertarians and open source absolutists
Libertarians believe in small, accountable government. They inherently mistrust government and instead seek to empower citizens and the free market to better resolve societal issues.
Deregulation of AI is a natural position for libertarians, but their underlying goal is to distribute this new power among the people so that it can't concentrate in the government. To further that goal, they often advocate open-sourcing AI so that it's freely available, which helps small companies compete against big ones and helps citizens stand up to tyranny. In general: level the playing field and keep the extreme power of AI distributed. As with our other heroes from different political backgrounds, this goal is noble. And this too requires nuance.
There are inherent limits on how powerful a human-powered company can become. People get disillusioned and leave to start competitors. A limited pool of top talent prevents companies from tackling too many verticals. At scale, company politics crushes productivity and demoralizes employees.
Humans have a precious resource that companies need: intelligence. That gives bargaining power to all of us.
And AI destroys that power.
Today, a passionate designer can leave a company and build a new product that delights new users. In fact, this is becoming easier with AI. But once the intellectual labor of that designer is automated, the power dynamic flips. A mega company can simply spend money to have an AI design the same product, or a better one. And the AI won't be frustrated by politics or ego.
But won't that designer also have AI? Yes, but less of it, even if all AIs were open source. With AI, we know that more is more. If you have 100x the budget to spend on AI thinking, you will get much better results. And big companies have millions of times more resources than small companies. In the age of AGI, money buys results, and more money will always buy better results, and more of them. Money will breed money, and will never again be beholden to human genius and drive.
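One hedged way to quantify “more is more”: empirical scaling laws (e.g., Kaplan et al., 2020) find that model performance improves smoothly as a power law in training compute. A sketch of that relationship, using the published exponent as an illustrative value:

```latex
% Empirical scaling laws (Kaplan et al., 2020) fit training loss as a
% power law in compute $C$:
\[
  L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \approx 0.05
\]
% Returns diminish but never plateau: every 100x in compute still cuts
% loss by a predictable factor ($100^{0.05} \approx 1.26$), so a bigger
% budget reliably buys a better model.
```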
We want the libertarian ideal of empowered citizens. But stripped of our key competitive advantage —the uniqueness of our intelligence— this won't be the default outcome. We need a new chessboard or we won't be players any longer.
Degrowth
The degrowth movement views the excesses of capitalism and hyper-growth as a key factor in the ongoing deterioration of the world.
Degrowthers often point to environmental costs as arguments against AI, such as the energy required to train AIs or the ongoing energy demands of AI data centers. Like the environmental movement they grew out of, degrowthers want to protect the most precious things in the world from the dangers of industrialization: nature, our social fabric, and ultimately our humanity. Noble goals.
Slowing down has downsides, though. Degrowthers have often allied with entrenched upper-class interests like NIMBYs, seeking to slow the housing development needed to lower the cost of living for everyone. The movement against nuclear energy has resulted in higher energy costs and worse environmental impacts. Degrowth comes at a price: higher costs and a worsening quality of life.
In truth, capitalism has created more abundance, even for the poor, than at any other time in post-agricultural civilization. And the bounty of AGI could do even more for degrowther goals: it could free humanity from the daily toil of capitalism while ushering in more abundance in ever more efficient ways. But the distrust of capitalism isn’t entirely misplaced: by default, the forces of capitalism will assimilate AI and consolidate power in a way that need not be conducive to a happy civilization. We should all be critical of the dynamics at play.
Growth, YIMBY, Silicon Valley, and the e/accs
In contrast to degrowth are the pro-abundance movements. Often centered around technology, pro-abundance forces choose optimism about a richer future, and they want to build it: more energy, more houses, more technology, more cures for diseases. AI can accelerate all of these goals, and so these groups are often pro-AI and pro-deregulation of AI.
But sometimes you do need to slow down if you want to go fast. Nuclear energy would likely be more pervasive today if Three Mile Island, Chernobyl, and Fukushima hadn’t scared the absolute shit out of everyone. If a similar AI disaster happens, how strong will the public backlash be? How onerous will the regulatory burden become?
That backlash may slow down the advent of AGI by years, which in turn may delay cures to disease, dooming millions more to death. Moreover, a heavy regulatory environment may merely shift AI deployments out of the public and into the opaque world of the military and government, breeding further risks of concentration of power.
The pro-tech world rightfully wants the abundance AI can deliver. We should evolve our society thoughtfully to ensure that abundance actually arrives.
Jingoism and the military-industrial complex
It’s probably no surprise to anyone that the military is well beyond interested in AI. Big military contractors like Anduril and Palantir have already committed to deploying AI across the government. To stay competitive, there's likely no other option. Even traditionally liberal big tech companies have walked back public commitments not to partner with the military: part of the “vibe shift” heralded by the 2024 presidential election.
And in truth, some of this is necessary. No foreign adversary is slowing down its militarization of AI. We're behind on every form of international AI non-proliferation discussion, even narrow discussions focused specifically on military AI applications.
There are the obvious aspects of an automated military. Drones will become more accurate, more autonomous, and more numerous. Intelligence gathering will become faster, broader, and more reliable.
But dangers abound. Today’s military is powered by citizens bound by an oath to the Constitution and a duty to their fellow countrymen. A military AI aligned to the commands of a general or president need not have those sensibilities. And because the US government represents such a massive potential client for AI companies, there will be extreme economic pressure to provide the government with unfettered AI that never rejects orders.
The US military is also one of the largest federal expenses, at over $800 billion a year. There is increasing pressure to reduce spending, and military automation is one way to do it. Military AI won’t just be more accurate, capable, and numerous than human forces; it will also be cheaper. AI hardware will likely prove cheaper than most of today's expensive arsenal as well. Drone warfare is already paving the way for cheap, AI-powered military hardware to outpace the heavy, expensive hardware of the past. Because of this, there will be (and already is) both economic and strategic pressure to automate the military.
As we’ve seen many times elsewhere, this bears repeating: the default incentives we have today push us toward automating important institutions, and once those institutions are automated, the threat to democracy grows sharply.
An automated army with no oath, taking direct orders from perhaps one or a handful of people, is the quintessential threat to democracy. Caesar marched on Rome precisely because he had a loyal army. If an AI army is likewise loyal to its commander or president, the most fundamental barrier to dictatorship will be gone. Human soldiers rarely accept orders to fire on their own people. An AI army might have no such restraint.
Running through all of this will be the ongoing rhetoric that we must secure ourselves against China. Meanwhile, counterforces will push for no automation at all. We have to resist the urge to plant ourselves on one side of a political battle where we might be obliged either to approve of an automated military with no oversight, or to push for no automation at all.
Instead, we must modernize our military to remain the dominant superpower, and we must simultaneously upgrade the oversight and safeguards that prevent abuse of this incredible concentration of power.
The longer we wait to do this, the less leverage we'll have. If war were to break out tomorrow, who would possibly have the political courage to stand up for oversight and safeguards while we automate our fighting forces?
Jobs
Jobs have been such a key ingredient in our society that we often mistake them for something inherently good, rather than something that delivers good things. Jobs are good when they create abundance, when they help our society grow, and when they allow job-holders to pursue a happy and free life.
But throughout history we’ve eliminated jobs —or allowed them to be eliminated— in order to usher in a more abundant world. The majority of Americans used to be farmers, but industrial automation massively increased farming efficiency, freeing most of the population to pursue other endeavors that have also pushed the country forward. At the same time, those who do pursue industrial farming today are far richer than almost any farmer from 200 years ago.
This same story has played out many times. The world is much better off because of the vast amount of automation that we’ve unlocked. Goods and products are cheaper, better, and more readily available to everyone. And yet, we as a society often still fight against automation, because we fear for our jobs. And rightfully so. The way we’ve designed our society, you are at extreme risk if your job is eliminated.
Sometimes this slows progress. Automation of US ports has been stalled by negotiations with longshoremen's unions, which has decreased port efficiency and increased costs for Americans. Meanwhile, China has nearly fully automated its ports, continuing to compound its industrial capacity. Competitiveness on the world stage will become increasingly important in the next few years as AI-powered automation takes off. Countries that delay automation will fall behind, both economically and militarily.
Automation proponents often argue that new jobs will always replace the eliminated ones. But there is a real chance this will no longer be true with AGI. If a future AGI can do everything a human can do, then any new job created can be automated from the start.
So what do we do? Our future depends on automating nearly everything. But our society is designed to function well only with a strong, well-employed citizenry.
This is, as they say, tricky as fuck. There aren’t easy answers, but we for sure won’t get anywhere if we keep having bad-faith arguments built on tired and incorrect assertions.
We should also keep in mind the political expediency that may arise from a public backlash against automation-driven unemployment. There is little appetite in Washington to regulate AI today. But in a near-future world where AI-fueled unemployment is skyrocketing, it may become easy for the government to step in and halt AI's impact on private-sector jobs. Meanwhile, it may simultaneously use that moment to push for government and military automation. And why not? This would be argued as a win-win-win: the private sector would maintain low unemployment, the US would maintain international military dominance, and US citizens would enjoy lower taxes as the government unlocks AI-powered efficiency.
This indeed may be a great outcome, so long as we have oversight in place to ensure government automation isn’t abused.
Today, in 2025, government efficiency is a widely supported goal. While DOGE has proven a politically divisive issue, the goal of efficiency itself has remained popular. Everyone knows the government is slow and bureaucratic. It won’t take much political willpower to fully automate the government once AGI arrives.
Republicans and Democrats
For better or worse, AI is coming. It will reshape every aspect of our world. But we have control over how this new world will look and what the new rules will be. We all want to reach a positive future, whether we’re Republicans, Democrats, or independents. The choices we make need to be the right choices, not just the politically expedient ones. The AI conversation is unfortunately rapidly becoming a partisan issue, with specific choices pre-baked to align with major political fault lines, regardless of how well-thought-out those AI policy stances are. But with the stakes so high, we can’t afford to let tribalism be our rallying cry.
We have to do better than our past politics.
We’ve discussed many threats and challenges that AI poses. Most of these are naturally bipartisan issues. Nobody wants their face eaten off by a robot attack dog. Nobody wants an overpowered executive that can seize unlimited power. Everybody wants the abundance that AI can usher in, from cures for diseases to nearly free energy and food.
But the solutions to try to mitigate these harms and ensure the benefits are becoming politically coded.
For example, the Biden administration began to lay the foundation for some forms of AI regulation, aiming to ensure AI wasn’t misused by bad actors. This naturally created a perceived alignment between Democrats, regulation, degrowth, and AI safety, and in turn pushed the right toward the opposite stance.
As of early 2025, Republicans have come out firmly in favor of deregulation, growth, and open-source AI. Their aim is to ensure US competitiveness in the new AI age and an abundant future.
These need not be partisan battlegrounds, though. In fact, they must become bipartisan collaborations for America to succeed on the world stage.
Most Americans want a prosperous country, regardless of their politics. For that, we’ll need to accelerate our energy investments, build out our domestic chip manufacturing, and ensure we can continue to automate our industry to be competitive on the world stage. But if we’re too careless, we will ultimately cause a backlash that slows us down more than any regulation. The AI equivalent of a Chernobyl meltdown could freeze AI development and put us in a permanent second place on the world stage. If we don’t address the problems caused by AI automating all jobs, the public backlash may further stall the growth of automated industrial capacity.
Most important of all, we the people must stand for freedom and a transparent, accountable government — whether we’re Democrats, Republicans, or adherents of any other political philosophy. To defend our freedom, we must upgrade the legislature and judiciary to be AI-enhanced, just as the executive and the military will be. If we don't, we risk what American patriots have always fought to prevent: a government of tyranny.