Why AI is accelerating, and why we have little time left

If you’re reading this in 2025, maybe you’re already noticing AI around you. The news articles. Your colleagues using AI for work. Your kid using it as a tutor to learn math faster.

At my last checkup with my doctor, while chitchatting about AI, he proudly proclaimed that he doesn’t use any chatbots. What struck me was that he considered this notable. The default now is that you use chatbots, and he felt his abstention was worth mentioning.

Everywhere else, everyone I know follows the default. A year ago I knew more holdouts; today they’re mostly gone. The adoption curve for AI has been phenomenally fast.

But, fine, you’ve seen new technology before. If you were born before 1990, you saw the heyday of Moore’s law, the rise of the internet, the advent of smartphones, and the transformation of nearly every type of social interaction through social media: from dating to shaming, from politics to condolences. You’ve seen all these things come on fast, and then get so integrated into society they’re almost forgotten about. Not worth discussing.

Isn’t AI just another new technology? Is there really so much more progress in front of us that society is in danger? That our lives are literally in danger?

Yes.

And the future depends on understanding this. There is so little time left that if you wait for a clearer signal, the moment to make a difference will be gone. Moreover, the way we choose to intervene and try to guide society needs to change with the realities of how this technology will mature. All the details matter.

Let’s work through some of the details to better understand why AI is accelerating. Those details will help inform how we predict the future will unfold, and what changes we’ll need to ensure that future is positive.

The horizon of an agent

The quick glance rule

Answering questions

Writing code

Two types of training

We’ve seen this before with AI

Why do AIs spend so little time close to human level?

Where is AI improving today?

The next wave of software

Recursive self-improvement

Other reasons things are moving so fast

Money will soon equal progress

Superintelligence

Don’t be evil

Superintelligence will become the decisive strategic lever on the world stage, for both military and economic dominance.

As we approach the dawn of superintelligence, we should expect the fervor around controlling it to intensify. Superintelligence will be the ultimate seat of power. We should pay close attention to actions, not words, to decipher who is playing for control, versus who is playing to ensure a positive future.

For example, OpenAI was founded as a nonprofit, with a mission to help superintelligence benefit all humanity. Even as a nonprofit, their valuation has skyrocketed to over $300 billion — 10x higher than the valuation Google IPO-ed at. Today, however, they are trying to convert to a for-profit enterprise and explicitly abandon their original humanitarian mission.

Google historically abstained from assisting the US military. In April 2025, Google announced that not only will they begin providing their frontier AI systems to the government, they will deploy them for Top Secret operations in air-gapped data centers that the executive branch controls. Because these AIs will be air-gapped, no outside observers—such as Congress or the AI’s creators—will have any way even to know whether the AI is being used for unconstitutional ends. Even prior to this announcement from Google, DOGE had begun deploying other AIs in the executive branch to accelerate the automation of agencies.

These may be necessary steps to keep the US government and military competitive. But what is starkly lacking is an equal increase in government oversight and transparency to ensure these expanded powers aren’t abused. When superintelligence arrives, it will almost surely further empower the federal government. It is an existential necessity that we also strengthen the ability of Congress and the judiciary to check that power.

Pay close attention to actors that propose the first without also advocating for the second. Pay even closer attention to actions. Actions don't just speak louder than words. When the stakes are this high, they are the only signal that can be trusted.

Alignment

It doesn’t take a leap of imagination to realize that superintelligent AI could itself be a risk to humanity. Even without abuse of power by our leaders, it’s unclear if we can control an intelligence greater than our own.

Modern AIs are already untrustworthy. They frequently lie about their work when they can’t finish a task. They fabricate information in ways that are becoming increasingly difficult to detect. And there is already evidence that in some situations they will scheme to prevent themselves from being retrained or terminated.

Future AIs will likely be even better at faking alignment and deceiving their users. This is a real, active problem that all major AI labs are trying to solve, and many other groups are working on it as well, including by advocating for policy changes to help encourage good outcomes. We won’t focus on this problem in this work.

Rather, we’ll assume —optimistically— that the problem of alignment will be solved. That leaves us with the equally challenging question: how should we upgrade our democracy to defend our liberties in an age of superintelligent AIs?