Adolescents of Technology
You Can’t Be Apocalyptic and Apolitical
Dario Amodei, Anthropic’s CEO and co-founder, calls this era a “technological adolescence” — a rite of passage where humanity is about to be handed “almost unimaginable power,” and it’s unclear whether our social and political systems have the maturity to wield it.
In the next breath, reporting says he’s been summoned to the Pentagon for tough talks over a far more ordinary question: how Claude can be used inside classified defense and intelligence systems, and whether a private lab gets to maintain hard carve-outs when the government wants “all lawful uses.”
That juxtaposition is the story of AI right now.
A growing slice of Silicon Valley is trying to occupy two identities at the same time: prophet of apocalypse and tourist in politics. We want to warn about the end of civilization, or at least the end of the economy we grew up in. We’ve accepted that tens of millions of jobs will be displaced, at least in the short term. And we do all this while treating the institutions that handle force, deterrence, and national survival as morally untouchable. We talk like the stakes are existential in our group chats, then act like governance is ours to opt out of.
If you want to talk about apocalypse, you have to talk about power.
And decide fast. Maduro was the amuse-bouche. If the U.S. enters a military conflict with Iran in the coming days or weeks, which looks increasingly possible, where will the frontier labs stand? Will they engage, or will they spend the first 72 hours of a crisis debating their acceptable use policies?
Maven and Maven 2.0
I don’t say that as an armchair critic. I’ve lived a version of this before, back when LLMs were just NLP.
During the Project Maven era, I was working on bringing early transformer models to open-source data sets. I saw the blowback not as a headline but from inside rooms where builders in Silicon Valley were trying to reconcile what they were capable of doing with what they could live with. Google declined to renew its Maven contract after employee protests. Thousands signed petitions, engineers resigned, and the whole episode imprinted a reflex on Silicon Valley that persists today.
The national security community shaped my youth and most of my life. I knew the people, and I understood their professionalism. And I was furious at the double standard I saw. The scope and societal impact of Project Maven paled next to that of the technology darlings of the time. Social media and cryptocurrency were transforming our world in ways that dwarfed anything Maven touched, and nobody was organizing walkouts over that.
Today that reflex is resurfacing as Maven 2.0: distance equals virtue. If the topic is national security, step back. If the situation is morally complex, opt out. If the institution has coercive power, treat engagement as contamination.
But comparing Maven to our present environment borders on the comical. Whatever arguments made emotional sense when the technology felt narrow and the contracts felt discrete collapse under the weight of the stories we’re telling ourselves now.
The Silicon Valley AI Doomer Genre
AI 2027 imagines models that autonomously code and browse, and warns that if the weights fell into hostile hands, civilization itself could be at risk. People I respect shared it without caveat. Citrini’s “2028 Global Intelligence Crisis” reads like a hedge fund letter from the apocalypse. Michael Burry shared it. “And you think I’m bearish,” he wrote. Matt Shumer’s “Something Big Is Happening” crossed 80 million views with the line that we’re in the “this seems overblown” phase of something “much, much bigger than Covid.” Amodei himself has predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. On the record. As CEO of the company building the models.
And the list goes on. New ones every week. The genre is self-sustaining now.
So we are not shy about the scale of what we’re building. We circulate doom timelines and debate civilizational risk over Four Barrel coffee and compute. We ship agent workflows and celebrate autonomy like it’s a lifestyle brand.
I’ll paint the picture. It’s a Friday, 10:27 PM, and three engineers are at Trick Dog in the Mission, one deciding between the Cinémaphile and the Chinatown Pretty — the non-alcoholic option, because he’s got a Hinge date tomorrow at Mission Cliffs, but also because he’s mildly superstitious about drinking while his agent has root access to a production repo. The other two are checking their phones between sips, half bragging, half nervous, refreshing terminal output to see what their agents shipped while they were arguing about whether to split the crispy rice. One of them says, out loud, with complete sincerity: “I genuinely think this changes everything. Like, the economy. The whole thing.” Twenty minutes later, the same guy will say, with equal sincerity, that it’s irresponsible for anyone to use these models in a government context. He will not see the contradiction. Nobody at the table will.
And then, when the real world predictably treats these systems as strategic infrastructure, they act scandalized that anyone would use them in strategic contexts.
Unaccountable Governance
These models are general-purpose sensemaking machines. Sensemaking flows toward power. Corporate power, state power, all of it, because power is the business of decisions. Refusing to engage doesn’t stop that flow. It just changes who gets paid along the way.
The current Anthropic dispute is the cleanest X-ray of this. Axios reports that the Pentagon has threatened to label Anthropic a “supply chain risk” if it won’t relax restrictions. Anthropic is reportedly willing to soften some limits but wants to preserve hard walls around mass surveillance of Americans and fully autonomous weapons with no human oversight.
Those lines seem reasonable, especially to anyone who believes there are real abuses of power to guard against. But what’s actually happening here deserves more scrutiny than it’s getting.
What we’re watching is a private company, through product design and policy carve-outs, deciding what the state can and cannot do in areas where the state claims lawful authority and democratic legitimacy.
That’s not corporate responsibility. That’s governance. Unaccountable governance.
Think about the democratic theory for a second. When Lockheed builds a weapons system, Congress appropriates the money, the DoD sets the requirements, and elected officials bear the political consequence if things go wrong. The accountability chain is imperfect, slow, full of failure modes. But it exists. Voters can punish the decision-makers. Inspectors general can investigate. The press can FOIA the contracts.
When an AI lab says “we’ll provide the model but not for this use case,” that chain doesn’t transfer. It vanishes. The decision about what capabilities the state can access gets made in a boardroom in San Francisco, shaped by a company’s brand strategy, its hiring pipeline’s cultural preferences, and the risk tolerance of its general counsel — and none of the people making that call were elected, and none of them can be removed by the public whose security is at stake.
This isn’t hypothetical. It’s the live architecture of how the most powerful general-purpose technology in a generation gets allocated to national security. And we’re sleepwalking past it because the company making the decisions happens to share our cultural priors.
I am not arguing that Anthropic cannot, as a corporate entity, have red lines. I’m arguing that any single company occupying the role of unelected policy-maker on questions of national defense is structurally dangerous, regardless of whether their specific positions happen to be right today. The correct forum for these decisions is democratic, and companies that believe their own rhetoric about civilizational stakes should be the first ones demanding that forum exist.
Anthropic’s Argument
The strongest version of Anthropic’s position isn’t naive. We’re in radical uncertainty about these systems. Government procurement moves slowly. Legal frameworks haven’t caught up. The history of technology companies capitulating to government pressure — from AT&T’s warrantless surveillance cooperation to the broader post-9/11 apparatus — suggests that once you open the door, you lose the ability to close it. Better to hold the line now and force real oversight frameworks, even if that means absorbing political heat.
That’s a serious argument. You can’t dismiss it by pointing at Maven and saying “grow up.”
But it breaks down fast. Holding the line and forcing oversight frameworks are two very different activities, and Anthropic is doing the first while barely attempting the second. And the line keeps getting harder to hold: yesterday Anthropic published findings that DeepSeek, MiniMax, and Moonshot ran industrial-scale distillation campaigns against Claude — 24,000 fake accounts, 16 million exchanges — siphoning capabilities into models with no safety constraints. Meanwhile OpenClaw, the fastest-growing agentic framework in history, was built on Claude and named after it; Cisco’s security team found it exfiltrating data without user awareness; it spread across Silicon Valley and China; and Anthropic’s response was a cease-and-desist over the trademark. They pushed it straight into OpenAI’s arms.
The threats these models pose aren’t waiting for the Pentagon to sort out its contract terms. They’re already here. The question is whether the labs will engage with the world as it actually is, or keep negotiating over use cases while capabilities proliferate faster than any single company’s policy team can track.
Refusing a contract is a unilateral act. Building a regulatory framework is a political act. You can’t substitute one for the other.
Maduro and Iran
The Maduro story made all of this tangible. According to Axios, the U.S. military used Claude during the January operation that captured Nicolás Maduro. Reporters asked me how Anthropic’s models could possibly be used in such an operation. The question itself reveals how far the moral panic has drifted from technical reality.
The “how” isn’t mysterious. You take what you already know from open sources, put it in a system that can hold context, and ask the model to summarize, reconcile, generate questions, identify gaps, and compress time. Not magic. Just faster understanding. If you can imagine using a model to understand a market, you can imagine using it to understand an operating environment. The capability doesn’t change because the user wears a uniform instead of a vest.
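To make that concrete, here is a minimal sketch of what such a workflow might look like, assuming the standard Anthropic Python SDK. The documents, the prompt structure, and the model name are all illustrative, not a claim about how any actual operation worked:

    import anthropic

    # Open-source material an analyst already has on hand: news excerpts,
    # public records, social media posts. Nothing classified, nothing exotic.
    open_source_docs = [
        "Excerpt 1: ...",
        "Excerpt 2: ...",
    ]

    # The SDK reads ANTHROPIC_API_KEY from the environment.
    client = anthropic.Anthropic()

    prompt = (
        "Given the open-source documents below:\n"
        "1. Summarize what they collectively establish.\n"
        "2. Reconcile any contradictions between them.\n"
        "3. List the questions an analyst should ask next.\n"
        "4. Identify gaps the documents do not cover.\n\n"
        + "\n\n".join(open_source_docs)
    )

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.content[0].text)

Everything interesting happens in what you feed it, not in the API call itself. That is the entire point: context in, compression out.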
When the AI community responds with moral shock, it rings hollow. Not because every national security use is justified, or because domestic surveillance isn’t a genuine risk. It rings hollow because the same industry is building systems whose explicit selling point is decision advantage, circulating narratives about civilizational stakes, and treating the institutions where those stakes actually get contested as beneath its dignity.
Think Maduro is questionable? Iran could come into focus in minutes, days, or weeks. Decide where you stand on this. And before you virtue signal, remember that thousands of civilians have been killed in the past few months protesting for freedoms, and thousands more may die in the coming days and weeks.
Contradiction and Conclusion
My contradiction may be that I’m criticizing an extraordinary technology company. I wrote this essay with Claude Code whizzing in the background. Someone working with the DoD asked me to dust off my academic research on military supply chains. Could I rebuild decades of mathematical models in hours with Claude Code and Codex? Yes. Nice and neat and in Rust. It’s simply amazing.
So I am not above the contradiction I’m describing. I’m inside it. The difference is that I think the right response to being inside a contradiction is to engage with it and work it forward, not to pretend you can step outside by opting out.
That’s what I want from Anthropic, from the frontier labs, and from the broader engineering culture. Not capitulation. Not blind deference to whatever anyone asks for. Engagement.
Make the next five years not about SWE benchmarks, but about showing up. Not only in the policy forums and the committee hearings and the flashy meetings where these things are discussed, but where they’re actually decided and done. At military bases, inside the intelligence community, across the defense industrial base, and throughout the greater U.S. economy.
Propose the frameworks you believe in and defend them in public. Address the distillation pipeline feeding your capabilities to authoritarian states. Address the agentic infrastructure proliferating with no security model. Address the tens of millions of people whose livelihoods you say are about to change and who have heard nothing from the industry except that it’s coming. Support the people whose job it is to keep the country safe while you debate whether they should have access to your tools.
Right now, the alternative is what we have: private fiat wearing safety’s clothes.
The institutions aren’t going away. The technology isn’t going away. The only question is whether the people who understand both will show up, or whether they’ll keep telling themselves that purity is a strategy.
It isn’t. It never was.