Sending H200s to China: Why the Case Is Clear as Mud
Last week, while the halls of the Pentagon championed GenAI.mil, I walked the halls of Congress and found members fretting over Nvidia selling chips to China.
The Trump administration’s move to let Nvidia sell H200 chips into China – with some guardrails and a big cut of the revenue flowing back to the U.S. government – is exactly the sort of decision that sounds clean in a press conference and gets very murky once you look under the hood.
On one side, the story is easy to tell: a flagship American company, a roaring stock market, and a deal where “China pays America” while we keep the very best chips at home. On the other side, the picture is less flattering: a strategic rival that blends civilian and military technology, clearly wants to reshape the global AI and cloud landscape, and runs an increasingly sophisticated espionage machine that already targets the firms we’re empowering. My basic thesis is straightforward:
Allowing H200 exports to China is strategically ambiguous. There are coherent arguments for fully opening the tap, for letting some volume through under tight conditions, and for slamming the door shut. Each position rests on a different set of beliefs about whether export controls work, how inevitable China’s AI rise really is, and whether we’d rather keep China dependent on U.S. hardware or cut off from it.
The goal here is to do two things. First, organize the main arguments for and against giving China access to H200‑class chips. Second, unpack the beliefs you’d have to hold to land on “full allow,” “partial allow,” or “ban,” and turn that into a simple decision tree. Here’s a rough preview of the decision tree:
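As a rough sketch of that decision tree, consider the following toy Python version. The three belief flags are illustrative simplifications of the axes discussed later in the piece (whether controls work, whether China's rise is inevitable, and which kind of leverage you prefer); they are not an official framework, just a way to make the branching logic concrete.

```python
def h200_policy(controls_slow_china: bool,
                rise_inevitable: bool,
                prefer_dependence_leverage: bool) -> str:
    """Toy decision tree mapping three beliefs to a policy stance.

    The flags are hypothetical simplifications of the belief axes
    in this essay, not a definitive model.
    """
    if not controls_slow_china:
        # If denial mostly reshuffles activity into gray markets,
        # regulated sales dominate: capture revenue and visibility.
        return "full allow"
    if rise_inevitable and prefer_dependence_leverage:
        # Delay is temporary, so trade some flow for lock-in to
        # Nvidia/CUDA plus telemetry and license conditions.
        return "partial allow"
    # Controls work and delay is valuable: hold the hardware line.
    return "ban"


if __name__ == "__main__":
    for beliefs in [(False, True, True), (True, True, True), (True, False, False)]:
        print(beliefs, "->", h200_policy(*beliefs))
```

The point of the sketch is that no single fact settles the question; the output flips entirely based on which upstream beliefs you hold.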
What an H200 actually buys you
Before getting into the politics, it’s worth being precise about the hardware.
Under the earlier regime, Nvidia built the H20 for China, a cut‑down part that is substantially hobbled compared to what leading U.S. labs use. It can do serious work, but it’s not what you choose if you want to train state‑of‑the‑art models at scale. The H200 is a very different animal: a much more capable accelerator, roughly comparable to late‑generation H100s on many large training workloads and far more powerful than the H20. For anyone building dense AI data centers or training large multimodal systems, that performance delta matters.
Above the H200 sit Blackwell and Rubin, Nvidia’s next generations. Those remain off‑limits to China. So the current compromise is essentially: H200s can go, the true frontier stays home.
In practice, H200s give Chinese firms and labs the ability to train larger, more capable models; to do so faster and more cheaply; and to stand up the kind of high‑end data‑center infrastructure necessary to serve those models globally. You don’t use H200s only for chatbots; you use them for more ambitious experiments in agentic AI, complex simulations, and multi‑modal systems where latency and throughput are strategic advantages.
All of this sits inside a broader race. The best current estimates put the U.S. share of global high‑end AI compute at roughly 74 percent, with China around 14–15 percent, up from a much tighter gap just a few years ago. China already has plenty of data, strong universities and big tech firms, plus a growing bench of AI talent. Its domestic AI chips have improved, but still trail Nvidia at the high end. Export controls on cutting‑edge accelerators have been, arguably, the main U.S. lever that clearly slows China’s progress. At the same time, Chinese models like DeepSeek have already approached leading U.S. systems on a number of benchmarks at lower cost, showing how much can be done even on constrained hardware.
And we should admit that “zero chips in China” was never a real baseline. Even before this decision, Chinese entities were getting H100/H200‑class hardware via gray markets, shell importers and rentals on foreign clouds. Investigations into the “AI GPU black market” and Justice Department smuggling cases have already turned up tens or hundreds of millions of dollars in diverted Nvidia chips headed to Chinese buyers, including PLA‑linked labs. So the real choice here is not between a pristine world with no advanced chips in China and a fully open tap; it’s between a leaky denial regime and a more explicit, regulated channel that clearly increases the total flow.
The strongest arguments for exporting H200s
The positive case usually starts from the U.S. industrial base. Nvidia is not just a successful tech company; it has become part of the financial and technological scaffolding of the country. In its last full fiscal year, about 13 percent of Nvidia’s revenue – roughly $17 billion – came from China, including Hong Kong. It is now one of the largest firms in the world by market cap and a heavyweight in major stock indices. By late 2025, Nvidia alone had contributed roughly 20–25 percent of the S&P 500’s total gains in some recent years and accounted for about 7–8 percent of the index’s market cap by itself.
Revenue from China helps fund the next wave of architectures, pays for U.S. data‑center expansion, and supports an ecosystem of suppliers and engineers. If you believe U.S. AI leadership depends on Nvidia out‑investing everyone else – and you discount the security risks – cutting off such a large market looks a lot like self‑harm.
There is also a “better us than them” angle. If the U.S. refuses to sell high‑end chips into China, Beijing will simply double down on domestic AI chips and channel even more subsidies toward Huawei and other national champions. Huawei’s latest Ascend parts, built on constrained domestic fabrication, are not yet at parity with Nvidia’s best – but they’re good enough to power large national training clusters, and Chinese officials are open about wanting to replace foreign GPUs. Meanwhile, Chinese firms can continue to crawl the gray market and rent compute abroad. In that world, the U.S. gives up revenue and influence while still facing strong Chinese AI systems built on a mix of older Nvidia hardware, domestic accelerators and rented compute.
Supporters of exports also point to the difference between smuggling and regulated access. Total bans push activity into the shadows, where Chinese buyers rely on smugglers, shell importers and convoluted cloud setups that are hard to monitor. Legal exports that go through known counterparties and require licenses, audits and telemetry at least give the U.S. government and Nvidia a line of sight. If you assume H200s will end up in China regardless, you might prefer they arrive through channels you can see and, if needed, turn off.
A more strategic version of the argument focuses on leverage through dependence. The goal in that view is not complete denial, but managed reliance. If Chinese hyperscalers and labs are deeply invested in Nvidia hardware, CUDA, and surrounding software and tools, it becomes harder for them to walk away cleanly to domestic alternatives. They remain exposed to future policy decisions in Washington. Forced, immediate decoupling can accelerate the very self‑reliance we worry about; managed dependence buys time. William Blair, for example, estimates that China represents roughly a $50 billion total addressable market for Nvidia’s high‑end GPUs – a market that, if Nvidia cedes, domestic competitors will happily fill.
Layered on top of that is a quiet barter logic. Advanced chips are not the only tool in the relationship. The U.S. still leans heavily on China for a range of critical minerals and processed materials that feed into chipmaking and energy infrastructure, and the Trump administration is simultaneously trying to assemble a “Pax Silica” coalition to reduce that dependence with allies. Beijing has already experimented with export controls on key inputs like gallium and germanium. In that context, limited H200 access is inevitably part of a bigger negotiation that spans minerals, manufacturing equipment, data‑center components, AI safety language and broader trade issues.
And hovering over all of this is domestic politics. It is easy to tell a story where America sells advanced chips, takes a 25 percent cut of the revenue, supports American manufacturing and jobs, and keeps the true frontier hardware at home. For leaders who are watching stock indices and polls more than long‑horizon war‑games, that story has gravitational pull.
To see all of these trade‑offs in one view, it helps to lay them out next to the counter‑arguments:
The strongest arguments against exporting H200s
Now shift the lens and start from a more hawkish view.
From that vantage point, China is catching up or already competitive in data, algorithms, talent and big chunks of infrastructure. The one domain where the U.S. clearly leads is cutting‑edge AI chips and the capacity to manufacture them at scale. Export controls on those accelerators force Chinese labs to make do with weaker or older hardware, limit how quickly they can scale frontier models and delay deployment into military systems. Since 2022, those rules have helped push the U.S. share of global AI compute from roughly 51 percent to 74 percent, while China’s share fell from about 33 percent to 14 percent. If you believe that, loosening controls on H200s isn’t a marginal adjustment; it is blunting the single most effective tool the U.S. currently has.
Civil‑military fusion sharpens this concern. China does not draw a meaningful line between civilian and military technology. Chips shipped to a “civilian” university or tech giant can be loaned, rented or repurposed for PLA research and dual‑use projects in surveillance, targeting, cyber‑operations and logistics. Open‑source reporting has already turned up high‑end Nvidia parts in PLA‑adjacent labs and medical institutions. If you assume every advanced chip inside China is, in practice, a dual‑use asset, any export of H200s starts to look like partial military aid.
The global dimension matters as well. H200s don’t just power models inside Chinese borders; they can support a rival global cloud and model ecosystem. With enough H200‑class compute, Chinese firms can build out large training clusters and AI data centers across regions, offer cheap or subsidized AI services to governments and companies, and effectively export “AI as infrastructure” along something like an AI Belt and Road. Even if the U.S. keeps a narrow edge at the very frontier, this can chip away at the economic and political position of U.S. cloud providers and AI platforms in key markets.
Espionage and insider risk are further reasons for caution. Foundation‑model providers and chip firms have become attractive targets for insider recruitment and sophisticated cyber‑intrusions. We’ve seen employees walk out with model artifacts and chip designs, and state‑linked groups use Western models themselves to plan and execute cyber‑operations. Every expansion of high‑end hardware and model access gives those actors more surface area to work with, including the possibility of training “models on models” to approximate or replicate capabilities.
Finally, from a signaling perspective, the U.S. spent several years telling the world that advanced AI chips are strategic goods whose export to adversaries would be tightly constrained. Reversing that stance for H200s, especially in a way that looks responsive to one company’s interests, tells Beijing that if it waits and applies economic and political pressure, Washington will soften. It complicates efforts to keep allies on board with their own restrictions and weakens the credibility of any future threat to re‑tighten controls.
Seen through that lens, the same table of arguments above reads very differently. The columns don’t change, but the weights do.
Less obvious implications
If this all feels hard to process, that’s because the baseline is already messy and the policy tools are shifting under our feet.
We’re not choosing between a world in which China has zero access to advanced chips and a world in which they have all they want. We’re choosing between a leaky denial regime and an explicit, regulated flow that clearly increases the total number of H200‑class devices in Chinese hands.
At the same time, the control perimeter is moving from chips into models and cloud services. For the last couple of years, most of the policy attention was on hardware: what chips can be exported, at what performance thresholds, to which end‑users. Now regulators are starting to experiment with rules for model weights—especially for the very largest systems—and for cloud‑compute access by Chinese entities. We are early in that transition. No one is entirely sure how effective weight controls and cloud restrictions will prove, how they interact with chip exports, or whether the end state will be a coherent regime or a patchwork of half‑measures.
The third structural issue is your view on inevitability. If you think China’s rise in AI is basically baked in—because of its scale, talent and industrial policy—then the realistic goal is to stay a generation ahead and ensure the U.S. extracts as much benefit as possible along the way. If you think a two‑to‑five‑year lead in frontier capabilities materially shapes deterrence, military balance and economic influence, then delays carry their own strategic value. People with different answers to that question can look at the same facts and come away with very different policy preferences.
What you’d have to believe to pick a side
At this point it helps to stop trading talking points and look instead at the underlying beliefs driving each position.
A lot turns on a few basic questions. The first is whether you think strict chip export controls materially slow Chinese AI and military progress, or whether they mainly reshuffle activity into gray channels and accelerate domestic workarounds. The second is your view on inevitability: do you treat China’s rise in AI as essentially guaranteed, with the game being how far ahead the U.S. can stay, or do you think meaningful delay is still available and worth paying for?
A third axis is the kind of leverage you prefer. Some people want leverage through denial: keep advanced U.S. hardware out of Chinese hands as far as possible. Others want leverage through dependence: let Chinese firms build on Nvidia and CUDA so that Washington retains knobs to turn later. A fourth axis is how central you think Nvidia is to U.S. economic and political stability. If you see it as part of the country’s “economic scoreboard,” you will be much more reluctant to jeopardize its revenue and stock price.
Civil‑military fusion and espionage make up a fifth axis. If you view Chinese “civilian” labels as essentially meaningless and believe espionage is already out of control, you will push hard against any export regime. If you think those risks are real but manageable with enough guardrails, you will tolerate some flow. The sixth axis is institutional optimism: do you believe enhanced security reviews, telemetry and model‑weight controls can actually manage risk, or do you see them mostly as political decoration? The seventh and final axis is how much weight you put on minerals and supply chains. Are you willing to risk confrontation on those fronts to maintain a hard chip line, or are you looking for a broader truce in which chips are one bargaining chip among many?
For policymakers, the abstract debate eventually collapses into a handful of concrete knobs to turn. The first is volume. Are we talking about tens of thousands of H200s, hundreds of thousands, or millions over the next several years? At what point does China’s aggregate compute capacity meaningfully change?
The second is who gets them. Do PLA‑linked universities, state‑owned enterprises and national labs get categorically excluded, or do they end up accessing H200s through corporate fronts, joint ventures and research consortia? How do you define and enforce those boundaries in a system built on civil‑military fusion?
The third is telemetry and control. Do we require hardware‑level location verification and attestation on every exported chip, and if so, who has access to that telemetry—only Nvidia, or U.S. regulators as well? Under what conditions do we reserve the right to limit or disable exported hardware?
The fourth set of knobs connects chips to other issues. Are H200 licenses explicitly or implicitly tied to commitments around critical minerals, espionage norms, cloud‑compute arrangements or AI safety? And finally, how do chip exports interact with other levers like model‑weight controls, cloud‑compute leasing to Chinese entities and open‑source model policy? If we loosen in one area, do we tighten in another, or do we simply hope the system as a whole remains “good enough”?
You can frame the range of plausible policies as a spectrum of scenarios that differ on those knobs:
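One way to make that spectrum concrete is to write each scenario as a setting over the four knobs just discussed: volume, who is excluded, telemetry, and linkage to the broader negotiation. A minimal Python sketch follows; the specific volumes and scenario settings are hypothetical placeholders chosen only to illustrate the structure, not estimates.

```python
from dataclasses import dataclass


@dataclass
class ExportScenario:
    """One point on the policy spectrum, expressed over the four knobs."""
    name: str
    annual_volume: int          # H200-class units per year (illustrative)
    pla_linked_excluded: bool   # categorical exclusion of PLA-linked end-users
    telemetry: str              # "none", "vendor-only", or "vendor+regulator"
    linked_to_broader_deal: bool  # tied to minerals / cloud / safety commitments


# Hypothetical spectrum from full denial to a wide-open tap.
SPECTRUM = [
    ExportScenario("ban", 0, True, "none", False),
    ExportScenario("partial allow", 50_000, True, "vendor+regulator", True),
    ExportScenario("full allow", 1_000_000, False, "vendor-only", False),
]

if __name__ == "__main__":
    for s in SPECTRUM:
        print(f"{s.name}: {s.annual_volume:,} units/yr, "
              f"telemetry={s.telemetry}, linked={s.linked_to_broader_deal}")
```

Writing the scenarios this way makes the debate tractable: disagreements stop being about “allow versus ban” in the abstract and become arguments over specific field values.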
Conclusion
“Should we ship H200s to China?” turns out to be the wrong question. The real questions are:
How many chips are we willing to send, over how many years?
To whom do they go in a system defined by civil‑military fusion and aggressive industrial policy?
Under what conditions and telemetry can we credibly claim to manage the resulting risk?
And what fallback plan do we have if our beliefs about inevitability, leverage and enforcement turn out to be wrong?
If you think export controls don’t really slow China down, that Chinese dependence on Nvidia is a strategic asset, and that the economic upside matters a lot in the here‑and‑now, the Trump H200 deal looks like a reasonable compromise: China pays, Nvidia grows, the U.S. keeps Blackwell and Rubin, and anything truly dangerous can be policed at the level of models and cloud compute instead of chips.
If you think export controls are the only thing that has clearly worked, that every advanced chip in China is ultimately a dual‑use asset, and that the current U.S. compute gap is precious and fragile, the very same deal looks like strategic malpractice: we cash out a hardware lead for a few billion in royalties and a nicer stock chart, and we help build the infrastructure our main rival will use to compete with us.
The hard part is that both stories start from facts that are true. It’s a choice among futures, each of which is plausible depending on what you believe about technology, markets and time.