I am Christopher Kanan, a professor and AI researcher at the University of Rochester with over 20 years of experience in artificial intelligence and deep learning. Previously, I led AI research and development at Paige, a medical AI company, where I worked on FDA-regulated AI systems for medical imaging. Based on this experience, I would like to provide feedback on the proposed export control regulations regarding compute thresholds for AI training, particularly the threshold targeting models trained with 10^26 or more computational operations.
The current regulation seems misguided for several reasons. First, it assumes that scaling models automatically leads to something dangerous. This is a flawed assumption, as simply increasing model size and compute does not necessarily result in harmful capabilities. Second, the 10^26 operations threshold appears to be based on what may be required to train future large language models using today’s methods. However, future advances in algorithms and architectures could significantly reduce the computational demands for training such models. It is unlikely that AI progress will remain tied to inefficient transformer-based models trained on massive datasets. Lastly, many companies trying to scale large language models beyond systems like GPT-4 have hit diminishing returns, shifting their focus to test-time compute. This involves using more compute to "think" about responses during inference rather than in model training, and the regulation does not address this trend at all.
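To make the 10^26 figure concrete, here is a minimal sketch using the common ~6·N·D rule of thumb for estimating dense-transformer training compute (N = parameters, D = training tokens). The rule is a heuristic, and the parameter/token counts below are illustrative assumptions, not figures from the proposed rule:

```python
# Approximate training compute via the ~6*N*D rule of thumb for
# dense transformers (N = parameters, D = training tokens).
# All model sizes below are illustrative assumptions.
THRESHOLD = 1e26  # operations threshold in the proposed rule

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOP per parameter per token."""
    return 6 * n_params * n_tokens

for n_params, n_tokens in [(70e9, 15e12), (400e9, 15e12), (1e12, 30e12)]:
    f = training_flops(n_params, n_tokens)
    print(f"{n_params / 1e9:6.0f}B params, {n_tokens / 1e12:2.0f}T tokens: "
          f"{f:.1e} FLOP ({f / THRESHOLD:.2f}x threshold)")
```

Under these assumptions, today's largest open models sit well below the threshold, while a hypothetical 1T-parameter model trained on 30T tokens would cross it. Note that none of this captures test-time compute, which is the trend described above.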
Even if future amendments try to address test-time compute, the proposed regulation seems premature. There are too many unknowns in future AI development to justify using a fixed compute-based threshold as a reliable indicator of potential risk. Instead of focusing on compute thresholds or model sizes, policymakers should focus on regulating specific high-risk AI applications, similar to how the FDA regulates AI software as a medical device. This approach targets the actual use of AI systems rather than their development, which is more aligned with addressing real-world risks.
Without careful refinement, these rules risk stifling innovation, especially for small companies and academic researchers, while leaving important developments unregulated. I urge policymakers to engage with industry and academic experts to refocus regulations on specific applications rather than broadly targeting compute usage. AI regulation must evolve with the field to remain effective and balanced.
---
Of course, I have no skin in the game since I barely have any compute available to me as an academic, but the proposed rules on compute just don't make any sense to me.
"First, it assumes that scaling models automatically leads to something dangerous"
The regulation doesn't exactly make this assumption. It stifles not only large models but also the ability to serve models via API to many users and the ability to have many researchers working in parallel on upgrading a model. It wholesale stifles AI progress for the targeted nations. This is an appropriate restriction on what will likely be a core part of military technology in the coming decade (e.g., drone piloting).
Look, if Russia didn't invade Ukraine and China didn't keep saying they wanted to invade Taiwan, I wouldn't have any issues with sending them millions of Blackwell chips. But that's not the world we live in. Unfortunately, this is the foreign policy reality that exists outside of the tech bubble we live in. If China ever wants to drop their ambitions over Taiwan then the export restrictions should be dropped, but not a moment sooner.
I rather assumed they were able to re-invent them from scratch through the work of their own scientists. I mean, the US did it before the invention of the transistor, and from what I've heard about the USSR project, espionage mainly helped them learn the critical mass without needing so many test runs. So it doesn't seem implausible that Israel could have done it themselves about 20 years later.
In 1965, NUMEC's owner, Zalman Shapiro, in coordination with Israeli intelligence, diverted 100 kg of 95%-enriched uranium from the facility and shipped it to Israel. Enriching the material to weapons grade is the technically difficult part, which I would think Israel certainly would not have had the budget for, given the size of its economy.
> which I would think Israel certainly would not have had the budget for, given the size of its economy
Hmm.
I'll buy that. I've seen a lot of wildly different cost estimates for separation work units, so I can only guess how much it might have cost at the time.
If it would have otherwise been the full Manhattan Project, I don't even need to guess, you're definitely correct they couldn't have afforded it: https://www.wolframalpha.com/input?i=gdp+israel+1965
https://res.cloudinary.com/dbulfrlrz/images/w_1024,h_661,c_s... (from https://protonvpn.com/blog/5-eyes-global-surveillance).
Israel, Poland, Portugal and Switzerland are also missing from it
I hope someone with a better understanding of the details can jump in, but they are both Tier 2 (not Tier 3) restricted, so maybe there are some available loopholes or Presidential override authority or something. Also I believe they can still access uncapped compute if they go via data centers built in the US.
https://en.m.wikipedia.org/wiki/Import_substitution_industri...
I think the point is that this attempt to block actually makes the opponent stronger. The export restrictions were designed to weaken their competitiveness, not enhance it!
The restrictions blocked ASML, Samsung, and others from trading with China. Now, it seems, China can just replace them.
I'm disinclined to let that be a barrier to regulation, especially of the export-control variety. It seems like letting the perfect be the enemy of the good: refusing to close the barn door you have, because you think you might have a better barn door in the future.
> Instead of focusing on compute thresholds or model sizes, policymakers should focus on regulating specific high-risk AI applications, similar to how the FDA regulates AI software as a medical device. This approach targets the actual use of AI systems rather than their development, which is more aligned with addressing real-world risks.
How do you envision that working, specifically? Especially when a lot of models are pretty general and not very application-specific?
Am I missing something? I am not an expert in the field, but from where I sit, there is literally no barn door left to close at this point, even too late.
The impression I had was of the reversed implication: that a model can't be all that dangerous if it's smaller than this.
Assuming this alternative interpretation is correct, the idea may still be flawed, for the same reasons you say.
These rules intentionally "stifle innovation" for foreigners - this is a feature, not a bug.
I personally think the regulation is misguided, as it assumes we won't identify better algorithms/architectures. There is no reason to assume that the level of compute leads to these problems.
Moreover, given the emphasis on test-time compute nowadays, and that a lot of companies seem to have hit a wall on performance gains from scaling LLMs at train time, I don't think this regulation is especially meaningful.
Applying this to your thoughts about AI: as the efficiency of training improves, the ability to train models becomes commoditized, and those models would no longer be considered advantageous and would not need to be controlled. So maybe setting the export control based on the number of operations is a good idea; it naturally allows efficiently trained models to be exported, since they wouldn't be hard to train in other countries anyway.
As computing power scales, maybe the 10^26 limit will need to be revised, but setting the limit based on the scale of the training is a good idea, since it is actually measurable. You couldn't realistically set the limit based on the capability of the model, since benchmarks seem to become irrelevant every few months due to contamination.
The short-term profits US businesses have been enjoying over the past 25 years came at a staggering long-term cost. The sanctions won't even slow down the Chinese MIC, and in the long run they will cause China to develop its own high-end silicon sector (obviating the worldwide need for ours). They're already at 7nm, albeit at a low yield. That is more than sufficient for their MIC, including the AI chips used there, currently and in the foreseeable future.
b) export controls aren’t expected to completely prevent a country from gaining access to a technology, just make it take longer and require more resources to achieve
You may also be misunderstanding how much money China will spend to develop their semiconductor industry. Sure, they will eventually catch up to the West, but the money they spend along the way won’t be spent on fighter jets, missiles, and ships. It’s still preferable (from the US perspective) to having no export controls and China being able to import semiconductor designs, manufacturing hardware, and AI models trained using US resources. At least this way China is a few months behind and will have to spend a few billion yuan to achieve it.
The way to win against Russia is not via sanctions but rather via destabilizing the regime through guerrilla propaganda. The Russians, the Chinese, and the Soviets before them have always known this. The West is just too slow to catch on.
This is a totally uninformed, vibes-based opinion, but I can't help but feel like the sort of "guerrilla propaganda" you're talking about must be a major factor in the current fracturing of cultural and political discourse in the US.
Now if you'd like to also be informed, start by reading (or reading about) Antonio Gramsci and https://en.wikipedia.org/wiki/Cultural_hegemony
Yes, the guerrilla propaganda is real and has been practiced for decades now... and one of its pervasive traits is that it focuses not only on media but also on changing the minds of students while they are in university, when they are particularly susceptible to influence. It's textbook Gramsci stuff.
Again, sorry for my knee-jerk reaction. I probably had too much or too little coffee, and I'd buy you one to make up for it if I could.
This isn't lost on the authors. It is explicitly recognized in the document:
> The risk is even greater with AI model weights, which, once exfiltrated by malicious actors, can be copied and sent anywhere in the world instantaneously.
Does this affect open source? If so, it'll be absolutely disastrous for the US in the longer term, as eventually China will be able to train open weights models with more than that many operations, and everyone using open weights models will switch to Chinese models because they're not artificially gimped like the US-aligned ones. China already has the best open weights models currently available, and regulation like this will just further their advantage.
Edit: removed exasperated sigh; it does not add anything.
Something like: 10^26 FLOP * 1.5^n, where n is the number of years since the regulation was published.
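A quick sketch of how that cap would grow (the 1.5x annual growth factor is just my suggestion above, not an official figure):

```python
# Proposed escalating cap: 1e26 FLOP * 1.5^n,
# where n = years since the regulation was published.
BASE = 1e26
GROWTH = 1.5  # assumed annual growth factor from the proposal above

for n in range(6):
    print(f"year {n}: cap = {BASE * GROWTH ** n:.2e} FLOP")
# year 0: 1.00e+26 ... year 5: 7.59e+26
```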
Why would you want to automatically increase the cap algorithmically like that?
The purpose of a regulation like this is totally different than the minimum wage. If the point is to keep and adversary behind, you want them to stay as far behind as you can manage for as long as possible.
So if you increase the cap, you only want to increase it when it won't help the adversary (because they have alternatives, for instance).
Edit: maybe 10k pages
It only takes one for a PoC. It only takes a few to establish a service. Spatial and color accuracy is going to be a challenge, but I imagine there already exist methods to correct for that - e.g. printing reference patterns along with the data, so the scanner/digitizer can calibrate to them on the fly.
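For a sense of scale, a back-of-envelope sketch; every parameter here (resolution, bits per dot, error-correction overhead) is an assumption I'm making up for illustration:

```python
# Back-of-envelope: data capacity of one printed page.
# All parameters are illustrative assumptions, not measured values.
DPI = 300                    # assumed print/scan resolution
PAGE_AREA_IN2 = 8.0 * 10.5   # assumed printable area, square inches
BITS_PER_DOT = 3             # e.g., 8 distinguishable ink colors per dot
ECC_OVERHEAD = 0.5           # assumed fraction spent on error correction

dots = DPI * DPI * PAGE_AREA_IN2
usable_bytes = dots * BITS_PER_DOT * (1 - ECC_OVERHEAD) / 8
print(f"~{usable_bytes / 1e6:.1f} MB per page")               # ~1.4 MB
print(f"~{10_000 * usable_bytes / 1e9:.0f} GB in 10k pages")  # ~14 GB
```

Under these made-up parameters, 10k pages gets you into the tens of gigabytes, so whether it's practical depends heavily on the dot density and error rate you can actually achieve.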
China, Russia, and Iran used Internet Explorer too.
And the second level, for some reason, includes (among others) a bunch of countries that would normally be seen as close US allies - e.g. some NATO countries (most of Central/Eastern Europe).
Which apparently might be a good outcome for FOSS operating systems, with national distributions like Kylin.
As a European, I vote for SuSE.
This might be a product of the USA being a gerontocracy.
These folks aren't "forced" to provide "cheap brainpower:" they are offering services at their market rate.
Yeah, this is really a bit insulting.
> Yeah, this is really a bit insulting.
So you're insulted some country or other wasn't included in:
> First, this rule creates an exception in new § 740.27 for all transactions involving certain types of end users in certain low-risk destinations. Specifically, these are destinations in which: (1) the government has implemented measures to prevent diversion of advanced technologies, and (2) there is an ecosystem that will enable and encourage firms to use advanced AI models to advance the common national security and foreign policy interests of the United States and its allies and partners.
?
IMHO, it's silly to get insulted over something like that. Your feelings are not a priority for an export control law.
Taiwan, even though it's a US ally, is only allowed limited access to certain sensitive US technology it deploys (IIRC, something about Patriot Missile seeker heads, for instance), because their military is full of PRC spies (e.g. https://www.smh.com.au/technology/chinese-spies-target-taiwa...), so if they had access the system would likely be compromised. It's as simple as that.
If you think upcoming models aren't going to be very powerful, then you'll probably endorse business-as-usual policies such as rejecting any policy that isn't perfect or insisting on a high bar of evidence before regulating.
On the other hand, if you have a world model where AI is going to provide malicious actors with extremely powerful and dangerous technologies within the next few years, then instead of being radical, proposals like this start appearing extremely timid.
I can't seem to find any information about that anywhere.
Chinese engineers have the capability to get around these silly sanctions - for example, by renting cloud GPUs from US companies to get access to as much compute as they want, or by using consumer-grade compute or their homegrown Chinese CPUs/GPUs.
The USA should actually embrace open source and collaborate, as we are still at the very beginning of the AI revolution.
> Chinese engineers have the capability to get around these silly sanctions - for example, by renting cloud GPUs from US companies
That's why they're also moving towards KYC for cloud providers.
https://www.federalregister.gov/documents/2024/01/29/2024-01...
The list is controversial obviously, but I have to think nations wouldn't be on any tier of the list except the no-restrictions tier if there wasn't something our intel people were worried about. Maybe the concerns are not legitimate, but there's definitely a reason we don't want those nations having access to SOTA AI models.
Let’s see if this survives the next administration. Normally I’d be skeptical, but Musk has openly warned about the “dangers” of AI and will likely embrace attempts to regulate it, especially since he’s in a position to benefit from the regulatory capture. In fact, he’s doubly well placed to take advantage of it. Regardless of his politics, xAI is a market leader and would already be naturally predisposed to participate in regulatory capture. But now he also enjoys unprecedented influence over policymaking (Mar-a-Lago) and regulatory reform (DOGE). It’s hard to see how he wouldn’t capitalize on that position.
Lol what?
The only people who think this are Elon fanboys.
I guess you think Tesla is the self-driving market leader, too. Okay.
[0] https://news.crunchbase.com/ai/startup-billion-dollar-fundra...
WeWork had more funding than xAI.
Not saying xAI will die, but you can’t look at funding.
If anything, a well-funded company with a bad product is more likely to engage in regulatory capture because it limits their risk of exposure to a new entrant with a good product.
Where are any of us gonna get 10,000 H100's?
So it's not like you're overwhelming small players by throwing up regulations. Those players are extremely unlikely to get the compute they'd need to even experiment in the first place.
Usually, the US government tries not to do that.
Regardless of who is currently in the lead, China has its own GPUs and a lot of very smart people figuring out algorithmic and model design optimizations, so China will likely be in the lead more obviously within 1-2 years, both in hardware and model design.
This law is unlikely to achieve its intended purpose, and it will prevent peaceful collaboration between US and Chinese firms, the kind that helps prevent war.
The US is moving toward a system where government controls and throttles technology and picks winners. We should all fight to stop this.
What else can it do? They don’t want to lose their lead, and whatever restrictions they’ve been putting on China et al. haven’t had the exact desired outcomes so far. The idea is to try to slow down the beast that has very set goals (e.g. to become a high-tech manufacturing and innovation center), and to try to play catch-up (like on-shoring some manufacturing).
Personally, I’m skeptical that it will work, because by raw number of hands on deck, they have the advantage. And it’s fairly hard when your institutional knowledge of doing big things is a bit outdated. I would argue a good bet in North America would be finding a financially engineered solution to get Asian companies to bring their workers and knowledge over to ramp us up. Kinda like the TSMC factory. Basically the same thing China did in the 2000s with Western companies.
What lead? The best open-source language models right now are Chinese. DeepSeek is amazing, and so is QwQ.
They absolutely have not. The best open weights LLM is Chinese (and it's competitive with the leading US closed source ones), and around 10x cheaper both to train and to serve than its western competitors. This innovation in efficiency was largely brought about by US sanctions limiting GPU availability.
Moving towards? The US has a pretty solid history of doing a great deal of this (and more) in the 20th century. But so did all of the world's powers... as they all continue to do today. It seems to be an inherent part of being a world power.
I also think that to a great extent, we’re already at war. China has not respected intellectual property rights, conducted espionage against both companies and government agencies, repeatedly performed successful cyberattacks, helped Russia in the Ukraine conflict, severed telecommunications cables, and more. They’ve also built up the world’s largest navy, expanded their nuclear arsenal, and are working on projects to undermine the status of the US Dollar. All of this should have been met with a much stronger and forceful reaction, since clearly it does not fit into the notion of “peaceful collaboration”.
China’s unpeaceful actions aren’t limited to the West. China annexed much of its current territory illegally and through force (see Xinjiang and Tibet). When Hong Kong was handed back, it was under a treaty that China now says is not valid. China has been trying to steal territory from neighboring countries repeatedly, for example with Bhutan or India. They’ve also threatened to take over Taiwan many times now, and may do so soon. They’re about to build a dam that will prevent water from reaching Bangladesh and force them to become subjugated. The only peaceful and just outcome is for those territories to be freed from the control of China - which will require help from the West (sanctions, tariffs, blockades, and maybe even direct intervention).
Even within China, the CCP rules with an iron fist and violates virtually all the principles of free societies and classical liberal values that we hold in the West. I don’t see that changing. And if it doesn’t, how can they be trusted with more economic and military power? That’s why I don’t think we should seek peaceful collaboration with China. We just need smarter strategies than this hasty AI declaration.
I would take those subtle means over the blatant invading that America has been doing.
Regarding floods - if Bangladesh wants an upstream dam, why aren’t they included as a decision maker on whether the Chinese dam goes ahead? Clearly this is because they would say no to it. The issue isn’t floods - it’s that China can withhold water for drinking and irrigation and threaten the country with starvation and famine. It’s a huge national security threat.
https://www.federalregister.gov/documents/2025/01/15/2025-00...
The ML stuff they're worried about takes a giant data center to train and an unusually beefy computer even to run. The weights are enormous and the training data even more enormous. Most of the people who have the models, especially the leading ones, treat them as trade secrets and also try to claim copyright in them. You can only recreate them if you have millions to spend and the aforementioned big data center.
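For a rough sense of the sizes involved, a minimal sketch; the parameter counts and fp16 storage assumption are illustrative, not any particular model's specs:

```python
# Rough storage footprint of raw model weights.
# Parameter counts and 2-bytes-per-parameter (fp16/bf16) storage
# are illustrative assumptions, not specific models' figures.
def weights_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Weight size in GB, assuming 2 bytes per parameter (fp16/bf16)."""
    return n_params * bytes_per_param / 1e9

for n in (7e9, 70e9, 400e9):
    print(f"{n / 1e9:4.0f}B params -> ~{weights_gb(n):,.0f} GB of weights")
# 7B -> ~14 GB; 70B -> ~140 GB; 400B -> ~800 GB
```

So even just moving the weights of a frontier-scale model around is a hundreds-of-gigabytes proposition, before you get to the training data.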
Now, consider this: the Palm [1] couldn’t even create an RSA [2] public/private key pair in “user time.” The pace of technological advancement is astonishing, and new techniques continually emerge to overcome current limitations. For example, in 1980, Intel was selling mathematical coprocessors [3] that were cutting-edge at the time but would be laughable today. It’s reasonable to expect that the field of machine learning will follow a similar trajectory, making what seems computationally impractical now far more accessible in the future.
[1] https://en.wikipedia.org/wiki/Palm_(PDA)
There should be a federal regulation about that.
I don’t see how we can assume it will be enacted at all.