1. Early 2025: Identify Federal sites for AI data centers, clean energy facilities, and geothermal zones. Streamline permitting processes and draft reporting requirements for AI infrastructure. Develop a plan for global collaboration on trusted AI infrastructure.
2. By Mid-2025: Issue solicitations for AI infrastructure projects on Federal sites. Select winning proposals and announce plans for site preparation. Plan grid upgrades and set energy efficiency targets for AI data centers.
3. By Late 2025: Finalize all permits and complete environmental reviews. Begin construction of AI infrastructure and prioritize grid enhancements.
4. By 2027: Ensure AI data centers are operational, utilizing clean energy to meet power demands.
5. Ongoing: Advance research on energy efficiency and supply chain resilience. Report on AI infrastructure impacts and collaborate internationally on clean energy and trusted AI development.
I am quite skeptical in this regard. :P
I have some sympathy for certain domestic capabilities (e.g. chip fabrication) but this "AI" bubble cross-infecting government policy is frustrating to watch.
I think, though, that even if LLMs turn out to be a dead-end and don't progress much further... there are a lot of benefits here.
One of the US's key strategic advantages is brain drain.
We are one of the world's premier destinations for highly educated, highly skilled people from other countries. Their loss, our gain.
There are of course myriad other countries where they could go, many of them more attractive than the US in various ways. Every country in the world is in a sense competing for this talent.
I think the brain drain has peaked. Many people here in Europe don't think much of the US anymore, while 20-30 years ago it was THE place to go.
I'm sure many countries like India and China, for whom the US still might be somewhat attractive, are going to go the way of Europeans.
Speaking as an immigrant myself, so long as there's still noticeable wealth disparity, people will make the jump. The other aspect that makes the US especially attractive is its family immigration policy: people generally want their family to join them eventually, and the US has an unusually large allotment for that compared to many other countries.
The real fear should be that people won't want to come. Already, Chinese international students are at a break-even point when weighing the US against going back to China. Who wants to deal with all the bureaucracy and hatred when they could just go back and work for DeepSeek?
Yeah it sounded like a gift to nVidia.
My prediction was that nVidia would ride the quantum wave by offering systems to simulate quantum computers with huge classical ones. They would do that by asking the government to fund such systems for "quantum algorithm research" since nobody really knows what to use QC for yet.
This move primes that relationship using the current AI hype boom.
So look for their quantum simulation-optimized chips in the near future.
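For context on why simulating quantum computers calls for huge classical machines: a full statevector simulation stores one amplitude per basis state, so memory grows as 2^n with qubit count. A rough back-of-the-envelope sketch (assuming 16 bytes per complex128 amplitude, which is the usual choice in classical simulators):

```python
# Memory needed for full statevector simulation of n qubits,
# assuming one complex128 (16-byte) amplitude per basis state.
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    gib = statevector_bytes(n) / 2**30
    print(f"{n} qubits -> {gib:,.0f} GiB")
```

Around 45-50 qubits the memory alone outgrows the largest single machines, which is exactly the regime where "quantum algorithm research" would mean government-funded GPU clusters.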
GPU, GPGPU, crypto, ray tracing, AI, quantum. Nvidia is a master at milking dollars from tech fads.
Because if they don't do the bad thing first, some bogeyman might become better at it than they are. Same logic that gave us the Manhattan project.
Or try to make them have a heart attack by making a digital twin of them which synchronizes their sentiment, smart watch health data, and man-in-the-middling all of their digital conversations with creepy GenAI? Our adversaries might be doing it, so line up some fresh specimens. Come on bruh it's the future, you gotta think bigger.
"In I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.[4]"
We are in the race for a better LLM*, deliberately disguised as a race to the singularity, because that's what gets investor-dollars and fame.
* And anything else close enough to ride on their hype-coattails
Let's make the leap of faith that we can improve our AIs to actually understand the code they're reading and suggest improvements. Current LLMs can't do it, but perhaps another approach can. I don't think this is a big leap. Might be 10 years, might be 100.
It's not unreasonable to think there is a lot of cruft and optimization to be had in our current tech stacks allowing for significant improvement. The AI can start looking at every source file from the lowest driver all the way up to the UI.
It can also start looking at hardware designs and build chips dedicated to the functions it needs to perform.
You only need to be as smart as a human to achieve this. You can rely on a "quantitative" approach because even human-level AI brains don't need to sleep or eat or live. They just work on your problems 24/7, and you can have as many as you can manufacture and power.
I think having "qualitative" superiority is actually a little easier, because with a large enough database the AI has perfect recall and all of the world's data at its fingertips. No human can do that.
Or is it more reasonable to suppose that 1) all those improvements might get us a factor of 10 in efficiency, but they won't get us an intelligence that can get us another factor of 10 in efficiency, 2) each doubling of ability will take much more than doubling of the number of CPU cycles and RAM, and 3) growth will asymptotically approach some upper limit?
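The three scenarios above can be made concrete with a toy model: what the trajectory looks like when gains compound, when each doubling of ability costs more than a doubling of resources, and when growth approaches a hard ceiling. A hypothetical sketch (the rates and the ceiling K are illustrative assumptions, not measured values):

```python
# Toy model of a self-improvement loop: each step adds gain(current_ability).
def trajectory(steps, gain):
    a, out = 1.0, []
    for _ in range(steps):
        a += gain(a)
        out.append(a)
    return out

# Scenario 1: gains proportional to current ability -> exponential "explosion"
explosive = trajectory(30, lambda a: 0.5 * a)

# Scenario 2: each doubling costs more than double -> diminishing absolute gains
diminishing = trajectory(30, lambda a: 1.0 / a)

# Scenario 3: hard upper limit K -> logistic curve, asymptotically approaching K
K = 100.0
capped = trajectory(30, lambda a: 0.3 * a * (1 - a / K))
```

After 30 steps the first trajectory is in the hundreds of thousands, the second is still under 10, and the third has flattened just below K. Which gain function actually describes recursive self-improvement is the whole open question.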
Then when the earth is used up, you look at space. Space travel is a lot easier because you don't need to keep a meat bag alive.
I think as soon as you can get an AI to break a task down into smaller tasks and make itself a todo list, you have the autonomy. You just kick it all off by asking it to improve itself. It doesn't have to "want" anything, it just needs to work.
In the macro?
The overall goal is keeping the US competitive in technology. Both in terms of producing/controlling IP as well as (perhaps even more crucially) remaining a premier destination for technologists from all over the world. The cost of not achieving that goal is... incalculable, but large.
Whether or not this is a good way to achieve that goal is of course up for debate.
Note that this isn't a hypothetical, either. Israel is already using AI to pick targets. Ukraine is already using AI-controlled drones to beat jamming. There's no indication that either one intends to stop anytime soon, which tells the others that the tech is working for this purpose.
There is already AI flying F-16's.
AI targeting systems.
AI surveillance
I mean, sky's the limit.
People assume gas plants are the fastest and easiest to build and power, but you have to get the natural gas to the plant. Nat gas pipelines take years to build, and there are also gas turbine supply issues. Off-grid solar + battery solves all of these.
If you could identify, say, a defunct aluminium smelting plant?
It'll probably have access to 30 megawatts of power. You'd have to give nVidia quite a lot of money to get enough GPUs to consume that.
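To put rough numbers on that: with illustrative assumptions of ~700 W per modern datacenter GPU, a 1.4x overhead factor for cooling and networking, and ~$30k per accelerator (all ballpark figures, not vendor specs), a 30 MW site pencils out like this:

```python
# Back-of-the-envelope: how many GPUs can a 30 MW site power, and what
# would they cost? All per-GPU figures are illustrative assumptions.
SITE_POWER_W = 30e6       # 30 MW of available power
GPU_POWER_W = 700         # roughly an H100-class accelerator at full load
PUE = 1.4                 # overhead factor for cooling, networking, etc.
GPU_PRICE_USD = 30_000    # ballpark price per accelerator

gpus = int(SITE_POWER_W / (GPU_POWER_W * PUE))
capex = gpus * GPU_PRICE_USD
print(f"~{gpus:,} GPUs, ~${capex / 1e9:.1f}B in GPUs alone")
```

So "quite a lot of money" is on the order of a billion dollars in chips before you've paid for the building or the power contract.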
Remember it's federal land, so they can't be held to state permitting or building codes, etc., only what they choose to follow (i.e. if their agency adopted it explicitly).
Having seen lots of datacenters constructed over the years, the bureaucracy part is tractable if they want it to be, because they can mostly ignore it.
So for me it breaks down like:
Building construction, it could be done.
Power provision - hard to say without knowing the sites. Some would be a clear yes, some a clear no. Probably more nos than yeses.
Filling it with AI-related anything at a useful scale - no, the chips don't exist unless you steal them from someone else, or you are putting in older surplus stuff.
I.e.: making a new data center is easy. Making new power plants quickly - not so much. But hey, at least there's some renewed political will; better than nothing.
As another data point, Apple has been doing 100% renewables for their DCs since 2014. A wind farm can definitely be built in months, and energy companies will always follow the money. Site selection will definitely take energy availability into account as well.
Couple concerns:
- I am loath to believe in silver bullets. The executive branch seems to believe that investing in AI (note: the order, despite its extensive definitions, leaves "artificial intelligence" undefined) is the solution to US global leadership, clean energy, national defense, and better jobs. Rarely if ever is one policy a panacea for so many objectives.
- I am skeptical of government "picking the winners". Markets do best when competitive forces reward innovation. By enforcing an industrial policy on a nascent industry, the executive may just as well be stifling innovation from unlikely firms.
- I am always worried about inducing a _subsidy race_ whereby countries race to subsidize firms to gain a competitive advantage. Other countries do the same, leading to a glut of stimulus with little advantage to any country.
- Finally, government bureaucracy moves slowly (some say that's the point). What happens if a breakthrough innovation in AI radically changes our needs for the type, size or other characteristic of these data centers? Worse still, what happens if we hit another AI winter? Are we left with an enormous pork barrel project? It's hard to envision the federal government industrial policy perfectly capturing future market needs, especially in such a fast moving industry as tech.
Do they know something more than we do with regard to the efficacy of current or soon-to-come AI? Or is it purely a speculative business/economic move?
> require adherence to technical standards and guidelines for cyber, supply-chain, and physical security for protecting and controlling any facilities, equipment, devices, systems, data, and other property, including AI model weights
> plans for commercializing or otherwise deploying or advancing deployment of appropriate intellectual property, including AI model weights
The idea that this technology carries existential risk is how OpenAI and others generate the hype that generates investment.
It's currently quasi-illegal in the US to open source tooling that can be used to rapidly label and train a CNN on satellite imagery. That's export controlled due to some recent-ish changes. The defense world thinks about national security in a much broader sense than the tech world.
See https://www.federalregister.gov/documents/2020/01/06/2019-27...
Siri, use his internet history to determine if he's a threat and deal with him appropriately.
https://en.m.wikipedia.org/wiki/AI-assisted_targeting_in_the...
> AI can process intel far faster than humans.[5][6] Retired Lt Gen. Aviv Kohavi, head of the IDF until 2023, stated that the system could produce 100 bombing targets in Gaza a day, with real-time recommendations which ones to attack, where human analysts might produce 50 a year
Putting them on weapons so they can skip the middle man is the next logical step.
The Trump administration said that they believed Schedule F would apply to roughly 50,000 existing career positions. However, many think tanks and union members believe that Schedule F could be interpreted much more broadly and could cover well over 100,000 positions.
If the Trump administration revives the Schedule F order, it could mean very significant changes for many career bureaucrats.
[0] - https://www.govinfo.gov/content/pkg/FR-2020-10-26/pdf/2020-2... [1] - https://www.whitehouse.gov/briefing-room/presidential-action...
I share your hope that the incoming Trump administration will uphold the usual norms of hiring & firing career federal employees!
They lost
The committee dumps their report and disbands
----------------
Trump's 1776 Commission for redesigning history education released its report days before Biden's inauguration. They published it as a PDF linked from the White House website, and days later it was gone. Dust in the wind.
I think it's more like "we don't want our AI/rocket engines used in ways we'd feel awful about" or "used in ways that hurt us"
It's not really a question of growing or destroying other nations' trust.
It contains a lot of good stuff, also on geothermal and nuclear; long term storage; improving power transmission infrastructure; attempting to address the impact of AI on electricity rates for regular consumers; improving data transparency and communications of interconnections; improving the permitting flow for critical (clean) infrastructure; and a bunch of other stuff.
The likelihood of it all surviving the incoming administration seems low, but given how it aligns with already-present structural trends (and downright critically important needs, in terms of power infrastructure at least), there's a good chance some parts will.
It's more an energy policy EO than an AI EO, and it's both ambitious and objectively positive for combating climate change. Maybe we could at least read the damn thing before filling the comments with low effort cynicism?
Please keep flamebait, including partisan flamebait, off HN. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
The link is helpful thank you!