At least that's how it was done back in the "golden years" post-WW2, during which inequality was quite stable, etc.
(Though I was thinking more of research/planning vs. the actual mass production you're talking about. Especially as this discussion is about such a case too, considering it's software.)
Imagine if the government said no food stamps, we'll just run our own grocery stores to provide for the less fortunate and we'll hire the best people.
You see the difference? Do you really think government run corporations would be able to design better military defense systems?
Probably more understandable for Meta, since they've been leaving the B2B space since Workplace was sunset. Amazon losing out on this is pretty rough for AWS, though.
https://help.kagi.com/kagi/ai/llm-benchmark.html
Nova Pro is worse than Llama 3 70B
AFAIK AWS are pushing pretty hard with GovCloud these days.
Anecdotal based on industry experience, no citations.
Minority Report takes place in 2054... Philip K. Dick might have been onto something.
I don’t think anyone has even seriously proposed using them for weapons targeting, at least in the current broad LLM form.
If they are slow (2x as slow on a cruise missile or drone SoC) and are wrong all the time, then why would they even bother? They already have AI models for visual targeting that are highly specialized for the specific job, and even that's almost entirely limited to very narrow vehicle or ship identification, which is always combined with existing ballistic, radar, or GPS targeting.
Buying some LLM credits doesn’t help much at all there.
Too much of AI gets uncritically packaged with these hand wavy FUD statements IMO.
Which is obviously stupid. So if stupid people are using these things in stupid ways, that seems bad.
If using LLMs for grant classification is like trying to drive a car non-stop (including not stopping for gas) from NY to LA, stuffing LLMs into weapons is like trying to drive that same car from NY to London. They're just not the proper kind of tool for that, and it's not the same class of error.
This is people's money, and people benefit from competition in the market
Funding research is, by definition, less cut-and-dry as to what we should pay for; thus having an agenda is not always bad and might even be good. I am using an "agenda" not in a narrow political sense, but including positions like "we should be funding space comms / drone networks / real-time soldier health monitoring / whatever because industry is not building what we think we will need in a few years".
But being somewhat exposed to the waste of DoD procurement I am personally vehemently against inserting such ideology into procurement decisions. Those should be money-based. Get what you need at the least cost to the taxpayer. If you do not know what you need, think harder or invite experts or do a study (and publish it so people writing it know they are associated with those decisions) before paying billions for questionable junk. My 2c.
The govt already has various programmes to help promote small business contractors in US defence. This is not a programme; it's a definitive project that has a specific set of (admittedly vague) objectives in mind. It's more efficient for the taxpayer for these to be accomplished when the funding is consolidated to a few entities for a 50% success rate, than to 20 different entities for a 5% success rate.
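The consolidation argument can be put in simple expected-value terms. A toy calculation, using the illustrative 50% and 5% success rates from the comment above (these are the comment's numbers, not data):

```python
# Expected number of successful efforts when the same pot of money is
# split across a few well-funded vs. many thinly-funded contractors.
# Success probabilities are the comment's illustrative figures.

def expected_successes(n_entities: int, p_success: float) -> float:
    """Expected count of successes, assuming independent outcomes."""
    return n_entities * p_success

consolidated = expected_successes(4, 0.50)  # a few entities at 50%
dispersed = expected_successes(20, 0.05)    # 20 entities at 5%

print(consolidated, dispersed)  # 2.0 vs 1.0 expected successes
```

Of course, the whole argument rests on the assumption that concentrating funding actually raises the per-entity success probability; the 50%/5% figures bake that in.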
DoD is experimenting with LLMs and is using multiple of the top providers in the space… just like every other tech company is doing. Everyone I know is coding with Claude, Gemini, or GPT and my experiments with Grok 4 have easily been as good.
If this were an innovation fund, à la what Canada likes to waste money on (where the government pretends to be a really bad VC), I'd at least understand these critiques.
Even then, these companies aren't doing research into LLMs, they're just wrapping the endpoints and creating some abstractions.
This admin is about graft and shakedowns. Just like the implosion of science, the companies that exist due to smallish federal contracts for obscure tech and speculative investments are toast.
The failure rate for startups is much higher than 90%. And there’s the additional complexity of how do you pick which 20 such startups get the cash.
On the picking: it’s really not hard to search for AI companies and pick 20. In fact there are government programs that invest in startups so clearly it’s doable.
That's not how corruption works
> That's 10 solutions instead of 1 -- statistically one of them will be a massive breakthrough?
The statistic is that 10% of startups make a massive breakthrough? Would love to see some work that comes remotely close to replicating that! Startup investing would be trivially easy.
Everyone says 1 out of 100 makes it big but the top 5-10% of a portfolio is still substantial. If we’re only giving the money to companies with revenue the odds of success are likely improved.
Startup investing is trivially easy. You give money to good companies and founders. There’s just a bunch of BS that gets in the way. Like giving massive money to big corps that don’t need it instead of startups that do.
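The portfolio claims in this exchange can be made concrete with a toy power-law distribution of outcomes. The return multiples below are invented for illustration, not market data:

```python
# Toy power-law portfolio of 100 equal checks: most positions return
# ~0x, a thin tail carries the fund. Multiples are made up.
returns = [0.0] * 85 + [1.0] * 10 + [3.0] * 4 + [50.0]

portfolio_multiple = sum(returns) / len(returns)              # 0.72x overall
top_decile_share = sum(sorted(returns)[-10:]) / sum(returns)  # ~93% of returns

print(portfolio_multiple, top_decile_share)
```

In this toy setup the top 10 positions account for roughly 93% of all returns, which is the shape of the "top 5-10% of a portfolio is still substantial" claim, even while the portfolio as a whole loses money.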
Anthropic - same as above
xAI - same as above
CoreWeave - Doesn't make LLMs
Glean - Doesn't make LLMs (wow, this startup investing thing might be harder than you thought!)
Perplexity - Has fine-tuned Llama models, AFAIK. Maybe you think Meta should've gotten the nod from DoD as well?
PlayAI - AFAIK only voices
Cohere - Not sure if they're Llama-based or otherwise
Cyera - Doesn't make LLMs
Replit - Doesn't make LLMs
Windsurf - Doesn't make LLMs
Mistral - Does make LLMs, you got one! Is French, though.
Anysphere - They make an IDE called Cursor
Scale - Doesn't make LLMs, basically a Meta subsidiary (you really must have wanted Meta to get the nod too!)
Harvey - Legal focus, not general
Thinking Machines - Mira Murati's company, started just 5 months ago, no public products. Definitely doesn't fit your definition of "has revenue"
Helsing - Hadn't heard of them; they're German.
Cluely - LOL
Suno - If the DoD gets into music generation this would be a great choice.
Clay - Don't know them, doubt they have LLMs.
Crunchbase - lol is correct
Lubega Geoffery - No idea
Caris Life Sciences - Life sciences doesn't sound right!
C3 AI - Scam
Runway - Media generation, not general use
LangChain - Doesn't make LLMs
Rigetti Computing - Dude, come on. They're a quantum computing company
Cowbell - Don't know them, but a google shows they're an insurance company lol
Almost all the rest don't even have anything to do with AI. So, all in all, a nearly complete failure at suggesting even close to 20 alternatives for the DoD to invest in. Your answer didn't even hit the US companies that do have some alternatives: Meta, MSFT, AMZN, maybe SSI?
Helsing is a military AI company [0] trying to make the first Terminator movie a reality in the name of democracy.
EDIT: added link.
I understand the sentiment of creating a healthy market, but only a handful of companies can create general-use LLMs; most of the rest are just wrappers or small fine-tuned models for specific use cases.
Looking forward to hearing about your billion dollar VC fund.
My fund will never be $1B but that’s fine :)
I don't recommend it for anyone doing serious work. Use the Gemini family, o3, and Claude if you want to gsd. The DoD made the correct call IMO. Kimi K2 is also potentially interesting for non-defense purposes, but I haven't spent enough time with it yet.
> My fund will never be $1B but that’s fine :)
It's "trivially easy" for you and 10% of your investments are expected to have "massive breakthroughs". Your strategy of "give money to good companies and founders" should easily enable you to reach $1B AUM!
but yeah that list was very silly anyways
This is a kleptocracy but with extra steps. People are unfortunately numb to it.
“The awards to Anthropic, Google, OpenAI, and xAI – each with a $200M ceiling – will enable the Department to leverage the technology and talent of U.S. frontier AI companies to develop agentic AI workflows across a variety of mission areas.”
(https://www.ai.mil/Latest/News-Press/PR-View/Article/4242822...)
In theory if it’s just labor with some profit mixed in, then you might be looking at 600 employees for each company.
I doubt it is just labor. The quote says a $200 million ceiling, so maybe a time and materials (T&M) contract? It's a ceiling, so it's not like they earn or are guaranteed $200M.
Has to include token or cloud computing time too. Which Google owns and can amortize themselves since it’s a capital asset to them. I don’t know much about the cloud computing background of Anthropic or if they are using Azure or AWS.
I think my original point is still valid: it's not a lot when you look at it.
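The 600-employee figure a few comments up is easy to sanity-check: it implies a fully loaded cost of roughly $333k per person-year. Pure arithmetic, with no knowledge of the actual contract structure:

```python
# What per-person annual cost does a $200M ceiling imply if it were
# all labor? (Back-of-envelope only; the award is a ceiling, not a
# guaranteed spend.)

ceiling_usd = 200_000_000
headcount = 600

cost_per_person_year = ceiling_usd / headcount
print(round(cost_per_person_year))  # 333333
```

~$333k fully loaded (salary plus overhead, benefits, and profit) is in a plausible range for contractor labor, which is presumably how the 600 figure was derived; swap in a higher burdened rate and the implied headcount shrinks accordingly.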
"OpenAI Public Sector LLC, San Francisco, California, has been awarded a fixed amount, prototype, other transaction agreement (HQ0883-25-9-0012) with a value of $200,000,000. Under this award, the performer will develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains. The work will be primarily performed in the National Capital Region with an estimated completion date of July 2026. Fiscal 2025 research, development, test and evaluation funds in the amount of $1,999,998 are being obligated at time of award. Office of the Secretary of Defense Chief Digital and Artificial Intelligence Office, Washington D.C., is the contracting activity. "
It's a prototype and it's FFP. Only about $2M has been obligated to them so far. That equates to about 1-2 employees for a year.
Anthropic's award "is expected to mirror OpenAI's structure: a similar token obligation at signing and the rest released via milestones."
All these frontier AI OTAs follow the same pattern: "up to $X million" ceilings, with actual funding released in phases as projects progress. This mirrors what Palantir got last year.
Link: https://www.nextgov.com/defense/2024/06/pentagons-ai-office-...
So does this mean all the web designer, web developer, and many other white-collar jobs will now be done by one such professional using AI? Where XYZ used to require ten people to get the job done, now only one person who uses AI gets the tasks done (tasks that those ten used to use software applications to complete). All the while, a few hundred Ruoming Pangs make more money than God, and their work further helps kill white-collar jobs.
Is anyone else concerned about this? I am a federal worker, indirectly, yet per some news today I'm not sure how much longer I will be, or whether it's wise to go look for another web design/developer/UX researcher position (I think the last is the safest of the three, since you're talking to people). There are now throngs of others to compete against, including competing against AI itself for fewer jobs.
this is simply a loyalty payoff for supporting the current admin. same as the other tech companies that got these contracts.
The drama where he outed the president as a pedo, deleted it, and went back to boot licking?
They should, but businesses owned by government employees should be excluded because it's too easy to corrupt the process. In fact, they have explicit rules about not doing that.
The merits? We just had, in the last _week_, a huge scandal where Grok was spewing racist, pro-Hitler content, even calling itself MechaHitler.
For $200M, Google will open an account and send an email that says:
Your account is ready; there is $40M left on the retainer. We can code up some email template for $40M if you want.
And from https://digital-strategy.ec.europa.eu/en/policies/european-a...: “Both the Horizon Europe and Digital Europe programmes will invest €1 billion per year in AI.”
https://www.theguardian.com/us-news/ng-interactive/2025/may/...
At least we have a real example of landable rockets.
(Only partially joking here)
Before the national security narrative took over, the main argument was about "safe" AI, where releasing models as open weights was considered "not safe." Now that no major US AI players release premium open-weights models, the "safety" narrative isn't needed anymore—so cooperating with the US military is feasible again.