Also, since I was older I feel like I was able to get away with those redefinitions a lot more often…
For better or worse, the government is the one who audits, and it has its own internal systems for self-audits. So no one except them tells them what they can or cannot do. The government would never put itself in a position where civilians died because Amodei didn't like the vibe of the case being worked.
In a way it's wild that people are upset that the government didn't put a billionaire megacorp CEO in the driver's seat of intelligence.
you're missing "laundering the responsibility" approach - find a lawyer who writes that the thing is legal in his opinion, and voila.
However, if they do sell to the government, they shouldn't have some sneaky way to exert control over decision making using their products. We're a country of laws, and for better or for worse, these laws are made by elected officials and those appointed by elected officials.
Why an American company wouldn't want American defense to have the most capable tools at its disposal is a different matter altogether, but here we are.
why not? Many companies have all sorts of rules you agree to when using their products, including rules against many legal ("lawful") things. Are you saying that the government as a client should be unbound by contractual obligations that apply to other clients?
Trump implemented tariffs he wasn't allowed to immediately, he started a war he probably wasn't allowed to in order to (allegedly) distract from associating with a pedophile, he wrote an executive order trying to undo the fourteenth amendment, he has actively been abducting and imprisoning lawful residents (and even citizens!) and actively pushed for racial profiling to do so.
If a company feels like the government will simply rewrite the laws in order to advance any kind of political whim (including to be weaponized against that very company!), it's not wrong or even weird for them to want to add safeguards to their product.
To be clear, this isn't weird or uncommon. Lots of the stuff you agree to in an EULA isn't about preventing you from doing things that are "illegal".
If this were 3M making nasty stuff for Northrop to put in bombs and drop on brown people or Exxon scheming up something bad in Alaska or bulldozing a national park for solar panels or some other legacy BigCo doing slimy things that are in the interests of them and the government but against the interest of the public they'd have 40yr of preexisting trade group publications, bought and paid for academic and media chatter, etc, etc, that they could point to and say "look, this is fine because the stuff we paid into in advance to legitimize these sorts of things as they come up says it is" though obviously they'd use very different words.
The career officials in the Obama FTC started proceedings for an antitrust lawsuit against Google over a decade ago.
The political appointees (of both parties) shut it down.
It seems to me that regulatory capture has been working for Google for some time now.
It's basically impossible to get off the ground competing against Google when 30-40% of people are just freeloading off your service, and 80-90% think the internet is an ethereal realm that everyone could have ad- and subscription-free access to if we could only agree to starve these greedy middlemen.
For search, Kagi has had a growing fanbase for a couple years now, but let's take things that have been easy to get for free for decades: Movies.
People have been able, with relative impunity, to torrent movies for free for a very long time. It's not hard, and the only way you're paying for it is ads for hot MILFs in your area. And yet, despite this having always been an option, Netflix and Hulu and Disney+ and HBO Max have somehow managed to build fairly successful businesses selling movies that could have been pirated.
I could get YouTube ad-free with an ad blocker, but I pay for YouTube Premium. I could get all my music for free with Redacted, but I use YouTube Music, or I buy CDs. I could torrent video games, but I just buy them off Steam or GOG.
This isn't new either; there were thousands of free forums on the internet in the late 90s, yet people still bought accounts on Something Awful for quite a while (and indeed still buy accounts, though in much lower numbers).
We can certainly argue about how much value these companies are providing, and about how annoying it is that there are a million different streaming services now, but my point stands: people do pay for things on the internet.
We don't have to accept that companies need to sell all our data. We don't have to accept being bombarded with ads. We don't have to accept that people won't pay to use services.
Most people (at least the people I've talked to, which admittedly is somewhat of a lefty bubble, but I think even more generally) agree that companies getting to or close to "monopoly" status is a pretty bad thing, and that they should be broken up. Political candidates get a lot of social credit for claiming that they're going to do exactly that. The moment they actually get into a position where they could do something about it, they suddenly remember who their campaign contributors are, and can then invent reasons to avoid actually solving any of these problems.
Very occasionally we have successes in this field, like the breakups of Standard Oil and AT&T, but of course both of these became somewhat toothless since we basically allowed the pieces of both companies to re-acquire each other and recreate the same problems again.
There are similar reasons why politicians will occasionally push for regulations barring themselves from investing in companies their policies affect, which somehow never manage to get through.
Politicians are very rarely punished for breaking political promises, but often rewarded for making the promises. They are also rewarded by their corporate overlords for breaking these promises.
There are very real concerns when you break up a company, though. Rockefeller's wealth shot up a lot when Standard Oil was broken up. That could easily turn a "politician out to get the big companies" into a "politician making billionaires richer."
I wasn't around during the breakup, but my parents told me that phone service got considerably better and cheaper after the AT&T breakup, which makes enough sense to me: if a consumer can drop you for someone else, you have a reason to try and compete on service and/or price.
My friend, this paragraph needed some periods. I could not follow what you were trying to say - but it seemed interesting enough to consider retyping.
I read it twice (admittedly quickly) but couldn't grasp the point even though I felt like it was there.
If this were a traditionally evil company, the work to legalize the evil things would have started forty years ago.
Any AI researcher who continues to work here is morally compromised.
For a long time, and probably still, it was legal for the US to torture enemy combatants. It was never ethical.
(I take "moral framework" to mean a principled stance that gives objective grounding for a moral judgement. I agree that we can come to a moral judgement without putting it through a systematic and discursive defense, and I reject the notion that there are many moralities or that they are arbitrary, but it is also true that diverging conceptions of the basis of morality will frustrate agreement. Stopping at personal moral judgement does not lend itself to fruitful dialogue and understanding, as it constraints the domain of what is intersubjectively knowable.)
Because the US government currently believes it is legal to blow up civilian drug traffickers and wage war without congressional approval. So at some point, yes, collaboration is immoral.
>Any AI researcher who continues to work here is morally compromised.
You’re looking for the least defensible, worst interpretation of their comment.
But, “…doing this kind of work with the federal government.” is added context that was not there and is based on your own interpretation.
The language of the parent comment charges that simply working at a company that is engaging in this makes one complicit in an immoral act, and the complicity itself is immoral. I disagree with all of that.
In some ways worse than bombing the school was the effort to implicitly deny it. The school was near a military facility, and itself was a military facility in the past. US intelligence screwed up. They should have simply acknowledged what happened and why. Their response just reeked of cowardice and malice at the highest level.
Hey, I think I'm starting to get how this organized religion thing works. Maybe I'll join a few to make sure I go to allllll the good places
And you haven't disagreed with what I said, only how I said it ;)
If the former, this places a huge incentive on dictatorships like Iran to use the very easy strategy of co-locating all military targets with schools, hospitals, etc. so that any attack on them by anyone is automatically immoral.
I don’t automatically think everything the US has done (either in Iran this year or in history) is good, best, righteous btw. But positions like yours seem to take for granted that it’s never okay to wage any kind of war.
Set aside for a moment whether it’s safe to classify the Islamic Republic as a truly evil regime.
I don’t want to tempt Godwin’s Law, but after seeing how the Left in the US and Europe rallied to the cause of supporting Hamas, I don’t think modern-day “progressives” have the courage to do anything to counter truly bad actors besides asking them nicely to stop. I’d love to see someone from that political alignment explain where their red lines are, past which they’d morally support a military attack - and yes, even one where we can be nearly certain innocents will also be hurt or killed.
So if you live in the US and don’t want one government agency in the US to have this power (that is ambiguous under current law), one way you can try to avoid it is by refusing to sell it to them and urging others to do the same.
It’s a long shot, sure, but it certainly seems more effective than hoping the legislature wakes up and reins in the executive these days.
"Our enemies would have no qualms building a weapon that will end life on earth! We better build it first because we're the good guys!"
The point is - this happens everywhere, it's not just some weird western thing.
It can also mean facilitating a militaristic surveillance state.
Not necessarily the same things, and at some point we might have to choose whose side we're on.
https://www.nytimes.com/2026/04/27/us/politics/sergey-brin-g...
In extremis, were the people working for Pol Pot just good patriots with no moral culpability?
We could surely at least agree that there are cases where working for the military of your home country doesn't fully excuse you from your actions.
In fact, I think international tribunals have existed which operated on just those principles.
You propose that other governments' militaries would not be so compromising. Seems reasonable.
But the question then becomes, what is the operative distinction between the two?
The courts can intervene later, but they can't un-bomb a hospital.
This is setting aside the obvious problem where governments will often set laws based on self-interest rather than morality, particularly when it comes to military conflict.
See also the new national sport of hunting for fishing boats off the South American coast. Is that "lawful?"
And yes, since you went there: everything the Nazis did was "lawful." To the extent it wasn't "lawful," they made it "lawful."
How do you attack law enforcement with a gun while on your knees, with your arms pinned behind you and the gun is holstered? It's interesting how we can watch the same video, and some people only see what they are told to see.
"Any AI researcher who continues to work here is morally compromised."
It feels like a constant campaign and the posters seem so incredibly self righteous and unthoughtful.
I know that there might be $several ways those highly-paid engineers might still rationalize their work. Some of them might have ideological reasons to treat entire classes of people as unworthy of life. Within the model of their ideologies, the most evil things might be perfectly moral.
I wonder what reasons you have to disagree with people's moral stance against using AI as a weapon.
Still have faded Bernie stickers on their cars, No Kings organizers, “fuck SF I’m in the east bay for life fuck tech” - and you all make 7 figures Monday - Friday by supporting the death of society and democracy.
I don’t dare say anything though because “money is money”, the bay is expensive..but I do sure as shit judge every single person I know who joined OAI, Anthropic, Google, and Meta.
My friends are incredibly bright and good at what they do, it’s why they all have the roles they have. It makes me sad (and frustrated) knowing they are lured in by enough money dangling in front of them that makes them swallow their souls and identity, while fuelling the fire in the same breath.
I have a deep amount of respect and gratitude for my friends (and anyone else) who chooses to work at non-profits, and more ethical - mission based companies for less. I hate how much these AI companies and roles are offering people, it’s completely forced lots of gifted people into a war machine.
People will rationalize themselves into declaring this moral even though it is obviously one of the most overtly immoral actions possible. One friend I have, a rather intelligent guy otherwise, even tried to construct a utilitarian argument that he'd donate some percentage of his 'earnings' to life-saving charities, meaning he'd be saving more lives on net. The fact that if everybody thought and behaved the same way, the entirety of humanity would cease to exist, was a consideration he didn't have a response for. Let alone the fact that he had just rationalized his way into justifying nearly any deed imaginable, so long as he got paid enough for it.
My dim view is more on the AI companies being absurdly overvalued, with too much money to know what to do, which feeds downwards into compensation packages, which lure in “innocent” individuals who can’t say no. It’s not been a healthy market to be vulnerable in, most companies outside AI are just not getting the same funding or can compete at all - and it’s a shit storm.
It's not complicated: if these friends would take a non-society-destroying job at equal pay (who wouldn't?) then their values aren't driving the decision, money is. Fine, that's a choice adults get to make. But then own it and actually justify it on its merits, don't just retreat to "who are you to judge."
I agree with the intent of your rhetorical question, so I'm jesting with you. I'm justifying my "yes" with the hopefully humorous distraction that every person, including American taxpayers, has at some point made a nonsustainable/selfish (my definition of immoral) decision.
Arguably it's exactly the opposite. In the same way we ask billionaires to pay their taxes because the regulatory regime is what allowed them the structure to make their billions in the first place, the national security of the country the AI researchers are in is what allows them to make a vast salary to work on interesting, leading edge capabilities like AI. They should feel obligated to help the military.
The Pentagon does not want Google or anyone else deciding what they can and cannot use their AI for. They’re saying we won’t break the law, and that should be enough for you - pinky swear!
And that seems to be enough for Google. Though I might request some auditing capability that is agentic to verify rather than take them at their word.
Next step: is Google FEDRAMP’d yet for this and for classified enclaves? Or do they also go through Palantir’s AI vehicle?
In war, civilians can't audit every move of the military. (It's impractical, both for reacting in time and for keeping secrets from the enemy.)
If the military doesn't work with Google, they will work with someone else who might not put the same amount of pressure on the military about the practical limits of AI. Or, even worse, our enemies might use a significantly better AI than we do.
My hope is that "war" shifts to AI vs. AI, machine vs. machine. Calling people who work on AI for wartime purposes immoral is itself fundamentally immoral when AI in war replaces the need for human casualties.
And sure, maybe that just means the military decides to take their business elsewhere. But if you have confidence that your service is the best, then you sell based on that.
> The classified deal apparently doesn’t allow Google to veto how the government will use its AI models.
Seems concerning?
(Yes, I recognize that past military entanglements do read as favors for Big Oil, but that’s more because lobbyists directly purchased the corrupt and useless Congress)
A mechanism to address this exists, though.
https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950
So Google can't tell the government it needs a warrant to perform a search? Google can't sue over something the government did?
It's Google's product they want to buy.
now follow orders.
The location is classified.
Ok all jokes aside, if you suspect that there’s wrongdoing in the classified sphere, and it really matters to you, well, you should get involved in politics. We don’t just let everyone everywhere know everything, because we think it would be risky if Putin or the Chinese Communist Party also knew all those things. So we limit it to people who have taken oaths and are accountable and need to know (the military), the civilians who need to know (security clearance holders), and those who hold a high office with the public’s trust (high-ranking politicians). You can be a Senator. You just need a lot of people to trust you enough to vote for you. Or, and this is a bit easier, support politicians you do trust to vet classified things to be elected to high office, and ask them to look into it and give you their word that things are being done properly.
This is why we elect competent (hopefully) leaders to worry about these things for us. Mob rule democracy about every national secret would mean they’re not secrets for very long!
Also, this is probably the only acceptable arrangement when it comes to industry-government contracts. The government will always have more information than civilians.
Congress and the courts obviously.
If you think there's a hole in the law tell your congressman, don't, for some reason, try and put Google or any Ai company above the government.
The first is fully neutered. The second is far too slow.
"Nothing unlawful" needing to be in the contract is inherently concerning, as it's typically the default, assumed state of such a thing.
I am kind of mad at James Cameron here. Skynet was evil but interesting. Real life controlled by Google is evil but not interesting - it is flat-out annoying.
What are the consequences of breach? Otherwise, Americans' only use for this is to wipe their ass, and only if they can find a paper version.
Could Google back out of this agreement later by arguing that they were coerced?
Not trying to suggest that Google would be opposed to doing evil, but curious about how solid this agreement would be in practice.
Having your work used by the govt in ways you disagree with feels similar to having your taxes used in ways you disagree with.
When you pay taxes you have no say in the bombs acquired with that money or where they are dropped. The latter, though, doesn't seem to provoke the same pushback.
Vote in elections, local and general.
Indeed - paying "taxes" to a murderous entity is a horrible affront to morality and humanity. We do it because we're terrified; we are not perfect moral creatures. But we still know it's wrong.
btw i am not making a judgement call on the ai usage issue itself, just saying that this and taxes are more equivalent than it might seem
sure if you're Lockheed you might be screwed, but that's not the case for Google. Military contracts, or even government contracts as a whole are a tiny fraction of the King Kong Sized gorilla that is Google.
The fact that Anthropic puts up a fight but OpenAI/Microsoft and Google don't, I find hard to characterize as anything other than pathetic. These guys could, if they wanted to, afford a lawyer or two to push back on the administration. They do that pretty successfully with their taxes in most places, btw.
And so starts the lying to our faces. The public and private (from your own employees!) consensus is that it should not be used for those things at all, regardless of “human oversight.”
So the rest of the world is fine to spy on; it's the domestic part they don't agree with. So go on, destroy lives all around the world, helping the powers that be build the fascist state. It's fine to use Gemini to tell them what building to blow up; it's fine for Gemini to wrongly identify people and cause hundreds or thousands of deaths by telling the military who to attack.
Reality is, this ship sailed once the US/Palantir rolled out AI target selection.
Lawful didn't stop Project MKUltra, or attacking countless countries, or overthrowing countless governments, or murdering countless people, or kidnapping people and torturing them, or...
The USA can do anything it wants, to anyone, any time.
[1] https://www.nytimes.com/2025/09/20/us/politics/tom-homan-fbi...
Remember that even the Third Reich had laws!
I've had the unfortunate experience of working at a startup that started courting some autonomous weapons companies and HOLY SHIT were they the bottom of the barrel. Levels of incompetence you wouldn't believe, just good ol' boys who wanted to play with energetics. Then the company I was working for also hemorrhaged all their top engineers because they found the work unsettling.
The takeaway is that your refusal to assist these shitheads does have an impact, they have to pay more for talent and they have a much harder time courting good talent.
This is exactly what got us here.
It's so unbelievably obvious at this point that the Pentagon, and everything like it across the globe, needs a deprecation plan. We don't need these massive states anymore for security or regularity; we can communicate around the world at the speed of light and bypass their notions of how we're supposed to relate to one another.
Enough is enough. Spin down the nukes. Bring home the ships. Send the money back.
Capital and Big Tech have always been opportunistic enablers, not principled actors. Corporate Values have always been nothing but internal propaganda. "Don't be evil", what a farce.