Everyone will do this, because everyone will believe that everyone will do this.
Even worse, there really is no guarantee that the great powers will create the best terminators. Everyone talks about China and the US. (And we should.) At the same time, however, we should all keep in mind that nations from India and Indonesia to North and South Korea will not simply be sitting on their hands while the US and China forge ahead.
A future where $4 million American or Chinese terminators are easily overwhelmed by thousands upon thousands of $5 Indian autonomous devices is not at all outside the realm of possibility.
That's what makes it all so concerning. We can kind of see where it leads in terms of enhanced capability potential for non-state actors, but we can't really see a way to avoid that future.
Neither is a picnic, but I'll take a small proxy conflict over a massive direct air campaign, let alone boots-on-the-ground Freedom campaigns, any day.
They're investing their trade surplus in assets around the world, especially the third world.
When those assets start to go bad and/or the government nationalizes them?
We'll see if China responds any differently than any of the other colonial powers with business interests.
And while we wait, there are a few more wars to be fought.
Jack-booted thugs shot a man in the back for the crime of defending a woman, and the administration called him a terrorist. Nothing happened to that thug either.
https://claude.ai/public/artifacts/8f42e48f-1b35-450d-8dda-2...
It just shows that they did poor research on the company before joining (Meta is just as bad), that they're in on the grift (they joined OpenAI only post-ChatGPT), and that this employee doesn't believe what they're saying.
One country can pull off a successful headhunt in the span of an afternoon tea party, while another levels cities for years and still fails to even touch the opposition leader. That's the difference between advanced and less-advanced systems.
If people here love peace, good. But if we could always reason our way out of conflict, why did we invent the profession of policing?
Of course, it is possible that countries that have advanced too far ahead might bully the less-advanced ones. But then, maybe the less-advanced countries should look inward and reflect on why they can't create such advanced weaponry themselves. I don't know; maybe, instead of forcing their own people to wear an obedient smiling mask, it's time these countries gave back the power and opportunities so their people can actually grow, gain, and eventually contribute.
I get that there's nuance, but this feels like they want to make a big ethical stand without burning any bridges. You can have one of those.
"If you disagree this strongly with their actions, how can you still respect them?" is a decent description of the latter.
OpenAI already had military contracts while this employee was at the company and there was no open letter last year about that.
Prior to that, they were at Meta and joined OpenAI after ChatGPT took off.
If they thought that AGI was about "principles", then not only were they naive, it leads me to believe they were only there for the RSUs, just like during their time at Meta.
Why is it so hard to be honest and just say you were there for the money, the fame, and the RSUs, and not for so-called "AGI"?
Because then you'd miss opportunities like this to market yourself. It's a kind of hedging your bets, in order to get more money and/or stay out of jail if the winds change. (Jail can be expensive.)
Or it could be honest cognitive dissonance.
The autonomous killing thing is more reasonable, but still, if you're OK building death technology, I'm not exactly sure what difference having a human in the loop makes. It's still death.
Going to work for these big SV corps is and always has been directly in service of US empire, that's literally what built the valley in the first place.
> I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.
Any employee who stays, especially given the financial cushion they have, is complicit. Shame on all of them.
But here’s the sad truth: most of the knowledge workers at OpenAI won’t be of any value sometime soon because of the very tool they’re building.
Absolutely nothing wrong with something written with AI. Just pointing it out.
Generated comments are banned on HN, FWIW.
So it wouldn't even be worth an HN submission. Well, I suppose it could still fall under the exception for exceptional news.