And yes, there could be problems with the implementation. In particular, the likelihood of many false positives is one reason many people are opposed to such tech.
The training corpora were in English, Hebrew, and Farsi. I guess that means the US had access to Iranian communication channels. I wondered how anyone could rely on an LLM to conduct international relations. It seems a new low in stupidity.
The request was posted on Innocentive (an open-innovation website).
https://cttso.community.innocentive.com/challenge/487ad0cf48...
> Autonomous weapons would require a much faster and much more reliable and deterministic AI.
I think this is only true when the bots are on home turf and you don't want to kill your own. When you are on the other side, you just want to shoot indiscriminately at everything that moves and monitor your surroundings to protect yourself. For that, today's LLMs seem more than enough. And since no human intervened in shooting everyone, it's not even a war crime. I fully expect this.
Based on industry experience: vendors were hired (and paid well) precisely so that there would be, and I quote, "a throat to choke" when needed.