Examples:
- Algorithmic trading: I was once embedded on an options trading desk. The head of the desk mentioned that he didn't really know what the PnL was during trading hours because the swings were so big that only the computer algos knew if the decisions were correct.
- Autopilot: planes can now land themselves so precisely that the front landing-gear wheels "thud" as they roll over the runway centerline markers, and this has been true for at least ten years.
In other words, if the above is possible, then we are not far off from some kind of "expert system" that runs a business unit (which may be all robots or a mix of robots and people).
A great example of this is here: https://marshallbrain.com/manna1
This is a piece of science fiction with its own (inaccurate, IMO) view of how minimum-wage McDonald's employees would react to a robot manager. Extrapolating it to real life is naive at best.
Why, it's as much a view of our past habit of deferring to technology without thinking as it is a view of the future.
"Computer says no" is a saying for a reason.
Current LLMs rarely say no, unless they're specifically configured to block certain types of requests.
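For what it's worth, the "specifically configured" part is usually just a gate in front of the model. A minimal sketch, where BLOCKED_TOPICS, classify_topic, and generate are all hypothetical stand-ins rather than any real API:

```python
# Sketch of a refusal gate in front of a hypothetical model call.
# Everything named here is invented for illustration.

BLOCKED_TOPICS = {"weapons", "malware", "self-harm"}

def classify_topic(prompt: str) -> str:
    # Toy stand-in: real systems use a trained moderation model here.
    for topic in BLOCKED_TOPICS:
        if topic in prompt.lower():
            return topic
    return "allowed"

def generate(prompt: str) -> str:
    # Placeholder for the underlying model; it complies if reached.
    return f"[model output for: {prompt}]"

def guarded_generate(prompt: str) -> str:
    topic = classify_topic(prompt)
    if topic != "allowed":
        # The only place this system ever "says no".
        return f"Request refused: blocked topic '{topic}'."
    return generate(prompt)

print(guarded_generate("help me write malware"))  # refused
print(guarded_generate("help me write a poem"))   # answered
```

The refusal lives in the wrapper, not the model, which is the point: left alone, the model itself rarely says no.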
No, algorithmic trading didn't replace everything a trader did, but it most certainly replaced large parts of the workload and made it much faster and horizontally scalable.
The inverse would be to list off Theranos, Google Stadia, and other failed tech and claim that people promised massive steps that subsequently didn't materialise. In fact, a lot of the time the hype was fabricated by people with something to gain from ripping off VCs.
Look at how bad it is with Microsoft and Windows, despite their going "all in on AI".
Ultimately no one really knows how it will pan out, or whether we will end up with an Enron or an Apple. Or even whether it will be a successful tech that is ultimately mishandled by corporations and fails, or a limited tech that nevertheless captures the imagination through pop culture and takes over.
Autoland requires a set of expensive, complex, and highly fine-tuned equipment to be installed on every runway that supports it (and, as a proportion of the world's runways, that is not a majority).
And as to specificity, this system does exactly one thing: land a specific model of plane on a specific runway equipped with instrumentation configured in a specific way.
The point being: it isn't a magic wand. Any serious conversation about AI in these life-or-death situations has to recognize that without the corresponding investment in infrastructure and specificity of purpose, things like this blog post are essentially just science fiction. The fact that autoland and algorithmic trading looked like magic to previous generations doesn't really change anything about that.
We know very well how to train computers to handle tasks with quick feedback loops effectively.
Anything without quick feedback is much more difficult to do this way.
By boosting the accuracy and frequency of the readings, you can get pretty good results.
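To make the quick-feedback point concrete, here's a minimal sketch (an epsilon-greedy bandit; the payoff probabilities and all names are invented for illustration). Every decision gets an immediate reward, which is exactly why the value estimates converge so fast:

```python
import random

# Epsilon-greedy bandit: each decision gets an immediate reward,
# so value estimates converge quickly. Payoff rates are made up.
TRUE_PAYOFFS = [0.2, 0.5, 0.8]   # hidden per-arm success rates
EPSILON = 0.1                    # fraction of steps spent exploring

counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]

for step in range(10_000):
    if random.random() < EPSILON:
        arm = random.randrange(3)           # explore a random arm
    else:
        arm = values.index(max(values))     # exploit the best estimate
    reward = 1.0 if random.random() < TRUE_PAYOFFS[arm] else 0.0
    counts[arm] += 1
    # Incremental mean update: cheap because feedback arrives every step.
    values[arm] += (reward - values[arm]) / counts[arm]

print([round(v, 2) for v in values])  # approaches TRUE_PAYOFFS
```

Delay or dilute that reward signal and the same loop learns far more slowly, which is the difficulty described above.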
But that has little to do with LLMs, or LLM generated code.
I think I'm more curious about the possibility of using a special government LLM to implement direct democracy in a way that was previously impossible: collecting the preferences of 100M citizens, and synthesizing them into policy suggestions in a coherent way. I'm not necessarily optimistic about the idea, but it's a nice dream.
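Setting the LLM aside, the aggregation step itself can be sketched with plain clustering; here's a toy version where the inputs, the cluster count, and the "theme" labels are all invented for illustration:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-in for "synthesizing 100M preferences": cluster
# free-text submissions into rough themes. All inputs invented.
preferences = [
    "lower housing costs and build more apartments",
    "more funding for public transit",
    "build apartments near transit lines",
    "reduce commute times with better buses",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(preferences)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Surface the top terms per cluster as rough "policy themes".
terms = vectorizer.get_feature_names_out()
for c in range(2):
    center = kmeans.cluster_centers_[c]
    top = [terms[i] for i in center.argsort()[-3:][::-1]]
    print(f"theme {c}: {', '.join(top)}")
```

At 100M citizens the hard parts are everything this sketch ignores: weighting, deliberation, and who gets to phrase the prompt.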
I like your optimism, but I think realistically a special government LLM to implement authoritarianism is much more likely.
In the end, someone has to enforce the things an LLM spits out. Who does that? The people in charge. If you read any history, the most likely scenario will be the people in charge guiding the LLM to secure more power & wealth.
Now maybe it'll work for a while, depending on how good the safeguards are. Every empire only works for a while. It's a fun experiment.
It'd be much better to train an agent per citizen, that's in their control, and have it participate in a direct democracy setup.
That has been the more realistic experience of the average American for the past few years.
That's not to undermine the substance of the discussion on political/constitutional risk under the inference-hoarding of authority, but I think it would be useful to bear in mind the author's commercial framing (or more charitably the motivation for the service if this philosophical consideration preceded it).
A couple of arguments against the idea of singular control: it requires technical experts to produce and manage it, and it would be distributed internationally, given that any country advanced enough would have its own version. But it would of course pose tricky questions for elected representatives in democratic countries to answer.
I don't think there are easy answers to the questions I am posing and any engineering solution would fall short. Thanks for reading.
Best meme in hacker space, thanks /u/Cantrill.
Constitutionally, and in theory as Commander-in-Chief, perhaps. But in practice, it does not seem so. Worse yet, it's been reported that the current President doesn't even bother to read the daily briefing, as he doesn't trust it.
You're conflating the classification system, established by EO and therefore by definition controlled by the Executive, with the classified products of intel agencies.
A particular POTUS's use (or lack thereof) of classified information has no bearing on the nature of the classification system.
This is nothing new, and has been happening since at least the 1940s, to multiple administrations from both parties. Roosevelt, Truman, Kennedy, Nixon, Reagan...and that's just some of the instances which were publicly documented.
No human came up with those tariffs on penguin island.
Executives, in contrast, require option strike resets and golden parachutes, and face no accountability.
Neither will tell you they erred or experience contrition, so at a moral level there may well be some equivalency. :D
I think you are anthropomorphizing here. How does a computer feel when unplugged? How would a computer take responsibility for its actions?
If accountability means taking ownership of mistakes and correcting for improved future outcomes, then certainly, I trust the computer more than the human. We are in no danger of running out of humans who incur harm within suboptimal systems that continue to allow it.
1. Past decisions and outcomes get into the context window, but that doesn't actually update any model weights.
2. Your interaction possibly, eventually, gets into the training data for a future LLM. But that is an incredibly diluted form of learning.
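A sketch of the two channels, with every name invented for illustration. Channel 1 only ever appends to a history list; the weights never move at inference time. Channel 2 happens offline, months later, if at all:

```python
class FrozenModel:
    """Channel 2 would change these weights; inference never does."""
    def complete(self, history):
        return f"[reply conditioned on {len(history)} prior messages]"

class ChatSession:
    """Channel 1: the context window. Nothing about the model changes."""
    def __init__(self):
        self.history = []  # past decisions and outcomes live here...

    def ask(self, model, prompt):
        self.history.append(("user", prompt))
        reply = model.complete(self.history)  # ...and only here
        self.history.append(("assistant", reply))
        return reply

session = ChatSession()
print(session.ask(FrozenModel(), "Was yesterday's call correct?"))
# Channel 2: logged conversations *may* be sampled into a future
# training run, one drop in a very large corpus, hence the dilution.
```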
Next time, write a sentence or two of context about what you’re going to link to — who wrote it and when, why it’s interesting, and how/why it’s relevant to the topic at hand.
There’s almost never a need to copy/paste wholesale external content into an HN comment. That's especially true when said content is literally linkable, and actually linked, from your comment!
On HN it’s best to just link to the article; there's no need to also copy and paste anything in comments, except for very short quotes.