> “Think about it,” he continued. “Who discovers the edge cases the docs don’t mention? Who answers the questions that haven’t been asked before? It can’t be people trained only to repeat canonical answers. Somewhere, it has to stop. Somewhere, someone has to think.”
> “Yes,” said the Moderator.
> He leaned back. For a moment, restlessness flickered in his eyes.
> “So why wasn’t I told this at the start?”
> “If we told everyone,” said the Moderator gently, “we’d destroy the system. Most contributors must believe the goal is to fix their CRUD apps. They need closure. They need certainty. They need to get to be a Registered Something—Frontend, Backend, DevOps, Full Stack. Only someone who has suffered through the abuse of another moderator closing their novel question as a duplicate can be trusted to put in enough effort to make an actual contribution.”
Could be that AI companies feeding on Stack Overflow are selling tapes at a premium, and if they admit it's only supervised learning from a lot of human experts, it would destroy the nice bubble they have going around AGI.
Could also be that you have to do the actual theory / practice / correction work for your basal ganglia to "know" something without thinking about it (i.e. learn), contrary to the story, where the knowledge is inserted directly into your brain. If everyone uses AI to lazily skip the "practice" phase, then there's no one left to make the AI evolve. And the world is not a Go board where the AI can learn against itself indefinitely.
Probably not the OP's intent, though. I suspect there are a lot of ways to destroy the system.
Modern "AI" (LLM-based) systems are somewhat similar to the taped humans in this story. They may have a lot of knowledge, even highly specialized knowledge, but once that knowledge becomes outdated, or they are required to create something new, they struggle a lot. Even systems with RAG and "continuous memory" (not sure if that's the right term) don't really learn anything new. From what I know, they can accumulate knowledge, but they still struggle with creativity and skill learning. And that may be a problem for the users of these systems as well, because they may rely on the shallow knowledge the LLM model or "AI" system provides instead of thinking and trying to solve the problem themselves.
Luckily enough, most of the humans in our world can still follow George's example. That's what makes us different from LLM-based systems. We can learn something new, and learn it deeply, creating deep and unique networks of associations between different "entities" in our minds, which allows us to be truly creative. We can also dynamically update our knowledge and skills, as well as our qualities and mindset, and so on...
That's what I'm hoping for, at least.
No machine will ever be sufficient to overcome the fundamental problem: a novice is incapable of properly evaluating a system. No human can either (despite many believing they can). It's a fundamental information problem. The best we can do is match our human system, where we trust the experts, who have depth. But we already see the limits of that, and how frequently experts get ignored by those woefully unqualified to evaluate them. Maybe it'll be better, as people tend to trust machines more. But for the same reason it could be significantly worse. It's near impossible to fix a problem you can't identify.
I have been searching YouTube for “vibe coding” videos that are not promoting something. I found this one, sat down, and watched the whole three hours. It does take a lot of real effort.
In my experience, vibe coding is not so much vibing as needing to do a lot of actual work too. I haven't watched the video you linked, but that sounds like it reflects my actual experience and that of people I know offline.
Once Rust can be agentically coded, there will be millions of mines hidden in our critical infrastructure. Then we are doomed.
Someone needs to see the problem coming and start to work on the paths to solution.
That's the real scary part to me. It really ramps up the botnets. Those who know what to look for have better automation tools to attack, and at the same time we're producing more vulnerable targets. It's like we're creating as much kindling as possible while handing out easy-strike matches. It's a fire waiting to happen.
https://wtfm-rs.github.io/wtfm-serde/doc/wtfm_serde/
I know this is orders of magnitude smaller than npm or pip, but if this is the best we can get 50 years after 70s UNIX on the PDP-11, we are doomed.
On a side note, I wonder how much of this is due to the avoidance of abstraction. I hear so many people say that the biggest use they get from LLMs is avoiding repetition. But I don't quite understand this, as repetition implies poor coding. I also don't understand why there's such a strong reaction against abstraction. Of course, there is such a thing as too much abstraction, and it should be avoided, but code, by its very nature, is abstraction. It feels much like how people turned Knuth's "premature optimization is the root of all evil" from "grab a profiler before you optimize, you idiot" into "optimization is to be avoided at all costs".
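To make the point concrete, here's a minimal sketch (hypothetical example, field names invented) of the kind of repetition people reportedly lean on LLMs to generate, next to the small abstraction that removes the need to generate it at all:

```python
# Repetitive version: three near-identical checks. This is exactly the
# sort of boilerplate an LLM will happily autocomplete for you.
def load_user(raw: dict) -> dict:
    if not raw.get("name", "").strip():
        raise ValueError("missing name")
    if not raw.get("email", "").strip():
        raise ValueError("missing email")
    if not raw.get("role", "").strip():
        raise ValueError("missing role")
    return raw

# Abstracted version: the repetition collapses into data plus a loop,
# so there is nothing left to autocomplete.
REQUIRED_FIELDS = ("name", "email", "role")

def load_user_abstracted(raw: dict) -> dict:
    for field in REQUIRED_FIELDS:
        if not raw.get(field, "").strip():
            raise ValueError(f"missing {field}")
    return raw
```

The two behave identically; the difference is that in the second, adding a fourth field is a one-word change rather than another generated block.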
Part of my questioning here is: as the barriers to entry are lowered, do these kinds of gross mischaracterizations become more prevalent? There seems to be a real dark side to lowering the barrier to entry. Just as in any social setting (like any subreddit, or even HN), as the population grows the culture changes significantly, and almost always toward the novice. For example, on HN we can't even assume anymore that a given user is a programmer. I'm glad we're opening up (and I'm glad we're lowering barriers to entry), but "why are you here if you don't want to learn the details?" How do we lower barriers and increase openness without killing the wizards and letting the novices rule?
The incentives and loss functions point toward short-term attention and long-term amnesia. We are fighting the algorithms.
Also, and this is just an aside, but “the protagonist who is too special for the sorting hat” is a bit of a trope in young adult literature at this point. Is this the first real instance of it? 1957. That’s a while ago! I don’t even know if the “sorting hat” trope was established enough to subvert at the time.
Nope. Just within science fiction, early issues of Galaxy had many editorials denouncing or mocking stories with overused tropes, such as Westerns transposed to space, or babies being killed as aberrant after a nuclear war because they have ten fingers and toes.
I’m willing to believe the term “trope” wasn’t in use in 1957, if that’s what you’re saying. But surely they had the idea of popular little trends in contemporary literature.
They must have known they were writing pulp sci-fi. At least when they got their copies they could feel the texture!
I don’t know of a link for the first. Here’s one for the second.
Apparently, Asimov was an early critic of the “Mozart in the womb” movement.
I studied classical music and came from a challenged background, which, to be honest, is a rarity in that field. Almost everyone I studied with had parents who specifically encouraged music education and had the means to make that happen. I got mine from some gifted vinyl as a child and fell in love with the orchestra. If I were in this story, I'd probably not have been recommended to be a Professional Composer (if social expectations were the equivalent of what Asimov is describing here).
So yeah, I'm pro 'play Mozart to your baby' :)
¹ There's even a BoK for software developers, the SWEBOK, but I've never met anybody who's read it.
Blanking on author and title, but read it a _long_ while ago, and it had a distinctly golden age feel --- maybe Murray Leinster?
0: https://kyla.substack.com/p/the-four-phases-of-institutional
The callback at the end symbolizes his renewed curiosity. He is no longer ashamed of the way his mind works, or of whether it makes him look different.
Nothing wrong per se with citing what someone you are writing about said about themselves. He has some very odd historical, economic, and political theories, but a lot of them are rooted in common misconceptions.