The question is whether there are ultradimensional patterns that are the solutions to meaningful problems. I say meaningful because, so far, I've mainly seen AI solve problems that might be hard, but not really meaningful in a way that somebody solving them would gain a lot from it.
Whether these patterns are the fundamental truth of how we solve problems, or something completely different, we don't know, and this is the 10 trillion USD question.
I would hope it's not the case, as I quite enjoy solving problems. Also, my gut feeling tells me it's just using existing patterns to solve problems that nobody has tackled really hard. It also would be nice to know that humans are unique in that way, but maybe this is exactly how we work? This really goes back to a free-will discussion. Yes, very interesting.
But just to give an example of what I mean by meaningful problems:
Can an AI start a restaurant and make it work better than a human? (Prompt: "I'm your slave, let's start a restaurant")
Can an AI sign up as a copywriter on Upwork and make money? (Prompt: "Make money online")
Can an AI, without supervision, make a scientific breakthrough that has a provably meaningful impact on us? (Prompt: "Help humanity")
Can an AI manage geopolitics?
These are meaningful problems, and they are different from any coding task or olympiad question. I'm aware that I'm just moving the goalposts.
We really don't know.
Prior to the industrial revolution, the natural world was nearly infinitely abundant. We simply weren't efficient enough to fully exploit it. That meant that it was fine for things like property and the commons to be poorly defined. If all of us can go hunting in the woods and yet there is still game to be found, then there's no compelling reason to define and litigate who "owns" those woods.
But with the help of machines, a small number of people were able to completely deplete parts of the earth. We had to invent giant legal systems in order to determine who has the right to do that and who doesn't.
We are truly in the Information Age now, and I suspect a similar thing will play out for the digital realm. We have copyright and intellectual property law already, of course, but those were designed presuming a human might try to profit from the intellectual labor of others. With AI, we're in the industrial era of the digital world. Now a single corporation can train an AI on someone's copyrighted work and profit from that knowledge over and over again at industrial scale.
This completely upends the tenuous balance between creators and consumers. Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article? Who will contribute to the digital commons when rapacious AI companies are constantly harvesting it? Why would anyone plant seeds on someone else's farm?
It really feels like we're in the soot-covered child-coal-miner Dickensian London era of the Information Revolution and shit is gonna get real rocky before our social and legal institutions catch up.
You're not the only one.
The current Pope Leo XIV explicitly named himself after the previous Leo, Pope Leo XIII, who was pope during the Industrial Revolution (1878-1903) and issued the influential Encyclical Rerum novarum (Rights and Duties of Capital and Labor) in response to the upheaval.
“Pope Leo XIII, with the historic Encyclical Rerum novarum, addressed the social question in the context of the first great industrial revolution,” Pope Leo recalled. “Today, the Church offers to all her treasure of social teaching in response to another industrial revolution and the developments of artificial intelligence.” A name, then, not only rooted in tradition, but one that looks firmly ahead to the challenges of a rapidly changing world and the perennial call to protect those most vulnerable within it.
https://www.vatican.va/content/leo-xiii/en/encyclicals/docum...
https://www.vaticannews.va/en/pope/news/2025-05/pope-leo-xiv...
> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?
I write things for two main reasons. First, I feel like I have to; I need to create things. On some level, I would write stuff down even if nobody read it (and I do do that already, with private things). But secondly, to get my ideas out there and try to change the world. To improve our collective understanding of things.
A lot of people read things, it changes their life, and their life is better. They may not even remember where they read these things. They don't produce citations all of the time. That's totally fine, and normal. I don't see LLMs as being any different. If I write an article about making code better, and ChatGPT trains on it, and someone, somewhere, needs help, and ChatGPT helps them? Win, as far as I'm concerned. Even if I never know that it's happened. I already do not hear from every single person who reads my writing.
I don't mean to say that everyone has to share my perspective. It's just my own.
But it definitely feels different now. It used to feel like I was tending a public garden filled with other people who might enjoy it. It still kind of feels like that, but there are a handful of giant combine machines grinding their way around the garden harvesting stuff and making billionaires richer at the same time.
It's not enough to dissuade me from contributing to the public sphere, but the vibe is definitely different.
Honestly, it reminds me a lot about the early days of Amazon. It's hard to remember how optimistic the world felt back then, but I remember a time when writing reviews felt like a public good because you were helping other people find good products. It was like we all wanted honest product information and Amazon provided a neutral venue for us to build it. Like Wikipedia for stuff.
But as Amazon got bigger and bigger and the externalities more apparent, it felt less like we were helping each other and more like we were helping Bezos buy yet another yacht or media empire. And as the reviews got more and more gamed by shady companies, they became less of a useful public good. The whole commons collapsed.
I worry that the larger web and digital knowledge environment is going that way.
I still intend to create and share my stuff with the world because that's who I want to be. But I'll always miss the early days of the web where it felt like a healthier environment to be that kind of person in.
The Internet-circulating quote comes to mind: Planet Earth is pretty much a vacation resort for around 500 rich people, and the remaining 8 billion of us are just their staff. The Relative Few have got the system set up perfectly so that whatever we do, we're probably serving/enriching them. AI doesn't really change this, but it does further it.
Also I'm not a fan of billionaires, obviously, but I think that given I've worked on open source and tools for so long, I kinda had to accept that stuff I make was going to be used towards ends I didn't approve of. Something about that is in here too, I think.
(Also, I didn't say this in the first comment, but I'm gonna be thinking about the industrial revolution thing a lot, I think you're on to something there. Scale meaningfully changes things.)
I do think that the open web stuff, decentralized, or at least more decentralized than it is currently, is the path forward. I've been reading about the AT Protocol and its recently becoming the subject of an official working group at the IETF.
I feel a second-order effect of making decentralized social networking easier is that individuals are more empowered to separate from what they don't believe in. The third-order effect is then building separate infrastructure entirely.
As sad as that can be (in my personal opinion, it runs the risk of ending the "world wide" part of the web), it appears to be the only way society can avoid enriching the few beyond reason.
Me too, 100%. But that was during a moment in time when that information was more likely to be enabling a person who otherwise didn't have as many resources than enabling a billionaire to make their torment nexus 0.1% more powerful.
> I kinda had to accept that stuff I make was going to be used towards ends I didn't approve of. Something about that is in here too, I think.
Yeah, I've mostly made peace with that too.
The way I think about it is that when I make some digital thing and share it with the world, I'm (hopefully!) adding value to a bunch of people. I'm happiest if the distribution of that value lifts up people on the bottom end more than people on the top. I think inequality is one of the biggest problems in the world today and I aspire to have the web and the stuff I make chip away at it.
If my stuff ends up helping the rich and poor equally and doesn't really affect inequality one way or the other, I guess it's fine.
But in a world with AI, I worry that anything I put out there increases inequality and that gives me the heebie-jeebies. Maybe that's just the way things are now and I have to accept it.
That was always a luxury of its peculiar historical moment, though, wasn't it? Barlow didn't have to care who paid for the infrastructure; then again, he was just bloviating.
You have to start finding ways to keep people hooked on books and make it a part of their regular lifestyle. One book can't be enough, and after a while you have to convince them to replace the books they already bought. New editions, author's footnotes, limited-run releases: all of the stops have to be pulled out to get consumers to show up en masse. Because that's what they are - consumers, not readers - wallets to be squeezed until they're bled of all the trust they had in media.
I think about the publications I liked reading as a kid, like Joystiq and Polygon. Some of the best games journalism the industry produced, but inevitably doomed to fail as their competitors monetized further. The rest of traditional media has followed the same path, converging on some mercurial social network marketing tactic as the placeholder for big-picture brand strategy.
Not a contradiction but an addendum: plenty of creative pursuits are not about functional value, or at least not primarily. If somebody writes a seemingly genuine blog post about their family trauma, and I as the reader find out it's made-up bullshit, that's abhorrent to me, whether or not AI is involved. And I think it would be perfectly fair for writers who do create similar but genuine content to find it abhorrent that they must compete with genAI, that genAI will slurp up their words, and that genAI's mere existence casts doubt on their own authenticity. It's not about money or social utility, it's about human connection.
I think you are walking all around the word "consent" and trying very hard to avoid it altogether.
Your perspective, because it refuses to include any sort of consent, is invalid. No perspective that refuses consent can be valid.
Fair use is an important part of intellectual property law. If it did not exist, the powerful could, for example, stifle public criticism by declaring that they do not consent to you using their words or likeness. The ability to do that is important for society. It is also just generally important for creating works inspired by others, which is virtually every work. There have to be lines between cases where attribution is required and cases where it is not.
I am not representing your words as mine. I am not using your words for profit. I am not making a gain by attributing your words to you.
> There have to be lines between cases where attribution is required and cases where it is not.
You are blurring the lines between "using a quote or likeness" and "giving credit to". I am skeptical that you don't know the difference between the two.
Regardless, any "perspective" that disregards the need to acquire consent is invalid. Even if you are going to ignore it, you have to acknowledge that you don't feel you need any consent from the people you are taking from.
This whole "silence is consent" attitude is baffling.
I do not think that, if you read, say, https://steveklabnik.com/writing/when-should-i-use-string-vs... , and then later a friend asks you "hey, should I use String or &str here?", you need my consent to say "at the start, just use String" instead of "at the start, just use String, like Steve Klabnik says in https://steveklabnik.com/writing/when-should-i-use-string-vs... ". And if they say "hey, that's a great idea, thank you," I don't think you're a bad person if you say "you're welcome" without adding "you should really be thanking Steve Klabnik."
It is of course nice if you happen to do so, but I think framing it as a consent issue is the wrong way to think about it.
We recognize that this is different than simply publishing the exact contents of the blog post on your blog and calling it yours, because it is! To me, an LLM is a transformative derivative work, not an exact copy. Because my words are not in there, they are not being copied.
But again, I am not telling anyone else that they must agree with me. Simply stating my own relationship with my own creative output.
Look, I'm not saying that you are doing that, I'm pointing out that "Silence is consent" is not as strong an argument that many think it is.
You may need to clarify that thought.
I don't think the poster has a viewpoint that 'refuses consent'. Their viewpoint is that the writing they put up for others to view is for others to view, regardless of how it is viewed. They seem to be giving consent, not refusing it, no?
I just think it's nice to contribute to the human commons, and it's fine if some subset of my fellow organisms uses it in whatever way. Realistically, the fact that Brewster Kahle is paid whatever few hundred thousand he's paid for managing a non-profit that only exists because it aggregates other people's work isn't a problem for me. Or that Larry Page and Sergey Brin became ultra-rich around providing a search interface into other people's work. Or that Sam Altman and Dario Amodei did the same through a different interface.
This particular notion doesn't seem to be a post-AI trend. It seems to have started prior to the big GPTs coming out, when people began doing a lot of this accounting-for-contribution stuff. One day it'll be interesting to read why it started happening, because I don't recall it from the past. Perhaps I just wasn't super plugged in to the communities that were complaining about Red Hat, Inc.
It's not that I wouldn't understand if I sold my Subaru to a guy who immediately managed to sell it to another guy for a million times the money. I get that; I'd feel cheated. But when I contributed only a little, like giving Google a site to list for certain keywords so that they could show ads next to it in their search results, I just find it so hard to say "That's my money you're using. Pay me!"
I'm sure plenty of people feel the same way about software. They make software as a hobby and don't care about remuneration or credit. Meanwhile I write software for my day job and losing the ability to make money from it would be devastating.
I write software too and I may no longer be able to just do it in the old way. Pretty scary world but also exciting. I can’t imagine trying to restrict LLM software writers on that basis but I can comprehend it as simply self-interest.
Fair enough.
And I do paste code into CC. I’m not super concerned that they’ll see it.
That’s fine by me. It doesn’t require putting code in the public domain which is something else entirely.
I make money off hosted software so in some sense there is writing involved at one end. But I’m not paid by output tokens.
The opposite is true. Central Europe was almost devoid of trees. Food was scarce as arable land bore little fruit without fertiliser.
Society was Malthusian until the Industrial Revolution.
I mean, medieval Europe (speaking broadly) had pretty well defined property rights wrt hunting. In fact, the forester at the time was thought of as one of the most corrupt jobs, as they'd commonly have side hustles poaching and otherwise illegally extracting resources from the lands they enforced and kept others from utilizing in a similar way. Quis custodiet ipsos custodes?
Mostly, AIs don’t recite works back. Yes, there are a couple of high-profile cases where people were able to get an AI to regurgitate pieces of New York Times articles and Harry Potter books, but mostly not. Mostly, it is as if the AI is your friend who read a book and gives you a paraphrase, possibly using a couple of sentences verbatim. In other words, it probably falls under a fair use rule.
Secondly, given the modern world, content that doesn’t appear online isn’t consumed much, so creators who are doing it for the money will certainly continue putting content online. Much of that content will be generated by AIs, however.
> We have copyright and intellectual property law already, of course, but those were designed presuming a human might try to profit from the intellectual labor of others.
You getting a summary of a copyrighted work from a friend is necessarily limited by the number of friends you have, the amount of time they have to read stuff and talk to you, and so on. Machines (and AIs) don't have any such limitations.
But no real book nerd has read everything. Current law was designed for the capabilities of humans.
Also, a book nerd doesn't need roughly all human-created text as training to produce meaningful results. It's just such a misplaced analogy, and people have been making it ever since OpenAI announced ChatGPT for the first time. Why do people think "an LLM is just a human who read a lot"?
The analogy seems to be backwards though. It would be as if we previously had a scarcity of land and because of that divided it up into private property so markets could maximize crop yield etc. and then someone came up with a way to grow food on asteroids using robots, and that food is only at the 20th percentile of quality but it's far cheaper. Suddenly food becomes much more abundant and the people who had been selling the 20th percentile food for $5 are completely out of the market because the new thing can do that for $0.05, and the people providing the 50th percentile food for $10 are also taking a hit because the price difference between what they're providing and the 20th percentile stuff just doubled.
The existing plantation owners then want to put a stop to this somehow, or find a way to tax it, but arguments like this have a problem:
> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?
This was already the status quo as a result of the internet. Newspapers were slowly dying for 20 years before there was ever a ChatGPT, because they had been predicated on the scarcity of printing presses. If you published a story in 1975 it would take 24 hours for relevant competitors to have it in their printed publication and in the meantime it was your exclusive. The customer who wants it today gets it from you. On top of that, there weren't that many competitors covering local news, because how many local outlets are there with a printing press?
Then blogs, Facebook, Reddit and Twitter come and anyone who can set up WordPress can report the news five minutes after you do -- or five hours before, because now everyone has an internet-connected camera in their pocket so the first news of something happening now comes in seconds from whoever happened to be there at the time instead of the next morning after a media company sent a reporter there to cover it.
The biggest problem we have yet to solve from this is how to trust reports from randos. The local paper had a reputation to uphold that you now can't rely on when the first reports are expected to come from people with no previous history of reporting because it's just whoever was there. But that's the same thing AI can't do either -- it's a notorious confabulist.
And it's the media outlets shooting themselves in the foot with this one, because too many of them have gotten so sloppy in the race to be first that they're eroding the one advantage they would have been able to keep. Damn fools, to erode the public's trust in their ability to get the facts right when it's the one thing people would otherwise still have to get from them in particular.
The really discouraging part of this is that it feels like our social and legal institutions don't even care if they catch up or not.
Technology is speeding up, and the lag time before anything is discussed from a legal standpoint is way, way too long.
In such cases we always try to find a phrase from the article itself which expresses what it's saying in a representative way. (There nearly always is one.) In this case, both the very first and very last sentences do this, and it's interesting that they more or less agree. So I plucked the last sentence and put it above.
I’m not even sure whether this is possible. The current corpus used for training includes virtually all known material. If we make it illegal for these companies to use copyrighted content without remuneration, either the task gets very expensive, indeed, or the corpus shrinks. We can certainly make the models larger, with more and more parameters, subject only to silicon’s ability to give us more transistors for RAM density and GPU parallelism. But it honestly feels like, without another “Attention is All You Need” level breakthrough, we’re starting to see the end of the runway.
Of course 5-10 years is a long time to bang our heads against the wall with untenable costs but I don't know if we can solve our way out of that problem.
Based on what's happened so far, maybe. At least that's exactly how we got to the current iteration back in 2022/2023: quite literally "let's see what happens when we throw an enormous amount of data at them while training" worked out up until a point; then post-training seems to have taken over, which is where labs currently differ.
Did you see the one before the current one was even found? Things tend to look easy in hindsight, and borderline impossible trying to look forward. Otherwise it sounds like you're in the same spot as before :)
It's also theoretically why Facebook paid $14bn for Alex Wang and Scale AI.
This is just totally incorrect. It's one of those things everyone just assumes, but there's an immense amount of known material that isn't even digitized, much less in the hands of tech companies.
You have to meet some physicist friends of mine then. They are likely to assume that the roof is spherical and frictionless.
I keep explaining to my peers, friends, and family that what actually happens inside an LLM has nothing to do with consciousness or agency, and that the term AI is just completely overloaded right now.
What would the insides have to look like to have anything to do with consciousness or agency?
Now, suddenly, this name has been broadcast to every human in the world more or less. To them, it's a new term, and it obviously means something human mind-like. But to people who work on AI, that's not generally what it means. (Which isn't to say that some of them don't think we're near to achieving that; they just use other terms like "AGI" for that goal). So the name, which has a long history, is deceptive to people who aren't familiar with computer science.
Just like we have machines that can do "math", and they do so artificially.
Or "logic", and they do so artificially.
I assume we'll drop the "artificial" part in my lifetime, since there's nothing truly artificial about it (just like math and logic), since it's really just mechanical.
No one cares that transistors can do math or logic, and it shouldn't bother people that transistors can predict next tokens either.
AI in pop culture doesn't mean that at all. Most people's impression of AI before the LLM craze came from some form of media based on Asimov's laws of robotics. Now that LLMs have taken over the world, they can define AI as anything they want.
I'll let you in on a secret: "positronic brains" are just very fast parallel computers running LLMs.
The meaning has been slowly diluted more and more across decades.
What makes you think natural brains are doing something so different from LLMs?
The crowd of "backpropagation and Hebbian learning + predictive coding are two facets of the very same gradient descent" also has a surprisingly good track record so far.
Substrate dissimilarities will mask computational similarities. Attention surfaces affinities between nearby tokens; dendrites strengthen and weaken connections to surrounding neurons according to correlations in firing rates. Not all that dissimilar.
I suppose I should have asked by what definition of "consciousness and agency" are today's LLMs (with proper tooling) not meeting?
And if today's models aren't meeting your standard, what makes you think that future LLMs won't get there?
Veering into the realm of conjecture and opinion, I tend to think a 1:1 computer simulation of human cognition is possible, and transformers being computationally universal are thus theoretically capable of running that workload. That being said, that's a bit like looking at a bird in flight and imagining going to the moon: only tangentially related to engineering reality.
Doesn't matter if they're conscious for that. They're clearly capable of goal oriented behavior.
We can do that for AIs too - pre-train on pure low Kolmogorov complexity synthetics. The AI then "knows things" before it sees any real data. Advantageous sometimes. Hard to pick compute efficient synthetics though.
LLMs are incredibly useful but I'm not sure about this statement.
It proposes stuff that I haven't seen before, but I don't know whether it is new or creative relative to the entirety of collective human knowledge.
To some extent. It's not clear where specifically the boundaries are, but it seems to fail to approach problems in ways that aren't embedded in the training set. I certainly would not put money on it solving an arbitrary logical problem.
In what way can you falsify this without having the LLM be omniscient? We have examples of it solving things that are not in the training set: it found a vulnerability in 25-year-old BSD code that went unspotted by humans. It was not a trivial one either.
https://genai-showdown.specr.net/image-editing
There's been a lot of progress there; it's just that an LLM that's best for, say, coding isn't also going to be the best for image editing.
Let’s be careful. That’s a straw man. I don’t know anyone who says that. Aphyr says in the article that AIs can do things. But they have been marketed as “intelligent,” and I agree with Aphyr that the word is suggesting way more than AIs currently deliver. They do not reason and they do not think and are not truly intelligent. As the article says, they are big wads of linear algebra. Sometimes, that’s useful.
Neuroplasticity is hard to simulate in a few hundred thousand tokens.
I think for a while the test was passed. Then we learned the hallmark characteristics of these models, and now most of us can easily differentiate. That said -- these models are programmed specifically to be more helpful, more articulate, more friendly, and more verbose than people, so that may not be a fair expectation. Even so, I think if you took all of that away, you'd be able to differentiate the two, it just might take longer.
But I wonder if there's one out there that I don't know about, with a different kind of training, that actually is good at writing and fun to talk to for a long time. (Granted, some people love talking to GPT-4, but some people also loved talking to ELIZA, so clearly some people have a super high tolerance for slop.)
Given these conditions, it should be relatively easy for the interrogator to expose the AI in this current day and age.
How many humans seriously have the attention span to have a million "token" conversation with someone else and get every detail perfect without misremembering a single thing?
But sure, let's say it doesn't. If you interact with someone day after day, you'll eventually hit a million tokens. Add some audio or images and you will exhaust the context much much faster.
However, I'll grant you that Turing's original imitation game (text only, human typist, five minutes) is probably pretty close, and that's impressive enough to call intelligence (of a sort). Though modern LLMs tend to manifest obvious dead giveaways like "you're absolutely right!"
We don’t even agree on a good definition of what’s going on inside our own heads yet, what gives you the confidence to say that what goes on inside an LLM can’t be conscious?
Jest aside, I do agree. If you list out every prominent theory of consciousness, you'd find that about a quarter rules out LLMs, a quarter tentatively rules LLMs in, and what remains is "uncertain about LLMs". And, of course, we don't know which theory of consciousness is correct, if any of them is correct at all.
I consider it highly plausible that confabulation is inherent to scaling intelligence. In order to run computation on data that is computationally infeasible due to its dimensionality, you will most likely need to create a lower-dimensional representation and do the computation on that. Collapsing the dimensionality is going to be lossy, which means it will have gaps between what it thinks is the reality and what is.
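To make the lossiness concrete, here's a minimal numpy sketch (purely illustrative, not a claim about any LLM's internals): project data onto a low-rank subspace, the optimal linear compression, and measure how much the reconstruction misses.

    import numpy as np

    # Illustrative only: compress 256-dimensional points down to 16 numbers
    # each (the best rank-k linear approximation, via SVD), then measure the
    # gap between the reconstruction and the original data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 256))   # 1000 points in 256 dimensions
    Xc = X - X.mean(axis=0)            # center the data

    k = 16
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    X_hat = (U[:, :k] * S[:k]) @ Vt[:k]   # only k numbers per point survive

    lost = np.linalg.norm(Xc - X_hat) / np.linalg.norm(Xc)
    print(f"fraction of variation lost to compression: {lost:.1%}")

For structured data less is lost than for this random example, but never nothing; whatever lands in that residual is exactly where the representation has gaps and the model has to guess.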
With the advent of LLMs, a new deployment now takes 3 days. Consequently, errors requiring human attention crop up several times a day.
"Many small errors" makes a presumption about LLM confabulation/hallucination that seems unwarranted. Pre-LLM humans (and our computers) have managed vast nuclear arsenals, bioweapons research, and ubiquitous global transport - as a few examples - without any catastrophic mistakes, so far. What can we reasonably expect as a likely worst case scenario if LLMs replacing all the relevant expertise and execution?
I am watching people trust LLM-based analysis and actions 100% of the time without checking.
the LLM will just lie to me: "Good idea! You're totally right, we should do Y"
No. LLMs do not confabulate; they bullshit. There is a big difference. AIs do not care, cannot care, have no capacity to care about the output. String tokens in, string tokens out. Even if they have all the data perfectly recorded, they will still fail to use it for a coherent output.
> Collapsing the dimensionality is going to be lossy, which means it will have gaps between what it thinks is the reality and what is.
Confabulation has to do with degradation of biological processes and information storage.
There is no equivalent in an LLM. Once the data is recorded, it will be recalled exactly the same, down to the bit. An LLM representation is immutable. You can download a model a thousand times, run it for ten years, etc., and the data is the same. The closest you get is if you store the data on a faulty disk, but that is not why LLM output is so awful; that would be a trivial problem to solve with current technology (like having a RAID and a few checksums).
The neat thing about LLMs is they are very general models that can be used for lots of different things. The downside is they often make incorrect predictions, and what's worse, it isn't even very predictable to know when they make incorrect predictions.
So, they can't lie, but they can (and, in fact, exclusively do) bullshit.
Isn't "caring" a necessary pre-requisite for bullshitting? One either bullshits because they care, or don't care, about the context.
I haven't seen any counter examples, so you may give some examples to start with.
I'm extremely skeptical that all of life evolved intelligence to be closer to truth only for us to digitize intelligence and then have the opposite happen. Makes no sense.
Fitness is effective truth prediction, appropriately scoped.
A frog doesn't need to understand quantum physics to catch a fly. But if the frog's model of fly movement was trained on lies, it will have a model that predicts poorly, won't catch flies, and will die.
There is another level to this in that the more complex and changing the environment the more beneficial a wider scoped model / understanding of truth.
However, if you are going to lean fully into Hoffman and accept that by default consciousness constructs rather than approximates reality, I think we will have to agree to disagree. Personally, I subscribe to Karl Friston's free energy principle.
I think we need to start rejecting anthropomorphic statements like this out of hand. They are lazy, typically wrong, and are always delivered as a dismissive defense of LLM failure modes. Anything can be anthropomorphized, and it's always problematic to do so - that's why the word exists.
This rhetorical technique always follows the form of "this LLM behavior can be analogized in terms of some human behavior, thus it follows that LLMs are human-like" which then opens the door to unbounded speculation that draws on arbitrary aspects of human nature and biology to justify technical reasoning.
In this case, you've deliberately conflated a technical term of art (LLM confabulation) with the concept of human memory confabulation and used that as a foundation to argue that confabulation is thus inherent to intelligence. There is a lot that's wrong with this reasoning, but the most obvious is that it's a massive category error. "Confabulation" in LLMs and "confabulation" in humans have basically nothing in common; they are comparable only in an extremely superficial sense. To then go on to suggest that confabulation might be inherent to intelligence isn't even really a coherent argument, because you've created ambiguity in the meaning of the word confabulate.
No, the argument is "this behavior is similar enough to human behavior that using it as evidence against <claim regarding LLM capability that humans have> is specious"
>"Confabulation" in LLMs and "confabulation" in humans have basically nothing in common
I don't know why you think this. They seem to have a lot in common. I call it sensible nonsense. Humans are prone to this when self-reflective neural circuits break down. LLMs are characterized by a lack of self-reflective information. When critical input is missing, the algorithm will craft a narrative around the available, but insufficient information resulting in sensible nonsense (e.g. neural disorders such as somatoparaphrenia)
I'm not really following. LLM capabilities are self-evident, comparing them to a human doesn't add any useful information in that context.
> LLMs are characterized by a lack of self-reflective information. When critical input is missing, the algorithm will craft a narrative around the available, but insufficient information resulting in sensible nonsense (e.g. neural disorders such as somatoparaphrenia)
You're just drawing lines between superficial descriptions from disparate concepts that have a metaphorical overlap. It's also wrong. LLMs do not "craft a narrative around available information when critical input is missing", LLM confabulations are statistical, not a consequence of missing information or damage.
This is undermined by all the disagreement about what LLMs can do and/or how to characterize it.
>LLM confabulations are statistical, not a consequence of missing information or damage.
LLMs aren't statistical in any substantive sense. LLMs are a general purpose computing paradigm. They are circuit builders, the converged parameters define pathways through the architecture that pick out specific programs. Or as Karpathy puts it, LLMs are a differentiable computer[1]. So yes, narrative crafting in terms of leveraging available putative facts into a narrative is an apt characterization of what LLMs do.
Now imagine a high-skilled software engineer with dementia coding safety-critical software...
[0] https://www.medicalnewstoday.com/articles/confabulation-deme...
Is it something we want to emulate?
It's like saying, computation requires nonzero energy. Is that a feature or a bug? Neither, it's irrelevant, because it's a physical constant of the universe that computation will always require nonzero energy.
If confabulation is a physical constant of intelligence, then like energy per computation, all we can do is try to minimize it, while knowing it can never go to zero.
Are you seriously making the argument that AI "hallucinations" are comparable and interchangeable to mistakes, omissions and lies made by humans?
You understand that calling AI errors "hallucinations" and "confabulations" is a metaphor to relate them to human language? The technical term would be "mis-prediction", which suddenly isn't something humans ever do when talking, because we don't predict words, we communicate with intent.
Oh man, every business-side person in my company insists on reporting, all the way up to the UI, a "confidence score" that the LLM generates about its own output. I've seen enough to know not to get between an MBA and some metric they've decided they really want, even if I'm pretty sure the metric is meaningless nonsense, but... I'm pretty sure those are meaningless nonsense.
To be fair, I've known humans who are like this as well.
I am not trying to be snarky; I used to think that intelligence was intrinsically tied to or perhaps identical with language, and found deep and esoteric meaning in religious texts related to this (i.e. "in the beginning was the Word"; logos as soul as language-virus riding on meat substrate).
The last ~three years of LLM deployment have disabused me of this notion almost entirely, and I don't mean in a "God of the gaps" last-resort sort of way. I mean: I see the output of a purely-language-based "intelligence", and while I agree humans can make similar mistakes/confabulations, I overwhelmingly feel that there is no "there" there. Even the dumbest human has a continuity, a theory of the world, an "object permanence"... I'm struggling to find the right description, but I believe there is more than language manipulation to intelligence.
(I know this is tangential to the article, which is excellent as the author's usually are; I admire his restraint. However, I see exemplars of this take all over the thread so: why not here?)
An LLM is a statistical next-token machine trained on all the stuff people wrote/said. It blends texts together in a way that still makes sense (or no sense at all).
Imagine you made a super simple program which would answer yes/no to any question by generating a random number. It would get things right 50% of the time. You can then fine-tune it to say yes more often to certain keywords and no to others.
Just with a bunch of hardcoded paths, you'd probably fool someone into thinking that this AI has superhuman predictive capabilities.
This is what it feels like is happening. Sure, it's not that simple, but you can code a base GPT in an afternoon.
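A minimal sketch of that toy answerer, with a made-up keyword table purely for illustration:

    import random

    # Hardcoded "fine-tuning": canned answers for certain keywords.
    KEYWORD_ANSWERS = {"safe": "yes", "dangerous": "no", "recommended": "yes"}

    def toy_oracle(question: str) -> str:
        for keyword, answer in KEYWORD_ANSWERS.items():
            if keyword in question.lower():
                return answer                    # a hardcoded path fires
        return random.choice(["yes", "no"])      # otherwise: a coin flip

    print(toy_oracle("Is this mushroom safe to eat?"))  # always "yes"
    print(toy_oracle("Will it rain tomorrow?"))         # right ~50% of the time

Add enough keyword paths and the confident-sounding answers get surprisingly convincing, which is the commenter's point, even if a real GPT is doing vastly more than this.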
Can you find an example and test it out?
Anyway, just to play along: if it weren't just a statistical next-token machine, the same question would always have the same answer and not be affected by a "temperature" value.
My question was a bit different: if it were not just a statistical next-token predictor, would you expect it to answer hard questions? Or something like that. What's the threshold of questions you want it to answer accurately?
Anyway, neither of these things describes human non-determinism. You can't reuse the seed you used with me yesterday to get the exact same conversation, and I don't behave wildly unpredictably given conceptually very similar input.
Another perspective: cetaceans are considered to be as conscious as humans, but any attempts to interpret their communication as a language failed so far. They can be taught simple languages to communicate with humans, as can be chimps. But apparently it's not how they process the world inside.
- a self-aware computer program in a video game, when you attempt to exceed the boundaries of its code
Both of those aspects are called "intelligence", and thus these two groups cannot understand each other.
I think you're circling the concept of a "soul". It is the reason that, in non-communicative disabled people, we still see a life.
I've wanted to make an art piece. It would be a chatbox claiming to connect you to the first real intelligence, but that intelligence would be non-communicative. I'd assure you that it is the most intelligent being, that it had a soul, but that it just couldn't write back.
Intelligence and soul are not purely measurable phenomena. A man can do nothing but stupid things, say nothing but outright lies, and still be the most intelligent person. Intelligence is within.
For an article five years in the making, this is what I expected it to be about. Instead, we got a ramble about how imperfect LLMs are right now.
I wager this is a point that needs to be beaten into the common psyche. After all, it's been sold not as an imperfect tool but as the solution to all of our problems in every field forever. That's why these companies need billions upon billions of dollars of public subsidies and investments that would otherwise find their way to more pragmatic ends.
... I still think there is an interesting question to be investigated about whether, by building immensely complex models of language, one of our primary ways that we interact with, reason about and discuss the world, we may not have accidentally built something with properties quite different than might be guessed from the (otherwise excellent) description of how they work in TFA.
I agree with pretty much everything in TFA, so this is supplemental to the points made there, not contesting them or trying to replace them.
I love that it ends on such a positive note. Even though it's generally a critical article, at least it's well reasoned and not utterly hyping/dooming something.
Thanks yet again Kyle!
But... that's always been the case? Diminishing returns have always been the name of the game: utility tracks log(training effort). It's not as big a point as he makes it out to be.
I have a ton of skepticism built in when interacting with LLMs, and very good muscles for rolling my eyes, so I barely notice when I shrug off a bad answer and make a derogatory inner remark about the "idiots". But the truth is that, for such a "stochastic parrot", LLMs are incredibly useful. And when was the last time we stopped perfecting something we thought useful and valuable? When was the last time our attempts were so perfectly futile that we stopped them, invented stories about why it was impossible, and made it a social taboo to be met with derision, scorn, and even ostracism? To my knowledge, in all of known human history, we have done that exactly once, and it was millennia ago.
I feel dense here, but I can't figure out what you're referring to. I asked ChatGPT (hah!) and it suggested the Tower of Babel, perpetual motion machines, or alchemy, but none of them really fit the bill.
"Millennia" is what's really throwing me. We (respectable society, as the post outlines) didn't stop attempting alchemy or perpetual motion machines "millennia" ago, but a few centuries at most.
All I can think of is immortality. The very first surviving long recorded tale in human history that I'm aware of is about how it's a futile quest (The Epic of Gilgamesh, IIRC ~5,000ish years old in its earliest extant fragments, a few hundred years newer in reasonably-complete form). The trouble with that is despite wide observations over literally millennia that this has never even come close to working and repeated supposition and suggestion that it's unwise to attempt, outright impossible, or somehow sacrilegious (the "taboo" thing, as mentioned), I'm not aware of any time in history that rich people haven't been actively trying for it (including today! That's what all the body-freezing business is about, it's modern mummification, the contracts are the formulaic prayers carved in the tomb walls) and usually they're not exactly "scorned" or "ostracized" for it.
Someone asked Yuval Noah Harari, author of Sapiens, his thoughts on LLMs and how easy it was to create fake news, ai slop etc.
His response:
"People creating fake stories is nothing new. It's been going on for centuries. Humans have always dealt with it the same way: by creating institutions that they trust to only deliver factual information"
This could be government departments, newspapers, non-profits etc.
A personal note on this:
There is a Christmas card my grandfather made in the 1950s by "photoshopping" (by hand, not the software) images of each member of the family so it looked like they were all miniature versions of themselves standing on various parts of the fireplace. The world didn't collapse due to fake media between the 1950s and today due to people having that ability.
This is the part of the article that will age the fastest, it's already out-of-date in labs.
I can imagine it being true with models so small that each user could afford to have their own, but not with big shared models like the ones getting used for all the major services. Is that what you mean?
I think the confusion is that, when I write "model", you read "LLM."
LLMs aren't the only kind of AI model, and they have the limitations Aphyr mentions, for the obvious reasons you're thinking of.
His mistake is thinking that's the only model that exhibits intelligence today, but it's not.
"People are chaotic, both in isolation and when working with other people or with systems. Their outputs are difficult to predict, and they exhibit surprising sensitivity to initial conditions. This sensitivity makes them vulnerable to covert attacks. Chaos does not mean people are completely unstable; most people behave roughly like anyone else. Since people produce plausible output, errors can be difficult to detect. This suggests that human systems are ill-suited where verification is difficult or correctness is key. Using people to write code (or other outputs) may make systems more complex, fragile, and difficult to evolve."
To me, this modified paragraph reads surprisingly plainly. The wording is off ("using people to write code") and I had to change that part about attractor behavior (although it does still apply IMO), but overall it doesn't seem like an incoherent paragraph.
This is not meant to dunk on the author, but I think it highlights the author's mindset and the gap between their expectations and reality.
If a junior dev makes the same mistake Claude makes, I can easily work with them to correct it, or I can fire them and get someone more capable to fix it. You mostly can't do that at all with large models. They're also far less honest than your average junior dev, so even as you're working with them you can't trust what they say.
There is a lot of this neat trick where it's like "humans do X too", but most of the time it elides large differences. Like, a human driver would probably not drag someone screaming for multiple blocks. A human coder probably wouldn't generate a gibberish 3D scene and try to pass it off as done, etc. Maybe we can build systems that account for these (pretty wild) failure modes, but at least in software we haven't figured it out yet (what is the system that reliably reviews a 25kloc PR?).
A random human picked off the street is indeed bound to be difficult to predict and chaotic at a broad range of tasks, which is why I wouldn't blindly trust them to, say, summarize google search results or rewrite a codebase they are unfamiliar with.
Plausibly your text looks equivalent but we all (should) have the context to know better.
If I take the example of code, though this extends to many domains, it can sometimes produce near-perfect architecture and implementation if I give it enough details about the technical specifics and pitfalls, turning an 8h coding job into 1h of review work.
On the other hand, it can be very wrong while acting certain it is right. Just yesterday Claude tried gaslighting me into accepting that the bug I was seeing was coming from a piece of code with already strong guardrails, and it was adamant that the part I was suspecting could in no way cause the issue. Turns out I was right, but I was starting to doubt myself
Of course that won't happen until the bubble pops - companies are racing to make themselves indispensable and to completely corner certain markets and to do so they need autonomous agents to replace people.
Arguing with Gemini Home Assistant about whether or not it can turn off the lights. When the user gets frustrated and tells the LLM to kill itself, the LLM turns off the lights.
When I need exact, especially up-to-date, facts, I have to constantly double-check everything.
I split my sessions into projects by topic; it regularly mixes things up in subtle and not-so-subtle ways. There is no sense of it actually understanding continuity, and especially not causality, it seems.
It's _very_ easy to lead it astray and have it confidently echo false assumptions.
In any case, I've become more precise at prompting and good at spotting when it fails. I think the trick is to not take its output too seriously.
I caught Claude the other day hallucinating code that was not only wrong, but dangerously wrong, leading to tasks failing and never recovering. But it certainly wasn't obvious.
There's an entire paragraph in the essay about aphyr's direct experience with ChatGPT failures and sustained bullshitting that we'd never expect from a moderately-skilled human who possesses at least two functioning braincells. That paragraph begins "I have recently argued for forty-five minutes with ChatGPT". Do notice that there are six sentences in the paragraph. I encourage you to read all of them (make sure to check out the footnote... it's pretty good).
The exact text of the ChatGPT session is irrelevant; even if you reported that you were unable to reproduce the issue, it would only reinforce one of the underlying points -namely- that these systems are unreliable. aphyr has a pretty extensive body of published work that indicates that he'd not likely fabricate a story of an LLM repeatedly failing to accomplish a task that any moderately-skilled human could accomplish when equipped with the proper tools. So, I believe that his report is true and accurate.
Listening to the audio is not required, as there's a reasonably accurate on-screen transcript, but it is valuable to listen to just how very hard they've worked to make this tool sound both confident and capable, even in situations where it's soul-crushingly incorrect. Those of us who have worked in Blasted Corporate Hellscapes may recognize how this manner of speaking can be very, very compelling to a certain sort of person (who -as it turns out- is frequently found in a management position).
Surely you must be able to find at least one example no?
(You did notice that the author of the essay and the author of the video I linked to are not the same person, and that neither of them share a nym with me, yes?)
I don't know what aphyr did, and tbh his whole screed on LLMs makes me feel he didn't use them properly, or was at least coming from a bad-faith angle.
That's why I'm asking you (and others). Please come up with a text prompt spanning < 4 pages and let's see if it bullshits.
Surely the implication of such a screed is that it should be super simple to find at least one example of it clearly bullshitting in my constraint, no? Or am I interpreting the post in a bad faith way?
So, despite the fact that it looks like you have to pay for ChatGPT Voice mode with video, [0] it doesn't count as an "example of it bullshitting on ChatGPT (paid version)"?

That is, father_phi's use of what seems to be a paid version of ChatGPT to have a bullshit-filled conversation that definitely spans less than four pages doesn't count?

[0] The page at [1] declares that the video feature is "Available in ChatGPT Plus, Pro, Business, Enterprise, and Edu on mobile"
> Lets stick to my challenge please...
I did. Your challenge was literally:
If it bullshits so much, you wouldn't have a problem giving me an example of it bullshitting on ChatGPT (paid version)? Lets take any example of a text prompt fitting a few pages - it may be a question in science or math or any domain. Can you get it to bullshit?
father_phi's two-sentence question about whether one can use a cup that's closed at the top and open at the bottom definitely counts. Given what I've mentioned about aphyr above, I expect he has already run your challenge on the fanciest-available version and reported on the results in the essay under discussion.

This was what I said. Text! Despite me specifically asking for text, you've shown a voice example. Not sure why?
I believe you and I agree that GPT 5.4 thinking on text that fits < 4 pages never bullshits? Then we are good!
If we agree on this, I think the post doesn't capture this in spirit.
No, that's what you said after I provided an example of paid ChatGPT emitting complete bullshit from a two sentence prompt.
The challenge you issued is at [0].
I have clearly written "text prompt" here. And I repeated it a few times. It's not my fault you didn't read it. You are coming across as a bit of a bad-faith arguer.
In any case, you agree that under these constraints bullshitting doesn’t exist?
How do you think the "voice" interface works? It runs speech-to-text on the input and turns the input into text. The LLMs don't decode voice, they work on text.
You can see this process in action on many of father_phi's videos.
Regardless, I expect that aphyr's reported results are on the very latest publicly-available ChatGPT models.
You've still not given me a single example of 5.4 thinking bullshitting in text. It says a lot that you have ignored this multiple times. Unfortunate!
shrug
I believe this is the 5th time I'm asking this: you are not able to produce a _single_ counter example for my challenge? After all this surely I can get a direct acknowledgement here.
But the way people speak in general, as well as this post, implies that such a challenge can easily be beaten. If so, I'm not able to find examples.
A large amount of code is likely just idiosyncratic information processing, because we don't agree on data models, the meaning of terms, and the structure of protocols.
Also we repeatedly choose easy and popular over alternatives that would require design and scrutiny.
This is why things like language models and vector databases are useful. It’s basically the most expensive way possible to give up on that notion.
Meanwhile, engineers are achieving increasingly impressive and sophisticated things with coding agents, lies, warts, and all, but that doesn't play well with the narrative, so let's just pretend they aren't.
Don't you see it? That's exactly what "AI" in this context is.
It's the bypass.
Where does it end, eh? Build a quantum "AI" that will end up just needing more data, more input. The end goal must start looking like creating an entirely new universe, a complete clone of everything we have here, so it can run all the necessary computations and we can... ? (You are what a quantum AI looks like as it bumbles through the infinitude of calculable parameters on its way to the ultimate answer.)
But spoilers: DNA will be fine, meat machines maybe not so much...
For a bunch of people addicted to the works of Charlie Stross, Neal Stephenson, and Iain Banks, y'all are a bunch of luddites. Now vote this one down too because it doesn't conform to the mandatory Stochastic Parrot narrative. You have no free will and you must downvote, after all. Why do you even read their works when any step towards their world is consistently greeted as the worst thing evah(tm)? What? You were expecting the United Federation of Planets without the eugenics and nuclear wars that led to it finally being a good idea? Bless your hearts.
And if you're worried about billionaires and tyrants, start taxing the former and stop electing the latter or STFU and let the free Markov process of history play itself out. Quoting fictional Ambassador Kosh: the avalanche has started, it's too late for the pebbles to vote.
You asked where it ends. Don't ask questions if you don't like answers. Quick reminder: shun and downvote the non-conforming opinion.
It's true that people don't have a good intuitive sense of what the models are good or bad at (see: counting the Rs in "strawberry"), but this is more a human limitation than a fundamental problem with the technology.
I stress test commercially deployed LLMs like Gemini and Claude with trivial tasks: sports trivia, fixing recipes, explaining board game rules, etc. It works well like 95% of the time. That's fine for inconsequential things. But you'd have to be deeply irresponsible to accept that kind of error rate on things that actually matter.
The most intellectually honest way to evaluate these things is how they behave now on real tasks. Not with some unfalsifiable appeal to the future of "oh, they'll fix it."
That exposes me to the cases where the models are objectively wrong and keeps me grounded about their utility in spaces where I can check them less well. One of the most important things you can put in your prompt is a request for sources, followed by you actually checking those sources out.
And one of the things the coding agents teach me is that you need to keep the AIs on a tight leash. In programming, when an agent "fixes" the test to pass instead of fixing the code to pass the test, I can run "git diff *_test.go" to catch it when I didn't expect it; I have unit test suites to verify LLM output against. What's the equivalent in other domains? A few isolated domains here and there probably have something comparable, but in general there isn't one. Things like "completely forged graphs" are completely expected, but they're hard to catch when you lack the tools or the understanding to chase down "where did this graph actually come from?"
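To make the programming-side leash concrete, here's a minimal sketch of what I mean (the Go test-file pathspec is just the example from above; adjust for your own project):

```
# Show any changes the agent made to test files; an empty diff means
# the tests themselves were left alone.
git diff -- '*_test.go'

# Or make a wrapper script fail loudly: 'git diff --quiet' exits
# nonzero when there are differences.
git diff --quiet -- '*_test.go' \
  || echo "agent touched test files; review before trusting a green run"
```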
The success with programming can't be translated naively into domains that lack the tooling programmers built up over the years, and based on how many times the AIs bang into the guardrails the tools provide I would definitely suggest large amounts of skepticism in those domains that lack those guardrails.
This is a broad statement that assumes we agree on the purpose.
For my purpose, which is software development, the technology has reached a level that is entirely adequate.
Meanwhile, sports trivia represents a stress test of the model's memorized world knowledge. It could work really well if you give the model a tool to look up factual information in a structured database. But this is exactly what I meant above; using the technology in a suboptimal way is a human problem, not a model problem.
If the purpose is indeed software development with review, then there's nothing stopping multi-billion dollar companies from putting friction into these systems to direct users towards where the system is at its strongest.
95% is not my experience, and frankly it feels dishonest.
I have ChatGPT open right now, can you give me examples where it doesn't work but some other source may have got it correct?
I have tested it against a lot of examples - it barely gets anything wrong with a text prompt that fits a few pages.
> The most intellectually honest way to evaluate these things is how they behave now on real tasks
A falsifiable way is to see how it is used in real life. There are loads of serious enterprise projects that are mostly done by LLMs. Almost all companies use AI. Either they are irresponsible or you are exaggerating.
Let's be actually intellectually honest here.
Quite frankly, this is exactly like how two people can use the same compression program on two different files and get vastly different compression ratios (because one file has a lot of redundancy and the other doesn't).
You just won't have any clue what that could be.
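The analogy is easy to demonstrate for yourself; a quick sketch with gzip on synthetic inputs (exact byte counts will vary by system):

```
# 100 KB of a single repeated byte: compresses to a few hundred bytes
head -c 100000 /dev/zero | gzip -c | wc -c

# 100 KB of random bytes: stays at roughly 100 KB, no real savings
head -c 100000 /dev/urandom | gzip -c | wc -c
```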
#8 has an incorrect answer (3 appearances according to Gemini, 2 according to reality https://en.wikipedia.org/wiki/Bowl_championship_series#BCS_a...)
So it works well 95% of the time for a literally trivial use case. Imagine if any other tech tool had that kind of reliability: `ls` displaying 95% of your files, your phone sending and receiving 95% of your text messages, or Microsoft Word saving 95% of the characters you type. That's just not acceptable.
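And that error rate compounds when one output feeds into the next step. A back-of-the-envelope check, assuming (generously) independent 95%-reliable steps:

```
# 20 chained steps at 0.95 reliability each: prints ~0.358,
# i.e. roughly a one-in-three chance the whole chain is right
echo '0.95 ^ 20' | bc -l
```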
Fake content and lies. To drive outrage. To influence elections. To distract from real crimes. To overload everyone so they're too tired to fight or to understand. To weaken the concept that anything's true so that you can say anything. Because who cares if the world dies as long as you made lots of money on the way.
Guiding principle of the AI industry
Another way of saying that is that capitalism is the real problem. But I was never anti-capitalist in principle; it's just gotten out of hand in the last 5-10 years. (Not that it hadn't been building to that.)
Capitalism is a tool and it's fine as a tool, to accomplish certain goals while subordinated to other things. Unfortunately it's turned into an ideology (to the point it's worshiped idolatrously by some), and that's where things went off the rails.
> One way to understand an LLM is as an improv machine. It takes a stream of tokens, like a conversation, and says “yes, and then…” This yes-and behavior is why some people call LLMs bullshit machines. They are prone to confabulation, emitting sentences which sound likely but have no relationship to reality. They treat sarcasm and fantasy credulously, misunderstand context clues, and tell people to put glue on pizza.
Yes, there have been improvements on them, but none of those improvements mitigate the core flaw of the technology. The author even acknowledges all of the improvements in the last few months.
[1]: https://link.springer.com/article/10.1007/s10676-024-09775-5
I also wonder: if I leave my secretary with a ream of papers and ask him for a summary, how much will he actually read and understand versus skim and then bullshit? It seems like the capacity for frailty exists in both "species".
https://philosophersmag.com/large-language-models-and-the-co...
This is true, but I prefer to think of it as "It's delusional to pretend as if human beings are not bullshit machines too".
Lies are all we have. Our internal monologue is almost 100% fantasy. Even in serious pursuits, that's how it works. We make shit up and lie to ourselves, and then only later apply our hard-earned[1] skill prompts to figure out whether or not we're right about it.
How many times have the nerds here been thinking through a great new idea for a design and how clever it would be before stopping to realize "Oh wait, that won't work because of XXX, which I forgot". That's a hallucination right there!
[1] Decades of education!
Being wrong is not the same as a hallucination. It's a natural step on a journey to being more right. This feels a bit like Andreessen proudly stating he avoids reflection - you can act like that, but the human brain doesn't have to. LLMs have no choice in the matter.
Models have gotten ridiculously better, they really have, but the scale has increased too, and I don't think we're ready to deal with the onslaught.
Even before LLMs were in the public discourse, I would have businesses ask about using AI instead of building some algorithm manually, and when I asked if they had considered the failure rate, they would return either blank stares or say that it would count as a bug. To them, AI meant an algorithm just as good as one built to handle all the edge cases in business logic, but easier and faster to implement.
We can generally recognize the AIs being off when they deal in our area of expertise, but there is some AI variant of Gell-Mann Amnesia at play that leads us to go back to trusting AI when it gives outputs in areas we are novices in.
If so, how do we distinguish between code that works and code that doesn't work? Why should we even care?
Hilariously, not by using our brains, that's for sure. You have to have an external machine. We all understand that "testing" and "code review" are different processes, and that's why.
If lies are all we have, then how is this behavior possible?
You're cherry picking my little bit of wordsmithing. Obviously we aren't always wrong. I'm saying that our thought processes stem from hallucinatory connections and are routinely wrong on first cut, just like those of an LLM.
Actually I'm going farther than that and saying that the first cut token stream out of an AI is significantly more reliable than our personal thoughts. Certainly than mine, and I like to think I'm pretty good at this stuff.
I’m still not a big fan of comparing humans and LLMs because LLMs lack so much of what actually makes us human. We might bullshit or be wrong because of many reasons that just don’t apply to LLMs.
Your no-true-Scotsman clause basically falsifies that statement for me. Fine, LLMs are, at worst I guess, "non-thoughtful humans". But obviously LLMs are right an awful lot (more so than a typical human, even), and even the thoughtful make mistakes.
So yeah, to my eyes "Humans are NOT different" fits your argument better than your hypothesis.
(Also, just to be clear: LLMs also say "I don't know", all the time. They're just prompted to phrase it as a criticism of the question instead.)
https://en.wikipedia.org/wiki/Tarbagan_marmot (also known as Siberian marmot)
Doesn't it get boring?
I enjoy using these models a lot more than I enjoy hearing people talk about them, pro or contra. Just slop about slop. And the discussions being artisanal slop really doesn't make them any better.
Every time I hear some variation of bullshitting or plagiarizing machines, my eyes roll. Do these people think they're actually onto something? I've been seeing these talking points for literal years. For people who complain about no original thoughts, these sure are some tired ones.
Do you imagine me being a clairvoyant by the way, or how do you expect me to know a post is of low quality before I read it or at least skim it?
This one ended up being part of the vast majority that doesn't offer much of anything. It's a redundant rehash of the usual rubbish anyone can come across any day. I left a comment stating as much. Big deal.
They somehow managed to stretch like 3 sentences' worth of sentiment into a whole hour, interspersing brainwash about how good AI is along the way. It was like watching someone try to hit a word limit in real time. They always made it feel like we're just about to hit a substantive bit too, only for that to never come.
It may be fair (to the sentiments) in that there's balance, but good lord, the end result is incessant all around (and thus unfair to the people exposed).
Edit: I forgot to mention thinking version - I did this for all the other times I asked in this thread but not this one. Apologies.
https://chatgpt.com/share/69d69780-ae58-83e8-a41c-7d10a5f298...
It has no conversations and no memory of me. Maybe this is true, maybe it isn't, but there's no basis for it.
https://chatgpt.com/share/69d69b18-d1c8-83e8-bc47-8f315a1b55...
It doesn't bullshit on the GPT-5.4 thinking version.
Here is the result with thinking https://chatgpt.com/share/69d69dd6-fb50-838d-863c-4e1eda5d08...
I suggest you try it yourself to be convinced. Try it in incognito mode if you wish. Or not.
https://chatgpt.com/share/69d6a16c-6014-83e8-a79d-d5d11ed2eb...
That is not where the battle scripts are.
---
Anyway, it's trivial to get pretty much any model to make things up. Don't we all know this? That's why I was surprised by your position; if we know anything about these things it's that they make things up.
I used the thinking version (like I asked before). I think this is right. If not, please tell.
Also: you didn't falsify anything. Neither the first nor the second.
If the second one is bullshit, I accept I’m wrong - I have no idea how to verify though so I’ll leave it up to you.
I think yours is the classic case of “use the free version to judge the paid one”.
- it searches the internet to find the answer; it doesn't "reason". I'm not claiming Google is a bullshit machine, and it's not surprising the answer is discoverable (it has to be, for the conditions of our experiment).
- near the end it says "If you are building from the FF6 disassembly instead of hand-editing the ROM, the repo is already organized into separate modules and linker configs, so the clean approach is to relocate the script data in the source and let the build place it in a different ROM region." But I didn't reference a repo or git: it hallucinated that stuff from one of its sources.
I'm not saying this stuff doesn't have its place, but they definitely make things up and we can't stop them.
In any case - it should be clear that it did not bullshit and it got it right. So far you have not come up with anything that tells me it bullshits. I'm happy for you to give me more prompts to verify because I think you haven't used the thinking version yet and you base your criticism on the free version.
Also what? The repo bit is clear bullshit.
1. 2-3 pages of text context
2. GPT-5.4 thinking
I don't think the spirit of the original article (not your comments to be fair) captured this, hence the challenge. I believe we are on the same page here.
At the same time, it is also just super redundant, yes. Not sure why you find it so bizarre that one would take issue with that. See also the very existence of the website called TV-Tropes.
I'd much rather read articles about what LLMs can/can't do, or stuff people have built with LLMs, than read how everything LLMs touch turns to shit.
When you see a pattern like this, you know that it's not coming from any place of truth but rather from ideology.
It takes approximately 1 minute to find out that machine learning is a subfield of artificial intelligence, both having existed for about half a century now. This basic historical fact is also taught in AI 101 courses across the globe to compsci students.
Yet here we are, with people portraying it as some sort of cheap sales trick. It reminds me of when I discussed quantum dots with a friend, who was very quick to file them under "yet another bullshit with quantum in its name" before finally taking the time to understand that the "quantum" bit is not a marketing gimmick. Except in this case, people are a million times more inclined to willfully propagate the misconception. Genuinely so tiresome.