If you had previously developed these skills through wide reading and patient consideration, then tools like LLMs are like somebody handing you the keys to a wave-runner in a choppy sea of information.
But for those who now have to learn to think critically, I cannot see how they would endure the struggle of contemplation without habitually reaching for an LLM. The act of holding ambiguity within yourself, where information is often encoded into knowledge, is instantly relieved here.
While I feel lucky to have acquired critical-thinking skills prior to 2023, the prospect of tools like LLMs being unconditionally handed to young people for learning fills me with a kind of dread.
I anticipate smart people will not suffer some handicap simply because easy answers are now readily available for the vast bulk of people who were never going to think critically in the first place.
Example: when I use it to write prose for fiction or my personal blog, I can feel my own imagination for constructing good sentences and my own vocabulary deteriorate much faster than I would've expected. But when I use it purely for brainstorming plot points _or_ doing a quick proofreading pass on a set of notes I already took myself and want to post, I feel no sudden negative effects.
Feeling that sudden difference in competency is jarring - in a good way. It's very useful information. I wonder whether people growing up with these tools from the beginning won't get that benefit - they won't have a "before AI" and "after AI" brain to "jar" them into awareness and adjustment. Then again... maybe the skills I am trying to preserve will be seen as entirely irrelevant to them regardless.
There is this overly optimistic view that the average person is intelligent, which is definitely not the case. AI doesn't make people stupid; it just exposes the existing stupidity.
Through unlimited amusement, entertainment, and connection we are creating a sad, boring, lonely world.
This homogenization was first pushed through education and the like, but it survived among the more rural folk (insofar as "rural" exists in such a densely populated region); now media and migration are rapidly killing it off.
Technology for touching grass.
That’s the real threat: reality authoring.
AI behavior, on the other hand, can cause under-informed users to do crazy things that no one would ever want. The form of the harm is less predictable, and the magnitude isn't limited by anything except the user's ability and skepticism.
Imagine whatever US president you think is least competent talking to ChatGPT. If their conversation ventures into discussion of a Big Red Switch That Ends The World, it's eventually going to lay out all the reasons the switch should be flipped, because that's exactly what happens in the mountains of narrative material the LLM has been trained on.
Hopefully there is no end-the-world button, and even the worst US president isn't going to push it because ChatGPT said it was a good idea. ... But you get the idea, and there absolutely are people leaving their families and doing all manner of crazy stuff because they accidentally prompted the AI into writing fiction starring them, and now the AI is advising them to live the life of a fictional character.
I think AI doomers have it all wrong. AI risk, to the extent it exists, isn't from any kind of super-intelligence; it's largely from super-insanity. The AI doesn't need any kind of super-human persuasion; it turns out vastly _sub_-human persuasion is more than enough for many.
Wealthy people abusing a new communications channel to influence the public isn't a new risk, it's a risk as old as time. It's not irrelevant, by any means, but we do have a long history of dealing with it.
Totally agree. We have a level of technology today that is enough to ruin the world. We don’t need to look any further for the threat to our souls.
One could say the same of the printing press.
Who says the president isn't already a chatbot himself? Think about this article. [1]
Enjoy.
[0] https://people.com/man-proposed-to-his-ai-chatbot-girlfriend...
[1] https://www.techdirt.com/2025/04/29/the-hallucinating-chatgp...
They don't really think that way.
See Trump defunding medical research. Capitalists need medical care too, but they think quarterly.
(I realize this might be a weak point for many people.)
Skepticism also seems to be reduced because we're armored against people telling us lies in their own self-interest and against ours, while AI will make stuff up that benefits no one. (And even where it could benefit someone, people assume the AI isn't trying to benefit itself.)
Neither qualifies as thinking for yourself.
The past few years have been a veritable parade of experts saying inaccurate things, and/or being proven hilariously wrong in a variety of domains. I'm not saying that this isn't a hard problem -- it is -- but the fact is that "expert" is not a get-out-of-thinking-free card. It is, at best, a slightly higher weighted input amongst all others.
They hold more weight than the average person, but it doesn't make them right by default.
Of course, you also have to identify which experts are trustworthy. This is an important skill to have.
A non-expert must rely on at least two things to do this. The first is external signals, like financial associations and a record of making unpopular criticisms (but without being a contrarian or aiming for sensationalism), as well as reputational factors (not popularity, but a reputation for making strong cases).
The second is the basic coherence of their claims. If they make remarks that contradict basic reality, then this is not a good sign.
And of course, you have to be prudent and recognize your own limits.
These are probabilistic, naturally, and there is an expected divergence of opinion here, even between what you thought yesterday and what you think today.
She put flat Earth, reptilians, and "the Queen had Diana killed" on the same level. I guess Gladio too?
I mean, kings have historically had lots of people killed. Certainly I can't know if that's what happened, but it's at least possible (unlike flat Earth).
Also, she didn't take questions from the audience, in the spirit of science.
It's a strength for members of a community to think alike. On the other hand, some people like to search in todash meme space for a useful idea or strategy in the rough. The problem is that this treasure-hunter strategy is only available to those with the resources to try lots of untested and potentially quite harmful ideas.
Uh oh...
More seriously, if you have non-techie (or less techie) friends or family using ChatGPT please ask to see their conversations.
You're likely to be shocked by at least a few of them... many people really don't understand what these tools are and are using them in crazy and damaging ways.
For example, one friend's brother-in-law has ChatGPT telling him about various penny stocks and obscure cryptocurrencies and promising him 10000x returns, which he absolutely believes and is making investments based on.
Other people are allowing ChatGPT to convince them that God has chosen to speak to them through ChatGPT and is commanding them to do all sorts of nonsense.
The commercial LLMs work well enough that people who don't know how they work are frequently bamboozled.
Consider how much of your own skepticism of the output comes from cases where it was confidently but objectively wrong, and then consider what happens to someone who never uses it on anything where objective correctness can be easily judged.
I dunno how tool use is set up in the chat interface, as I've only used the API, but I doubt there was ever a request to any of the URLs; the author could just as easily have added https://amandaguinzburg.substack.com/p/that-time-i-won-the-l... or any other made-up URL and it would have waxed poetic about that one too.
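To make that concrete, here's a minimal sketch of how tool use works over the API (assuming the OpenAI Python client and chat-completions tool calling; `fetch_url` is a hypothetical tool declared by us, not part of the API). The model never makes an HTTP request itself: unless it emits a tool call and our code actually fetches the page, anything it says about a URL is generated from the URL string alone.

    # Minimal sketch, assuming the OpenAI chat-completions tool-calling API.
    # `fetch_url` is a hypothetical tool we declare; the model can only ask
    # us to run it, and no request happens unless our code performs one.
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "fetch_url",
            "description": "Fetch the raw contents of a web page",
            "parameters": {
                "type": "object",
                "properties": {"url": {"type": "string"}},
                "required": ["url"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "What do you think of https://example.com/made-up-essay ?",
        }],
        tools=tools,
    )

    msg = resp.choices[0].message
    if msg.tool_calls:
        # The model asked us to fetch the page; a real request happens only
        # if we execute it and send the result back in a follow-up message.
        print("tool call requested:", msg.tool_calls[0].function.arguments)
    else:
        # No tool call was made: any "review" of the URL is confabulated.
        print(msg.content)

If the praise shows up in the `else` branch, the model "read" a page it never requested, which is presumably what happened with the made-up URLs above.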
Are we still treating small-n fMRI studies as anything more than a clout-seeking exercise in confirmation bias?
Mass media are not only able to deliver the same message to everyone, or the same presuppositions to everyone (a more dangerous thing, as the desired conclusions are then drawn by people themselves; see Bernays's "music room" tactic for getting people to buy pianos), but once the same content has been delivered to everyone, people will talk about it at some point. This creates the impression of consensus which causes people to assign greater confidence to the content that the mass media have delivered.
So it's circular. You put an idea in people's head, they all end up talking about the idea, and this causes people to feel confident about it being true, because everyone is talking about it. And even if you don't consume mass media, you still face a society of people who do. You don't escape the effects of mass media simply because you personally don't consume it.
I think it's on the person to realize whether AI is becoming a crutch.
AI chat bots enable passive consumption. Passive consumption homogenizes thought. It's not the only technology to do this.
I suspect that The New Yorker, and similar outlets, will stop caring when it becomes financially and socially advantageous to do so.
A culture that is ambivalent about, or uninterested in, providing practical solutions to this problem is the greater issue.
But he got it wrong: for most, it doesn't need to be better than what they'd do themselves; it doesn't even need to be particularly good.
Plenty of people would prefer to put out AI copy even when they suspect it's worse than what they'd write themselves, because the failure reflects less on them personally when it turns out to be flawed.
Even for supposed critical thinkers, on average we’re not all that original.
We should wait for the peer reviews before digging too much into it though. These are, after all, preliminary results.