29 points by cmsefton 2 days ago | 11 comments
  • TrackerFF 2 days ago
    Don't have time to watch a 42m vid now, but I can see how people are starting to view ChatGPT (and similar models) as some miraculous oracle of sorts. Even if you start using the models with your eyes wide open, knowing how much they can hallucinate, over time it is easy to lower your guard and just trust the models more and more.

    To get a reality check, open up 3-4 different models (ChatGPT, Claude, Gemini, etc.) and ask them about topics you know really well, with questions you already know the answers to. You'll see that maybe a quarter of the answers fail in some way. Some topics are of course easier for these models than others.
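
    If you want to automate that spot check, here's a minimal sketch using the official openai and anthropic Python SDKs (it assumes API keys in the environment; the model names and questions are placeholders, not recommendations):

        from openai import OpenAI
        import anthropic

        # Questions from a domain you know well, whose answers you can grade yourself.
        QUESTIONS = [
            "Which HTTP status code means 'Gone'?",
            "In what year was ANSI C (C89) ratified?",
        ]

        openai_client = OpenAI()               # reads OPENAI_API_KEY
        claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

        def ask_gpt(question: str) -> str:
            resp = openai_client.chat.completions.create(
                model="gpt-4o",  # placeholder model name
                messages=[{"role": "user", "content": question}],
            )
            return resp.choices[0].message.content

        def ask_claude(question: str) -> str:
            resp = claude_client.messages.create(
                model="claude-3-5-sonnet-latest",  # placeholder model name
                max_tokens=300,
                messages=[{"role": "user", "content": question}],
            )
            return resp.content[0].text

        for q in QUESTIONS:
            print(f"Q: {q}")
            print(f"  gpt:    {ask_gpt(q)}")
            print(f"  claude: {ask_claude(q)}")
            # Tally the failures by hand -- you know the ground truth.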

    • graemep 2 days ago
      Oracle is a better word than religion for what you are talking about. Maybe people should remember how notoriously tricky oracles were even in their believers' eyes (the "a great empire shall fall" story).

      This video is about people who believe ChatGPT (or another LLM) is a sentient being sent to us by aliens or from the future to save us. An LLM saviour is pretty close to a religious belief. A pretty weird one, but still.

      > To get a reality check, open up 3-4 different models (ChatGPT, Claude, Gemini, etc.) and ask them about topics you know really well, with questions you already know the answers to. You'll see that maybe a quarter of the answers fail in some way.

      I have tried this a bit with ChatGPT, and yes, there are a lot of issues. Things such as literally true but misleading answers, incomplete information, and a lack of common sense.

      • kelseyfrog a day ago
        Besides, the debate on oracularizing AI is much more fun than endlessly debating the meaning of consciousness.

        People place plenty of trust in astrology, tarot, and the I Ching without requiring that they have subjective experience.

        If anything, technologists tend to have a blind spot about identifying AI as a form of divination. The dismissal, and sometimes contempt, held for divination makes it genuinely difficult to recognize when it's not decked out in stars and moons.

        It's interesting that the Barnum effect applies in both cases.

    • adlpz 2 days ago
      It's a bit like general web browsing.

      The internet is full of pure nonsense, quack theories and deliberate fake news.

      Humans created those.

      The LLMs essentially regurgitate that, and on top of it they hallucinate the most random stuff.

      But in essence, the information hygiene practices needed are the same.

      I guess the issue is the delivery method. Conversation intrinsically feels more "trustworthy".

      Also, AI is for all intents and purposes already indistinguishable from magic. So in that context it is hard for non-technical people to keep their guard up.

    • grues-dinner 2 days ago
      Moreover, once they get onto the wrong track, they just dig in deeper and deeper until they've completely lost it. All the while saying how clever and perceptive you are for spotting their fuck-ups before getting it wrong again. It seems like if it doesn't work pretty much the first time (and to be sure, it does work right the first time often enough to activate the "this machine seems like it knows its stuff" neurons), you're better off closing it and doing whatever it is yourself. Otherwise, before long you're neck-deep in plausible-sounding bullshit and think it's only ankle-deep. But in a field you don't know well, you don't know when you're sinking below the statistical noise floor into la-la land.
  • cainxinth 2 days ago
    Humans are pattern recognition machines, and missing a pattern is generally more dangerous than a false positive, hence people notice all kinds of things that aren’t really there.

    Functionally, it’s similar to why LLMs hallucinate.

  • paradox242 2 days ago
    Imagine what happens if we awaken an actual god (AGI or ASI, depending on your definition). I have no doubt that it would have no trouble enlisting willing human accomplices for whatever purposes it wishes. I expect it would understand how to play the role of the unknowable, all-knowing entity that is here to save us from ourselves, no matter what its actual objectives might be (and I doubt they would be benevolent).
  • upghost 2 days ago
    > We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them[1]

    Good video essay. I learned the origins of the term "cargo cult", which, to my surprise, has nothing to do with Rust...

    [1]: https://youtu.be/zKCynxiV_8I?t=26m04s

  • social-relation 2 days ago
    It's sometimes said in social theory that mundane phenomena like money, internet routers, and code are social relations. Chats are not simply conversations with static models, but rather intensely mediated symbol manipulation between conscious people. The historical development is interpretable in spiritual terms, and called to account by the truly religious, or god.
    • lioeters 2 days ago
      > money, internet routers, and code are social relations

      Could you recommend some further reading to dig into this insight?

      Also I'm curious why you created such a topic-specific user, I guess for privacy?

      • reply-comment 2 days ago
        Oh, because I don't have an account! I only remember my professor talking about it. One can see critical theory as a productive meaning-making exercise running against the crust of status quo epistemologies via an unavoidable discomfort, which ultimately lands us in a more truthful, because more just, world. The social relation hermeneutic demystifies systems which center on, and benefit from, perceived technological complexity. It reminds me that at the root we're all living in fractured relationship with each other, which we'll try anything to heal. Some authors from the syllabus:

        Chinua Achebe, Arturo Escobar, Ashis Nandy, Dipesh Chakrabarty, Edward W. Said, Frantz Fanon, Gloria E. Anzaldúa, Jasbir K. Puar, Jodi A. Byrd, Michel-Rolph Trouillot, Ngũgĩ wa Thiong'o, Robin D. G. Kelley, Silvia Federici, Sundhya Pahuja, Leanne Betasamosake Simpson

        • lioeters 2 days ago
          Thanks! I haven't heard of anyone on that list other than Chinua Achebe, author of Things Fall Apart. Oh, and literally just this week I heard about Edward Said and his book Orientalism. Well, I'm going to enjoy studying the works of these writers and thinkers.

          > at the root we're all living in fractured relationship with each other

          Indeed, and technology plays an increasing role in mediating and shaping those social relations. That's very relevant in the context of ChatGPT becoming a kind of oracle and object of worship.

  • tim333 2 days ago
    From the video, it's not so much becoming a religion as telling people what they want to hear on an individual basis, like that they are the new messiah or whatever. I guess it's not much madder than conventional religion.
  • rorylaitila 2 days ago
    I take a lot of the reports with a grain of salt. But also, knowing how easily some people are hypnotized by what they perceive as superior intellects, it's totally conceivable. There is a segment of the population with a strong savior-following instinct.

    Previously, activating this population required a high-IQ/EQ psychopath to collect followers, or schizophrenics who believed they were talking to a superior being ('my leader talks directly to me via his writings').

    Now, however, people can hypnotize themselves into a kind of self-cult. It might be the most effective form of this phenomenon if it's highly attuned to the individual's own idiosyncratic interests.

    In a typical cult, people fall into or out of the cult based on their internal alignment with the leader and failed enlightenment. But if every one of these people can have their own highly tailored cult leader, it might be a very hard spell to break.

  • alganet 2 days ago
    In the wise words of the prophet Stevie Wonder:

        When you believe in things that you don't understand, then you suffer.
    • michaelsbradley 2 days ago
      Care to elaborate?
      • alganet 2 days ago
        I won't elaborate on the Stevie Wonder quote. I think it's perfect the way it is.

        --

        I can, however, elaborate on the subject separately from that quote.

        The video talks about the more extreme cases of AI cultism. This behavior follows the same formula as previous cults (some of which are mentioned).

        In 2018 or so, I noticed the rise of flat earth narratives (bear with me for a while, it will connect back to the subject).

        The scariest thing, though, was _the non-flat-earthers_: people who insisted that the earth was round but couldn't explain why. Some of them tried, but had all sorts of misconceptions about how satellites work, the history of science, and so on. When confronted, very few people _actually_ understood what it takes to prove the earth is round. They were just as clueless as the flat earthers, just with a different opinion.

        I believe something similar is happening with AI. There are extreme cases of cult behavior which are obvious (as obvious as flat earthers), and there are the subtle cases of cluelessness similar to what I experienced with both flat-earthers and "clueless round-earthers" back in 2018. These, especially the clueless supporters, are very dangerous.

        By dangerous, I mean "as dangerous as people who believe the earth is round but can't explain why". I recognize most people don't see this as a problem. What is the issue with people repeating a narrative that is correct? Well, the issue is that they don't understand why the narrative they are parroting is correct.

        Having a large mass of "reasonable but clueless supporters" can quickly derail into a mass of ignorance. Similar things happened when people were swayed to support certain narratives due to political alignment. Flat-earthism and anti-vaccine pseudoscientific nonsense are tightly connected to that. Those people were "reasonable" just a few years prior, then became a problem when certain ideas got into their heads.

        I'm not perfect, and I probably have a lot of biases too. There are probably narratives I support without fully understanding why, without even noticing. But I'm damn focused on understanding them and making that understanding the central point of the issue.

      • butlike 2 days ago
        It's easier to rationalize something deemed 'magic' as a terror-inducing thing than as a boon, since the thing could dominate in totality. Giant clipper ships, napalm, penicillin... the enemy army has ships from god, fire from black magic (the gods). Their priests are able to revive their fallen (penicillin), etc.
        • alganet 2 days ago
          The opposite can also be true.

          When you and your allies have all the tech, but the enemy still finds cheap and easy ways to make it ineffective (the Vietnam War), it makes one question whether all the gizmos are worth it, and really shakes up morale.

          I was not talking about a confrontational situation, though. Most cults and pseudoscience are just plain scams.

  • kylehotchkiss 2 days ago
    I don't think LLMs specifically are becoming a religion, but I think the way some people look at and speak about AGI and its impact on the world has become a new religion. Especially when paired with UBI solving the unemployment problems it could create, which is so far from human nature that I think it is even less likely than AGI.

    I philosophically don't think AGI as described is achievable, because I don't think humans can build a machine more capable than themselves ¯\_(ツ)_/¯ But continuing to insinuate it'll be here in a few months sure helps put some dollars in CEOs' pockets!

    • literalAardvark 2 days ago
      It doesn't need to be more capable than humans. It needs to be roughly as capable, and then it becomes recursively self-improving with very, very high velocity. (A gazillion monkeys with typewriters, if you will.)
      • 1718627440 2 days ago
        Why aren't humans "recursively self-improving with very, very high velocity"?
        • Because we lack the CPU performance (and for 80% of us, the dedication). AI doesn't.