3 points by thm, 3 hours ago | 1 comment
  • ksaj, 3 hours ago
    They definitely can. Journalists spend hours talking to their fave AI chatbots about things like spirituality, then act as if the systems went sentient when the bots start regurgitating the weirder bits back at them.

    As an experiment, I described dual reality to ChatGPT. (It's a magic/mentalism technique that looks like hypnotism or mind control to the audience but is something much more benign to the participant: their apparent astonishment is over something different from what the audience assumes.) I then asked how I could fool it using the same technique. The reply was basically that it would go along with just about any magical claim I made that didn't contravene its rules. So if the topic comes up again as a slightly different question, the response will certainly look to an outsider like I've hypnotized ChatGPT.

    Oddly enough, when I actually tried it, it became instantly obvious why these reporters get the results they report on. Try it!