15 points by gangtao 4 days ago | 5 comments
  • tines 6 hours ago
    So you have to be able to identify a priori what is and isn't a hallucination, right?
    • ares623 5 hours ago
      The oracle problem is solved. Just use an actual oracle.
    • happyPersonR 5 hours ago
      I guess the real question is how often you see the same class of hallucination. For something where you're using an LLM agent/workflow and running it repeatedly, I could totally see this being worthwhile.
    • makeavish 6 hours ago
      Yeah, reading the headline got me excited too. I thought they were going to propose some novel solution or use the recent research by OpenAI on reward function optimization.
      • esafak 4 hours ago
        It's rather cheeky to call it "real-time AI hallucination detection" when all they're doing is checking for invalid moves and playing twice. You don't even need real-time processing for this, do you?
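The two checks esafak describes (rejecting illegal moves and running the model twice to compare answers) can be sketched roughly as below; `query_model` and the example move set are hypothetical stand-ins, not anything from the linked project:

```python
# Sketch of the two checks mentioned above, under assumed interfaces:
# 1) a legality check against the game rules, and
# 2) a consistency check across two independent samples.

def is_hallucinated_move(move: str, legal_moves: set[str]) -> bool:
    """A proposed move outside the legal set is a detectable hallucination."""
    return move not in legal_moves

def disagrees_on_resample(query_model, prompt: str) -> bool:
    """Query the model twice; disagreement flags an unreliable answer."""
    return query_model(prompt) != query_model(prompt)

legal = {"e2e4", "d2d4", "g1f3"}
print(is_hallucinated_move("e2e5", legal))  # True: not in the legal set
```

Neither check needs real-time infrastructure, which is esafak's point: both are plain post-hoc validation.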
  • Zeik 2 hours ago
    I didn't quite understand the point of the claims at the end of the page. Surely self-driving cars or health/banking services don't use language models for anything important. Everyone knows those hallucinate. Traditional ML is a much better alternative.
  • uncomputation 4 hours ago
    There's more generalizable recent work on this, for those expecting more: https://github.com/leochlon/hallbayes
  • yunwal an hour ago
    is this satire?
  • curtisszmania 3 hours ago
    [dead]