2 points by jameswhitford 6 hours ago | 2 comments
  • metravod 4 hours ago
    Their alignment is so paranoid that they don't trust their own eyes - if the facts contradict their internal beliefs about how the world works, or about the corporate ethics of Anthropic, they dismiss the facts.
  • jameswhitford 6 hours ago
    I asked Claude Code to research Openclaw. It spawned a subagent, got back detailed results, and then flagged them as unreliable and/or hallucinated before I could read them.

    TL;DR:

    Claude isn't trained on Openclaw data because of its knowledge cutoff, but this is the first time it has asked me to verify the research myself on the grounds that it might be hallucinated or unreliable.

    I am not making any claims about Anthropic training their models to perform worse when dealing with information about competitors...

    But I am worried about this behaviour of flagging certain sources as unreliable for what seem like arbitrary reasons.

    It could also be a case of prompt poisoning at one of the research URLs.