TL;DR:
Claude isn't trained on openclaw data because of its knowledge cutoff, but this is the first time it has asked me to review its research myself to verify that it isn't hallucinated or unreliable.
I am not making any claims about Anthropic training their models to perform worse when dealing with information about competitors...
But I am worried about this behaviour of flagging certain sources as unreliable for what seem like arbitrary reasons.
It could also be a case of prompt injection at one of the research URLs.