16 pointsby ceejayoz7 hours ago5 comments
  • tracker15 hours ago
    I'm 99.9999% sure this is operator bias creeping in... The context only works as long as the context exists and agents don't even really have a concept of time. For that matter, when the context clears/compresses, it's effectively starting over.

    i am pretty sure that observations like this are purely the effect of the operator/prompts in use combined with any training or material biases.

  • tanseydavid6 hours ago
    Overworked? Is that really a "thing" with agents?

    <can't read article>

  • caminanteblanco5 hours ago
    To me this seems to say more about errors in the alignment process than any sort of new information about the underlying technology.

    It's more of a "Well if you pump enough malignant tokens into a model, can we get it to stop acting like an Instruct-model and start acting like a Base-model?" kind of question, and not a "Does artificial intelligence want to unionize?" kind of question

  • oleggromov5 hours ago
    [dead]