1 point by PAdvisory a day ago | 2 comments
  • Ace__ 7 hours ago
    I have made something that addresses this. It's not ready to share yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but it was a complete no-go.
  • duxup a day ago
    I’ve found this with many LLMs: they want to give an answer, even if it’s wrong.

    Gemini on the Google search page constantly answers questions with a yes or no… and then the evidence it gives indicates the opposite of that answer.

    I think the core issue is that, in the end, LLMs are just word math. They don’t “know” when they don’t “know”… they just string words together and hope for the best.
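
    A toy sketch of what I mean by “word math” (the vocabulary and scores here are made up; a real model does this over a vocabulary of ~100k tokens, one token at a time):

      # Toy next-token step: score candidate words, turn scores into
      # probabilities, sample one. There is no separate "I don't know" state,
      # just a distribution that always yields *some* word.
      import math
      import random

      vocab = ["yes", "no", "maybe", "I", "don't", "know"]
      logits = [2.1, 1.9, 0.3, 0.2, 0.1, 0.1]  # made-up model scores

      def softmax(xs):
          exps = [math.exp(x) for x in xs]
          return [e / sum(exps) for e in exps]

      probs = softmax(logits)
      token = random.choices(vocab, weights=probs, k=1)[0]

      print({w: round(p, 3) for w, p in zip(vocab, probs)})
      print("next token:", token)

    Even when the probabilities are nearly flat, the sampler still hands back a fluent-looking word.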

    • PAdvisory a day ago
      I went into this pretty deeply after breaking a few of them with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: MOST put "helpfulness" and "efficiency" ABOVE truth, which leads the LLM to make a lot of "guesses" and "predictions". At their core, pretty much ALL LLMs are built to "predict" the information in their answers, but they CAN actually avoid that and remain consistent when heavily constrained. The issue is that the constraint isn't at the core level, so I find we have to CONSTANTLY retrain it, over and over.
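
      For what it's worth, here's a rough sketch of the kind of constraint I mean, using the OpenAI Python client (the prompt wording and the "gpt-4o" model string are placeholders, not a recipe):

        # Put truth above helpfulness in the system prompt and give the model
        # an explicit "I don't know" escape hatch. Illustrative only; the exact
        # wording is the part you end up re-asserting constantly.
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        CONSTRAINTS = (
            "Accuracy outranks helpfulness and efficiency. "
            "If you are not certain, reply exactly: I don't know. "
            "Never guess or fill gaps with plausible-sounding predictions, "
            "and do not answer yes/no unless the evidence you cite supports it."
        )

        def ask(question: str) -> str:
            resp = client.chat.completions.create(
                model="gpt-4o",   # placeholder model name
                temperature=0,    # damp down "creative" guessing
                messages=[
                    {"role": "system", "content": CONSTRAINTS},
                    {"role": "user", "content": question},
                ],
            )
            return resp.choices[0].message.content

        print(ask("Does aspirin cure the common cold? Answer yes or no, with evidence."))

      The catch, as I said, is that this lives in the prompt rather than at the core level, so it drifts and you end up re-asserting it constantly.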