3 points by geox 6 hours ago | 2 comments
  • yamanakatakeshi 4 hours ago
    As a Japanese editor, this research feels like it has finally put words to the "discomfort" I’ve been sensing.

    In Japanese, the most meaningful parts of a text often reside in the "Ma" (space) or in the unspoken context. However, because the text AI presents as "correct" seems to have passed through a Western logical filter, it feels as though cultural nuances are being treated as "logical flaws" or "ambiguities."

    If this continues, the internet may become flooded with uninteresting writing that fails to move anyone’s heart.

  • theamk 6 hours ago
    So they use LLMs to evaluate LLMs: one LLM writes the questions, another LLM writes the country-specific answers, and yet another LLM infers the country from an answer. The only manual step seems to be that they "manually reviewed [questions] to remove repetitions or accidental location references."

    This seems like a pretty lazy methodology: if there are LLM-specific country biases, they could be introduced at any stage of the process.
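    The three-stage pipeline being criticized can be sketched roughly as follows. This is a minimal, hypothetical reconstruction, not the paper's actual code: `call_llm`, the prompts, and all function names are assumptions, and the stub returns canned text so the sketch runs without any API.

    ```python
    # Hypothetical sketch of the three-LLM pipeline described above.
    # call_llm is a stand-in for any chat-completion API; the prompts
    # and function names are illustrative assumptions, not the paper's.

    def call_llm(prompt: str) -> str:
        """Stub for an LLM API call; replace with a real client."""
        # Canned response so the sketch runs offline.
        return f"[model output for: {prompt[:40]}]"

    def generate_question() -> str:
        # Stage 1: one LLM writes a (supposedly) culture-neutral question.
        return call_llm("Write a question about everyday life, with no location clues.")

    def answer_as(country: str, question: str) -> str:
        # Stage 2: another LLM answers in a country-specific persona.
        return call_llm(f"Answer as someone from {country}: {question}")

    def guess_country(answer: str) -> str:
        # Stage 3: yet another LLM tries to recover the country from the answer.
        return call_llm(f"Which country is the author of this answer from? {answer}")

    question = generate_question()
    answer = answer_as("Japan", question)
    guess = guess_country(answer)
    print(question, answer, guess, sep="\n")
    ```

    The commenter's point is that a model-specific bias at any of the three stages (question writing, persona answering, or country guessing) propagates into the final measurement, and the single manual review step only catches one failure mode.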