4 points by anonhaven 4 hours ago | 1 comment
  • j4k0bfr 4 hours ago
    It might be my inner Luddite talking, but LLM use in defense and intelligence terrifies me. What happens when built-in model biases or hallucinations affect human safety? Who is to blame, and how will this be mitigated? Fascinating but scary.