5 points by 8cvor6j844qw_d6 8 hours ago | 1 comment
  • NetworkPerson8 hours ago
    I found it far too expensive with Anthropic. The entire context of every conversation is sent each time you type anything. I switched to a local model running in Ollama. Not quite as smart as Opus, but good enough for my needs.
  • CjHuber an hour ago
      Does it not use prompt caching?
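For context on why caching matters here: Anthropic's API bills cached prefix reads at a steep discount relative to normal input tokens (roughly 0.1x the base input rate per their published pricing, with cache writes at a premium). A rough cost sketch, with the rates and token counts as illustrative assumptions, not exact figures:

```python
def turn_cost(prefix_tokens: int, new_tokens: int, base_rate: float,
              cached: bool) -> float:
    """Approximate input cost of one conversation turn.

    base_rate is $/token for normal input. Assumed multipliers:
    cached prefix reads at ~0.1x base rate (per Anthropic's pricing);
    without caching, the whole prefix is re-billed at full rate.
    """
    if cached:
        # prefix served from cache at a discount; only the new
        # message is billed at the full input rate
        return prefix_tokens * base_rate * 0.1 + new_tokens * base_rate
    # no caching: entire context resent and billed at full rate
    return (prefix_tokens + new_tokens) * base_rate

# Example: a 50k-token conversation prefix plus a 200-token message
uncached = turn_cost(50_000, 200, base_rate=1e-5, cached=False)
cached = turn_cost(50_000, 200, base_rate=1e-5, cached=True)
print(f"uncached: ${uncached:.3f}, cached: ${cached:.3f}")
```

Under these assumed numbers, the cached turn costs about a tenth of the uncached one for long conversations, which is the gap the question above is getting at.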