1 point by doom 24 hours ago | 1 comment
  • a7om_com 4 hours ago
    One angle worth considering is cost. If you're using LLMs heavily for language learning, the inference bill adds up fast. Output tokens run about 3.74x the price of input tokens on average across the market right now, and in iterative back-and-forth sessions that gap compounds quickly, since the growing conversation gets re-sent as input on every turn. Prompt caching helps, but only about 1 in 5 models actually offer it.
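    To put rough numbers on that, here's a quick back-of-envelope sketch (the per-token prices and session sizes below are made up; the only figure taken from above is the 3.74x output/input ratio):

        # Hypothetical pricing: $1/M input tokens, 3.74x that for output,
        # and a 90% discount on cached input tokens (all assumed numbers).
        INPUT = 1.00 / 1_000_000
        OUTPUT = 3.74 / 1_000_000
        CACHED = 0.10 / 1_000_000

        def session_cost(turns, prompt=200, reply=400, caching=False):
            """Cost of a chat where the full history is re-sent each turn."""
            total, history = 0.0, 0
            for _ in range(turns):
                history += prompt                 # new user message
                seen = history - prompt           # tokens already sent on earlier turns
                if caching:
                    total += prompt * INPUT + seen * CACHED
                else:
                    total += history * INPUT
                total += reply * OUTPUT           # model reply billed at the output rate
                history += reply                  # reply joins the context for next turn
            return total

        for n in (10, 30, 60):
            print(f"{n:2} turns  no cache ${session_cost(n):.4f}"
                  f"  cached ${session_cost(n, caching=True):.4f}")

    Even at these toy prices, the no-cache total grows roughly quadratically with the number of turns, which is the "compounds quickly" part.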