1 point by tilt 12 days ago | 1 comment
  • storystarling 11 days ago
    I'm curious what the actual inference unit economics look like compared to standard autoregressive models. Parallel decoding helps with latency, but does the total compute cost per token make it viable for production workloads yet?
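    For rough intuition on why the answer isn't obvious, here is a back-of-envelope sketch (all numbers hypothetical, ignoring KV caching and memory-bandwidth effects): an autoregressive model pays one forward pass per generated token, while a parallel/diffusion-style decoder pays S denoising passes that each process the full sequence of T tokens. The ratio of total token-passes is what the unit economics hinge on.

    ```python
    # Back-of-envelope compute comparison (illustrative assumptions only).
    # Costs are counted in "token-passes": tokens processed per forward pass,
    # summed over passes. Real FLOP counts depend on architecture and caching.

    def ar_token_passes(T):
        # T sequential passes; pass i attends over ~i tokens
        # (no KV caching assumed, so ~T^2/2 total)
        return sum(range(1, T + 1))

    def parallel_token_passes(T, S):
        # S denoising passes, each processing all T tokens in parallel
        return S * T

    T = 1024  # generated tokens (assumed)
    for S in (16, 64, 256):  # hypothetical denoising step counts
        ratio = parallel_token_passes(T, S) / ar_token_passes(T)
        print(f"S={S}: parallel/AR compute ratio ≈ {ratio:.3f}")
    ```

    Under these toy assumptions the parallel decoder wins on raw compute whenever S is well below T/2, but KV caching makes real autoregressive decoding far cheaper per token than this sketch suggests, so measured production numbers could easily flip the comparison.
    
    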