3 points by jpattanooga 4 hours ago | 1 comment
  • jpattanooga 4 hours ago
    I've been thinking about where LLM inference actually ends up in the enterprise stack — and I keep coming back to this: it probably integrates wherever SQL rows go.

    This post lays out a framework built around three integration patterns (conversational interfaces, workflow automation, decision intelligence) and frames the whole problem as a "game of materialized views." The section on the real-time latency tradeoff is the part I'd push back on most with clients: the cost of materializing a view grows toward infinity as the allowed latency approaches zero.
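
    A back-of-the-envelope way to see that curve (my own toy numbers, not from the post): if a view may be at most "latency" seconds stale, you refresh roughly 3600/latency times per hour, so the refresh bill scales like 1/latency and blows up as the staleness budget goes to zero.

        # Toy model (illustrative only): hourly cost of keeping a materialized
        # view within a given staleness budget, assuming a flat cost per refresh.
        def hourly_refresh_cost(staleness_s: float, cost_per_refresh: float = 0.02) -> float:
            if staleness_s <= 0:
                raise ValueError("zero staleness = continuous recompute; cost is unbounded")
            refreshes_per_hour = 3600 / staleness_s
            return refreshes_per_hour * cost_per_refresh

        for s in (3600, 60, 1, 0.1):
            print(f"staleness {s:>6}s -> ${hourly_refresh_cost(s):,.2f}/hour")

    Even with a trivially cheap refresh, the curve is a hyperbola: every order of magnitude you shave off the staleness budget multiplies the bill by ten.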