This is gold. Tokens are really expensive. If I already had context every time I opened my laptop, I wouldn't worry about cost at all. This makes it easy to afford.
This looks like a very good discussion of how context helps save token cost for AI models. I like the technical depth in comparing the different techniques covered in the blog.
This is a very clear comparison of file-based context vs. a memory layer. I liked the way it divided the queries into different categories; it makes the metrics easy to understand.