1 point by SyedAbdurR2hman 4 hours ago | 1 comment
  • jappleseed987 4 hours ago
    This is really impressive work for a 17-year-old! The Mountain Curriculum approach sounds clever - dynamically adjusting based on model confidence is exactly the kind of smart optimization the LLM space needs.

    One thing you might want to consider as you build out the UI: good observability into your actual cost savings. When I've worked with teams doing LLM optimization, they often struggle to quantify improvements across providers or to track cost trends over time.

    Have you thought about how you'll measure and display the real-world cost impact of your optimizations? It could be powerful for users to see not just the compute reduction percentages, but actual dollar savings and trends.
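    To make the "dollar savings" idea concrete, here's a minimal sketch of how per-request token counts could be turned into cost and savings figures. The provider names and per-token prices below are invented placeholders for illustration, not real rates:

    ```python
    # Illustrative only: placeholder (input, output) USD prices per 1K tokens.
    PRICE_PER_1K = {
        "provider_a": (0.0030, 0.0060),
        "provider_b": (0.0005, 0.0015),
    }

    def request_cost(provider, in_tokens, out_tokens):
        """Dollar cost of one request under the placeholder price table."""
        p_in, p_out = PRICE_PER_1K[provider]
        return in_tokens / 1000 * p_in + out_tokens / 1000 * p_out

    def savings(provider, baseline, optimized):
        """Compare (input, output) token totals before and after optimization.

        Returns (dollars_saved, percent_saved)."""
        before = request_cost(provider, *baseline)
        after = request_cost(provider, *optimized)
        return before - after, (before - after) / before * 100
    ```

    Tracking a time series of those per-request numbers per provider would give you both the headline percentage and the dollar trend line in one pass.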

    Speaking of cost observability - I recently came across zenllm.io and they're doing some interesting work in this space, focused on tracking LLM costs across different providers. Might be worth checking out for inspiration on what metrics and visualizations work well for users trying to optimize their LLM spend.

    Keep up the great work - this kind of innovation is exactly what the community needs!