2 points by PreciousH 7 hours ago | 1 comment
    I'm a visual learner. Whenever I try to understand something hard (dynamic programming, vector calculus, how attention mechanisms work), reading about it only gets me so far. I need to see it move.

    So I built Prism AI. When you ask it to explain something, it doesn't just return a report. If the topic calls for it, it generates an interactive visualization inline. Ask it to explain dynamic programming and you get a 2D animation with the code on one side and a decision tree on the other, recursively solving subproblems as a highlighter steps through each line. Ask it how a vector field works and it renders an interactive 3D field you can rotate and probe. Ask it how the attention mechanism in a transformer works and it shows you the actual weight matrix lighting up across tokens.

    The research pipeline underneath is a Plan-and-Execute setup: a PlanningAgent breaks your query into a roadmap, then multiple Researcher Agents crawl sources in parallel via asyncio, with a LangGraph state machine handling retries when sources are weak. But the viz generation is honestly the part I care about most, and the part I'm still iterating on hardest.
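    The control flow above can be sketched roughly like this. This is a minimal illustration of the plan → parallel-research → retry loop, not the repo's actual code: the function names (`plan`, `research`, `fetch_sources`) and the retry logic are stand-ins for the PlanningAgent, Researcher Agents, and the LangGraph retry edge.

```python
import asyncio

def plan(query: str) -> list[str]:
    # Stand-in for the PlanningAgent: the real one would call an LLM to
    # produce a roadmap; here we just emit fixed sub-questions.
    return [f"{query}: background", f"{query}: key mechanisms", f"{query}: examples"]

async def fetch_sources(subtask: str) -> str:
    # Placeholder crawl; a real Researcher Agent would hit search/scrape APIs.
    await asyncio.sleep(0)  # yield control, as real I/O would
    return f"notes on {subtask}"

async def research(subtask: str, attempt: int = 1, max_attempts: int = 3) -> str:
    # A weak result (empty text here) triggers a retry, mimicking the
    # LangGraph state machine's retry behavior when sources are weak.
    result = await fetch_sources(subtask)
    if not result and attempt < max_attempts:
        return await research(subtask, attempt + 1, max_attempts)
    return result or f"[no strong sources found for {subtask!r}]"

async def deep_research(query: str) -> list[str]:
    subtasks = plan(query)
    # Researchers fan out concurrently, like the asyncio parallelism above.
    return await asyncio.gather(*(research(t) for t in subtasks))

results = asyncio.run(deep_research("dynamic programming"))
```

    The real pipeline carries shared state between nodes (which is what LangGraph's state machine manages); this sketch only shows the fan-out and retry shape.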

    Open source (MIT): https://github.com/precious112/prism-ai-deep-research

    Feedback I'd value:

    1. What complex topic would you most want explained this way?
    2. Has anyone found a clean way to decide when an agent should generate a visual vs. just write prose? That decision boundary is still the messiest part of my pipeline.