3 points by softcane 10 hours ago | 2 comments
  • softcane 10 hours ago
    Building a DCF model isn’t actually the hard part. The painful part is everything around it: finding industry averages, checking current risk-free rates, and digging through earnings transcripts to justify assumptions.

    After repeating that process too many times, I started building a small tool to automate parts of the workflow while keeping the valuation model itself deterministic.

    The core idea is to separate two very different problems:

    1. Deterministic valuation math. DCF calculations should be reproducible and deterministic. The model handles the financial logic and keeps the valuation mechanics consistent.

    2. Qualitative research and narrative building. Things like reading filings, summarizing earnings calls, or challenging assumptions are much more open-ended. That’s where LLMs are useful.
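    To make the first point concrete, here's a minimal sketch of what "deterministic valuation math" means in practice: a pure function of explicit inputs, so the same assumptions always yield the same value. (This is illustrative, not the repo's actual model.)

```python
# Minimal deterministic DCF sketch: discount projected free cash flows
# plus a Gordon-growth terminal value back to present value.
# Pure function of its inputs -- no I/O, no randomness, no LLM calls.

def dcf_value(fcfs, discount_rate, terminal_growth):
    # Present value of the explicit forecast period.
    pv = sum(fcf / (1 + discount_rate) ** t
             for t, fcf in enumerate(fcfs, start=1))
    # Terminal value at the end of the forecast, then discounted back.
    terminal = fcfs[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv += terminal / (1 + discount_rate) ** len(fcfs)
    return pv

print(round(dcf_value([100, 110, 121], 0.10, 0.02), 2))  # → 1431.82
```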

    The tool is local-first and runs entirely on your machine. There are no accounts or hosted services — you just bring your own API keys if you want to use LLM features.

    It also comes pre-seeded with Professor Damodaran’s public datasets (industry margins, risk metrics, failure rates, etc.) so you don’t have to manually assemble those inputs each time you build a model.
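    The seeded datasets can then serve as default starting assumptions. A hypothetical sketch of what that lookup might look like (the field names and values here are invented for illustration, not the repo's actual schema or Damodaran's real figures):

```python
# Hypothetical pre-seeded industry table (placeholder values, invented schema).
INDUSTRY_DATA = {
    "Software (System & Application)": {
        "operating_margin": 0.24,  # placeholder, not real data
        "unlevered_beta": 1.15,    # placeholder, not real data
    },
}

def industry_default(industry, field):
    # Return a seeded industry average to use as a starting assumption.
    return INDUSTRY_DATA[industry][field]

margin = industry_default("Software (System & Application)", "operating_margin")
```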

    One design constraint was making sure the AI never touches the core valuation math. LLMs are allowed to suggest assumptions or critique them, but the underlying calculations remain deterministic.
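    One way to enforce that boundary (an illustrative design sketch, not the repo's code) is to let the LLM emit only a structured assumptions record, which gets validated before it ever reaches the pure valuation functions:

```python
# The LLM can only propose an Assumptions record; the valuation code never
# calls the LLM and is a pure function of this record. Field names invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Assumptions:
    revenue_growth: float
    operating_margin: float
    discount_rate: float

def validate(a: Assumptions) -> Assumptions:
    # Reject obviously nonsensical LLM suggestions before they reach the model.
    if not (0 < a.discount_rate < 1):
        raise ValueError("discount rate must be a fraction between 0 and 1")
    return a

# Parsed from the LLM's structured output, then handed to deterministic code.
suggested = validate(Assumptions(0.08, 0.22, 0.095))
```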

    A useful side effect is that the AI often behaves more like a skeptical analyst than an oracle. For example, it might flag things like:

    “This margin expansion assumption is outside the historical range for this industry.”

    That’s often exactly the kind of pushback you want when building a valuation narrative.
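    The deterministic side can back that critique with a simple guardrail: compare a proposed assumption against the seeded historical range. A sketch, with the bounds invented for illustration:

```python
# Flag an assumption that falls outside its historical industry range.
# The range itself would come from the seeded datasets; values here are made up.

def flag_out_of_range(name, value, low, high):
    # Return a warning string if out of range, else None.
    if not (low <= value <= high):
        return (f"{name}={value:.1%} is outside the historical "
                f"range [{low:.1%}, {high:.1%}] for this industry.")
    return None

warning = flag_out_of_range("operating_margin", 0.35, 0.08, 0.28)
# warning is a non-empty string here, since 35% exceeds the 28% upper bound
```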

    Curious if others building investing tools have experimented with a similar separation between hard financial models and AI-assisted research.

    Repo: https://github.com/stockvaluation-io/stockvaluation_io

  • softcane 10 hours ago
    One thing I noticed while building this: LLMs are terrible at doing valuation math but surprisingly good at critiquing assumptions. When you let them read transcripts and compare assumptions to industry data, they often act like a skeptical analyst pointing out inconsistencies.