2 points by ForestHubAI 4 hours ago | 3 comments
  • kone96 3 hours ago
    Impressive pipeline description — but I'm curious about the boundary between "computed" and "LLM-generated." You mention the schematic generation is fully deterministic and the LLM only handles intent parsing. How exactly does that handoff work? Does the constraint solver operate purely on structured intermediate representation, or does the LLM ever influence component selection or topology decisions downstream? Asking because "not an AI wrapper" is a strong claim, and I'd love to understand the architecture well enough to verify it.
    • ForestHubAI 3 hours ago
      Great question — this is the right thing to probe. Let me walk through the actual architecture.

      TL;DR: The LLM is the front door (intent parsing) and an optional QA layer (verify/modify). Everything in between — component selection, topology, constraint solving, value computation, KiCad export — is deterministic code operating on a typed IR.

      The pipeline has 9 stages (B1–B9):

        B1 Intent Parsing → B2 Normalization → B3 Component Selection →
        B4 Topology Synthesis → B5 HIR Composition → B6 Constraint Refinement →
        B7 BOM Generation → B8 KiCad Export + ERC → B9 Confidence Scoring

      Where the LLM lives: Only B1 (Intent Parsing). It turns your natural language prompt into a structured intent — essentially "which MCU, which peripherals, which interfaces." That's it. From B2 onward, the LLM is not in the loop.
      
      The handoff is the HIR (Hardware Intermediate Representation) — a typed Pydantic v2 schema that acts as the contract between stages. Every stage reads HIR, transforms it, writes it back. Components, connections, voltages, constraints, provenance — all structured, all typed. The constraint solver (B6) operates purely on this IR. It doesn't call an LLM, it doesn't take text input. It runs 11 deterministic checks: voltage compatibility, I2C address conflicts, power budget, pull-up value computation, decoupling capacitor sizing, etc.
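      To make the "typed IR + deterministic check" claim concrete, here's a minimal sketch of what that shape can look like. Stdlib dataclasses stand in for the Pydantic v2 models, and every class and field name here is invented for illustration; this is not boardsmith's actual HIR schema.

```python
from dataclasses import dataclass

# Hypothetical, simplified IR -- illustrative only, not the real HIR.
@dataclass
class Component:
    ref: str        # e.g. "U1"
    part: str
    vcc_min: float  # datasheet supply range, volts
    vcc_max: float

@dataclass
class Net:
    name: str
    voltage: float

@dataclass
class HIR:
    components: list
    nets: list
    rails: dict     # component ref -> supply net name

def check_voltage_compatibility(hir):
    """One deterministic check: every part's supply rail must sit inside
    its datasheet-rated range. Pure data in, pure data out -- no LLM."""
    net_v = {n.name: n.voltage for n in hir.nets}
    return [
        f"{c.ref}: {net_v[hir.rails[c.ref]]} V outside [{c.vcc_min}, {c.vcc_max}] V"
        for c in hir.components
        if not (c.vcc_min <= net_v[hir.rails[c.ref]] <= c.vcc_max)
    ]

hir = HIR(
    components=[Component("U1", "ESP32-WROOM-32", 3.0, 3.6),
                Component("U2", "BME280", 1.71, 3.6)],
    nets=[Net("3V3", 3.3)],
    rails={"U1": "3V3", "U2": "3V3"},
)
print(check_voltage_compatibility(hir))  # [] -- both parts are fine on 3.3 V
```

      The point of the sketch: once everything is structured and typed, the check is a plain lookup and comparison, so the same IR always yields the same verdict.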
      
      Component selection (B3) and topology (B4) are also deterministic. They query a SQLite knowledge base of 212 verified components with FTS5 search and range queries. Pull-up values, crystal load caps, level shifters — all computed from datasheet specs stored in the DB, not generated by an LLM.
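      The "FTS5 search plus range query" pattern can be sketched in a few lines of stdlib sqlite3. The table layout, column names, and sample rows below are invented for illustration; they are not the real knowledge-base schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parts(mpn TEXT, vcc_min REAL, vcc_max REAL, descr TEXT);
    CREATE VIRTUAL TABLE parts_fts USING fts5(mpn, descr);
    INSERT INTO parts VALUES
        ('BME280', 1.71, 3.6, 'humidity pressure temperature sensor I2C SPI'),
        ('SSD1306', 1.65, 3.3, 'OLED display driver I2C');
    INSERT INTO parts_fts(rowid, mpn, descr)
        SELECT rowid, mpn, descr FROM parts;
""")

# Full-text match, narrowed by a range predicate on electrical specs:
rows = conn.execute("""
    SELECT parts.mpn
    FROM parts_fts JOIN parts ON parts.rowid = parts_fts.rowid
    WHERE parts_fts MATCH 'temperature sensor'
      AND parts.vcc_min <= 3.3 AND 3.3 <= parts.vcc_max
""").fetchall()
print(rows)  # [('BME280',)]
```

      A query like this is fully reproducible: same database, same prompt-derived terms, same candidate list every run.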
      
      The easiest way to verify this yourself: pip install boardsmith (no [llm] extra) and run:

        boardsmith build -p "ESP32 with BME280 sensor" --no-llm
      
      This runs the full pipeline — schematic, BOM, firmware — with zero network calls, zero API keys, zero LLM involvement. Same input → same output, every time. The --no-llm mode isn't a degraded fallback; it's the proof that the synthesis engine is self-contained.
      
      Now, to be fully transparent: v0.2 does introduce an agentic layer on top — boardsmith modify (brownfield patching) and boardsmith verify (semantic verification) use LLM reasoning in a tool-use loop. But these are separate from the core synthesis pipeline. They're optional, and they operate on finished schematics, not within the generation path.
  • ForestHubAI 4 hours ago
    Hey HN,

    I've been designing embedded hardware for years, and I kept running into the same problem: I'd wire up the same ESP32 + BME280 + OLED circuit for the fifth time, re-derive the same pull-up resistor values, forget decoupling caps, and spend 30 minutes on something that should take 3. So I built boardsmith.

    *What it does:* You give it a text prompt like `"ESP32 with BME280 temperature sensor and SSD1306 OLED display"`, and it generates a complete KiCad 8 schematic (.kicad_sch), a BOM with JLCPCB part numbers, and working Arduino-compatible firmware. Not a template — an actual computed design with correct pull-up resistors, decoupling caps, I2C address assignments, and proper power distribution.
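    For a sense of what "computed" means here, I2C pull-up bounds follow directly from the bus specification (NXP UM10204): the rise-time limit caps the resistance and the sink-current limit floors it. A hedged sketch with the standard formulas; the default parameter values are illustrative examples, not boardsmith's output.

```python
def i2c_pullup_bounds(vdd=3.3, bus_cap_pf=100.0, t_rise_ns=300.0,
                      vol=0.4, iol=3e-3):
    """Pull-up bounds per the I2C spec (UM10204).
    Defaults: 3.3 V rail, 100 pF bus, fast-mode 300 ns rise time."""
    # Rise-time limit: R_max = t_r / (0.8473 * C_bus)
    r_max = (t_rise_ns * 1e-9) / (0.8473 * bus_cap_pf * 1e-12)
    # Sink-current limit: R_min = (VDD - VOL) / IOL
    r_min = (vdd - vol) / iol
    return r_min, r_max

r_min, r_max = i2c_pullup_bounds()
print(round(r_min), round(r_max))  # 967 3541 (ohms)
```

    Any value between the two bounds works; picking, say, a standard E24 value inside that window is a table lookup, not a judgment call.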

    The pipeline has 9 stages: intent parsing, normalization, component selection, topology synthesis, HIR composition, constraint refinement, BOM building, KiCad export, and confidence scoring. There are 11 constraint checks (ERC compliance, voltage/current budgets, pin assignment validation, I2C address conflicts, decoupling requirements, etc.). The output passes KiCad's own ERC.
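    One of those checks, I2C address conflict detection, is pure bookkeeping once the design is structured data. A minimal sketch; the function name is invented, and the addresses are just common datasheet defaults (0x76 for BME280, 0x3C for SSD1306).

```python
from collections import defaultdict

def i2c_address_conflicts(devices):
    """devices: list of (refdes, 7-bit address) on one bus.
    Returns {address: [refdes, ...]} for every address claimed twice."""
    by_addr = defaultdict(list)
    for ref, addr in devices:
        by_addr[addr].append(ref)
    return {hex(a): refs for a, refs in by_addr.items() if len(refs) > 1}

bus = [("U2", 0x76), ("U3", 0x3C), ("U4", 0x76)]  # two parts at 0x76
print(i2c_address_conflicts(bus))  # {'0x76': ['U2', 'U4']}
```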

    boardsmith also includes an agentic EDA layer: the ERCAgent automatically repairs ERC violations after schematic generation (bounded to 5 iterations with stall detection). `boardsmith modify` lets you patch existing schematics ("add battery management with TP4056") without touching the synthesis pipeline. And `boardsmith verify` runs 6 semantic verification tools against the design intent.
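    The bounded-repair idea is easy to picture. Here's a hedged sketch of a repair loop with an iteration budget and stall detection, in the spirit of the ERCAgent described above; the function names and the toy repair pass are invented for illustration, not the real implementation.

```python
def repair_loop(violations, repair_pass, max_iters=5):
    """Apply repair passes until the design is clean, the iteration
    budget runs out, or a pass stops making progress (stall)."""
    for i in range(max_iters):
        if not violations:
            return violations, i            # clean: report passes used
        fixed = repair_pass(violations)
        if len(fixed) >= len(violations):   # stall: no net progress
            return fixed, i + 1
        violations = fixed
    return violations, max_iters            # budget exhausted

# Toy repair pass that fixes exactly one violation per call:
drop_one = lambda v: v[1:]
print(repair_loop(["unconnected pin", "missing decap"], drop_one))
# ([], 2)
```

    Bounding the loop and checking for stalls keeps an automated repair agent from thrashing forever on a violation it can't actually fix.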

    *The key thing:* `boardsmith build -p "your prompt" --no-llm` works fully offline. No API key, no network access, no cloud calls. It's deterministic — same prompt, same output, every time. The LLM mode is optional and just improves intent parsing for ambiguous prompts. The actual synthesis, constraint solving, and schematic generation are all computed, not generated by a language model.

    *What it's good at:* ESP32 and RP2040 projects with sensors, displays, and actuators. I2C/SPI/UART topologies. Clean ERC. JLCPCB-ready Gerber output. 212 verified components with full electrical specs (not scraped datasheets — manually entered and cross-checked). 191 LCSC part mappings for direct JLCPCB SMT assembly.

    *What it's not good at:* High-speed digital design (no impedance-controlled routing, no length matching). Analog circuit design (no op-amp topologies, no filter synthesis). STM32 support is in beta and has rough edges. No multi-board designs.

    We're [ForestHub.ai](http://ForestHub.ai), a 4-person seed-funded team building tools for hardware engineers.

    The CLI is AGPL-3.0 (commercial license available for companies that need it). Source is on GitHub.

    Happy to answer questions about the architecture, the constraint solver, why we went AGPL, the ERCAgent repair loop, or how the HIR (Hardware Intermediate Representation) works as the contract between our synthesis and firmware tracks.

  • Marcus_FH 4 hours ago
    btw, it's free to use!
    • ForestHubAI 4 hours ago
      exactly, a key feature I did not mention! lol