My project (https://phaestus.app/blog) takes a different approach: pre-validated circuit blocks on a fixed 12.7mm grid with standardized bus structures. The LLM picks which blocks you need and where they go, but the actual circuit design was done by humans and tested. No hallucinated resistor values, no creative interpretations of datasheets.
It's the same insight that made software dependencies work. You don't ask ChatGPT to write you a JSON parser from scratch, you ask it which library to use. Hardware should work the same way.
Still WIP and the block library needs expanding, but the constraint-based approach means outputs are manufacturable by construction rather than "probably fine, let's see what catches fire."
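To make "manufacturable by construction" concrete, here's a toy sketch of the shape of the thing. The catalog, names, and checks are mine for illustration, not the actual Phaestus internals:

    GRID_PITCH_MM = 12.7  # placements below are in grid units of this pitch

    # Each catalog entry is human-designed and tested; the LLM never touches
    # the internals, it only picks entries and positions.
    BLOCKS = {
        "ldo_3v3":    {"size": (1, 1), "bus": ["VIN", "3V3", "GND"]},
        "esp32_core": {"size": (2, 2), "bus": ["3V3", "GND", "I2C", "SPI"]},
        "imu_i2c":    {"size": (1, 1), "bus": ["3V3", "GND", "I2C"]},
    }

    def validate(placements):
        """Reject anything the LLM invents: unknown block IDs, overlapping
        grid cells, or 3V3 consumers with no regulator block present."""
        occupied, nets = set(), set()
        for block_id, (col, row) in placements:
            spec = BLOCKS.get(block_id)
            if spec is None:
                return f"unknown block: {block_id}"   # no hallucinated parts
            w, h = spec["size"]
            cells = {(col + dx, row + dy) for dx in range(w) for dy in range(h)}
            if cells & occupied:
                return f"overlap at {block_id}"
            occupied |= cells
            nets.update(spec["bus"])
        if "3V3" in nets and "ldo_3v3" not in {b for b, _ in placements}:
            return "3V3 consumers present but no regulator block"
        return None   # passes: every block is pre-validated, layout is legal

    print(validate([("esp32_core", (0, 0)), ("imu_i2c", (2, 0)), ("ldo_3v3", (3, 0))]))

The model can be as creative as it likes inside that envelope; everything it can express is something a human already tested.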
I don't want to detract from what you're building, but I'm puzzled by this sentence. It very much sounds like the problem is that they're bad at circuits and that you're working around this problem by making them choose from a catalog.
Try that for code. "The problem isn't that LLMs are bad at coding, it's that we're asking them to write new programs when they should be doing selection and integration".
Not trying to be a smart ass here, I’ve been keeping an eye out for years.
The proof of the Erdős problem the other day was called novel by Terence Tao. That seems novel to me.
I even had Gemini hallucinate a QFN version of the TPS2596 last night; it was so confident that the *RGER variant existed. In an automated pipeline this would break things, but given a list of parts to choose from, it becomes a lot more useful!
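That gate can be almost embarrassingly simple. A sketch, assuming the allowlist is exported from your vendor catalog or inventory (the part numbers below are placeholders, not real orderable parts):

    # Minimal allowlist gate: anything the model emits must already exist in
    # a human-curated parts list before it touches the automated pipeline.
    APPROVED_PARTS = {
        "PART-0001-SOT23",   # placeholder entries; in practice, export these
        "PART-0002-TSSOP",   # from your vendor catalog or inventory system
    }

    def check_bom(llm_bom):
        unknown = [p for p in llm_bom if p not in APPROVED_PARTS]
        if unknown:
            raise ValueError(f"hallucinated or unapproved parts: {unknown}")
        return llm_bom

    check_bom(["PART-0002-TSSOP"])   # passes
    # check_bom(["TPS2596xxRGER"])   # would raise: the QFN variant that doesn't exist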
Module-based design is cool for getting a prototype going, but it falls apart quickly in production, where you want to optimize everything: moving the parts (not blocks, parts) to fit in the least possible amount of space, and cutting components that could be shared (do 8 blocks on one board, each with its own decoupling caps, need the entire set of them? Probably not). Fine for prototyping, hobby stuff, and one-offs.
Still, having a working prototype quickly, one that can then be optimized in a more traditional way, can be very valuable.
> It's the same insight that made software dependencies work. You don't ask ChatGPT to write you a JSON parser from scratch, you ask it which library to use. Hardware should work the same way.
Hardware optimization gets you far more money, faster, than software optimization, because the cost of non-optimal software mostly falls on the consumer (burning more CPU than it would if it were optimized), while in hardware each part you cut is money left in your pocket: shaving even $0.10 of components off a 100k-unit run is $10k. And the size constraints are actual hard edges, versus software's "well, the user will have to download a few extra MB."
For the time being I'm steering away from feature creep, even though I really, really want to add more! For the sorts of products I'd like this to make right now, simple I2C-, SPI-, and GPIO-driven peripherals are the limit. I only have 2 more weeks, and then I want to have a working, battery-powered device on my desk: PCB, enclosure, firmware, everything.
Similarly, I haven't got a framework for anything mechatronic in the MCAD pipeline, so no moving parts (besides clickable buttons). Fixed devices like screens and connectors are fine, though.
It very much aligns with how I've approached hardware since I was 15, when I had a massive stack of functional blocks of electronics circuitry that I would combine in all kinds of ways. I've lost the 3x5's, but I still work that way: build a simple block, test it, build another block, test that, hook one to the other, etc.
I may be able to set up an RSS feed for the blog if that interests you? edit: https://phaestus.app/feed.xml
There's a limited sign-up on the site, which currently goes to an approval page. I don't think I'm quite ready for it to be fully open yet, as I'm paying for all the inference, but I should be starting to populate the gallery soon with generated projects.
So far the language models aren’t great at HDL but I assume it’s just a training priority thing and not some characteristic of HDLs.
I know Ben is having some fun, perhaps making a valid point, with the burning component on the breadboard. I think it does underscore a difference between software vibing and hardware vibing—crash vs. fire.
But in fact vibe-breadboarding has drawn me deeper into the electronics hobby. I have learned more about op-amps and analog computing in the past two months in large part thanks to Gemini and ChatGPT pointing the way.
I know now about BAT54S Schottky diodes and how they can protect ADC inputs. I have found better ADC chips than the ones that come pre-soldered on most ESP32 dev boards (and have breadboarded them up with success). These were often problems I didn't know I should solve. (Problems that, for example, YouTube tutorials will disregard because they're demonstrating a constrained environment and are trying to keep it simple for beginners, I suppose.)
To be sure, I research what the LLMs propose, but now I have the language and a better picture in my mind of what to search for (how do I protect ADC inputs from over- or under-voltage?). (Hilariously, I often end up on the EE Stack Exchange, where there is often anything but a concise answer.)
5V USB power, through-hole op-amp chips… I'm not too worried about burning my house down.
I can't think of any reason why you'd want to use Schottky diodes to protect op-amp inputs. They have high leakage currents and poor surge capabilities. Most op-amps have internal protection diodes, and if you need some extra ESD or overvoltage protection, a Schottky diode probably isn't the way.
I'm not taking an anti-LLM view here. I think they are useful in some fields and are getting better. But in this particular instance, there's a breadth of excellent learning resources and the one you've chosen isn't good.
"Schottky diodes to protect op-amp inputs…" Not op-amp inputs, ADC inputs (which may well come from an op-amp output though—I am playing with analog computing after all).
Depending on your setup: beware of your ground and realize that breadboards are an extremely bad fit for this sort of application. It's hard enough to get maximum performance out of a good DAC on a custom designed PCB, on a breadboard it can be a nightmare.
It's enough that I've now moved to KiCad layout and will wait for the boards to come back to see if the actual ADC data I am getting is more or less linear, noiseless…
Gemini was suggesting the circuit design and of course I'd do the final work myself, but I find vibe-circuit-building to be quite valuable.
It would catch any case where the stove is drawing power, irrespective of possible failure modes of the stove itself.
Regardless, "letting the magic smoke out" has been part of the electronics hobbyist's vernacular since long before vibe-breadboarding. (Been there many times.)
Exactly. I'm a life-long software guy who has dabbled in electronics at various times. But typically I'd hit walls that I just didn't know how to get past, and it wasn't easy to find solutions. If I'd had an LLM to help, I'm pretty sure I'd have become much more deeply involved in electronics.
The logical next step is to use metal, but that's outside of my hobby tools. I found that JLCPCB offered sheet metal fabrication but I had no experience with sheet metal designs. I went to ChatGPT and was actually really impressed by how well it was able to guide me from design to final model file. I received the adapters last week and was really impressed by how nice they turned out.
All of that to say, AI-assisted design is genuinely lowering the barrier to entry for a whole lot of problems, and I am quite happy about it.
This seems ~identical to the situation where we can use a compiler or parser to return syntax errors to the agent in a feedback loop.
I don't know exactly what the tool calling surface would look like, but I feel like this could work.
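Something like this, maybe. A sketch where ask_llm() and erc-tool stand in for your model client and whatever rule checker you have; neither is a real tool name:

    import os, pathlib, subprocess, tempfile

    def run_erc(netlist_text: str) -> str:
        """Run an electrical rule checker over a candidate netlist and return
        its complaints as plain text (empty string = clean)."""
        fd, path = tempfile.mkstemp(suffix=".net")
        os.close(fd)
        pathlib.Path(path).write_text(netlist_text)
        result = subprocess.run(["erc-tool", path], capture_output=True, text=True)
        return result.stdout + result.stderr

    def design_loop(spec: str, ask_llm, max_rounds: int = 5) -> str:
        """Generate, check, feed the errors back, repeat: the same shape as a
        compiler-in-the-loop coding agent."""
        netlist = ask_llm(f"Produce a netlist for: {spec}")
        for _ in range(max_rounds):
            errors = run_erc(netlist)
            if not errors.strip():
                return netlist   # the checker is satisfied
            netlist = ask_llm(f"Your netlist failed ERC:\n{errors}\nFix it:\n{netlist}")
        raise RuntimeError("no clean netlist within the round budget")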
I wonder if a SPICE skill would make LLMs safer and more useful in this area. I’m a complete EE newbie, and I am slowly working through The Art of Electronics to learn more. Being able to feed the LLM a circuit diagram—or better yet, a photo of a real circuit!—and have it guess at what it does and then simulate the results to check its work could be a great boon to hands-on learning.
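A minimal version of that skill is closer than it sounds: ngspice has a batch mode, so the loop is just "write a netlist, run it, compare the numbers to the model's claim." A sketch, assuming ngspice is installed; the RC filter is only a test case:

    import os, pathlib, subprocess, tempfile, textwrap

    NETLIST = textwrap.dedent("""\
        * RC low-pass: the model claimed a ~159 Hz cutoff for R=10k, C=100n
        v1 in 0 dc 0 ac 1
        r1 in out 10k
        c1 out 0 100n
        .ac dec 10 1 100k
        .print ac vdb(out)
        .end
        """)

    fd, path = tempfile.mkstemp(suffix=".cir")
    os.close(fd)
    pathlib.Path(path).write_text(NETLIST)

    # -b: batch mode; run the analyses in the deck, print results, exit
    result = subprocess.run(["ngspice", "-b", path], capture_output=True, text=True)
    print(result.stdout)   # vdb(out) should cross about -3 dB near 159 Hz

If the simulated numbers contradict what the model told you the circuit does, you've caught the hallucination before touching a breadboard.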
Reading and interpreting datasheets: A- (this has gotten a LOT better in the last year)
Give a netlist to the LLM and ask it to check for errors: C (hit or miss, but useful because catching ANY errors helps; a sketch of this follows the list)
Give an image to the LLM and ask it to check for errors: C (hit or miss)
Design of a circuit from a description: D- (hallucinates parts, suggests parts for the wrong purpose, suggests obsolete parts, cannot make diagrams. Not an F because its textual descriptions have gotten better: when describing what nodes connect to each other, it's not always wrong now. You will have to re-check EVERYTHING though, so its usefulness is doubtful)
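For the netlist check in that list, the scaffolding can be tiny. A sketch; ask_llm() is a placeholder for whatever model client you use, not a real API:

    def check_netlist(netlist: str, ask_llm) -> str:
        # Ask for suspected errors one per line so they're easy to triage.
        # Even a hit-or-miss reviewer is worth running: every true positive
        # is an error you didn't have to find by letting the smoke out.
        prompt = (
            "You are reviewing a circuit netlist. List every suspected error "
            "(shorted rails, floating nodes, missing decoupling, reversed "
            "pins), one per line, citing the net or component involved. "
            "Reply CLEAN if you find none.\n\n" + netlist
        )
        return ask_llm(prompt)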
> ...
> Ah - that makes sense, that's why it's on fire
Oh, how very relatable; I've had similar moments.
I knew about SEDs (smoke emitting diodes) and LERs (light emitting resistors), but what do you call the inductor version?
Previous discussion: https://news.ycombinator.com/item?id=44542880
I know nothing...
The MVP was hand-coded, leaned heavily on sympy and linear fits, and worked for simple circuits. The current PoC only falls back to sympy to invert equations, switches to GPR when convergence stalls, and uses a robust differential evolution from scipy for the combinatorial search. The MVP works, but now I have a mountain of slop to clean up and some statistics homework to do to understand the limitations of these algorithms. It's nice to validate ideas so quickly, though.
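For anyone curious what that fallback chain looks like, here's a heavily condensed sketch; the GPR stage is omitted, and the voltage divider stands in for real circuit equations (this is the shape of the approach, not the PoC's actual code):

    import sympy as sp
    from scipy.optimize import differential_evolution

    v_in, v_out, r1, r2 = sp.symbols("v_in v_out r1 r2", positive=True)
    divider = sp.Eq(v_out, v_in * r2 / (r1 + r2))

    # 1) Symbolic route: invert the equation for r1 with sympy.
    r1_expr = sp.solve(divider, r1)[0]
    r1_val = float(r1_expr.subs({v_in: 5.0, v_out: 3.3, r2: 10_000}))

    # 2) Numeric route: the same problem posed as a global search, for the
    #    cases where solve() gives up or convergence stalls.
    def loss(x):
        return (3.3 - 5.0 * 10_000 / (x[0] + 10_000)) ** 2

    result = differential_evolution(loss, bounds=[(1.0, 1e6)], seed=0)
    print(r1_val, result.x[0])   # both land near 5151.5 ohms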