It’s amazing that this worked at all, but to be clear, this layout is actually very bad. Just look at that minimum-width trace used to carry power across the entire board and into the ESP32. Using min-width traces, wrapping them around the board, and running them at minimum clearance to components is a classic mistake of people (or LLMs?) who have zero understanding of PCB layout techniques beyond “draw lines until everything is connected.”
It would be interesting to see if you could feed the file into an LLM and get it to produce the feedback.
Also, it certainly wasn't the LLM; atopile doesn't allow you to specify routing as far as I'm aware, their docs seem to tell you to route in KiCad.
As said noob, do you have any resources for basic PCB design/routing? Along the lines of a simple list of things to look out for?
I've only ever done one, and for routing I basically did the "make two ground pours, then keep clicking until everything is connected" process that others have described in this thread. Probably about the same as I'd imagine an autorouter would have done. And it seems like it worked fine in the end. But I'm wondering what obvious things I probably missed, and what the consequences are of missing them? PCB layout articles online seem to quickly get into topics like differential pair length matching, high-frequency / RF circuits, optimizing current return paths, controlled impedance, and so on... none of which I imagine will ever be relevant to me as a hobbyist.
Basics: learn to use your EDA software, properly configure it with your board house's capabilities, get correct footprints, read and re-read and re-re-read the datasheets for everything you use. Study other similar designs and try to understand everything they're doing and _why_.
- Place mounting holes and critical components first. Tiny boards and tiny components look bigger on-screen, zoom out to 1:1 real life scale as a sanity check!
- Use as many of the largest decoupling caps you can get. You don’t need multiple caps in different sizes; that advice comes from the old days of leaded caps, when parasitics were bad.
- For power: use planes when possible; use a trace width calculator (a rough sketch of the usual formula follows this list); always have a ground plane.
- Generally speaking, use the widest traces you can.
- There is a huge asterisk on this one, but most traces should be made as short as possible. Decoupling caps should be super close to where they're needed. This is one of the more common noob mistakes, but it can also lead you astray (making overly complex or compact PCBs on the first try.)
- Do not put capacitors or inductors close to the edges of a board, they will fail because of flexing!
- Check clearance between parts for pick and place and hand-soldered parts
- Always run DRC checks (there are also secondary DRC check tool websites/downloads aside from the one in your EDA software)
- Before sending it off, manually check for obvious common blunders (forgot the ground plane, no copper pour on ground plane, dead short, forgot to drill holes, wrong units, used the wrong footprints) - manually measure a few things on your design including footprints and pad sizes and cross reference this with an independent source. Check your files in different gerber viewers and hand-trace through the copper path from one component to the next. Visually preview the PCB and ensure you're not missing any copper anywhere.
- Don’t make things as small as possible right away! Make it big, add test points and connectors, break out sketchy features onto daughterboards, etc., then shrink it once it works.
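On the trace width calculator point above, here's a rough sketch based on the IPC-2221 external-layer formula (I = k * dT^0.44 * A^0.725, with k = 0.048 for outer layers); the 10 °C rise and 1 oz copper defaults are assumptions, and your board house's own calculator should be the final word:

  # Rough IPC-2221 minimum trace width estimate for external layers.
  def min_trace_width_mm(current_a, temp_rise_c=10.0, copper_oz=1.0):
      k = 0.048                                    # IPC-2221 outer-layer constant
      area_mil2 = (current_a / (k * temp_rise_c ** 0.44)) ** (1 / 0.725)
      thickness_mil = 1.378 * copper_oz            # 1 oz/ft^2 copper ~= 1.378 mil
      return (area_mil2 / thickness_mil) * 0.0254  # mil -> mm

  # e.g. 500 mA of supply current on 1 oz copper with a 10 C rise:
  print(round(min_trace_width_mm(0.5), 3), "mm")   # ~0.12 mm; add margin and go wider where you can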
Beyond the basics:
- Understand your components. There are countless types of resistors and capacitors, to say nothing of the other component types. Getting more advanced, try to understand the various types, their lifespans, failure modes, heat tolerance. Pay attention to physical component sizes, if some capacitors of type X and rating Y are one volume and the others are half the volume by being half the height... why?
- Understand heat. For the most basic calculations: "With only natural convection (i.e. no airflow), and no heat sink, a typical two sided PCB with solid copper fills on both sides, needs at least 15.29 cm2/2.37 in2 of area to dissipate 1 watt of power for a 40°C rise in temperature. Adding airflow can typically reduce this size requirement by up to half. To reduce board area further a heat sink will be required." - from Thermal Design By Insight, Not Hindsight by Marc Davis-Marsh. (A quick sketch of this area calculation follows this list.)
- Get a better understanding of electricity and RF in general. This really pays dividends in terms of understanding why the "rules" are what they are.
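A quick sketch of that thermal rule of thumb in code; the only numbers in it are the ones from the quote above (15.29 cm²/W for a 40 °C rise, airflow cutting the requirement by up to half), and the regulator example is just illustrative arithmetic:

  # Board area needed to dissipate power, per the quoted rule of thumb:
  # ~15.29 cm^2 per watt for a 40 C rise with natural convection on a
  # two-sided board with solid copper fills; airflow helps by up to half.
  CM2_PER_WATT_40C_RISE = 15.29

  def board_area_cm2(power_w, forced_air=False):
      area = power_w * CM2_PER_WATT_40C_RISE
      return area / 2 if forced_air else area

  # e.g. a linear regulator dropping 5 V to 3.3 V at 0.4 A dissipates ~0.68 W:
  p = (5.0 - 3.3) * 0.4
  print(round(board_area_cm2(p), 1), "cm^2 with no airflow")  # ~10.4 cm^2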
For some interesting stuff beyond the basics, or to get yourself thinking, these links are great:
https://resources.altium.com/p/2-the-extreme-importance-of-p... by Rick Hartley
https://codeinsecurity.wordpress.com/2025/01/25/proper-decou... by Graham Sutherland
The "PCB Review" threads on r/PrintedCircuitBoard are great places to learn as well.
Beyond that... well, it's like any skill, learning the theory and best practices is great but the way to really improve is to get out there and look at (and design) tons of PCBs.
https://www.youtube.com/watch?v=ySuUZEjARPY ("How to achieve proper grounding")
https://www.youtube.com/watch?v=ZYUYOXmo9UU ("Keys to control noise, interference, and EMI in PCBs")
https://www.youtube.com/watch?v=QG0Apol-oj0 ("What your differential pairs wish you knew")
https://www.youtube.com/watch?v=0RyBCnowLsI ("Secrets of PCB optimization")
These are all by Rick Hartley (from GP's comment) and they are all worth watching in this order.
- impressive that this worked so well with LLM-generated atopile, given that atopile is about a year old!
- the hardest part of a PCB is still the routing and nonstandard parts of the design; what this did is basically "find a reference design, pick components that match the reference design, and put them on the correct nets" which is the easiest part of the process for people designing PCBs today
- much like with code, 99% of PCBs designed are fairly basic boards implementing the reference design with some small tweaks, and then there is a tiny amount of envelope-pushing designs/crazy complex stuff. Obviously you can't design some fancy PCB with complex RF with this, but give it some time and I'd bet you can probably make a lot of the basic stuff...
This may be true of hobbyists, but is very much not true in industry.
Maybe all those millions of IoT devices skew the count of PCBs made toward ones that look a bit like an ESP32 plus a button and an LED, but most unique designs probably aren't that. The endless march of new super-dense PC/mobile motherboards and all the weird, hyper-specific industrial electronics takes far more effort, say.
And no one is sweating the few days it takes to design and lay out a simple ESP32 + button board to meet its constraints (primarily size, shape, and power usage). That's not the hard part of slinging mass-market IoT trinkets; the software side is.
Signal integrity analysis for integrating DRAM is a difficult skillset and requires trial and error, and even if you manage all that design complexity somehow, manufacturing is just as difficult. If you show up with an 8-layer PCB littered with BGA components, most manufacturers won't talk to you if you don't have volume or aren't willing to pay astronomical prices.
And let's not get into wireless - antenna design and interference testing is its own dark art, and even if you manage to make something that works, you have to certify it in every single country/economic block you want to sell to.
And all this to supplant a ready-made Pi Zero or ESP board that costs basically nothing.
Clearly much bigger shops than ours have realized this - there are many high-volume commercial products that use a Raspberry Pi as the brains, or smart speakers that use an ESP32.
My point is, laying out very simple hobbyist-level designs is not 99% of PCB design effort in the world (the original claim).
It's not "envelope-pushing crazy designs" (per the original) to have 500 or 1000 or more components on your board even if you do use a module to offload the CPU and radio (or the field bus in industrial contexts). That's just normal levels of complexity.
And even if you do make a boutique smart speaker with a module and can afford it because it's a high-end device with margins, once you get to real volume, saving a dollar a unit by integrating might be worth it: you can still see change out of the savings after paying a design consultancy to redo your board if you're shipping millions of units.
It's not just IoT. Look around you in your daily life. Your average person has a few super-dense, complex PCBs around them (laptop, phone, maybe a TV, etc.), but they have at the absolute minimum a few hundred simple PCBs. Your average household doodad these days requires multiple PCBs (power, control, and communications are typically separate). How many things do you own with LEDs in them (including your lightbulbs)? How many modern cables do you own - because they all have PCBs in them now! I recently tore down a popular non-smart home appliance that's basically a fan, and it has five PCBs: one power, one control, one for the buttons on the top, one for the buttons on the side, and one for a sensor. None of those were off-the-shelf, and this is not unusual. If you really want to lose your mind, look at cars or children's toys; you'll go insane. Your average American with a car and an apartment, or a modest selection of modern children's toys, will own well into the four figures of simple PCBs. I think it's quite rare for your average person to own more than a handful of phones and laptops.
Beyond consumer products, the overwhelming majority of PCBs are still very simple. You say "all the weird hyper-specific industrial electronics", yet go into a sheet metal shop or a train factory or a refinery and count the ratio of simple PCBs to complex ones. Every sensor, every lightbulb, every cable, every scanner, every connected device... look at how simple process control is for most processes, or how simple most PCBs in most motorized devices are. The employees' smartphones will be the majority of the complex PCBs on the floor.
Now that I think of it - I don't think it's possible to buy or use a complex PCB without multiple accompanying simple PCBs. They virtually never take 110 V directly; the power typically goes 110 V -> USB-C -> 5 V DC, so that's two simple PCBs for power conversion plus two USB-C PCBs. You might use the device with a display (countless simple PCBs, its own power, etc.) or a mouse (yet more...) or some earbuds.
ETA: Other commenters suspect a traditional autorouter based on the poor layout quality. I agree that's also possible, and nothing in the video excludes that. It definitely wasn't the LLM, though.
I assumed the author was more experienced, I suppose this is more of an entry level hobbyist blog. There are some very fundamental problems with routing PCBs like this that are covered in introductory materials.
Good question. KiCAD once had a router, built in, or sort of built in, but it was taken out for licensing reasons. So who's doing that?
Is it really so implausible that these constraints could be built into the process/algorithm/agentic workflow?
Edit: Also, one could just look to the world of decision tree and route-finding algorithms that could probably do this task better than a language model.
It's like how pairing a coding agent that can run unit tests and iterate is way more powerful than code gen alone.
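As a hedged sketch of what that could look like for PCBs, here's the shape of such a loop; run_drc and ask_llm_to_fix are hypothetical placeholders for a real DRC tool and a real model call, not any existing API:

  # Hypothetical generate -> check -> iterate loop, analogous to a coding
  # agent that runs unit tests. run_drc() and ask_llm_to_fix() are stand-ins.
  def generate_board(spec, run_drc, ask_llm_to_fix, max_iters=10):
      design = ask_llm_to_fix(spec, previous=None, errors=[])
      for _ in range(max_iters):
          errors = run_drc(design)   # clearance, trace width, unconnected nets...
          if not errors:
              return design          # all checks pass
          design = ask_llm_to_fix(spec, previous=design, errors=errors)
      raise RuntimeError("DRC still failing after max iterations")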
I did something like that already, and it works OK.
I'm just an electronics amateur, and I've incorrectly wired up a MOSFET or used a P-type part the wrong way before.
I basically send a screenshot of the schematic and ask for feedback or suggestions. It actually provides nice feedback, and for someone not very experienced it's very reassuring to get a green light from an LLM.
The author's prompt is basically already a meticulous specification of the PCB, even proactively telling the LLM to avoid certain pitfalls ("GPIO19 and GPIO20 on the ESP32-S3 module are USB D- and D+ respectively. Make sure these nets are labeled correctly so that differential routing works"). If you had no prior experience building that exact thing, writing that spec would be 95% of the work.
Anyway, I don't think the experiment is wrong, but it's also not exactly vibe-PCBing!
Nowadays most mainstream LLMs support pre-bundled prompts. GitHub Copilot even made it a major feature and tools like Visual Studio Code have integrated support for prompt files.
https://docs.github.com/en/github-models/use-github-models/s...
Also, LLMs can generate prompt files too. I recommend you set aside 10 minutes of your time to vibe-code a prompt file for PCB generation, and then try to recreate the same project as OP. You'd be surprised.
> Anyway, I don't think the experiment is wrong, but it's also not exactly vibe-PCBing!
I don't agree. Vibecoding doesn't exactly mean naive approaches to implementations. It just means you enter higher level inputs to generate whatever you're creating.
Sure, but the utility of that for PCB design wasn't demonstrated in the article. This is an expert going out of his way to give the LLM a task it can't fumble (and still does, a bit).
Forget about the article. Try it yourself. Set aside 5 or 10 minutes to ask any LLM of your choice to generate a LLM prompt to generate PCBs. Iterate over your prompt before using it to generate your PCB. See the result for yourself.
Coincidentally, we just built an MCP server for atopile, and Claude seems to love it. It makes a big difference in usability, and also exposes our re-usable design library[0].
A bit about atopile[1]: Our core idea is to capture design intent in a knowledge graph with constraints and high-level modeling of components and interfaces. This lets us do much more than just AI integrations: we’ve built an in-house constraint solver that can automatically pick passives (resistors, capacitors, etc) based on the values you've constrained in your design.
Currently, atopile directly generates KiCAD PCB files, so you can finish the layout (mainly the connections between reusable layout blocks). We're also generating artifacts like I2C bus trees and 3D models, with power trees and schematic generation on the roadmap.
Happy to answer questions or go into technical details!
Maybe this is pedantic, but I thought that the core point of "Vibe Coding" is that you do not look at the code. You "give in to the 'vibes'".
I don't know how to translate it into a physical hardware product exactly, but I think it would be manufacturing it without looking at it, plugging it in for your use-case and seeing if it works, then going back to the model, saying it didn't work, rinse, repeat.
It's the behaviorism of programming. (Pay no attention to the man behind the curtain).
Personally I use the term "agentic coding" if you are high leveling describing the specs to the LLM agent but still taking some minimal amount of time to review the diffs.
Yet I have to say that if you are correct, the term is no different than eating tide pods or dry swallowing cinnamon. Why tf would anyone impose such an absurd artificial constraint on themselves, on the tool, or on whatever they are trying to build? Good faith question, I promise.
Constructing detailed prompts to ultimately pair program impressive, complex outcomes is what I assumed vibe coding was. After 35 years of not being able to tell a computer to write the code for me, even getting an 80% coherent first pass of a sophisticated refactor was already radical enough.
If that's what vibe coding is, then nobody should be using that term because it might be the perfect example of "just because you can, doesn't mean you should".
IDK! I don't think Vibe Coding, with the definition that I understand, is a good idea.
But the term comes from here: https://x.com/karpathy/status/1886192184808149383
And the key parts are:
> "forget that the code even exists"
> "I don't read the diffs anymore"
I myself am unclear on what the "vibes" that one is giving into actually are. But terms should have meanings and my understanding from reading the original tweet is that "Vibe Coding" means something distinct from "coding using some AI to help".
I’m puzzled why the post calls it “surprisingly good” when it’s so bad and missing basic requirements for different parts. I guess it’s surprising that anything at all was produced, but it’s weird that the author can’t identify the basic problems with the design.
This is similar to situations where someone uses an LLM to vibe code an app until it kind of works, but then an experienced developer takes one look at the codebase and can immediately see it was not developed with any understanding of the code.
That said, the AMS1117 datasheet shows a tantalum cap on the output. This is presumably because the non-negligible ESR helps stabilize the regulator, though they don't say that explicitly. The LM1117 datasheet explains this better, stating that "the ESR of the output capacitor should range between 0.3 Ω to 22 Ω". (These are very similar parts, just from different manufacturers.)
The ceramic caps chosen here are probably below that, so perhaps it would ring even with correct layout. The prompt guided towards that bad choice when it said all caps should be 0603, since almost all 0603 capacitors are ceramic. The LLM was free to choose a regulator optimized for use with ceramic output caps, but it probably chose the xx1117 because it's so common.
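As a tiny illustration of that check, here's the stability window from the LM1117 quote above in code; the example ESR numbers are rough, assumed ballparks for a 0603 MLCC and a small tantalum, not measurements of any specific part:

  # LM1117-style output cap ESR stability window (0.3-22 ohm, per the
  # datasheet quote above). The example ESRs below are assumed ballparks.
  ESR_MIN_OHMS, ESR_MAX_OHMS = 0.3, 22.0

  def output_cap_esr_ok(esr_ohms):
      return ESR_MIN_OHMS <= esr_ohms <= ESR_MAX_OHMS

  print(output_cap_esr_ok(0.01))  # 0603 ceramic, ~10 mOhm: False -> may ring
  print(output_cap_esr_ok(1.5))   # small tantalum, ~1.5 ohm: True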
If you can reliably automate that, it's still a pretty big deal.
Amazon has been hiding behind "it's a marketplace" for more than a decade. There's an insane amount of shit that should never be sold. Including, but not limited to, fake fire alarms sold as real ones. The CPSC tried going after Amazon but are stuck only going after listings once in awhile. I can't imagine the deaths caused by Amazon are only in the single digits.
I don't care about this one example project, but when thousands of people read about it and vibe-code their own hallucinated PCB, hopefully wasting their money is the worst thing that happens. They certainly won't be learning much if the AI does it for them. They also don't get the pride that comes from understanding. They are an imposter, and when someone asks if they made the thing, they will feel like an imposter. Nice job, noob!
I'm active in the world of amateur LED installations, and practically nobody realizes how easy it is to start a fire with a 500 watt power supply (or several of them connected together in bad ways) for their holiday lightshow. "AI" is not likely to help that and will probably make it worse.
"AI" is like the blind leading the blind, and it gives people permission to do the stupidest things. Sometimes it's right, but it's a gamble. It's not going to always give the same answer for the same question, and when it "hallucinates", a noob is unlikely to notice.
Except that:
- no parts placement
- no routing
Easily the two hardest / most annoying steps in designing such a straightforward board. Parts placement could be automated, but you'd have to tell something what you want, and at that point you might as well just do the placement yourself instead of describing placement requirements.
Maybe _then_ we can trust LLMs to design stuff for us.
Disclaimer: co-author of atopile here
Having well-established, unambiguous rules that must be followed for functionality seems to be a key predictor of AI success. The more constrained and rule-bound the domain, the better LLMs perform.
– keep the AI traces in a separate layer or revision
– run basic checks for clearance and width
– eyeball the diff, accept what looks right, fix the rest
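A minimal sketch of what the "basic checks for clearance and width" step could look like over raw trace segments; the segment format and the 0.2 mm limits are made up for illustration, and a real workflow should lean on the EDA tool's own DRC:

  # Toy width/clearance check over straight trace segments.
  # Each segment: (net, (x1, y1), (x2, y2), width_mm). Limits are illustrative.
  from math import hypot

  MIN_WIDTH_MM = 0.2
  MIN_CLEARANCE_MM = 0.2

  def point_seg_dist(p, a, b):
      (ax, ay), (bx, by), (px, py) = a, b, p
      dx, dy = bx - ax, by - ay
      if dx == 0 and dy == 0:
          return hypot(px - ax, py - ay)
      t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
      return hypot(px - (ax + t * dx), py - (ay + t * dy))

  def segs_cross(a1, a2, b1, b2):
      # standard orientation test; collinear overlaps not handled in this toy
      cross = lambda o, p, q: (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])
      return (cross(b1, b2, a1) > 0) != (cross(b1, b2, a2) > 0) and \
             (cross(a1, a2, b1) > 0) != (cross(a1, a2, b2) > 0)

  def seg_seg_dist(a1, a2, b1, b2):
      if segs_cross(a1, a2, b1, b2):
          return 0.0  # crossing traces
      return min(point_seg_dist(a1, b1, b2), point_seg_dist(a2, b1, b2),
                 point_seg_dist(b1, a1, a2), point_seg_dist(b2, a1, a2))

  def check(segments):
      errors = []
      for net, a, b, w in segments:
          if w < MIN_WIDTH_MM:
              errors.append(f"trace on {net} is {w} mm wide (< {MIN_WIDTH_MM} mm)")
      for i, (n1, a1, a2, w1) in enumerate(segments):
          for n2, b1, b2, w2 in segments[i + 1:]:
              if n1 == n2:
                  continue
              gap = seg_seg_dist(a1, a2, b1, b2) - (w1 + w2) / 2
              if gap < MIN_CLEARANCE_MM:
                  errors.append(f"{n1} to {n2} clearance is {gap:.2f} mm")
      return errors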
I mean, some people are claiming that LLMs can do scientific research, so the above isn't too much to ask.
Frameworks like atopile, tscircuit (disclaimer: I’m a tscircuit lead maintainer) and JITX are critical here because they enable the LLM to output the deep knowledge it already has. The author is missing a couple pieces to really get great output: 1) Context-friendly datasheets 2) DRC/Semantic review 3) LLM-compatible layout methods
The hardest to build is (3), and it's what I spend 90% of my time on. AI knows how to do spatial layout for things like flexbox or CSS grid, but it doesn't have a layout method for PCBs. Our approach with tscircuit is to develop new layout systems that either match templates, use new heuristic layouts (we are developing one called "pack"), or solve simple spatial constraints.
But tldr; it is only a matter of time before AI can output PCBs. It is not simple but we know what works with LLMs from witnessing the evolution of AI for website generation
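To make the "pack" idea a bit more concrete, here's a generic, hedged illustration of a greedy shelf-packing placement heuristic; this is not tscircuit's actual algorithm, and the component dimensions in the example are rough guesses:

  # Generic shelf-packing placement sketch: sort parts by height and place them
  # left-to-right in rows inside the board outline. Toy illustration only.
  def shelf_pack(components, board_w_mm, margin_mm=1.0):
      # components: list of (name, width_mm, height_mm); returns name -> (x, y)
      placed = {}
      x = y = margin_mm
      shelf_h = 0.0
      for name, w, h in sorted(components, key=lambda c: c[2], reverse=True):
          if x + w + margin_mm > board_w_mm:   # doesn't fit, start a new shelf (row)
              x = margin_mm
              y += shelf_h + margin_mm
              shelf_h = 0.0
          placed[name] = (x, y)
          x += w + margin_mm
          shelf_h = max(shelf_h, h)
      return placed

  # Rough, assumed footprint sizes just to show the output shape:
  print(shelf_pack([("ESP32 module", 18, 25.5), ("USB-C", 9, 7.5),
                    ("AMS1117", 6.5, 3.5), ("C1", 1.6, 0.8)], board_w_mm=40))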
It's been my direct and many-times-repeated experience that o3 is an incredible electronics engineering wingman, so long as you follow good LLM hygiene; basically, verify all important assumptions, actually read the datasheets, err on the side of too much detail.
The time spent crafting prompts is the time I would spend planning and iterating on designs anyhow. Unlike a human, I don't have to pay them by the hour to patiently explain the nuances of different diodes or suggest alternative parts. o3 is remarkably good at rapidly grokking intent and making suggestions that have unblocked me.
For the camp of armchair quarterbacks on this site who demand specific "evidence" that we're not all just hallucinating the value of these tools, here are two things that happened just this week:
I was blowing my brains out troubleshooting a touch IC, the IS31SE5117A. No matter how good my reflow was or how many units I tried, I could not bring up an I2C connection. Based only on the fact that Cref refused to rise above ~0.1 V when it's supposed to be about 0.7 V, it suggested it was likely that I had units from a batch that had no firmware. After going back and forth with their lead engineer for a week, I ordered a few IS32SE5117A - the automotive/medical-spec version of the same chip - and it worked immediately, prompting a product recall.
I'd managed to implement galvanic isolation on my USB connection to eliminate audio hum, but it turns out that touching a capacitive pad on a device that has no outside ground connection means that static has nowhere to go but to reboot the microcontroller. I'd been chasing my tail on this for a while, but o3 suggested that instead of isolating my whole device, I could just isolate my MIDI OUT circuit. This is one of those facepalm moments that only seems obvious in hindsight. I told my partner that abandoning weeks of effort was first very hard, and then very easy.
Finally, last night I had Cursor generate both sides of an SPI connection between two ESP32-S3s, something I had never done before. I obviously could have figured it out in 2020, but it would have taken me 1-2 weeks and it wouldn't be nearly as clean or cover as many edge cases.
My hottest take is that LLMs are already (far?) more valuable for engineering tasks than coding. That's kind of unfair because by definition, these tasks involve coding. The speed at which I've been able to iterate has been kind of nuts.
Also: any claims that people who tackle complex domains from a cold start somehow aren't learning fundamentals from a mentor with infinite patience and awareness of every part and circuit design pattern are simply wrong.
We need that to stop if you're going to keep commenting here. HN is a place for thoughtful discussion, and it's only a place where people want to participate because others make an effort to keep the standards up. Commenting in this style is not what HN is for and it destroys what it is for. Please take a moment to read the guidelines and make an effort to observe them in future.