The UI is the inverse of intuitive. It's built on convention after convention after convention. If you understand the shibboleths (and I'm guessing most people take a certified course from a trainer?), then it's great, but if you don't, it really sucks to be you (i.e. me).
I would LOVE to try out what you've built, but I am afraid that if the model misinterprets me or makes a mistake, it'll take me longer to debug / correct it than it would to just build it from scratch.
The kinds of things I want to make in solidworks are apparently hard to make in solidworks (arbitrarily / continuously + asymmetrically curved surfaces). I'm assuming that there won't be too many projects like this in the training dataset? How does the LLM handle something that's so out of pocket?
Personally not familiar with curved models, but my understanding is that surface modelling with lofts guided by spline contours might be the way to go. Not sure if SW has those features.
FWIW, back in the day I tried solidworks, inventor, pro e, catia, solid edge, anything I could get my hands on. I struggled to find something that would click with me, thinking it was the software that's the problem. It really wasn't -- the mechanical design problem space is vast and the requirements are demanding, which makes for solutions with a certain level of complexity. I had entered with a lot of hidden assumptions and found it frustrating when the software required me to address them, and on top of that, there's just a lot of stuff to figure out. It helps to have someone around to help when you get stuck; that was what got me over the hump. At this point I've been using solidworks almost every day for about 15 years, and it only fills me with blind rage every few days, which I think is pretty good for professional software.
Yeah, you need to invest time to learn it. I do understand the frustration when learning something new. I get it. However, your sentiment here isn't leading to the correct conclusion. A piano or a guitar is a frustrating instrument until you get past a certain level of mastery.
Engineering tools do carry with them a degree of complexity. There are reasons for this. Some are, of course, better than others. I started in the dark ages with AutoCAD, then, over time, learned ACAD 3D, Inventor, Pro-E, Solidworks, Fusion 360, Onshape, Siemens NX, and CAM tools like Camworks and Mastercam; all in professional commercial, industrial, or aerospace (NX) settings. I would rank Solidworks way up there in usability and functionality.
Of course, this isn't to say that there aren't lots of things that could be improved in Solidworks (and in all of the CAD/CAM programs I mentioned).
Sometimes online resources like YouTube can feel (and actually be) really disjointed. Get yourself a good book on Solidworks and go through it front to back. At some point it will click. From that point forward it will feel like an extension of your brain. This is no different from learning to play the piano. When I use Solidworks I don't think about the UI, I just work on my designs.
This is good advice:
https://www.reddit.com/r/SolidWorks/comments/1gjfbwz/comment...
Good PDF course to start with:
https://my.solidworks.com/solidworks/guide/SOLIDWORKS_Introd...
And, of course, you can buy a full course for less than $10:
And yeah, you should find a course from a training firm rather than relying on the official documentation. It sucks, and there's a reason Fusion 360 seems to be really eating into the market after 5-10 years.
It's most likely so poorly set up that I flinch at the thought of working in that domain now.
Source: I've had friends who've worked there. Background: we studied computational engineering, but I got a non-domain software job. Sometimes I feel I learnt more being away from that sort of work.
My big tip if you can't find a button there is always the search bar. Just search the command you are looking for, it will even show you where the button is located for next time. That said, they don't move things around that much from year to year, I'm surprised if you can't find a command in a tutorial made in the last 10 years.
The features you are talking about sound like you want to be doing surfacing, which is definitely a more advanced modeling technique that I only recommend trying to learn once you understand the basics and can predict how the software wants you to model something.
If your work allows for it, go for FreeCAD, or better yet OpenSCAD, if you're pursuing this new concept of LLM design. Onshape is nice feature-wise, but then you're just trusting a different group that has an even tighter grip around your unmentionables due to the SaaS nature.
To be fair : the constant betrayal of tech companies in my life has just pushed me a bit further towards local-only than most; I don't really condemn the -as-a-service industry, they've just been the first to pull rugs and then shrug their shoulders when their (usually already dwindling) customer base is screwed.
I had the pleasure to use both SolidWorks and Vivado professionally over the last decade and boy was solidworks fun in comparison.
https://github.com/MichaelAyles/heph/blob/main/blogs/0029blo...
I need to redo this blog, because I did it on a run where the enclosure defaulted to the exploded view and kicanvas bugged out. Either way, the bones of it are working. Next up: add more subcircuits, do cloud compilation of firmware, and kicad_pcb to Gerbers.
Then order the first prototype!
I'm pretty in the weeds when it comes to this stuff.
First of all, I made https://www.circuitsnips.com/ , which is like Thingiverse for circuits: users can grab bits of designs and copy and paste them into their own. The reception was lukewarm at best; it needed bootstrap data, for which I scraped GitHub, which may have been a mistake. Either way, for the tens of people using it, I think it's pretty neat!
Circuitsnips Blog: https://www.mikeayles.com/#circuitsnips-com Circuitsnips GH: https://github.com/MichaelAyles/kicad-library
So then I created a fork of TOON called TOKN (token-optimised KiCad notation), which successfully compresses KiCad schematic s-expressions by 93% and can regenerate the schematics from the compressed form. The intention was to generate schematics: one-shot them with frontier models, or even post-train an OSS model to see if that works. When I benchmarked it, I could get 100% syntax validity, but the models hallucinated the components, the pins, etc., so they would need grounding.
TOKN Blog: https://www.mikeayles.com/#tokn TOKN GH: https://github.com/MichaelAyles/tokn
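To give a feel for the idea, here's a minimal sketch in Python (my own toy encoding, not the real TOKN format): intern every repeated atom of an s-expression into a table and emit short integer indices, so the model sees a much shorter token stream.

```python
import re

def tokn_encode(sexpr):
    """Toy sketch of the TOKN idea (not the real format): intern
    repeated atoms of a KiCad-style s-expression into a table and
    emit short integer indices in their place."""
    # split into parens and atoms, preserving structure
    atoms = re.findall(r"[()]|[^\s()]+", sexpr)
    table = {}
    out = []
    for atom in atoms:
        if atom in "()":
            out.append(atom)
        else:
            idx = table.setdefault(atom, len(table))
            out.append(str(idx))
    # return the compact stream plus the table needed to invert it
    return " ".join(out), {v: k for k, v in table.items()}
```

Repeated identifiers like `lib_id` collapse to a single index, which is where most of the savings in a real schematic would come from, since KiCad files repeat the same long keys thousands of times.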
Which brings me onto my next, next, (next?) side project: an embedded documentation MCP server! Load in your PDFs; it parses them and puts things into a SQLite db that the LLM can interact with more efficiently. I mainly use it for writing hardware abstraction layers for chips that have terrible vendor support (looking at you, NXP KEA128). Honestly, everything about this chip is awful; even the GPIO is laid out terribly. It's like the meme where everything gets put in the square hole. PORTB? Nah, you need to do PORTA + 16. Anyway...
Bitwise-MCP Blog: https://www.mikeayles.com/#bitwise-mcp Bitwise-MCP GH: https://github.com/MichaelAyles/bitwise-mcp
And if you've read this far, here's a little treat:
SCH: https://github.com/MichaelAyles/heph/blob/main/blogs/0029-im...
3D bottomside: https://github.com/MichaelAyles/heph/blob/main/blogs/0029-im...
In the Phaestus workflow, it chooses modules. For example, the main CPU block just has an ESP32-C6 XIAO board on it, since it has massive compute plus a radio with WiFi 6 and Zigbee, which covers 90% of IoT. Since this is larger than 0.5", it sits in the middle of a 2x2, which bridges both sets of north-south buses, with all common pins.
Bridged pinouts: https://github.com/MichaelAyles/heph/blob/main/blogs/0029-im...
Early XIAO: https://github.com/MichaelAyles/heph/blob/main/blogs/0029-im...
It's been a pain in the butt to design for, since the vias need enough clearance and we are almost maxed out on the bottom side, so on the 2x2s I've given it as much room as I can in the middle to allow for topside routing with a few vias.
Then, when assembling the board, it does a 0.1mm overlap north-south, which merges the nets, while the system keeps track of what signals are used: I2C is all multi-participant; SPI1 gets the default CS line, otherwise it can use the aux pins via a resistor selector. AUX5/AUX6 aren't connected to the XIAO, so if you wanted to do something like a USB power monitor, the power can run isolated at higher voltages, as long as it gets fed through a current/voltage sensing block, which could be 1x1, 1x2, or 2x2.
As for component placing in the enclosure, buttons are sub-boards with a wire-to-board connector, and the 1x1 module is a block that contains a connector and a PCA9570, which allows the button to be placed anywhere with defined mounting features.
Similar for LCDs: the pre-designed block has an FFC connector and comes with constraints, e.g. "place at end of design", so the screen can fold back on itself; if it were mounted in the middle, it wouldn't be able to go where it needs to be.
Solidworks might be as close to a final form for CAD as you're going to get.
oh and they changed the price as well, it went up, and up, and up
"OK, how do we work with our CMM and CAM software? How do we send files to clients and vendors?" Crickets.
Not confidence inspiring.
https://adamkarvonen.github.io/machine_learning/2025/04/13/l...
Does anyone familiar with what's under the hood know whether the latent space produced by most transformer paradigms can only natively simulate 1-D reasoning, and has to kludge together any process for figuring out geometry with more degrees of freedom?
An interesting thing about transformers is that they are world-class at compressing 2D image data even when not trained on anything but textual language ( https://arxiv.org/abs/2309.10668 ). Whether that notion is relevant for 3D content would be two or three figures over my pay grade, though.
LLMs struggle because, I think, there's a lot of work to be done with translating colloquial speech. For example, someone might describe creating a tube in fairly ambiguous language, even though they can see it in their head: "Draw a circle and go up 100mm, 5mm thick" as opposed to "Place a circle on the XY plane, offset the circle by 5mm, and extrude 100mm along the Z axis."
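Once the description is pinned down, it maps almost directly onto code. A minimal sketch (my own helper name, emitting OpenSCAD source from Python) of the precise version of that tube:

```python
def tube_scad(outer_r, wall, height):
    """Emit OpenSCAD source for the tube described above: an outer
    circle with a second circle offset inward by the wall thickness,
    extruded upward. Parameter names and layout are illustrative."""
    inner_r = outer_r - wall
    return (
        f"linear_extrude(height={height})\n"
        f"  difference() {{\n"
        f"    circle(r={outer_r});\n"
        f"    circle(r={inner_r});\n"
        f"  }}\n"
    )
```

Every number the colloquial version left implicit (which plane, which direction, inner vs. outer radius) has to become an explicit parameter here, which is exactly the gap the LLM is being asked to bridge.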
You can represent a dog as a specific multi-dimensional array (raster image), but the word dog represents many kinds of images.
I'm building a mount for our baby monitor that I can attach to the side of the changing table. The pins are x mm in diameter and are y mm apart. [Image #1] of the mounting pins. So what needs to happen is that the pin head has to be large, and the body of the pin needs to be narrow. Also, add a little bit of a flare to the bottom and top so they don't just get knocked off the rest of the mount.
And then I'll iterate. We need a bit of slop in the measurements there because it's too tight.
And so on. I'll do little bits that I want and see if they look right before asking the LLM to union it to the main structure. It knows how to use OpenSCAD to generate preview PNGs and inspect them. Amusingly, I did this just a couple of weeks ago and that's how I learned what a chamfer is: a flat angled transition. The adjustment I needed to make to my pins where they flare (but at a constant angle) is a chamfer. Claude told me this as it edited the OpenSCAD file. And I can just ask it in-line for advice and so on.
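In OpenSCAD terms, that chamfered flare is just a cone section. A minimal sketch (my own helper and dimensions, not the actual file Claude produced) of a pin whose flare is a chamfer:

```python
def chamfered_pin_scad(shaft_r, head_r, shaft_h, chamfer_h):
    """Emit OpenSCAD source for a pin with a chamfered flare:
    a straight shaft topped by a truncated cone (cylinder with
    two radii), i.e. a flat angled transition up to the head."""
    return (
        "union() {\n"
        f"  cylinder(h={shaft_h}, r={shaft_r});\n"
        "  // the chamfer: constant-angle cone from shaft to head radius\n"
        f"  translate([0, 0, {shaft_h}])\n"
        f"    cylinder(h={chamfer_h}, r1={shaft_r}, r2={head_r});\n"
        "}\n"
    )
```

OpenSCAD's two-radius `cylinder(h, r1, r2)` is what makes the constant-angle transition a one-liner, versus the rounded fillet you'd need `minkowski` or a rotate-extrude for.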
I'd love to have that kind of UI for adjusting dimensions in regular (non-CAD) images. Or maybe adjusting the CSS on web pages?
This is definitely my experience as well. However, in this situation it seems we are mostly working in "local" space, not "world" space wherein there are a lot of objects transformed relative to one another. There is also the massive benefit of having a fundamentally parametric representation of geometry.
I've been developing something similar around Unity, but I am not making competence in spatial domains a mandatory element. I am more interested in the LLM's ability to query scene objects, manage components, and fully own the scripting concerns behind everything.
But I think this shows that these models can improve drastically on specific domains.
I think if there were some good datasets/mappings for spatial relations and CAD-files-to-text, then a fine-tune/model with this in its training data could improve the output a lot.
I assume this project is using a general LLM model with unique system prompt/context/MCP for this.
(Of course, you and I know it is, it's just that you're asking it to do a lot)
I guess it's all in the perspective
I then waited about an hour for the print to finish, only to discover I wanted to make some adjustments. While I was able to iterate a couple times, I quickly realized that there were certain structures that were difficult to describe precisely enough without serious time spent on wording and deciding what to specify. It got harder as the complexity of the object grew, since one constraint affects another.
In the end, I switched to FreeCAD and did it by hand.
Wasted half an hour generating absolute nonsense (if it even compiled) and ended up going with one of those SVG gear generators instead lmao.
For people looking at a different angle on the "text to 3D model" problem, I've been playing with https://www.timbr.pro lately. Not trying to replace SolidWorks precision, but great for the early fuzzy "make me something that looks roughly like X" phase before you bring it into real CAD.
Here's an example I finished just a few minutes ago:
https://github.com/jehna/plant-light-holder/blob/main/src/pl...
[1] https://github.com/pedropaulovc/offline-solidworks-api-docs
[2] https://github.com/pedropaulovc/harmonic-analyzer/blob/main/...
I've tried ChatGPT and Claude on datasheets of electronic components, and I'm sorry to say that they are awful at it.
Before that is fixed, I don't have high hopes for an AI that can generate CAD/EDA models that correctly follow some specification.
For the most part they still suck at anything resembling real spatial reasoning but they're capable of doing incredibly monotonous things that most people wouldn't put themselves through like meticulously labeling every pin or putting strict design rule checks on each net or setting up DSN files for autorouter. It even makes the hard routing quite easy because it can set up the DRC using the Saturn calculator so I don't have to deal with that.
If you give them a natural language interface [1] (a CLI in a claude skill, that's it) that you can translate to concrete actions, coordinates, etc., it shines. Opus can prioritize nets for manual vs autorouting, place the major components using language like "middle of board" (which I then use another LLM to translate to concrete steps), and in general do a lot of the annoying things I used to have to do. You can even combine the visual understanding of Gemini with the actions generated by Opus to take it a step further, by having the latter generate instructions and the former generate the JSON DSL that gets executed.
I'm really curious what the defensibility of all these businesses is going to be going forward. I have no plans on entering that business but my limit at this point is I'm not willing to pay more than $200/mo for several Max plans to have dozens of agents running all the time. When it only takes an hour to create a harness that allows Claude to go hog wild with desktop apps there is a LOT of unexplored space but just about anyone who can torrent Solidworks or Altium can figure it out. On the other hand, if it's just a bunch of people bootstrapping, they won't have the same pressure to grow.
Good luck!
[1] Stuff like "place U1 to the left of U4, 50mm away" and the CLI translates that to structured data with absolute coordinates on the PCB. Having the LLM spit out natural language and then using another LLM with structured outputs to translate that to a JSON DSL works very well, including when you need Opus to do stuff like click on screen.
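A toy version of that translation step, as a sketch (the regex grammar, DSL shape, and function name are my own assumptions, not the actual CLI):

```python
import re

def parse_place(cmd, positions):
    """Illustrative: turn a command like
    'place U1 to the left of U4, 50mm away' into a structured
    placement with absolute coordinates, given the known positions
    of already-placed components (in mm)."""
    m = re.match(
        r"place (\w+) to the (left|right) of (\w+), (\d+)mm away", cmd)
    if not m:
        raise ValueError(f"unrecognized command: {cmd}")
    ref, direction, anchor, dist = (
        m.group(1), m.group(2), m.group(3), int(m.group(4)))
    ax, ay = positions[anchor]
    # relative direction becomes a signed offset on the anchor's x
    dx = -dist if direction == "left" else dist
    return {"op": "place", "ref": ref, "x": ax + dx, "y": ay}
```

The real system would need a far richer grammar (above/below, edges, clearances), but the shape is the same: the first LLM speaks in relationships, and a deterministic layer (or a second LLM with structured outputs) owns the numbers.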
2 things related to what you said I hadn't put in the original post:
1. In our experience, the LLMs were awful at taking actions directly with any of the SolidWorks API scripting formats (C#, VBA, etc.). Probably 75% of what they wrote just failed to run, and even when they had access to browse the documentation it wasn't much better. If you're getting Opus or anything else to interact with SolidWorks from the CLI, can you say more about how you're getting it to interface effectively?
2. The LLMs are indeed surprisingly bad at spatial reasoning unless prompted specifically and individually. The most notable case of this is when they need to choose the right plane to sketch on. When creating revolve features, they'll often choose the face that would've only worked if they were going to extrude rather than revolve, and when creating sweeps they'll often try to put the sketch that's going to be swept on the same plane as the path that's being swept. If you go back and ask them why they did that and point out that it's wrong, they can fix it pretty fast, but when left to their own devices they often get quite stuck on this.
I just have a Solidworks plugin that translates CLI calls to JSON to Solidworks API calls and back again.
What really started working was creating a bunch of high level CLI commands so that Claude Code could query the part/assembly by asking stuff like "What is the closest distance between the southmost surface of Extrusion1 and the surface of Cylinder2" which would then be translated to a specific high level command or a bunch of lower level commands by Gemini 3 Flash. Those would then be translated to Solidworks API calls, as would any editing commands. It also really helps to give it the ability to parametrize the queries so instead of "all features smaller than 2mm" it can say "all features smaller than $MinFeatureSize", with some logic and a downstream LLM to translate that parameter into values in the design and review it with the human in the loop before committing it to the project.
The key is to absolutely minimize how often the LLMs think about numbers and have them think in relationships instead. The hard part is translating those relationships back to the CAD API calls, but LLMs are much less prone to hallucinating if you resolve all the parametrized equations last.
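The "resolve parameters last" idea can be sketched like this (illustrative Python, my own names; the real pipeline presumably does much more):

```python
import ast
import operator

# the arithmetic we allow in parametrized expressions
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def resolve(expr, params):
    """Substitute named $Parameters into an expression string, then
    evaluate the arithmetic safely (no eval). The LLM only ever emits
    names and relationships; concrete numbers appear at this last step."""
    for name, value in params.items():
        expr = expr.replace(f"${name}", str(value))

    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")

    return ev(ast.parse(expr, mode="eval"))
```

So an LLM-emitted query like `$MinSpacing * 5` only becomes a millimetre value after substitution, which keeps hallucinated magic numbers out of the design until a human has reviewed the parameter table.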
Yes, huge +1 for this. I do this in a different field and it's quite impressive. At first it felt weird using different models together but they really do have certain strengths/weaknesses (in January 2026.)
Also, fascinating how quickly things are evolving around PCB design. It was only six months ago that "vibecoding" a PCB was far off (look how basic this was: https://news.ycombinator.com/item?id=44542880) and now that may soon become possible!
I have a mini query language in the CLI that implements a lot of spatial queries, both structured and via prompts (another LLM translates the prompt to a structured query), against the Altium file format and an intermediate representation I keep. Most queries and editing commands use relative positioning ("to the left of"), units ("right edge minus 10mm"), and parameters (U1.P1.Top + MinSpacing * 5), etc. The LLM rarely needs to use concrete units because it's mostly parametrized by component clearances and design rules - I just choose some numbers at the beginning like board size and layer stackup (mostly set by the fab).
The CLI has over a hundred subcommands and I use Claude skills to split up the documentation, but the agent actively explores the schematic and PCB themselves. The Claude skills include instructions to use the measurement subcommands to sanity check after making a move or when doing a review at the end, although I'm in the process of implementing a GPU based design rule checker. My Solidworks interface works the same but there are many more "verbs" there for the LLM to manage.
At the end of the day it's mostly just orchestrating another tool which does most of the spatial logic and calculations. It's definitely easier with Altium than Solidworks so far.
Not yet but once I'm ready it's all going to be open source.
> Also looked into automating altium more and found having to do a lot of GUI work to guide the models along.
Have you tried the Windows UI Automation/Accessibility APIs? You can download Accessibility Insights for Windows to see the data structure and it's well supported by Altium. It has everything you need to tell the LLM what's on screen without ever sending a screenshot (except for the 2d/3d CAD view) and the UIA provides an API that can actually click, focus, etc. without sending fake keyboard events manually. When reverse engineering the file format I put Opus on a loop and it just kept fiddling with the binary file format until Altium stopped throwing parsing errors.
> How much of going from ‘design in head’ to schematic and layout have you automated?
At this point everything but the final PCB routing. My autorouter and autoplacer are very much a work in progress and LLMs aren't great at routing complex traces, but they can brute force it given enough time and DRC checks. Right now I just shell out to autorouters like Specctra with a DSN and route the most important nets by hand in Altium. Since the LLM sets up all the design rules and can spend hours on a loop figuring out the optimal placement, it's usually a breeze. Especially compared to when I first started out in EE and spent days with part selection and footprint capture. Now a lot of that tedium is completely automated.
Soon I'll be integrating probe-rs into the whole workflow to make end-to-end vibe PCBs, adding support for FPGAs, and integrating Solidworks so it does enclosures and molds too.
I'm just manipulating the file format directly because I don't want to be tied to Windows (bleh). I only fire up Altium for a few features that I haven't implemented yet like the autorouting and the UI automation was to let the LLM figure out file corruption issues.
I learned Altium long before KiCad was a viable option and Altium's ActiveRoute routing is the perfect blend between manual routing and autorouting so it's really hard to switch. ActiveRoute gets even better with the LLM filling out all the design rules.
I've been starting to play around with a 2.5D CAD system so I can sketch patterns for leather work and auto-convert them into constraints, but I'd like to have an agent in there too eventually.
My wife was designing a spring-loaded model that fits in our baby walls so that we can make it more modularly attached to our walls and she used Blender. Part of it is that it's harder to make a slightly more complex model with an LLM.
Solidworks is out of our budget for the kind of things we're building but I'm hoping if this stuff is successful, people work on things down the market. Good luck!
Still have a long way to go, but if anyone wants to try you can do it here: https://grandpacad.com
If you want more free credits send me an email and I'm happy to give you some.
This is exactly what SGS-1 is, and it's better than this approach because it's actually a model trained to generate Breps, not just asking an LLM to write code to do it.
I am still hoping that OpenSCAD or something similar can grab hold of the community. OpenSCAD needs some kind of npm, as well as imports for McMaster-Carr etc., but I think it could work.
Do you consider adding support for AutoCAD or AutoCAD vertically integrated software like Civil 3D?
Yes - we're likely looking into other 3D systems in the future.
I've watched the video a couple times and the only thing I can see that it does wrong is the fillets on the handle (and maybe the way it used a spline & sweep for the handle could have been improved but it's no worse than you'd see from a new Solidworks user).
How big is that mug? Slowing it down and trying to catch some dim info, the initial cylinder has a radius of ~39. Regardless of whether that's default inches, that's one hell of a mug.
Sure you can scale it, but it's not great practice to design something and then just apply a scaling to get it to the intended size.
All the people who aren't professional CAD users have Fusion, which they can get for free. They would probably benefit the most from text to model AI, but would probably also be the least willing to pay.
Solidworks is the GOAT though