From [1]:
> Olof Kindgren
> 5th April 2025 at 10:59 am
> It’s a great achievement, but I’m of course a little sad to see that it’s not mentioned anywhere that Wuji is just a renaming of my CPU, SERV. They even pasted in block diagrams from my documentation.
[0] https://github.com/olofk/serv
[1] https://www.electronicsweekly.com/news/business/2d-32-bit-ri...
https://www.nature.com/articles/s41586-025-08759-9
Sadly I can't access the full article right now.
The paper itself is behind a paywall so I can't see it, but it looks from the references like they provided proper attribution.
It's unfortunate that some of the articles around it don't mention that, but the main point here seems to be the process for building the transistors, and showing that it can be used to build a complete CPU, not the CPU design itself; for that they just used an off-the-shelf open-source core that is designed to use a very small number of gates.
38. Kindgren, O. et al. SERV - The SErial RISC-V CPU. GitHub http://github.com/olofk/serv (2020).
The inkjet transistor article says "A total of 216 devices were tested with a yield of greater than 95%, thus demonstrating the true scalability of the process for achieving integrated systems." But 95% yield on the transistor level implies vanishingly low yield at the device level when you need thousands of transistors to build a full microprocessor.
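To put a rough number on that (my own back-of-the-envelope, assuming defects are independent and any one dead transistor kills the chip):

```python
# Back-of-the-envelope yield math under an independent-defect model,
# where a single failed transistor is assumed to kill the whole chip.
per_transistor_yield = 0.95

for n_transistors in (100, 1_000, 10_000):
    chip_yield = per_transistor_yield ** n_transistors
    print(f"{n_transistors:>6} transistors -> chip yield ~ {chip_yield:.1e}")

# 0.95 ** 1_000 is on the order of 5e-23, i.e. effectively zero working chips.
```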
Even the new MoS2 microprocessor discussed in the Ars article wasn't fabricated all at once. It was built up from sub-components like shift registers containing fewer transistors, then those components were combined to make a full microprocessor. See for example "Supplementary Fig. 7 | Yield analysis of wafer-level 8-bit registers." in the supplementary information:
https://static-content.springer.com/esm/art%3A10.1038%2Fs415...
The yield of 8-bit registers, each consisting of 144 transistors, can reach 71% on the wafer.
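Inverting that figure under the same crude independent-defect assumption gives a sense of what the per-transistor yield must actually be:

```python
# What per-transistor yield is implied by 71% yield on 144-transistor
# registers, assuming independent defects? (A rough model, not the paper's.)
register_yield = 0.71
transistors_per_register = 144

implied = register_yield ** (1 / transistors_per_register)
print(f"implied per-transistor yield ~ {implied:.2%}")  # roughly 99.76%

# Extrapolating that same per-transistor yield to larger blocks:
for n in (1_000, 2_000):
    print(f"{n} transistors -> chip yield ~ {implied ** n:.1%}")  # ~9% and ~1%
```

which is why building up from smaller, testable sub-blocks matters so much.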
[0] Am I remembering correctly that this is your area?
You can find Apple II schematics easily enough online. All the chips are common, off-the-shelf parts still available today. You can send the KiCAD drawings (also available) to a company like PCBWay and have PCBs made very cheaply and in small quantity. Then all you have to do is solder in the chips and other components and connect the board to a power supply.
Ben Eater actually has a free course on YouTube [1] all about building a breadboard computer!
[1] https://youtube.com/playlist?list=PLowKtXNTBypGqImE405J2565d...
Sam Zeloof [1] actually went through the exercise of making his own semiconductors from scratch. It's a lot of chemistry and experimentation and quite interesting as an exercise, but not at all practical for building your own computer.
Printable transistors would take away the nasty chemistry bits that Sam had to deal with but otherwise wouldn't help much with making practical devices. Computers have a lot of very standard, "Lego brick" or jellybean components. Stuff like muxes/demuxes, shift registers, adders, and the like. These are the components you can buy off the shelf to build your own computer. Building these yourself on giant sheets of paper with a printer might be interesting but you'd get a far less practical, usable computer out of the deal.
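To make the "Lego brick" point concrete, here's a toy sketch (plain Python simulation, nothing to do with printed transistors) of how one such brick, a 4-bit adder, falls out of a handful of gate-level primitives:

```python
# Toy gate-level model of a ripple-carry adder: the kind of "jellybean"
# block you would normally just buy as a 74-series chip (e.g. a 74283).

def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """One-bit full adder built from XOR/AND/OR; returns (sum, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_adder(a: int, b: int, width: int = 4) -> int:
    """Chain `width` full adders carry-to-carry, like cascading chips."""
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        result |= bit << i
    return result | (carry << width)

print(ripple_adder(0b0110, 0b0111))  # 6 + 7 = 13
```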
As to doing it all "from sand": you can! At least sort of. It's always a question of how far down the stack you want to take it. After all, you probably need to source rare earths from somewhere that isn't your backyard.
Check out pictures of the old processes before automated VLSI. It was all done by hand, including crystallizing the silicon. You'll need a clean room and a bunch of weird supplies, though.
Side note: I'm assuming anyone who knows any of these people would be interested that a new Dan Gelbart video just dropped[5]!
-----------------------------------------
Other side note: @YouTube people (and @GoogleSearch), can we talk about search? The updates have been progressively making it harder to find these types of accounts: people who do *highly* technical things. I get that these are aimed at extremely specific audiences, but this type of information is some of the most valuable information on the planet. Lots of knowledge is locked in people's heads, and these classes of videos are one of the biggest boons to knowledge distribution we've ever seen in the course of humanity. I understand that this does not directly lead to profits for YouTube (it certainly DOES for Google Search), but indirectly it does (it keeps these users on your platform!) and has a high benefit to humanity in general. The beauty of YouTube and Google was that you could access anything: we recognized everyone was different and we could find like-minded people in a vast sea. The problem search was meant to solve was getting us access to hard-to-find things, to find needles in ever-growing haystacks! Please, I really do not want to return to the days of pre-search, nor even early search. It should be _easier_ to find niche topics these days, not harder. LLMs aren't going to fix this. This is becoming an existential crisis and it needs to be resolved.
[0] https://www.youtube.com/watch?v=UIqhpxul_og
[1] https://www.youtube.com/watch?v=FYgIuc-VqHE
[2] https://www.youtube.com/@SamZeloof
[3] https://www.youtube.com/watch?v=XVoldtNpIzI
In the meantime, for every person who's been turned on to something really awesome and creative on YouTube, somebody else got turned on to something really toxic.
[1] Something significant happens in search relevance roughly every 10 years, and SBERT was one of those moments.
This is why ML is so fucking cool, but it's also why these models are really bad at details, and why you have to really wrestle with them to handle nuance. It's easiest to see in image generators, though those models are much smaller. Do remember that these things are specifically trained so that their outputs are preferred by humans. The result is that errors tend to fall in the directions that are hardest for human evaluators to detect. Deciding whether that's a feature or a bug requires careful consideration.
This is not to say that LLMs and ML are useless or trash. They are impressive and powerful machines, but neither are they magic or the answer to everything. We've got to understand the limitations if we're to move forward. That's our job here as researchers, engineers, and developers: using a keen eye to find the limits and then solve them (easier said than done lol).
Prompt engineers with their decoder models are always going to be wondering why they're forever a bridesmaid and never a bride. With encoder models you can attain the holy grail: a system where you put text in one side and get out, within calibrated accuracy, facts to put into the first kind of knowledge base. Or, for that matter, a good search engine for the second kind of knowledge base, which could raise it above the "trash can" level.
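A minimal sketch of the encoder side of that, assuming the sentence-transformers library and a stock public SBERT-style checkpoint (the model name and the toy documents here are just placeholders, not anything from a real system):

```python
# Minimal encoder-based retrieval: embed documents once, then match a query
# by cosine similarity. Model name and example texts are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

notes = [
    "Patient reports chest pain radiating to the left arm since Tuesday.",
    "Gear inspection at gate C4 found corrosion on the actuator housing.",
    "Customer thread praises the new colorway but asks about sizing.",
]
note_embeddings = model.encode(notes, convert_to_tensor=True)

query = "maintenance findings about landing gear corrosion"
query_embedding = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_embedding, note_embeddings)[0]
best = int(scores.argmax())
print(f"best match ({float(scores[best]):.2f}): {notes[best]}")
```

The calibrated-accuracy part is the harder piece: you still have to measure how often the extracted or retrieved facts are right, which is where the quality-control point further down comes in.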
I spent about 10 years working (sometimes for myself, sometimes for employers, sometimes part time, sometimes as a software developer, sometimes as a business developer) on the problem of turning a mass of text into facts (and facts back into text) to solve problems like:
- Doctors write copious medical notes from which facts would be useful for themselves, payers, researchers, regulators.
- An accounting or legal firm may need to scan vast numbers of documents and extract facts for an audit or lawsuit
- An aerospace manufacturer has a vast database of documentation and maintenance notes (even from the teams at the airports) that it needs to keep on top of
- A fashion retailer wants to keep track of social media chatter to understand how it connects and fails to connect with customers and answer questions like "should we endorse sports star A or B?"
- Police and soldiers chat with each other over XMPP chat about encounters with "the other" which again are rich with entities, attributes, events, etc.
Tasks like this need an interactive system, but you face the problem that people have an upper limit of 2,000 or so simple decisions [1] in a sustainable day. The problem is large, but it is not "boil the ocean": you can set requirements for what gets extracted and use the techniques of statistical quality control, as in Deming, to know that accuracy is in bounds.
You can give people tools to tag things in bulk, you can apply rules, and you can give people tools to create the rules. I worked on RNN- and CNN-based models, SVMs, logistic regression, autoencoders, and other models, and before BERT they all sucked. If you have the interactive framework, you can plug encoder or decoder LLMs in, and it's a revolution that makes systems like that much cheaper to develop and run, with better results.
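Concretely, the Deming-style check mentioned above can be as simple as auditing a random sample of extractions and putting a confidence interval on the error rate. A sketch (the sample numbers are invented):

```python
# Quality-control loop sketch: audit a random sample of extracted facts
# and put a ~95% confidence interval on accuracy. Numbers are invented.
import math

def wilson_interval(correct: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion."""
    p = correct / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Say a reviewer spot-checked 400 sampled extractions and 372 were correct:
low, high = wilson_interval(372, 400)
print(f"point estimate 93.0%, 95% CI roughly {low:.1%} to {high:.1%}")
# If the lower bound falls below the accuracy you committed to, the rules or
# model get another pass; otherwise the batch ships.
```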
[1] hot dog/not hot dog
/s
Intel and the French research institute CEA-Leti are jointly developing 2D transition-metal dichalcogenides (TMDs), such as molybdenum disulfide (MoS₂) and tungsten-based materials, for integration into 300mm wafers. These materials offer sub-1nm transistor channel thickness, making them ideal for extending Moore's Law beyond 2030.
[29 June 2023] https://compoundsemiconductor.net/article/117047/CEA-Leti_an...
It's the same set of issues that you'll run into if you try to precisely quantify the thickness of a sheet of printer paper. It really depends on what you mean when you ask the question: the geometry of the electron shell, the minimum theoretical width once assembled into the theoretically optimal sheet, the impact of various imperfections in practice, the potential for more than a single layer to exist (and the associated averages), and a number of other things that aren't immediately coming to mind.
It's an issue of precision on the part of the party asking the question. We usually work on scales that are so large that such details aren't meaningful (if you can even measure them in the first place).
https://www.acsmaterial.com/monolayer-molybdenum-disulfide.h...
Is there any reason to think this won't improve with time? The Intel 4004 was "slow and inefficient" too?