222 points by PaulHoule 7 days ago | 8 comments
  • ohazi 3 days ago
    I suspected that this was the case when they mentioned adding "one bit at a time" -- the CPU design that they implemented is Olof Kindgren's SERV [0], a tiny bit-serial RISC-V CPU/SoC (award-winning, of course).

    From [1]:

    > Olof Kindgren

    > 5th April 2025 at 10:59 am

    > It’s a great achievement, but I’m of course a little sad to see that it’s not mentioned anywhere that Wuji is just a renaming of my CPU, SERV. They even pasted in block diagrams from my documentation.

    [0] https://github.com/olofk/serv

    [1] https://www.electronicsweekly.com/news/business/2d-32-bit-ri...

    • chmod775 3 days ago
      They do mention SERV in their references (38).

      https://www.nature.com/articles/s41586-025-08759-9

      Sadly I can't access the full article right now.

    • koverstreet 3 days ago
      That sort of copying without attribution should be considered outright misconduct; it certainly would be in academia.
      • lambda 3 days ago
        Huh? This is a paper published in Nature, and it does cite Olof Kindgren and SERV in the references: https://www.nature.com/articles/s41586-025-08759-9#Bib1

        The paper itself is behind a paywall so I can't see it, but it looks from the references like they provided proper attribution.

        It's unfortunate that some of the articles around it don't mention that, but it seems like the main point of this is the process for building the transistors, and then showing that it can be used to build a complete CPU; it's not about the CPU design itself, for which they just used an off-the-shelf open-source one that is designed to use a very small number of gates.

        • lelandbatey 3 days ago
          Thanks to the Archive.org link, we can see that indeed they link directly to the SERV GitHub in reference 38:

              38. Kindgren, O. et al. SERV - The SErial RISC-V CPU. GitHub http://github.com/olofk/serv (2020).
        • reaperman 3 days ago
          > The paper itself is behind a paywall so I can't see it

          https://archive.org/details/s41586-025-08759-9

  • amelius 3 days ago
    I'm still waiting for that inkjet printer that can print transistors.

    https://www.nature.com/articles/s41598-017-01391-2

    • godelski 3 days ago
      Has anyone tried to replicate this? Seems like it would be very useful for amateur makers/hackers were it not for the $23k printer cost (no idea of the cost of the discussed silver ink). But surely someone crazy has had access to one and tried it, or has tried to replicate it on a cheaper printer? I figure HN has a decent chance of helping find said persons?
      • philipkglass 3 days ago
        It's possible that the inkjet printed transistor is both replicable and impractical for building a full microprocessor.

        The inkjet transistor article says "A total of 216 devices were tested with a yield of greater than 95%, thus demonstrating the true scalability of the process for achieving integrated systems." But 95% yield on the transistor level implies vanishingly low yield at the device level when you need thousands of transistors to build a full microprocessor.

        Even the new MoS2 microprocessor discussed in the Ars article wasn't fabricated all at once. It was built up from sub-components like shift registers containing fewer transistors, then those components were combined to make a full microprocessor. See for example "Supplementary Fig. 7 | Yield analysis of wafer-level 8-bit registers." in the supplementary information:

        https://static-content.springer.com/esm/art%3A10.1038%2Fs415...

        The yield of 8-bit registers, each consisting of 144 transistors, can reach 71% on the wafer.
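
        As a rough sanity check (my own back-of-the-envelope arithmetic, not from the paper): if transistor failures were independent, chip-level yield would be roughly the per-transistor yield raised to the number of transistors, which is why 95% per transistor is hopeless for anything big, while the 71% register yield implies a per-transistor yield of roughly 99.76%.

            # Toy yield model assuming independent transistor failures
            def chip_yield(per_transistor_yield: float, n_transistors: int) -> float:
                return per_transistor_yield ** n_transistors

            print(chip_yield(0.95, 1000))    # ~5e-23, effectively zero
            print(chip_yield(0.9976, 144))   # ~0.71, matching the 8-bit register figure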

        • godelski 3 days ago
          My knowledge of transistors is pretty limited[0]. Does the yield percentage refer to the number of successful chips on a substrate, or to the total number of successful transistors? (Or is it a confusing hybrid term, like rain forecasts?) I believe your comment implies the latter? So the number of successful processors is quite low? How many failed transistors can you have in a working microprocessor? (Probably not an easy question to answer?)

          [0] Am I remembering correctly that this is your area?

          • Out_of_Characte 3 days ago
            Yield would be the number of functioning chips. This may mean bare chips, entire packages, or an even more complex answer where good chips also need to stay below a certain leakage. Cores and caches can be disabled, and the list of potential yield-increasing tooling keeps growing now that wafers cost thousands of dollars.
          • notjoemama 3 days ago
            I'll add that chip designers add redundancy. When errors are detected in testing, they can disable sections of the chip by lasering fuses. That allows routing the circuit through a higher quality area. Quality is measured not only by whether the circuit produces correct data but also by whether it is within tolerances for timing and voltage. IIRC RAM is approximately 10% redundant. A good quality chip will use what meets the spec and leave good transistors unused. A poor quality chip will disable bad ones and only use the ones that meet spec.
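
            As an illustration of how much a little redundancy buys you (toy numbers of my own, not from any datasheet): a memory array with spare rows survives as long as the number of bad rows doesn't exceed the number of spares.

                # Toy redundancy model: array of `rows` rows, each good with probability
                # p_row_good; the die is usable if at most `spares` rows are bad.
                from math import comb

                def die_yield(rows: int, spares: int, p_row_good: float) -> float:
                    p_bad = 1 - p_row_good
                    return sum(comb(rows, k) * p_bad**k * p_row_good**(rows - k)
                               for k in range(spares + 1))

                print(die_yield(1024, 0, 0.999))    # ~0.36 with no spare rows
                print(die_yield(1024, 16, 0.999))   # ~1.00 with a handful of spares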
        • exe34 3 days ago
          If you could print transistors, you could make computers the way Wozniak made them - a bunch of chips with a ton of wiring.
          • chongli 3 days ago
            You can do that easily and cheaply today without a fancy transistor printer.

            You can find Apple II schematics easily enough online. All the chips are common, off-the-shelf parts still available today. You can send the KiCAD drawings (also available) to a company like PCBWay and have PCBs made very cheaply and in small quantity. Then all you have to do is solder in the chips and other components and connect the board to a power supply.

            • doublepg23 3 days ago
              You can even make a Mac SE/30 "from scratch" - it's mind blowing how many PCBs and chips people have made for retro computing. https://youtu.be/zc3sPoqOFG8?si=iIamSEB00mnxfQdL
              • chongli 3 days ago
                Wow, thank you for this! At some point I really want to get my own Mac SE/30. I have a Mac Classic (inherited from my uncle) I still need to work on. This video is really exciting for anyone who wants to fix one of these vintage machines but ends up with a motherboard PCB that's been severely damaged by battery leakage.
            • exe34 3 days ago
              I think the appeal is that you can print out a couple of pages of chips and wire them up, not send out for chips and PCBs.
              • chongli 3 days ago
                You can order a whole batch of chips and wire them up on breadboards without sending away to have a PCB made. The PCB step is the last one when you want to finalize your computer and package it up.

                Ben Eater actually has a free course on YouTube [1] all about building a breadboard computer!

                [1] https://youtube.com/playlist?list=PLowKtXNTBypGqImE405J2565d...

                • fc417fc802 3 days ago
                  There's just a different emotional sense between manufacturing the Lego bricks yourself versus mail-ordering the magic blocks that you can assemble into a finished product.
                  • chongli 3 days ago
                    Lego bricks is an apt analogy. I don't know how many people would actually care to manufacture their own Lego bricks but millions of people enjoy putting Lego together.

                    Sam Zeloof [1] actually went through the exercise of making his own semiconductors from scratch. It's a lot of chemistry and experimentation and quite interesting as an exercise, but not at all practical for building your own computer.

                    Printable transistors would take away the nasty chemistry bits that Sam had to deal with but otherwise wouldn't help much with making practical devices. Computers have a lot of very standard, "Lego brick" or jellybean components. Stuff like muxes/demuxes, shift registers, adders, and the like. These are the components you can buy off the shelf to build your own computer. Building these yourself on giant sheets of paper with a printer might be interesting but you'd get a far less practical, usable computer out of the deal.
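
                    To make the jellybean point a bit more concrete (a toy sketch of my own, not any particular part): an adder, for example, is just a handful of gates wired into a repeating pattern, which is exactly why it makes sense as an off-the-shelf brick.

                        # Toy ripple-carry adder assembled from "jellybean" full adders.
                        def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
                            s = a ^ b ^ cin
                            cout = (a & b) | (cin & (a ^ b))
                            return s, cout

                        def ripple_add(x: int, y: int, bits: int = 8) -> int:
                            carry, result = 0, 0
                            for i in range(bits):
                                s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
                                result |= s << i
                            return result

                        print(ripple_add(100, 55))  # 155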

                    [1] https://www.youtube.com/@SamZeloof/videos

                  • rgzz 3 days ago
                    I don't know much about this topic, but you still need the magic bricks in the printer to make the magic bricks, no? I guess this can be either depressing or relieving; I'm in the former category. I wish you could do this stuff from sand or something, without relying on modern technology. It would be fun.
                    • fc417fc802 2 days ago
                      In my mind the printer counts as a tool so it's a different category. Also you could always do the same by hand with a mask. The feature size might be a bit larger though.

                      As for doing it all "from sand": you can! At least sort of. It's always a question of how far down the stack you want to take it. After all, you probably need to source rare earths from somewhere that isn't your backyard.

                      Check out pictures of the old processes before automated VLSI. It was all done by hand, including crystallizing the silicon. You'll need a clean room and a bunch of weird supplies though.

      • superb_dev 3 days ago
        I don't think they've tried it yet, but it seems up the alley of Applied Science on YouTube.
        • godelski 3 days ago
          I'm not sure this is Ben's forte, but you're right that I wouldn't be surprised if he tries it; he has done some circuit stuff[0,1], so nothing would surprise me from him. (Hi Ben! Love the work!) BUT I do think this is something Sam Zeloof[2] would try. He's done some lithography using a projector[3]. Also there's Jeri Ellsworth[4], but I think she's shifted to mostly working on her AR project. Tons of old videos on that stuff if you're into it.

          Side note: I'm assuming anyone who knows any of these people would be interested that a new Dan Gelbart video just dropped[5]!

            -----------------------------------------
          
          Other side note: @YouTube people (and @GoogleSearch), can we talk about search? The updates have been progressively making it harder to find these types of accounts. People who do *highly* technical things. I get that these are aimed at extremely specific audiences, but this type of information is some of the most valuable information on the planet. Lots of knowledge is locked in people's heads, and these classes of videos are one of the biggest boons to knowledge distribution we've ever seen in the course of humanity. I understand that this does not directly lead to profits for YouTube (it certainly DOES for Google Search), but indirectly it does (it keeps these users on your platform!) and has a high benefit to humanity in general. The beauty of YouTube and Google was that you could access anything. That we recognized everyone was different and we could find like-minded people in a vast sea. The problem search was meant to solve was to get us access to hard-to-find things. To find needles in ever-growing haystacks! Please, I really do not want to return to the days of pre-search. Nor even early search! It should be _easier_ to find niche topics these days, not harder. LLMs aren't going to fix this. This is becoming an existential crisis and it needs to be resolved.

          [0] https://www.youtube.com/watch?v=UIqhpxul_og

          [1] https://www.youtube.com/watch?v=FYgIuc-VqHE

          [2] https://www.youtube.com/@SamZeloof

          [3] https://www.youtube.com/watch?v=XVoldtNpIzI

          [4] https://www.youtube.com/@JeriEllsworthJabber

          [5] https://www.youtube.com/watch?v=OuZjjActWmQ

          • PaulHoule 3 days ago
            LLMs could help if they were specifically applied to the task [1]; however, people are actually applying them to the generation of countless slop videos. Google's problem, which I think there is no cure for, is that Google believes it is #1 and, to quote Fatboy Slim, "We're #1, why try harder?" If in some way they feel they have competition, it is to be a second-rate TikTok, not a better version of what made YouTube great.

            In the meantime, for everybody that's been turned on to something really awesome and creative on YouTube, somebody else got turned on to something really toxic.

            [1] Something significant happens every 10 years in search relevance, and SBERT was one of those.

            • godelski 3 days ago
              I'm highly skeptical. The current ML paradigm is highly reliant on aggregating data, but the issue we're discussing is about distinguishing subtle details over an extremely large search space. Sure, you can probably scale your way there, but even accounting for superposition we're talking about an extremely large number of parameters, because you aren't performing search, you're performing compression. You also need to remember the curse of dimensionality. The problem is that as the dimension increases, the ability to distinguish the nearest neighbor from the farthest neighbor decreases. Effectively the notion of distance becomes undefined. (The dimensionality increases as parameters increase.) So now you have to perform search over your compression.
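
              A quick numerical illustration of that distance-concentration effect (a toy example of my own, not tied to any particular model):

                  # As dimensionality grows, the gap between the nearest and farthest
                  # neighbor shrinks relative to the nearest, so "closest match" means less.
                  import numpy as np

                  rng = np.random.default_rng(0)
                  for dim in (2, 100, 10_000):
                      points = rng.random((1000, dim))
                      query = rng.random(dim)
                      dists = np.linalg.norm(points - query, axis=1)
                      contrast = (dists.max() - dists.min()) / dists.min()
                      print(f"dim={dim:>6}  relative contrast ~ {contrast:.3f}")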

              This is why ML is so fucking cool, but it's also why these models are really bad at details, and why you have to really wrestle with them to handle nuance. It's easiest to see in image generators, but they're much smaller. Do remember that these things are specifically trained so that their outputs are preferential to humans. The result is that errors are in the direction of being difficult for human evaluators to detect. Deciding if that's a feature or a bug requires careful consideration.

              This is not to say that LLMs and ML are useless or trash. They are impressive and powerful machines, but neither are they magic and the answer to everything. We've got to understand the limitations if we're to move forward. I mean, that's the job of us here as researchers, engineers, and developers: using a keen eye to find limits and then solve them (easier said than done lol)

            • ezst 3 days ago
              Again someone mistaking LLMs for knowledge bases. Must be a day ending in `y`
              • PaulHoule 3 days ago
                The original misunderstanding behind "knowledge base" was that, in the 1980s, it was an idea in symbolic AI that you'd develop a set of facts against an ontology designed for accurate inference; somehow by the 1990s it became a text repository with a search engine that may or may not work. Occasionally useful, sometimes hard to distinguish from a trash can. See Confluence.

                Prompt engineers with their decoder models are always going to be wondering why they are always a bridesmaid and never a bride. With encoder models you can attain the holy grail: a system where you put text in one side and get, within calibrated accuracy, facts to put into the first kind of knowledge base. Or, for that matter, a good search engine for the second kind of knowledge base, which could raise it above the "trash can" level.
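
                A minimal sketch of the encoder route (illustrative only; it assumes the sentence-transformers package and an off-the-shelf SBERT model, not any particular production system):

                    # Encode text once into vectors, then search the vectors instead of
                    # prompting a generative model for every question.
                    from sentence_transformers import SentenceTransformer, util

                    model = SentenceTransformer("all-MiniLM-L6-v2")

                    notes = [
                        "Patient reports chest pain radiating to the left arm.",
                        "Follow-up scheduled for routine blood work next month.",
                        "Maintenance crew replaced the hydraulic pump on unit 7.",
                    ]
                    query = "cardiac symptoms"

                    note_vecs = model.encode(notes, convert_to_tensor=True)
                    query_vec = model.encode(query, convert_to_tensor=True)
                    scores = util.cos_sim(query_vec, note_vecs)[0]
                    best = int(scores.argmax())
                    print(notes[best], float(scores[best]))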

                • ezst 3 days ago
                  "Funny" how that is reminiscent of the whole blockchain discussion. If the need is fully satisfied by a "boring" and cost-effective "facts" database, why would an adequate engineer push for (blockchain/)LLM instead?
                  • PaulHoule 3 days ago
                    There were several reasons why "expert systems" were rejected in the 1980s, including competition with programmable calculators and spreadsheets and no correct paradigm for reasoning with uncertainty, but the one most quoted was that the creation of that kind of database is not cost-effective.

                    I spent about 10 years working (sometimes for myself, sometimes for employers, sometimes part time, sometimes as a software developer sometimes as a business developer) on the problem of turning a mass of text into facts into text to solve problems like:

                    - Doctors write copious medical notes from which facts would be useful for themselves, payers, researchers, regulators.

                    - An accounting or legal firm may need to scan vast numbers of documents and extract facts for an audit or lawsuit

                    - An aerospace manufacturer has a vast database of documentation and maintenance notes (even from the teams at the airports) that it needs to keep on top of

                    - A fashion retailer wants to keep track of social media chatter to understand how it connects and fails to connect with customers and answer questions like "should we endorse sports star A or B?"

                    - Police and soldiers chat with each other over XMPP chat about encounters with "the other" which again are rich with entities, attributes, events, etc.

                    Tasks like this need an interactive system, but you face the problem that people have an upper limit of 2000 or so simple decisions [1] in a sustainable day. The problem is large, but it is not "boil the ocean", because you can set requirements for what gets extracted and use the techniques of statistical quality control, as in Deming, to know that accuracy is within bounds.

                    You can give people tools to tag things in bulk, you can apply rules, and you can give people tools to create the rules. I worked on RNN- and CNN-based models, SVM, logistic, autoencoder and other models, and before BERT they all sucked. If you have the interactive framework you can plug encoder or decoder LLMs in, and it is a revolution that makes systems like that much cheaper to develop and run, with better results.

                    [1] hot dog/not hot dog

    • bombela 3 days ago
      I am building one. Right after I find out where to buy liquid semiconductor paste.
    • lsllc 3 days ago
      If it's anything like regular inkjet printers, the liquid semiconductor paste would be all dried up every time you went to use it, or the capacitor cartridge would run out long before the resistor one did!

      /s

  • neuroelectron 3 days ago
    Intel and CEA-Leti Collaboration:

        Intel and the French research institute CEA-Leti are jointly developing 2D transition-metal dichalcogenides (TMDs), such as molybdenum disulfide (MoS₂) and tungsten-based materials, for integration into 300mm wafers. These materials offer sub-1nm transistor channel thickness, making them ideal for extending Moore's Law beyond 2030.
    
    [29 June 2023] https://compoundsemiconductor.net/article/117047/CEA-Leti_an...
  • metalman 3 days ago
    I like where they say "a sheet that is only a bit over a single atom thick, due to the angles between its chemical bonds". It's funny that material science has achieved ultimate precision, but it can only be talked about in general terms. Is there any exact way to describe the thickness of molybdenum disulfide sheets, beyond "a bit over one atom thick"? Clearly they are etching parts of the sheet, and somehow attaching leads, but is it done strictly in two dimensions, i.e. literal flatland?
    • fc417fc802 3 days ago
      > Is there any exact way to describe the thickness of molybdenum disulfide sheets?

      It's the same set of issues that you'll run into if you try to precisely quantify the thickness of a sheet of printer paper. It really depends on what you mean when you ask the question. The geometry of the electron shell, the minimum theoretical width once assembled into the theoretically optimal sheet, the impact of various imperfections in practice, the potential for more than a single layer to exist (and the associated averages), and a number of other things that aren't immediately coming to mind.

      It's an issue of precision on the part of the party asking the question. We usually work on scales that are so large that such details aren't meaningful (if you can even measure them in the first place).

    • roywiggins 3 days ago
      Looks like a monolayer is about a nanometer thick.

      https://www.acsmaterial.com/monolayer-molybdenum-disulfide.h...

  • mikewarot 3 days ago
    I wonder if Sam Zeloof and Atomic Semi are trying this out? It would be an excellent match for their "build in one atom at a time" approach.
  • gcanyon 3 days ago
    Since it's a single molecule thick, could this potentially be stacked thousands, or millions, of layers thick to deliver ridiculous capacity? I assume heat dissipation would be a factor, but the article doesn't mention it.
  • gcanyon 3 days ago
    > It's slow and inefficient

    Is there any reason to think this won't improve with time? The Intel 4004 was "slow and inefficient" too?

  • ChrisGammell 3 days ago
    What is this, an MCU for ant(man)?! It needs to be at least... three times that thick!