The colleague claimed that there is no special magic. It's not that ASML is using some otherwise unknown laws of physics nor is any single step or component particularly special or novel. It's just that they meticulously optimized each step, and the sum of such steps is the winning solution.
In fact, this is probably why it's so hard to copy ASML. If there were a single magic component, a single engineer or a few engineers could be poached away to a competitor to copy it. However, copying a well-optimized company with many simultaneous optima is a much harder task.
Our discussion was in the context of why our quant hedge fund competitor was performing so well, far above the market norm. By nature and design, quant finance is an incredibly efficient field (and most techniques are more or less known by veterans), and we had thought it unlikely that one fund could do so much better. Our conclusion was that this fund must be the well-optimized ASML of our field. My colleague happened to know the founder, and indeed that was his personal impression as well.
Dan Gelbart gives exactly this view in a podcast he took part in: https://youtu.be/UTgrWmOk4q8?si=Zp13SPqN_Vx-kFlq&t=1564
This is small change in the military budget.
This acquisition is also what gives the US Government the ability to veto customers of ASML even today - this is why Chinese semiconductor manufacturing is so far behind: the USG controls who can access ASML's EUV work.
That seems like a potentially very cunning soft takeover, in that case.
Still, I think onshoring is strategically wise in a world where the US is actively antagonizing the EU.
Of course, having competitors is probably a good thing...
There is also the SE, which is an EU form of the AG, and various "partnership" forms that involve a partner that's fully liable. Usually, that partner is not an actual person but a "legal person", i.e. another SE or GmbH.
Even if you're not listed on a stock market, you might want to take on investments, e.g. "give me 10 million for 5% of the company" and I assume the latter is much easier with an AG.
Why Carl Zeiss is an AG I don't know. The West German Carl Zeiss was re-formed as a GmbH in 1946, but had changed to an AG by 1973. The East German Carl Zeiss was turned into a GmbH during reunification and then split in two. One part merged into the West German Carl Zeiss AG and the other is now called Jenoptik. Jenoptik was converted into an AG in 1996 and went public in 1998. AFAICT Carl Zeiss has been privately owned by the Carl Zeiss-Stiftung since 1889, except of course for the temporary East German part.
In most entity types, this involves a lot of paperwork, while it's quite easy within an AG.
AG does not necessarily mean it's publicly traded.
Here in Sweden we have A LOT of companies owned and operated by state and local government, and they're all "aktiebolag", which literally translates to "stock companies". For smaller businesses you can register as a sole proprietor, and there are some other odd structures if you are a group of people. You'll often see the same thing for non-profits as well.
ASML would argue that it's legitimately justified because
a) there is mutual coinvestment (ASML owns 25% of Zeiss semi optics division) and thus there is symbiosis / shared risk not a simple exclusionary supply contract
b) no viable alternative customers exist for Zeiss so it doesn't matter
c) EUV litho is so tightly coupled to optics that having a single supplier is a technical necessity
d) the market was CREATED through innovation and investment across ASML and suppliers, rather than exclusionary conduct (cf. the difference between "a monopoly" and "monopolization")
The affordance of a monopoly also prevents free riding. ASML and Zeiss spent billions of dollars and decades co-developing very specific, custom-tailored technology. If a competitor could simply walk up to Zeiss and buy the lenses that ASML spent billions helping to develop, the competitor would be free riding on ASML's investment - and creating a chilling effect for future innovation.
B can also be an argument against putting exclusivity into a contract. C is just a business decision - exclusivity due to need, not contract.
In the UK, if you are a supplier and lock in an exclusivity deal, and also you are small business, they don't treat you as legitimate business and company revenue gets taxed as employment income (IR35).
I wonder why regulators don't look into that. If they have exclusive deal, are they really in business or is it just some sort of tax structure masqueraded as supply chain?
The history of ASML involves a "failed" company that other multinationals felt they had to keep alive to allow the technologies to continue. And that's saying that the capital investment needed to produce a thing of that scale can't work if it is subject to a yearly profit cycle (or works much more poorly).
The further factor shaping ASML is that as chip technology has grown, the investment required for supporting technology has grown, so only a single supplier can remain profitable, and it seems logical there would only be a single company acting as supplier (maintaining research and expertise in two or three huge companies, only one of which can be profitable at a time, is highly inefficient - which is also why we're down to 1-3 cutting-edge chip makers at this point).
So ASML was economically logical and it being in Europe is perhaps a combination of European tradition and Europe wanting some part of the global chip production system (which is by a fair bit the largest/most-valuable concentration of capital and technology in the world).
https://medium.com/@crcjeffkim/why-these-5-acquisitions-have...
The US doesn't randomly hold export veto power on ASML through unilateral threats. EUV had a lot of tech transfer from the US of the initial research as the sibling comment lays out, and the agreements for those transfers allowed restrictions.
Obviously, there are a lot of reasons why. But it boils down to having the vision, the belief and the strength to follow through over many years. It's important not to confound vision with random Kool-Aid. Instead it's grounded in research. That research is itself grounded in a strong vision and belief - it got laughed at by the entire physics community at the time:
> 'people seemed unwilling to believe that we had actually made an image by bending x-rays, and they tended to regard the whole thing as a big fish story'
Now contrast this with the current academic reality - "publish or perish" - and the reality of venture financing and corporate culture that "depends" (arguably in a self-inflicted manner; it's not 100% the case) on quarterly reports.
ASML is just a recent story, but if you look back, you'll see that most revolutions have a similar pattern of people crazy enough to deviate from the herd.
The rest - the immense financial risk, the 5000 suppliers, etc. - came as a result of having the ability to see through all the noise, and the grit to follow through when everyone calls you an idiot for not doing something "useful".
However, if you start with the assumption that at some point, people are going to need a lot of fast parallel compute for something, you could rationally justify their long-term strategy. They skated where the proverbial puck was going. They couldn't see the puck, but they were pretty sure there was one. In hindsight that really does look like a safe bet.
NVIDIA just had their eyes open to an obvious market demand and made it easier by creating CUDA.
Certainly Jensen seemed to have an extremely long view on this burgeoning machine learning market in the early 2010's.
And importantly, that vision being correct. The graveyard of history is full of the dedicated yet incorrect.
Basically, ASML has an incredible pan-European supply chain of the sort of stuff Europe does best. Deep tech, advanced precision manufacturing, that sort of thing.
I know it's popular to poo-poo Europe around these parts, but those are the sort of thing we genuinely are the best in the world at. Technology isn't just shiny apps and LLMs. It's also this sort of thing, and the shiny LLMs wouldn't work without it.
Zernike, a Nobel prize winning physicist who worked on optics, was also Dutch, and developed the original Zernike theory of aberrations in the mid 20th century. This was a vast improvement over previous theories as it was far more useful for optical design and analysis.
So the Dutch have a rich history of developing the most advanced physical theories for optical engineering (all the way back to Huygens even)
Previously, in the context of Apple, I likened this to becoming a chess grandmaster: all you have to do is make the optimal decision every time you make a move, over and over again, for years.
People don't like hearing that there isn't One Weird Trick which you can just copy, but it's the reality of these situations. To the extent that they can be analyzed, the best people to send are often anthropologists to look at the decision making culture. Culture is even harder to copy; this was a factor in the difficulties of TSMC Arizona starting up, despite it being literally the same company it's not the same people.
An orthogonal question is what makes sense as a measure of complexity. One could use "number of parts" (whatever that means): NASA says the Space Shuttle has 2.5 million moving parts, while the article says the ASML machine has over 100,000 components. Another issue is how to deal with composition. A TSMC fab is obviously more complex than a lithography machine since it contains a lithography machine, but maybe the fab doesn't count as a "machine". Another issue is complexity vs parts: a 32-Gb DRAM chip has about 68 billion transistors and capacitors, but it's not extremely complex, since it's mostly the same thing repeated. And then there's the question of distribution: can you really count the Internet as one "thing"?
It's kind of pointless to fret about whether it's "the most complex" like there's an objective 1-dimensional ranking that even has utility.
I can't remember if it was an ASML representative that said that, or if it was an overlaid asterisk that popped up on the screen at some point - but I definitely remember thinking about the space shuttle and Saturn V/Apollo and those sorts of things before I saw the qualifier.
Also, I think the axis it's probably most complex on is precision of individual parts and of their combination. Arguably chips themselves are more precise, as their 'parts' are so small, but they are much more homogeneous compared to the EUV machine, where tons of different materials and part sizes need to combine.
Each one of these machines costs half a billion dollars and is protected by some of the most stringent export controls on the planet.
It does raise an interesting philosophical question: if I bolt two ASML lithography machines together, is the resulting machine more complicated?
A big part of it is the secrecy itself. Things get difficult when you can't communicate. Your pool of candidates for the job is limited: you may not want people with foreign connections, some people don't want to work for the military, don't like the paperwork, don't like the idea that they can't market their skills for another job, etc. In addition, military technology is supposed to work on the battlefield; you don't want delicate stuff there, you want rugged, repairable, proven, reliable.
I think the reason secret military stuff appears so advanced, besides the aura it projects, is that it deals with fields that are underrepresented outside of the military. Like stealth, for instance. Stealth is of limited use outside of a conflict. So of course the stealth package of a nuclear submarine will be much more advanced than the almost nonexistent civilian stealth technology. But for things that are relevant to civilians, like the reactors, engines, etc., I am sure that what's in subs is relatively simple, and probably dated.
It seems like submarine propeller designs are all classified past 1960, even though quiet and efficient propellers are pretty relevant to civilian ship design:
The thing about military stuff is that generally the budget is large and the goal is to design something better than what the enemy has. The civilian world for a long time wasn't willing to blow hundreds of thousands of dollars on ASICs to control phased-array radars; the military was. Now as a result of lots of military investment, the technology is so well-understood that Google put a phased array on a chip inside the front of the Pixel 5.
> In addition, military technology is supposed to work on the battlefield, you don't want delicate stuff there, you want rugged, repairable, proven, reliable.
What you want is stuff that wins fights, and it only needs to be repairable and reliable insofar as it wins fights. The US has the F-22, which is an ultra expensive jet that only has ~60% uptime. In war games, it achieves kill ratios of 100:1, so the military is more than happy to keep it around. When the US raided Osama bin Laden's compound they sent brand new stealth helicopters even though they knew the platform was less reliable.
I used to work for a military contractor.
The stuff we would get back from the field looked like it had been fed through a wood-chipper, and this was peacetime (the 1980s). They had these special field racks, with a rackmount suspended inside a huge plastic box (with front and back panels). Didn't save the units inside, though. A lot of the time, they were torn off the racks and rattling around inside the container.
The kit was not cheap. Our standard units (a super Bearcat Scanner, basically) cost about $40,000 USD (1980s USD). They were 2-4U units, and the racks usually had five or six of them.
There's an urban legend about Admiral Rickover. His office was on the second floor of the Pentagon. If a salesgoblin came in, with sample kit, it was said that he walked over to his window, and dropped it outside. He then said "If it still works, we'll talk."
So they are not Cloud-native and there is no Slack?
/s
:-D
>aircraft carrier
Having served on both, this is actually a pretty interesting comparison (at least to me).
Carriers are simply larger, so they likely win by scale, but I'm not sure on a more per-(sub)system basis.
Carriers have a lot of aircraft handling systems that subs don't: elevators and hangars. Also the carrier has group C&C stuff.
Subs have a lot of stealth systems carriers don't (carriers being visible from space). Lots of dive-related stuff, O2/CO2 handlers.
They both have weapons systems, HVAC, propulsion, distillation, steam generators, reactors, air compressors, and many others.
Not obvious to me which one is more complex!
In either case, the secret design has the same effect, but sub secrets are the top of the top of top secret. Spies that leak sub secrets spend a long time in Leavenworth.
It said that in terms of the complexity required to construct it, the ASML TWINSCAN EXE machine is much more complicated - especially since much more fresh research was required to achieve its nanometer structures. A Falcon is a complex vehicle, but measured by "how much do we need to know to create it on an industrial scale", the ASML device seems to be more complex.
Otherwise we might as well say the ASML machine is in orbit around the galactic center.
Googling for total part count also comes up with the 2.5M number. They move WRT Earth, but the vast majority do not move with respect to each other, is my guess.
For a sanity check comparison: Saturn V estimates are ~5 million total parts, and "tens of thousands" of moving parts. A ratio that sounds sort of normal.
I don't know, but number of parts doesn't seem like a good measure. I feel that complexity should be measured in bits, but how to tie that to something real, I don't know. Maybe the amount of knowledge needed to reproduce the machine? It is hard to measure though, because knowledge in people's heads can't be measured precisely; we can estimate it, but it will be a very rough estimate.
But the knowledge by itself is not enough, because there are difficulties in production that pure knowledge can't solve; you also need specialized equipment and source materials, and arguably that adds to the complexity too.
Or we can try a completely different angle: how about the reaction of the machine to small perturbations? Like, if I unscrew this bolt, how long will it take for the machine to explode? xD
I mean, I'm not really an engineer, but I have experience as a software developer, and subjectively the complexity of code is when you can't predict at all what will happen if you change this line of code. Maybe that can be taken as the basis for a measure?
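The "complexity in bits" idea above can actually be played with. One crude proxy for description complexity is compressed size: a highly repetitive design (like a DRAM array) compresses far better than an equally long but heterogeneous one. This is only a toy sketch in Python - the inputs are made-up stand-ins, not real part lists:

```python
import random
import zlib

def complexity_bits(description: bytes) -> int:
    """Approximate description complexity as compressed size in bits
    (a rough stand-in for Kolmogorov complexity)."""
    return 8 * len(zlib.compress(description, level=9))

# A design that is mostly the same thing repeated, like a DRAM array:
dram_like = b"memory cell;" * 100_000

# The same amount of "description", but with no repeated structure:
random.seed(0)
euv_like = bytes(random.randrange(256) for _ in range(len(dram_like)))

# The repetitive design needs far fewer bits to describe:
print(complexity_bits(dram_like) < complexity_bits(euv_like))  # True
```

This lines up with the DRAM point upthread: 68 billion near-identical cells carry much less design information than 100,000 heterogeneous parts.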
Off topic: Does it blow anyone else's mind that a DRAM chip has more transistors on it than there are humans on the planet?
Yes.
Accepting these facts is one thing but "getting over the astonishment" never happens for me.
I don’t know that many people would classify the Space Shuttle as a machine. It doesn’t make anything.
The space shuttle can maybe be thought of as a collection of machines working in concert, but thinking of it as ONE machine renders the meaning of "machine" less useful.
A machine on the other hand has its roots in its mechanisms. It physically transforms something by applying mechanical power, and that's not necessarily done for you (e.g. printing device VS printing machine).
Whether a device can be composed out of many smaller devices, or whether a machine can be composed out of many smaller machines just doesn't seem to be relevant. That being said, language evolves with time and certain concepts find some overlap in general usage.
A machine is almost always a device, but a device isn't always a machine. A fancy earring can be a device, but it is clearly not a machine.
Wrt the space shuttle, I would take some issue because you could say it's not just one machine, but a collection of many, for example it probably has onboard computer systems that are not always in use. It would be a bit like saying that a whole factory is "a machine". Whereas the ASML devices serve one single clear purpose.
I think, if one were used to calculating cyclomatic complexity, such a headline is not only amusing but also fascinating, even if it is 'wrong' by .. some value system .. because the thought exercise of coming up with a more cyclomatically complex machine is rather a fruitful challenge. And that is why writers should be allowed to editorialize, because .. after all .. this is a thought-provoking article, isn't it ..
However, let us continue to postulate there are other forms of complexity that can be measured - what would you suggest are the other 3 or 4 contenders for the title?
ASML: Complexity as a strategic resource
ASML: The most hard to reproduce machine in the world
ASML: One of Europe's most complex strategic resources
As the comment you're replying to just said: A) it's qualitative, and B) it's perfectly fine to glaze the subject a bit in journalistic writing. It gives the article a quick hook to get readers interested, and if you actually read the article, it becomes the least interesting thing about it.
And now that I've said that: I'd argue that if you consider the full "embodied complexity" of this machine's product lifecycle, it's hard to think of much else that compares to it. E.g., consider not just the machine itself, but also all of the R&D needed to get it to this point, the amount of field experience necessary to make it maintainable and reliable, and the engineering and supply-chain work necessary so that you can reliably ship them to customers around the world. While still being far, far ahead of all your competitors?
>ASML started off life within Philips, the Dutch consumer electronics giant.
Who started with light bulbs, which used electrons for direct visual and UI/UX purposes. Some of the most simple electronic components, but quite a bit like appliances themselves. No surprise that a lamp in English means either a bulb, an appliance, or both.
Vacuum tubes were the next step up in complexity and I guess you can take it from there.
In the early radio days it didn't take too many "ampules" to make a radio. Not nearly as complex as a cellphone, but bizarrely more complicated than a light bulb already.
The Edison Effect turned out to be a very strong force after all :)
At one time every building that had electronics, had vacuum tubes. When you moved a radio or TV set, you were carrying your own little vacuum chambers with you from place to place, even as late as CRT's.
With solid-state electronics like this, the vacuum chambers are much bigger, but are only located in a centralized factory process, so you don't have to carry them around with you if you want to be portable.
You wouldn't want to anyway, look how heavy they have gotten ;)
The search response also said MEMS oscillators (modern performance replacements for quartz crystals) use a high vacuum but given the iPhone 8 was famously "bricked" by Helium affecting a MEMS oscillator - I'm unsure about that!
https://www.vice.com/en/article/why-a-helium-leak-disabled-e...
https://en.wikipedia.org/wiki/Microelectromechanical_system_...
So, a (badly-designed) MEMS oscillator is basically a region of vacuum enclosed by a membrane that only helium can permeate. Of course exposing it to helium is going to change its behavior! And once the helium gets in, it leaves only slowly: escape is driven by the helium partial-pressure gradient through that same barely-permeable membrane, which is reportedly why affected phones took days to weeks to recover.
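For intuition, here's a toy first-order permeation model (an illustrative assumption for this sketch, not measured MEMS data): the helium partial pressure inside the sealed cavity relaxes exponentially toward the helium partial pressure outside, with some rate constant.

```python
import math

def p_inside(p_out_he: float, rate: float, t: float, p_start: float = 0.0) -> float:
    """Internal He partial pressure after time t (arbitrary units),
    relaxing exponentially toward the external He partial pressure."""
    return p_out_he + (p_start - p_out_he) * math.exp(-rate * t)

# During a leak: lots of helium outside, so the cavity fills quickly.
filled = p_inside(p_out_he=1.0, rate=0.5, t=10)

# Afterward: ambient air is only ~5 ppm helium, so the same law drains
# the cavity toward that tiny value - and slowly, if the rate is small.
drained = p_inside(p_out_he=5e-6, rate=0.01, t=10, p_start=filled)

print(round(filled, 3), round(drained, 3))  # prints 0.993 0.899
```

Fast in, slow out: the asymmetry comes entirely from the made-up rate constants and the near-zero ambient helium level, not from total atmospheric pressure.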
I remember when MEMS started getting capable of micro fluid handling, but wasn't aware of what they were doing with "lack of fluid" :)
Really bringing the little vacuum chambers out of the big vacuum chamber and onto the street.
Over the decades I have often thought about doing something like that, but not really micro.
Now I'm suspicious about something I hadn't considered before at a previous employer's chem lab. Last month, when they called me back in, there was inconsistent oscillation being applied (sometimes not) to a key analog sensor on one instrument, which is about the same vintage as the iPhone 6, where the problem showed up with the early MEMS oscillators. These instruments have been there for over a decade and it may be some other electronic problem, but it does coincide with a couple of additional gas analyzers they brought in a few months ago, which are now wasting about 5x more helium than I was using when I was controlling it. They exhaust into one lab, but the ventilation system isn't completely isolated.
Now I know something to try next time and that's not even why they called me in this time.
This could actually be one of those problems that shows up intermittently depending on which way the "wind" blows ;)
Sometimes the best way to fix things is to wait until you're smarter :0
Thanks to all for very valuable info and links.
I don't have a question about ASML or the machine in particular, but I am curious about your thoughts on something: I've recently noticed a fair bit of media (blogs, YouTube videos, TikTok clips) about the same thing: this machine and the EUV process. Do you think interest in this topic is just a coincidence or did something happen to cause these different content creators and authors to do a piece on it at around the same time? What caused you to do a piece on this now?
Even before you get to the lithography machine, you need silicon. For a long time we've known how this is done: you create what's called a boule, a cylinder of almost pure silicon formed by seeding molten silicon with a crystal and slowly growing it. You then cut the boule into the silicon discs we often see. That machining and polishing itself has to be super-precise.
But I can remember when the tolerance for impurities was 1 part per 300 million. I read recently that even 1 part per billion is now too impure. And that makes sense. The biggest chips are what, 80 billion transistors? I seem to remember Nvidia makes chips in that range (or rather, TSMC does for Nvidia). At 1 ppb, that might make ruining your chip just too likely.
So my point is that there's a whole industry to make super-pure silicon which itself took amazing advancements and without that this machine would be a lot less useful.
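A quick back-of-envelope on that 1 ppb figure (all numbers here are rough public values I'm assuming for illustration, not anyone's official data): even at 1 ppb, a large die still contains on the order of a trillion impurity atoms, which gives a feel for why the purity bar keeps rising as transistor counts grow.

```python
# Rough figures, assumed for this sketch:
SI_ATOMS_PER_CM3 = 5.0e22   # atomic density of crystalline silicon
die_area_cm2 = 1.0          # a large die, roughly 1 cm^2
die_thickness_cm = 0.03     # ~300 um wafer thickness

atoms_per_die = SI_ATOMS_PER_CM3 * die_area_cm2 * die_thickness_cm
impurities_at_1ppb = atoms_per_die * 1e-9  # 1 part per billion

print(f"{atoms_per_die:.1e} Si atoms per die, "
      f"{impurities_at_1ppb:.1e} impurity atoms at 1 ppb")
# ~1.5e21 atoms per die, so ~1.5e12 impurity atoms even at 1 ppb
```

Most of those impurities sit in the bulk rather than in an active device region, but with ~80 billion transistors per chip, the odds of one landing somewhere critical are what drive the spec.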
Another part that amazes me is just how pervasive multiple layers on chips have become. I can remember when that was novel. The upper layers are made by cheaper machines with EUV reserved for a transistor "base layer" where all the interconnects really are.
It's amazing just how many problems had to be solved to make this possible.
The more straightforward video of ASML EUV is from Branch Education: https://www.youtube.com/watch?v=B2482h_TNwg
Because that vid gives an overview of the whole machine, it gives context to what each scientist is talking about in the Veritasium interviews.
And it's a source of serious hazardous waste products. It's a laser-driven tin-plasma light source, operating in an ultra-pure vacuum, on an unbelievably high-energy band (even laser "lines" have definable bandwidths). There's really not a lot of wiggle room in materials selection for the laser.
In this case, it's the latter.
The answer is that some people do quit and retire early, but even more are attracted to that career like moths to a flame, and work until they can't.
I do think they should raise pay for their existing employees at the same time. In fact, they should tie the compensation to progression in skill and experience, so that people who just came for the money and aren't cut out for the work or aren't in it for the long haul aren't attracted to the job. That's basically the traditional model anyway.
And yeah paying employees well might cost a bit of money (but really, not that much in the scope of things). If talent is their production bottleneck, it will be well worth the expense.
Software industry has always been plagued by attrition. Some companies pay well and mostly employ younger people. Older employees eventually filter out, either because they have already earned enough and prefer better work-life balance, or due to ageism. And then there are occasional downturns, where many people lose their jobs, can't find new ones, and end up leaving the industry permanently.
People generally prefer careers with multiple viable employers - not just in the world, but also in the same metro area. That way you are less likely to get stuck in a job you don't like. But if you are an employer with unique requirements - skills that take years to learn and that people are not likely to acquire on their own - then you may need to pay ridiculous money (more like AI than FAANG) to substantially widen your talent pipeline. And if you pay ridiculous money, you risk ridiculous consequences.
And if they do not, they risk ridiculous consequences. If Zeiss cannot afford to pay more, then it is underpricing its products.
Also, it sounds like the entire premise is "people don't want to work because they're not being paid enough" which is enough of a good reason by itself.
Is this the correct term? Why do these long radio waves have the name "London"?
Unfortunately all that I get googling the term is a guide to local FM station frequencies.
Is this just restating the size of the same shipment three times?
“Retaining the best workers is especially crucial in an area like photolithography, where a huge amount of tacit knowledge is used to assemble its machines. An ASML engineer once told He Rongming, the founder of Shanghai Micro Electronics Equipment, one of China’s top ASML competitors, that the company wouldn’t be able to replicate ASML’s products even if it had the blueprints. He suggested that ASML’s products reflected ‘decades, if not centuries’ of knowledge and experience. ASML’s Chinese competitors have systematically attempted to hire former ASML engineers, and there is at least one documented case of a former ASML employee unlawfully handing over proprietary information. But none of this appears to have narrowed the gap.”
This does not mean that China cannot or will not have something similar at some point, probably sooner than later.
Either China will catch up on this or that particular technology will become obsolete. But it is certain that they won't stay behind forever (measured in a small number of decades at most).
What is far less certain is what ASML will be able to do at that time, i.e. if they will be able to progress significantly over the state-of-the-art of today, or they will reach a plateau.
Besides China, there is a renewed effort in Japan to become competitive again, so ASML may face in the future both Chinese and Japanese competitors.
You might place an upper limit using history but in this case I'd guess that limit would end up being much larger than the present semiconductor industry itself might last.
There is a level of arrogance in the West that China does cheap but simple/low quality whereas this is only a stepping stone along the way. German car manufacturers went into China during the 90s with that mindset, and expecting it was forever, well they don't think that anymore...
On paper, EUV is a relatively modest undertaking vs commercial aviation - EUV is deep integration vs commercial aviation's breadth - but in terms of the scale of effort needed for nation-state coordination, EUV is probably, all things considered, easier to replicate, because it has no regulatory slowdown; it's purely a physics problem for the host country. Having enough talent and throwing it at the problem x espionage x poaching talent x time will likely solve a precision-physics problem sooner rather than later. Commercial aviation, by contrast, has complicated geopolitical/regulatory hurdles and magnitudes more suppliers and scale. TLDR: EUV has a smaller organizational surface area for a determined state to attack by concentrating $$$, talent and effort. You can buy an ex-ASML engineer to bootstrap EUV development; it's much harder to get the globe to buy COMAC without decades of airworthiness. There's a reason western analysts predict PRC EUV in the 2030s (meanwhile the PRC already beat prototype timeline estimates) but probably not a global COMAC in the same timeframe - and the PRC has been hammering at commercial aviation seriously since long before EUV.
Of course, doing it "legally" is another question - someone in the US trying to replicate would likely run into patent and other issues.
But a top-secret Manhattan-style project done by the US or China? definitely doable, and if you add spy-shit in, perhaps even faster.
However many secrets are involved, information wants to be free, and it's hard to believe that others won't figure it out.
By the time they do catch up, we'd better be steps ahead. What's after EUV?
"With all the problems we have getting this to work? We ought to ship our drawings to our competitors to slow them down!"
Very tongue-in-cheek, but... yeah. The entire machine underwent a massive overhaul when it was discovered that bare, unoxidized titanium in the presence of elemental hydrogen would absorb so much it became brittle. Who knew? Maybe some few chemists, but none worked in ASML design, as it happened.
- ASML's High-NA EUV machines ready for high-volume production
- Machines have processed 500,000 wafers, showing technical readiness
- Full integration into manufacturing expected in 2-3 years, ASML's CTO says
After that, it may be X-rays.
A disruptive step would be to move to 3D printing, but that (among other issues) is too slow at the moment. Maybe, ideas from nano robotics (https://en.wikipedia.org/wiki/Nanorobotics) can help there.
The lithography equivalents of that are laser direct write lithography and e-beam lithography. They've been used for decades in research labs, but they're impossibly slow for any mass production.
Atomic Semi are trying to make some derivative of these processes happen at a commercial scale.
Even leaving size aside, I don't think there's any credible way to 3D print something that complex.
Lithography enables that level of complexity because each layer is done in one go. I think any alternative technologies would have that property, too.
Well, even jet engine manufacturing is something that China is (relatively speaking) behind in, and it seems simpler than some of the stuff in EUV machines.
It probably is. But it's probably in the same category of being one of the most difficult things to manufacture.
I can understand why you can't just take one apart and copy it.
There's (apparently) 4 decades of accumulated cutting edge scientific research that has gone into these machines.
I suspect the machinery, process and human expertise required to simply produce the parts required for these machines is the real moat (oh and I guess the US-led export controls too).
The build tolerances for components are incredible. There are 11 primary mirrors in an EUV machine, and each one has something like 100 coats of ultra-pure materials, deposited in nanometer-thick layers with thickness tolerances in the picometers, across a 1-meter-wide curved surface.
Then you have to position the mirrors perfectly inside the machine, again with tolerances in the nanometers.
So even if you know what you need to do, having the equipment and expertise to do it is a different thing.
And that's just one part of the 100,000+ parts that make up an EUV machine.
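For a sense of scale, a back-of-envelope sketch of the multilayer-mirror physics. EUV mirrors are Mo/Si Bragg reflectors; the incidence angle below is an assumed typical value, not an ASML spec:

```python
import math

# EUV mirrors are Mo/Si multilayer Bragg reflectors. First-order Bragg
# condition at angle theta from normal, ignoring refraction corrections:
#   2 * d * cos(theta) = m * wavelength
wavelength_nm = 13.5   # EUV light
theta_deg = 6.0        # assumed near-normal incidence angle
m = 1                  # first diffraction order

d = m * wavelength_nm / (2 * math.cos(math.radians(theta_deg)))
print(f"bilayer period: {d:.2f} nm")   # ~6.79 nm per Mo/Si pair

# Even a 10-picometer error in the period shifts the reflectance peak:
d_err_pm = 10
shift = 2 * (d_err_pm / 1000) * math.cos(math.radians(theta_deg))  # nm
print(f"peak shift for {d_err_pm} pm period error: {shift * 1000:.1f} pm")
```

With ~50 bilayer pairs (i.e. ~100 coats, as the parent comment says), picometer-level control of each layer is what keeps all the reflections adding up in phase.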
But in this case the Chinese will just develop their own alternative, which might work as well or even better.
There's sometimes implicit knowledge in a technique that either doesn't get written down, or someone is so good at something that you don't think certain details will matter.
In my old lab (biochemistry) some people just have good hands and are really good at making something repeatable, others not so much.
Makes one wonder: would we be much better off or worse off if we reshaped society to do more of the things where a new technology is unlikely to work but highly beneficial in the limit? Would we sooner have 10 additional ASMLs, or waste a lot of resources?
What is no longer mentioned is that ASML made another big gamble at the time they started on EUV. They decided to make an all-in-one chip making machine that took silicon and output chips (instead of matrices of chip circuits laid out on a wafer).
On paper, the machine would save the fab houses a lot of money. IRL: no one asked for it, and no one was willing to risk their entire production on a single, untried, Swiss-army-knife of a fabricator.
The whole program was a wash. People were reassigned and the project died a very quick death. ASML lost a ton of money on this misguided attempt, but not enough to choke them.
So, they rolled the dice twice, and one gamble paid off handsomely. If it went the other way, they'd be a smaller company, and Moore's Law would be overshooting reality. If neither paid off, they'd be DOA.
I maliciously searched for examples of the French thinking they invented the transistor.
Turns out the French do have a claim to inventing it simultaneously (at the same time as Bell Labs), and they even commercialised their version, since their tech had different parameters. (Apparently it was two German inventors working for Westinghouse in France on a project for French telecoms, as I recall.) The history of invention/commercialisation is usually weird.
So the Germans invented it! JK
I like how you put this. It always seems weird to me how some people get hung up on these claims when it's so obvious that history is full of basically simultaneous inventions.
It seems obvious that the answer would be both? All of these things are "bets", at almost every level.
What happens is someone comes along and notices the 9/10 failed attempts and decides that the right thing to do is only to make bets which are guaranteed to win. So they get smaller and smaller.
> reshaped society
Invalidate all of ASML's patents = get cheaper chips, sooner.
It is intellectual property which gives some of us the ability to build these things and sell them to others - get rid of this phony concept and we can have more nice things...
What is happening with ASML now, once happened with the wheel.
Think about that.
You're basically saying "ASML's entire production line is worthless unless it is rare and coveted", which is .. obviously not true .. because of course the output is immensely useful.
The world needs more chip fabs, not fewer. A properly scaled chip fab in places like Broome or Santiago, or .. indeed in orbit .. would go a long way to sorting out the world's fires.
The thing stopping us is the international, imperial system of patents and intellectual 'property', which makes nation states subservient to each other on the basis of ideas.
The ideas could be spreading far and wide, but we humans are keeping them in our cage, in which the only reward is having other cages to extract wealth from ..
If everyone could make these machines, there'd be more of these machines.
There are so many examples of this out there, already, that I find this specious "no next generation" argument to be either simply coming from bias, or ignorance.
For sure, we only care about Taiwan because there is one Taiwan. End patents: no more Taiwan problem.
My post is in violent agreement with this, for this generation of machines.
ASML spends ~$5B annually on R&D with the expectation that they will be able to make ~30% net profit in the future. If you remove patent protection, there will be more competition and obviously profit margins will fall.
I want to rephrase that for emphasis. The point of aa-jv's post was that we would get cheaper chips by invalidating IP. Cheaper chips means lower margins (because you have not lowered input prices). Lower margins was the explicit goal, so to the extent that the changes in IP law work, you will get lower margins for companies like ASML.
At that point, you have a field of companies looking at (say) 10% net returns, still needing to invest billions of new capital into R&D every year. Worse: with no patents, Company A could spend $5B on R&D and Company B could spend $0, and both could reap the benefits of Company A's $5B. So it's not even clear that the industry would see much net innovation.
Are we even certain there are companies who would enter this capital-intensive business assuming IP was free? Compulsory licensing is a thing, but I am not aware of that even being something that has been requested.
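The free-rider arithmetic can be made concrete with a toy sketch. All figures here are invented for illustration, not real ASML financials:

```python
# Hypothetical illustration of the free-rider problem described above.
# Every number is made up for the sketch.
rd_cost = 5.0        # $B/yr spent by Company A on R&D
market_value = 12.0  # $B/yr of gross profit the resulting tech enables

# With patents: A captures the whole benefit of its own R&D.
with_patents_A = market_value - rd_cost        # 7.0

# Without patents: B copies for free; suppose the benefit splits evenly.
no_patents_A = market_value / 2 - rd_cost      # 1.0
no_patents_B = market_value / 2                # 6.0

print(with_patents_A, no_patents_A, no_patents_B)
# A's best response without patents is to cut R&D too - and then
# nobody funds the next $5B generation.
```

The exact split doesn't matter much; as long as copying is cheaper than inventing, the copier out-earns the inventor.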
But a five-nanometer node has a gate pitch of 45nm and a metal pitch of 20nm! Using different forms of the word "nanometer" in the same paragraph is very confusing...
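A quick sanity check using the pitches quoted above shows how far the node name has drifted from any physical dimension (the "marketing factor" framing here is mine, not an industry metric):

```python
# Using the pitches quoted in the parent comment for a "5 nm" node:
gate_pitch_nm = 45   # contacted gate (poly) pitch
metal_pitch_nm = 20  # minimum metal pitch

# The smallest repeating footprint per transistor site is roughly
# gate pitch x metal pitch:
cell_nm2 = gate_pitch_nm * metal_pitch_nm
print(f"~{cell_nm2} nm^2 per transistor site")   # 900 nm^2

# If "5 nm" described an actual feature, a site would be ~5 x 5 = 25 nm^2.
print(f"marketing factor: ~{cell_nm2 / 25:.0f}x")  # ~36x
```

Nothing in the layout is 5 nm wide; the name is a shorthand for "roughly the density an ideally scaled 5 nm planar process would have had".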
since replicating EUVs is close to impossible.
https://www.rapidus.inc/en/news_topics/information/rapidus-b...
https://www.semimedia.cc/18196.html
If China and Japan are currently working on it, certainly South Korea is not far behind.
As a software engineer by trade, the above parable communicates two very important things to me, and little else by comparison: the machines are ultimately fragile and nowhere near "optimised", since the complexity is, by their own admission, substantial to put it mildly; and the machine is not exactly a commodity - any one of its million pieces breaking subtly likely renders it inoperable, and its cost is proportional to its complexity (read: astronomical). The mere fact that it's a focal point of geopolitics only supports the rest of the argument: it's a machine of the current stone age, much like siege engines were at some point the closely guarded, win-or-lose multipliers of feudal culture.
I mean it's certainly interesting to read about the complexity, but reducing the complexity and commoditising the whole thing is what's really going to be impressive I think :-)
I am probably speaking out against the nerd in us, and none of what I said should detract from enjoying the article or the subject. It's just that I think the complexity here is the giveaway that we haven't exactly conquered EUVL, not quite yet :-) Or maybe we lack the right materials that would let us shrink the machine, make it less complex, or make it less prone to calibration errors.
The one 'machine' encompasses more disciplines than most universities offer. It's really a whole bleeding edge factory compressed into a room.
What do you think "cutting edge" is, or Moore's law has been?
At one point you could have written a similar article about, say, 165nm, which is now going to the scrapyard. In the past these things have always gradually got more available and easier, with higher yields - but a new, better one appears.
But at some point we're going to reach an equilibrium with physics itself. Where, even with all the complexity we can muster, it's not possible to make it easier or get smaller.
What is the corresponding revolution in chip production? I imagine something like FPGAs for lithography - a wafer that can somehow work on another wafer in a sandwich-like configuration. Such a process could potentially improve on each iteration and thus get very good, very fast.
I mean we're not talking AMD FX and Core 2 Duo here, it's Raptor Lake and Zen 3, it's perfectly viable and still being sold in droves right now.
There's also the issue of older process nodes no longer being profitable enough, which explains why, at the height of the chip supply crunch, older ARM chips were in short supply but there was ample stock of the 40nm-class RP2040.
I don't think I'm being entirely hyperbolic when I say the consumer market only exists to put devices that can connect to and feed the datacenter loads into the general population's hands.
And sure, a chip layout can be shrunk; but that requires a whole new recertification cycle.
Plus, arranging the space could take years.
Heat dissipation in the megawatt range could simply be prohibited by local regulations.
So space in large cities is a very serious problem, and for business it is usually easier to "compress" as much computing power as possible into one rack.
There's little need to put large datacenters in downtown Chicago and Manhattan.
Surely you don't believe that the entire chip industry had not thought of "wait what if we just make the chips bigger".
Same reason that so much work was put into increasing wafer diameter over the decades.
More chips per wafer means a lot - far more than any performance consideration.
Also a big problem: connectivity - you cannot place a DC where it cannot be connected to the power grid and to a very high-capacity network.
So yes, DC floor space is severely limited.
And the third issue: over the last decades, rack servers have come to dissipate extremely large amounts of heat - I hear numbers up to tens of kilowatts per rack - which is just hard to remove with air cooling (as an example, all IBM Power servers have a liquid cooling option, but that is a totally different price range).
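To see why tens of kilowatts per rack strains air cooling, here is a rough energy-balance sketch (all values are typical assumptions, not vendor specs):

```python
# Airflow needed to carry away rack heat with air:
#   P = rho * V_dot * cp * dT
power_w = 20_000   # 20 kW rack, per the "tens of kilowatts" figure above
rho = 1.2          # kg/m^3, density of air (assumed)
cp = 1005.0        # J/(kg*K), specific heat of air
delta_t = 15.0     # K, allowed inlet-to-outlet temperature rise (assumed)

flow_m3s = power_w / (rho * cp * delta_t)
flow_cfm = flow_m3s * 2118.88   # 1 m^3/s = 2118.88 CFM
print(f"{flow_m3s:.2f} m^3/s  (~{flow_cfm:.0f} CFM)")
```

Over a cubic meter of air per second through a single rack is a lot of fan power and ducting, which is why liquid cooling starts to look attractive at these densities.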
Yes and no. If you just calculate formally, yes, servers are a small market by volume. But they are much less financially constrained than private persons, so from the same fab one can earn much more money selling to the server market than to the consumer market.
But since then the prices of server CPUs have ballooned and now their performance per dollar is many times worse than for desktop CPUs. Server CPUs have very good performance per watt, but the same performance per watt is achieved with desktop CPUs by underclocking them.
The only advantage of server CPUs is that they aggregate in a single socket the equivalent of many desktop CPUs - not only the aggregate number of cores, but also the aggregate number of memory channels and PCIe lanes. Thus a server computer becomes equivalent to a cluster of desktop computers interconnected by network interfaces much faster than typically available Ethernet links.
While for embarrassingly parallel tasks a server computer will cost many times more than a cluster of desktop computers with the same performance, it has a much smaller disadvantage - and might even have a better performance/cost ratio - for tasks with a lot of interprocess/interthread communication, where the tight coupling between the many cores in the same socket ensures lower latency and higher throughput for that communication.
The owners of datacenters are willing to pay the much higher prices of modern server CPUs because consolidating multiple old servers into a single server brings savings in other components: fewer coolers, fewer power supplies, fewer racks, simpler maintenance and administration, etc.
While the prices of server CPUs at retail are huge, the biggest customers, like cloud owners, can get very large discounts, so for them the difference compared with desktop CPUs is not as great as it is for SMEs and individuals. The large discounts that Intel was forced to accept over the last few years, to avoid losing too much of the market to AMD, are why Intel's server CPU division has lost many billions of dollars.
I.e., there are lots of fun applications for radar, and some of them involve very complex math in the manufacturing process. Then you've got automotive radar: you mainly need the position and velocity of some objects, so the math is simpler. But you have to certify that stuff for ASIL-D, and no one makes ASIL-D radars, so you combine multiple radars - three B's make a D, as the saying goes. Then you've got to worry about BOM costs, because you want to ship 10 million cars.
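A toy calculation of why combining redundant sensors can reach a higher integrity target. The failure rate below is invented, and ISO 26262 ASIL decomposition has formal rules well beyond this arithmetic:

```python
# Sketch of why redundancy helps with safety integrity targets.
# All numbers and the independence assumption are illustrative only.
p_single = 1e-7  # assumed dangerous-failure rate per hour of one ASIL-B radar

# 2-out-of-3 majority voting: the system fails if 2 or 3 units fail.
p_system = 3 * p_single**2 * (1 - p_single) + p_single**3
print(f"{p_system:.1e} /h")   # ~3e-14, far below a 1e-8 /h ASIL-D-style target

# The catch: this only holds if failures are truly independent, which
# common-cause effects (same design, same EMI, same supplier) undermine -
# that's exactly what the formal decomposition rules try to police.
```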
There are impeller-type vacuum pumps, which are what most people think of.
Then there's multiple-stage fans, where the purpose is to overcome random atomic vectors: an atom flying the wrong direction is more likely than not to hit a fan blade and bounce vaguely toward the output direction. Extra stages increase the odds of outward vectors, instead of rebounding off walls in some unhelpful direction. These are needed when the pressure is already so low that gas atoms don't hit each other, so they act like particles instead of gases.
There are also molecular getter pumps, which are reactive coatings inside the vacuum chamber. Their purpose is to permanently adhere any stray molecules that tend to cling to surfaces (like H2O), so they won't eventually decouple and ruin the vacuum.
Each is used to reach increasing levels of "vacuum", which is more like "single-molecule denial gates" at that point.
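The "atoms act like particles instead of gases" regime above can be illustrated with a mean-free-path estimate (textbook kinetic-theory formula; the molecular diameter is an assumed N2-like value):

```python
import math

# Mean free path of a gas molecule: lambda = k*T / (sqrt(2) * pi * d^2 * p).
# At high vacuum, molecules essentially never hit each other - only walls -
# which is the molecular-flow regime where staged fan blades earn their keep.
k = 1.380649e-23   # Boltzmann constant, J/K
T = 293.0          # K, room temperature
d = 3.7e-10        # m, assumed kinetic diameter of an N2-like molecule

for p in (101_325.0, 1.0, 1e-5):   # atmosphere, rough vacuum, high vacuum
    mfp = k * T / (math.sqrt(2) * math.pi * d**2 * p)
    print(f"{p:>10.0e} Pa -> mean free path {mfp:.3g} m")
```

At one atmosphere the mean free path is tens of nanometers; at high vacuum it is hundreds of meters, vastly larger than the chamber, so each stray molecule must be caught individually - hence the "single-molecule denial gates" framing.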