Sure. Now do cooling. That this isn't in the "key challenges" section makes this pretty non-serious.
A surprising amount of the ISS is dedicated to this, and they aren't running a GPU farm. https://en.wikipedia.org/wiki/External_Active_Thermal_Contro...
This is not the 1960s. Today, if you have an idea for doing something in space, you can start by scoping out the details of your mission plan and payload requirements, and then see if you can solve it with parts off a catalogue.
(Of course there are a million issues that will crop up when actually designing and building the spacecraft, but that's too low-level for this kind of paper, which just notes that (the authors believe) the platform requirements fall close enough to existing systems to not be worth belaboring.)
So they wouldn't have the burden of cooling it down first, like on earth? Instead being able to rely on the cold out there, as long as it stays in the shadow, or is otherwise isolated from sources of heat? Again, with less mess to deal with, like on earth? Since it's fucking cold up there already? And depending on the ratio of superconducting logic vs. conventional CMOS or whatever, less need to cool that, because superconducting stuff emits less heat, and the remaining 'smartphony' stuff is easy to deal with?
If I had those resources at hand, I'd try.
All the sources of power to run anything are also sources of heat. Doesn't matter if they're the sun or RTGs, they're unavoidably all sources of heat.
> Since it's fucking cold up there already?
Better to describe it as an insulator, rather than hot or cold.
> If I had those resources at hand, I'd try.
FWIW, my "if I had the resources" thing would be making a global power grid. Divert 5% of current Chinese aluminium production for the next 20 years. 1 Ω the long way around when finished, and then nobody would need to care about the duty cycle of PV.
China might even do it, there was some news a while back about a trans-Pacific connection, but I didn't think to check if it was some random Chinese company doing a fund-raising round with big plans, or something more serious.
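For anyone who wants to sanity-check the aluminium claim, here's a minimal sketch of the conductor sizing, assuming a single solid aluminium loop of roughly Earth-circumference length (resistivity and density are textbook values; the loop length and the 1 Ω target are just the numbers above):

```python
# Conductor sizing for "1 ohm the long way around", assuming one solid
# aluminium loop roughly the length of Earth's circumference.
RHO_AL = 2.65e-8      # ohm*m, aluminium resistivity near 20 C
DENSITY_AL = 2700.0   # kg/m^3
LOOP_LENGTH = 4.0e7   # m, ~Earth's circumference
TARGET_R = 1.0        # ohm, end-to-end

area = RHO_AL * LOOP_LENGTH / TARGET_R   # from R = rho * L / A
mass = area * LOOP_LENGTH * DENSITY_AL   # kg of aluminium

print(f"cross-section: {area:.2f} m^2")        # ~1.1 m^2
print(f"aluminium:     {mass / 1e9:.0f} Mt")   # ~110 Mt, i.e. a few years
                                               # of Chinese primary output
```

(A real HVDC grid would be many parallel conductors and the losses depend on how much power you actually push through it, but the order of magnitude is the interesting bit.)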
> If I had those resources at hand, I'd try.
I would too, and maybe they will, eventually. This paper is merely exploring whether there's a point in doing it in the first place.
GPT says 1000 W at 50 C takes about 3 m^2 to radiate (edge on to Earth and Sun), and generating that 1000 W takes about... 3 m^2 of solar panel. The panel needs its backside radiator clear to keep itself coolish (~100 C), so it does need to be a separate surface. Spreading a 1000 W point source across a 3 m^2 tile (or half that if two-sided?) is perhaps not scary, even with weight constraints?
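For what it's worth, a minimal Stefan-Boltzmann sketch lands in the same ballpark (the 0.9 emissivity and single-sided radiating are my assumptions):

```python
# Ideal grey-body radiator, edge-on to Earth and Sun so absorbed flux
# is neglected. Emissivity and single-sided radiating are assumptions.
SIGMA = 5.67e-8        # W/m^2/K^4, Stefan-Boltzmann constant
EMISSIVITY = 0.9       # typical white paint / OSR coating
T_RAD = 273.15 + 50    # K, radiator at 50 C
P_WASTE = 1000.0       # W to reject

flux = EMISSIVITY * SIGMA * T_RAD**4      # W/m^2 per radiating side
print(f"{flux:.0f} W/m^2 per side, {P_WASTE / flux:.1f} m^2 one-sided")
# ~560 W/m^2 and ~1.8 m^2 in the ideal case; with margin and real view
# factors, ~3 m^2 is the right ballpark.
```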
Hmm, from an order-of-magnitude perspective, it looks like an (L-shaped) Starlink v2 sat has ~100 m^2 of panel, a draw in the low tens of kW, and a body area in the low hundreds of m^2. And there are ~10k of them. So you'd want something bigger. A 100 x 100 m sheet might get you 10 sats per 100,000-GPU data center.
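Rough check on that last sentence, where the solar constant is the only hard number and the cell efficiency, packing factor and per-GPU power are all assumed:

```python
# Back-of-envelope for "a 100 x 100 m sheet might get you 10 sats per
# 100,000-GPU data center".
SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere
EFFICIENCY = 0.22         # assumed cell efficiency
PACKING = 0.9             # assumed active-area fraction
SHEET_AREA = 100 * 100    # m^2
GPU_POWER_KW = 0.3        # assumed ~300 W per GPU

per_sheet_kw = SOLAR_CONSTANT * EFFICIENCY * PACKING * SHEET_AREA / 1e3
gpus_per_sheet = per_sheet_kw / GPU_POWER_KW

print(f"~{per_sheet_kw / 1e3:.1f} MW and ~{gpus_per_sheet:,.0f} GPUs per sheet")
# ~2.7 MW and ~9,000 GPUs, so ~10 sheets per 100k-GPU site at 300 W/GPU;
# at 1 kW all-in per GPU you'd need 3-4x as many sheets.
```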
Regarding the ISS: the ISS has its whole bulk basking in the sunlight and needing to be cooled, versus "the only sunlit thing is the panel".
More seriously though, the paper itself touches on cooling and radiators. Not much, but that's reasonable - cooling isn't rocket science :), it's a solved problem. Talking about it here makes as much sense as talking about basic attitude control. Cooling the satellite and pointing it in the right direction are solved problems. They're important to detail in full system design, but not interesting enough for a paper that's about "data centers, but in space!".
It's solved on Earth because we have relatively easy (and relatively scalable) ways of getting rid of it - ventilation and water.
Sure, in the same sense that I could build a bridge from Australia to Los Angeles with "no new tech". All I have to do is find enough dirt!
We're past the point of every satellite being a custom R&D job resulting in an entirely bespoke design. We're even moving past the point where you need to haggle about every gram; launch costs have dropped a lot, giving more options to trade mass against other parameters, like more effective heat rejection :).
But I think the first and most important point for this entire discussion thread is: there is a paper - an actual PDF - linked in the article, in a sidebar to the right, which seemingly nobody read. It would be useful to do that.
Now ask them to do the Australia / Los Angeles one.
"lol no"
The where and the scale matter.
Scale: Lots of small satellites.
I.e. done to death and boring. Number of spacecraft does not affect the heat management of individual spacecraft.
Much like number of bridges you build around the world does not directly affect the amount of traffic on any individual one.
Challenging!
> Scale: Lots of small satellites.
So we're getting cheaper by ditching economies of scale?
There's a reason datacenters are ever-larger giant warehouses.
> Much like number of bridges you build around the world does not directly affect the amount of traffic on any individual one.
But there are places you don't build bridges. Because it's impractical.
Thus, if launch costs to LEO reach $200/kg, then the cost of launch amortized over spacecraft lifetime could be roughly comparable to data center energy costs, on a per-kW basis.
… If the [SpaceX] learning rate is sustained—which would require ~180 Starship launches/year—launch prices could fall to <$200/kg by ~2035.
… Realizing these projected launch costs is of course dependent on SpaceX and other vendors achieving high rates of reuse with large, cost-effective launch vehicles such as Starship.
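The comparison the paper is making can be sketched as a calculator; only the $200/kg comes from the quoted projection, while the specific mass, lifetime and grid price below are placeholder assumptions of mine:

```python
# Launch-cost-vs-energy-cost comparison, roughly as the paper frames it.
launch_price = 200.0    # $/kg to LEO (projected ~2035 price)
kg_per_kw = 10.0        # kg of spacecraft per kW delivered to compute (assumed)
lifetime_years = 5.0    # assumed satellite service life
grid_price = 0.07       # $/kWh, assumed terrestrial industrial rate

launch_per_kw_year = launch_price * kg_per_kw / lifetime_years
grid_per_kw_year = grid_price * 8760   # hours in a year

print(f"launch, amortized: ${launch_per_kw_year:.0f}/kW-year")   # $400
print(f"grid energy:       ${grid_per_kw_year:.0f}/kW-year")     # ~$610
# The kg-per-kW figure is the sensitive knob: a heavier bus per delivered
# kW pushes the launch line well above the energy line.
```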
> So we're getting cheaper by ditching economies of scale?
The economy of scale here is count, not size. This is also why even data centres are made from many small identical parts, such as server racks, which are themselves made from many smaller identical parts.
What has made LEO cheaper than it used to be is reuse. We'll see if "bigger" actually plays out as Starship continues.
> But there are places you don't build bridges. Because it's impractical.
What is and isn't practical changes as technology develops.
Look, I am skeptical of space based beamed power and space based compute, but saying any given proposal must still be bad in 2035 because it would be bad with today's tech is like betting against the growth of EVs or PV in 2015, or against the internet in 1990.
(The reverse mistake is to say that it must succeed, like anyone in 1970 who was expecting a manned Mars mission by 1980).
> "lol no"
Given how many people dream of megastructures, I bet someone has this as an interview question, some variant of https://what-if.xkcd.com/160/ — I'd guess "a few trillion to tens of trillions of USD" for floating bridges with anchors etc., but that's just my uninformed not-a-civil-engineer guess.
We do not have a solution for getting rid of megawatts or gigawatts of heat in space.
What the sibling comment is pointing out is that you cannot simply scale up any and every technology to any problem scale. If you want to get rid of megawatts of heat with our current technology, you need to ship up several tons of radiators and then build massive, kilometer-scale radiator panels. The only way to dump heat in space is to let a hot object radiate infrared light into the void. This is an incredibly slow and inefficient process, which is directly controlled by the surface area of your radiator.
The amount of radiator area you'd need for a scheme like this is entirely out of the question.
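To put numbers on the area argument, here's an ideal grey-body estimate per megawatt of waste heat (no Sun/Earth loading, two-sided panel; emissivity and temperatures are my assumptions, and real radiators need more than this lower bound):

```python
# Ideal grey-body radiator area per megawatt of waste heat.
SIGMA = 5.67e-8
EMISSIVITY = 0.9
P_WASTE = 1.0e6   # 1 MW

for t_c in (0, 50, 80):                               # radiator temperature, C
    flux = EMISSIVITY * SIGMA * (273.15 + t_c) ** 4   # W/m^2 per side
    print(f"{t_c:>2} C: {P_WASTE / (2 * flux):,.0f} m^2")
# ~1,760 / ~900 / ~630 m^2 per MW in the ideal case; the area scales
# linearly with power and grows steeply if the loop has to run cold,
# and real panels with loading and fin losses need more.
```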
1. Take existing satellite designs like Starlink, which obviously manage to utilize a certain amount of power successfully, meaning they solved both collection and heat rejection.
2. Pick one, swap out its payload for however many TPUs it can power instead. Since TPUs aren't an energy source, the solar/thermal calculation does not change. Let X be the compute this gives you.
3. Observe that thermal design of a satellite is independent from whether you launch 1 or 10000 of them. Per point 2, thermals for one satellite are already solved, therefore this problem is boring and not worth further mention. Instead, go find some X that's enough to give a useful unit of scaling for compute.
4. Play with some wacky ideas about formations to improve parameters like bandwidth, while considering payload-specific issues like radiation hardening, NONE OF WHICH HAVE ANY IMPACT ON THERMALS[0]. This is the interesting part. Publish it as a paper.
5. Have someone make a press release about the paper. A common mistake.
6. Watch everyone get hung up on the press release and not bother clicking through to the actual paper.
--
[0] - Well, some do. Note that fact in the paper.
Which mission of this kind exemplifies the solution? Where's the datacenter in the sky to which I can point my telescope?
> Whether they can make it work within the power and budget constraints is the actual challenge, but that's economics.
It's a weird world where economics isn't a fundamental part of engineering. Any engineering proposal has to include it, much more so one for something that has never been done before.
Big bunch of satellites communicating with each other?
Starlink.
Specifically Bus F9-2 and Bus F9-3 have PV arrays about the size needed for the upper limit of what I read a single DC rack might use (max 25kW, someone correct me if it is ever higher than that). That's what's being proposed here, making a DC by making each rack its own satellite.
Section 2.1 is seeing what data link is needed between satellites, and what you can actually get with realistic limitations, and how close the satellites need to be to make this work.
25kW? Don't tell me your engineers used that number in their calculations.
Reality:
GB200 NVL72 - 120 kW per rack
GB300 NVL72 - 150 kW per rack
Weight - 3,000–3,500 lbs per rack
Cost of liquid cooling on Earth - $50,000 per rack
By 2027 the new 800V HVDC power architecture will be deployed - 1 MW per rack
I'd never imagined I'd be providing free engineering consultations to billionaires.
Thanks!
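Taking the numbers in this subthread at face value (the ~25 kW per-satellite figure from upthread, the rack powers from the list above), the rack-to-satellite ratio looks like this:

```python
# Rack-to-satellite ratio using the figures quoted in this subthread.
SAT_POWER_KW = 25.0   # per-satellite payload power quoted upthread

racks = {
    "GB200 NVL72": 120.0,
    "GB300 NVL72": 150.0,
    "800V HVDC rack (~2027)": 1000.0,
}

for name, rack_kw in racks.items():
    print(f"{name}: ~{rack_kw / SAT_POWER_KW:.0f} satellites per rack")
# ~5, 6 and 40 satellites per rack-equivalent: the "one rack = one
# satellite" framing only works if per-satellite power grows a lot.
```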
Ironically, I googled it and the first few results all agreed with each other at 25 kW.
Still, there's a reason why I phrased it as uncertainly as I did.
> I'd never imagined I'd be providing free engineering consultations to billionaires.
I never thought I'd be mistaken for a billionaire, but there we go.
From https://x.com/elonmusk/status/1984249048107508061:
"Simply scaling up Starlink V3 satellites, which have high speed laser links would work. SpaceX will be doing this."
From https://x.com/elonmusk/status/1984868748378157312:
"Starship could deliver 100GW/year to high Earth orbit within 4 to 5 years if we can solve the other parts of the equation. 100TW/year is possible from a lunar base producing solar-powered AI satellites locally and accelerating them to escape velocity with a mass driver."
I'm sure they'll be ready right after the androids and the robotaxi and the autonomous LA-NYC summoning.
> Starlink v3 already has a 60M length solar array, so they're already solving dissipation for that size.
Starlink v3 doesn't exist yet. They're renders at this point. Full-sized v2s haven't even flown yet, just mass simulators.
Please post where you are creating the bet. You should make a lot of money from it
You didn't say it would be late, you said it's impossible. Set up the bet sir
Perhaps you could reflect back on whether you were saying anything at all
You’re obviously intelligent. You could have bigger impact if you had the courage to be less cynical
Sure. AI datacenters IN SPAAAAAAAACE probably fall in the same vaporware category a good portion of Musk claims fall into. More DOGE/Hyperloop than Falcon 9/Tesla.
Impossible? No. Probable? Also no.
Am I missing something? Feels like an extremely strong indicator that we're in some level of AI bubble because it just doesn't make any sense at all.
Given Musk's behaviour on the world stage… I wouldn't bet on SpaceX being allowed to allow him on-premises after 2028, let alone direct the company and get it to deliver the price goals he's suggested in various places.
In fact everything in this paper is already solved by SpaceX except GPU cooling.
It's not absent - it's covered in the paper, which this blog release summarizes. There's a link to the paper itself in the sidebar.
> In fact everything in this paper is already solved by SpaceX except GPU cooling.
Cooling is already solved by SpaceX too, since this paper basically starts with the idea of swapping out whatever payload is on Starlink with power-equivalent in TPUs, and then goes from there.
I'm surprised that Google has drunk the "Datacenters IN SPACE!!!1!!" kool-aid. Honestly I expected more.
It's so easy to poke a hole in these systems that it's comical. Answer just one question: How/why is this better than an enormous solar-powered datacenter in someplace like the middle of the Mojave Desert?
I think it's a good idea, actually.
A giant space station?
> no need for security
There will be if launch costs get low enough to make any of this feasible.
> no premises
Again… the space station?
> no water
That makes things harder, not easier.
>There will be if launch costs get low enough to make any of this feasible.
I don't know what you mean by that.
Fundamentally, it is, just in the form of a swarm. With added challenges!
> I don't know what you mean by that.
If you can get to space cheaply enough for an orbital AI datacenter to make financial sense, so can your security threats.
Right, in the same sense that existing Starlink constellation is a Death Star.
This paper does not describe a giant space station. It describes a couple dozen satellites in a formation, using gravity and optics to get extra bandwidth for inter-satellite links. The example they gave uses 81 satellites, a number made trivial by Starlink (it's also in the blog release itself, so no "not clicking through to the paper" excuses here!).
(The gist: the paper seems to be describing a small constellation as a useful compute unit that can be scaled indefinitely - basically replicating the scaling design used in terrestrial ML data centers.)
"The cluster radius is R=1 km, with the distance between next-nearest-neighbor satellites oscillating between ~100–200m, under the influence of Earth’s gravity."
This does not describe anything like Starlink. (Nor does Starlink do heavy onboard computation.)
> The example they gave uses 81 satellites…
Which is great if your whole datacenter fits in a few dozen racks, but that's not what Google's talking about here.
Irrelevant for spacecraft dynamics or for heat management. The problems of keeping satellites from colliding and of shedding the watts the craft gets from the Sun are independent of the compute that's done by the payload. It's like, the basic tenet of digital computing.
> Which is great if your whole datacenter fits in a few dozen racks, but that's not what Google's talking about here.
A data center is made of multiples of some compute unit. This paper is describing a single compute unit that makes sense for machine learning work.
The more compute you do, the more heat you generate.
> A data center is made of multiples of some compute unit.
And, thus, we wind up at the "how do we cool and maintain a giant space station?" again. With the added bonus of needing to do a spacewalk if you need to work on more than one rack.
Yes, and yet I still fail to see the point you're making here.
Max power in space is either "we have x kWt of RTG, therefore our radiators are y m^2" or "we have x m^2 of nearly-black PV, therefore our radiators are y m^2".
Even for cases where the thermal equilibrium has to be human-liveable like the ISS, this isn't hard to achieve. Computer systems can run hotter, and therefore have smaller radiators for the same power draw, making them easier.
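Quantifying that "hotter radiator = smaller radiator" point, with illustrative temperatures (both of my choosing):

```python
# For the same waste heat, required radiator area scales as 1/T^4.
T_HABITABLE = 293.0     # K (~20 C), an ISS-like coolant loop
T_ELECTRONICS = 353.0   # K (~80 C), tolerable for a compute coolant loop

ratio = (T_ELECTRONICS / T_HABITABLE) ** 4
print(f"~{ratio:.1f}x less area at 80 C than at 20 C")   # ~2.1x
```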
> And, thus, we wind up at the "how do we cool and maintain a giant space station?" again. With the added bonus of needing to do a spacewalk if you need to work on more than one rack.
What you're doing here is like saying "cars don't work for a city because a city needs to move a million people each day, and a million-seat car will break the roads": i.e. scaling up the wrong thing.
The (potential, if it even works) scale-up here is "we went from n=1 cluster containing m=81 satellites, to n=10,000 clusters each containing m=[perhaps still 81] satellites".
I am still somewhat skeptical that this moon-shot will be cost-effective, but thermal management isn't why; the main limitation is Musk (or anyone else) actually getting launch costs down to a few hundred USD per kg on that timescale.
It's probably not why they're interested in it, but I'd like to imagine someone with a vision for the next couple of decades realized that their company already has data centers, and powering them, as its core competency, and all they're missing is some space experience...
It gets very exciting if you don't have enough.
> Nothing to obsess about.
It's one of the primary reasons these "AI datacenters… in space!" projects are goofy.
I have my doubts that it's worth it with current or near future launch costs. But at least it's more realistic than putting solar arrays in orbit and beaming the power down
Night.
I mean, how good an idea this actually is depends on what energy storage costs, how much faster PV degrades in space than on the ground, launch costs, how much stuff can be up there before a Kessler cascade, if ground-based lasers get good enough to shoot down things in whatever orbit this is, etc., but "no night unless we want it" is the big potential advantage of putting PV in space.
https://x.com/elonmusk/status/1984868748378157312
They're already having a negative, contaminating effect on our upper atmosphere
Sending up bigger ones, and more of them (today there are some 8,800, but they target 30k), sounds ill-advised.
1: https://www.fastcompany.com/91419515/starlink-satellites-are... 2: https://www.science.org/content/article/burned-satellites-ar...