How does an artificial intelligence improve without upgrades or access to resources? We're past the steep part of the innovation curve - linear improvements now require exponential increases in resources.
Also, it's remarkable how much of the component supply chain still relies on human labor. Maybe not the assembly and fab of chips themselves - but some parts of the supply chain still depend on tooling equipment made in the 1960s! None of that is moving to space anytime this century.
Starlink is perhaps one of the most vulnerable pieces of infrastructure out there. Every node de-orbits after 5 years and has to be replaced. And they can be destroyed by any number of terrestrial or low orbit solutions.
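For scale, here's a back-of-envelope sketch of the replacement cadence. The constellation size, design life, and satellites-per-launch figures are all my assumptions, not official numbers:

```python
# Back-of-envelope: steady-state replacement cadence for a Starlink-scale
# constellation. All three figures below are assumptions, not official numbers.
CONSTELLATION_SIZE = 7_000   # assumed active satellites
DESIGN_LIFE_YEARS = 5        # assumed on-orbit lifetime before de-orbit
SATS_PER_LAUNCH = 23         # assumed satellites per Falcon 9 launch

replacements_per_year = CONSTELLATION_SIZE / DESIGN_LIFE_YEARS
launches_per_year = replacements_per_year / SATS_PER_LAUNCH

print(f"~{replacements_per_year:,.0f} replacement satellites per year")
print(f"~{launches_per_year:.0f} launches per year just to stand still")
```

Call it ~1,400 replacement satellites and ~60 launches a year just to hold steady - a big standing logistics commitment for a "resilient" network.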
> Starlink is perhaps one of the most vulnerable pieces of infrastructure out there. Every node de-orbits after 5 years and has to be replaced. And they can be destroyed by any number of terrestrial or low orbit solutions.
I have been thinking about this for years...
> And they can be destroyed by any number of terrestrial or low orbit solutions.
Please explain to me how one could accomplish this feat. I have considered creating radiation clouds, etc., but I have never been able to convince myself it would work.
Keep in mind that the last US anti-satellite test was in 2008, using fairly basic missile-interceptor hardware. The targets were much bigger, deeper in space, and more maneuverable than Starlink's.
The only reason we don't do more "kinetic" anti-satellite test strikes is precisely why satellites are so vulnerable: hitting one creates enough debris to destroy many more satellites in the same orbit (the Kessler-syndrome cascade). The only advantages Starlink has are sheer numbers and low cost - but just a couple of strikes would be enough to effectively disable most of the network.
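For intuition on why debris is so destructive, here's a quick kinetic-energy sketch; the 10 km/s closing speed is an assumed crossing-orbit geometry, not a measured figure:

```python
# Kinetic energy of a tiny debris fragment at LEO closing speeds.
# The 10 km/s closing velocity is an assumed crossing-orbit geometry;
# head-on passes can be faster, trailing encounters much slower.
fragment_mass_kg = 0.001      # a 1-gram fleck of debris
closing_velocity_ms = 10_000  # ~10 km/s relative velocity

kinetic_energy_j = 0.5 * fragment_mass_kg * closing_velocity_ms ** 2
# ~50 kJ from a 1 g fragment: roughly a 1-tonne car hitting you at 36 km/h
print(f"{kinetic_energy_j / 1000:.0f} kJ")
```

And every strike multiplies the fragment count, so each hit seeds the next.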
The idea of an unstoppable AGI in space is a completely implausible scenario with current or near-term technology.
Space is huge. We live on the pond-scum layer of the planet. However, my point was: what if a bunch of greedy morons launched >10k inference machines into orbit, and then self-propagating ASI emerged? Now what?
Honestly, the more realistic case for orbital inference is this: the bull case for LLM automation is that, what, >50% of knowledge workers (middle to upper-middle class) get fired in the next few years? That would likely cause trouble. It's a lot harder for the 2020s Luddites to destroy their replacements if those are in orbit rather than in some field.
The other argument is power supply. I haven't done that math carefully, but a rough sketch is below.
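In this sketch, the panel efficiency, eclipse fraction, and ground capacity factor are all assumed values; only the solar constant and standard test-condition irradiance are fixed figures:

```python
# Back-of-envelope: time-averaged usable solar power per m^2 of panel,
# orbit vs. ground. Efficiency, eclipse fraction, and ground capacity
# factor are assumptions; dawn-dusk orbits could cut the eclipse penalty.
SOLAR_CONSTANT = 1361          # W/m^2 above the atmosphere
PANEL_EFFICIENCY = 0.25        # assumed cell efficiency
ECLIPSE_FRACTION = 0.35        # assumed fraction of a LEO orbit in shadow
GROUND_CAPACITY_FACTOR = 0.20  # assumed day/night + weather + atmosphere losses
GROUND_PEAK_IRRADIANCE = 1000  # W/m^2, standard test-condition value

orbit_w_per_m2 = SOLAR_CONSTANT * PANEL_EFFICIENCY * (1 - ECLIPSE_FRACTION)
ground_w_per_m2 = GROUND_PEAK_IRRADIANCE * PANEL_EFFICIENCY * GROUND_CAPACITY_FACTOR

print(f"orbit:  ~{orbit_w_per_m2:.0f} W/m^2 time-averaged")
print(f"ground: ~{ground_w_per_m2:.0f} W/m^2 time-averaged")
print(f"advantage: ~{orbit_w_per_m2 / ground_w_per_m2:.1f}x")
```

So maybe a ~4-5x raw advantage per square meter of panel - which then has to pay for launch, radiators (vacuum makes waste heat hard to dump), and replacing the whole array every five years.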
Disclaimer: I am grasping at straws here, trying to understand how orbital LLMs could ever break even. It just seems like a significant risk for our species.
Orbital inference isn't intended to break even or to present a significant risk. It has every hallmark of a marketing stunt.
> Starlink [...] is a very difficult to destroy network with millions of clients/base stations.
I don't know a single professional who agrees with this statement. Starlink is currently being jammed very effectively, and its apogee is not nearly far enough away to hide from ASAT weapons à la the Molniya satellites.
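To put rough numbers on the jamming point: free-space path loss gives a nearby ground jammer a huge geometry advantage over a satellite 550 km up. A toy link-budget sketch follows; the transmit powers, distances, and omnidirectional simplification are all assumptions, and real Starlink terminals use phased arrays that claw much of this back:

```python
import math

# Toy Ku-band comparison: satellite downlink vs. a nearby ground jammer.
# EIRP values, distances, and ignoring antenna directivity are all
# illustrative assumptions, not measured Starlink parameters.
def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB (d in km, f in GHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

FREQ_GHZ = 12.0          # Ku-band downlink
SAT_EIRP_DBW = 35.0      # assumed satellite EIRP toward the user
JAMMER_EIRP_DBW = 20.0   # assumed 100 W jammer with a modest antenna
SAT_RANGE_KM = 550.0     # Starlink shell altitude
JAMMER_RANGE_KM = 5.0    # assumed jammer standoff from the terminal

signal_dbw = SAT_EIRP_DBW - fspl_db(SAT_RANGE_KM, FREQ_GHZ)
jam_dbw = JAMMER_EIRP_DBW - fspl_db(JAMMER_RANGE_KM, FREQ_GHZ)
print(f"jammer-to-signal advantage: ~{jam_dbw - signal_dbw:.0f} dB")
```

Even in this crude setup the jammer starts more than 25 dB ahead, which is why antenna nulling and frequency agility matter so much in contested environments.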