2 points by consumer451 a day ago | 3 comments
  • legitster a day ago
    I think your premise is entirely too wacky to even play with.

    How does an artificial intelligence improve without upgrades or access to resources? We're past the steep acceleration curve of innovation - linear improvements now require exponential increases in resources.

    Also, it's remarkable how much of the supply chain for components still relies on human labor. Maybe not the assembly and fab of chips - but some parts of the supply chain still rely on tooling equipment made in the 1960s! None of that is moving to space anytime this century.

    Starlink is perhaps one of the most vulnerable infrastructures available. Every node de-orbits after 5 years and has to be replaced. And they can be destroyed by any number of terrestrial or low orbit solutions.

  • consumer451 a day ago
      First, thank you so much for replying. This website has some of the best exchanges on the Internet.

      > Starlink is perhaps one of the most vulnerable infrastructures available. Every node de-orbits after 5 years and has to be replaced. And they can be destroyed by any number of terrestrial or low orbit solutions.

      I have been thinking about this for years...

      > And they can be destroyed by any number of terrestrial or low orbit solutions.

      Please explain to me how one could accomplish this feat. I have considered creating radiation clouds, etc., but I have never been able to convince myself.

      • legitster a day ago
        https://en.wikipedia.org/wiki/Anti-satellite_weapon

        Keep in mind that the last US test was in 2008 using basic rocket weaponry. The targets were much bigger, deeper in space, and more maneuverable than the Starlink ones.

        The only reason we don't do more "kinetic" satellite test strikes is precisely why they are so vulnerable - hitting one creates enough space debris to destroy many, many more satellites in the same orbit. The only advantages Starlink has are sheer numbers and low cost - but just a couple of strikes would be enough to effectively disable most of the network.

        • consumer451 a day ago
          There are many ways to take out one satellite: an F-15, a Standard Missile from a Navy vessel, etc. Many countries have that capability. However, there is no known way to take out 1k out of a 30k constellation. Even if one accomplished that, there would still be 29k nodes left.
          • legitster a day ago
            Again, the debris cloud from one strike could disable the entire constellation on the same orbit. Especially if it was engineered to do so. But even doing literally nothing, the constellation will fall out of orbit over the course of 5 years if we choose not to supply it with new nodes.
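            The 5-year lifetime mentioned above implies a steep standing replacement cost even with nobody shooting. A back-of-envelope sketch, using hypothetical round numbers from this thread (30k nodes, 5-year de-orbit, ~60 satellites per launch - none of these are official figures):

            ```python
            # Back-of-envelope: how fast must a LEO constellation be replenished
            # just to hold steady, given every node de-orbits after ~5 years?
            # All inputs are rough, hypothetical round numbers from the thread.

            constellation_size = 30_000  # nodes in the constellation
            lifetime_years = 5           # years before each node de-orbits
            sats_per_launch = 60         # rough satellites-per-launch figure

            replacements_per_year = constellation_size / lifetime_years
            launches_per_year = replacements_per_year / sats_per_launch

            print(replacements_per_year)  # 6000.0 satellites/year just to stand still
            print(launches_per_year)      # 100.0 launches/year
            ```

            So under these assumptions, simply declining to keep launching is itself a kill switch on a ~5-year timescale.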

            The idea of an unstoppable AGI in space is a completely implausible scenario based on current or near technology.

            • consumer451 a day ago
              I agree, and I hate to do this as someone who is not an AGI/ASI person:

              Space is huge. We live on the pond scum layer of the planet. However, my point was: what if a bunch of greedy morons launched >10k inference machines into orbit, and then self-propagating ASI occurred? Now what?

              • legitster a day ago
                How would it self-propagate?
                • consumer451 a day ago
                  Well, via a worm using "obvious" opportunities that no human had noticed. That is the entire magical ~0.01% chance of some runaway ASI ideas, isn't it?

                  Honestly, the more realistic reason for orbital inference is that the bull case for LLM automation is that what, >50% of knowledge workers (middle to upper-middle class) will be fired in the next few years? That would likely cause trouble. It's a lot harder for the 2020s Luddites to destroy their replacements if they are in orbit, vs. in some field.

                  The other argument is power supply. Have not done the math on that.

                  Disclaimer: I am grasping at straws here to try to understand how orbital LLMs could ever break even. It just seems like a significant risk, for our species.

                  • bigyabai 20 hours ago
                    It's easy to get caught up in the high-fantasy sci-fi politicking, here. For the past 50 years, there have been private American space ventures promising everything from round-trip tourism to asteroid mining, and it just hasn't happened. There's no shortage of imaginative ways to leverage space, but the realistically rewarding applications are quite limited.

                    Orbital inference isn't intended to break even, or present a significant risk. It has every hallmark of a marketing stunt.

  • consumer451 a day ago
    I just want to be clear: my point is that Elon Musk and others are about to try to do this, via a SpaceX IPO, etc., but this seems like a Great Filter vector.
  • bigyabai a day ago
    Humans can disrupt space inference with preexisting weapons. The US Navy AEGIS system is one such platform capable of doing so; once Kessler syndrome is triggered, it's game over for space AI.

    > Starlink [...] is a very difficult to destroy network with millions of clients/base stations.

    I don't know a single professional who agrees with this statement. Starlink is currently being jammed very effectively, and its apogee is not nearly far enough away to hide from ASAT weapons a la the Molniya satellites.
