9 points by thm 7 hours ago | 1 comment
  • jdw64 7 hours ago
    The article's argument that the “China threat” narrative is being used as a regulatory-avoidance strategy is interesting.

    What feels especially frightening is the way a political agenda is being blended into personal lifestyle: the argument that “if you are a good citizen, you should support AI.” The analysis of Palantir’s use of the China-threat narrative to avoid or weaken regulation stood out in particular.

    When a campaign spreads fear by saying, “China will take our data and steal our jobs,” what it ultimately means is that these companies want more investment to flow toward themselves and the broader American AI industry.

    What is the most frightening power of the media? Agenda-setting. Right now, parts of the American media ecosystem are selling a political narrative: national security and technological supremacy.

    As far as I know, Super PAC funding can at least be tracked to some extent. But money flowing to individual influencers is much harder to regulate, and that is where the problem appears. As the influence of legacy media declines, the influence and voice of individual influencers (so-called “new media”) grow stronger. Public opinion can be shaped there as well. American AI companies are investing in this regulatory blind spot and using it to push regulation in a direction favorable to themselves.

    The new exercise of power through money and technology is frightening. But the most frightening part, for me, lies elsewhere.

    As someone from Korea — a third country that is neither the United States nor China — the fundamental issue I feel is that most of the American AI ecosystem is closed. Gemini, GPT, and Claude are subscription/API-based products, and their pricing and access conditions can change.

    If such changes happen, developers who want to escape vendor lock-in may start looking toward locally run open models. And right now, Chinese open-weight models such as Qwen and DeepSeek already exert a very strong influence on that local-model ecosystem. The United States is still centered on NVIDIA’s CUDA ecosystem, but China already has its own CANN ecosystem for Huawei’s Ascend chips.

    Outside Silicon Valley, the ecosystem of models that individuals can actually download, run, modify, and build upon may increasingly be shaped by China. Closed American models may still retain an advantage at the technological frontier, but open Chinese models can serve as a price-resistance baseline: if American companies try to raise prices too aggressively, strong Chinese open models limit how far those increases can go.

    The Linux server case feels similar.

    One reason data centers chose Linux was exactly this: at server scale, licensing costs, deployment control, automation, customization, and avoiding vendor lock-in matter. Windows Server still played an important role when vendor accountability, compatibility, or specific enterprise software mattered. But at large infrastructure scale, open systems largely won.

    A similar phenomenon may occur in artificial intelligence.

    Open local models do not necessarily need to be the best. If they are good enough, cheap, easy to deploy, and free from unstable vendor pricing, they can become a core part of the infrastructure layer.

    If that happens, what will become of American developers? Will they start thinking in a Chinese programming style?

    Right now, I read both Chinese and American programming sites. At the moment, I still mostly follow American open-source communities. But in the future, I may need to pay more attention to Chinese sites. Perhaps this is the time to start studying Chinese.

    • lopsotronic 6 hours ago
      Reading the tea leaves, directing a young person to "learn Chinese" is probably good advice.

      To the larger point . . .

      At the time of writing, all DeepSeek and Qwen models are, AFAIK, de facto prohibited in defense contracting, including local machine deployments via Ollama or similar. Although no legislative or executive mandate yet exists [1], it's perceived as a gap [2], and contracts are already including language prohibiting them not just in the product but in any part of the software development environment.

      The attack surface for a (non-agentic) model running in local ollama is basically non-existent . . but, eh . . I do get it, at some level. While they're not l33t haXX0ring your base, the models are still largely black boxes, can move your attention away from things, or towards things, with no one being the wiser. "Landing Craft? I see no landing craft". This would boil out in test, ideally, but hey, now you know how much time your typical defense subcon spends in meaningful software testing[3].

      [1] See also OMB Memorandum M-25-22 (preference for AI developed and produced in the United States), NIST CAISI assessment of PRC-origin AI models as "adversary AI" (September 2025), and House Select Committee on the CCP Report (April 16, 2025), "DeepSeek Unmasked".

      [2] Overall, rather than blacklist, I'd recommend a "whitelist" of permitted models, maintained dynamically. This would operate the same way you would manage libraries via SSCG/SSCM (software supply chain governance/management) . . but few if any defense subcons have enough onboard savvy to manage SSCG let alone spooling a parallel construct for models :(. Soooo . . ollama regex scrubbing it is.
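      To make the allowlist idea concrete, here is a minimal sketch of what such a check could look like. Everything in it is hypothetical (the permitted model names, the parsing of `ollama list`'s table output); a real supply-chain control would pull a dynamically maintained allowlist and run against live tooling output rather than a pasted string.

```python
# Sketch: flag locally pulled Ollama models that are not on a permitted-model
# allowlist. Assumes the usual `ollama list` table format (NAME is the first
# column, with names like 'qwen2.5:7b'). Allowlist contents are hypothetical.

ALLOWED = {"llama3.1", "mistral", "phi4"}  # hypothetical permitted base names

def disallowed_models(ollama_list_output: str, allowed: set[str]) -> list[str]:
    """Return model names from `ollama list` output not on the allowlist."""
    flagged = []
    for line in ollama_list_output.strip().splitlines()[1:]:  # skip header row
        name = line.split()[0]        # e.g. 'deepseek-r1:8b'
        base = name.split(":", 1)[0]  # strip the tag -> 'deepseek-r1'
        if base not in allowed:
            flagged.append(name)
    return flagged

sample = """NAME            ID            SIZE    MODIFIED
llama3.1:8b     abc123        4.7 GB  2 days ago
deepseek-r1:8b  def456        4.9 GB  5 days ago
qwen2.5:7b      ghi789        4.4 GB  1 week ago
"""
print(disallowed_models(sample, ALLOWED))  # -> ['deepseek-r1:8b', 'qwen2.5:7b']
```

      In practice you'd feed this from `subprocess.run(["ollama", "list"], ...)` on each developer machine and fail the build (or the audit) on any hit, which is less brittle than regex-scrubbing pull commands after the fact.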

      [3] i.e. none at all; we barely have the ability to MAKE anything like software, given the combination of underwhelming pay scales and the fact that defense companies always seem to require 100% on-site work in some random crappy town in the middle of BFE. If it weren't for the downturn in tech we wouldn't have anyone useful at all, but we snagged some silicon refugees.

      • jdw64 5 hours ago
        Since I am Korean, I am not in a position to take work from American defense contractors, but your analysis is still very interesting to me.

        I am also a factory/industrial software developer in Korea, and the situation feels somewhat similar. Many developers are leaving factory software work because of the travel burden and the heavy responsibility. If the system fails to operate correctly, there can even be penalties. I am still doing this work because I need the money, but it is interesting that American defense contracting and Korean factory software seem to behave in somewhat similar ways.

        Recently, I was also told that using Chinese models such as DeepSeek is prohibited for some products that go into factories. So at the government-related level, this seems to be restricted, though I am not sure how things are handled in the private sector. From what I can see, private companies still seem to use them.

        In any case, your comment was very interesting. You pointed out something I had been missing. Thanks to you, I now have many things to think about. Thank you for the thoughtful comment.