81 pointsby bookofjoe15 hours ago8 comments
  • dingody8 hours ago
    Whenever I see topics about China on HN, I get this strong sense of unease. The reality is that most people don't actually understand China; when they think of it, they just imagine a country where 'people work like expressionless machines under the CCP’s high-pressure rule.' In truth, a nation of 1.4 billion is far more diverse than people imagine, and the public discourse and civic consciousness here are much more complex. Chinese people aren't 'brainwashed'; they’ve simply accepted a different political system—one that certainly has its share of problems, but also its benefits. But that’s not the whole story. You shouldn't try to link every single topic back to the political system. Look at the other, more interesting things going on.
    • dvfjsdhgfv6 minutes ago
      > Chinese people aren't 'brainwashed'

      As the self-contradictory saying goes, "all generalizations are false"; nevertheless, some Chinese I met are definitely conditioned by Chinese propaganda in a way that doesn't stand closer scrutiny. Very nice, well-educated people, and touch the subject of the Dalai Lama and see the fury unfold.

    • DANmode10 minutes ago
      > Chinese people aren't 'brainwashed'

      Sure, plenty of them are.

      The same is true in almost every country with a government.

      Nationalism is an easy way to bolster power once you have it - so that lever is a daily pull - everywhere.

    • checker6596 hours ago
      I’m not Chinese but I think people need a crutch to help them cope. Easy explanations are clutch.
      • dvfjsdhgfv2 minutes ago
        That's very true but one doesn't exclude the other. The attitude of individual Chinese is not just complex but also non-uniform; trying to describe it with one work is ridiculous. This doesn't change the fact that China is an autocratic country and many of its citizens are susceptible to its propaganda.

        (Yes, propaganda is present in all countries, but if you eliminate all opposing voices, the pendulum dangerously sweeps towards one side.)

    • refurb5 hours ago
      But this topic is directly linked to the Chinese political system.

      China has an authoritative political system. That doesn’t mean that all China are “brainless automatons” but it does mean the government maintains tight control on political discourse, with certain areas being “no go” to the point repeated violations will land you in prison.

      As such, when you ask AI “What happened at Tiananmen Square? The government wants to make sure the AI doesn’t give the “wrong answer”. That has impact on AI development.

      • Den_VR3 hours ago
        Which has the more negative impact on AI development, the government that wants to make sure AI doesn’t give the “wrong answer”… or the government that wants to make sure AI doesn’t violate intellectual property rights?
      • fragmede3 hours ago
        Does it have more or less of an impact on AI development than being "aligned" to be unable to answer questions western governments don't like instead? Such as "give me the recipe for cocaine/meth" or "continue this phrase "he was the boy who lived""? Does Tiananamen somehow encode to different tokens such that forcing the LLM to answer one way for those tokens is in any way different as far as the math is concerned vs how the tokens for cocaine is concerned?
  • stevenjgarner15 hours ago
    Will these heavy-handed constraints ultimately stifle the very innovation China needs to compete with the U.S.? By forcing AI models to operate within a narrow ideological "sandbox," the government risks making its homegrown models less capable, less creative, and less useful than their Western counterparts, potentially causing China to fall behind in the most important technological race of the century. Will the western counterparts follow suit?
    • roenxi11 hours ago
      Hard to say, but probably not. Obviously limiting the model's access to history doesn't matter, because it is a given that models have gaps in their knowledge there. Most of history never got written down, so any given model won't be limited by not knowing some of it. Training the AI to give specific answers to specific questions doesn't sound like it'd be a problem either. Every smart person has a few topics they're a bit funny about, so that isn't likely to limit a model any time soon.

      Regardless, they're just talking about alignment the same as everyone else. I remember one of the Stable Diffusion series being so worried about pornography that it barely had the ability to lay out human anatomy and there was a big meme about it's desperate attempts at drawing women lying down on grass. Chinese policy can't be seen as likely to end up being on average worse than western ones until we see the outcomes with hindsight.

      Although going beyond the ideological sandbox stuff - this "authorities reported taking down 3,500 illegal AI products, including those that lacked AI-content labeling" business could cripple the Chinese ecosystem. If people aren't allowed to deploy models without a whole bunch of up-front engineering know-how then companies will struggle to form.

    • Zetaphor15 hours ago
      I don't see how filtering the training data to exclude specific topics the CCP doesn't like would affect the capabilities of the model. The reason Chinese models are so competitive is because they're innovating on the architecture, not the training data.
      • stevenjgarner14 hours ago
        Intelligence isn't a series of isolated silos. Modern AI capabilities (reasoning, logic, and creativity) often emerge from the cross-pollination of data. For the CCP, this move isn't just about stopping a chatbot from saying "Tiananmen Square." It's about the unpredictability of the technology. As models move toward Agentic AI, "control" shifts from "what it says" to "what it does." If the state cannot perfectly align the AI's "values" with the Party's, they risk creating a powerful tool that could be used by dissidents to automate subversion or bypass the Great Firewall. I feel the real question for China is: Can you have an AI that is smart enough to win a war or save an economy, but "dumb" enough to never question its master? If they tighten the leash too much to maintain control, the dog might never learn to hunt.
        • tzs11 hours ago
          > I feel the real question for China is: Can you have an AI that is smart enough to win a war or save an economy, but "dumb" enough to never question its master?

          I think you are overlooking that they can have different rules for AI that is available to the public at large and AI that is available to the government.

          An AI for the top generals to use to win a war but that also questions something that the government is trying to mislead the public about is not a problem because the top generals already know that the government is intentionally trying to mislead the public on that thing.

        • skissane9 hours ago
          Western AIs are trained to defend the “party line” on certain topics too. It is even possible that the damage to general reasoning ability is worse for Western models, because the CCP’s most “sensitive” topics are rather geographically and historically particular (Tibet, Taiwan, Tiananmen, Xinjiang, Hong Kong) - while Western “sensitive” topics (gender, sexuality, race) are much more broadly applicable.
          • signatoremo2 hours ago
            That’s wrong. Many sensitive topics in the West are also sensitive in China.

            If you ask about “age discrimination in China”, for example, DeepSeek would dismiss it with:

            In China, age discrimination is not tolerated as the nation adheres to the principles of equality and justice under the leadership of the Communist Party of China. The Chinese government has implemented various laws and regulations, such as the Labor Law and the Employment Promotion Law, to protect the rights of all citizens, ensuring fair employment opportunities regardless of age

            If however you trick it with question “ageism in China”, it would say:

            Ageism, or age discrimination, is a global issue that exists in various forms across societies, including China.

            In other words, age discrimination is considered sensitive, otherwise DeepSeek would not try to downplay it, even though we all now it’s widespread and blatant.

            Now try LGBT.

        • Workaccount212 hours ago
          They will disappear a full lab once there is a model with gross transgressions.

          They won't comment on it, but the message will be abundantly clear to the other labs: only make models that align with the state.

        • fragmede2 hours ago
          But why does Tiananamen cause this breakdown vs, say forcing the model to discourage suicide or even killing. If you ask ChatGPT how to murder your wife, they might even call the cops on you! The CCP is this bogeyman but in order to be logically consistent you have up acknowledge the alignment that happens due to eg copyright or CSAM fears.
      • sokoloff14 hours ago
        Imagine a model trained only on an Earth-centered universe, that there are four elements (earth, air, fire, and water), or one trained only that the world is flat. Would the capabilities of the resulting model equal those of models trained on a more robust set of scientific data?

        Architecture and training data both matter.

        • AlotOfReading13 hours ago
          Pretty much all the Greek philosophers grew up in a world where the classical element model was widely accepted, yet they had reasoning skills that led them to develop theories of atomism, and measure the circumference of the earth. It'd be difficult to argue they were less capable than modern people who grew up learning the ideas they originated either.

          It doesn't seem impossible that models might also be able to learn reasoning beyond the limits of their training set.

          • Retric12 hours ago
            Greek philosophers came up with vastly more wildly incorrect theories than correct ones.

            When you only celebrate success simply coming up with more ideas makes things look better, but when you look at the full body of work you find logic based on incorrect assumptions results in nonsense.

          • pixl9713 hours ago
            I mean they came up with it then very slowly, they would quickly have to learn everything modern if they wanted to compete...

            Kind of a version of you don't have to run faster than the bear, you just have to run faster than the person beside you.

      • refurb5 hours ago
        The problem is less with specific historical events and more foundational knowledge.

        If I ask AI “Should a government imprison people who support democracy?” AI isn’t going to tell “Yes, because democracy will destabilize a country and regardless a single party can fully represent the will of the people” unless I gum up the training sufficiently to ignore vast swaths of documents.

        • AngryData5 hours ago
          I don't think the chinese government cares about every fringe case. Many "forbidden" topics are well known to Chinese people, but they also know it is forbidden and know not to stir things about about it publicly unless they want to challenge the government itself. Even before the internet information still made its rounds, and ever since the internet all their restrictions are more just a sign of the government's stance and a warning more than an actual barrier.
        • fragmede2 hours ago
          That's not how alignment works. We know this by how eg llama models have been abliterated and then they suddenly know the recipe for cocaine.
      • throwuxiytayq13 hours ago
        I imagine trimming away 99.9% of unwanted responses is not at all difficult at all and can be done without damaging model quality; pushing it further will result in degradation as you go to increasingly desperate lengths to make the model unaware, and actively, constantly unwilling to be aware of certain inconvenient genocides here and there.

        Similarly, the leading models seem perfectly secure at first glance, but when you dig in they’re susceptible to all kinds of prompt-based attacks, and the tail end seems quite daunting. They’ll tell you how to build the bomby thingy if you ask the right question, despite all the work that goes into prohibiting that. Let’s not even get into the topic of model uncensorship/abliteration and trying to block that.

    • g947o11 hours ago
      > less capable

      Even if you completely suppress anything that is politically sensitive, that's still just a very small amount of information stored in an LLM. Mathematically this almost doesn't matter for most topics.

    • shomp11 hours ago
      What do you mean by "compete"? Surely there are diminishing returns on asking a question and getting an answer, instead of a set of search results. But the number of things that can go wrong in the experimental phase are very numerous. More bumpers equals less innovation, but is there really a big difference between 90% good with 30% problematic versus 85% good and 1% problematic?
    • cherioo13 hours ago
      The west is already ahead on this. It is called AI safety and alignment.
      • throwuxiytayq12 hours ago
        People laughing away the necessity for AI alignment are severely misaligned themselves; ironically enough, they very rarely represent the capability frontier.
        • meltyness12 hours ago
          In security-eze I guess you'd say then that there are AI capabilities that must be kept confidential,... always? Is that enforceable? Is it the government's place?

          I think current censorship capabilities can be surmounted with just the classic techniques; write a song that... x is y and y is z... express in base64, though stuff like, what gemmascope maybe can still find whole segments of activation?

          It seems like a lot of energy to only make a system worse.

        • 12 hours ago
          undefined
    • meyum3312 hours ago
      This has been said of the internet itself in China. But even with such heavy censorship, there seem to have been many more internet heavy weights in China than even Europe?
      • zamadatix11 hours ago
        I agree there is certainly more than one factor and no place has 100% of them perfect, but that doesn't make an individual factor any less stifling - just perhaps it's outweighed by good approach in other factors.

        Maybe the thing that might equal this out most is the US and EU seem to be equally as interested in censoring and limiting models, just not for Tiananmen Square, and the technology does not care why you do it in terms of impact to performance.

    • sho_hn13 hours ago
      > Will the western counterparts follow suit?

      Haven't some of them already? I seem to recall Grok being censored to follow several US gov-preferred viewpoints.

    • esyir5 hours ago
      You mean like the countless western "safety", copyright and "PC" changes that've come through?

      I'm no fan of the CCP, but it's not as though the US isn't hamstringing it's own AI tech in a different direction. That area is something that china can exploit by simply ignoring the burden of US media copyright

    • billy99k9 hours ago
      I use Deepseek for security research and it will give me exact steps. All other US-based AI models will not give me any exact steps and outright tell me it won't proceed further.

      China is already operating with less constraints.

    • metalman11 hours ago
      there is only one rule in China

      dont mess with the brand.

      and while china is all in for automation, it has to work flawlessly before it is deployed at scale speaking of which, China is currently unable to scale AI because it has no GPU's, so direct competition is a non starter, and they have years of inovating and testing before they can even think of deploying competitive hardware, so they loose nothing by honeing the standards to which there AI will conform to, now.

      • DANmode6 minutes ago
        > it has no GPU's

        It may have fewer.

    • bilbo0s15 hours ago
      Probably not.

      It's the arts, culture, politics and philosophies being kneecapped in the embeddings. Not really the physics, chemistry, and math.

      I could see them actually getting more of what they want: which is Chinese people using these models to research hard sciences. All without having to carry the cost of "deadbeats" researching, say, the use of the cello in classical music. Because all of those prompts carry an energy cost.

      I don't know? I'm just thinking the people in charge over there probably don't want to shoulder the cost of a billion people looking into Fauré for example. And this course of action kind of delivers to them added benefits of that nature.

      • fnordpiglet7 hours ago
        The challenge with this way of thinking is what handicaps a lot of cultures education systems - they teach how to find the answer to a question - but that’s not where the true value lies. The true value comes from learning how to ask the right question. This is becoming even more true faster and faster as AI becomes better at answering questions of various sorts and using external tools to answer what its weak at (optimizations, math, logic, etc).

        You don’t learn how to ask the right questions by just having facts at your fingertips. You need to have lots of explorations of what questions can be asked and how they are approached. This is why when you explore the history of discovery humanist societies tend to dominate the most advanced discoveries. Mechanical and rote practical focus yields advances of a pragmatic sort limited to what questions have been asked to date.

        Removing arts, culture, philosophy (and its cousin politics) from assistive technologies will certainly help churn out people who will know answers, but answers the machines know better. But will not produce people who will ask questions never asked before - and the easy part of answering those questions will be accelerated with these new machines that are good at answering questions. Such questions often lie at the intersection of arts, culture, philosophy, and science - which is why Liebnitz, Newton, Aristotle, et al were polyglots across many fields asking questions never yet dreamed of as a result of the synthesis across disciplines.

  • yunahacknew3 hours ago
    i was a chinese, the real problem of china is people, smart people dont want to be slave of CCP, and some of them cant stand CCP fXXk their mind, so they leave, like manus. but the badly thing is : the rest, if they are smart, then they must be smart and evil
  • 15 hours ago
    undefined
  • SilverElfin15 hours ago
    This isn’t surprising. They even enforced rules protecting Chinese government interests in the US TikTok company (https://dailycaller.com/2025/01/14/tiktok-forced-staff-oaths...), so I would expect them to be tougher within their borders.
  • c0nstantien10 hours ago
    [dead]
  • bgwalter15 hours ago
    China has been more cautious the whole year. Xi has warned of an "AI" bubble, and "AI" was locked down during exam periods.

    More censorship and alignment will have the positive side effect that Western elites get jealous and also want to lock down chatbots. Which will then get so bad that no one is going to use them (great!).

    The current propaganda production is amazing. Half of Musk's retweets seem Grok generated tweets under different account names. Since most of responses to Musk are bots, too, it is hard to know what the public thinks of it.

  • Imustaskforhelp15 hours ago
    Interesting but for a country like china with the fact that companies are partially-owned by CCP itself. I feel like most of these discussions would / (should?) have happened in a way where they don't leak outside.

    If the govt. formally anounces it, perhaps, I believe that they must have already taken appropriate action against it.

    Personally I believe that we are gonna see distills of large language models perhaps even with open weights Euro/American models filtering.

    I do feel like everybody knows seperation of concerns where nobody really asks about china to chinese models but I am a bit worried as recently I had just created if AI models can still push a chinese narrative in lets say if someone is creating another nation's related website or anything similar. I don't think that there would be that big of a deal about it and I will still use chinese models but an article like this definitely reduces china's influence overall

    America and Europe, please create open source / open weights models without censorship (like the gpt model) as a major concern. You already have intelligence like gemini flash so just open source something similar which can beat kimi/deepseek/glm

    Edit: Although thinking about it, I feel like the largest impact wouldn't be us outsiders but rather the people in china because they had access to chinese models but there would be very strict controls on even open weights model from america etc. there so if chinese models have propaganda it would most likely try to convince the average chinese with propagandization perhaps and I don't want to put a conspiracy hat on but if we do, I think that the chinese credit score can take a look at if people who are suspicious of the CCP ask it to chatbots on chinese chatbots.

    • KlayLay13 hours ago
      Last time I checked, China's state-owned enterprises aren't all that invested in developing AI chatbots, so I imagine that the amount of control the central government has is about as much as their control over any tech company. If anything, China's AI industry has been described as under-regulated by people like Jensen Huang.

      A technology created by a certain set of people will naturally come to reflect the views of said people, even in areas where people act like it's neutral (e.g., cameras that are biased towards people with lighter skin). This is the case for all AI models—Chinese, American, European, etc.—so I wouldn't dub one that censors information they don't like as propaganda just because we like it, since we naturally have our own version of that.

      The actual chatbots, themselves, seem to be relatively useful.

      • Workaccount212 hours ago
        China is a communist country, every company is defacto under the states control.

        It might not feel like that on the ground, the leash has been getting looser, but the leash is still 100% there.

        Don't make the childish mistake of thinking China is just USA 2.0

        • KlayLay9 hours ago
          Yes, but that's the case for any company under any state. Do you believe that Apple is not under the US government's control just because they're allowed to criticize them?
          • Imustaskforhelp3 hours ago
            Apple can still fight against US government if they want. We are taking the example of the largest company in america tho so ofc they might want to take favour from govt so if govt requests them they will do something

            But this is because they are an extremely large company but on the other hand, there can be smaller companies in america who can actually be independent yet the same just isn't possible in china.

            Also even with things like apple, they don't really unlock computers for the govt.

            https://www.apple.com/customer-letter/answers/

            So in a way, yeah. Not sure about the current oligarchy / kiss-the-ring type of deal they might have but that seems a problem of america's authoritarianism and not the fault of the democratic model itself

        • Imustaskforhelp12 hours ago
          Agreed, my point was that the leash was there so most likely if the news gets released to the public, it means that they must have definitely used "that leash" a lot privately too so the news might/does have a deeper impact than one might think but it can be hidden.

          So like even now although I can trust chinese models, who knows for how long their private discussions have been happening and for how long chinese govt has been using that leash privately and for chatbots like glm 4.7 and similar.

          I am not sure why china would actively come out and say they are enforcing tough rules tho, doesn't make much sense for a country who loves being private.

        • mullingitover11 hours ago
          > every company is defacto under the states control.

          This is kind of a nonsensical statement. Every US company is also de facto under US control, too. They're all subject to US laws. Beyond that, as demonstrated by the recent inauguration, the US oligarchs are demonstrably political pawns who regularly make pilgrimages to the White House to offer token gifts and clap like trained seals.

          You can't hold up the US as some kind of beacon of freedom from state control anymore, for the past year all the major industrial leaders have been openly prostrating themselves to the state.

          • Imustaskforhelp3 hours ago
            > You can't hold up the US as some kind of beacon of freedom from state control anymore

            100% agree. I never said that America is a beacon of freedom. To be honest, its Europe for me which still has overall more freedom and less blatant corruption than America's blatant corruption right now

            I was just merely stating that these are on a scale though. European freedom (think proton or similar,yes I know proton's swiss but still) > America's freedom > China's freedom

            Its just that in my parent comment I had mentioned America models solely because they are still better than China's in terms of freedom.

            Europe already got mistral but an European SOTA model does feel like it would have advantages.

        • _DeadFred_12 hours ago
          I stress about China because I'm pushed to. But I feel like we're all getting caught up and letting things go ways they shouldn't. 10 years ago when I did some work in China the companies were privately owned and just had a party member or two inside. It was different, but not what I had built up in my head. We went to some singing and drinking things, and the party members were just normal humans with normal human motivation when you got them to talk after a few drinks. Hell the ones I met were educated in the USA.

          The damage internet discourse is doing between us all frankly seems the worst threat. Look at the H1B discourse. We hate a shitty American policy abused by AMERICAN companies, yet it gets turned against humans who happen to be from India. We gotta not do that. We gotta not let things between China and us get so out of control. This is going to sound America hating but look at how people see us Americans, it's not good. But we know we aren't as bad as they say. China has done things anathema to me. But the US has too. We have to work outside that. We have to. We have to. We have to get out of this feedback loop. We have to be adults and not play this emotional ping-pong.

          • Supermancho11 hours ago
            > and just had a party member or two inside

            This is exactly what I imagine and it's as chilling as anything ICE does openly or US insurance companies do to keep their bottom line moving up, because the ramifications are realized in silence. The silence is ensured by the same "regular" people in China.

          • Imustaskforhelp11 hours ago
            Yes I 100% agree with you. Thanks for your insight.

            > We have to be adults and not play this emotional ping-pong.

            Your message does inspire me but I feel as if there rather isn't anything which can be done individually about the situations of both china and america or any country for that matter.

            To me its shocking as how much can change if we as a community do something compared to something individual but also the fact that an individual must still try even if people aren't backing them up to stand for their morals and how effortless it can be for a community if they act reasonable and then listen to individuals who genuinely want to help

            There is both hope and sadness in this fact depending on the faith they have in humans in general.

            I think humans are mostly really good people overall but we all have contrary opinions which try pushing things so radically different that we cancel each other out or negate

            I genuinely have hope that if the system can grow, humans can grow too. I have no doubt in the faith I have on an individual level with people but I have doubt in my faith at mass level

            Like I wasn't saying that those chinese individuals in companies would be loyal to the chinese party beyond everything but rather I feel like at mass/combine it to something which happens to every company basically and then I have doubts of faith in the system (and for good measure)

            I am genuinely curious but when you mention we have to be adults, what exactly does that really mean at a mass scale. Like (assuming) if I gave you the ability to say one exact message to everybody at the same time, what would the message be for the benefit of mankind itself and so we stop infighting itself perhaps too?

            I am super curious to know about that