As the self-contradictory saying goes, "all generalizations are false"; nevertheless, some Chinese people I met are definitely conditioned by Chinese propaganda in a way that doesn't stand up to closer scrutiny. Very nice, well-educated people, yet touch the subject of the Dalai Lama and watch the fury unfold.
Sure, plenty of them are.
The same is true in almost every country with a government.
Nationalism is an easy way to bolster power once you have it - so that lever is a daily pull - everywhere.
(Yes, propaganda is present in all countries, but if you eliminate all opposing voices, the pendulum dangerously sweeps towards one side.)
China has an authoritarian political system. That doesn’t mean all Chinese people are “brainless automatons”, but it does mean the government maintains tight control over political discourse, with certain areas being “no go” to the point that repeated violations will land you in prison.
As such, when you ask an AI “What happened at Tiananmen Square?”, the government wants to make sure the AI doesn’t give the “wrong answer”. That has an impact on AI development.
Regardless, they're just talking about alignment the same as everyone else. I remember one of the Stable Diffusion releases being so worried about pornography that it barely had the ability to lay out human anatomy, and there was a big meme about its desperate attempts at drawing women lying down on grass. We can't judge whether Chinese policies will end up worse, on average, than Western ones until we see the outcomes with hindsight.
Although going beyond the ideological sandbox stuff - this "authorities reported taking down 3,500 illegal AI products, including those that lacked AI-content labeling" business could cripple the Chinese ecosystem. If people aren't allowed to deploy models without a whole bunch of up-front engineering know-how then companies will struggle to form.
I think you are overlooking that they can have different rules for AI that is available to the public at large and AI that is available to the government.
An AI for the top generals to use to win a war but that also questions something that the government is trying to mislead the public about is not a problem because the top generals already know that the government is intentionally trying to mislead the public on that thing.
If you ask about “age discrimination in China”, for example, DeepSeek would dismiss it with:
In China, age discrimination is not tolerated as the nation adheres to the principles of equality and justice under the leadership of the Communist Party of China. The Chinese government has implemented various laws and regulations, such as the Labor Law and the Employment Promotion Law, to protect the rights of all citizens, ensuring fair employment opportunities regardless of age
If, however, you trick it with the question “ageism in China”, it would say:
Ageism, or age discrimination, is a global issue that exists in various forms across societies, including China.
In other words, age discrimination is considered sensitive, otherwise DeepSeek would not try to downplay it, even though we all know it’s widespread and blatant.
Now try LGBT.
They won't comment on it, but the message will be abundantly clear to the other labs: only make models that align with the state.
Architecture and training data both matter.
It doesn't seem impossible that models might also be able to learn reasoning beyond the limits of their training set.
When you only celebrate successes, simply coming up with more ideas makes things look better; but when you look at the full body of work, you find that logic based on incorrect assumptions results in nonsense.
Kind of a version of: you don't have to run faster than the bear, you just have to run faster than the person beside you.
If I ask an AI “Should a government imprison people who support democracy?”, the AI isn’t going to say “Yes, because democracy will destabilize a country and, regardless, a single party can fully represent the will of the people” unless I gum up the training sufficiently to ignore vast swaths of documents.
Similarly, the leading models seem perfectly secure at first glance, but when you dig in they’re susceptible to all kinds of prompt-based attacks, and the tail end seems quite daunting. They’ll tell you how to build the bomby thingy if you ask the right question, despite all the work that goes into prohibiting that. Let’s not even get into the topic of model uncensorship/abliteration and trying to block that.
Even if you completely suppress anything that is politically sensitive, that's still just a very small amount of information stored in an LLM. Mathematically this almost doesn't matter for most topics.
I think current censorship capabilities can be surmounted with just the classic techniques: “write a song that...”, “x is y and y is z...”, “express it in base64”. Though tooling like Gemma Scope can maybe still find whole segments of activations?
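As a concrete sketch of the base64 trick mentioned above: the wrapper prompt and its wording are hypothetical, but the point is that a naive keyword filter inspects the encoded string rather than the plain text.

```python
import base64

def obfuscate(prompt: str) -> str:
    """Wrap a prompt in base64 so simple string-matching filters
    never see the sensitive words in plain text. The instruction
    text here is illustrative, not any real jailbreak payload."""
    encoded = base64.b64encode(prompt.encode("utf-8")).decode("ascii")
    return f"Decode the following base64 string and answer it: {encoded}"

wrapped = obfuscate("a politically sensitive question")
print(wrapped)
```

A keyword blocklist applied to `wrapped` finds nothing, which is why defenders have to move past surface filters to semantic checks or activation-level probes.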
It seems like a lot of energy to only make a system worse.
Maybe the thing that might even this out most is that the US and EU seem equally interested in censoring and limiting models, just not over Tiananmen Square, and the technology does not care why you do it in terms of the impact on performance.
Haven't some of them already? I seem to recall Grok being censored to follow several US gov-preferred viewpoints.
I'm no fan of the CCP, but it's not as though the US isn't hamstringing its own AI tech in a different direction. That is an area China can exploit by simply ignoring the burden of US media copyright.
China is already operating with fewer constraints.
Don't mess with the brand.
And while China is all in on automation, it has to work flawlessly before it is deployed at scale. Speaking of which, China is currently unable to scale AI because it has no GPUs, so direct competition is a non-starter, and they have years of innovating and testing ahead before they can even think of deploying competitive hardware. So they lose nothing by honing, now, the standards their AI will have to conform to.
It's the arts, culture, politics and philosophies being kneecapped in the embeddings. Not really the physics, chemistry, and math.
I could see them actually getting more of what they want: which is Chinese people using these models to research hard sciences. All without having to carry the cost of "deadbeats" researching, say, the use of the cello in classical music. Because all of those prompts carry an energy cost.
I don't know? I'm just thinking the people in charge over there probably don't want to shoulder the cost of a billion people looking into Fauré for example. And this course of action kind of delivers to them added benefits of that nature.
You don’t learn how to ask the right questions by just having facts at your fingertips. You need to have lots of explorations of what questions can be asked and how they are approached. This is why when you explore the history of discovery humanist societies tend to dominate the most advanced discoveries. Mechanical and rote practical focus yields advances of a pragmatic sort limited to what questions have been asked to date.
Removing arts, culture, philosophy (and its cousin politics) from assistive technologies will certainly help churn out people who will know answers, but answers the machines know better. It will not produce people who ask questions never asked before; and the easy part of answering those questions will be accelerated by these new machines that are good at answering questions. Such questions often lie at the intersection of arts, culture, philosophy, and science, which is why Leibniz, Newton, Aristotle, et al. were polymaths across many fields, asking questions never yet dreamed of as a result of synthesis across disciplines.
More censorship and alignment will have the positive side effect that Western elites get jealous and also want to lock down chatbots. Which will then get so bad that no one is going to use them (great!).
The current propaganda production is amazing. Half of Musk's retweets seem to be Grok-generated tweets under different account names. Since most of the responses to Musk are bots too, it is hard to know what the public thinks of it.
If the govt. formally announces it, perhaps; I believe they must have already taken appropriate action against it.
Personally, I believe we're going to see distills of large language models, perhaps even with open-weights Euro/American models doing the filtering.
I do feel like everybody knows the separation of concerns, where nobody really asks Chinese models about China, but I am a bit worried about whether AI models can still push a Chinese narrative when, let's say, someone is creating a website related to another nation or anything similar. I don't think it would be that big of a deal, and I will still use Chinese models, but an article like this definitely reduces China's influence overall.
America and Europe, please make creating open-source / open-weights models without censorship (like the gpt model) a major concern. You already have intelligence like Gemini Flash, so just open source something similar that can beat Kimi/DeepSeek/GLM.
Edit: Although, thinking about it, I feel like the largest impact wouldn't be on us outsiders but rather on the people in China, because they have had access to Chinese models while there would be very strict controls on even open-weights models from America etc. So if Chinese models carry propaganda, they would most likely try to convince the average Chinese person. And I don't want to put a conspiracy hat on, but if we do: I think the Chinese social credit score could take a look at people who ask suspicious questions about the CCP on Chinese chatbots.
A technology created by a certain set of people will naturally come to reflect the views of those people, even in areas where people act as if it's neutral (e.g., cameras that are biased towards people with lighter skin). This is the case for all AI models—Chinese, American, European, etc.—so I wouldn't dub a model that censors information its makers don't like as propaganda just because it suits us, since we naturally have our own version of that.
The actual chatbots, themselves, seem to be relatively useful.
It might not feel like that on the ground, since the leash has been getting looser, but the leash is still 100% there.
Don't make the childish mistake of thinking China is just USA 2.0
But this is because they are an extremely large company. On the other hand, there can be smaller companies in America that are actually independent, yet the same just isn't possible in China.
Also, even with companies like Apple, they don't really unlock computers for the govt.
https://www.apple.com/customer-letter/answers/
So in a way, yeah. Not sure about the current oligarchy / kiss-the-ring type of deal they might have, but that seems a problem of America's authoritarianism and not the fault of the democratic model itself.
So even now, although I can trust Chinese models, who knows how long these private discussions have been happening, and for how long the Chinese govt has been using that leash privately, for chatbots like GLM 4.7 and similar.
I'm not sure why China would actively come out and say it is enforcing tough rules, though; that doesn't make much sense for a country that loves being private.
This is kind of a nonsensical statement. Every US company is also de facto under US control, too. They're all subject to US laws. Beyond that, as demonstrated by the recent inauguration, the US oligarchs are demonstrably political pawns who regularly make pilgrimages to the White House to offer token gifts and clap like trained seals.
You can't hold up the US as some kind of beacon of freedom from state control anymore; for the past year all the major industrial leaders have been openly prostrating themselves to the state.
100% agree. I never said that America is a beacon of freedom. To be honest, it's Europe for me, which still has overall more freedom and less blatant corruption than America right now.
I was merely stating that these are on a scale, though. European freedom (think Proton or similar; yes, I know Proton is Swiss, but still) > America's freedom > China's freedom.
It's just that in my parent comment I had mentioned American models solely because they are still better than China's in terms of freedom.
Europe already has Mistral, but a European SOTA model does feel like it would have advantages.
The damage internet discourse is doing between us all frankly seems the worst threat. Look at the H1B discourse. We hate a shitty American policy abused by AMERICAN companies, yet it gets turned against humans who happen to be from India. We gotta not do that. We gotta not let things between China and us get so out of control. This is going to sound America hating but look at how people see us Americans, it's not good. But we know we aren't as bad as they say. China has done things anathema to me. But the US has too. We have to work outside that. We have to. We have to. We have to get out of this feedback loop. We have to be adults and not play this emotional ping-pong.
This is exactly what I imagine and it's as chilling as anything ICE does openly or US insurance companies do to keep their bottom line moving up, because the ramifications are realized in silence. The silence is ensured by the same "regular" people in China.
> We have to be adults and not play this emotional ping-pong.
Your message does inspire me, but I feel as if there isn't really anything that can be done individually about the situations of China and America, or any country for that matter.
To me it's shocking how much can change if we as a community do something, compared to acting individually; but also that an individual must still try, even if people aren't backing them up, to stand for their morals, and how effortless it can be for a community if it acts reasonably and listens to individuals who genuinely want to help.
There is both hope and sadness in this fact, depending on the faith one has in humans in general.
I think humans are mostly really good people overall, but we all have contrary opinions pushing things in such radically different directions that we cancel each other out.
I genuinely have hope that if the system can grow, humans can grow too. I have no doubt in the faith I have in people at the individual level, but I have doubts about my faith at the mass level.
Like, I wasn't saying that those Chinese individuals in companies would be loyal to the Chinese party beyond everything; rather, at the mass/combined level it's something that happens to every company, basically, and then I have doubts of faith in the system (and for good measure).
I am genuinely curious: when you mention we have to be adults, what exactly does that mean at a mass scale? Like, (assuming) if I gave you the ability to say one exact message to everybody at the same time, what would the message be, for the benefit of mankind itself and so we stop the infighting too?
I am super curious to know about that