Mucking about in the kernel basically bypasses the entire security and stability model of the OS. And this is not theoretical: people have been rooted through buggy anti-cheat software, where the game sent malicious calls to the kernel and hijacked the anti-cheat to gain root access.
Even in more benign cases, people often get 'gremlins': weird failures and BSODs due to kernel APIs being intercepted and overridden incorrectly.
The solution here is to establish a root of trust from boot and use the OS's sandboxing features (like Job Objects on NT, among other things). Providing a secure execution environment is the OS developers' job.
Every sane approach to security relies on keeping the bad guys out, not mitigating the damage they can do once they're in.
It'd be really interesting to see what would happen - for instance, what fraction of players would pick each pool during the first few weeks after launch, and then how many of them would switch after? What about players who joined a few months or a year after launch?
Unfortunately, pretty much the only company that could make this work is Valve, because they're the only one that actually cares about players and is big enough to gather meaningful data. And I don't think even Valve will see enough value in this to dedicate the substantial resources it'd take to implement it.
This is roughly what Valve does for CS2. But, as far as I understand, it's not very effective and unfortunately still results in higher cheating rates than e.g. Valorant.
The example still kind of applies. In the CS world, serious players use Faceit for matchmaking, which requires you to install a kernel-level anticheat. This is basically what you're suggesting, but operated by a 3rd party.
But so far that still seems to be miles away.
Anyway, Counter-Strike did have community policing of lobbies, called Overwatch - https://counterstrike.fandom.com/wiki/Overwatch
It was terrible, as it required the community to conclude beyond reasonable doubt that the suspect was cheating, and cheats today are sophisticated enough to make that conclusion very difficult.
My understanding of the proposal is that it advertises no invasive anticheat (meaning mostly rootkit/kernel anticheat). So, the value proposition is anyone who doesn't want a rootkit on their computer. This could be due to anything from security concerns to desiring (more) meaningful ownership of one's devices.
I guess I didn't exactly make that clear...
A few of the arguments advanced by the "anti-anticheat" crowd that inevitably pops up in these threads are "anticheat is ineffective so there's no point to using it" and "anticheat is immoral because players aren't given a choice to use it or not and most of them would choose to not use it".
I don't believe that either of these are true (and given the choice I would almost never pick the no-anticheat queue), but there's not a lot of good high-quality data to back that up. Hence, the proposal for a dual-queue system to try to gather that data.
Putting in the community review of the no-anticheat pool is just to head off the inevitable goalpost-moving of "well of course no system would be worse than a crappy system (anticheat), you need to compare the best available alternative (community moderation)".
Cheaters are by definition anomalies: they operate with information regular players do not have, and when they use aimbots they have skills other players don't have.
If you log every single action a player takes server-side and apply machine learning methods it should be possible to identify these anomalies. Anomaly detection is a subfield of machine learning.
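As a minimal stdlib-only sketch of that idea (all names, metrics, and thresholds are invented for illustration, not any real anti-cheat's method): flag players whose server-side performance metric is an extreme statistical outlier relative to the population.

```python
import statistics

def flag_outliers(headshot_rates, z_threshold=3.0):
    """Flag players whose headshot rate is an extreme outlier.

    headshot_rates: dict of player_id -> headshot fraction (server-side).
    Returns the set of player_ids more than z_threshold standard
    deviations above the population mean.
    """
    mean = statistics.mean(headshot_rates.values())
    stdev = statistics.stdev(headshot_rates.values())
    return {
        pid for pid, rate in headshot_rates.items()
        if stdev > 0 and (rate - mean) / stdev > z_threshold
    }

# Fifty ordinary players plus one suspicious outlier.
rates = {f"p{i}": 0.20 + 0.01 * (i % 5) for i in range(50)}
rates["suspect"] = 0.95
print(flag_outliers(rates))  # -> {'suspect'}
```

A real system would use many features and a proper anomaly-detection model rather than a single z-score, but the shape is the same: log everything server-side, then look for players far outside the distribution.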
It will ultimately prove to be the solution, because only the most clever of cheaters will be able to blend in while still looking like great players. And only the most competently made aimbots will be able to appear like great player skills. In either of those cases the cheating isn't a problem because the victims themselves will never be sure.
There is also another method the server can employ: players can be actively probed with game-world entities designed for them to react to only if they have cheats. Every such event adds probability weight onto the suspected cheater. Ultimately, the game world isn't delivered to the client in full, so if done well the cheats will not be able to filter these probes out. For example: as a potential cheater enters entity broadcast range of a fake entity camping in an invisible corner that only appears to them, their reaction to it is evaluated (mouse movements, strategy shift, etc.). Then when it disappears another evaluation can take place (cheats would likely offer mitigations for this part). Over time, cheaters will stand out from the noise; most will likely out themselves very quickly.
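The accumulation step above could be sketched like this (a toy model with invented weights and decay, assuming the server can already tell whether a given reaction targeted a phantom entity): each probe adds evidence, a decay factor keeps one coincidence from ever mattering, and only sustained reactions push the score up.

```python
import random

def update_suspicion(score, reacted_to_phantom, weight=1.0, decay=0.98):
    """Accumulate evidence across probes; decay so one fluke never bans.

    reacted_to_phantom: True if the player's aim or strategy shifted
    toward an entity that only a wallhack could have revealed.
    """
    score *= decay
    if reacted_to_phantom:
        score += weight
    return score

def simulate(reaction_prob, probes=200, seed=0):
    """Run a player through `probes` phantom-entity probes."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(probes):
        score = update_suspicion(score, rng.random() < reaction_prob)
    return score

legit = simulate(0.02)    # occasional coincidental reaction
cheater = simulate(0.80)  # reacts to most phantom entities
print(legit < cheater)  # -> True
```

With these numbers the cheater's score settles around weight × p / (1 − decay) ≈ 40, while a legitimate player hovers near 1, so a simple threshold separates them cleanly.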
So are very good players, very bad players, players with weird hardware issues, players who just got one in a million lucky…
When you have enough randomly distributed variables, by the law of large numbers some of them will be anomalous by pure chance. You can't just look at any statistical anomaly and declare it must mean something without investigating further.
In science, looking at a huge number of variables and trying to find one or two statistically significant ones so you can publish a paper is called p-hacking. This is why there are so many dubious and often even contradictory "health condition linked to X" articles.
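The multiple-comparisons point is easy to demonstrate with a toy simulation (all numbers invented): give thousands of perfectly fair players a 30%-accurate shot, and someone will still post a hit count well above the mean purely by luck.

```python
import random

def lucky_streak_exists(players=10_000, shots=100, p=0.3,
                        threshold=42, seed=1):
    """With enough fair players, someone looks 'anomalous' by chance.

    Every player hits each of `shots` shots with probability p, so the
    expected hit count is 30. We check whether anyone reaches
    `threshold` hits (roughly 2.5 standard deviations above the mean).
    """
    rng = random.Random(seed)
    best = max(
        sum(rng.random() < p for _ in range(shots))
        for _ in range(players)
    )
    return best >= threshold

print(lucky_streak_exists())  # -> True
```

At this threshold, dozens of the 10,000 fair players typically clear the bar, which is exactly why an anomaly score alone can't be a ban.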
They will all cluster in very different latent spaces.
You don't automatically ban anomalies, you classify them. Once you have the data and a set of known cheaters you ask the model who else looks like the known cheaters.
Online games are in a position to collect a lot of data and to also actively probe players for more specific data such as their reactions to stimuli only cheaters should see.
> With that goal in mind, we released a patch as soon as we understood the method these cheats were using. This patch created a honeypot: a section of data inside the game client that would never be read during normal gameplay, but that could be read by these exploits. Each of the accounts banned today read from this "secret" area in the client, giving us extremely high confidence that every ban was well-deserved.
Anyway, this isn’t the Olympics, a professional sport, or Chess. It’s more like pickup league. Preserving competitive purity should be a non-goal. Rather, aim for fun matches. Matchmaking usually tries to find similar skill level opponents anyway, so let cheaters cheat their way out of the wider population and they’ll stop being a problem.
Or, let players watch their killcams and tag their deaths. Camper, aimbot, etc etc. Then (for players that have a good sample size of matches) cluster players to use the same tactics together.
Treating games like serious business has sucked all the fun out of it.
Matching based on skill works only as long as you have an abundance of players to do it with. When you have to account for geography, time of day, momentary availability, and skill level, you realize you've fractured certain player pools so much that it's not fun for them anymore. Keep in mind that "cheaters" are also looking for matches that would maximize their cheats. Maybe it's 8 PM Pacific Time with tons of players there, but it's 3 AM somewhere else with a much more limited number of players. Spoof your ping and location to be there and have fun sniping every player on the map. Sign up for a new account on every play, who cares. Your fun as a cheater is watching others lose their shit. You're not building a character with history and reputation. You're sniping others while they don't realize it. It may sound limited in scope and not worth the effort to you, but it's millions of people out there who ruin the game for everyone.
Almost every game I know of lets players "watch their kill cam", and cheaters have adapted. The sniped players have a bias to vote that the sniper was cheating, and the snipers have a bias to vote otherwise. Lean one way or the other, and it's another post on /r/gaming about how your game sucks.
Unpopular opinion: cheaters don’t, griefers do.
“Cheater” is a pejorative for someone who sidesteps the rules and uses technology instead of, uh, pardon the potentially poor word choice, innate skills. They don’t inherently want to see others suffer as they stomp; it’s a matchmaking bug that they’re put where they don’t belong. They just want to do things they cannot do on their own but which are technically possible. A more positive term for that is a “hacker”.
Griefers are a different breed: they don’t just enjoy their own success but get entertained by others’ suffering. Not a cheating issue TBH (cheats merely enable more opportunities); it’s more of a “don’t match us anymore, we don’t share the same ideas of fun” thing. “Black hat” is a close enough term, I guess.
YMMV, but if someone performs adequately for my skill levels (that is, they also don’t play well) then they don’t deprive me of any fun irrespective of how they’re playing.
Yes, its prize pools are an order of magnitude higher than those of Olympic sports or chess.
That solution only works on servers hosted by players - I've never seen huge game companies that run their own servers (like GTA) have dedicated server admins. I guess they think they can just code cheaters out of their games, but they never can.
It's kind of weird that we still don't have distributed computing infrastructure. Maybe that will be another thing where agents can run near the data they're crunching, on generic compute nodes.
> The general simplistic answer from those who never had to design such a game or a system of “do everything on the server” is laughably bad.
What “Netflix did” was provide a dead-simple static-file-serving appliance for ISPs to host, with their Netflix auth on top. In their early days, Netflix had one of the simplest “auth” stories because they didn’t care.
It would add some latency but could be opt-in for those that care enough for all players in a match to take the hit.
You can't make a competitive FPS game with a dumb terminal. It can't work because the latency is too high, which is why you have to run a local predictive simulation.
You don't want to wait for the server to ack your inputs.
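The standard shape of that local prediction is: apply inputs immediately, keep the unacknowledged ones, and when an authoritative server state arrives, rewind to it and replay what the server hasn't seen. A minimal sketch (class and field names are invented, not any real engine's API):

```python
from dataclasses import dataclass

@dataclass
class Input:
    seq: int
    dx: int  # movement delta for this tick

class PredictedClient:
    """Minimal client-side prediction with server reconciliation."""

    def __init__(self):
        self.x = 0         # locally predicted position
        self.pending = []  # inputs not yet acknowledged by the server

    def apply_input(self, inp):
        # Apply immediately instead of waiting a round-trip for the ack.
        self.x += inp.dx
        self.pending.append(inp)

    def on_server_state(self, server_x, last_acked_seq):
        # Rewind to the authoritative state, drop acked inputs,
        # then replay everything the server hasn't processed yet.
        self.x = server_x
        self.pending = [i for i in self.pending if i.seq > last_acked_seq]
        for inp in self.pending:
            self.x += inp.dx

client = PredictedClient()
for seq in range(1, 4):
    client.apply_input(Input(seq, dx=2))  # moves instantly, no waiting
client.on_server_state(server_x=2, last_acked_seq=1)
print(client.x)  # -> 6: input 1 confirmed, inputs 2-3 replayed on top
```

When prediction and the server agree, the replay lands the player exactly where they already saw themselves, so the round-trip latency is invisible; when they disagree (e.g. the server rejected a move), the client snaps to the corrected position.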
I grew up with Star Trek and Star Wars, wondering what "I'll transfer 20 units to you" meant. Bitcoin was an eye-opener in the sense of "maybe this is possible" to me. But it soon became clear to me that it's not the case. There is still no way for random agents to prove they are not malicious. It's easier within the confines of the Bitcoin network. But maybe I'm not smart enough to come up with a more generalized concept. After all, I was one of the people who read the initial Bitcoin white paper on HN and didn't understand it back then and dismissed it.
I have always wondered why more companies don't do trust-based anti-cheat management. Many cheats are obvious to anyone in the game: you see people jumping around like crazy, or a character able to shoot through walls, or something else that is impossible for a non-cheater to do.
Each opponent in the game is getting the information from the cheating player's game that has it doing something impossible. I know it isn't as simple as having the game report another player automatically, because cheaters could report legitimate players... but what if each game reported cheaters, and then you wait for a pattern... if the same player is reported in every game, including against brand-new players, then we would know they were a cheater.
Unless cheaters got to be a large percentage of the player population, they shouldn't be able to rig it.
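The "wait for a pattern" idea could be sketched roughly like this (class name and thresholds are made up for illustration): flag a player only when reports recur across a large fraction of many independent matches, so a few salty reports never matter.

```python
from collections import defaultdict

class ReportTracker:
    """Flag players only when reports recur across many matches."""

    def __init__(self, min_matches=20, flag_ratio=0.8):
        self.matches = defaultdict(int)   # player -> matches played
        self.reported = defaultdict(int)  # player -> matches with a report
        self.min_matches = min_matches
        self.flag_ratio = flag_ratio

    def record_match(self, player, was_reported):
        self.matches[player] += 1
        if was_reported:
            self.reported[player] += 1

    def flagged(self):
        # Require both a large sample and a high report rate.
        return {
            p for p, n in self.matches.items()
            if n >= self.min_matches
            and self.reported[p] / n >= self.flag_ratio
        }

tracker = ReportTracker()
for _ in range(25):
    tracker.record_match("obvious_cheater", was_reported=True)
    tracker.record_match("good_player", was_reported=False)
tracker.record_match("good_player", was_reported=True)  # one salty report
print(tracker.flagged())  # -> {'obvious_cheater'}
```

A real version would also weight who is reporting (e.g. reports from brand-new opponents, who have no grudge history, count for more), which is exactly the signal the comment above points at.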
Players in some games with custom servers run webs of trust (or rather distrust, shared banlists). They are typically abused to some degree and good players are banned across multiple servers by admins acting in bad faith or just straight up not caring. This rarely ends well.
I used to run popular servers for PvP sandbox games and big communities, and we used votebans/reports to evict good players from casual servers to anarchy ones, where they could compete, but a mod always had to approve the eviction using a pretty non-trivial process. This system was useless for catching cheaters, we got them in other ways. That's for PvP sandboxes - in e-sports grade games reports are useless for anything.
I played COD4 a lot, though not competitively. I used to say that I had a bad day if I didn't get called a cheater once.
I didn't cheat, never have, but some people are just not aware of where the ceiling is.
The cheaters that annoyed us back then were laughably obvious. They'd just hold the button with a machine gun and get headshots after headshots, or something blatant like that.
Out of curiosity I did a quick internet search, and a couple of months ago a new wave of bots emerged. Those bots also join as a majority group but never fully join the game; they simply take up slots in a team, preventing others from joining. Makes you wonder why the server isn't timing them out.
And even that's the (relatively) straightforward part. The hard part is doing this without injuring the kernel enough that the only sensible solution for the security conscious is a separate PC for gaming.
Kernel anti-cheat isn't an elegant solution either. It's another landmine: security holes, false positives, broken dev tools, and custody battles with Windows updates. And pushing more logic server-side still means weeks of netcode tuning and a cascade of race conditions every time player ping spikes, so the idea that this folds to "better code discipline" is fantasy.
I play FPS games competitively, and Valorant is by far the least cheater-ridden FPS on the market.
Nothing is perfect in the software world, and this is the best tool for the job.
If your PC is that important to you, then maybe don't install this particular software.
It's all about trade-offs.
(Not being sarcastic.)
Sort of like nuclear weapons
https://www.forbes.com/sites/paultassi/2025/01/20/elon-musk-...
Okay, chill. I'm willing to believe that anti-cheat software is "sophisticated", but intercepting system calls doesn't make it so. There is plenty of software that operates at elevated privilege and runs transparently while other software is running, while intentionally being unsophisticated. It's called a kernel subsystem.
I was not aware that attackers could potentially manipulate attestation! How could that be done? That would seemingly defeat the point of remote attestation.
Defeating remote attestation will be a key capability in the future. We should be able to fully own our computers without others being able to discriminate against us for it.
There is guidance on "Active" attacks [1], which is to set up your TPM secrets so they additionally require a signature from a secret stored securely on the CPU. But that only addresses secret storage, and does nothing about the compromised measurements. I also don't know what would be capable of providing the CPU secret for x86 processors besides... an embedded/firmware TPM.
[1] https://trustedcomputinggroup.org/wp-content/uploads/TCG_-CP...
A more sophisticated attacker could plausibly extract key material from the TPM itself via sidechannels, and sign their own attestations.
But the main point there is that this setup is prohibitively expensive for most cheaters.
It is not "fake", a software TPM is real TPM but not accepted/approved by anticheat due to inability to prove its provenance
(Disclosure: I am not on the team that works on Vanguard, I do not make these decisions, I personally would like to play on my framework laptop)
This seems much more doable today than in the past, as machines boot in moments. Switching from a secure "xbox mode" to free-form PC mode would be barely a bump.
Now, I see one major difference: heterogeneous vs homogeneous hardware (and the associated drivers that come with that). In the Xbox world, one is dealing with a very specific hardware platform and a single set of drivers. In the PC world (even with a trusted secure boot path), one is dealing with lots of different hardware and drivers that can all have their own exploits. If users are more easily able to modify their PCs and their set of drivers, I'd imagine serious cheaters would gravitate to combinations they know they can exploit to break the secure/trusted boot boundary.
I wonder if there are other problems.
Well it's definitely not game developer written kernel anti-cheat on consoles.
https://www.vice.com/en/article/fs-labs-flight-simulator-pas...
Company decides to "catch pirates" as though it was police. Ships a browser stealer to consumers and exfiltrates data via unencrypted channels.
https://old.reddit.com/r/Asmongold/comments/1cibw9r/valorant...
https://www.unknowncheats.me/forum/anti-cheat-bypass/634974-...
Covertly screenshots your screen and sends the image to their servers.
https://www.theregister.com/2016/09/23/capcom_street_fighter...
https://twitter.com/TheWack0lian/status/779397840762245124
https://fuzzysecurity.com/tutorials/28.html
https://github.com/FuzzySecurity/Capcom-Rootkit
Yes, a literal privilege escalation as a service "anticheat" driver.
Trusting these companies is insane.
Every video game you install is untrusted proprietary software that assumes you are a potential cheater and criminal. They are pretty much guaranteed to act adversarially to you. Video games should be sandboxed and virtualized to the fullest possible extent so that they can access nothing on the real system and ideally not even be able to touch each other. We really don't need kernel level anticheat complaining about virtualization.
You do not need kernel access to make spyware that takes screenshots. You do not need a privileged service to read the user’s browser history.
You can do all of this, completely unprivileged on Windows. People always seem to conflate kernel access with privacy which is completely false. It would in fact be much harder to do any of these things from kernel mode.
There are far better ways to detect cheating, such as calculating statistics on performance and behaviour and simply binning players with those of similar competency. This way, if cheating gives god-like performance, you play with other god-like folks. No banning required. Detecting the thing cheating enables is much easier than detecting the ways people gain it; it creates a single point of detection that is hard to avoid and can be done entirely server-side, with multiple tiers of how much server-side calculation a given player consumes. Milling around in bronze levels? Why check? If you aren't performing well enough to leave the low ranks, perhaps you need cheats as a handicap; only scrutinize players consistently performing well out of distribution, at which point you catch smurfing as well.
The point is, focusing on detecting the thing people care about, rather than one of the myriad ways people may gain that unfair edge, is going to be easier and more robust while asking less egregious things of users.
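The binning idea above could be sketched as follows (function name, metric, and band count are all invented for illustration): rank players by a rolling server-side performance score and split them into percentile pools, so out-of-distribution performers, cheating or not, end up matched with each other.

```python
def make_pools(scores, bands=4):
    """Split players into equally sized skill pools by performance score.

    scores: dict of player -> rolling performance metric (e.g. K/D).
    Returns a list of `bands` pools, weakest first; god-like performers
    naturally land in the top pool together.
    """
    ranked = sorted(scores, key=scores.get)
    size = max(1, len(ranked) // bands)
    pools = [ranked[i * size:(i + 1) * size] for i in range(bands - 1)]
    pools.append(ranked[(bands - 1) * size:])  # top pool takes the rest
    return pools

scores = {"a": 0.9, "b": 1.1, "c": 1.0, "d": 1.2,
          "e": 0.8, "f": 1.3, "g": 6.5, "h": 7.0}  # g, h: outliers
pools = make_pools(scores)
print(pools[-1])  # -> ['g', 'h']: the 'god tier' pool
```

No per-player verdict is needed here: the matchmaker just has to sort, which is the "single point of detection" the comment describes.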
Anti-cheat is not used to "protect" bronze level games. FACEIT uses a kernel level anti cheat, and FACEIT is primarily used by the top 1% of CS2 players.
A lot of the "just do something else" crowd neglects to realize that anticheat is designed to protect the integrity of the game at the highest levels of play. If the methods you described were adequate, the best players wouldn't willingly install FACEIT - they would just stick with VAC which is user-level.
> There are far better ways to detect cheating, such as calculating statistics on performance
Ask any CS player how VAC’s statistical approach compares to Valorant’s Vanguard and you will stop asserting such foolishness
The problem with what you are saying is that cheaters are extremely determined and skilled, and so the cheating itself falls on a spectrum, as do the success of various anticheat approaches. There is absolutely no doubt that cheating still occurs with kernel level anticheats, so you’re right it didn’t “solve” the problem in the strictest sense. But as a skilled player in both games, only one of them is meaningfully playable while trusting your opponents aren’t cheating - it’s well over an order of magnitude in difference of frequency.
Simply put, the game companies want to own our machines and tell us what we can or can't do. That's offensive. The machine is ours and we make the rules.
I single out kernel level anticheats because they are trying to defeat the very mitigations we're putting in place to deal with the exact problems you mentioned. Can't isolate games inside a fancy VFIO setup if you have kernel anticheat taking issue with your hypervisor.
By this same logic: As far as I'm concerned, if the game developer only wants to allow players running anticheat to use their servers then they're just exercising their god given rights as the owner of the server.
My position is this is unfair discrimination that should be punished with the same rigor as literal racism. Video games are the least of our worries here. We have vital services like banks doing this. Should be illegal.
You can argue about the methods used for anticheat, but your comment here is trying to defend the right to cheat in online games with other people. Just no.
I rather suspect that the reason for this is the current gaming economy of unlockable cosmetics that you can either grind for, or pay for. If people can cheat in single player or PvE, they can unlock the cosmetics without paying. And so...
Don't play with untrusted randoms. Play with people you know and trust. That's the true solution.
Kernel-level AC is a compromise for sure, and it's the gamer's job to assess whether the game is worth the privacy risk. But I'd say it's much more their right to take that risk than it is the cheater's right to ruin nine other people's time for their own selfish amusement.
If it kills online gaming, then so be it. I accept that sacrifice. The alternative leads to the destruction of everything the word hacker ever stood for.
You are hijacking this thread about VOLUNTARY ceding of freedom as if the small community even willing to install these were a slippery slope to something worse. You have a point when it comes to banking apps on rooted phones, and I'm with you on that, but this is not the thread for it.
Do you have evidence valve is working to infect the linux kernel for everyone?
Mind you, it doesn't mean that the Linux kernel will be "infected for everyone". It means that we'll see the desktop Linux ecosystem forking into the "secure" Linux, which you don't actually have full control of but which you need in order to run any app that demands a "secure" environment (it'll start with KAC but inevitably progress to other kinds of DRM, such as video streaming etc.). Or you can run a Linux that you actually control, but then you're missing out on all those things. Similar to the current situation with mainline Android and its user-empowering forks.
People can dual boot, what's wrong with a special gaming linux distribution?
You may think it's your "god-given right" to cheat in multiplayer games, but the overwhelming majority of rational people simply aren't going to play a game where every lobby is ruined by cheaters.
The computers are supposed to be ours. What we say, goes. Cheating may not be moral but attempts to rob us of the power that enables cheating are even less so.
Remote attestation is the ultimate surrender. It's not really your machine anymore. You don't have the keys to the machine. Even if you did, nobody would trust attestations made by those keys anyway. They would only trust Google's keys, Apple's keys. You? You need not apply.
Anti-cheat doesn't run on modern consoles; game devs know that the latest firmware on a console is secure enough that the console can't be tampered with.
This is the exact sort of nonsense situation I want to prevent. We should own the computers, and the corporations should be forced to simply suck it up and deal with it. Cheating? It doesn't matter. Literal non-issue compared to the loss of our power and freedom.
It's just sad watching people sacrifice it all for video games. We were the owners of the machine but we gave it all up to play games. This is just hilarious, in a sad way.
They also have VM checks. I "accidentally" logged into MGM from a virtual machine. They put my account on hold and requested I write a "liability statement" stating I would delete all "location altering software" and not use it again. (Really!)
Looking at cards is a way easier problem than rendering a 3D world with other players bouncing around. I imagine you could just send the card player basically a screenshot of what you want them to see, give them no other data to work with, and that would mostly solve cheating.
But gambling can be way more complicated than just looking at cards so maybe there's a lot more to it.