So yeah, I want to point out why I think Valve needs to fix its anti-cheat problem. They have VAC, but apparently it's doing jack shit; be it Counter-Strike 2 (or any previous iteration) or something like Hunt: Showdown, the prevalence of cheating players is undeniable. For me personally, it has reached the point where I'm not enjoying those games anymore, even though they are great games in themselves. But I'm killed by or matched against cheaters so often that I don't see the point anymore.
- Why do I think Valve is the only company able to do something against cheaters?
Because they already have the tooling with VAC, which aims to prevent cheating. Valve has the resources to invest in something more profound that could be used by any game that needs anti-cheat protection. And lastly, Valve is the company most interested in furthering gaming on Linux, and an anti-cheat solution needs to work on both operating systems. Only Valve has the motivation and the means to achieve that with their knowledge and resources. What do you think about the topic? Is the fight against cheaters hopeless? Do you think some other entity should provide anti-cheat protection, and why? I skimmed over "anti-cheat in the Linux kernel" posts on the net, but I have very little knowledge about the topic; what is your stance on it?
Edited: I mixed up EAC and VAC. EAC seems to be made by Epic Games. Both of these tools seem unable to prevent cheating, as mentioned above.
I described a plan here: https://pawb.social/comment/4536772
Not perfect, but neither are rootkits.
Why do you call anti-cheat software rootkits? Rootkits are malicious.
It’s software I don’t want running on my system and the kernel mode stuff has full hardware access.
Yes. It’s a matter of knowing what you trust on your pc and understanding your threat model. Programs running in user mode can also be malicious.
A non-exhaustive list of things that kernel-mode code can do that unprivileged (non-root) user-mode code cannot:
And so on. The question you should be asking isn’t “are they going to do this?” but instead “why are they even asking for this permission in the first place?”.
A game where you run around pretending to be a space marine doesn’t need low level access to your hardware.
And that makes it malware
I’d argue that any software that is adversarial towards the user/computer owner, and takes actions specifically to hinder an action by them, on their own machine, is malicious.
We’d be absolutely apoplectic if the government demanded we install a surveillance tool on our laptops in order to e.g. access the DMV website or file our taxes, but when someone tells us to in order to play a game, it’s okay? Nah.
So are anti-cheats
You can call them good for the community, but that doesn’t change what they are or what they’re meant to do
Or all the things they could be doing with that access without you knowing about it.
Client side anticheat is all malware, and they do it because violating consumer privacy and protections is cheaper than writing good and secure server code.
FFS, most games farm out security to a third party that creates the malware, just to push off any liability and say "yeah, we're totally doing something about cheaters." And that third party has way more motivation to act maliciously than the game maker, doing things like collecting data and selling it.
When was the last time you saw malicious software with a EULA and an uninstaller?
McAfee.
Also… “uninstaller”
I mean, AI sounds like a legit idea. In the past, battle.net from Blizzard was also just looking for "patterns", and AI could be much better at that. The question is: how do you get the required information without any client-side info? Training an AI to distinguish between a good player and a bot at that level would be very, very time-consuming.
All you really need is where the character is looking, their location and the terrain map, all of which are things the server has authority over or can check easily.
Distinguishing between a good player and a bot probably won’t be that hard. A simple aimbot would probably fire exactly at a target’s (0, 0) coordinate, while a good player may be a frame or two early or late. Someone with wallhacks will behave differently if they know someone is around a corner. There’s almost certainly going to be small “tricks” like that that an AI can pick up on.
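That "a frame or two early or late" idea can be sketched server-side as a simple variance test on aim error at trigger time. This is a toy illustration under assumptions of my own (function name, thresholds, and input format are all invented), not anything a real anti-cheat is known to ship:

```python
import statistics

def suspicious_aim(shot_errors_deg, min_shots=30, variance_floor=0.01):
    """Flag a player whose angular error at trigger time is implausibly
    consistent. Humans jitter by a frame or two; a naive aimbot snapping
    to the target's (0, 0) produces near-zero, near-constant error.

    shot_errors_deg: angular distance (degrees) between the crosshair and
    the target's centre at the moment each shot fired, replayed from
    server-authoritative state.
    """
    if len(shot_errors_deg) < min_shots:
        return False  # not enough data to judge fairly
    return statistics.pvariance(shot_errors_deg) < variance_floor

# A human's errors wander; a snapping aimbot's barely do.
human = [0.9, 1.4, 0.2, 2.1, 0.7, 1.1, 0.4, 1.8] * 4
bot = [0.0, 0.01, 0.0, 0.0, 0.02, 0.0, 0.01, 0.0] * 4
print(suspicious_aim(human))  # False
print(suspicious_aim(bot))    # True
```

A real system would combine many such signals rather than ban on one statistic, but the point stands: the server already has everything it needs to compute this.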
We went through this in RuneScape with auto miners. You just randomise locations and times slightly and it’s almost impossible to tell the difference.
It’s so easy to get around.
Depends on whether people working on cheats can see the anti-cheat detection code. It’s hard to ensure that one data set is statistically-identical to another data set.
I remember at one point, reading about use of Benford’s law, that the IRS looked at leading digits on tax forms. On legit tax data, “1” is a more-common leading digit.
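As a toy illustration of that Benford check (my own sketch, not the IRS's actual method; the function name and thresholds are assumptions):

```python
import math
from collections import Counter

def benford_deviation(values):
    """Mean absolute deviation between observed leading-digit frequencies
    and Benford's expectation P(d) = log10(1 + 1/d). Fabricated figures
    tend to have leading digits much flatter than Benford predicts.
    Assumes positive numbers in plain decimal notation."""
    digits = [int(str(v).lstrip("0.")[0]) for v in values if v > 0]
    n = len(digits)
    counts = Counter(digits)
    return sum(
        abs(counts.get(d, 0) / n - math.log10(1 + 1 / d))
        for d in range(1, 10)
    )

# Powers of 2 famously follow Benford's law; flat data does not.
natural = [2 ** i for i in range(1, 300)]
fabricated = list(range(100, 1000))  # leading digits uniform, like made-up numbers
print(benford_deviation(natural))     # small
print(benford_deviation(fabricated))  # large (~0.54)
```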
Recently, Russia had an election with vote fraud, and some statisticians highlighted it in a really clear way: the per-district results showed visible lines at 5% increments, because voting districts had been required to deliver a certain level of votes for a given party and had stuffed ballot boxes to exactly that level.
If I can see the cheat-detection code, then, yeah, it’s not going to be hard to come up with some mechanism that defeats it. But if I can’t – and especially if that cheat-detection code delays or randomly doesn’t fire – it may be very hard for me to come up with data that passes its tests.
Bots are way more elaborate than that; even 20 years ago there were randomization patterns.
Unless the aimbot is using its own AI learning system, it’ll not behave as a human would. For example, it might fire at a random point in a circle, where a human might have better aim along the horizontal axis or something.
Bots can be updated too; it's the same game with hacks and exploits, it just depends on the resources available on each side.
Randomization patterns don’t mean much if the AI can detect a meaningful difference between a player’s normal reaction times and patterns and their reaction when they score points.
The point of the bot is to improve player performance. That performance change is detectable by an AI with the right metrics to watch.
If it wasn’t detectable, there wouldn’t be a reason to use the cheat in the first place.
This strategy won’t catch full botting, where there is no human input, but that’s why they layer the security.
It’s not as easy as you make it to be.
It’s also not as hard as the people skimping on security make it out to be
How would a server-only method detect esp or wallhacks, which are generally speaking client-only exploits?
People with wallhacks will deliberately move their crosshairs over people that they see through walls. Or, if they know the server is watching for that, they’ll make a subconscious effort to never have their crosshairs over someone through walls.
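A rough sketch of how a server could quantify that "crosshair tracking through walls" behavior from replay data (the function name, tick format, and threshold are all assumptions for illustration):

```python
def aim_through_wall_rate(ticks, angle_thresh_deg=3.0):
    """Fraction of occluded ticks where the crosshair points within
    angle_thresh_deg of an enemy the player has no line of sight to.

    Each tick is (aim_dir_deg, enemy_dir_deg, enemy_visible), all computed
    from server-side state. Legit players should track hidden enemies
    about as often as chance; wallhackers track them far more often
    (or, if they know they're watched, suspiciously never).
    """
    hits = sum(
        1 for aim, enemy, visible in ticks
        if not visible
        and abs((aim - enemy + 180) % 360 - 180) <= angle_thresh_deg
    )
    occluded = sum(1 for _, _, visible in ticks if not visible)
    return hits / occluded if occluded else 0.0

# One occluded tick on target, one far off, one tick where the enemy
# was visible (ignored): rate = 0.5.
ticks = [(10.0, 10.5, False), (90.0, 10.0, False), (10.0, 10.0, True)]
print(aim_through_wall_rate(ticks))  # 0.5
```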
Wall hacks are enabled by poor server code.
The server shouldn’t send info the player shouldn’t have in the first place.
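A minimal sketch of that idea, server-side visibility culling on a grid map. A real engine would use its own occlusion or PVS queries rather than this naive ray march; this just shows the principle that a wallhack reveals nothing if the data never arrives:

```python
def has_line_of_sight(grid, a, b):
    """True if no wall cell lies on the straight line between a and b.
    grid: 2D list where 1 = wall; a, b: (x, y) integer cells.
    Naive sampled ray march, fine for a sketch."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(steps + 1):
        x = round(x0 + (x1 - x0) * i / steps)
        y = round(y0 + (y1 - y0) * i / steps)
        if grid[y][x] == 1:
            return False
    return True

def visible_enemies(grid, player_pos, enemies):
    """Only replicate enemies the player could actually see; the client
    then has nothing extra for a wallhack to reveal."""
    return [e for e in enemies if has_line_of_sight(grid, player_pos, e)]

grid = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],  # a wall segment in the middle
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
# Enemy at (4, 1) is behind the wall; enemy at (0, 3) is in the open.
print(visible_enemies(grid, (0, 1), [(4, 1), (0, 3)]))  # [(0, 3)]
```

The trade-off (and the reason shooters rarely do this strictly) is that hard culling interacts badly with latency compensation, footstep audio, and peeker's advantage, which is part of what the replies below are getting at.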
We can only hope to play a good game with such perfect design one day.
I don’t think anyone is discounting the limitations game developers are under, but that changes nothing about the lazy, anti-consumer decision to resort to malware to enforce behavioral compliance instead of designing the server-side code to deal with it.
It can be and has been done in other industries where security is a priority. This is a result of game company owners skimping on security.
What makes you think that anything client-side will be allowed to work as it should?
I don’t. Anything on the client can be tampered with. It’s the server’s job to make sure anything they receive is both valid and consistent with how a human would act.