When playing astro‑maze, the delay is noticeable, and in a 2D action game such delays are especially apparent. Games that don’t rely on tight real‑time input might perform better. (I'm connecting from Europe, though.)
If you add support for drawing from images (such as spritesheets or tilesheets) in the future, and the client stores those images and sounds locally, the entire screen could be drawn from these assets. No pixel data would need to be transferred, only commands like "draw tile 56 at position (x, y)."
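To give a feel for the savings, here is a sketch of what such a command could look like on the wire. The opcode and field layout are invented for illustration, not anything from the actual project:

```typescript
// Hypothetical wire format for a "draw tile" command:
// 1-byte opcode, 2-byte tile id, 2-byte x, 2-byte y = 7 bytes per tile.
const OP_DRAW_TILE = 0x01;

function encodeDrawTile(tileId: number, x: number, y: number): Uint8Array {
  const buf = new Uint8Array(7);
  const view = new DataView(buf.buffer);
  view.setUint8(0, OP_DRAW_TILE);
  view.setUint16(1, tileId);
  view.setUint16(3, x);
  view.setUint16(5, y);
  return buf;
}

function decodeDrawTile(buf: Uint8Array): { tileId: number; x: number; y: number } {
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  return {
    tileId: view.getUint16(1),
    x: view.getUint16(3),
    y: view.getUint16(5),
  };
}
```

Seven bytes per tile, versus kilobytes of pixels for the same screen area, is the whole appeal of shipping assets once and streaming only commands.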
(By the way, opening abstra.io in a German-language browser leads to https://www.abstra.io/deundefined which shows a 404 error.)
I think this is inevitable unless I add some optimistic prediction/interpolation in the client.
Also, thanks for the feedback! I will fix the Abstra landing page
try https://www.abstra.io/en instead
My approach lives somewhere between video streaming and data streaming in terms of performance.
It's not intended to be faster than a proper client, which brings a lot of logic and local state that reduce the amount of information that needs to be transferred.
My proof of concept is more about: can my dev experience be much better without relying on the video streaming approach (which is heavier)?
This method of multiplayer you propose is inferior in basically every way: you can't do client-side prediction to make inputs feel smoother, and non-trivial scenes will surely take up more bandwidth than just transmitting entity deltas.
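For reference, "entity deltas" here means diffing snapshots and sending only what changed. A minimal sketch with invented entity shapes (not from the thread):

```typescript
// Toy entity-delta computation: compare two world snapshots and keep only
// the entities whose fields changed, so unchanged entities cost no bandwidth.
type Entity = { id: number; x: number; y: number };
type Snapshot = Map<number, Entity>;

function diff(prev: Snapshot, next: Snapshot): Entity[] {
  const changed: Entity[] = [];
  for (const e of next.values()) {
    const old = prev.get(e.id);
    if (!old || old.x !== e.x || old.y !== e.y) changed.push(e);
  }
  return changed;
}
```

In a mostly-static scene the delta is near-empty, which is why it usually beats re-sending draw commands (or pixels) for the whole screen.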
Let me tell you that there is cheating in cloud rendering solutions (Stadia, AWS Luna, etc.).
So 100% there is cheating in your solution.
It's trivial to read the screen.
Especially with today's computer vision
The kind of cheating I'm better protected against (just like Stadia, etc.) is client/code exploitation, which we don't have to worry about in this approach.
Sprite sheets are PNGs with zTXt chunks containing meta/frame info and a list of drawing operations to be performed to construct vsprites, based on whatever runtime server-side operations were done on the sprites.
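Extracting such a zTXt block only takes generic PNG chunk walking. A sketch (`readZtxt` is a hypothetical helper; the actual frame-info format SS14 stores inside the chunk isn't shown here):

```typescript
import { inflateSync } from "node:zlib";

// Walk a PNG's chunks ([4-byte length][4-byte type][data][4-byte CRC],
// after the 8-byte signature) and return the first zTXt chunk's contents.
// zTXt data = latin1 keyword, NUL, compression method byte (0 = deflate),
// then zlib-compressed text. CRC validation is skipped for brevity.
function readZtxt(png: Uint8Array): { keyword: string; text: string } | null {
  const view = new DataView(png.buffer, png.byteOffset, png.byteLength);
  let off = 8; // skip PNG signature
  while (off + 8 <= png.length) {
    const len = view.getUint32(off);
    const type = String.fromCharCode(...png.subarray(off + 4, off + 8));
    if (type === "zTXt") {
      const data = png.subarray(off + 8, off + 8 + len);
      const nul = data.indexOf(0);
      const keyword = String.fromCharCode(...data.subarray(0, nul));
      const text = inflateSync(data.subarray(nul + 2)).toString("latin1");
      return { keyword, text };
    }
    off += 12 + len; // length + type + data + CRC
  }
  return null;
}
```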
There is limited client-side coding via popup web view windows and a few JS APIs back to the client, but nothing you can build too much on top of.
(SS14 brings this model to an open-source C# framework called The Robust Engine, but it has some limitations related to maintainer power-tripping over who should be allowed to use their open source project.)
It was playable.
I wonder if you can use speculative execution to play the game a few frames ahead and then the client picks what to display based on user input, or something like that.
Each frame is 16 ms, so you'd have to work ahead about 6 frames to cover the nominal latency of around 100 ms, which may actually be 200 ms round trip.
(In that case, something like Haskell would be a good candidate to build a DSL to build the decision tree to send to the JS client…)
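The catch with this idea is the branching factor: the server has to simulate every possible input sequence for the whole lookahead window. A toy enumeration (hypothetical input set) shows how fast it grows:

```typescript
type Input = "none" | "left" | "right" | "fire";

// Enumerate every input sequence of length `depth` — each one is a branch
// the server would have to simulate speculatively so the client can pick
// the path matching the inputs that actually happened.
function inputSequences(inputs: Input[], depth: number): Input[][] {
  if (depth === 0) return [[]];
  const shorter = inputSequences(inputs, depth - 1);
  return inputs.flatMap((i) => shorter.map((seq) => [i, ...seq]));
}
```

With 4 inputs per frame and 6 frames of lookahead that is 4^6 = 4096 branches per window, which is why the decision tree (and the bandwidth to ship it) is the hard part of the idea.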
With rollback/lockstep, there's no need for a server simulation at all. Most games are not doing that, though: they run an authoritative server simulation, and the clients' local simulations are subordinate to it, and can even be missing information (good for preventing wallhacks). Dropped packets are handled by the server telling the client the exact positions of everything, which leads to warping. Dropped packets and latency then only affect the problem player, rather than pausing everyone's simulation.
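The "warping" correction described here can be sketched in a few lines (names and threshold are illustrative, not from any specific engine):

```typescript
// Client-side reconciliation sketch: keep the smooth locally-predicted
// position while it agrees with the authoritative server snapshot, and
// snap ("warp") to the server's position once the error gets too large.
type Vec2 = { x: number; y: number };

function reconcile(predicted: Vec2, server: Vec2, maxError = 2): Vec2 {
  const dx = server.x - predicted.x;
  const dy = server.y - predicted.y;
  if (Math.hypot(dx, dy) > maxError) return { ...server }; // warp
  return predicted; // close enough: trust the prediction for smoothness
}
```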
If you were able to make it, it would be kind of a Hail Mary moment for making easy server games without the latency.
This doesn't work in 3D. Unless you have the server do the work of the GPU and compute occlusion, you'll end up sending data to the client that it shouldn't have (e.g. the locations of players and other objects behind walls).
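In 2D, at least, the server-side visibility check is cheap. A toy line-of-sight test on a tile grid (illustrative only; real 3D engines use PVS data or GPU occlusion queries instead):

```typescript
// Server-side culling sketch: step along the line between player A and
// player B and stop at the first solid tile. Only send A's position to B
// when this returns true.
function hasLineOfSight(
  walls: boolean[][], // walls[y][x] === true means a solid tile
  ax: number, ay: number,
  bx: number, by: number,
): boolean {
  const steps = Math.max(Math.abs(bx - ax), Math.abs(by - ay));
  for (let i = 1; i < steps; i++) {
    const x = Math.round(ax + ((bx - ax) * i) / steps);
    const y = Math.round(ay + ((by - ay) * i) / steps);
    if (walls[y][x]) return false;
  }
  return true;
}
```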
Don’t some competitive games more or less already do this? Not sending player A’s position to player B if the server determines player A is occluded from player B?
I seem to recall a blog post about Apex Legends dealing with the issue of "peeker's advantage" due to this type of culling, unless I'm just totally misremembering the subject.
Regardless, seems like it would work just fine even in 3D for the types of games where everyone has the same view more or less.
If someone makes an AI that plays the game as a good player, then it’s effectively indistinguishable from a real player who is good. If they make it super-humanly good, then it would probably be detectable anyway.
It’s still fair in the sense that all players have the same (intended) information per the game rules.
Someone has even rigged up a system like that to a TENS system to stimulate the nerves in their arm and hand to move the mouse in the correct direction and fire when the crosshair is over the enemy.
We are definitely already there.
- take a screenshot
- run massive skeletal detection on it to get the skeletons of any humanoids
- of those skeletons, pick a target closest to the player
- for that target, get coordinates of head node
- run a PID to control the cursor to where the head node is located
- move the camera one frame and repeat the whole process. If you can fit that pipeline into 16 ms, it can run in real time.
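The PID step in that pipeline is ordinary control theory, nothing exotic. A minimal one-axis controller (gains are illustrative):

```typescript
// Plain PID controller: steers a value toward a target by combining the
// proportional, integral, and derivative terms of the error.
class Pid {
  private integral = 0;
  private prevError = 0;
  constructor(
    private kp: number,
    private ki: number,
    private kd: number,
  ) {}

  step(target: number, current: number, dt: number): number {
    const error = target - current;
    this.integral += error * dt;
    const derivative = (error - this.prevError) / dt;
    this.prevError = error;
    return this.kp * error + this.ki * this.integral + this.kd * derivative;
  }
}
```

Run one of these per axis and the cursor converges on the target without the instant, obviously-inhuman snapping that naive aimbots exhibit, which is exactly what makes screen-reading cheats hard to detect.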
I'm using gzip since it comes with all browsers, hence an easy approach.
That said, I will find some zstd decompressor for JS/wasm and try!
edit:
I just added it and the difference was huge! Thank you!
This is cool ... but I suspect just pushing video frames like Stadia etc. did is just as efficient these days, a lot less complicated to implement, and needs no special client. Decent compression, hardware decode on almost every machine, hardware encode possible on the server side, and excellent browser support.
That said.. I don't think Stadia could do that, since it's not opinionated about the game engine. Unless they went really deep on the graphics card instructions instead, but then it becomes comparable to pixel rendering, I guess.
I'm always biased since I test locally with no delay when developing :)
I'm thinking about simple ML to predict inputs and feedback. Since the amount of data generated in the streaming is massive and well structured, it looks like a feasible approach.
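As a toy stand-in for that idea (pure frequency counting, not real ML): predict the next input as whichever input most often followed the current one in the recorded stream. All names here are invented for illustration:

```typescript
// Build a first-order predictor from an input history: count which input
// follows which, then predict the most frequent successor.
function buildPredictor(history: string[]): (current: string) => string | null {
  const counts = new Map<string, Map<string, number>>();
  for (let i = 0; i + 1 < history.length; i++) {
    const next = counts.get(history[i]) ?? new Map<string, number>();
    next.set(history[i + 1], (next.get(history[i + 1]) ?? 0) + 1);
    counts.set(history[i], next);
  }
  return (current) => {
    const next = counts.get(current);
    if (!next) return null;
    return [...next.entries()].sort((a, b) => b[1] - a[1])[0][0];
  };
}
```

A real version would use more context than the single previous input, but even this shape hints at how the structured stream could feed a predictor.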
But sometimes I've been left without a recharge, and without shooting, and I don't know why.
It seems that I should add better visual feedback haha