I have a top-of-the-line 4K TV and gigabit internet, yet the compression artifacts make everything look like putty.
Honestly, the best picture quality I’ve ever seen was over 20 years ago using simple digital rabbit ears.
You especially notice the compression on gradients and in dark movie scenes.
And yes — my TV is fully calibrated, and I’m paying for the highest-bandwidth streaming tier.
Not my tv, but a visual example: https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....
That’s why, presumably, Netflix came up with the algorithm for removing camera grain and adding synthetically generated noise on the client[0], and why YouTube shorts were recently in the news for using extreme denoising[1]. Noise is random and therefore difficult to compress while preserving its pleasing appearance, so they really like the idea of serving everything denoised as much as possible. (The catch, of course, is that removing noise from live camera footage generally implies compromising the very fine details captured by the camera as a side effect.)
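Roughly, the client-side half of that pipeline looks like this; a toy sketch, not Netflix's actual algorithm (AV1's film grain synthesis signals an autoregressive grain model in the bitstream, whereas this just overlays seeded Gaussian noise with a made-up `strength` parameter):

```python
import numpy as np

def add_synthetic_grain(frame: np.ndarray, strength: float = 4.0,
                        seed: int = 0) -> np.ndarray:
    """Overlay pseudo-random grain on a decoded 8-bit frame.

    `strength` (the noise sigma) is a hypothetical per-title parameter;
    seeding makes the grain reproducible across players.
    """
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, strength, size=frame.shape)
    return np.clip(frame.astype(np.float64) + grain, 0, 255).astype(np.uint8)
```

Because the noise is generated after decoding, it costs zero bits to transmit, which is the whole point.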
1. Camera manufacturers and film crews both do their best to produce a noise-free image.
2. In post-production, they add fake noise to the image so it looks more "cinematic".
3. To compress better, streaming services try to remove the noise.
4. To hide the insane compression and make it look even slightly natural, the decoder/player adds the noise back.
Anyone else finding this a bit...insane?
This is not correct: camera manufacturers and filmmakers engineer _aesthetically pleasing_ noise (randomized grain appears smoother to the human eye than clean uniform pixels). The rest is still as silly as it sounds.
But yes, there are definitely also many DPs that like their grain baked-in and camera companies that design cameras for that kind of use.
It is only an issue in content delivery.
Does this explain why I dislike 4K content on a 4K TV? Some series and movies look too realistic, which in turn gives me an amateur-film feeling (as if somebody made the movie with a smartphone).
https://en.wikipedia.org/wiki/Soap_opera_effect
Which is generally associated with excess denoising rather than with excess grain.
This comment that I replied to is almost a textbook description of the soap opera effect.
The interpolation adds more FPS, which is traditionally a marker of film vs TV production.
This is patently wrong. The rest builds up on this false premise.
1 & 2: Google "how to shoot a night scene" or something like that. You'll find most advice goes something along the lines of "don't crank the ISO up; add artificial lighting to brighten the shadows instead". When given a choice, you'll also find cinematographers use cameras with particularly good low-light performance for dark scenes (that's why the dark scenes in Planet Earth were shot on the Sony A7: despite the "unprofessional" form factor, it simply had the best high-ISO performance at the time).
2: Google "film grain effect". You'll find a bunch of colorists explaining why film grain is different from ISO noise and why and how you should add it artificially to your films.
Next, no shit aesthetics are subjective; I never said this is the one objective truth. I said this is a thing that many people believe, as evidenced by the plethora of resources discussing the difference between noise and grain, why tasteful grain is better than a completely clean image, and how to add it in post.
And finally, come on, it's obvious to everyone in this thread that I'm referring to digital, which is also not "just a subset"; it's by far the biggest subset.
So idk what your point is. Most things are shot digitally. Most camera companies try to reduce sensor noise. Most camera departments try to stick to the optimal ISO for their sensor, both for dynamic range and for noise reasons, adjusting exposure with other means. In my experience, most people don't like the look of sensor/gain/iso/whatever noise. Many cinematographers and directors like the look of film grain and so they often ask for it to be added in post.
Besides the many/most/some qualifiers possibly not matching how you perceive this (which is normal; we're all different, watch different content, work in different circles...), where exactly am I wrong?
> it's obvious to everyone in this thread that I'm referring to digital
It was blindingly obvious that you meant digital. That's why I pointed this out. Without mentioning that it is only a concern with digital photography, your points become factually incorrect on more than one level; because the thread wasn't talking specifically about digital photography, some of your points about noise don't apply even if they were correct (which they aren't, by your own admission that photography is subjective). Producing a noise-free image is not the highest priority for film crews (for camera manufacturers it is, but that's because it means more flexibility in different conditions; it does not mean film crews will always prioritize whatever settings give them the lowest noise, as there are plenty of higher priorities), and in some cases they choose to produce an image with some noise despite the capability to avoid it.
Sorry, with your googling suggestions it just reads like a newbie’s take on subject matter.
TLDR - because although you made valid points on a specific area, you made no acknowledgement to my own favorite specific area, thusly I shall publicly declare your valid points shall not be taken seriously for others to read
h.264 only had FGS as an afterthought, introduced years after the spec was ratified. No wonder it wasn’t widely adopted.
VP9, h.265 and h.266 don’t have FGS.
1. Video codecs like the denoise, compress, synthetic grain approach because their purpose is to get the perceptually-closest video to the original with a given number of bits. I think we should be happy to spend the bits on more perceptually useful information. Certainly I am happy with this.
2. Streaming services want to send as few bytes as they can get away with. So improvements like #1 tend to be spent on decreasing bytes while holding perceived quality constant rather than increasing perceived quality while holding bitrate constant.
I think one should focus on #2 and not be distracted by #1 which I think is largely orthogonal.
The hard disk space to store an episode of a show is $0.01. With peering agreements, the bandwidth of sending the show to a user is free.
I'm not sure why you think this, but it's one of the oddest things I've seen today.
The more streams you can send from a single server the lower your costs are.
That's not a correctly calibrated TV. The contrast is tuned WAY up. People do that to see what's going on in the dark, but you aren't meant to really be able to see those colors. That's why it's a big dark blob. It's supposed to be barely visible on a well calibrated display.
A lot of video codecs will erase details in dark scenes because those details aren't supposed to be visible. Now, I will say that streaming services are tuning that too aggressively. But I'll also say that a lot of people have miscalibrated displays. People simply like to be able to make out every detail in the dark. Those two things come in conflict with one another causing the effect you see above.
Someone needs to tell filmmakers. They shoot dark scenes because they can - https://www.youtube.com/watch?v=Qehsk_-Bjq4 - and it ends up looking like shit after compression that assumes normal lighting levels.
I disagree completely. I watch a movie for the filmmaker's story; I don't watch movies to marvel at compression algorithms.
It would be ridiculous to watch movies shot with only bright scenes because streaming-service accountants won't stop abusing compression to save some pennies.
> …ends up looking like shit after compression that assumes normal lighting levels.
It's entirely normal to have dark scenes in movies. Streaming services are failing if they're using compression algorithms untuned for dark scenes, when soooo many movies and series are absolutely full of night shots.
It should be noted, as well, that this generally isn't a "not enough bits" problem. There are literally codec settings to tune which decide when to start smearing the darkness. On a few codecs (such as VP1) those values are pretty badly set by default. I suspect streaming services aren't far off from those defaults. The codec settings are instead prioritizing putting bits into the lit parts of a scene rather than sparing a few for the darkness like you might like.
The issue is just that we don't code video with nearly enough bits. It's actually less than 8 bits of precision, since luma only uses the 16-235 range.
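For reference, here is the standard 8-bit limited-to-full-range luma mapping and what it costs in effective bit depth (a quick sketch):

```python
def limited_to_full(y: int) -> int:
    """Expand 8-bit limited-range luma (16-235) to full range (0-255)."""
    return round((y - 16) * 255 / (235 - 16))

# 235 - 16 + 1 = 220 usable luma codes instead of 256,
# i.e. about log2(220) ~ 7.78 effective bits rather than 8.
```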
Before COVID, Netflix was using at least 8Mbps for 1080p content. With x264 / Beamr that is pretty good, and even better with HEVC. Then COVID hit, and every streaming service, not just Netflix, had an excuse to lower quality, citing increased demand and limited bandwidth. Everything went downhill from there. Customers got used to the lower quality, and I don't believe they'll ever bring it back up. Now it is only something like 3-5Mbps, according to a previous test posted on HN.
And while it is easy for HEVC / AV1 / AV2 to achieve 50%+ real-world bitrate savings over H.264 in the 0.5-4Mbps range, once you go past that the savings shrink rapidly, to the point where the good old x264 encoder may perform better at much higher bitrates.
Kate - Netflix - 11.15 Mbps
Andor - Disney - 15.03 Mbps
Jack Ryan - Amazon - 15.02 Mbps
The Last of Us - Max - 19.96 Mbps
For All Mankind - Apple - 25.12 Mbps
https://hd-report.com/streaming-bitrates-of-popular-movies-s...
You will be made to feel the springs on the cheapest plan/mattress, and it's on purpose so you'll pay them more for something that costs them almost nothing.
Adaptive streaming isn't really adaptive anymore. If you have any kind of modern broadband, the most adaptive it will be is starting off in one of the lower bitrates for the first 6 seconds before jumping to the top, where it will stay for the duration of the stream. A lot of clients don't even bother with that anymore; they look at the manifest, find the highest stream, and just start there.
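In pseudocode, that "skip the ramp-up" client behaviour is about this simple (a sketch; the manifest fields here are made up, not any particular player's API):

```python
def pick_rendition(manifest: list[dict]) -> dict:
    """Grab the highest-bandwidth rendition straight from the manifest,
    skipping any adaptive ramp-up."""
    return max(manifest, key=lambda r: r["bandwidth"])

renditions = [
    {"bandwidth": 1_500_000, "resolution": "720p"},
    {"bandwidth": 8_000_000, "resolution": "1080p"},
    {"bandwidth": 16_000_000, "resolution": "2160p"},
]
assert pick_rendition(renditions)["resolution"] == "2160p"
```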
That can happen at even the highest bitrates if "HDR" is not enabled in the video codec.
Related video: https://www.youtube.com/watch?v=h9j89L8eQQk
Also the whole "you can hear more with lossless audio" is just straight up a lie.
The “best” streaming quality you can get is Sony Pictures Core https://en.wikipedia.org/wiki/Sony_Pictures_Core but it has a rather limited library.
Pricing, if I am reading the site correctly: $7k-ish for a server (+$ for local disks, one assumes), $2-5k per client. So you download the movie locally to your server and play it on clients scattered throughout your mansion/property.
Not out of the world for people who drop 10s of thousands on home theater.
I wonder if that's what the Elysium types use in their NZ bunkers.
No true self-respecting, self-described techie (Scotsman) would use it instead of building their own of course.
Right now, Netflix can say stuff like "we think the 4K video we're serving is just as good." If they offer a real-4K tier, it's hard to make that argument.
The biggest jump in quality was when everything was still analog over the air, but getting ready for the digital transition.
Then digital over the air bumped it up a notch.
You could really see this happen on a big CRT monitor with the "All-in-Wonder" television receiver PCI graphics adapter card.
You plugged in your outdoor antenna or indoor rabbit ears to the back of the PC, then tuned in the channels using software.
These were made by ATI before being acquired by AMD, the TV tuner was in a faraday cage right on the same PCB as the early GPU.
The raw analog signal was upscaled to your adapter's resolution setting before going to the CRT, so you had pseudo-higher resolution than a good TV like a Trinitron. You really could see more detail, and the CRT was smooth as butter.
As the TV broadcasters' entire equipment chain was replaced (camera lenses, digital sensors, signal processing), they eventually had everything in place and working as designed. You could notice these incremental upgrades until a complete digital chain was established. It was really jaw-dropping. This was well in advance of the deadline for digital deployment, so the over-the-air signal was still coming in analog the same old way.
Eventually the broadcast signal switched to digital and the analog lights went out, plus the All-in-Wonder was not ideal with the kind of cheap converter that analog TVs could get by with.
But it was still better than most digital TVs for a few years, then it took years more before you could see the ball in live sports as well as on a CRT anyway.
Now that's about all you've got for full digital resolution: live broadcasts from your local stations, especially live sports from a strong, interference-free station over an antenna. You can switch between the antenna and cable and tell the difference when they're both not overly compressed.
The only thing was, digital engineers "forgot" that TV was based on radio (who knew?). So for the vast majority of "listeners" in fringe reception areas, who could get clear audio but usually not a clear picture (if any), too bad for you. You're gonna need a bigger antenna, good enough to have gotten you a clear picture back in the analog days. Otherwise your "clean" digital audio may silently appear on the screen as video, "hidden" within sparse blocks of scattered random digital noise, when anything appears at all.
At the higher prices, I'd have to agree with you. If you pay for the best you should get the best.
That I find super hard to believe!
It could be a single channel, but usually you have many in the multiplex. I don't know how it works in the US, but for DVB-T(2) that's how it is.
Circa 2019, after the FCC "repack" / "incentive auction" (to free-up TV channels for cellular LTE use) it became very common for each RF channel to carry 4+ channels. But to be fair, many broadcasters did purchase new, improved MPEG-2 encoders at that time, which do perform better with a lower bit-rate, so quality didn't degrade by a lot.
Is this just people being clever or is it also more processing power being thrown at the problem when decoding / encoding?
For example, changes from one frame to the next are encoded in rectangular areas called "superblocks" (similar to a https://en.wikipedia.org/wiki/Macroblock). You can "move" the blocks (warp them), define their change in terms of other parts of the same frame (intra-frame prediction) or by referencing previous frames (inter-frame prediction), and so on... but you have to do it within a block, as that's the basic element of the encoding.
The more tightly you can fit blocks around the areas that are actually changing from frame to frame, the better. It also takes data to describe where these blocks are, so there are strict limitations on how blocks may be defined, to minimise how many bits are needed to describe them.
AV2 now lets you define blocks differently, which makes it easier to fit them around the areas of the frame that are changing. It has also doubled the size of the largest block, so if you have some really big movement on screen, it takes fewer blocks to encode that.
That's just one change, the headline improvement comes from all the different changes, but this is an important one.
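For intuition, here is a toy sketch of the block-based inter-prediction step described above (integer-pel motion only, and it assumes the block stays in bounds; real codecs add sub-pixel interpolation, warps, and transform-coded residuals):

```python
import numpy as np

def predict_block(ref: np.ndarray, x: int, y: int, size: int,
                  mv: tuple[int, int]) -> np.ndarray:
    """Inter prediction: fetch a size-by-size block from the reference
    frame, displaced by the integer motion vector mv = (dx, dy)."""
    dx, dy = mv
    return ref[y + dy : y + dy + size, x + dx : x + dx + size]

def decode_block(ref, x, y, size, mv, residual):
    """Decoder side: prediction plus transmitted residual gives the
    reconstructed pixels."""
    return predict_block(ref, x, y, size, mv) + residual
```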
There is new cleverness in the encoders, but they need to be given the tools to express that cleverness -- new agreement about what types of transforms, predictions, etc. are allowed and can be encoded in the bitstream.
Is there a reason codecs don't use the previous frame(s) as stored textures and remap them onto the screen? I can move a camera through a room and a lot of the texture is just projectively transformed.
That's what AV1 calls global motion and warped motion. Motion deltas (translation/rotation/scaling) can be applied to the whole frame, and blocks can be sheared vertically/horizontally as well as moved.
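A minimal sketch of that idea, assuming a plain affine model with nearest-neighbour sampling (AV1's actual motion models, sub-pixel precision, and signalling differ):

```python
import numpy as np

def global_motion_predict(prev: np.ndarray, a, b, tx, c, d, ty) -> np.ndarray:
    """Predict the current frame by sampling the previous one at
    affine-mapped coordinates: (src_x, src_y) = (a*x + b*y + tx,
    c*x + d*y + ty), clamped to the frame borders."""
    h, w = prev.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(a * xs + b * ys + tx).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(c * xs + d * ys + ty).astype(int), 0, h - 1)
    return prev[src_y, src_x]

# e.g. a global 2-pixel pan to the right: every output pixel samples
# the previous frame 2 pixels to its left.
# panned = global_motion_predict(prev_frame, 1, 0, -2, 0, 1, 0)
```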
Consider a scene with a couple of cars moving across a background: one can imagine a number of vertices around the contour of each car, and when reusing the previous car it makes no sense to force everything into block shapes. The smaller the seams between shapes (reusing previous frames as textures), the fewer pixels need to be reconstituted de novo. And the more accurate the remapping (x_prev, y_prev) -> (x, y), the lower the error signal that needs to be reconstructed.
Also, the majority of new contour vertex locations can be reused as the old contour locations when decoding the next frame. Then only changes in contour vertices over time need to be encoded, like when a new shape enters the scene or a previously static object starts moving. So there is a lot of room for compression.
I mean, that's more or less how it works already. But you still need a unit of granularity for the remapping. So the frame will store eg this block moves by this shift, this block by that shift etc.
This is exactly what I question. Why should there be block-shaped units of granularity? Defining a UV-textured 3D mesh that moves and carries previously decoded pixel values should have far fewer seams; with a textured mesh instead of blocks, the only de novo pixel values would be at the seams between reusable parts of the mesh, for example when an object rotates and reveals a newly visible part of its surface.
It's true you could still accidentally violate a patent but that minefield is clearing out as those patents simply have to become more esoteric in nature.
But that's not my main point. My main point is that we are going down a fitting path with codecs which makes it hard to come up with general patents that someone might stumble over. That makes patents developed by the MPEG group far less likely to apply to AOM. A lot of those more generally applicable patents, like the DCT for example, have expired.
1) it harms interoperability
2) I thought math wasn’t patentable?
At the absolute compression limit, it's no longer video, but a machine description of the scene conceptually equivalent to a textual script.
All of this requires a significant amount of extra logic gates/silicon area for hardware decoders, but the bit rate reduction is worth it.
For CPU decoders, the additional computational load is not so bad.
The real additional cost is in encoding, because there are more prediction tools to choose from for optimal compression. That's why Google only does AV1 encoding for videos that are very popular: it doesn't make sense for videos that are seen by few.
Clever matters a lot more for encoding. If you can determine good ways to figure out the motion information without trying them all, that gets you faster encoding speed. Decoding doesn't tend to have as much room for cleverness; the stream says to calculate the output from specific data, so you need to do that.
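The "trying them all" baseline is classic brute-force rate-distortion optimisation. A sketch, where the mode interface (`mode(block) -> (prediction, bits)`) is invented for illustration:

```python
def choose_mode(block, candidate_modes, lam=0.1):
    """Evaluate every candidate prediction mode and keep the one that
    minimises distortion + lam * rate (the Lagrangian RD cost)."""
    best_mode, best_cost = None, float("inf")
    for mode in candidate_modes:
        pred, bits = mode(block)               # hypothetical interface
        dist = sum((a - b) ** 2 for a, b in zip(block, pred))  # SSD
        cost = dist + lam * bits
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```

Encoder "cleverness" is mostly about pruning that loop without losing much compression.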
Better codecs are an overall win for everyone involved.
I don’t remember ever watching a movie and wishing for a better codec, in the last 10 years
I do wish ATSC1 would adopt a newer codec (and maybe it will); most broadcasters cram too many subchannels into their 20Mbps, and a better codec would help for a while. ATSC3 has a better video codec and more efficient physical encoding, but it also has DRM and a new proprietary audio codec, so it's not helpful for me.
They also get increased power usage, lesser battery life, higher energy bills, and potentially earlier device failures.
> Better codecs are an overall win for everyone involved.
Right.
Mobile/power-constrained devices don't use software decoding; that's just a path to a miserable experience. Hardware decoding is basically required.
Meanwhile my desktop can SW decode 4k youtube with 3% reported cpu usage.
I like how you padded this list by repeating the same thing thrice. Like, increased power usage is obviously going to lead to higher energy bills.
And it’s especially weird because it’s not true? The current SOTA codec AV1 is at a sweet spot for both compression and energy demand (https://arxiv.org/html/2402.09001v1). Consumers are not worse off!
But, I mean, your expectation is not that unreasonable, computers were quite good by 2013. It is just an eye-opening framing.
And there's no transfer of effort to the user. The computational complexity of video codecs is asymmetric: decoding is several orders of magnitude cheaper than encoding. And in every case, the principal barrier to codec adoption has been hardware acceleration. Pretty much every device on earth has a hardware-accelerated h264 decoder.
I find the idea fun, kinda like using snapchat filters on characters, but in practice I'm sure it'll be used to cut corners and prevent the actual creative vision from being shown which saddens me.
It feels like we're losing something, a shared experience, in favor of an increasingly narcissistic attitude that everything needs to be shapeable to individual preferences instead of accepting things as they are.
I’d be somewhat interested in something like a git that generates movies, that my friends can push to.
Extremely widespread mass-media fiction broadcasts are sort of an aberration of the last 75 years or so. I mean, you had works in ancient times, like the Odyssey, that were shared across a culture. But those were still stories customized by each teller, and such stories were rare. Canon was mainly a concern of religions.
It’s just for fun, we give it far too much weight nowadays.
Maybe more data and numbers: encoding complexity increase, decoding complexity, hardware decoder roadmap, compliance and test kits, future profiles, involvement in and improvements to both the AVIF format and an AV2 image codec. Better than JPEG XL? Is the ~30% BD-rate gain measured against the current best AV1 encoder or against AV1 1.0 as the anchor? Live encoding improvements?
[1] https://aomedia.org/events/live-session-the-future-of-innova...
So it seems like they checked that all their ideas could be implemented efficiently in hardware as they went along, with advice from real hardware producers.
Hopefully AV2-capable hardware will appear much quicker than AV1-capable hardware did.
Wait, I just discovered GPUs, nevermind. [giggles]
Still, the ability to do specialized work should probably be offloaded to specialized but pluggable hardware. I wonder what the economics of this would be...
Providing a production grade verified RTL implementation would obviously be useful but also entire companies exist to do that and they charge a lot of money for it.
An h.265 or AV1 decoder requires millions of logic gates (and DRAM memory bandwidth). Only high-end FPGAs provide that.
The complexity of video decoders has been going up exponentially and AV2 is no exception. Throwing more tools (and thus resources) at it is the only way to increase compression ratio.
Take AV1. It has CTBs that are 128x128 pixels. For intra prediction, you need to keep track of 256 neighboring pixels above the current CTB and 128 to the left. And you need to do this for YUV. For 420, that means you need to keep track of (256+128 + 2x(128+64)) = 768 pixels. At 8 bits per component, that's 8x768=6144 flip-flops. That's just for neighboring pixel tracking, which is only a tiny fraction of what you need to do, a few % of the total resources.
These neighbor tracking flip-flops are followed by a gigantic multiplexer, which is incredibly inefficient on FPGAs and it devours LUTs and routing resources.
A Lattice ECP5-85 has 85K LUTs. The FFs alone consume 8% of the FPGA. The multiplier probably another conservative 20%. You haven't even started to calculate anything and your FPGA is already almost 30% full.
FWIW, for h264, the equivalent of that 128x128 pixel CTB is 16x16 pixel MB. Instead of 768 neighboring pixels, you only need 16+32+2*(8+16)=96 pixels. See the difference? AV2 retains the 128x128 CTB size of AV1 and if it adds something like MRL of h.266, the number of neighbors will more than double.
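Spelling out that arithmetic (same assumptions as above: 4:2:0, 8 bits per component, intra neighbours = two CTB widths above plus one CTB height to the left):

```python
def neighbor_pixels(ctb: int) -> int:
    """Neighbouring pixels to track for one CTB: luma above (2*ctb)
    plus luma left (ctb), plus two half-resolution chroma planes."""
    luma = 2 * ctb + ctb
    chroma = 2 * (2 * (ctb // 2) + ctb // 2)
    return luma + chroma

assert neighbor_pixels(128) == 768   # AV1 superblock -> 768 * 8 = 6144 FFs
assert neighbor_pixels(16) == 96     # h.264 macroblock
```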
H264 is child's play compared to later codecs. It only has a handful of angular prediction modes, it has barely any pre-angular filtering, it has no chroma-from-luma prediction, and it only has a weak deblocking filter and no loop filtering. It only has one DCT mode. The coding tree is trivial too. Its entropy decoder and syntax processing are low in complexity compared to later codecs. It doesn't have intra-block copy. Etc. etc.
Working on a hardware video decoder is my day job. I know exactly what I'm talking about, and, with all due respect, you clearly do not.
Your argument about your large number of flops is odd. You would only store data that way if you needed everything on the same cycle. You say there's a multiplexer after that; data storage + multiplexer is just memory. You could use a BRAM or LUTRAM, which would cut that down dramatically, unless there's a need from later processing that you haven't defined. And even then, that's for AV1, which isn't AV2 and may change.
Let’s cut to the chase. AV2 will not be smaller than AV1 at all. The linked article doesn’t say that. The slides don’t say that either.
The only thing that could make somebody think that it’s smaller is the claim that all tools have been validated for hardware efficiency. The goal of this process is to make sure that none of the new tools make the HW unreasonably explode in size, not to make the codec smaller than before, because everyone knows that this is impossible if you want to increase compression ratio.
Let’s look at 2 of those new tools. MRLS: this adds multiple reference lines, just like I expected there would be. Boom! Much more complexity for neighbor handling. I also see more directions (more angles.) That also adds HW. The article mentions improved chroma from luma. Not unexpected because h266 already has that, and AV2 needs to compete against that. AV1 has a basic 2x2 block filter. I expect AV2 to have a more complex FIR filter, which makes things significantly harder for a HW implementation.
You are delusional if you think AV2 will be smaller than AV1.
The reason I brought up neighbor handling is because it’s so easy to estimate its resource requirements from first principles, not because it’s a huge part of a decoder. But if neighbors alone already make a smaller FPGA nearly impossible, it should be obvious that the whole decoder is ridiculous.
So… as for storing neighbors in RAM: if I brought this up at work, they'd probably send me home to take a mental health break or something.
Neighbor processing lives right inside the critical latency loop. Every clock cycle that you add in that loop impacts performance. You need to update these neighbors after predicting every coding unit. Oh, and the article mentions that the CTB size (“super block” in AV2 parlance) has been increased from 128x128 to 256x256. Good luck area reducing that. :-)
And what hobbyist is sending off decoding chips to be fabbed? If this exists, it sounds interesting if incredibly impractical.
While it worked, I don't think it ever left my machine. Never moved past software decoding -- I was a broke teen with no access to non-standard hardware. But the idea has stuck with me and feels more relevant than ever, with the proliferation of codecs we're seeing now.
It has the Sufficiently Smart Compiler problem baked in, but I tried to define things to be SIMD-native from the start (which could be split however it needed to be for the hardware) and I suspect it could work. Somehow.
They're called GPUs... They're ASICs rather than FPGAs, but it's easy to update the driver software to handle new video codecs. The difficulty is motivating GPU manufacturers to do so... They'd rather sell you a new one with newer codec support as a feature.
The main point of having ASICs for video codecs these days is efficiency, not being able to real-time decode a stream at all (as even many embedded CPUs can do that at this point).
But often a new codec requires decoders to know how to work with new things that the fixed function hardware likely can't do.
Encoding might actually be different. If your encoder hardware can only do fixed block sizes and can only detect some types of motion, a driver change might be able to package the result up as the new codec. Probably not a lot of benefit other than ticking a box, but it might be useful sometimes. Especially if you, say, offload motion detection but the new codec needs different arithmetic encoding: you'd need the CPU (or general-purpose GPU) to do the arithmetic encoding, and presumably you'd get a size saving over the old codec.
Isn't AVI a container format and not a codec?
I don't understand why 60fps never became ubiquitous; a panning scene at 30fps is horrible, almost stroboscopic to me.
AVIF is also a container format, and I believe it should be adaptable to AV2, even if the name stands for "AV1 Image Format". It could simply be renamed to "AOMedia Video Image Format" for correctness.
Maybe that’s what we did in the past and it was a bad idea. It’d be useful to know if you can read the file by looking only at its extension
> It’d be useful to know if you can read the file by looking only at its extension
That would be madness, and there's already a workaround - the filename itself.
For most people, all that matters is that an MKV file is a video file, and that your configured player for this format is VLC. Only in a small number of cases does an "inner" format or choice of parameter matter, e.g. for videos, which video or audio codec is in use, what the bitrate is, what the frame dimensions are.
Where it _matters_, people write the "inner" file formats in the filename, e.g. "Gone With The Wind (1939) 1080p BluRay x265 HEVC FLAC GOONiES.mkv", to let prospective downloaders choose what to download from many competing encodings of exactly the same media, on websites where the filename is the _only_ place to put that metadata. (On a website not standardised around making files available and searching only by filename, it could just go in the link description, and the filename wouldn't matter at all.)
Most people don't care, for example, that their Word document is A4 landscape, so much that they need to know _in the filename_.
That's pretty much always been the case. File extensions are just not expressive enough to capture all the nuances of audio and video codecs. MIME types are a bit better.
Audio is a bit of an exception with the popularity of MP3 (which is both a codec and a relatively minimal container format for it).
It doesn't look like AV2 does any of that yet though fortunately (except film grain synthesis but I think that's fine).
I imagine e.g. a picture of an 8x8 circle actually takes more bits to encode than a mathematical description of the same circle
I wonder if there are codecs with provisions for storing common shapes. Text comes to mind - I imagine having a bank of 10 most popular fonts an encoding just the difference between source and text + distortion could save quite a lot of data on text heavy material. Add circles, lines, basic face shapes.
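Back-of-the-envelope on the circle example, with made-up bit budgets:

```python
# Raster: an 8x8 one-bit-per-pixel bitmap of the circle.
raster_bits = 8 * 8          # 64 bits

# Parametric: center x, center y, radius, at 8 bits each.
parametric_bits = 3 * 8      # 24 bits

print(raster_bits, parametric_bits)  # 64 24
```

The gap only widens at higher resolutions, which is presumably why vector-style descriptions are tempting for text-heavy material.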
There also seems to be a fair bit of attention on that problem space from the real-time comms vendors with Cisco [1], Microsoft [2] and Google [3] already leaning on model based audio codecs. With the advantages that provides both around packet loss mitigation and shifting costs to end user (aka free) compute and away from central infra I can't see that not extending to the video channel too.
[0]: https://mtisoftware.com/understanding-ai-upscaling-how-dlss-...
[1]: https://www.webex.com/gp/webex-ai-codec.html
[2]: https://techcommunity.microsoft.com/blog/microsoftteamsblog/...
[3]: https://research.google/blog/lyra-a-new-very-low-bitrate-cod...
[1] https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...
Not quite yet, as H.267 shows. But at some point the computational requirements will no longer make sense relative to the bandwidth savings.
It works amazingly well with text compression, for example: https://bellard.org/nncp/
AI video could mean that essential elements are preserved (actors?) but other elements are generated locally. Hell, digital doubles for actors could also mean only their movements are transmitted. Essentially just sending the mo-cap data. The future is gonna be weird
> It would be interesting to see how far you could get using deepfakes as a method for video call compression.
> Train a model locally ahead of time and upload it to a server, then whenever you have a call scheduled the model is downloaded in advance by the other participants.
> Now, instead of having to send video data, you only have to send a representation of the facial movements so that the recipients can render it on their end. When the tech is a little further along, it should be possible to get good quality video using only a fraction of the bandwidth.
— https://news.ycombinator.com/item?id=22907718
Specifically for voice, this was mentioned:
> A Real-Time Wideband Neural Vocoder at 1.6 Kb/S Using LPCNet
You could probably also transmit a low res grayscale version of the video to “map” any local reproduction to. Kinda like how a low resolution image could be reasonably reproduced if an artist knew who the subject was.