Google’s always messed around with how uploads get encoded. YouTube has been the wrong place for “true to source” video hosting for a long, long time, as anyone who tries to do image/video quality comparisons knows well.
They’re clearly deep-frying the video here and should roll that shit back, but honestly I don’t get why there’s such a fuss. Since when have most people cared so much about encoding artifacts? I bet Google’s been experimenting with small GANs for a little deblocking or deblurring for years without anyone outside of pixel-peeper/video-encoding forums bringing it up.
I remember the early AV1 encodes being particularly notorious. They were very high bitrate (hence the meme of “uploading to YouTube for free AV1 encoding!”), but IIRC they also denoised the snot out of uploads and had weird artifacts you wouldn’t see with H.264/VP9…
This reminds me of an animation “remaster” I helped with a long time ago. As one part of many manual and “old-school video filter” steps, waifu2x (with some masking and temporal stabilization, roughly the idea sketched below) was used to double the resolution of DVDs, and the fandom freaking loved it. It looked good, all things considered.
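(For the curious, the “temporal stabilization” part was conceptually something like the sketch below. This is a toy reconstruction, not the actual pipeline: run_waifu2x is a hypothetical stand-in for whatever waifu2x binding you have, faked here with a Lanczos resize so the snippet actually runs, and the diff threshold is a made-up number.)

    import cv2
    import numpy as np

    def run_waifu2x(frame):
        # Stand-in for a real waifu2x call (CLI, VapourSynth plugin, etc.);
        # faked with a Lanczos 2x resize so this sketch is runnable.
        h, w = frame.shape[:2]
        return cv2.resize(frame, (w * 2, h * 2),
                          interpolation=cv2.INTER_LANCZOS4)

    def upscale_stabilized(frames, diff_thresh=4.0):
        # Upscale frame by frame, then blend near-static pixels with the
        # previous output frame so per-frame detail "shimmer" doesn't
        # turn into flicker on playback.
        prev = None
        for frame in frames:
            up = run_waifu2x(frame).astype(np.float32)
            if prev is not None:
                # Per-pixel change between this upscale and the last output.
                diff = np.abs(up - prev).mean(axis=2, keepdims=True)
                static = (diff < diff_thresh).astype(np.float32)
                # Average the stable regions; leave moving regions alone.
                up = static * (0.5 * up + 0.5 * prev) + (1.0 - static) * up
            prev = up
            yield up.astype(np.uint8)

The masking half (deciding which regions waifu2x was allowed to touch at all) is omitted here; the point is just that near-static pixels get averaged across frames so the upscaler’s per-frame hallucinated detail doesn’t flicker.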
Fast forward to this past year: that knowledge has apparently been forgotten.
I brought this up on the fandom subreddit, and they didn’t just disbelieve me. No, I got dogpiled, then banned under an anti-AI posting policy.
I get it, I guess, but still. When it comes to pre-LLM machine learning, I think people need to chiiill.
I’m not surprised they’re doing AI upscaling or something like that. As the article states, you can see hints of AI processing in the vids, and it hasn’t always been like that. I thought I was the only one seeing it. Glad to know I’m not crazy.