
Toward a Better Quality Metric for the Video Community

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/toward-a-better-quality-metric-for-the-video-community-7ed94e752a30

by Zhi Li, Kyle Swanson, Christos Bampis, Lukáš Krasula and Anne Aaron

Over the past few years, we have been striving to make VMAF a more usable tool not just for Netflix, but for the video community at large. This tech blog highlights our recent progress toward this goal.

VMAF is a video quality metric that Netflix jointly developed with a number of university collaborators and open-sourced on GitHub. VMAF was originally designed with Netflix’s streaming use case in mind, in particular, to capture the video quality of professionally generated movies and TV shows in the presence of encoding and scaling artifacts. Since open-sourcing it, we have seen VMAF applied in a wider scope within the open-source community. To give a few examples, VMAF has been applied to live sports, video chat, gaming, 360 videos, and user-generated content. VMAF has become a de facto standard for evaluating the performance of encoding systems and driving encoding optimizations.

VMAF stands for Video Multi-Method Assessment Fusion. It leans on Human Visual System modeling, that is, the simulation of low-level neural circuits, to gather evidence on how the human brain perceives quality. The gathered evidence is then fused into a final predicted score using machine learning, guided by subjective scores from training datasets. One aspect that differentiates VMAF from traditional metrics such as PSNR or SSIM is that VMAF predicts quality more consistently across spatial resolutions, across shots, and across genres (for example, animation vs. documentary). Traditional metrics such as PSNR already do a good job of evaluating quality for the same content at a single resolution, but they often fall short when predicting quality across shots and across resolutions. VMAF fills this gap. For more background information, interested readers may refer to our first and second tech blogs on VMAF.
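
Conceptually, the fusion step maps a handful of elementary feature measurements to a single predicted score. As a simplified sketch (the exact feature set and regressor are described in the tech blogs linked above):

    \mathrm{VMAF} = f_{\theta}(\mathrm{VIF},\ \mathrm{DLM},\ \mathrm{motion})

where f_theta is a machine-learned regressor (a support vector regressor in the open-source models), trained against subjective scores from the training datasets.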

Recently, we migrated VMAF’s license from Apache 2.0 to BSD+Patent to allow for increased compatibility with other existing open-source projects. In the rest of this blog, we highlight three other areas of recent development as part of our effort to make VMAF a better quality metric for the community.

*Runtime ratio of the floating-point, optimized vmafossexec executable to the fixed-point, optimized vmaf executable, measured in single-thread mode.

Speed Optimization

Improving the speed of VMAF has been a major theme over the past several years. Through low-level code optimization and vectorization, we previously sped up VMAF’s execution by more than 4x. We also introduced frame-level multithreading and frame skipping, which allow VMAF to run in real time for 4K videos.

Most recently, we teamed up with Facebook and Intel to make VMAF even faster. This work took place in two steps: first, we worked with Ittiam to convert the original floating-point representation to fixed point; second, Intel implemented vectorization on the fixed-point data pipeline.

This work allowed us to squeeze out another 2x speed gain on average while maintaining numerical accuracy to the first decimal digit of the final score. The figure above shows the relative speed improvement under Intel Advanced Vector Extensions 2 (Intel AVX2) and Intel AVX-512 intrinsics, for video at 4K, full-HD and SD resolutions. Note that this is an ongoing effort, so stay tuned for more speed improvements.
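
To give a flavor of what the conversion involves (a hand-written illustration, not the actual libvmaf kernels), a floating-point filter multiply can be rewritten in Q15 fixed-point arithmetic, which maps naturally onto integer SIMD instructions:

    #include <stdint.h>

    /* Illustrative Q15 fixed-point conversion, not libvmaf code: a
     * filter coefficient in [0, 1) is quantized to 15 fractional bits,
     * and the multiply runs in integer arithmetic, which vectorizes
     * well under AVX2/AVX-512. */
    #define Q15_ONE (1 << 15)

    static inline int16_t to_q15(float x)
    {
        int32_t q = (int32_t)(x * Q15_ONE + 0.5f);
        return (int16_t)(q > 32767 ? 32767 : q); /* clamp at Q15 max */
    }

    /* Floating-point reference: out[i] = coef * in[i] */
    static void scale_f32(float *out, const float *in, float coef, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = coef * in[i];
    }

    /* Fixed-point version: same operation with a Q15 coefficient and
     * rounding; the result matches to roughly 1/32768 per multiply,
     * enough to preserve the first decimal digit of the final score. */
    static void scale_q15(int16_t *out, const int16_t *in,
                          int16_t coef_q15, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = (int16_t)(((int32_t)in[i] * coef_q15
                                + (Q15_ONE >> 1)) >> 15);
    }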

New libvmaf API

The new BSD+Patent license allows for increased compatibility with existing open-source projects. This brings us to the second area of development: how VMAF can be integrated with them. For historical reasons, the libvmaf C library has been a minimal solution for integrating VMAF with FFmpeg. This year, we invested heavily in revamping the API. Today, we are announcing the release of libvmaf v2.0.0. It comes with a new API that is much easier to use, integrate and extend.

The table above summarizes the features achieved by the new API. A number of areas are worth highlighting:

  • It is extensible without breaking the API.
  • It is easy to add a new feature extractor, which supports future evolution of the VMAF algorithms.
  • Memory allocation is flexible, and VMAF can be calculated incrementally at the frame level.

The last feature makes it possible to integrate VMAF in an encoding loop, guiding encoding decisions iteratively on a frame-by-frame basis.
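
As a rough sketch of what this looks like with the new API (based on our reading of the v2.0.0 headers; see the libvmaf documentation for the authoritative signatures and complete error handling):

    #include <libvmaf/libvmaf.h>

    /* Minimal sketch of scoring a sequence with libvmaf v2.0.0;
     * picture filling and most error handling are omitted. */
    int score_sequence(unsigned w, unsigned h, unsigned num_frames)
    {
        VmafConfiguration cfg = {
            .log_level = VMAF_LOG_LEVEL_INFO,
            .n_threads = 4,   /* frame-level multithreading */
            .n_subsample = 1, /* > 1 enables frame skipping */
        };
        VmafContext *vmaf;
        if (vmaf_init(&vmaf, cfg)) return -1;

        VmafModel *model;
        VmafModelConfig model_cfg = { .name = "vmaf" };
        if (vmaf_model_load(&model, &model_cfg, "vmaf_v0.6.1")) return -1;
        if (vmaf_use_features_from_model(vmaf, model)) return -1;

        for (unsigned i = 0; i < num_frames; i++) {
            VmafPicture ref, dist;
            vmaf_picture_alloc(&ref, VMAF_PIX_FMT_YUV420P, 8, w, h);
            vmaf_picture_alloc(&dist, VMAF_PIX_FMT_YUV420P, 8, w, h);
            /* ... fill ref and dist with decoded pixel data here ... */
            vmaf_read_pictures(vmaf, &ref, &dist, i); /* takes ownership */
        }
        vmaf_read_pictures(vmaf, NULL, NULL, 0); /* flush */

        /* Scores are available per frame as well as pooled over a range;
         * in an encoding loop, frame scores can be fetched as frames
         * are processed rather than after the flush. */
        double frame0, pooled;
        vmaf_score_at_index(vmaf, model, &frame0, 0);
        vmaf_score_pooled(vmaf, model, VMAF_POOL_METHOD_MEAN,
                          &pooled, 0, num_frames - 1);

        vmaf_model_destroy(model);
        vmaf_close(vmaf);
        return 0;
    }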

“No Enhancement Gain” Mode

One unique feature of VMAF that differentiates it from traditional metrics such as PSNR and SSIM is that VMAF can capture the visual gain from image enhancement operations, which aim to improve the subjective quality perceived by viewers.

The examples above demonstrate an original frame (a), its enhanced versions by sharpening (b) and histogram equalization (c), and their corresponding VMAF scores. As one can notice, the visual improvements achieved by the enhancement operations are reflected in the VMAF scores. Most recently, a tune=vmaf mode was introduced in the libaom library as an option to perform quality-optimized AV1 encoding. This mode achieves BD-rate gains mostly by performing frame-based image sharpening prior to video compression (e). For comparison, AV1 encoding without image sharpening is shown in (d).

This is a good demonstration of how VMAF can drive perceptual optimization of video codecs. However, in codec evaluation, it is often desirable to measure the gain achievable from compression alone, without taking into account the gain from image enhancement during pre-processing. As the block diagram above demonstrates, it is difficult to strictly separate an encoder from its pre-processing step (especially for proprietary encoders), so it can become difficult to use VMAF to assess the pure compression gain. This dilemma echoes two kinds of feedback we have heard from the community: users like the fact that VMAF can capture enhancement gains, but at the same time they have expressed concerns that such enhancement could be overused (or abused).

We think there is value in disregarding enhancement gain that is not part of a codec. We also believe there is value in preserving enhancement gain in many cases, to reflect the fact that enhancement can improve the visual quality perceived by end viewers. Our solution to this dilemma is to introduce a new mode called VMAF NEG (“neg” stands for “no enhancement gain”). We propose the following:

  • Use the NEG mode for codec evaluation purposes to assess the pure effect coming from compression.
  • Use the “default” mode to assess compression and enhancement combined.

How does the VMAF NEG mode work? To make a long story short: we detect the magnitude of the VMAF gain coming from image enhancement and subtract this effect from the measurement. The grayscale map in (f) above demonstrates the magnitude of the image sharpening performed by tune=vmaf; this is the effect we subtract from the VMAF scores. The VMAF NEG scores are also shown in (a) ~ (e) above. As we can see, the score gains from enhancement are largely muted in the NEG mode. More details about the VMAF NEG mode can be found in this tech memo.
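
Our simplified reading of the tech memo (an illustration of the idea, not the exact per-feature formulation): the elementary features that can register enhancement, such as VIF and DLM, measure a local gain, and the NEG mode caps that gain so enhancement no longer raises the score while degradation still lowers it:

    #include <math.h>

    /* Illustration of the NEG idea: a local feature gain above 1.0
     * signals enhancement (e.g. sharpening) and is clamped away, while
     * gains below 1.0 (degradation) still reduce the score. See the
     * VMAF NEG tech memo for the actual formulation. */
    static double neg_gain(double local_gain)
    {
        return fmin(local_gain, 1.0);
    }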

What Comes Next

We are committed to improving the accuracy and performance of VMAF in the long run. Over the past several years, through field testing and user feedback, we have learned extensively about the existing algorithm’s strengths and weaknesses. We believe that there is still plenty of room for improvement.

The NEG mode is our first step toward more accurately quantifying the perceptual gain without image enhancement. It is known that, in its regular mode, VMAF tends to overpredict perceptual quality when image enhancement operations, such as oversharpening, actually lead to quality degradation. We plan to address this in future versions by imposing limits on the attainable enhancement gain.

We have identified a number of other areas for further improvement, for example, better predicting perceived quality in challenging cases such as banding and blockiness in dark areas. Other potential areas of improvement include better modeling of temporal masking effects in high-motion sequences and more accurately capturing the effects of encoding videos generated from noisy sources. We will continue to leverage Human Visual System modeling, subjective testing and machine learning as we work toward a better quality metric for the video community.



Optimized shot-based encodes for 4K: Now streaming!

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/optimized-shot-based-encodes-for-4k-now-streaming-47b516b10bbb

by Aditya Mavlankar, Liwei Guo, Anush Moorthy and Anne Aaron

Netflix has an ever-expanding collection of titles which customers can enjoy in 4K resolution with a suitable device and subscription plan. Netflix creates premium bitstreams for those titles in addition to the catalog-wide 8-bit stream profiles¹. Premium features comprise a title-dependent combination of 10-bit bit-depth, 4K resolution, high frame rate (HFR) and high dynamic range (HDR) and pave the way for an extraordinary viewing experience.

The premium bitstreams, launched several years ago, were rolled out with a fixed-bitrate ladder, with fixed 4K resolution bitrates — 8, 10, 12 and 16 Mbps — regardless of content characteristics. Since then, we’ve developed algorithms such as per-title encode optimization and per-shot dynamic optimization, but these innovations were not back-ported to the premium bitstreams. Moreover, the encoding group of pictures (GoP) duration (or keyframe period) was constant throughout the stream, causing additional inefficiency because shot boundaries did not align with GoP boundaries.

As the number of 4K titles in our catalog continues to grow and more devices support the premium features, we expect these video streams to have an increasing impact on our members and the network. We’ve worked hard over the last year to leapfrog to our most advanced encoding innovations — shot-optimized encoding and the 4K VMAF model — and applied those to the premium bitstreams. More specifically, we’ve improved the traditional 4K and 10-bit ladder by employing shot-based dynamic optimization driven by a VMAF model tailored for 4K resolution.

In this blog post, we present the benefits of applying the above-mentioned optimizations to standard dynamic range (SDR) 10-bit and 4K streams (some titles are also HFR). As for HDR, our team is currently developing an HDR extension to VMAF, Netflix’s video quality metric, which will then be used to optimize the HDR streams.

¹ The 8-bit stream profiles go up to 1080p resolution.

Bitrate versus quality comparison

For a sample of titles from the 4K collection, the following plots show the rate-quality comparison of the fixed-bitrate ladder and the optimized ladder. The plots have been arranged in decreasing order of the new highest bitrate — which is now content adaptive and commensurate with the overall complexity of the respective title.

Fig. 1: Example of a thriller-drama episode showing new highest bitrate of 11.8 Mbps
Fig. 2: Example of a sitcom episode with some action showing new highest bitrate of 8.5 Mbps
Fig. 3: Example of a sitcom episode with less action showing new highest bitrate of 6.6 Mbps
Fig. 4: Example of a 4K animation episode showing new highest bitrate of 1.8 Mbps

The bitrate and quality shown for each point are averages for the corresponding stream, computed over the duration of the title. The annotation next to each point is the corresponding encoding resolution; note that video received by the client device is decoded and scaled to the device’s display resolution. As for VMAF score computation, for encoding resolutions less than 4K, we follow the VMAF best practice of upscaling to 4K assuming bicubic upsampling. Aside from the encoding resolution, each point is also associated with an appropriate pixel aspect ratio (PAR) to achieve the target 16:9 display aspect ratio (DAR). For example, the 640×480 encoding resolution is paired with a 4:3 PAR to achieve a 16:9 DAR, consistent with the DAR for other points on the ladder.
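
As a concrete check of the aspect-ratio bookkeeping, the display aspect ratio follows from the encoded picture dimensions and the PAR:

    \mathrm{DAR} = \frac{w}{h} \times \mathrm{PAR} = \frac{640}{480} \times \frac{4}{3} = \frac{4}{3} \times \frac{4}{3} = \frac{16}{9}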

The last example, showing a new highest bitrate of 1.8 Mbps, is a 4K animation episode that can be encoded very efficiently. It serves as an extreme example of content-adaptive ladder optimization; it should not, however, be interpreted to mean that all animation titles land at similarly low bitrates.

The resolutions and bitrates for the fixed-bitrate ladder are pre-determined; minor deviations in the achieved bitrate are due to rate control in the encoder implementation not hitting the target bitrate precisely. On the other hand, each point on the optimized ladder is associated with an optimal bit allocation across all shots, with the goal of maximizing a video quality objective function while resulting in the corresponding average bitrate. Consequently, for the optimized encodes, the bitrate varies shot to shot depending on relative complexity and the overall bit budget, and in theory can reach the respective codec level maximum. Various points are constrained to different codec levels, so receivers can stream the subset of points up to the level their decoders support.
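
To give a flavor of shot-level bit allocation (a toy sketch of the general idea, not our production dynamic optimizer), one classic formulation picks, for each shot, the candidate encode that maximizes quality minus lambda times rate, then adjusts the Lagrange multiplier lambda until the title-level average bitrate meets the budget:

    /* Toy Lagrangian allocation across shots: each shot has candidate
     * encodes (rate, quality); for a given lambda the best candidate
     * maximizes quality - lambda * rate. Sweeping lambda traces the
     * title-level convex hull of rate-quality tradeoffs. */
    typedef struct { double rate; double quality; } Point;

    static int best_point(const Point *pts, int n, double lambda)
    {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (pts[i].quality - lambda * pts[i].rate >
                pts[best].quality - lambda * pts[best].rate)
                best = i;
        return best;
    }

    /* Binary-search lambda so the average rate meets the budget;
     * choice[s] receives the selected candidate for shot s. */
    static void allocate(const Point *const *shots, const int *n_pts,
                         int n_shots, double avg_rate_budget, int *choice)
    {
        double lo = 0.0, hi = 1e9;
        for (int iter = 0; iter < 100; iter++) {
            double lambda = 0.5 * (lo + hi), total = 0.0;
            for (int s = 0; s < n_shots; s++) {
                choice[s] = best_point(shots[s], n_pts[s], lambda);
                total += shots[s][choice[s]].rate;
            }
            if (total / n_shots > avg_rate_budget)
                lo = lambda; /* over budget: raise lambda, spend less */
            else
                hi = lambda; /* under budget: lower lambda, spend more */
        }
    }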

The fixed-bitrate ladder often looks like steps: since it is not title-adaptive, it switches “late” to most encoding resolutions, and as a result the quality stays flat within a resolution even as bitrate increases. For example, there can be two 1080p points, or four 4K points, with identical VMAF scores, resulting in wasted bits and an increased storage footprint.

On the other hand, the optimized ladder appears closer to a monotonically increasing curve: increasing bitrate results in an increasing VMAF score. As a side note, we do have some additional points, not shown in the plots, that are used in resolution-limited scenarios, such as a streaming session limited to a 720p or 1080p highest encoding resolution. Such points lie under (or to the right of) the main convex-hull ladder curve but allow quality to ramp up in resolution-limited scenarios.

Challenging-to-encode content

For the optimized ladders, we have logic to detect quality saturation at the high end, meaning that a further increase in bitrate does not result in a material improvement in quality. Once such a bitrate is reached, it is a good candidate for the topmost rung of the ladder. An additional limit can be imposed as a safeguard to avoid excessively high bitrates.
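
A minimal sketch of such a saturation check (illustrative thresholds and logic; the production implementation has more safeguards):

    /* Walk ladder candidates in increasing-bitrate order and stop once
     * an extra Mbps buys less than a threshold of VMAF improvement;
     * the last accepted point becomes the topmost rung. */
    static int topmost_rung(const double *mbps, const double *vmaf, int n,
                            double min_vmaf_per_mbps, double max_mbps)
    {
        int top = 0;
        for (int i = 1; i < n; i++) {
            double d_rate = mbps[i] - mbps[top];
            double d_qual = vmaf[i] - vmaf[top];
            if (mbps[i] > max_mbps)
                break; /* safeguard: excessively high bitrate */
            if (d_rate <= 0.0 || d_qual / d_rate < min_vmaf_per_mbps)
                break; /* quality has saturated */
            top = i;
        }
        return top;
    }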

Sometimes we ingest a title that would need more bits at the highest end of the quality spectrum — even higher than the 16 Mbps limit of the fixed-bitrate ladder. For example,

  • a rock concert with fast-changing lighting effects and other details or
  • a wildlife documentary with fast action and/or challenging spatial details.

This scenario is generally rare. Nevertheless, the plot below highlights such a case, where the optimized ladder exceeds the fixed-bitrate ladder in terms of the highest bitrate, thereby achieving an improvement in the highest quality.

As expected, the quality is also higher for the same bitrate, even in the low and medium bitrate regions.

Fig. 5: Example of a movie with action and a great amount of rich spatial detail showing new highest bitrate of 17.2 Mbps

Visual examples

As an example, we compare the 1.75 Mbps encode from the fixed-bitrate ladder with the 1.45 Mbps encode from the optimized ladder for one of the titles from our 4K collection. Since 4K resolution entails a rather large number of pixels, we show 1024×512 pixel cutouts from the two encodes. The encodes are decoded and scaled to a 4K canvas prior to extracting the cutouts. We toggle between the cutouts so it is convenient to spot differences. We also show the corresponding full frame which helps to get a sense of how the cutout fits in the corresponding video frame.

Fig. 6: Pristine full frame — the purpose is to give a sense of how below cutouts fit in the frame
Fig. 7: Toggling between 1024×512 pixel cutouts from two encodes as annotated. Corresponding to pristine frame shown in Figure 6.
Fig. 8: Pristine full frame — the purpose is to give a sense of how below cutouts fit in the frame
Fig. 9: Toggling between 1024×512 pixel cutouts from two encodes as annotated. Corresponding to pristine frame shown in Figure 8.
Fig. 10: Pristine full frame — the purpose is to give a sense of how below cutouts fit in the frame
Fig. 11: Toggling between 1024×512 pixel cutouts from two encodes as annotated. Corresponding to pristine frame shown in Figure 10.
Fig. 12: Pristine full frame — the purpose is to give a sense of how below cutouts fit in the frame
Fig. 13: Toggling between 1024×512 pixel cutouts from two encodes as annotated. Corresponding to pristine frame shown in Figure 12.
Fig. 14: Pristine full frame — the purpose is to give a sense of how below cutouts fit in the frame
Fig. 15: Toggling between 1024×512 pixel cutouts from two encodes as annotated. Corresponding to pristine frame shown in Figure 14.

As can be seen, the encode from the optimized ladder delivers crisper textures and higher detail for fewer bits. At 1.45 Mbps it is by no means a perfect 4K rendition, but it is still very commendable for that bitrate. There exist higher-bitrate points on the optimized ladder that deliver impeccable 4K quality, also for fewer bits compared to the fixed-bitrate ladder.

Compression and bitrate ladder improvements

Even before testing the new streams in the field, we observe the following advantages of the optimized ladders vs. the fixed-bitrate ladders, evaluated over 100 sample titles:

  • Computing the Bjøntegaard delta rate (BD-rate) shows 50% gains on average over the fixed-bitrate ladder; in other words, on average we need 50% less bitrate to achieve the same quality with the optimized ladder (see the BD-rate formula after this list).
  • The highest 4K bitrate is 8 Mbps on average, which is also a 50% reduction compared to the 16 Mbps of the fixed-bitrate ladder.
  • As mobile devices continue to improve, they adopt premium features (other than 4K resolution) like 10-bit and HFR. These video encodes can be delivered to mobile devices as well. The fixed-bitrate ladder starts at 560 kbps which may be too high for some cellular networks. The optimized ladder, on the other hand, has lower bitrate points that are viable in most cellular scenarios.
  • The optimized ladder entails a smaller storage footprint compared to the fixed-bitrate ladder.
  • The new ladder considers adding 1440p (aka QHD) points if they lie on the convex hull of the rate-quality tradeoff, and most titles do get the 1440p treatment. As a result, when averaged over 100 titles, the bitrate required to jump to a resolution higher than 1080p (meaning either QHD or 4K) is 1.7 Mbps, compared to 8 Mbps on the fixed-bitrate ladder. When averaged over 100 titles, the bitrate required to jump to 4K resolution is 3.2 Mbps, compared to 8 Mbps on the fixed-bitrate ladder.
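
For reference, the BD-rate between two rate-quality curves is the average bitrate difference at equal quality. With VMAF as the quality axis, each curve is fitted (log-bitrate versus quality) and the gap is integrated over the overlapping quality range:

    \mathrm{BD\text{-}rate} = 10^{\frac{1}{Q_{\max} - Q_{\min}} \int_{Q_{\min}}^{Q_{\max}} \left( \log_{10} R_{2}(Q) - \log_{10} R_{1}(Q) \right) \, dQ} - 1

A negative value means curve 2 (here, the optimized ladder) needs fewer bits than curve 1 (the fixed-bitrate ladder) for the same quality; the 50% figure above corresponds to a BD-rate of about -0.5.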

Benefits to members

At Netflix we perform A/B testing of encoding optimizations to detect any playback issues on client devices as well as to gauge the benefits experienced by our members. One set of streaming sessions receives the default encodes while the other receives the new encodes. This allows us to compare error rates as well as various metrics related to quality of experience (QoE). Although our streams are standard-compliant, A/B testing can and does sometimes find device-side implementations with minor gaps; in such cases we work with our device partners to find the best remedy.

Overall, while A/B testing these new encodes, we have seen the following benefits, which are in line with the offline evaluation covered in the previous section:

  • For members with high-bandwidth connections we deliver the same great quality at half the bitrate on average.
  • For members with constrained bandwidth we deliver higher quality at the same (or even lower) bitrate — higher VMAF at the same encoding resolution and bitrate or even higher resolutions than they could stream before. For example, members who were limited by their network to 720p can now be served 1080p or higher resolution instead.
  • Most streaming sessions start with a higher initial quality.
  • The number of rebuffers per hour goes down by over 65%; members also experience fewer quality drops while streaming.
  • The reduced bitrate, together with some Digital Rights Management (DRM) system improvements (not covered in this blog), reduces the initial play delay by about 10%.

Next steps

We have started re-encoding the 4K titles in our catalog to generate the optimized streams, and we expect to complete the process in a couple of months. We continue to work on applying similar optimizations to our HDR streams.

Acknowledgements

We thank Lishan Zhu for help rendered during A/B testing.

This is a collective effort on the part of our larger team, known as Encoding Technologies, and various other teams with which we have crucial partnerships.

If you are passionate about video compression research and would like to contribute to this field, we have an open position.



Improving our video encodes for legacy devices

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/improving-our-video-encodes-for-legacy-devices-2b6b56eec5c9

by Mariana Afonso, Anush Moorthy, Liwei Guo, Lishan Zhu, Anne Aaron

Netflix has been one of the pioneers of streaming video-on-demand content — we announced our intention to stream video over 13 years ago, in January 2007 — and we have only increased both our device and content reach since then. Given the global nature of the service and Netflix’s commitment to creating a service that members enjoy, it is not surprising that we support a wide variety of streaming devices, from set-top boxes and mobile devices to smart TVs. Hence, as the encoding team, we continuously maintain a variety of encode families, stretching back to H.263. In addition, with 193M members and counting, there is huge diversity in the networks that stream our content as well as in our members’ bandwidth. It is thus imperative that we be sensible in our use of the network and of the bandwidth we require.

Together with our partner teams, our endeavor has always been to produce the best bang for the bit, and to that end, we have aggressively moved towards adopting newer codecs — AV1 being a recent example. These efforts allow our members to have the best viewing experience whenever they watch their favorite show or movie. However, not all members have access to the latest and greatest decoders. In fact, many stream Netflix through devices which cannot be upgraded to use the latest decoders owing to memory limitations, device upgrade cycles, etc., and thus fall back to less efficient encode families. One such encode family that has wide decoder support amongst legacy devices is our H.264/AVC Main profile family.

A few years ago, we improved on the H.264/AVC Main profile streams by employing per-title optimizations. Since then, we have applied innovations such as shot-based encoding and newer codecs to deploy more efficient encode families. Yet, given its wide support, our H.264/AVC Main profile family still represents a substantial portion of our members’ viewing hours and an even larger portion of the traffic. Continuing to innovate on this family has tremendous advantages across the whole delivery infrastructure: it reduces the footprint on our Content Delivery Network (CDN), Open Connect (OC); the load on our partner ISPs’ networks; and the bandwidth usage for our members. In this blog post, we introduce recently implemented changes to our per-title encodes that are expected to lower the bitrate streamed by over 20%, on average, while maintaining a similar level of perceived quality. These changes will be reflected in our product within the next couple of months.

What we have improved on

Keeping in mind our goal to maintain ubiquitous device support, we leveraged what we learned from innovations implemented during the development of newer encode families and have made a number of improvements to our H.264/AVC Main profile per-title encodes. These are summarized below:

  • Instead of relying on other objective metrics, such as PSNR†, VMAF is employed to guide optimization decisions. Given that VMAF is highly correlated with visual quality, this leads to decisions that favor encodes with higher perceived quality.
  • Allowing per-chunk bitrate variations instead of using a fixed per-title bitrate, as in our original complexity-based encoding scheme (a sketch of the idea appears below). This multi-pass strategy, previously employed for our mobile encodes, allows us to avoid over-allocating bits to less complex content, as compared to using a complexity-defined, albeit fixed, bitrate for the entire title. This encoding approach improves the overall bit allocation while keeping a similar average visual quality, and it requires little added computational complexity.
  • Improving the bitrate ladder that is generated after complexity analysis to choose points with greater intelligence than before.
  • Further tuning of pre-defined encoding parameters.

† which we originally used as a quality measure, before we developed VMAF.
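
As an illustrative sketch of the per-chunk idea (our simplification, not the exact production algorithm), picture fixing a per-title quality target and letting each chunk spend only the bits it needs to reach it:

    /* Toy per-chunk allocation: instead of one fixed bitrate for the
     * whole title, each chunk picks the cheapest candidate encode whose
     * VMAF reaches the per-title target, so easy chunks stop soaking up
     * bits. Candidates are assumed sorted by increasing bitrate. */
    typedef struct { double kbps; double vmaf; } Candidate;

    static int pick_for_chunk(const Candidate *c, int n, double vmaf_target)
    {
        for (int i = 0; i < n; i++)
            if (c[i].vmaf >= vmaf_target)
                return i;  /* cheapest encode meeting the target */
        return n - 1;      /* target unreachable: take the best available */
    }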

Performance results

In this section, we present an overview of the performance of the new encodes compared to our existing H.264/AVC Main profile per-title encodes, in terms of bitrate reduction, average compression efficiency improvement using the Bjøntegaard delta rate (BD-rate), and other relevant metrics. These figures were estimated on 200 full-length titles from our catalog and have been validated through extensive A/B testing. They are representative of the savings we expect our CDN, ISP partners, and members to see once the encodes are live.

It is important to highlight that the expected >20% reduction in average session bitrate for these encodes corresponds to a significant reduction in the overall Netflix traffic as well. These changes also lead to an improvement in Quality-of-Experience (QoE) metrics that affect the end user experience, such as play delays (i.e. how long it takes for the video to start playing), rebuffer rates, etc., as a result of the reduction in average bitrates. In addition, footprint savings will allow more content to be stored in edge caches, thus contributing to an improved experience for our members.

Summary

At Netflix, we strive to continuously improve the quality and reliability of our service. Our team is always looking to innovate and to find ways to improve our members’ experiences through more efficient encodes. In this tech blog, we summarized how we made improvements towards optimizing our video encodes for legacy devices with limited decoder support. These changes will result in a number of benefits for our members while maintaining perceived quality. If your preferred device is streaming one of these profiles, you’ll experience the new encodes soon — so, sit back, grab the remote, and stream away, we’ve got your back!

If you are passionate about research and would like to contribute to this field, we have an open position in our team!

