
Native Frame Rate Playback

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/native-frame-rate-playback-6c87836a948

by Akshay Garg, Roger Quero

Introduction

Maximizing immersion is an important goal for the Netflix product and engineering teams: it keeps our members entertained and fully engaged in our content. Leveraging a good mix of mature and cutting-edge client device technologies to deliver a smooth playback experience with glitch-free in-app transitions is an important step towards achieving this goal. In this article we explain our journey towards productizing a better viewing experience for our members by utilizing features and capabilities in consumer streaming devices.

If you have a streaming device connected to your TV, such as a Roku Set Top Box (STB) or an Amazon FireTV Stick, you may have come across an option in the device display setting pertaining to content frame rate. Device manufacturers often call this feature “Match Content Frame Rate”, “Auto adjust display refresh rate” or something similar. If you’ve ever wondered what these features are and how they can improve your viewing experience, keep reading — the following sections cover the basics of this feature and explain the details of how the Netflix application uses it.

Problem

Netflix's content catalog is composed of video captured and encoded at one of several frame rates ranging from 23.97 to 60 frames per second (fps). When a member chooses to watch a movie or a TV show on a source device (e.g. set-top box, streaming stick, game console), the content is delivered and then decoded at its native frame rate, which is the frame rate it was captured and encoded in. After the decode step, the source device converts the video to the HDMI output frame rate, which is configured based on the capabilities of the HDMI input port of the connected sink device (TV, AVR, monitor, etc.). In general, the output frame rate over HDMI is automatically set to 50fps for PAL regions and 60fps for NTSC regions.

Netflix offers a limited amount of high frame rate content (50fps or 60fps), but the majority of our catalog and viewing hours can be attributed to members watching 23.97 to 30fps content. This essentially means that most of the time, our content goes through a process called frame rate conversion (aka FRC) on the source device, which converts the content from its native frame rate to the HDMI output frame rate by replicating frames. Figure 1 illustrates a simple FRC algorithm that converts 24fps content to 60fps.

Figure 1: 3:2 pulldown technique to convert 24fps content to 60fps
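To make the replication concrete, here is a minimal sketch (purely illustrative, not actual device code) of the 3:2 pulldown from Figure 1: each 24fps source frame is repeated alternately three times and two times, producing 60 output frames per second.

```python
# Minimal illustrative sketch of 3:2 pulldown: repeat source frames
# alternately 3 and 2 times to turn 24 frames into 60.

def pulldown_3_2(frames):
    output = []
    for i, frame in enumerate(frames):
        repeats = 3 if i % 2 == 0 else 2   # 3, 2, 3, 2, ...
        output.extend([frame] * repeats)
    return output

one_second_of_24fps = list(range(24))
converted = pulldown_3_2(one_second_of_24fps)
print(len(converted))   # 60  (12 frames x 3 + 12 frames x 2)
```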

Converting the content and transmitting it over HDMI at the output frame rate sounds logical and straightforward. In fact, FRC works well when the output frame rate is an integer multiple of the native frame rate (e.g. 24→48, 25→50, 30→60, 24→120, etc.). On the other hand, FRC introduces a visual artifact called judder when a non-integer multiple conversion is required (e.g. 24→60, 25→60, etc.), which manifests as choppy video playback as illustrated below:

With Judder
Without Judder

It is important to note that the severity of the judder depends on the replication pattern. For this reason, judder is more prominent in PAL regions because of the process of converting 24fps content to 50fps over HDMI (see Figure 2):

  • A total of 50 frames must be transmitted over HDMI per second
  • The source device must replicate the original 24 frames to fill in the missing 26 frames
  • The 50 output frames are derived from the 24 original frames as follows:
      • 22 frames are duplicated (total of 44 frames)
      • 2 frames are repeated three times (total of 6 frames)
Figure 2: Example of a 24 to 50fps frame rate conversion algorithm

To recap, judder is more pronounced when the repeated frames occur with an inconsistent, spread-out cadence. In the scenario above, the frame replication factor varies between 2 and 3 in an irregular pattern, resulting in a more prominent judder.
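The same idea can be sketched for an arbitrary conversion. The helper below is illustrative only (not any particular device's FRC algorithm): for 24→60 the repeat counts alternate regularly, while for 24→50 most frames are shown twice and only two frames per second are shown three times, spaced far apart, which is the uneven cadence perceived as judder.

```python
# Illustrative sketch: derive how many times each source frame is shown by
# tracking the cumulative number of output frames owed after each source frame.

def repeat_pattern(native_fps, output_fps):
    pattern, emitted = [], 0
    for i in range(1, native_fps + 1):
        target = (i * output_fps) // native_fps   # output frames owed so far
        pattern.append(target - emitted)
        emitted = target
    return pattern

print(repeat_pattern(24, 60))   # regular 2/3 alternation: the 3:2 pulldown cadence
print(repeat_pattern(24, 50))   # 22 frames shown twice, 2 frames shown three times,
                                # spaced far apart: the uneven cadence seen as judder
```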

Judder Mitigation Solutions

Now that we have a better understanding of the issue, let’s review the solutions that Netflix has invested in. Due to the fragmented nature of device capabilities in the ecosystem, we explored multiple solutions to address this issue for as many devices as possible. Each unique solution leverages existing or new source device capabilities and comes with various tradeoffs.

Solution #1: Match HDMI frame rate to content Native Frame Rate

The first solution we explored and recently enabled leverages the capability of existing source and sink devices to change the outgoing frame rate on the HDMI link. Once this feature is enabled in the system settings, devices will match the HDMI output frame rate to the content frame rate, either exactly or at an integer multiple of it, without user intervention.

While this sounds like the perfect solution, devices that support older HDMI technologies (e.g. HDMI versions below 2.1) can't change the frame rate without also changing the HDMI data rate. This results in what is often referred to as an "HDMI bonk", which causes the TV to display a blank screen momentarily. Not only is this a disruptive experience for members, but the duration of the blank screen varies depending on how fast the source and sink devices can resynchronize. Figure 3 below is an example of how this transition looks:

Figure 3: Native frame rate experience with screen blanking

Solution #2: Match HDMI frame rate to content Native Frame Rate w/o screen blanking

Improvements in the recent HDMI standards (HDMI 2.1+) now allow a source device to send the video content at its native frame rate without needing an HDMI resynchronization. This is possible through an innovative technology called Quick Media Switching (QMS) which is an extension of Variable Refresh Rate (VRR) targeted for content playback scenarios. QMS allows a source device to maintain a constant data rate on the HDMI link even during transmission of content with different frame rates. It does so by adjusting the amount of non-visible padding data while keeping the amount of visible video data constant. Due to the constant HDMI data rate, the HDMI transmitter and receiver don’t need to resynchronize, leading to a seamless/glitch-free transition as illustrated in Figure 4.

HDMI QMS is positioned to be the ideal solution to address the problem we are presenting. Unfortunately, at present, this technology is relatively new and adoption into source and sink devices will take time.

Figure 4: Native frame rate experience without screen blanking using HDMI QMS

Solution #3: Frame Rate Conversion within Netflix Application

Apart from the above HDMI-specification-dependent solutions, it is possible for an application like Netflix to manipulate the presentation timestamp of each video frame to minimize the effect of judder. In other words, the application can present video frames to the underlying source device platform at a cadence that helps the source device minimize the judder associated with FRC on the HDMI output link.

Let us understand this idea with the help of an example. Let's go back to the same 24 to 50 fps FRC scenario that was covered earlier. But instead of thinking about the FRC rate per second (24 ⇒ 50 fps), let's expand the FRC calculation window to 3 seconds (24*3 = 72 input frames ⇒ 50*3 = 150 output frames). For content with a native frame rate of 24 fps, the source device needs to get 72 frames from the streaming application in a period of 3 seconds. Instead of sending 24 frames per second at a regular per-second cadence, for each 3-second period the Netflix application can decide to send 25 frames in each of the first 2 seconds (25 x 2 = 50) and 22 frames in the 3rd second, thereby still sending a total of 72 (50 + 22) frames in 3 seconds. This approach creates an even FRC in the first 2 seconds (25 frames replicated twice evenly), and in the 3rd second the source device can do a 22 to 50 fps FRC, which creates less visual judder than a 24→50 fps FRC thanks to a more even frame replication pattern. This concept is illustrated in Figure 5 below.

Figure 5: FRC Algorithm from Solution#3 for 24 to 50 fps conversion

NOTE: This solution was developed by David Zheng in the Partner Experience Technology team at Netflix. Watch out for an upcoming article going into further details of this solution.
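To make the re-timing idea concrete, here is a concept sketch only (not the production implementation referenced in the note above): it re-times 24fps frames so that each 3-second window carries 25 + 25 + 22 frames, per the example discussed earlier.

```python
# Concept sketch: presentation timestamps that place pattern[i] frames evenly
# inside second i, repeating every len(pattern) seconds.

def retimed_pts(num_frames, pattern=(25, 25, 22)):
    pts, frame, second = [], 0, 0
    while frame < num_frames:
        per_second = pattern[second % len(pattern)]
        for i in range(per_second):
            if frame == num_frames:
                break
            pts.append(second + i / per_second)
            frame += 1
        second += 1
    return pts

timestamps = retimed_pts(72)            # 3 seconds of 24fps content
print(timestamps[:3], timestamps[50])   # [0.0, 0.04, 0.08] ... 2.0 (start of the 22-frame second)
```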

How the Netflix Application Uses these Solutions

Given the available solutions and their respective benefits and limitations, the Netflix application running on a source device chooses one of these approaches based on factors such as source and sink device capabilities, user preferences, and the specific use case within the Netflix application. Let's walk through each of these aspects briefly.

Device Capability

Every source device that integrates the Netflix application is required to let the application know whether it and the connected sink device have the ability to send and receive video content at its native frame rate. In addition, a source device is required to indicate whether it supports QMS and can perform a seamless playback start of any content at its native frame rate on the connected HDMI link.

As discussed in the introduction section, the presence of a system setting like “Match Content Frame Rate” typically indicates that a source device is capable of this feature.

User Preference

Even if a source device and the connected sink support native content frame rate streaming (seamless or non-seamless), a user might have opted out of it via the source device system settings, e.g. "Match Content Frame Rate" set to "Never". Or they might have indicated a preference for doing this only when playback can start at the native content frame rate in a seamless manner, e.g. "Match Content Frame Rate" set to "Seamless".

The Netflix application needs to know this user selection in order to honor their preference. Hence, source devices are expected to relay this user preference to the Netflix application to help with this run-time decision making.

Netflix Use Case

Even when the source device capability and the user preference collectively indicate that Native Content Frame Rate streaming should be enabled, the Netflix application can decide to disable this feature for specific member experiences. As an example, when the user is browsing Netflix content in the home UI, we cannot play Netflix trailers at their native frame rate for the following reasons:

  • With Solution #1, because Netflix trailers are encoded at varying frame rates, switching between trailers would result in screen blanking, making UI browsing unusable.
  • With Solution #2, sending Netflix trailers at their native frame rate would mean that the associated UI components (cursor movement, asset selection, etc.) are also displayed at the reduced frame rate, resulting in a sluggish UI browsing experience. This is because on the HDMI output from the source device, the graphics (Netflix application UI) and the video are blended together on the source device and go out at the same frame rate, the native content frame rate of the trailer.

To handle these issues, we follow the approach shown in Figure 6 below: the Native Frame Rate playback experience is enabled only when the user selects a title and watches it in full screen with minimal graphical UI elements.

Figure 6: Native Frame Rate usage within Netflix application
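Putting these three factors together, the sketch below shows how such a runtime decision could look. All names and fields are hypothetical, not the actual Netflix device integration contract.

```python
# Hypothetical sketch: combine device capability, the user's
# "Match Content Frame Rate" preference and the current UI context.

from dataclasses import dataclass

@dataclass
class DeviceCaps:
    native_frame_rate: bool   # source + connected sink can switch the HDMI frame rate
    seamless_switch: bool     # the switch is seamless (e.g. QMS), no screen blanking

def use_native_frame_rate(caps, user_pref, in_fullscreen_playback):
    if not caps.native_frame_rate or user_pref == "never":
        return False
    if user_pref == "seamless" and not caps.seamless_switch:
        return False
    # Never while browsing the home UI / trailers; only for full-screen playback.
    return in_fullscreen_playback

# A capable but non-seamless device, user allows only seamless switching:
print(use_native_frame_rate(DeviceCaps(True, False), "seamless", True))   # False
```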

Conclusion

This article presented features that aim to improve the content playback experience on HDMI source devices. The breadth of available technical solutions, user selectable preferences, device capabilities and the application of each of these permutations in the context of various in-app member journeys represent a typical engineering and product decision framework at Netflix. Here at Netflix, our goal is to maximize immersion for our members through introduction of new features that will improve their viewing experience and keep them fully engaged in our content.

Acknowledgements

We would like to acknowledge the hard work of a number of teams that came together to deliver the features being discussed in this document. These include Core UI and JS Player development, Netflix Application Software development, AV Test and Tooling (earlier article from this team), Partner Engineering and Product teams in the Consumer Engineering organization and our data science friends in the Data Science and Engineering organization at Netflix. Diagrams in this article are courtesy of our Partner Enterprise Platform XD team.



Life of a Netflix Partner Engineer — The case of the extra 40 ms

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/life-of-a-netflix-partner-engineer-the-case-of-extra-40-ms-b4c2dd278513


By: John Blair, Netflix Partner Engineering

The Netflix application runs on hundreds of smart TVs, streaming sticks and pay TV set top boxes. The role of a Partner Engineer at Netflix is to help device manufacturers launch the Netflix application on their devices. In this article we talk about one particularly difficult issue that blocked the launch of a device in Europe.

The mystery begins

Towards the end of 2017, I was on a conference call to discuss an issue with the Netflix application on a new set top box. The box was a new Android TV device with 4k playback, based on Android Open Source Project (AOSP) version 5.0, aka “Lollipop”. I had been at Netflix for a few years, and had shipped multiple devices, but this was my first Android TV device.

All four players involved in the device were on the call: there was the large European pay TV company (the operator) launching the device, the contractor integrating the set-top-box firmware (the integrator), the system-on-a-chip provider (the chip vendor), and myself (Netflix).

The integrator and Netflix had already completed the rigorous Netflix certification process, but during the TV operator's internal trial an executive at the company reported a serious issue: Netflix playback on his device was "stuttering", i.e. video would play for a very short time, then pause, then start again, then pause. It didn't happen all the time, but would reliably start to happen within a few days of powering on the box. They supplied a video and it looked terrible.

The device integrator had found a way to reproduce the problem: repeatedly start Netflix, start playback, then return to the device UI. They supplied a script to automate the process. Sometimes it took as long as five minutes, but the script would always reliably reproduce the bug.

Meanwhile, a field engineer for the chip vendor had diagnosed the root cause: Netflix’s Android TV application, called Ninja, was not delivering audio data quickly enough. The stuttering was caused by buffer starvation in the device audio pipeline. Playback stopped when the decoder waited for Ninja to deliver more of the audio stream, then resumed once more data arrived. The integrator, the chip vendor and the operator all thought the issue was identified and their message to me was clear: Netflix, you have a bug in your application, and you need to fix it. I could hear the stress in the voices from the operator. Their device was late and running over budget and they expected results from me.

The investigation

I was skeptical. The same Ninja application runs on millions of Android TV devices, including smart TVs and other set top boxes. If there was a bug in Ninja, why was it only happening on this device?

I started by reproducing the issue myself using the script provided by the integrator. I contacted my counterpart at the chip vendor, asked if he’d seen anything like this before (he hadn’t). Next I started reading the Ninja source code. I wanted to find the precise code that delivers the audio data. I recognized a lot, but I started to lose the plot in the playback code and I needed help.

I walked upstairs and found the engineer who wrote the audio and video pipeline in Ninja, and he gave me a guided tour of the code. I spent some quality time with the source code myself to understand its working parts, adding my own logging to confirm my understanding. The Netflix application is complex, but at its simplest it streams data from a Netflix server, buffers several seconds worth of video and audio data on the device, then delivers video and audio frames one-at-a-time to the device’s playback hardware.

A diagram showing content downloaded to a device into a streaming buffer, then copied into the device decode buffer.
Figure 1: Device Playback Pipeline (simplified)

Let’s take a moment to talk about the audio/video pipeline in the Netflix application. Everything up until the “decoder buffer” is the same on every set top box and smart TV, but moving the A/V data into the device’s decoder buffer is a device-specific routine running in its own thread. This routine’s job is to keep the decoder buffer full by calling a Netflix provided API which provides the next frame of audio or video data. In Ninja, this job is performed by an Android Thread. There is a simple state machine and some logic to handle different play states, but under normal playback the thread copies one frame of data into the Android playback API, then tells the thread scheduler to wait 15 ms and invoke the handler again. When you create an Android thread, you can request that the thread be run repeatedly, as if in a loop, but it is the Android Thread scheduler that calls the handler, not your own application.

To play a 60fps video, the highest frame rate available in the Netflix catalog, the device must render a new frame every 16.66 ms, so checking for a new sample every 15ms is just fast enough to stay ahead of any video stream Netflix can provide. Because the integrator had identified the audio stream as the problem, I zeroed in on the specific thread handler that was delivering audio samples to the Android audio service.

I wanted to answer this question: where is the extra time? I assumed some function invoked by the handler would be the culprit, so I sprinkled log messages throughout the handler, assuming the guilty code would be apparent. What was soon apparent was that there was nothing in the handler that was misbehaving, and the handler was running in a few milliseconds even when playback was stuttering.

Aha, Insight

In the end, I focused on three numbers: the rate of data transfer, the time when the handler was invoked and the time when the handler passed control back to Android. I wrote a script to parse the log output, and made the graph below which gave me the answer.
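The parsing itself was straightforward; the sketch below shows the general shape of such a script. The log line format is hypothetical (added here for illustration), but the three quantities it extracts are the ones just described: handler run time, gap since the previous invocation, and bytes moved per elapsed millisecond.

```python
# Sketch of a log-parsing script of this kind; the log format is hypothetical.
import re

LINE = re.compile(r"(?P<event>enter|exit) ts=(?P<ts>\d+)ms(?: bytes=(?P<bytes>\d+))?")

def analyze(log_lines):
    rows, prev_enter, enter_ts = [], None, None
    for line in log_lines:
        m = LINE.search(line)
        if not m:
            continue
        ts = int(m.group("ts"))
        if m.group("event") == "enter":
            prev_enter, enter_ts = enter_ts, ts
        elif enter_ts is not None:
            duration = ts - enter_ts                                         # "yellow line"
            gap = enter_ts - prev_enter if prev_enter is not None else None  # "grey line"
            rate = int(m.group("bytes")) / gap if gap else None              # "orange line", bytes/ms
            rows.append((enter_ts, duration, gap, rate))
    return rows

print(analyze(["audio enter ts=1000ms", "audio exit ts=1002ms bytes=675",
               "audio enter ts=1015ms", "audio exit ts=1017ms bytes=675"]))
# [(1000, 2, None, None), (1015, 2, 15, 45.0)]
```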

A graph showing time spent in the thread handler and audio data throughput.
Figure 2: Visualizing Audio Throughput and Thread Handler Timing

The orange line is the rate that data moved from the streaming buffer into the Android audio system, in bytes/millisecond. You can see three distinct behaviors in this chart:

  1. The two tall, spiky parts where the data rate reaches 500 bytes/ms. This phase is buffering, before playback starts. The handler is copying data as fast as it can.
  2. The region in the middle is normal playback. Audio data is moved at about 45 bytes/ms.
  3. The stuttering region is on the right, when audio data is moving at closer to 10 bytes/ms. This is not fast enough to maintain playback.

The unavoidable conclusion: the orange line confirms what the chip vendor’s engineer reported: Ninja is not delivering audio data quickly enough.

To understand why, let’s see what story the yellow and grey lines tell.

The yellow line shows the time spent in the handler routine itself, calculated from timestamps recorded at the top and the bottom of the handler. In both normal and stutter playback regions, the time spent in the handler was the same: about 2 ms. The spikes show instances when the runtime was slower due to time spent on other tasks on the device.

The real root cause

The grey line, the time between calls invoking the handler, tells a different story. In the normal playback case you can see the handler is invoked about every 15 ms. In the stutter case, on the right, the handler is invoked approximately every 55 ms. There are an extra 40 ms between invocations, and there’s no way that can keep up with playback. But why?
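Before getting to the why, a quick back-of-the-envelope check with the approximate numbers read off the graph confirms that a 55 ms cadence cannot keep up: the handler copies roughly the same amount of audio per call, so stretching the interval from 15 ms to 55 ms drops the delivery rate far below what playback needs.

```python
# Rough arithmetic with the approximate values from Figure 2.
bytes_per_call = 45 * 15        # ~45 bytes/ms sustained with ~15 ms between calls, about 675 bytes

normal  = bytes_per_call / 15   # ~45 bytes/ms: keeps up with playback
stutter = bytes_per_call / 55   # ~12 bytes/ms: in line with the ~10 bytes/ms stutter region
print(round(normal), round(stutter, 1))   # 45 12.3
```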

I reported my discovery to the integrator and the chip vendor (look, it’s the Android Thread scheduler!), but they continued to push back on the Netflix behavior. Why don’t you just copy more data each time the handler is called? This was a fair criticism, but changing this behavior involved deeper changes than I was prepared to make, and I continued my search for the root cause. I dove into the Android source code, and learned that Android Threads are a userspace construct, and the thread scheduler uses the epoll() system call for timing. I knew epoll() performance isn’t guaranteed, so I suspected something was affecting epoll() in a systematic way.

At this point I was saved by another engineer at the chip supplier, who discovered a bug that had already been fixed in the next version of Android, named Marshmallow. The Android thread scheduler changes the behavior of threads depending on whether an application is running in the foreground or the background. Threads in the background are assigned an extra 40 ms (40000000 ns) of wait time.

A bug deep in the plumbing of Android itself meant this extra timer value was retained when the thread moved to the foreground. Usually the audio handler thread was created while the application was in the foreground, but sometimes the thread was created a little sooner, while Ninja was still in the background. When this happened, playback would stutter.

Lessons learned

This wasn’t the last bug we fixed on this platform, but it was the hardest to track down. It was outside of the Netflix application, in a part of the system that was outside of the playback pipeline, and all of the initial data pointed to a bug in the Netflix application itself.

This story really exemplifies an aspect of my job I love: I can’t predict all of the issues that our partners will throw at me, and I know that to fix them I have to understand multiple systems, work with great colleagues, and constantly push myself to learn more. What I do has a direct impact on real people and their enjoyment of a great product. I know when people enjoy Netflix in their living room, I’m an essential part of the team that made it happen.



HDMI — Scaling Netflix Certification

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/hdmi-scaling-netflix-certification-8e9cb3ec524f


Scott Bolter, Matthew Lehman, Akshay Garg ¹

At Netflix, we take the task of preserving the creative vision of our content all the way to a subscriber TV screen very seriously. This significantly increases the scope of our application integration and certification processes for streaming devices like set-top-boxes (STBs) and TVs. However, given a diverse device ecosystem, scaling this deeper level of validation for each device presents a significant challenge for our certification teams.

Our first step towards addressing this challenge is to actively engineer the removal of manual and subjective testing approaches across different functional touch points of the Netflix application on a streaming device. In this article we talk about one such functional area, High Definition Multimedia Interface (HDMI), the challenges it brings in relation to Netflix certification on STBs, and our in-house developed automated and objective testing workflows that help us simplify this process.

Why test High Definition Multimedia Interface (HDMI)?

The HDMI spec includes several protocols and capabilities that are key to successfully transmitting audio, video, and other digital messages from source (STB) to sink (display) devices. Some of these capabilities include:

  • Extended Display Identification Data (EDID)
  • Audio and Video metadata (Info Frames) to help communicate media formats like multi-channel audio and High Dynamic Range (HDR) video
  • High-Bandwidth Digital Content Protection (HDCP)
  • Consumer Electronics Control (CEC)

A high quality Netflix experience on the STB device depends on the correct implementation of each of these capabilities, so we have a vested interest in thoroughly testing them.

Scaling HDMI Testing

Here are some of the challenges associated with HDMI testing on STBs:

  • The need to physically obtain and replicate different home entertainment setups of TVs and Home Theater Systems (HDMI topologies).
  • Time spent in manually changing these topologies between and during different tests.
  • Inconsistent test results due to different device models used in the HDMI topology.
  • Subjectivity in test results to accommodate differences in HDMI sink behaviors.

To deal with these scaling challenges, we have opted to integrate API-enabled HDMI Signal Analyzers into our test infrastructure. This provides the ability to simulate different HDMI topologies within a test case by leveraging the analyzer’s API.

Simulating multiple HDMI Topologies²

Next, we will cover basics of the HDMI protocols highlighted in the previous section and walk through the automation workflows that we have developed to address the related challenges.

EDID

Background

Every HDMI-capable TV transmits its Extended Display Identification Data, or EDID, to the connected HDMI source device (STB). The EDID is the means by which a sink device advertises its supported audio and video capabilities such as the spatial resolution (number of pixels), the temporal resolution (number of frames per second), and the color formats in which these frames of pixels are rendered.

EDID exchange between HDMI Sink and HDMI Source

Testing Approach

Using an HDMI analyzer we can advertise the EDID of any HDMI-capable sink to a STB device. This allows us to simulate an environment in which the device under test (DUT), i.e. the STB, behaves just as it would if it were physically connected to an HDMI sink represented by that EDID. This ability to emulate different HDMI sinks has proven very useful for us, yielding increased automation, objective evaluation, and scalability across a number of our test cases. Previously, by comparison, a tester was tasked with gathering many physical HDMI sink devices, plugging each one into the DUT, validating, and moving on to the next scenario.

Test Area: Requested Media Profile Validation

A Netflix capable STB should request accurate media streams from our cloud service for optimal user experience. The media stream format and fidelity to be requested is decided by a combination of the inherent HDMI output capabilities of the STB and that of the connected HDMI topology. For example, an HD-only STB should not request 4K video streams. Likewise, a STB connected to a TV with stereo-only speakers should not request multi-channel Dolby Digital Plus Atmos audio streams. In order to comprehensively test the accuracy of the media streams being requested by the DUT under different HDMI setups, we emulate a variety of HDMI sinks with distinct resolution and media format capabilities by looping through a collection of EDID files on an HDMI analyzer connected to the DUT. For each EDID version we then validate the media profiles being requested by the DUT in an automated manner by comparing them against a reference expected set. This ensures that the media streams requested by DUT accurately reflect its HDMI capabilities and the active HDMI topology.
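The automated loop is conceptually simple. The sketch below shows its general shape; the analyzer client, the DUT helper, their method names, and the expectations file are hypothetical stand-ins for our internal test infrastructure.

```python
# Hypothetical sketch of the EDID-driven media-profile test loop.
import json, pathlib, time

def run_media_profile_tests(analyzer, dut, edid_dir="edids", expectations="expected_profiles.json"):
    expected = json.loads(pathlib.Path(expectations).read_text())
    failures = []
    for edid_file in sorted(pathlib.Path(edid_dir).glob("*.bin")):
        analyzer.load_edid(edid_file.read_bytes())   # emulate a sink advertising this EDID
        analyzer.pulse_hotplug()                     # make the DUT re-read the EDID
        time.sleep(2)                                # let the DUT settle on the new topology
        requested = dut.start_playback_and_get_requested_profiles()
        if sorted(requested) != sorted(expected[edid_file.name]):
            failures.append((edid_file.name, requested))
    return failures
```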

HDR

Background

HDR enables a wider range of colors, deeper blacks, and brighter specular highlights. However, when graphics in the sRGB color space, such as subtitles and media player controls, are composited on a video layer in the HDR format they need to be correctly converted into a wider BT.2020 color space and a larger range of luminance. Netflix gives guidance to preserve the original creative intent of the non-HDR graphics, so they appear the same when rendered in HDR output mode. This concept is known as perceptual mapping (BT.2087.0).

An increasing number of STB devices are capable of producing HDR output within the entire user experience rather than only during video playback, making accurate graphics color space and luminance mapping a more important part of a good user experience. Incorrectly mapped, these graphics can appear dim or colors can look oversaturated as you can see in the images below.

Undersaturated Netflix UI
Expected Netflix UI after SDR to HDR graphics conversion
Oversaturated Netflix UI

Testing Approach

Even if a STB follows the Netflix recommended sRGB to HDR color volume mapping for graphics, the end result on a screen is rather subjective. Different display panels add their own characteristics to the final output. Some testers might even prefer oversaturated graphics. Thankfully we can use an HDMI analyzer in an automated manner to remove this subjectivity from our testing.

Test Area: sRGB Graphics to HDR Color Volume mapping

Using an HDMI analyzer we can objectively measure pixel values for characteristics such as chromaticity and luminance. In our HDR-specific tests we use graphics that cover both the boundary of the sRGB color space and its entire luminance range. When these images are applied to the graphics plane of a STB sending HDMI output in HDR mode, the STB has to convert its graphics plane into the HDR color volume so that it can output both the graphics and video elements in a unified format. By capturing this STB output on an HDMI analyzer, we measure and verify that after this graphics color volume conversion, the output pixel values of the graphics region stay within the expected boundaries of its original non-HDR color space and luminance range, as per our perceptual mapping requirement. The figure below highlights this testing process.

Validation of SDR to HDR color volume conversion for graphics
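The sketch below illustrates the shape of this check under a few stated assumptions: the analyzer exposes per-pixel luminance of a captured frame in nits, sRGB reference white maps to roughly 100 nits in the HDR output, and the test-pattern region is known. The API names and tolerance are illustrative, not our actual tooling.

```python
# Hypothetical sketch of the objective graphics-mapping check.
SDR_REFERENCE_WHITE_NITS = 100    # assumed target for sRGB white mapped into HDR
TOLERANCE = 0.10                  # allow 10% deviation (illustrative)

def graphics_region_within_sdr_range(analyzer, region):
    frame = analyzer.capture_frame()             # assumed: per-pixel luminance in nits
    luminance = frame.luminance_values(region)   # pixels of the graphics test pattern
    ceiling = SDR_REFERENCE_WHITE_NITS * (1 + TOLERANCE)
    return all(value <= ceiling for value in luminance)
```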

HDCP

Background

With the goal of preventing content piracy, it is of utmost importance that a STB device running the Netflix application is able to protect our content from being compromised on that device. One of the many important steps toward ensuring this is to validate STB devices’ adherence to High-bandwidth Digital Content Protection (HDCP) policies as specified in the Digital Rights Management (DRM) licenses associated with our streams.

A Netflix DRM license typically provides a mapping of the minimum required HDCP version (v1.4 or v2.2) for each content resolution i.e. the minimum HDCP version that must be established on the link between HDMI source (STB) and sink device (display) for the source to be able to send the associated decrypted video signal at a specific resolution over the HDMI cable. In order to effectively apply these HDCP policies, we must be able to trust the HDMI source device’s reporting of the effective HDCP state negotiated with the HDMI sink as well as its enforcement of the minimum required HDCP version for each output content resolution.

Testing Approach

As the task of procuring various audio/video repeaters (e.g. home theater systems), HDMI switches, and connected displays is very time-consuming and does not scale, we once again lean on an HDMI analyzer for test automation.

Test Area: Accurate HDCP Version Reporting

Leveraging the analyzer, we can simulate the following HDMI topologies:

  1. STB connected to a TV
  2. STB connected to a repeater which in turn is connected to a TV

In each topology, we can also tweak the level of HDCP support, i.e. HDCP v1.4 or HDCP v2.2, on the repeater and the TV individually in an automated manner using the relevant HDMI analyzer APIs. These abilities allow us to create multiple test setups, as shown in the figure below, and in each such setup the DUT is required to report the effective HDCP version (the lowest version in the topology) to the Netflix application so that our service can serve the appropriate content to the client in that configuration.

Multiple possible HDCP version configurations
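A sketch of this loop is below. The analyzer and DUT helpers are hypothetical, but the expectation is exactly the one stated above: the DUT must report the lowest HDCP version present in the emulated topology.

```python
# Hypothetical sketch of the HDCP version reporting test.
from itertools import product

HDCP_RANK = {"1.4": 1, "2.2": 2}

def run_hdcp_reporting_tests(analyzer, dut):
    failures = []
    for repeater, tv in product([None, "1.4", "2.2"], ["1.4", "2.2"]):
        analyzer.emulate_topology(repeater_hdcp=repeater, tv_hdcp=tv)   # None = no repeater
        chain = [v for v in (repeater, tv) if v is not None]
        expected = min(chain, key=HDCP_RANK.get)   # lowest version in the topology
        reported = dut.reported_hdcp_version()     # what the DUT tells the Netflix application
        if reported != expected:
            failures.append((repeater, tv, reported, expected))
    return failures
```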

Test Area: Adherence to HDCP Policies

While testing the reported HDCP version ensures that the DUT sends correct data to Netflix services to obtain the appropriate content streams and DRM licenses, we also need to test that the DUT adheres to the video output restrictions stipulated in that license, e.g. blocking content requiring HDCP v2.2 when only HDCP v1.4 is negotiated on the HDMI link. To ensure this, we again use an HDMI analyzer to emulate different HDMI topologies virtually and initiate playback using a variety of DRM licenses that stipulate distinct types of content protection rules for video output. Finally, we switch across different HDCP versions on the HDMI analyzer, ensuring that in each configuration the DUT follows the video output protection rules stipulated by the DRM license by taking one of the following required actions:

  1. Allow HDMI output of the video stream as-is
  2. Downscale the video output resolution if resolution is the content protection criteria
  3. Block the HDMI video output completely
  4. Stop playback by throwing an insufficient HDCP protection error to Netflix application

each of which can be validated in an automated manner leveraging the HDMI analyzer and the relevant Netflix application events.
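The sketch below captures the shape of that validation. The helper objects and the expectations table are illustrative; the real acceptable outcomes for each combination come from the output protection rules in the DRM license.

```python
# Hypothetical sketch of the HDCP policy adherence loop.
ACCEPTABLE = {
    # (license, negotiated HDCP) -> outcomes allowed by that license (illustrative table)
    ("requires_hdcp_2_2", "1.4"): {"downscale", "block_output", "playback_error"},
    ("requires_hdcp_2_2", "2.2"): {"output_as_is"},
    ("requires_hdcp_1_4", "1.4"): {"output_as_is"},
    ("requires_hdcp_1_4", "2.2"): {"output_as_is"},
}

def run_hdcp_policy_tests(analyzer, dut):
    failures = []
    for (license_id, hdcp), allowed in ACCEPTABLE.items():
        analyzer.emulate_topology(tv_hdcp=hdcp)
        observed = dut.play_with_license(license_id)   # observed action on the HDMI output
        if observed not in allowed:
            failures.append((license_id, hdcp, observed))
    return failures
```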

CEC

Background

Consumer Electronics Control (CEC) protocol implementations on HDMI devices typically provide the convenience of indirect device control, e.g. using a TV remote to control the volume of the connected home theater system. Aside from this benefit, however, CEC messages can also indicate which HDMI input of a downstream device (HDMI sink) is actively being used, or whether the downstream device itself is in a standby state. This is of interest to a streaming application like Netflix running on an HDMI source. Whether or not the STB running Netflix is connected to an active HDMI input on the sink device has implications for what the Netflix application should or should not be doing, so we want the STB to correctly signal this active or inactive CEC state to the Netflix application.

Testing Approach

Once again, to remove the issues of variability in how CEC is branded on a sink device, how it is enabled in a device menu system, and under what conditions relevant CEC messages are transmitted, we can use an HDMI analyzer to send carefully crafted CEC operational codes to the STB.

Test Area: Accurate CEC Active State Notifications

After sending custom CEC messages targeted to the DUT on the HDMI bus, we can ensure that it behaves correctly in response to these messages. Some of these test scenarios are highlighted in the figure below.

CEC Active State notification in different scenarios

As an example we could send a CEC message to the STB to notify it that the active HDMI input on the HDMI sink has changed to some other source. Likewise we can also simulate the occurrence of an HDMI sink standby transition by broadcasting a CEC Standby message. In both scenarios we expect the source device to become CEC inactive and notify this updated CEC state to its local Netflix application.
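A sketch of one such check is below; the analyzer and DUT helpers are hypothetical, while the Standby opcode value is taken from the CEC specification.

```python
# Hypothetical sketch of a CEC active-state test.
import time

CEC_STANDBY_OPCODE = 0x36        # <Standby> opcode in the HDMI CEC specification

def test_sink_standby_marks_dut_inactive(analyzer, dut):
    assert dut.cec_active_state() is True                # precondition: DUT is the active source
    analyzer.broadcast_cec(opcode=CEC_STANDBY_OPCODE)    # emulated sink announces standby
    time.sleep(1)                                        # allow the state change to propagate
    assert dut.cec_active_state() is False               # Netflix app must now see CEC inactive
```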

Summary

At Netflix we deeply care about the quality of experience for our subscribers. This motivates us to invest in test automation that scales our approach to ensuring the best possible device integration from our partners. The ideas discussed here represent the tip of the iceberg, with many more challenges still to be identified and addressed. If you are passionate about device test automation and want to help us solve these kinds of problems, please check out our jobs site for exciting opportunities.

[1] Equal contribution from all authors.

[2] Diagrams courtesy of Sunny Kong.

