Welcome to the world of embedded devices and video playback! This guide is part of a larger series on video playback, diving into the unique challenges and solutions for embedded systems. Whether you’re working with set-top boxes (STBs), Smart TVs, or even a video-capable fridge, this series is designed to help you navigate the complexities of video playback on embedded devices.
In this chapter, we’ll explore the intricacies of video playback on embedded devices: the hardware pipelines, the bespoke drivers, and the limitations that arise from outdated software and hardware constraints. We’ll also discuss how to get started with a new embedded device project, including essential tips for experimentation, documentation, and collaboration with integration teams.
Find the full series here:
Embedded devices often run on Linux-based operating systems with limited CPU/GPU resources, which leads to the utilization of dedicated video decoding pipelines. This is quite different from desktop or mobile devices, where the same GPU handles both graphics and video.
In embedded devices, the output display consists of several physical output “planes.” One of these is the dedicated video output plane, which is connected to the hardware video decoder, scaler, and other components. This dedicated plane is necessary because the CPU in an embedded device, typically a low-power ARM or MIPS processor, cannot decode HD/4K video in software.
Embedded devices require specific drivers to control the hardware video pipeline. These drivers allow communication between the software and the hardware, enabling video decoding, demuxing, and display. For browser integrations, this is achieved through private APIs or GStreamer video plugins.
Layers of Complexity
Multiple layers exist between an application requesting video playback and the actual hardware performing video decoding. This complexity creates room for errors and limitations, which depend on the drivers and browser integrations. Understanding these limitations is crucial for successful video playback on embedded devices.
Embedded devices often have less frequent updates than desktop or mobile devices, leading to limited playback capabilities. Devices may only be able to handle content from around their development/deployment period and may struggle with newer video formats or containers.
While some devices receive frequent software updates, the hardware decoder ultimately limits the video playback capabilities. Since hardware decoders cannot be updated like software, they will never support features beyond their initial design specifications. This is a downside of using hardware decoding instead of software decoding with a more capable CPU.
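These limits are easy to probe at runtime. The sketch below is a minimal example, assuming a browser that exposes MediaSource.isTypeSupported; the codec strings are illustrative, and the probe function is injected so the same helper can be exercised off-device:

```javascript
// A few representative codec strings to probe (illustrative, not exhaustive).
const CANDIDATE_TYPES = [
  'video/mp4; codecs="avc1.640028"',      // H.264 High profile
  'video/mp4; codecs="hvc1.1.6.L120.90"', // HEVC Main
  'video/mp4; codecs="av01.0.05M.08"',    // AV1 Main
];

// isTypeSupported is injected so the helper can be tested off-device;
// in a browser you would pass MediaSource.isTypeSupported.
function probeCodecSupport(isTypeSupported, candidates) {
  return candidates.map((type) => ({ type, supported: !!isTypeSupported(type) }));
}

// In a browser:
// const report = probeCodecSupport((t) => MediaSource.isTypeSupported(t), CANDIDATE_TYPES);
```

Running this once per target device and keeping the report alongside your notes gives you a quick capability matrix for the hardware decoder.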
Understanding Display Layers and the “Hole Punch” Mechanism
In embedded devices, video playback often occurs on the backmost display layer. Typically, the layers are organized as follows:
- Graphics (front/top visible)
- Data/Teletext (for subtitles or teletext in Europe)
- Picture in Picture (PiP) video
- Main video (back/behind)
On desktop devices, video is decoded in software on the CPU and then rendered on the same GPU that handles the graphics. This means the OpenGL layer handling all the graphics is aware of the video textures and composites them correctly. You can overlay graphics, positioning them along the z-axis with the z-index property in DOM-based apps or in Lightning templates.
However, on embedded devices, video is rendered by a separate hardware pipeline and displayed on a different display layer. This means the browser, sitting on a higher graphical plane, obstructs the video by default. To overcome this, browsers typically use a “hole punch” mechanism: they create a large transparent “hole” in the graphics based on the <video> element’s z-index ordering, so the browser’s background no longer obstructs the video.
Not all browsers handle this process properly. If you hear audio but don’t see any video, the video is probably being obstructed, and you need to ensure your lowest DOM element is transparent. For Lightning-based applications this process is slightly different, as the graphics output goes through WebGL. Most browsers make the OpenGL context of the canvas element transparent, allowing the video to show through the WebGL layer.
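For WebGL-based apps, that transparency hinges on the context attributes passed to getContext. A minimal sketch, assuming a standard WebGL 1 context (the helper name is ours; `alpha: true` is the key attribute):

```javascript
// alpha: true makes the canvas backbuffer transparent, so the video
// plane behind it can show through wherever nothing opaque is drawn.
const TRANSPARENT_CONTEXT_ATTRIBUTES = { alpha: true, premultipliedAlpha: true };

function createTransparentGLContext(canvas) {
  // Returns null if WebGL is unavailable on the device.
  return canvas.getContext('webgl', TRANSPARENT_CONTEXT_ATTRIBUTES);
}

// In a browser:
// const gl = createTransparentGLContext(document.querySelector('canvas'));
// Clear with zero alpha so the "hole" stays transparent each frame:
// gl.clearColor(0, 0, 0, 0); gl.clear(gl.COLOR_BUFFER_BIT);
```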
However, be aware that rendering an element with a solid background will obstruct the video. To avoid this, ensure everything covering the video is hidden or transparent while it plays. Some projects use a hole-punch-equivalent shader to achieve this, but it’s cheaper to ensure the video player page or component has no background, and to overlay the video player controls only when necessary.
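In a DOM-based app, that usually amounts to keeping every ancestor of the video element background-free. A minimal sketch (the class names are hypothetical):

```css
/* Anything stacked above the video plane must be fully transparent. */
html, body, .player-page {
  background: transparent;
}

/* Controls overlay: opaque only where the controls themselves draw. */
.player-controls {
  position: absolute;
  z-index: 1; /* above the hole-punched video element */
}
```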
How to Start on an Embedded Device
When starting a new project on a new device, especially set-top boxes (STBs) and Smart TVs, it’s crucial to read the documentation. I know, I know, it’s not the most fun thing, but trust me, it’s worth it. Check if the docs describe anything about video formats, supported codecs, and playback limitations. This will save you time and headaches.
Next, it’s time for some hands-on experimentation. Create or find a simple HTML5 app that requests a video playback session, and log all the events and durations as they happen. We’ll dive deeper into the video element in the next chapter.
Keep an eye out for:
- Duration accuracy: Can the video player accurately determine the length of the clip?
- Playback position: Does the system track the position correctly, even when fast-forwarding or rewinding?
- Event firing: Are standard media events like canplay, loadedmetadata, and ended firing as expected?
- Video formats and bitrates: Test different formats and bitrates to find the device’s limits.
- Endurance test: Can the device continuously play your content for 24 hours without issues?
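The checks above are easiest to run with a small logging harness attached to the video element. A minimal sketch (the event list is a subset of the full set of media events, attachVideoLogging is our own name, and the log function is injected so output can be redirected to an on-screen overlay on devices without a usable console):

```javascript
// Media events worth watching during device bring-up (a subset).
const MEDIA_EVENTS = [
  'loadedmetadata', 'durationchange', 'canplay', 'playing',
  'timeupdate', 'seeking', 'seeked', 'waiting', 'stalled', 'ended', 'error',
];

function attachVideoLogging(video, log, events = MEDIA_EVENTS) {
  events.forEach((name) => {
    video.addEventListener(name, () => {
      // currentTime and duration reveal tracking and accuracy bugs early.
      log(`${name} @ ${video.currentTime}s (duration: ${video.duration}s)`);
    });
  });
}

// In a browser:
// attachVideoLogging(document.querySelector('video'), console.log);
```

Watching durationchange and timeupdate in particular will quickly show whether the device reports clip length and playback position accurately while seeking.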
Document your findings and share them with the integration team, if there’s one. Keep track of any limitations or issues you encounter, as they might be useful later in your development process.
So, grab a cup of coffee, dig into the documentation, and let the fun begin!