Adam Taylor’s MicroZed Chronicles, Part 223: Video Mixing with the Zynq SoC or Zynq UltraScale+ MPSoC


 

By Adam Taylor

 

 

So far, all of my image-processing examples have used only one sensor and produced one video stream within the Zynq SoC or Zynq UltraScale+ MPSoC PL (programmable logic). However, if we want to work with multiple sensors or overlay information such as telemetry on a video frame, we need to do some video mixing.

 

Video mixing merges several video streams into one output stream. In our designs, we can use this merged video stream in two main ways:

 

  1. Tile together multiple video streams to be displayed on a larger display. For example, stitching multiple images into a 4K display.
  2. Blend together multiple image streams as vertical layers to create one final image. For example, adding an overlay or performing sensor fusion.

 

To do this within our Zynq SoC or Zynq UltraScale+ MPSoC system, we use the Video Mixer IP core from the Vivado IP catalog. This IP core mixes as many as eight image streams plus a final logo layer. The image streams are provided to the core via AXI Stream or AXI memory-mapped inputs, selectable on a stream-by-stream basis. The IP core's merged-video output uses an AXI Stream.

 

To demonstrate how we can use the video mixer, I am going to update the MiniZed FLIR Lepton project to use the 10-inch touch display and merge in a second video stream generated by a TPG (test pattern generator). Using the 10-inch touch display gives me a larger screen on which to demonstrate the concept. This screen has been sitting in my office for a while now, so it's time it became useful.

 

Upgrading to the 10-inch display is easy. All we need to do in the Vivado design is increase the pixel clock frequency (fabric clock 2) from 33.33MHz to 71.1MHz. Along with adjusting the clock frequency, we also need to set the ALI3 controller block to 71.1MHz.

 

Now include a video mixer within the MiniZed Vivado design. Enable layer one and select a streaming interface with global alpha control enabled. Enabling a layer's global alpha control allows the video mixer to blend that layer with the layer beneath it on a pixel-by-pixel basis. Pixels are then merged according to the defined alpha value rather than simply overriding the pixels on the layer beneath. The alpha value for each layer ranges from 0 (transparent) to 1 (opaque) and is defined within an 8-bit register.
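As a quick sketch of that mapping (my own illustration, not code from the project), here is how a fractional alpha could be converted into the 8-bit register value. The 255 full-scale constant is an assumption based on the register width quoted above; check the generated driver header for the exact maximum.

```c
#include <stdint.h>

/* Map a fractional alpha (0.0 = transparent, 1.0 = opaque) onto the
 * 8-bit layer alpha register described above. */
static uint8_t alpha_to_reg(float alpha)
{
    if (alpha < 0.0f) alpha = 0.0f;   /* clamp into the valid range */
    if (alpha > 1.0f) alpha = 1.0f;
    return (uint8_t)(alpha * 255.0f + 0.5f);  /* round to nearest */
}
```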

 

 

[Image1.jpg: Insertion of the Video Mixer and Video Test Pattern Generator]

 

 

 

[Image2.jpg: Enabling layer 1 for AXI Streaming and Global Alpha Blending]

 

 

The FLIR camera provides the first image stream. However, we need a second image stream for this example, so we'll instantiate a video TPG core and connect its output to the video mixer's layer 1 input. Be sure to clock both the video mixer and the test pattern generator with the high-speed video clock used in the image-processing chain. Build the design and export it to SDK.

 

In SDK, we configure the video mixer using the xv_mix.h API, which provides the functions needed to control the mixer.

 

The principle of the mixer is simple. There is a master layer, and you declare its vertical and horizontal size using the API. For this example with the 10-inch display, we set the size to 1280 pixels by 800 lines. We can then fill this image space using the layers, either tiling or overlapping them as desired for our application.
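As a minimal bring-up sketch, the code below initializes the mixer and declares the 1280x800 master layer. Note that I'm using the higher-level XVMix_* wrapper functions from xv_mix_l2.h (which sits on top of the raw xv_mix.h driver in the Xilinx driver library), and the device-ID macro is a placeholder; check both against the drivers generated for your design.

```c
#include "xparameters.h"
#include "xstatus.h"
#include "xv_mix_l2.h"     /* layer-2 wrapper over xv_mix.h */

static XV_Mix_l2 MixInst;  /* mixer driver instance, reused below */

int mixer_init(void)
{
    XVidC_VideoStream Stream = {0};

    /* XPAR_V_MIX_0_DEVICE_ID is a placeholder name from xparameters.h */
    if (XVMix_Initialize(&MixInst, XPAR_V_MIX_0_DEVICE_ID) != XST_SUCCESS)
        return XST_FAILURE;

    /* Master layer: the full 10-inch display, 1280 pixels by 800 lines */
    Stream.Timing.HActive = 1280;
    Stream.Timing.VActive = 800;
    Stream.ColorFormatId  = XVIDC_CSF_RGB;  /* assuming an RGB pipeline */
    Stream.ColorDepth     = XVIDC_BPC_8;
    Stream.PixPerClk      = XVIDC_PPC_1;
    XVMix_SetVidStream(&MixInst, &Stream);

    XVMix_Start(&MixInst);
    return XST_SUCCESS;
}
```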

 

Each layer has an alpha register to control blending, along with X and Y origin registers and height and width registers. These registers tell the mixer how to compose the final image. The position of a layer that does not fill the entire display area is referenced from the top-left corner of the display. Here's an illustration:

 

 

 

[Image3.jpg: Video mixing layers, concept. Layer 7 is a reduced-size image in this example.]
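Putting those registers to work, the sketch below (same assumed XVMix_* wrapper and MixInst instance as above) positions a 200x200 window for layer 1 at the display's top-left corner and makes it fully opaque, ready for the checkerboard demo that follows. XVMIX_ALPHA_MAX is the driver's full-scale alpha constant; verify it against your driver version.

```c
void layer1_setup(void)
{
    XVidC_VideoWindow Win;

    Win.StartX = 0;     /* X origin, referenced from top-left of display */
    Win.StartY = 0;     /* Y origin */
    Win.Width  = 200;
    Win.Height = 200;

    /* The stride argument applies to memory-mapped layers; a streaming
     * layer such as this one ignores it */
    XVMix_SetLayerWindow(&MixInst, XVMIX_LAYER_1, &Win, 0);

    /* Full-scale alpha = opaque: layer 1 overrides the pixels beneath */
    XVMix_SetLayerAlpha(&MixInst, XVMIX_LAYER_1, XVMIX_ALPHA_MAX);
    XVMix_LayerEnable(&MixInst, XVMIX_LAYER_1);
}
```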

 

 

To demonstrate layering in action, I used the test pattern generator to create a 200x200-pixel checkerboard pattern, with the video mixer's TPG layer alpha set to opaque so that it overrides the FLIR image. Here's what that looks like:

 

 

 

[Image4.jpg: FLIR and Test Pattern Generator layers merged; the test pattern has the higher alpha]
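For completeness, here is a sketch of how the TPG can be configured to produce that 200x200 checkerboard, using the xv_tpg driver's generated accessor functions; again, the device-ID macro is a placeholder for your design's xparameters.h entry.

```c
#include "xparameters.h"
#include "xv_tpg.h"

static XV_tpg TpgInst;

void tpg_checkerboard(void)
{
    /* XPAR_V_TPG_0_DEVICE_ID is a placeholder from xparameters.h */
    XV_tpg_Initialize(&TpgInst, XPAR_V_TPG_0_DEVICE_ID);

    /* Pattern size matches the 200x200 mixer window for layer 1 */
    XV_tpg_Set_height(&TpgInst, 200);
    XV_tpg_Set_width(&TpgInst, 200);
    XV_tpg_Set_colorFormat(&TpgInst, XVIDC_CSF_RGB);
    XV_tpg_Set_bckgndId(&TpgInst, XTPG_BKGND_CHECKER_BOARD);

    /* Free-running pattern generation */
    XV_tpg_EnableAutoRestart(&TpgInst);
    XV_tpg_Start(&TpgInst);
}
```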

 

 

 

Then I set the alpha to a lower value so that the two layers merge:

 

 

 

[Image5.jpg: FLIR and Test Pattern Generator layers merged; test pattern alpha lowered]
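That change is a single call against the same assumed wrapper: dropping layer 1's alpha to half scale makes the mixer blend the two streams instead of letting the checkerboard override the FLIR image.

```c
/* Half-scale alpha: the checkerboard and the FLIR image blend 50/50 */
XVMix_SetLayerAlpha(&MixInst, XVMIX_LAYER_1, XVMIX_ALPHA_MAX / 2);
```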

 

 

 

We can also use the video mixer to tile images as shown below. I added three more TPGs to create this image.

 

 

 

[Image6.jpg: Four tiled video streams using the mixer]
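Tiling needs no blending at all; each layer simply gets its own non-overlapping window. Here's a sketch of the four-quadrant layout above, under the same driver assumptions, with layers 1 through 4 assumed to carry the FLIR stream plus the three TPGs.

```c
/* Place one layer in one quadrant of the 1280x800 master layer */
static void set_tile(XVMix_LayerId Layer, u32 x, u32 y)
{
    XVidC_VideoWindow Win;

    Win.StartX = x;
    Win.StartY = y;
    Win.Width  = 640;   /* half of 1280 */
    Win.Height = 400;   /* half of 800  */

    XVMix_SetLayerWindow(&MixInst, Layer, &Win, 0);
    XVMix_SetLayerAlpha(&MixInst, Layer, XVMIX_ALPHA_MAX); /* opaque: no blending when tiling */
    XVMix_LayerEnable(&MixInst, Layer);
}

void tile_four_streams(void)
{
    set_tile(XVMIX_LAYER_1, 0,   0);    /* top-left     */
    set_tile(XVMIX_LAYER_2, 640, 0);    /* top-right    */
    set_tile(XVMIX_LAYER_3, 0,   400);  /* bottom-left  */
    set_tile(XVMIX_LAYER_4, 640, 400);  /* bottom-right */
}
```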

 

 

The video mixer is a good tool to have in our toolbox when creating image-processing or display solutions. It is very useful if we want to merge the outputs of multiple cameras working in different parts of the electromagnetic spectrum. We’ll look at this sort of thing in future blogs.

 

 

You can find the example source code on GitHub.

 

Adam Taylor’s Web site is http://ift.tt/1AANc2l.

 

If you want E-Book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

First Year E-Book here
First Year Hardback here

 

 

  

[MicroZed Chronicles hardcopy.jpg]

 

 

Second Year E-Book here
Second Year Hardback here

 

 

[MicroZed Chronicles Second Year.jpg]

 
