Delivering FPGA Vision to the Masses

December 16, 2015 | By News | Filed in: News.

Source: https://forums.xilinx.com/t5/Xcell-Daily-Blog/Delivering-FPGA-Vision-to-the-Masses/ba-p/671481

By Kiran Nagaraj, Christophe Caltagirone, Dinesh Nair, National Instruments Corp

The parallel processing capability of FPGAs, such as those in the Xilinx All Programmable FPGA lineup, is a natural fit for implementing many image processing algorithms. FPGAs can be used for performing both data-intensive processing and high-speed sensor measurements. The devices also have incredibly low latency, which is critical for vision applications because latency determines how quickly a decision can be made based on the image data. FPGAs can also help avoid jitter and thus serve as highly deterministic processing units.

Building a heterogeneous system that includes an FPGA, however, introduces serious programming challenges for system designers. As time-to-market pressures mount, vision system designers need the ability to prototype a solution with complex features quickly. Programming on heterogeneous systems requires a tool that can help the domain expert design intellectual property (IP) functions on multiple platforms and test the vision algorithm before compiling and running the algorithm on the target hardware. The tool should allow easy access to throughput and resource usage information throughout the prototyping process.

NI refers to this as algorithm engineering: the process by which you, the domain expert, can focus on solving the problem at hand without being preoccupied with the underlying hardware technology. NI’s Vision Development Module (VDM) with Vision Assistant arms you with that capability.

VDM with Vision Assistant helps in fast prototyping and code generation, FPGA resources estimation, automatic code parallelization, and synchronization of parallel streams (for such tasks as latency balancing). VDM includes more than 50 FPGA image processing functions as well as functions to transfer images efficiently between the processor and the FPGA. You can use Vision Assistant within VDM to rapidly prototype and develop FPGA vision applications.

Vision Assistant is a configuration-based prototyping tool that empowers you to iterate on image processing algorithms and see how changes in parameters affect the image. With Vision Assistant, you can visualize the output (processed image) after every vision block in an image pipeline (Figure 1). You can use the tool to test different algorithms and parameters on different sets of images without having to compile your IP, thereby greatly reducing the time required to design your vision algorithm.

Figure 1. NI Vision Assistant

NI has customized the tool to handle FPGA programmers’ requirements. The key concerns when building any algorithm on an FPGA are resource consumption on the FPGA fabric, the latency of the pipeline and the maximum frequency the algorithm can achieve on a specific fabric. Vision Assistant helps by providing an estimate of the resources consumed for each block in the image pipeline. You can use the tool to test the results of algorithms in the prototyping environment and the deployed code to ensure that your implementation yields the same results. The vision FPGA IP of the Vision Development Module lets developers use massively parallel processing and the Xilinx Vivado High-Level Synthesis (HLS) tool to achieve fully pipelined, low-latency, architecture-optimized vision IP on the FPGA. Vision FPGA IP from NI currently targets three Xilinx FPGA families—Kintex-7, Virtex-5 and Spartan-6—as well as the Xilinx Zynq-7000 All Programmable SoCs.

The vision FPGA IP toolset provides preprocessing functions such as edge detection filters, convolution filters, lowpass filters, gray morphology, binary morphology and threshold. It also includes vision IP functions that perform arithmetic and logical operations, as well as functions that output results such as the centroid. Another function, the Simple Edge Tool, finds edges along a line and is useful for caliper applications. The Quantify function accepts a masked image as well as the image stream to be processed and returns a report that has information about the area, mean and standard deviation of the regions defined by the masked image. Linear Average computes the average pixel intensity (mean line profile) on all or part of the image.

The latest addition to NI’s vision FPGA IP list is the Particle Analysis Report. You can perform particle analysis, or blob analysis, to detect connected regions or groupings of pixels in an image and then make selected measurements of those regions. With this information, you can detect flaws on silicon wafers, detect soldering defects on electronic boards or locate objects in motion control applications.
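The core of particle analysis is connected-component labeling. The following C sketch shows one simple way to do it in software: label 4-connected foreground regions with a stack-based flood fill and report each blob's area. It is illustrative only; NI's Particle Analysis Report computes many more measurements per particle, and hardware implementations typically use streaming two-pass labeling rather than flood fill.

```c
#include <string.h>

#define MAX_W 64
#define MAX_H 64

/* img[y][x] != 0 marks a foreground pixel. Writes each blob's pixel
 * count into areas[] (in scan order of discovery) and returns the
 * number of blobs found, up to max_blobs. */
int label_blobs(const unsigned char img[MAX_H][MAX_W], int width, int height,
                unsigned long areas[], int max_blobs)
{
    static unsigned char seen[MAX_H][MAX_W];
    static int stack[MAX_W * MAX_H][2];
    memset(seen, 0, sizeof seen);
    int nblobs = 0;

    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            if (!img[y][x] || seen[y][x]) continue;
            if (nblobs == max_blobs) return nblobs;
            unsigned long area = 0;
            int top = 0;
            stack[top][0] = x; stack[top][1] = y; top++;
            seen[y][x] = 1;
            while (top > 0) {               /* flood-fill this blob */
                top--;
                int cx = stack[top][0], cy = stack[top][1];
                area++;
                const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
                for (int d = 0; d < 4; d++) {
                    int nx = cx + dx[d], ny = cy + dy[d];
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                        continue;
                    if (!img[ny][nx] || seen[ny][nx]) continue;
                    seen[ny][nx] = 1;
                    stack[top][0] = nx; stack[top][1] = ny; top++;
                }
            }
            areas[nblobs++] = area;
        }
    return nblobs;
}
```

Thresholding area (or other per-blob measurements) against expected values is how such a report translates into a pass/fail decision for wafer or solder-joint inspection.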

Nearly 70 percent of NI’s vision FPGA IP functions were developed using the IP Builder, a utility in LabVIEW FPGA that lets you program graphically in LabVIEW and then output RTL code through Vivado HLS. The major advantage of this approach is that users familiar with graphical coding can develop the application along with a directive file that states their frequency and latency requirements. Using LabVIEW IP Builder with Vivado HLS generates the appropriate VHDL code.

Vivado HLS is a good fit for vision development because it helps abstract algorithmic descriptions and data-type specifications (integer, fixed-point) from the generated C code of the IP Builder. It also generates the necessary simulation models for early testing of functionality. The generated architecture-aware VHDL code yields high-quality, highly repeatable results.
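As a rough illustration of what a fixed-point data-type specification means at the hardware level, the C sketch below implements Q8.8 arithmetic (8 integer bits, 8 fractional bits) by hand. This is an assumption-laden teaching example: in practice Vivado HLS provides `ap_fixed` types that handle the scaling and renormalization bookkeeping automatically.

```c
#include <stdint.h>

typedef int32_t q8_8;   /* a Q8.8 value carried in a 32-bit word */
#define QBITS 8          /* number of fractional bits */

static inline q8_8 q_from_double(double x)
{
    return (q8_8)(x * (1 << QBITS));
}

static inline double q_to_double(q8_8 x)
{
    return (double)x / (1 << QBITS);
}

/* The full product of two Q8.8 numbers has 2*QBITS fractional bits,
 * so shift right by QBITS to renormalize back to Q8.8. */
static inline q8_8 q_mul(q8_8 a, q8_8 b)
{
    return (q8_8)(((int64_t)a * b) >> QBITS);
}
```

Choosing a narrow fixed-point format like this, instead of floating point, is what lets the synthesized pipeline use small multipliers and hit high clock frequencies on the fabric.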

Note: This blog was adapted from an article by the same name that appeared in the recently published issue of Xcell Software Journal. To read the full article online, click here. To download a PDF of the entire issue, click here.
