Carnegie Robotics’ FPGA-based GigE 3D cameras help robots sweep mines from a battlefield, tend corn, and scrub floors

August 30, 2017 | By News | Filed in: News.


Carnegie Robotics currently uses a Spartan-6 FPGA in its GigE 3D imaging sensors to fuse the video feeds from the two cameras in each stereo pair, generate 2.1 billion correspondence matches/sec from the left and right video streams, and then produce 15M points/sec of 3D point-cloud data from those matches. That point-cloud data helps the company’s robots make safe movement decisions and avoid obstacles while operating in unknown, unstructured environments. The company’s 3D sensors are used in unmanned vehicles and robots that generally weigh between 100 and 1000 pounds and operate in such unstructured environments in applications as diverse as agriculture, building maintenance, mining, and battlefield mine sweeping. Carnegie Robotics’ CTO Chris Osterwood describes all of this in a new 3-minute “Powered by Xilinx” video, which appears below.
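To make the correspondence-to-point-cloud step concrete, here is a minimal sketch of the triangulation math that turns a dense disparity map into 3D points. It is purely illustrative: the focal length, baseline, principal point, and image size below are placeholder assumptions, not MultiSense specifications, and the FPGA of course performs this in hardware rather than NumPy.

```python
import numpy as np

# Illustrative sketch: reproject a dense disparity map (output of stereo
# correspondence matching) into a 3D point cloud. All camera parameters
# here are assumed values, not vendor specs.
FOCAL_PX = 1000.0       # focal length in pixels (assumed)
BASELINE_M = 0.07       # stereo baseline in meters (assumed)
CX, CY = 512.0, 272.0   # principal point (assumed)

def disparity_to_point_cloud(disparity):
    """Convert a disparity image (pixels) to an Nx3 array of XYZ points (meters)."""
    rows, cols = disparity.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    valid = disparity > 0                 # zero disparity = no match found
    d = disparity[valid]
    z = FOCAL_PX * BASELINE_M / d         # depth from stereo triangulation
    x = (u[valid] - CX) * z / FOCAL_PX
    y = (v[valid] - CY) * z / FOCAL_PX
    return np.column_stack((x, y, z))

# Synthetic disparity map standing in for the matcher's output.
cloud = disparity_to_point_cloud(np.random.uniform(1, 64, (544, 1024)))
print(cloud.shape)  # (n_points, 3)
```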


The company is a spinout of Carnegie Mellon University’s National Robotics Engineering Center (NREC), one of the world’s premier research and development organizations for advanced field robotics, machine vision and autonomy. It offers a variety of 3D stereo cameras including:



  • The MultiSense S7, a rugged, high-resolution, high-data-rate, high-accuracy GigE 3D imaging sensor.
  • The MultiSense S21, a long-range, low-latency GigE imaging sensor based on the S7 stereo-imaging sensor but with a wide (21cm) separation between the stereo camera pair for increased range (see the sketch after this list).
  • The MultiSense SL, a tri-modal GigE imaging sensor that fuses high-resolution, high-accuracy 3D stereo vision from the company’s MultiSense S7 stereo-imaging sensor with laser ranging (0.4 to 10m).
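The range benefit of the S21’s wider camera separation follows from basic stereo geometry: depth uncertainty grows with the square of range and shrinks with baseline. The back-of-envelope sketch below illustrates that relationship only; the focal length, sub-pixel matching error, and the 7cm comparison baseline are assumptions for illustration, not published specifications.

```python
# Back-of-envelope sketch (not vendor specs): stereo depth uncertainty is
# roughly  dZ ~ Z**2 * d_disp / (f * B),  so a 3x wider baseline gives
# roughly 3x less range error at a given distance.
FOCAL_PX = 1000.0      # assumed focal length in pixels
DISP_ERR_PX = 0.25     # assumed sub-pixel matching error

def depth_error(z_m, baseline_m):
    """Approximate 1-sigma depth error at range z_m for a given baseline."""
    return z_m**2 * DISP_ERR_PX / (FOCAL_PX * baseline_m)

for z in (5.0, 10.0, 20.0):
    print(f"{z:4.0f} m: baseline 0.07 m -> ±{depth_error(z, 0.07):.2f} m, "
          f"0.21 m -> ±{depth_error(z, 0.21):.2f} m")
```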







Carnegie Robotics MultiSense SL Tri-Modal 3D Imaging Sensor





All of these Carnegie Robotics cameras consume less than 10W, thanks in part to the integrated Spartan-6 FPGA, which uses one tenth of the power a CPU would need to generate 3D data from the 2.1 billion correspondence matches/sec. The MultiSense SL served as the main perceptual “head” sensor for the six ATLAS robots that participated in the 2013 DARPA Robotics Challenge Trials. Five of those robots placed among the top eight finishers.


The video below also briefly discusses the company’s plans to migrate to a Zynq SoC, which will allow Carnegie Robotics’ sensors to perform more in-camera computation and will further reduce the overall robotic system’s size, weight, power consumption, and image latency. That’s a lot of engineering dimensions all being driven in the right direction by the adoption of the more integrated Zynq All Programmable SoC.


Earlier this year, Carnegie Robotics and Swift Navigation announced that they were teaming up to develop a line of multi-sensor navigation products for autonomous vehicles, outdoor robotics, and machine control. Swift develops precision, centimeter-accurate GNSS (global navigation satellite system) products. The joint announcement included a photo of Swift Navigation’s Piksi Multi—a multi-band, multi-constellation RTK GNSS receiver clearly based on a Zynq Z-7020 SoC.








Swift Navigation Piksi Multi multi-band, multi-constellation RTK GNSS receiver, based on a Zynq SoC.




There are obvious sensor-fusion synergies between the Zynq SoC-based product-design trajectory that Chris Osterwood describes in the “Powered by Xilinx” video below and Swift Navigation’s existing, Zynq-based Piksi Multi GNSS receiver.
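As a rough illustration of what such sensor fusion could look like, the sketch below blends absolute RTK GNSS fixes with relative motion estimates from stereo visual odometry using a simple one-dimensional Kalman-style filter. This is not Carnegie Robotics’ or Swift Navigation’s algorithm; the noise values and measurements are invented for the example.

```python
# Minimal 1-D fusion sketch: combine absolute RTK GNSS position fixes with
# relative motion from stereo visual odometry. Purely illustrative; all
# variances and measurements below are assumptions.
GNSS_VAR = 0.02 ** 2   # assumed RTK position variance (m^2)
VO_VAR = 0.05 ** 2     # assumed visual-odometry step variance (m^2)

def fuse(x, p, vo_delta, gnss_pos):
    """One predict/update step along a single axis."""
    # Predict: propagate the estimate with the camera's relative motion.
    x += vo_delta
    p += VO_VAR
    # Update: correct with the absolute GNSS measurement.
    k = p / (p + GNSS_VAR)          # Kalman gain
    x += k * (gnss_pos - x)
    p *= (1.0 - k)
    return x, p

x, p = 0.0, 1.0
for vo_step, gnss_fix in [(0.10, 0.11), (0.12, 0.22), (0.09, 0.33)]:
    x, p = fuse(x, p, vo_step, gnss_fix)
    print(f"fused position: {x:.3f} m (variance {p:.4f})")
```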


Here’s the Powered by Xilinx video:







via Xcell Daily Blog articles


