Nvidia Brings Computer Vision and Deep Learning to the Embedded World

November 11, 2015 | By News | Filed in: News.

Source: http://hackaday.com/2015/11/10/nvidia-brings-computer-vision-and-deep-learning-to-the-embedded-world/

Today, Nvidia announced their latest platform for advanced technology in autonomous machines. They’re calling it the Jetson TX1, and it puts modern GPU hardware in a small and power efficient module. Why would anyone want GPUs in an embedded format? It’s not about frames per second; instead, Nvidia is focusing on high performance computing tasks – specifically computer vision and classification – in a platform that uses under 10 Watts.

For the last several years, tiny credit card sized ARM computers have flooded the market. While these Raspberry Pis, BeagleBones, and router-based dev boards are great for running Linux, they’re not exactly powerful. x86 boards also exist, but again, these are lowly Atoms and other embedded Intel processors. These aren’t the boards you want for computationally heavy tasks. There simply aren’t many options for high-performance computing on low-power hardware.

The Jetson TX1 and Developer Kit. Image Credit: Nvidia

Tiny ARM computers the size of a credit card have served us all well for general computing tasks, and this leads to the obvious question: what is the purpose of putting so much horsepower on such a small board? The answer, at least according to Nvidia, is drones, autonomous vehicles, and image classification.

Image classification is one of the most computationally intense tasks out there, but for autonomous robots, there’s no other way to tell the difference between a cyclist and a mailbox. To do this on an embedded platform, you either need to bring a powerful general purpose CPU that sucks down 60 or so Watts, or build a smaller, more efficient GPU-based solution that sips a meager 10 Watts.


The Jetson TX1 uses a 1 TFLOP/s 256-core Maxwell GPU, a 64-bit quad-core ARM Cortex-A57 CPU, 4 GB of LPDDR4 RAM, and 16 GB of eMMC flash for storage, all in a module the size of a credit card. The Jetson TX1 runs Ubuntu 14.04 and is compatible with the computer vision and deep learning tools available for any other Nvidia platform. This includes Nvidia VisionWorks, OpenCV, OpenVX, OpenGL, machine learning tools, CUDA programming, and everything you would expect from a standard desktop Linux box.

To put the Jetson TX1 to the test, Nvidia compared its deep learning performance to the latest generation Intel Core i7 processor. This deep learning task uses a neural net trained to differentiate between objects: pedestrians, cars, motorcycles, and cyclists if the TX1 will be controlling an autonomous car, or packages and driveways if it will be used in the first generation of delivery drones. In this image classification task, the TX1 outperformed the i7, while drawing far less power.
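The last stage of a classifier like that, turning the network's raw per-class scores into a single label, can be sketched in a few lines. The class names and scores below are made up for illustration, not Nvidia's actual model or outputs; on the TX1 the real pipeline would run a trained convolutional network on the GPU to produce these scores.

```python
import math

# Hypothetical class list for an autonomous-driving classifier (illustrative only).
CLASSES = ["pedestrian", "car", "motorcycle", "cyclist"]

def softmax(logits):
    """Convert raw network scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return the most probable class label and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[best], probs[best]

# Made-up scores, as if produced by a forward pass of the network:
label, confidence = classify([1.2, 3.4, 0.3, 0.9])
```

With those example scores the classifier would pick "car" with roughly 80% confidence; the heavy lifting that the GPU accelerates is producing the scores in the first place, not this final step.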


It’s important to note the power efficiency Nvidia is going for here. To put this much computational performance on a robot with a modern CPU, you would have to use a part that draws 65 Watts. This requires a large power supply, a big heat sink, and simply more mass than what Nvidia is offering.


So far, the core of the Jetson TX1 has already found its way into commercial products. The only slightly creepy Jibo social robot uses the TX1 for face tracking and image classification. The newly announced DJI Manifold also uses an Nvidia Jetson.

The Jetson TX1 Developer Kit will be available from Newegg, Amazon, Microcenter, and the Nvidia store on November 12th for $599 retail, or $299 with an educational discount. The TX1 module itself – with a 400 pin board-to-board connector I can’t source at the moment – will be available for $299 in 1000 unit quantities through worldwide distributors. There is no pricing yet for single modules, although I would expect a group buy to pop up somewhere on the Internet shortly.

While a $600 retail price for a development board is a big ask for the kind of products we usually see hitting the Hackaday Tip Line, comparing the Jetson TX1 to a $30 Raspberry Pi does it a disservice. By no means is this a general-purpose device, and apart from a few outliers who are still mining Bitcoins on their GPUs (and on Nvidia cards, at that), doing anything besides playing video games with a graphics card is specialty work.

As far as developments in embedded computing go, this is a big one. Over the last 10 years or so, the coolest part of desktops and servers has been the raw power a GPU can throw at a problem. Putting a module this capable in a package that draws only 10 Watts is a huge advancement, and although it’s not really meant for the garage-based tinkerer, the possibilities for computer vision on embedded platforms are huge.

