Nvidia explains its supercomputer for cars — CES 2016
2016-01-05
Welcome. We're going to talk about self-driving cars today. Last year I said that in the future, when you build cars, it's going to be a lot more like building a computer. Computer vision technology, as it had evolved up to that point, was going to be difficult to advance to the point where we could have cars that drive themselves.
All of this computational capability is going to have to be done in real time, and all of this is what we've been working on for the last year. Ladies and gentlemen, the Nvidia DRIVE PX 2. This is the world's first in-car AI supercomputer, and it's designed to make it possible for us to realize the vision of self-driving cars. The computational capability of the PX 2 is equivalent to 150 MacBook Pros altogether, yet this entire supercomputer fits in your trunk very nicely: it's the size of a lunchbox.
Our vision is to make it possible for us to finally realize the self-driving car. Humans are the least reliable part of the car; we account for almost all of the fatalities caused around the world, over a million deaths each year. By replacing the human altogether, self-driving car technology is surely going to make a great contribution to society. The biggest problem is perception: first of all, what is happening around me, what are the things I should be concerned about, and how should the car deal with them?
The folks at Nvidia Research worked together to make it possible to take advantage of the CUDA GPU that we invented, and we were able to accelerate the training by 30 to 40 times. Deep learning is able to achieve superhuman perception capability, and it is now possible for us to train these incredibly complex networks to recognize objects of all kinds. Just to show it to you, let's now see what it can do. It took a month to train the original network on the ImageNet dataset; without GPU acceleration, that month would have been a couple of years.
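To make the GPU-acceleration point concrete, here is a minimal sketch of an ImageNet-style training loop running on a CUDA GPU. It assumes a PyTorch-style setup; the ResNet-18 model, the FakeData placeholder dataset, and the hyperparameters are illustrative assumptions, not the network or toolchain described on stage.

```python
import torch
import torch.nn as nn
import torchvision

# Illustrative sketch: train an ImageNet-style classifier on a CUDA GPU.
# The model choice, placeholder dataset, and hyperparameters are
# assumptions for demonstration, not the pipeline from the keynote.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torchvision.models.resnet18(num_classes=1000).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

dataset = torchvision.datasets.FakeData(
    size=1024, image_size=(3, 224, 224), num_classes=1000,
    transform=torchvision.transforms.ToTensor())
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)

model.train()
for images, labels in loader:
    # Moving each batch onto the GPU is what turns months of CPU-only
    # training into days: convolutions and gradient updates run on CUDA.
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```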
The Cityscapes dataset is an upcoming dataset that will be publicly available soon. There are more training images, and they're very, very finely segmented and detailed; it's a very modern dataset. Not one feature detector was coded by hand. It's basically like holding up millions and millions of flash cards to the computer, telling it to learn, and nudging it in the right direction when it gets things wrong. This gives you the next level of perception: what can I drive on, and what is this thing at this pixel that I'm looking at? It's a much more robust way to handle perception in a car. But there's so much more to do, so much more to do.
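The "flash cards with a nudge" description is, in essence, supervised training with per-pixel labels: show an image, compare the prediction at every pixel to the human-labeled answer, and adjust the weights when it's wrong. Below is a minimal sketch of that idea, assuming PyTorch; the tiny convolutional net and random tensors are stand-ins for a real segmentation model and labeled Cityscapes street scenes.

```python
import torch
import torch.nn as nn

# Illustrative sketch of per-pixel "what is at this pixel?" training.
# The tiny network and random tensors are placeholders, not the actual
# Cityscapes model; real training would load labeled street scenes.
NUM_CLASSES = 19  # Cityscapes evaluates 19 semantic classes

model = nn.Sequential(               # stand-in for a real segmentation net
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, NUM_CLASSES, 1))   # per-pixel class scores
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()      # the "nudge" when a pixel is wrong

for step in range(100):
    images = torch.rand(4, 3, 128, 256)                    # fake street scenes
    labels = torch.randint(0, NUM_CLASSES, (4, 128, 256))  # fake per-pixel "flash cards"
    logits = model(images)            # shape: (batch, classes, H, W)
    loss = loss_fn(logits, labels)    # compares every pixel to its label
    optimizer.zero_grad()
    loss.backward()                   # no hand-coded feature detectors:
    optimizer.step()                  # gradients adjust the features themselves
```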
Now, we want to be able to recognize objects, and we want to be able to recognize circumstances. Every bus is not the same: there are passenger buses and public transit buses where it's okay to just drive by, and school buses where you should stop. Every truck is not just a truck; some trucks are ambulances, and you should pull aside. In all of these different scenarios, you not only have to recognize what things are but understand the circumstances, the special circumstances. These are all trainable things. So whereas today we're focused on training detection and perception, very shortly we're going to move towards training for recognizing circumstances.
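One way to read "these are all trainable things" is that the special cases simply become additional, fine-grained labels the network learns to distinguish, after which the driving response per label can be looked up. The sketch below illustrates that second half under that assumption; the class names and policy table are invented for illustration and are not from the keynote.

```python
# Rough sketch: once the detector distinguishes fine-grained classes
# (school bus vs. transit bus, ambulance vs. ordinary truck), the
# circumstance handling can be a simple policy lookup. Class names and
# the policy table below are made up for illustration only.
from enum import Enum

class Action(Enum):
    PROCEED = "proceed"
    STOP = "stop"              # e.g. a school bus with its stop sign out
    PULL_ASIDE = "pull_aside"  # e.g. an ambulance approaching

POLICY = {
    "transit_bus": Action.PROCEED,
    "school_bus": Action.STOP,
    "truck": Action.PROCEED,
    "ambulance": Action.PULL_ASIDE,
}

def plan_for_detection(detected_class: str) -> Action:
    # Unknown or unrecognized classes fall back to the most cautious behavior.
    return POLICY.get(detected_class, Action.STOP)

print(plan_for_detection("school_bus"))  # Action.STOP
```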