Why the three laws of robotics won't save us from Google's AI - Gary explains
2016-09-29
Hello, my name is Gary Sims from Android Authority. We are on the verge of a new era for humanity, and that is the rise of artificial intelligence. We already see some of it today in things like Google Now and driverless cars, and over the next few decades we're going to see lots and lots of changes. Now, Isaac Asimov wrote the Three Laws of Robotics, and some people think they're actually a good set of laws for governing future AI. I'm here to tell you that they're not. So let's remind ourselves of the Three Laws of Robotics, but before we do that, let me tell you about weak AI and strong AI.
Now, if you've watched my previous video, you'll know there are two types of AI. Weak AI is basically a sophisticated computer program that simulates intelligence. You can ask it questions and it gives you answers; you can ask about the current state of affairs and it will know things; you can even ask its favorite color and it can tell you. But actually it's just a computer program. It might be neural networks, it might be some kind of functions, it might be a combination of both, but it's still just a computer program. It doesn't actually have self-awareness, it can't do abstract thinking, and it certainly hasn't got free will. It's just a computer program with a very sophisticated user interface.
Now, the other type of AI is what they call strong AI, and that basically means AI that is self-aware. It isn't just a brain, it has a mind. It can do free thinking, it can do abstract thinking, and it's got free will. And basically that's where the difference comes down to: free will, the ability to choose. I like to put it this way: if we have a weak AI driverless car, you can call it up and say "come and get me from the shopping mall," and it will just obey its programming and come and get you.
If you call up a strong AI and say "come and pick me up," it could say, "No, I'm watching a movie," or "I don't want to," or "I don't really have the energy right now, I'm doing something else." It has free will, it is self-aware, it is independent, and those of course are two very different things. And that's where the Three Laws of Robotics come into it: how can you have something that's going towards intelligence, going towards strong AI, and yet restrain its actions so it doesn't just do whatever it likes?
So, I'm reading here from a collection of Isaac Asimov's robot stories, and this is the first story, called "Runaround," where the three rules are explicitly stated. Let's see what it says:
Powell's radio voice was terse in Donovan's ear: "Now, look, let's start with the three fundamental Rules of Robotics, the three rules that are built most deeply into a robot's positronic brain. One: a robot must not injure a human being or, through inaction, allow a human being to come to harm. Two: a robot must obey the orders given to it by humans, except where such orders would conflict with the First Law. And three: a robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws." So there you have it, those three laws.
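Read literally, the laws describe a strict priority ordering: the First Law trumps the Second, and the Second trumps the Third. Just to make that concrete, here is a minimal sketch of that naive reading in Python; everything in it (the Action class and its flags) is hypothetical, purely for illustration, not anything from Asimov or from a real system.

```python
# A minimal sketch of the Three Laws read as a strict priority ordering.
# All names here are hypothetical, purely for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # would this action injure a human? (Law 1)
    obeys_order: bool      # does it carry out a human's order? (Law 2)
    endangers_self: bool   # does it risk the robot's existence? (Law 3)

def law_score(action: Action):
    # Lower tuples compare as better: Law 1 dominates Law 2,
    # which in turn dominates Law 3.
    return (action.harms_human, not action.obeys_order, action.endangers_self)

def choose(actions):
    return min(actions, key=law_score)

# With crisp booleans the higher law simply wins: fetching the selenium
# obeys an order (Law 2), so it beats the safe option despite the danger.
fetch = Action("fetch the selenium", False, True, True)
stay = Action("stay put", False, False, False)
print(choose([fetch, stay]).name)  # fetch the selenium
```

Asimov's positronic brains don't actually work on crisp booleans like this; the laws act more like graded potentials that can strengthen, weaken, and balance against each other, and that's exactly what goes wrong in the story.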
Now, before we go any further, we must point out that all of Asimov's stories are about how those three rules don't work. In fact, that first story, "Runaround," is about a robot stuck between Law 2 and Law 3: it wanted to obey a command given to it, and it wanted to protect its own existence. So it was running towards something to fulfill a command, but when it got there it found it was in danger, so it started to run back again, and in fact it then just started running around in a circle.

This is actually a common problem in computer science in general. If you have a thermostat for your room, for example, and you set it to a certain temperature, what you don't want is for the room to reach that temperature, drop by even 0.1 of a degree, have the heating come back on, climb back up to the temperature, switch off, drop back down again, so that your heating goes on, off, on, off, on, off all the time. So what you actually do is build in a range: when it gets up to this point, switch off; let it drop down by this much before it goes back on again. That's a simple fix in computer science.
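To make that concrete, here is a minimal sketch of that range idea, usually called hysteresis, in Python; the class and the numbers are purely illustrative, not from any real thermostat firmware.

```python
# A minimal sketch of thermostat hysteresis; names and numbers are
# illustrative only. Without the dead band the heater would chatter
# on and off around the set point.
class Thermostat:
    def __init__(self, set_point, dead_band=0.5):
        self.set_point = set_point    # target temperature in degrees
        self.dead_band = dead_band    # margin that prevents rapid cycling
        self.heating = False

    def update(self, current_temp):
        if current_temp <= self.set_point - self.dead_band:
            self.heating = True       # well below target: switch on
        elif current_temp >= self.set_point + self.dead_band:
            self.heating = False      # well above target: switch off
        # inside the band: keep the previous state, so no on/off chatter
        return self.heating

t = Thermostat(set_point=21.0)
for temp in (20.0, 20.8, 21.2, 21.6, 21.0):
    print(temp, t.update(temp))  # stays on until 21.5 is crossed, then off
```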
But what Asimov is showing here is that when you state things explicitly like this, they can easily be misinterpreted. So let's look at rule number one, for example: no harm should come to a human being. Well, that's easy when you're looking at the idea of a car that's about to crash into somebody, and a robot dashes across the road, grabs you, and drags you out of the way of the car. That's great. But what does "harm" mean? Physical harm? Emotional harm? Psychological harm? There are so many different ways to define the word. In the books, Asimov actually takes this to its logical conclusion: robots are allowed to lie, because if they told someone the truth they could actually be harming them emotionally. So obviously that doesn't work.
Or take things like smoking or fast food. Smoking is considered across the world to be dangerous, harmful to your health, and yet there are millions of people who choose to smoke. If this rule were implemented literally, robots would have nothing to do other than go around pulling cigarettes out of people's mouths, because the smokers would be harming themselves. Or what about fast food? If you eat too much fast food, too much saturated fat, too many bad ingredients, it's going to do you harm. So do robots march around closing down fast food restaurants? What does "harm" mean? It is such an ambiguous term that it's no good for defining the behavior of a robot. And ultimately Asimov wrote a story where he showed that "harm" could be interpreted not as harm to a human but as harm to humanity, and therefore you have the rise of the machines: they try to take over, actually, to protect humanity from our own errors and from our own bad actions.
And what about this idea that it has to obey an order from a human? Well, which human? Is a three-year-old the same as a ten-year-old? Is a ten-year-old the same as a thirty-year-old? Is an ordinary citizen the same as a police officer? Is a police officer the same as a member of parliament? You could just keep defining roles again and again and again. So if a three-year-old says to a robot, "Let's jump up and down on the sofa," well, a robot is probably pretty heavy and it's going to wreck that sofa pretty quickly. But it was obeying a command, and no humans came to harm. Of course, when mum and dad come home and say "What are you doing?", it will immediately stop, but the rules don't define whether it should listen to a three-year-old or not.
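To illustrate the gap, here is a tiny sketch of what a literal reading of the Second Law gives you; the names are hypothetical and the harm check is waved away, but the point stands that nothing in the laws ranks one human's order above another's.

```python
# Purely illustrative: a literal Second Law has no model of authority.
from dataclasses import dataclass

@dataclass
class Order:
    text: str
    harms_human: bool   # Law 1 veto flag (decided elsewhere, somehow)

@dataclass
class Speaker:
    is_human: bool
    age: int            # the laws give us no way to use this

def should_obey(order: Order, speaker: Speaker) -> bool:
    if order.harms_human:      # Law 1 overrides everything
        return False
    return speaker.is_human    # Law 2: every human ranks equally

# A three-year-old's "jump on the sofa" passes the same test as any
# adult's instruction; the sofa isn't a human, so Law 1 never fires.
print(should_obey(Order("jump up and down on the sofa", False), Speaker(True, 3)))  # True
```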
The problem with the three rules is that there's no moral compass; a robot has no way of knowing what's right and what's wrong. So, for example, let's say you have a driverless car and you send it out: "Go and get me some fast food." It drives to the fast food place and finds it's closed, so it looks on its map, sees there's a pizza place 50 miles away, and off it goes to the pizza place 50 miles away. That's obviously not the right thing to do; at some point it would probably need further instructions from its owner. But how does it know the difference between what's good and what's bad? The three rules don't tell it that. Now, maybe going an extra 50 miles for a pizza wouldn't be a good idea, but if you had a sick child and you were desperate to get some medication, and the nearest open source of that medication was 50 miles away, you'd probably want the robot to go and get it. But how does it choose? Those three rules don't tell it anything; they don't help define what's right and what's wrong.
Now, we as humans weigh up what's right and what's wrong every single day, and sometimes we get it right and sometimes we get it wrong. There are debates, even at a national level, about what is right and what is wrong, what is moral and what is immoral. So how does a robot get that? Does it get it by looking at a data set of people's actions? Does it study what humans do and then define what is right and what is wrong from that? Because if you actually look at what we do as humanity, there are some pretty terrible things we do, and you wouldn't want a robot to pick those up, to learn from those things. So there's the question of whether morality can be learned from a data set at all. And what about the people who are programming the robots? Do they program their own morality into it, rather than something that another group of people would consider to be right or consider to be wrong? And what about the idea that we all want to be better than we really are? We find there's a weakness in us that means we can't always do the things we want to do, and surely you wouldn't want a robot to have that; you'd want the robot to have a perfect moral character. Where does it get that morality? It certainly doesn't get it from the Three Laws of Robotics.

So as we define the progress of artificial intelligence, we are starting to need to ask questions about free will and about morality, because those things will define how a robot behaves, not just whether it should pull a person out from in front of a car or not.
Well, my name is Gary Sims from Android Authority, and I hope you enjoyed this video and this look at the future. Now, the reason we're looking at this is that we are on the verge: we've got self-driving cars, and we've got Google's AI already doing so many different things. I mean, I even read that there are now recruitment agencies using AI to scan through CVs, so if your CV doesn't match up to what the AI is expecting, you don't even get called for an interview. We know that Google search results are controlled in a certain way, and I'm sure there are different types of weak AI involved in that searching and in how it is filtered for different people, for different regions, for different types of search results. We're going to find AI creeping into so many different things, and the question is: where does it get its guidance about what's right and what's wrong?
Also, if you want to talk more about this, pop over to the Android Authority forum; there's a special topic I've opened there just so that we can discuss the Three Laws of Robotics, and I look forward to seeing you there. Well, if you enjoyed this video, please do give it a thumbs up. Don't forget to subscribe to Android Authority's YouTube channel. You can follow me on Twitter, Instagram, and Google+, and you can follow Android Authority on all those same social media networks. Don't forget to download the Android Authority app, and don't forget to go to androidauthority.com, because we are your source for all things Android.