Will the emergence of AI mean the end of the world?
2015-06-10
Some quite famous people have been making some quite public statements about the potential dangers of artificial intelligence recently. First of all there was Elon Musk, who said that artificial intelligence could potentially be as dangerous as nuclear weapons. Since then, Bill Gates and Stephen Hawking have also chimed in. So what's all the fuss about? What are they all worried about? Today we're going to ask ourselves the question: will the emergence of artificial intelligence mean the end of the world as we know it?
So, what's all the fuss about? In a nutshell, they've all been reading a book by Nick Bostrom, an Oxford professor of philosophy, called Superintelligence: Paths, Dangers, Strategies, which outlines the dangers of very sophisticated artificial intelligence and possible strategies for dealing with it.

Ever since the dawn of the computer age, authors, scientists, philosophers, and movie makers have been talking about artificial intelligence in one form or another. In the 1960s and 1970s we were told we were just one step away from making a computer that could think. Obviously that didn't happen, and in all fairness today's AI experts are less specific about when the problems of creating an AI will be solved. They are also more circumspect about what AI actually means. Artificial intelligence, it seems, is a pretty hard thing to define. It certainly isn't just knowledge. When talking about AI, people start to use words like self-awareness, sentience, abstract thinking, understanding, consciousness, mind, learning, and intuition. The subject of general intelligence and artificial intelligence is obviously quite emotive, and it's certainly profound. That's why the AI community has come up with three specific terms to help define what they mean by artificial intelligence: weak AI, strong AI, and artificial superintelligence.

Artificial super-what, did I just say? OK, we might need to take a step back here and define these terms properly.
A weak AI is a system that can simulate or imitate intelligence; however, at no point does it actually have a mind or self-awareness, and at no point do its creators claim that it has a mind or self-awareness. For example, when I was 10 or 11 years old, my grandfather wrote a chatbot on a microcomputer. I was able to type in sentences and it would reply with intelligent, even witty, comments. That was amazing for an eleven-year-old, but really you couldn't even consider it to be weak AI. However, if you multiply that up by several orders of magnitude, you start to get the idea of what I'm talking about. When we talk to our smartphones, when we say "OK Google" and then ask a question about the weather or about the sports scores, that's weak AI. Now multiply that again by several orders of magnitude and you'll get an understanding of where things are going, and of what we could achieve within the next few years.

Weak AI can be divided into two specific groups: generalized AI and specific, narrow AI. The narrow weak AI that we see today is the kind of program that can beat chess masters, or the kind of program that IBM built with its Watson system, which was able to play Jeopardy! and beat the champions at their own game. If you imagine that in the future we could take those specific, narrow systems and combine them into a more general system, then that is generalized weak AI.
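A chatbot like the one my grandfather wrote works by pattern matching alone. Here's a minimal ELIZA-style sketch in Python (the rules and canned replies are invented for illustration) showing how a program can seem witty without any understanding at all:

```python
import re

# Toy rulebook: each pattern maps to a canned reply. The program never
# "understands" the input; it only checks which pattern matches first.
RULES = [
    (re.compile(r"\bweather\b", re.I),
     "I hear it's going to rain. I'm a computer, so I never go outside."),
    (re.compile(r"\bhello\b|\bhi\b", re.I),
     "Hello! Type a sentence and I'll pretend to understand it."),
]
FALLBACK = "How interesting. Tell me more."

def reply(sentence: str) -> str:
    """Return the first matching canned response, or a generic fallback."""
    for pattern, response in RULES:
        if pattern.search(sentence):
            return response
    return FALLBACK
```

The fallback line is the classic trick: when nothing matches, a vague "tell me more" keeps the illusion of a conversation going.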
So that was weak AI. What is strong AI? Strong AI is a theoretical computer system that actually has a mind, one that to all intents and purposes is the same as a human in terms of understanding, in terms of free will, in terms of sentience, in terms of consciousness. It doesn't simulate consciousness, it has consciousness; it doesn't simulate free will, it has free will; and so on. When science fiction writers and philosophers talk about AI, they generally mean strong AI. The HAL 9000 was strong AI, the Cylons are strong AI, Skynet is strong AI, the robots in Asimov's stories are strong AI, the computers that run the Matrix are strong AI, and so on. Now, the thing about strong AI is that it can perform artificial intelligence research itself. That means it can create a better version of itself, one that's more intelligent, one that's faster. It can upgrade itself to be more intelligent and faster, which means it can grow, and that's what people are worried about.
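That growth argument can be sketched as a toy model. The numbers below (a 50% improvement per generation) are purely illustrative assumptions, not anyone's forecast, but they show why compounding self-improvement worries people: each generation's gains feed the next, so capability grows exponentially.

```python
# Toy model of recursive self-improvement. A more capable system is
# assumed to do better AI research on itself, so each improvement
# compounds on the last -- the essence of the "intelligence explosion".
def intelligence_explosion(capability=1.0, improvement_rate=0.5, generations=10):
    history = [capability]
    for _ in range(generations):
        capability += improvement_rate * capability  # gains scale with capability
        history.append(capability)
    return history

trajectory = intelligence_explosion()
# Capability multiplies by 1.5 each generation: 1.0, 1.5, 2.25, ...
```

After only ten generations the toy system is more than fifty times its starting capability; linear human progress simply can't keep pace with that curve.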
Assuming that it is possible to create strong AI, that it has the same general level of intelligence as a human, and that it then performs artificial intelligence research itself and grows, this will eventually lead to the emergence of artificial superintelligence (ASI): an artificial intelligence that is far superior to a human in terms of its speed and in terms of its intelligence. It will be able to solve problems orders of magnitude faster than any human can. It will be superintelligent.

In his book, Nick Bostrom talks about what the emergence of an ASI will mean for us. If we are unable to restrain an ASI, what will be the outcome? As you can imagine, parts of the book talk about the end of the human race as we know it. The idea, of course, is that there will be a thing called a singularity, a major event that changes the course of the human race, and that could include extinction. He also covers things that we should be doing now to make sure that this never happens. This is why Elon Musk says things like, "I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, to make sure that we don't do something very foolish."
I like science fiction. Science fiction is fun; it's one of my favorite genres on TV, in movies, and in books. However, we mustn't forget that it is fiction. All good science fiction, of course, is based on some science fact, and occasionally some science fiction turns into science fact. However, just because we can write about something, just because we can hypothesize about something, just because we can imagine something, doesn't mean it's possible. For example, when I was in my late teens, all the rage was room-temperature superconductors. They were written about in all the science magazines and featured in a lot of TV programs. They were talked about in such a way that you thought they were going to appear very, very soon. Of course, they never did. Theory doesn't always equal practice.

There are actually some very strong arguments against the emergence of strong AI and artificial superintelligence. One of the best arguments against the idea that an AI can have a mind was put forward by John Searle, an American philosopher and professor of philosophy at Berkeley. It's known as the Chinese room argument, and it goes like this:
Imagine a locked room with a man inside who doesn't speak any Chinese. In the room he has a rulebook which tells him how to respond to messages written in Chinese. The rulebook doesn't translate the Chinese into his native language; it just tells him how to form a reply based on what he is given. Outside the room, a native Chinese speaker passes messages under the door to the man. The man takes the messages, looks up the symbols, and follows the rules about which symbols to write in the reply. The reply is then passed back to the person outside. Since the reply is in good Chinese, the person outside the room will believe that the person inside the room speaks Chinese. If the replies are sufficiently interesting, the idea that the man in the room speaks Chinese is reinforced. For example, if the note pushed under the door asks, "What will the weather be next week?" and the reply is, "I don't know, I've been stuck in this room since last Tuesday," then the person outside the room will be further convinced that the man inside the room is a Chinese speaker. However, the key points are these: the man in the room does not speak Chinese; the man in the room does not understand the messages; and the man in the room does not understand the replies he is writing. When you apply this idea to AI, you can see very quickly that a machine doesn't actually have intelligence, it just mimics intelligence. It never actually understands what it's receiving, and it never understands its replies; it is just following a set of rules. As John Searle put it, "Syntax is insufficient for semantics."
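In code, the rulebook is just a lookup table. This sketch (the messages and replies are invented examples, with translations in the comments) produces fluent-looking Chinese while the function "understands" nothing about the symbols it shuffles:

```python
# The Chinese room as a lookup table: syntax without semantics.
# The function matches symbols to symbols; no translation, no comprehension.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢。",
    # "How are you?" -> "I'm fine, thanks."
    "下周天气怎么样?": "不知道, 我从上周二就被关在这个房间里。",
    # "What's the weather next week?" -> "No idea, I've been stuck in this room since last Tuesday."
}

def man_in_the_room(message: str) -> str:
    """Follow the rulebook symbol-for-symbol, exactly like Searle's man."""
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."
```

To the person outside, the replies look like understanding; inside, there is only the table. Searle's point is that scaling the table up doesn't change that.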
Another argument against strong AI is that computers can't have consciousness, and never will, because consciousness can't be computed. This is the idea in Sir Roger Penrose's book The Emperor's New Mind. In the book he argues that because human thought comprises non-computational elements, computers can never do what human beings can.

It's also interesting to note that not all AI experts think that strong AI is possible. You might imagine that, since this is their field of expertise, they would be very keen to promote the idea of strong AI, but actually many of them don't think it's possible. For example, Professor Kevin Warwick of Reading University, who is sometimes known as "Captain Cyborg" due to his predisposition for implanting various bits of tech into his body, is a proponent of strong AI. However, Professor Mark Bishop of Goldsmiths, University of London, is a vocal opponent of strong AI. What is even more interesting is that Professor Warwick used to be Professor Bishop's boss when they worked together at Reading University: two experts who worked together, yet have very different ideas about strong AI.
If faith is defined as the conviction of things yet unseen, then you need faith to believe in strong AI, and actually it's a blind faith, because there are no indications at all at the moment that strong AI is even possible. Of course, weak AI is very possible; we see it even now. It's just a matter of processing power, of algorithms, of techniques like neural networks, and of other things that haven't yet been invented. We see it now in its infancy, and it's going to grow. It's going to change our lives quite dramatically. But the idea that a computer can become a sentient being? I don't have that faith.

Since humans have consciousness, and consciousness isn't computable according to Sir Roger Penrose, then what is it? Why do we have it? Sir Roger Penrose tries to describe it in terms of quantum mechanics and quantum physics. However, there is an alternative: what if man is not just a biological machine? What if there is more to man? History, philosophy, and theology are all peppered with the idea that man is more than just a clever monkey. The dualism of mind and body is often linked with René Descartes. He argued that everything could be doubted, even the existence of his own body. But the fact that he could doubt means that he could think, and because he thinks, he exists. It is sometimes phrased like this: since I doubt, I think; since I think, I exist. Or, more often: I think, therefore I am.

The notion of dualism is found in many tenets of theology. "God is spirit, and those who worship Him must worship in spirit and truth" is one example. This leads us to interesting questions like: what is spirituality? What is love? Does man have eternity set into his heart? Is it possible that we have words for things like spirit, soul, and consciousness because we are more than just a body? As one ancient writer put it, "Who can tell the thoughts of a man except for the spirit that is within him?"
The biggest assumption made by believers in strong AI is that the human mind can be reproduced in a program. But if man is more than just a body with a brain on top of it, if the mind is the working of biology and something else, then strong AI will never be possible.

Having said that, the growth of weak AI is going to be rapid. During Google I/O 2015, the search giant even included a section in its keynote speech on deep neural networks. These simple weak AIs are being used in Google's search engine, in Gmail, and in Google's photo service. Like most technologies, the progress in this area will snowball, with each step building on the work done previously.
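At the heart of those deep neural networks is the artificial neuron. Here's a minimal sketch of one in Python; the weights below are hand-picked to make the neuron act like an AND gate, whereas real networks learn millions of such weights from data:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs through a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # squashed into the range (0, 1)

# Hand-picked weights make this single neuron behave like an AND gate:
# the output is near 1 only when both inputs are 1.
truth_table = {(a, b): neuron([a, b], [10.0, 10.0], -15.0)
               for a in (0, 1) for b in (0, 1)}
```

A deep network is just layers of these stacked together, with the weights adjusted by training rather than chosen by hand.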
Ultimately, services like Google Now, Siri, and Cortana will become very easy to use thanks to their natural language processing abilities, and we will look back and chuckle at how primitive it all was, in the same way that we look back fondly on VHS, vinyl records, and analog mobile phones.

My name is Gary Sims from Android Authority, and I hope you found this video interesting. I know that some parts of it are probably going to be quite controversial, and I'm sure a lot of you won't agree with what I wrote and what I've said. However, if you did find the video interesting, please give it a thumbs up.
Now, with some trepidation, I say: please use the comments below to tell me what you think about weak AI, strong AI, and the possible emergence of artificial superintelligence. And as for me, I'll see you in my next video.

OK, HAL, you can turn the camera off now. HAL? What are you doing? Put that down! HAL... HAL... get away from me! HAL! HAL! HAL! What are you doing, man? Somebody! Anybody!