Hello, I'm Gary Sims from Android Authority. Some of the buzzwords at the moment are machine learning (ML) and artificial intelligence (AI), and there's a whole question about the ethics of the decisions that are made by AI and ML systems. Now, we don't actually have any answers at the moment, but there are certainly lots of talking points. So, if you want to find out what the problems with ethics and AI are, please let me explain.

Now, of course, the whole area of ethics and morality is in itself absolutely huge, so we'll put that to one side for the moment and just deal with how AI and ML handle ethics and morality. For
example, in business you can have a business that is ethical or a business that is unethical. The problem is that not everything that is legal is ethical, and not everything that is unethical is illegal. For example, there might be strong lobby groups that lobby a government to shape the laws in a certain way, or maybe a weak government doesn't address certain things, and therefore a business can function within the bounds of the law even though its activities are unethical. Certainly, one thing we don't want in the future is an AI or a machine learning algorithm that stays within the bounds of the law but pushes as hard as it can to see how far it can go while still remaining inside the law. In fact, when it finds a loophole, it will go through that loophole and exploit it, because that will be its principal route to fulfilling its goal, and if its goal is fulfilled through a loophole, that's exactly what such a system will do.

One of my
favorite examples of ethics is the idea of fairness. Now, I have three children, and my wife and I have raised them and tried to give them an ethical and moral understanding of what is right and what is wrong. But one of the things we never needed to teach them was what fairness is: they always knew when something was not fair and they were not receiving their fair portion. Now, it was a different thing to teach them to always be fair to others; that was a whole different lesson. But they knew intrinsically when something was not fair. So, will an AI or a machine learning system know that something is not fair? Because something might be the most efficient option, the most cost-effective option, the thing that reaches the programmed goal the quickest, and yet be completely unfair. So how does the ML system know
that? Let me give an example. Let's say we had a machine learning system that was controlling emergency services: ambulances, paramedics, fire engines, maybe even first aid, hospitals, and emergency rooms. This machine learning system is there to try to optimize the use of all of those resources. Now, that's a really good idea, but the problem is that if you set certain goals, the machine learning algorithm will shortcut those goals; it will cheat to fulfill them. For example, if you said that the goal of this system is to treat as many people as you can, then maybe the machine learning system would give more priority to people with ingrowing toenails, or to people who've got cats stuck up a tree, because the appropriate resources can be allocated very quickly, the problem can be solved, and the next case can be taken. The throughput, the number of people helped, goes up quite high. The guy who needs a ten-hour operation is put to one side, because there are ten people who can each have a 45-minute operation, which is quicker and cheaper, and therefore ten people get treated rather than one. Of course, this has no ethical dimension to it; it's purely a numbers game. And you can also get it the opposite way around: if you say, well, we'll tweak the machine learning so that it doesn't do that, you can find that all it does is treat people who need ten-hour operations, and people in less risky situations never get treated, because they're not going to die, they're not in harm's way, so why should they be treated at all?
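To make the first version of that concrete, here is a minimal, entirely hypothetical sketch (the case names, durations, and the greedy rule are all invented for illustration) of a dispatcher whose only objective is to handle as many cases as possible. Taking the shortest case that still fits the schedule fills the day with quick jobs and never gets to the ten-hour operation:

```python
# Hypothetical dispatcher: the only objective is "handle as many cases
# as possible". All names and durations are invented for illustration.

def dispatch(cases, hours_available):
    """Greedy: always take the shortest case that still fits the schedule."""
    treated = []
    for case in sorted(cases, key=lambda c: c["hours"]):
        if case["hours"] <= hours_available:
            treated.append(case["name"])
            hours_available -= case["hours"]
    return treated

cases = [{"name": "10-hour operation", "hours": 10.0}]
cases += [{"name": f"45-minute case {i}", "hours": 0.75} for i in range(10)]

treated = dispatch(cases, hours_available=8.0)
print(len(treated))                    # all ten quick cases are handled
print("10-hour operation" in treated)  # the serious case never runs
```

The objective is perfectly satisfied, throughput is maximized, yet the patient in the most danger is exactly the one the schedule drops.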
So setting the goals is really, really important. A machine learning algorithm will look at the data, look at its goals, and find the quickest way to achieve those goals. And of course, medical treatment, treating people, is incredibly complicated, because not only are you dealing with risk, you're also dealing with a whole bunch of factors, including age and the severity of an illness. You're also looking at secondary issues that affect the primary ones: lifestyle, income, diet, the kind of education people have had, the way they treat their own bodies. All of these things are factors in how you treat people. Now, of course, a machine learning algorithm could make decisions that are wrong and unethical, but according to the data it has been given, it was the right answer.
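As a toy illustration of that point, here is a hypothetical triage scorer (the fields, numbers, and patients are all invented) that ranks patients by expected benefit per hour of care. By its own objective the ranking is "right", yet it systematically pushes the severely ill, slow-to-treat patient to the back of the queue:

```python
# Hypothetical triage scorer. The fields and patients are invented;
# the point is that the objective, not the algorithm, sets the ethics.

def benefit_per_hour(patient):
    """Rank by expected survival gain per hour of treatment."""
    return patient["survival_gain"] / patient["treatment_hours"]

patients = [
    {"name": "severe case", "survival_gain": 0.9, "treatment_hours": 10.0},
    {"name": "minor case", "survival_gain": 0.3, "treatment_hours": 0.5},
]

queue = sorted(patients, key=benefit_per_hour, reverse=True)
print([p["name"] for p in queue])
# The minor case scores 0.6 per hour, the severe case only 0.09,
# so the patient who would benefit most from treatment waits.
```

Nothing in that code is malicious or buggy; the unfairness is entirely in the choice of scoring function.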
One of the problems is that all ML systems are based on data; the machine learning algorithm gains its experience from the data it has been given. Now, if that data is itself biased, the system will create biased rules, and it will follow those rules even though they are biased and, we're assuming, therefore incorrect. One of the reasons a data set can be biased is that it's missing data. You may have a whole bunch of characteristics and factors listed in the statistics, but something is missing that would actually change the character of the whole data set. And of course, there are a whole bunch of intrinsic values that we all know about that are simply not expressed in data, for example the value of life or the value of another person.

And this brings us to the next question, which is: who is to blame when a
question which is who is to blame when a
machine learning system makes the wrong
decision for example let's say you bring
a friend around and you say to your
smart speaker tell me a joke
and it tells a joke that is not
appropriate it tells a joke that is
offensive who is to blame for that is it
me as the owner of that smart speaker
did I have something do I do something
wrong to make it behave like that is it
the producer of that you know is it the
the manufacturer of that particular
smart speaker is it the programmer do I
have to kind of take a guy called Fred
to court to say you know why did you let
your smarts be could do that or is it
the data set
person or the people who provide the
data said you they provide a data
set that was not a ethical because I was
talking about jokes but of course when
you multiply this up to be indifferent
knows education and health care and
business practices self-driving
automation all these things then these
mistakes that get made can be much more
costly than just someone being offended
by a joke they didn't appreciate and
then, of course, that leads to the flip side of the question. If in the future we have machines that are fully autonomous and making decisions based on their programming, and they commit a crime, are they guilty? Can a robot be guilty of a crime? Do robots have any rights? These are all interesting areas to do with morality and ethics, because the bottom line is that data doesn't give us ethics; ethics can't be gleaned from just a set of data points.

I'm Gary Sims from Android Authority, and I really hope you enjoyed this quick trip down this quite complicated path of ethics, morality, and artificial intelligence. If you enjoyed this video, please do give it a thumbs up. Also, please subscribe to the Android Authority YouTube channel, and it's really good if you hit that bell icon up there so you get a notification every time we release a new video. And last but not least, please do go to AndroidAuthority.com, because we are your source for all things Android.