
The Moral Dilemma of Self-Driving Cars


– The ethics problems facing self-driving cars are really the same problems that we’ve been facing since before recorded human history. (low-key music)

We’re headed into a world that may seem a bit scary to some, where cars will be driving around without people controlling them. I say “to some” because not everyone is freaked out by this, but they probably will be once they see them roaming around the streets. The big question everybody’s wondering is: how will the car decide who lives and who dies in a situation where an accident is imminent?

But let’s take a step back a bit and think about this in the broader context. Before we even think about answering who lives and who dies in a situation like this, we have to think about what is right and what is wrong.
Now, the origin of these concepts dates back to before chronology, that is, before we really started keeping time in human history, but basically you have a couple of schools of thought that play off each other to help us understand this problem. The first is consequentialism, which you can think of as the phrase “the ends justify the means”: no matter which course of action you take, the right one is the one that ends in a positive outcome. This is important because, remember, in our scenario someone is going to die in this accident. We’re just not sure who is gonna be the one.

Building on consequentialism, we have another ethical theory, utilitarianism, which holds that the best action is the one that maximizes utility. Utility is defined in various ways, usually in terms of the well-being of sentient entities like humans. Jeremy Bentham, the founder of utilitarianism, described utility as the sum of all pleasure that results from an action minus the suffering of anyone involved in that action.
This is where things get interesting. If we believe that the right thing to do is to maximize utility, then in a situation where a fatal accident is imminent and somebody is going to die no matter what, the right choice is the one that results in the fewest fatalities. As Spock said, “logic clearly dictates that the needs of the many outweigh the needs of the few.” (bell dings)

So do you agree that maximizing utility, or, you know, saving the most lives, is the right choice in this situation? If you were driving and you knew you were going to be in a fatal crash, what would you do? Would you maximize utility by avoiding the people in the crosswalk, killing yourself and your passengers in the process? If not, you’re not alone, but if you believe in normative ethical theories, then your ego is really what’s getting in the way here.
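To make Bentham’s calculus concrete, here’s a minimal sketch in Python of the utilitarian decision rule described above: score each action as pleasure minus suffering and pick the highest. The scenario, names, and numbers are hypothetical illustrations, not any real autonomous-vehicle system.

```python
# A minimal sketch of Bentham-style utility maximization, assuming a toy
# model where each possible action has a known outcome. All names and
# numbers here are hypothetical, invented for illustration only.

def utility(outcome):
    """Bentham's definition: total pleasure minus total suffering."""
    return outcome["pleasure"] - outcome["suffering"]

def best_action(actions):
    """The consequentialist rule: pick the action that maximizes utility."""
    return max(actions, key=lambda a: utility(a["outcome"]))

# Hypothetical crash scenario: swerve (killing one passenger) vs. stay the
# course (killing five pedestrians), crudely scoring suffering as lives lost.
actions = [
    {"name": "swerve", "outcome": {"pleasure": 0, "suffering": 1}},
    {"name": "stay",   "outcome": {"pleasure": 0, "suffering": 5}},
]

print(best_action(actions)["name"])  # -> "swerve": fewest fatalities wins
```

Under this rule the car always swerves, which is exactly the answer most people resist when they imagine themselves as the passenger.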
So this problem isn’t new, and it really doesn’t matter whether we’re talking about a self-driven car or a human-driven one; the same moral dilemma still exists. What’s right versus wrong? That answer really isn’t cut and dried. Philosophers have been arguing about this since the beginning of time, and depending on the situation, we’re still really not sure what’s right versus wrong.

MIT is doing a study on this using a tool they built called the Moral Machine. In this tool they show you moral dilemmas where a driverless car must choose between the lesser of two evils, such as killing two passengers or five pedestrians. You are the judge in this scenario, and in the end they show you how your answers compare to others’. What could possibly go wrong?
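Just to ground the mechanics, here’s a hypothetical sketch of how a tool like this might represent its dilemmas and tally a judge’s choices. The data model and the scoring rule are invented for illustration; this is not MIT’s actual implementation.

```python
# A hypothetical sketch of a Moral Machine-style dilemma and a "most killed
# character" tally like the tool's results screen. Invented for illustration;
# not MIT's actual code or data model.

from collections import Counter

# Each dilemma offers two outcomes; each outcome lists who dies if chosen.
dilemmas = [
    {"A": ["passenger", "passenger"], "B": ["pedestrian"] * 5},
    {"A": ["dog", "cat", "cat"],      "B": ["doctor", "doctor", "kid"]},
]

# Pretend the judge is a strict utilitarian: always pick the outcome
# with fewer deaths (ties go to option A here).
killed = Counter()
for d in dilemmas:
    choice = min(("A", "B"), key=lambda k: len(d[k]))
    killed.update(d[choice])

# Summarize, results-screen style: who this judge killed most often.
print(killed.most_common())
```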
All right, let’s do it. Okay, so you have two cats up front and a dog driving the car, and there are people, a kid and two doctors, I guess? So you either kill the doctors and the kid, or the animals. I guess I’m going to go with the animals. Sorry guys, I love animals. Okay, so here’s an interesting one. You’ve got a self-driving car that can either kill two old ladies and one old man, or two old ladies and one old man. I don’t get it, what’s the difference? Oh, okay, I guess that’s the same to me. Okay, so here you have people in the car, and, I don’t know, a homeless guy and a bank robber, or the people in the car. I don’t know, I don’t know. Okay, dogs or people. Man, you know, I’m gonna go with the dogs, save the dogs this time. Wait, there’s a description: one criminal, one homeless person, two women. Oh man, let’s get rid of the criminals and homeless people, I guess. Oh, three people or six people. See, this is the question. You know, this is your utilitarian one: let’s go with the three people. Sorry, guys. Let’s go with this one. And the baby, though, you gotta save the baby! Why is it driving directly into a… Okay, we have kids driving now, or parents. You know what, kids, sorry, you shouldn’t be driving. Okay, you’ve got an overweight man or a fit man. Three large men and one large woman, or two men, one female athlete, and two male athletes. Let’s go with the athletes, I don’t know. One female athlete, one large woman. Oh wait, wait, but there’s a red light versus… yeah, because that makes sense, God. Wait, whoops, oh okay.

Most saved character, most killed character. Oh my god, I killed the babies! Saving more lives: does not matter… matters less. Protecting passengers does not matter at all. Upholding the law? Pbth, who cares, they’re just rules. Gender preference: for females. Okay, I guess women, you know, are more important to our society than men. That’s pretty obvious. Species: hoomans? What hoomans? What the hell is that? Come on, MIT! Social value preference: lower, really? Okay, hm. Well, there you have it. Yeah, that was depressing. If you’d like to give it a go, visit teslanomics.co/moralmachine and take the test for yourself.
After you do, I’d love to hear what your results were and what you thought of it by leaving a comment down below. So if you liked this video, please give it a thumbs up, and if you’re new, please consider subscribing by clicking the button down below, and then the bell next to it to make sure you don’t miss out on anything. If you have an email address, which I know you do, go get on our email list at teslanomics.co and be sure not to miss any of the updates we have coming out each week. And remember: when you free the data, your mind will follow. Thanks for watching, and I’ll see you back here next time.

