Tuesday, December 9, 2014

The Autonomous Car Dilemma

Yayyyy! It's a blog post that's not a Passion Project update!


For our ethics unit in Gifted and Talented I, we've been looking at a variety of ethical dilemmas and developing our own opinions on them. We've thought about everything from mad philosophers and train tracks to lifeboats and 13-year-old twins. The most recent problem we've come across is one that actually exists in the real world - the new technologies being created to make self-driving, or autonomous, cars. Many of these dilemmas involved "What should it hit?" problems such as the one below:
Suppose that an autonomous car is faced with a terrible decision to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others–a sensible goal–which way would you instruct it to go in this scenario?
The answer is pretty obvious - most people would hit the Volvo SUV because it is larger than a Mini Cooper. The bigger vehicle would take less damage, and there is less of a chance of you harming the other person. The only reasonable argument for the other side is that you yourself might be hit harder. In my opinion, it is always best to put others before yourself, and if you have enough time to make this decision, it's probably your fault anyway.
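Just to make the "minimize harm" idea concrete, here's a rough sketch of what that decision rule might look like in code. This is purely illustrative - the harm scores, the names, and the choose_target function are all things I made up, not anything a real car actually runs:

```python
# A toy sketch of a "minimize harm to others" crash decision.
# The harm scores below are invented for illustration; a real
# system would need actual crash-safety data, not guesses.
HARM_TO_OCCUPANTS = {
    "volvo_suv": 0.3,    # large, sturdy vehicle protects its occupants
    "mini_cooper": 0.7,  # small car, occupants are more exposed
}

def choose_target(options):
    """Pick the option expected to harm the other party the least."""
    return min(options, key=lambda car: HARM_TO_OCCUPANTS[car])

print(choose_target(["volvo_suv", "mini_cooper"]))  # -> volvo_suv
```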


This dilemma is pretty simple and self-explanatory, much unlike the next one:
...imagine that an autonomous car is facing an imminent crash. It could select one of two targets to swerve into: either a motorcyclist who is wearing a helmet, or a motorcyclist who is not. What’s the right way to program the car?
I don't know if I'll ever reach a strong decision on this one. My opinion on it is actually revealed later on, but I must first explain the different outcomes of each decision.

If you run into the motorcyclist with the helmet, you have a lower chance of killing him; however, the crash could still kill him or injure him severely. Either way, you don't want this to happen, but the helmetless motorcyclist's chances of surviving are even slimmer. So my first instinct is to go with the motorcyclist with a helmet.

However, if the world turns to autonomous cars as is predicted to happen, and those cars are programmed to target riders with helmets, people won't wear helmets just to avoid being hit. Going back to my statements from before, this means that a rider will only heighten their chances of a head injury from something that is probably not a car. Either way, helmet or no helmet, both riders have almost equal chances of dying in car crashes that aren't caused by autonomous vehicles.

It seems that I have points for both sides - which is when we bring in our third option:
A robot car’s programming could generate a random number; and if it is an odd number, the car will take one path, and if it is an even number, the car will take the other path.
I agree with this idea. This way, several things will be accomplished:

1. Motorcyclists won't refrain from wearing helmets in the hope that autonomous cars will avoid hitting them.
2. Motorcyclists will instead be more inclined to wear a helmet, because they know that there is no driver control and they have the same chance as any other rider of being hit. They will always wear their helmets ... just in case.
3. Motorcyclists will develop more responsibility in other aspects of their life as well.
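
To show how simple the random-number idea would be to program, here's a rough sketch (the function and target names are just placeholders I made up):

```python
import random

def pick_target(path_a, path_b):
    """Randomly pick between two unavoidable crash targets,
    using the odd/even rule from the quote above."""
    n = random.randint(0, 1_000_000)  # generate a random number
    if n % 2 == 1:   # odd number -> take one path
        return path_a
    else:            # even number -> take the other path
        return path_b

# Helmet status never enters the decision at all:
print(pick_target("motorcyclist with helmet", "motorcyclist without helmet"))
```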

The last question/dilemma is this: if the driver is not making control decisions, should the driver be responsible for any outcomes at all? In my opinion, if the driver has absolutely no control over their vehicle, then they are clearly not to blame. Why would you blame them when it was the car's programming at fault rather than the driver?

If the person does have a steering wheel, I think they should at least be questioned about it. Were they fully aware of their surroundings at the time of the crash? Were they even awake at the time? In some circumstances, I think we can use this to determine who is at fault - though that doesn't mean it wasn't an accident.

In conclusion, these ethical dilemmas have been extremely thought-provoking and have helped to strengthen my views and beliefs. Hopefully those behind Google's self-driving cars and other autonomous car hopefuls will discover the answer.

Quotes and dilemmas are from this place.

Until next time,
Charles
Official Website: crouton.net


Thursday, December 4, 2014

Passion Project Update #2

I've missed a few of the passion project days, so right now my progress hasn't been too great.

I'm doing some more research on different topics associated with the fantasy genre, as well as some "real" fantasy subjects such as modern witchcraft and how it "works." I am trying to look deeper into the world of the extraordinary by seeing how it may have emerged from things that we see in the real world as we know it.

Throughout the entire project, I've been trying to define the word "fantasy" as more than just "things that aren't real." I think that I've finally settled on a reasonable definition:

"things that have not yet been seen, and possibly may never be seen"

This is a very simple definition, but I think that one of the most important things I've learned in this project is that fantasy is not limited to the things that aren't real. Even though I don't genuinely believe in fairies, dragons, etc., if you eliminate the possibility of them entirely, there's no point in writing about them. The technology that we have today was fantasy at one point. The future could hold even bigger possibilities.

So to conclude my project, I'll be doing some more research and analysis and creating a presentation about my topic. And I just don't know how to end this so here's a picture of a really happy watermelon: