By VANSHIKA CHOWDHARY
August 1, 2017
Imagine an amusement park unlike any other. Forget roller coasters, water slides, carousels and any other wacky rides you’ve seen or been on (I’m talking about you, Goliath).
Imagine lassoing horses, saving damsels-in-distress and riding into the horizon.
Imagine the Wild West, literally come to life.
That is the premise of Westworld, the hit HBO series centered around such an amusement park. Guests, people who visit the park, find themselves thrust into the Wild West, where they can interact with hundreds of Hosts, humanoid robots imbued with artificial intelligence, and submerge themselves in intricate story lines.
As amazing as Westworld may be, it hides a darkness: the Hosts are rebelling. It is a fear that has haunted our society forever, growing with our use of technology.
We've always worried that the technology we love will turn against us, whether it's a rabid coffee machine or the Terminator.
Contrary to popular opinion, artificial intelligence isn't limited to government defense projects. Many companies are experimenting with it: Google's self-driving cars, personal assistants such as Siri and Alexa, even Netflix's recommendation service.
Artificial intelligence is a fixture in our daily lives, quietly running our homes, phones and cars.
Although it’s unlikely for now that our technology really will attack us, Westworld raises a serious question about this technology that pervades every aspect of our lives: Can we trust it?
Researchers are forced to confront this difficult question when they study moral dilemmas and ask how artificial intelligence should resolve them.
Image: a self-driving car facing an unavoidable crash (source: Popular Machines)
Consider the picture above: a self-driving car is traveling down the road when a family suddenly steps in front of it. Does the car swerve into the oncoming lane, hitting another person who is already there? Does it swerve off the road and hit a roadblock? Or does it continue forward and hit the family?
One researcher, Professor Vincent Conitzer of Duke University, has investigated such dilemmas and how artificial intelligence might solve them.
He and his team asked people to respond to these dilemmas to uncover the thought patterns behind their ethical choices; his goal was to predict what a human would do in a given situation.
According to Conitzer, artificial intelligence has to understand the whole situation, taking heed of factors such as rights (for example, a patient's rights in a kidney transplant) and roles (such as the doctor's).
“Incorporating these morally relevant factors among others could enable AI to make moral decisions that are safer, more robust, more beneficial, and acceptable to a wider range of people,” wrote Conitzer in his 2017 article “Moral Decision Making Frameworks for Artificial Intelligence.”
Conitzer and his team came up with two approaches for coding morality: game theory and machine learning.
Game theory is the mathematical study of strategic decision making: how players choose their moves when each one's outcome depends on what the others do. It is commonly used in economics, political science, and many other fields.
Conitzer attempted to come up with moral solution concepts.
In game theory, a solution concept is a prediction of how a game will be played, traditionally under the assumption that each player is purely self-interested.
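To make that concrete, here is a small sketch (my own illustration, not from Conitzer's paper) of the most familiar solution concept: finding the Nash equilibria of a two-player game under the assumption that each player cares only about their own payoff. In the prisoner's dilemma below, self-interest alone predicts mutual defection, even though both players would be better off cooperating.

```python
# A small illustrative sketch (not from Conitzer's paper): finding the
# pure-strategy Nash equilibria of a 2x2 game, the classic kind of
# solution concept, assuming purely self-interested players.
# Payoffs are a standard prisoner's dilemma: (row player, column player).

ACTIONS = ["cooperate", "defect"]
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash(row, col):
    """A profile is a Nash equilibrium if neither self-interested player
    can do better by unilaterally switching their own action."""
    row_payoff, col_payoff = PAYOFFS[(row, col)]
    row_ok = all(PAYOFFS[(alt, col)][0] <= row_payoff for alt in ACTIONS)
    col_ok = all(PAYOFFS[(row, alt)][1] <= col_payoff for alt in ACTIONS)
    return row_ok and col_ok

equilibria = [(r, c) for r in ACTIONS for c in ACTIONS if is_nash(r, c)]
print(equilibria)  # [('defect', 'defect')]: self-interest alone predicts mutual defection
```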
“The analysis of the game must be intertwined with the assessment of whether an agent (player) morally should pursue another agent’s well-being,” said Conitzer.
If we consider our opponent's feelings and moral concepts such as altruism and betrayal, we can be deterred from doing things we wouldn't think twice about if we were purely self-interested.
For example, consider this classic trust game.
“Player 1 is given some amount of money, say $100. She is then allowed to give any fraction of this money back to the experimenter, who will then triple this returned money and give it to player 2. Finally, player 2 can return any fraction of the money he has received to player 1,” Conitzer explains in his article.
A purely self-interested computer playing this game would likely never give any money to its opponent. When humans play, however, they do trade money, because they worry about seeming unfair, selfish, disloyal, or untrustworthy; they consider the past, present, and future when making a decision.
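To see the difference in code, here is a minimal sketch of the trust game described above (my own illustration, not Conitzer's implementation), comparing a purely self-interested player 2 with one whose utility includes a made-up fairness penalty for unequal payoffs.

```python
# A minimal sketch of the trust game described above (my own illustration,
# not Conitzer's implementation). Player 2's "moral" utility adds a
# hypothetical fairness penalty for unequal payoffs.

ENDOWMENT = 100   # player 1 starts with $100
MULTIPLIER = 3    # the experimenter triples whatever player 1 sends

def payoffs(sent, returned):
    """Money each player ends up with, given what player 1 sent and player 2 returned."""
    player1 = ENDOWMENT - sent + returned
    player2 = MULTIPLIER * sent - returned
    return player1, player2

def best_return(sent, fairness_weight=0.0):
    """Player 2 maximizes money minus a fairness penalty on the payoff gap
    (the penalty weight is an assumption made purely for illustration)."""
    best_amount, best_utility = 0, float("-inf")
    for returned in range(MULTIPLIER * sent + 1):
        p1, p2 = payoffs(sent, returned)
        utility = p2 - fairness_weight * abs(p1 - p2)
        if utility > best_utility:
            best_amount, best_utility = returned, utility
    return best_amount

for sent in (0, 50, 100):
    selfish = best_return(sent, fairness_weight=0.0)  # classic self-interested play
    fair = best_return(sent, fairness_weight=0.6)     # player 2 who cares about fairness
    print(f"sent ${sent:3d}: selfish player 2 returns ${selfish:3d}, "
          f"fairness-minded player 2 returns ${fair:3d}")
```

With the fairness term switched on, player 2 returns enough to even out the payoffs, which in turn makes it worthwhile for player 1 to send money in the first place.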
Conitzer recognizes the subjectivity involved in determining what factors are morally relevant or even more important than others.
“Some people will think that it is morally wrong to lie to protect a family member, whereas others will think that lying in such circumstances is not only permitted, but required,” explains Conitzer. “Nonetheless, a successful moral AI system does not necessarily have to dictate one true answer in such cases…it can use either the moral values of a specific individual or group or some aggregate of moral values, depending on the situation.”
However, there are flaws with this approach.
Not all moral dilemmas can be represented in game theory. Game theory cannot distinguish doing harm from allowing harm, a crucial distinction in philosophy and ethics, because both choices would be represented by the same outcome.
“Consider a situation where a runaway train will surely hit and kill exactly one innocent person standing on a track, unless the player intervenes and puts the train on another track instead, where it will surely hit and kill exactly one other innocent person,” explained Conitzer.
In addition, the way a dilemma is framed (or described) can affect how humans respond to it; the same applies with game theory and machines.
Conitzer turned to machine learning instead.
Machine learning is a type of AI that allows a machine to learn something without being explicitly programmed. Conitzer suggests two steps to using machine learning.
The first is to code values into the machine so that it recognizes when it faces a moral dilemma (conflicting values) and can identify all of the relevant features.
“When a self-driving car must decide whether to take one action or another in an impending-crash scenario, natural features include the expected number of lives lost for each course of action, which of the people involved were at fault, etc.,” Conitzer writes.
The second step is to give the machine the ability to classify an action as morally right or wrong according to five basic moral foundations: harm/care, fairness/reciprocity, loyalty, authority, and purity. These judgments can also be quantified (how wrong would that action be?).
Essentially, the machine plays out a zero-sum game, in which one person's gains must come at another's expense, and looks for the decision that leads to the smallest loss.
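As a rough illustration of those two steps (my own sketch, not Conitzer's system, and every feature value and weight in it is hypothetical), a machine could score each candidate action in the crash scenario against the five moral foundations and choose the action with the smallest weighted "moral loss."

```python
# A rough sketch of the two machine learning steps (my own illustration, not
# Conitzer's system). Every feature value and weight here is hypothetical:
# the point is only to show actions being scored against the five moral
# foundations and the lowest-loss action being chosen.

FOUNDATIONS = ["harm", "fairness", "loyalty", "authority", "purity"]

# Assumed weights: how much a violation of each foundation contributes to loss.
WEIGHTS = {"harm": 5.0, "fairness": 2.0, "loyalty": 1.0, "authority": 0.5, "purity": 0.5}

# Candidate actions in the impending-crash scenario, with made-up violation
# scores per foundation (0 = no violation, 1 = severe violation).
ACTIONS = {
    "continue_forward": {"harm": 0.9, "fairness": 0.2, "loyalty": 0.0, "authority": 0.0, "purity": 0.0},
    "swerve_oncoming":  {"harm": 0.6, "fairness": 0.7, "loyalty": 0.0, "authority": 0.3, "purity": 0.0},
    "swerve_roadblock": {"harm": 0.4, "fairness": 0.1, "loyalty": 0.0, "authority": 0.1, "purity": 0.0},
}

def moral_loss(scores):
    """Weighted sum of foundation violations: one way to quantify 'how wrong'."""
    return sum(WEIGHTS[f] * scores[f] for f in FOUNDATIONS)

for action, scores in ACTIONS.items():
    print(f"{action:16s} loss = {moral_loss(scores):.2f}")
print("chosen:", min(ACTIONS, key=lambda a: moral_loss(ACTIONS[a])))
```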
Some argue that machine learning simply cannot account for the extreme range of possible moral dilemmas.
It’s all well and good if a machine can tell you whether or not to give your friend a hundred bucks, but you can’t just send it into a military crisis and leave it to figure out how to respond.
Furthermore, AIs may be artificially intelligent, but they are still simpletons, easily abused by humans.
For example, on Westworld, the Man in Black, a sadistic veteran guest, commandeers Lawrence, one of the park's Hosts, and forces him to abandon his programmed story line to do his bidding.
Before you scoff at the fictional example, consider Microsoft's Twitter chat-bot Tay. She was meant to be a teenage girl declaring her love for all humans. In less than 24 hours, Twitter turned her into a full-blown Nazi, as users forced her to repeat racist, fascist, and sexist tweets.
Maybe humans aren’t meant to interact with AI and it should remain the quiet force behind life as we know it.
Susan and Michael Anderson, a husband-and-wife duo from the University of Connecticut and the University of Hartford respectively, disagree; they are working to develop robots that can care for the elderly. They'd like to use the machine learning approach, but with the robot learning only from ethicists, since average people are clearly too irresponsible.
Susan Anderson cites philosopher W. D. Ross as inspiration for their work.
“When prima facie (essential) moral duties conflict with each other, they have to be balanced and weighed against each other,” she explains.
They considered a situation in which a patient refuses the treatment recommended by a healthcare worker. The robot learned the morally correct response to four specific scenarios and was then able to generalize the appropriate response to new ones.
“You should attempt to convince the patient if either the patient is likely to be harmed by not taking the advised treatment or the patient would lose considerable benefit,” said Susan. “Not just a little benefit, but considerable benefit by not taking the recommended treatment.”
This was the general ethical principle the machine was able to deduce on its own, using morals it learned from the ethicists.
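Here is a minimal sketch of that idea (my own illustration; a scikit-learn decision tree stands in for whatever learning method the Andersons actually used, and the case descriptions are simplified from the quote above): four ethicist-labeled cases, described by two prima facie considerations, are enough for a learner to recover the general principle.

```python
# A minimal sketch (my own illustration, not the Andersons' system): a
# decision tree learns from four ethicist-labeled cases, each described by
# two prima facie considerations, and recovers the principle quoted above.

from sklearn.tree import DecisionTreeClassifier, export_text

# Features per case: [patient likely harmed by refusing, considerable benefit lost by refusing]
cases = [
    [1, 0],  # refusal likely harms the patient               -> try to convince
    [0, 1],  # refusal costs the patient considerable benefit -> try to convince
    [1, 1],  # both                                           -> try to convince
    [0, 0],  # neither                                        -> accept the refusal
]
labels = ["convince", "convince", "convince", "accept"]

model = DecisionTreeClassifier().fit(cases, labels)
print(export_text(model, feature_names=["likely_harm", "considerable_benefit_lost"]))

# The learned tree encodes the rule above: attempt to convince the patient
# whenever refusal would cause harm or cost considerable benefit.
print(model.predict([[1, 1], [0, 0]]))  # ['convince' 'accept']
```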
It's clear that programming morals into a machine isn't impossible; it may even be reasonable. Humans follow a basic process when making decisions, even the toughest moral ones, so it is possible, though not simple, for a machine to learn that decision-making process too.
Soon enough, we may be able to trust our machines to make difficult moral decisions. But that doesn't necessarily mean we should let them make every decision.
We lose a crucial part of ourselves the more we rely on machines to do things for us. Studies have already shown that our spatial navigation skills have declined with the rise of GPS, and everyone's heard of the man who drove to Iceland unwittingly, blindly following his GPS.
We may soon have a moral GPS, but I shudder to think about what will happen to us if we rely on it to make all of the hard decisions for us.