THE further along the path of science and technology humanity shuffles, the trickier the moral dilemmas become. Two things happened this week to make me wonder whether machines should have a "Hero Mode": an overriding programme that places human life above whatever task the machine has been programmed to perform (even if that task is protecting your life).
It all starts with an interview with Hiroshi Ishiguro, director of the Intelligent Robotics Laboratory at the University of Osaka in Japan. He’s perhaps most famous as the man who built a robot replica of himself. You can find out about him here and you can watch his creepy, uncanny-valley robots here.
He’s travelling in South Africa at the moment, giving a lecture entitled "Will robots replace humans?", and I managed to get an interview.
His short answer to that question is yes. Robots are computers, he says. Computers have replaced a number of human activities, because they’re quicker and more accurate. A robot, being a sophisticated computer, will continue this trend. He specifically cites the example of robot help for the elderly. "In Japan, we have a lot of old people. Who will look after them?" He suggests robots.
Writer Gary Marcus poses an interesting question: "Your car is speeding along a bridge at 50mph when [an] errant school bus carrying 40 innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all 40 kids at risk? If the decision must be made in milliseconds, the computer will have to make the call."
This now moves beyond ethics (which pertains to the choices of an individual) to morality (which is a shared value system). If you are driving your car and choose to swerve to avoid hitting the children, or decide to save your own skin, that is your ethical choice. But if the car has to choose, the decision is preprogrammed, and so becomes a moral one — part of a shared value system.
In a thought experiment, let’s assume there are two options: one, that an onboard computer cannot distinguish between a bus of children and a single-driver vehicle; and two, that it can (I’m thinking life-form sensors).
In the first option, the driverless car either always swerves or never does. Would you place yourself at the mercy of a vehicle that would always put someone else’s life ahead of your own when it snaps into "Hero Mode"? In the second, you are actually asking a computer to make a value judgment on life. Here’s an example: a 20-year-old Nelson Mandela is in the driverless car (it’s a thought experiment, so bear with me). Are 40 children’s lives more valuable than an iconic symbol of peace who is credited with bringing down the apartheid regime? Hundreds of thousands of lives can depend on one individual. If the driverless car makes that value judgment, how is it deciding whose life is more important? This becomes a "greater good" question — to which philosophers will do more justice than a science editor.
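To see why the preprogrammed choice is so uncomfortable, the two options above can be caricatured in a few lines of code. Everything here — the function name, the sensing flag, the crude "count the lives" rule — is hypothetical and invented purely for illustration; no real driverless car works this way:

```python
# A deliberately crude, hypothetical sketch of the two "Hero Mode" options.
# Invented for illustration only; not any manufacturer's actual logic.

def hero_mode_decision(occupants_in_car, occupants_in_path, can_sense_occupants):
    """Return 'swerve' (risk the owner) or 'continue' (risk the others)."""
    if not can_sense_occupants:
        # Option one: the car cannot tell a school bus from a single-driver
        # vehicle, so the choice must be a fixed rule set at the factory.
        return "swerve"  # an always-heroic policy; "continue" is the alternative
    # Option two: the car can sense life forms, so it is being asked to make
    # a value judgment. Here it is the crudest possible one: count heads.
    return "swerve" if occupants_in_path > occupants_in_car else "continue"

# The bus scenario: one owner versus 40 children.
print(hero_mode_decision(1, 40, True))  # the head-count rule says "swerve"
```

The point of the sketch is that the last line is where the Mandela objection bites: "count heads" is itself a moral position, baked in before anyone turns the ignition.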
But the main point is, who will be asking these questions?