“IF YOU FOLLOW the ongoing creation of self-driving cars, then you probably know about the classic thought experiment called the Trolley Problem. A trolley is barreling toward five people tied to the tracks ahead. You can switch the trolley to another track—where only one person is tied down. What do you do? Or, more to the point, what does a self-driving car do?
Even the people building the cars aren’t sure. In fact, this conundrum is far more complex than even the pundits realize.” – from Self-Driving Cars Will Teach Themselves to Save Lives—But Also Take Them (by Cade Metz, Wired)
Yes, this is a conundrum – but the real dilemmas that will arise from self-driving cars and other “smart” machines will not be the rare life-or-death ones. They will be the smaller, everyday, every-millisecond decisions. They will be 99.9999% mundane and hardly noticeable — until they aren’t. And since all these machines will be networked, they won’t just make decisions; they will communicate, and therefore negotiate, with each other.
Imagine an ambulance making its way through traffic. Human drivers know to pull over and make way; self-driving cars will have to as well. That’s a decision to override normal behavior based on social convention and law. But what if your car is carrying a passenger who has just gone into labor? Should it matter that the ambulance you make way for holds a patient in a far less critical condition? Should there be some way to assess priority?
What if the siren belongs to a police car carrying an officer who is just impatient to get to lunch? (It happens!) Should these vehicles negotiate for their own right to move ahead of others? Traffic signals could join the negotiation too, changing or staying green a little longer depending on ranked requests from the network of vehicles in the vicinity. Should every car have an emergency setting that grants it the same precedence as an ambulance? Should emergencies have ratings that create ranked priorities in traffic? These decisions are going to be made; it’s a matter of when, not if. It’s up to us as a society to think about and decide on these issues.
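To make “ranked requests” concrete, here is a minimal sketch of how a signal might arbitrate among competing claims. The claim names, the weights, and the `grant_green` rule are hypothetical assumptions for illustration, not any real vehicle-to-infrastructure protocol:

```python
from dataclasses import dataclass

# A hypothetical priority scale. The claim names and weights are
# illustrative assumptions, not part of any real standard.
PRIORITY = {
    "routine": 0,           # ordinary commute
    "impatient": 0,         # "late for lunch" earns no special treatment
    "medical_minor": 2,     # stable patient in transport
    "labor": 3,             # passenger in labor, headed to the hospital
    "medical_critical": 5,  # ambulance with a critical patient
}

@dataclass
class Request:
    vehicle_id: str
    claim: str  # one of the PRIORITY keys

def grant_green(requests: list[Request]) -> str:
    """Give the light to the highest-ranked claim; ties go to
    whoever asked first (earlier list position)."""
    return max(requests, key=lambda r: PRIORITY[r.claim]).vehicle_id

queue = [
    Request("ambulance-3", "medical_minor"),
    Request("car-17", "labor"),
    Request("police-9", "impatient"),
]
print(grant_green(queue))  # car-17: the passenger in labor outranks both
```

Even this toy version forces the uncomfortable choices the paragraph raises: someone has to decide that “labor” outranks “medical_minor,” and that decision is now baked into every intersection.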
And what about paying for priority? People pay to use express lanes, and we don’t consider that unfair. Is it the same social calculus to pay a negotiated price that gets you home 5 minutes faster, either by nudging the traffic signals a bit or by having other self-driving cars that didn’t pay move slightly to the right as you glide by? It would hardly be noticeable if your car changed lanes or slowed down by 5%. You might assume it was avoiding a jam or taking a better route, when in fact your car had just lost a negotiation with a better-financed machine. But once you give up the wheel and get used to the car deciding these things, will you even know?
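That losing negotiation could be as simple as comparing how much each rider pre-authorized their car to spend. A minimal sketch, assuming a hypothetical `Car` record and sealed bids (nothing here reflects any real system):

```python
from dataclasses import dataclass

@dataclass
class Car:
    owner: str
    max_bid: float  # dollars the owner pre-authorized for faster trips

def negotiate_lane(a: Car, b: Car) -> tuple[Car, Car]:
    """Return (winner, yielder): the higher authorized bid keeps the
    lane; the other car quietly 'moves slightly to the right'."""
    return (a, b) if a.max_bid >= b.max_bid else (b, a)

winner, yielder = negotiate_lane(Car("you", 0.50), Car("better-financed rider", 4.00))
print(f"{yielder.owner} changes lanes; {winner.owner} glides by.")
# you changes lanes; better-financed rider glides by.
```

Note what’s missing: neither passenger is ever told a negotiation happened at all.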
Would you let your house adjust the temperature to serve the social good of using less energy in a heat wave? Smart thermostats already do that. How different would it be to bid for the privilege of running the air conditioner when you really, really want it? “I’ll pay up to $20 extra to have A/C tonight.” If it’s a zero-sum game, that means some other person won’t get their A/C that night. So — is this a way to raise more money for public utilities, or the opening battle in an invisible, machine-negotiated class war?
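If the grid really is zero-sum, the allocation could look like this toy auction, where only `capacity` households keep their A/C for the night. The household names, bids, and capacity figure are all made up for illustration:

```python
def allocate_ac(bids: dict[str, float], capacity: int) -> list[str]:
    """Top bids win; everyone else sweats. Zero-sum by construction."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[:capacity]

bids = {"household_a": 20.00, "household_b": 5.00, "household_c": 12.50}
print(allocate_ac(bids, capacity=2))  # ['household_a', 'household_c']
# household_b bid nothing extra -- and loses A/C for the night.
```

Two lines of logic, and ability to pay has quietly become the rule for who stays cool.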
So back to the trolley problem. Are five people more important than one? Only if people are equal. That equality is going to be programmable, a set value in the complex algorithm that controls the trolley (or car, or traffic light, or…). But as machines learn, it would be as natural to assign complex, variable values to humans as it is for chess-playing machines to assign values to the different pieces. The calculation of whom to put in danger would then rest not on five people vs. one person, but on how much each of those people is “worth” to the system doing the deciding. Who (or what) will create that valuation? Will it be based on age, net worth, Instagram followers, number of Tony Awards, or what? And if one of those trolley-bound people had the means to pay, or some other status that earned them extra worth-points in the calculation, who will decide whether that’s fair, just, or human?
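Here is that worry made literal: a deliberately crude sketch in which the trolley stops counting heads and starts summing per-person “worth” scores. Every weight below is an arbitrary assumption, which is exactly the point:

```python
def choose_track(track_a: list[float], track_b: list[float]) -> str:
    """Send the trolley down the track whose occupants sum to the
    LOWER total 'worth'. Plain head-counting is just the special
    case where every weight equals 1."""
    return "A" if sum(track_a) < sum(track_b) else "B"

# Five people the system scored as "ordinary" vs. one person it
# scored highly (age? net worth? followers? Tony Awards?).
five_ordinary = [1.0, 1.0, 1.0, 1.0, 1.0]
one_valuable = [7.5]
print(choose_track(five_ordinary, one_valuable))  # "A": the five lose
```

Change one constant and five lives outweigh one again. The arithmetic is trivial; deciding who sets the weights is not.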
We shouldn’t leave that to the machines.