MIT Technology Review: Why self-driving cars must be programmed to kill
An article popped up multiple times in my Facebook stream over the past few days, titled “Why self-driving cars must be programmed to kill”, published in the MIT Technology Review. I think the “impossible ethical dilemma” the article posits is false. Saying that it needs to be solved “before they can become widespread” is even more wrong, as the problems will lie in the transition much more than in the new normal.


(Photo: Saad Faruque)

Autonomous cars will not become mainstream because they know what the ‘morally right’ thing to do is in screwed-up situations. They will become mainstream because they will not allow those screwed-up situations to arise. Unlike us. That is the point of self-driving cars: not to be like us drivers, but to be unlike us.

In fact, self-driving cars will not be autonomous at all in the literal sense. They will be networked-driving: autonomous only relative to the passenger formerly known as the driver, but in synchronisation and negotiation with everything else.

Let’s look at the false dilemma first
The premise of the article is that a car ends up heading towards a group of 10 people in the road, with no time to stop. It then has to choose: run over those 10 people, killing them; drive into the wall on the side, killing the driver; or swerve to that same side, killing a child or a granny standing there on the sidewalk.

The first question here is not, as the article has it, what the car’s software should do to figure out ‘the right thing’. The first question is: how on earth did a self-driving car end up in a situation where it was ‘too late to stop’ at all? The article explains the lead-up rather lamely: “One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time.” The solution, however, lies in that lead-up, which is not described.

The two key things cars need to solve, so that the ethical choice above need not be made at all, are:
1) preventing ‘unfortunate sets of events’ from arising in the first place
2) ensuring it is never ‘too late to stop’, by erring on the side of caution

Autonomous cars will not go where there is no data

The second one is a good example of how self-driving cars already differ from human drivers. If a human has insufficient data, he will keep going (I assume there are likely no obstacles on my road until I see otherwise). If a car has insufficient data, it will slow down or stop (it won’t move until data tells it there is no obstacle to moving). Autonomous cars will not go where there is no data.
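To make that contrast concrete, here is a minimal sketch, in Python, of the two default assumptions. All names and numbers are invented for illustration; this is not any vendor’s actual logic:

```python
# Toy contrast between the two defaults; all names and numbers invented.
from typing import Optional

def human_driver_speed(obstacle_seen: bool, cruise_speed: float) -> float:
    """The human default: assume the road is clear until you see otherwise."""
    return 0.0 if obstacle_seen else cruise_speed

def autonomous_car_speed(confirmed_clear_m: Optional[float],
                         cruise_speed: float,
                         stopping_distance_m: float) -> float:
    """The cautious default: don't move until data confirms the road is clear."""
    if confirmed_clear_m is None:
        return 0.0  # no data at all: stop rather than assume
    if confirmed_clear_m < stopping_distance_m:
        # slow down in proportion to how little confirmed-clear road there is
        return cruise_speed * confirmed_clear_m / stopping_distance_m
    return cruise_speed
```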

The first one, preventing an ‘unfortunate set of events’, therefore requires more attention, as it contains some assumptions to unpack. But the short answer is: it’s not just the car.

The car is not the sole unit of sensing, nor the sole source of data
There seems to be an underlying assumption in these discussions that the car is the only unit with sensors. There is no reason why that would be the case when autonomous traffic is widespread, or even before. Everything will have sensors.

Sensors are cheap, or getting cheap fast. All cars have them, all phones have them, buildings have them, and realistically every road sign, lamp post, piece of road surface and piece of clothing can have them too. If they don’t already.

A self-driving car will be able, and needs to be able, to take in data from external sources to get a better understanding of its environment. Those external data sources can be anything, and are already growing in parallel with the self-driving car, ready to be used.

Lamp posts in the inner city of Eindhoven (Living Lab Stratumseind, link in Dutch) already monitor noise levels and crowd movement, and detect altercations. They can change light levels and colour to influence passers-by. These lamp posts are already registering the 10 people from the dilemma above and their overall behaviour, and can tell your car before you come around the corner, allowing it to reduce speed in anticipation if their behaviour suggests they might suddenly cross the road. So it will stop in time, or avoid needing to stop at all.
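Purely as illustration, a hint from such a lamp post to an approaching car might look something like the sketch below. The message fields and names are invented; this is not the actual Stratumseind interface:

```python
# Invented example of a car folding an external pedestrian alert into its
# speed planning; the message format is hypothetical, not a real ITS schema.
import json

lamp_post_alert = json.loads("""
{
  "source": "lamppost-17",
  "location_m_ahead": 120,
  "observation": "crowd",
  "crowd_size": 10,
  "movement": "agitated",
  "crossing_likelihood": 0.7
}
""")

def adjusted_speed(current_speed_kmh: float, alert: dict) -> float:
    """Reduce speed when a crowd ahead looks likely to cross the road."""
    if alert["observation"] == "crowd" and alert["crossing_likelihood"] > 0.5:
        # slow down well before the car's own sensors could see anyone
        return min(current_speed_kmh, 30.0)
    return current_speed_kmh

print(adjusted_speed(50.0, lamp_post_alert))  # -> 30.0
```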

All main roads in the Netherlands already know the number of vehicles passing in each lane and their speed at any given time, and all that data is published online in real time. Those sensors can already tell you to slow down because traffic ahead of you is denser or slower than you, well before you see any tail lights. Parking spaces along roads in various cities already know whether they are occupied or in the process of being vacated. Intelligent Transportation Systems (ITS) in general are blanketing the EU road system in sensors.

Data pheromones, just like ants’ pheromones, will keep us from bumping into each other

The cars themselves will also be communicating and sharing sensor data, allowing your car to ‘smell’ unexpected crowds of pedestrians blocks away and navigate around them: ‘data pheromones’.

Tesla cars already compare notes amongst each other, and “work as a network”. Your own car already takes in satellite data every journey. Your own navigation software already shares your car’s behaviour with every other user of that software, to help detect detours, traffic speed changes, route changes, roadblocks and traffic jams. Beyond sharing descriptive data, any machine actor could just as easily share intentions (“I will turn left in 250 meters, and slow down for it starting in 90 meters / 5 seconds”), allowing others to pre-emptively respond.
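As a hypothetical sketch, such an intention broadcast could be as simple as this. The fields are invented for illustration, not an actual vehicle-to-vehicle message standard:

```python
# Invented sketch of an intention message a vehicle might broadcast so that
# others can pre-emptively respond; not a real V2V message standard.
from dataclasses import dataclass, asdict
import json, time

@dataclass
class Intention:
    vehicle_id: str
    action: str              # e.g. "turn_left"
    distance_m: float        # where the action will happen
    starts_slowing_in_m: float
    starts_slowing_in_s: float

msg = Intention(
    vehicle_id="car-42",
    action="turn_left",
    distance_m=250.0,
    starts_slowing_in_m=90.0,
    starts_slowing_in_s=5.0,
)

# Broadcast as JSON with a timestamp; every nearby actor can read it and
# adjust its own plan before the turn is even visible.
payload = {"sent_at": time.time(), **asdict(msg)}
print(json.dumps(payload, indent=2))
```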

The omnipresence of data sources will only increase. The road surface can tell if it is covered in oil, water or ice, and therefore slippery, and let the world know. Traffic lights can already tell what approach speed will get you through fastest, and could let your car know. Where a human driver would try to run an orange light, an autonomous car would stop if it knows stopping would not influence the overall speed of its journey, or that stopping would allow a ‘green wave’ at subsequent lights.
Even the phones of those 10 pedestrians in the original dilemma are able to detect and signal sudden changes in speed and direction, and soon sensors in their clothing or shoes might too.
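The green wave is simple arithmetic once a light shares its cycle timing. A toy worked example, with invented numbers:

```python
# Toy green-wave arithmetic with invented timings; real ITS deployments
# publish richer signal phase and timing data than this.

def green_window_speeds(distance_m: float,
                        green_starts_s: float,
                        green_ends_s: float) -> tuple[float, float]:
    """Return the (min, max) speed in km/h that arrives during the green phase."""
    v_max = distance_m / green_starts_s * 3.6  # arrive the moment it turns green
    v_min = distance_m / green_ends_s * 3.6    # arrive just before it turns red
    return v_min, v_max

# A light 300 m ahead turns green in 20 s and red again in 45 s:
lo, hi = green_window_speeds(300, 20, 45)
print(f"approach between {lo:.0f} and {hi:.0f} km/h")  # -> between 24 and 54 km/h
```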

When the road surface, the lamp posts, phones and clothing, every car and every other vehicle (bike, skateboard, moped) around you are part of your car’s eyes and ears, all helping it navigate and negotiate passage, there will be no surprises. Surprises happen when you have just one range-limited sensor: the eyes of the driver.

The autonomous car software will take advice from anything around it, except your and my brainstem

An autonomous car is not really autonomous, other than being freed from the driver’s control. It is driving on tracks, just like existing driverless metro trains, except these tracks are made of data. Where those tracks run is continuously shifting, based on all traffic actors continuously negotiating passage, and on the signalled actions and intentions of every other actor.

The car will not be the sole unit of decision making either
Another faulty assumption, I think, is to see the car as the sole unit of decision-making. Just as anything around the car can provide data, anything can also be an actor itself, forcing a response from ‘my’ car as it sees its environment changing. Every other vehicle, and immobile objects too, will make decisions.

Road surfaces can go beyond merely detecting that they are slippery with ice, so that cars decide to slow down, by actively declaring themselves closed for the coming 23 minutes and 42.5 seconds while a salt truck is on its way. Road signs, taking data from lamp posts about increased pedestrian activity, can change the road to one-way traffic or close it off until the crowd has dispersed.
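Again purely as illustration, such a declaration on the data layer might be no more than this (invented fields, not a real road-infrastructure API):

```python
# Invented sketch of a road segment declaring itself closed; any listening
# car or routing service would treat the segment as unavailable until the
# stated time. Not a real road-infrastructure API.
import json
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
declaration = {
    "segment_id": "road-segment-0815",
    "status": "closed",
    "reason": "icy surface, salt truck en route",
    "reopens_at": (now + timedelta(minutes=23, seconds=42.5)).isoformat(),
}

# Published on the data layer; cars reroute without any human coordination.
print(json.dumps(declaration, indent=2))
```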

Traffic will ride on tracks. Tracks made of data. Traffic rules will be fluid, and traffic flow emergent.

Sensors are cheap, and adding algorithms to each of them to act on their own sensor data is not much more expensive. Where traffic rides on tracks of data, the rules of traffic can be datafied too. Fluid traffic rules will result, and autonomous cars will obey them (and even if they don’t, all other actors will see that in the data streams and adapt accordingly).

Saying the self-driving car is autonomous only from the previously needed driver is not the same, however, as saying that someone or something else is at the helm. There no longer is a helm to be at. There is just traffic flow, emerging from the negotiated decisions of each actor continuously optimizing its journey through an endless series of ‘probe, sense, respond’.

Existing ongoing analog trends will also play a role
In many cities around the world various types of traffic are already separated into different streams: separate lanes, or even separate routes altogether. Where that separation is not possible, other measures (like speed reduction in residential streets) are usually taken. With all those things also becoming reflected in data, this will be even easier to do. It will also be much easier to locally change the primacy of the car in traffic design on the data level than in physical reality.

False dilemmas shift attention away from getting solutions faster
So the solution to the article’s ‘impossible dilemma’ is to look not just at the system ‘car with sensors and software’, but also at the similar systems around it (cars, pedestrians, bicycles) and at the super-system it operates in (the road, the built-up environment, road design, traffic design). The car will stop in time because the lamp posts, grandma’s coat, the road surface and every other sensing object will collaborate with it so that there is no urgency at all.

So no ethical dilemmas then? On the contrary!
While choosing between running over granny and crashing into a group of ten other people is a false dilemma, there are many real ethical dilemmas to solve.

The article in MIT Technology Review suggested the false dilemma needs to be solved as a precondition of autonomous cars becoming normal. I think the period before those cars are normal will be much more challenging. When only a handful of cars are rational actors, because they are autonomous from drivers, they will be experienced as weird and unpredictable by you and me, who still have only our eyes to go by.

We’ve got 99 ethical problems, but killing granny ain’t one.

When we do get to mainstream, when traffic has become highly datafied, including street signs, lamp posts, road surfaces etc., there are many ethical dilemmas as to who gets to influence the algorithms and data streams a car takes as input. Already in the US, cars are being remotely shut down if their owners don’t pay their car loans on time. Should that be allowed? Can local government declare an entire neighbourhood a no-go area for specific groups of cars, or all cars, by having the roads tell the data layer they are closed? Can your insurance company tell my car not to do something? Do we even need insurance? Will individual car ownership still make sense, and if not, who then owns fleets? Can a lamp post be allowed to discriminate over who gets to drive down the street (residents only!), or signal the police if it profiles a car’s occupants as burglars? Can cars even be used by burglars anymore, when the cars know where they’ve been and the lamp posts know which cars were there?

And right now, can we see, check and change the software in our cars? Can we see what kinds of algorithmic influences have been programmed in? No, I can’t, nor can I for most other sensing devices around me. Don’t you need to know how your current car, drive-by-wire as it already is, makes autonomous decisions? Already your Volkswagen autonomously decides from sensor data whether it is on a test track or out on the road, and changes behaviour accordingly. Already John Deere is ready to sue farmers for checking and altering the software on their tractors, basically arguing that your tractor isn’t yours, that you’ve only rented a license to an operating system.

So the conclusion of the article I fully share: we need an ethics of algorithms. Just not for deciding when it’s ok to run over granny.