It’s 2032 and you’re heading into town in your self-driving car. You are alone, asleep in the back seat. Unbeknownst to you, a drama looms ahead. A child runs in front of your autonomous vehicle without warning. The car brakes, but since it is going at a fair pelt, it must swerve as well—to the left or right.
On the left-hand side of the road another car is approaching, driven by Jasmine Jones, a 23-year-old computer programmer who works for Waymo, owned by Alphabet, Google’s parent company (as it happens, Waymo manufactured your car). Jones has just started her first job, having completed her PhD in Gender Truth. She and her boyfriend, Ignatius Pope, have just set the date for their marriage, which is to take place in six months’ time, and put down a deposit on a home. Ignatius works as an algorithm formulator for Waymo’s main competitor, Tesla, the market leader.
Walking toward your vehicle on the right-hand sidewalk is retired philosopher and pro-life activist Fred Taylor, who is 72 and has just that day received a diagnosis of prostate cancer. He is a widower and has three adult children, all living long distances away, who call him occasionally but visit rarely. Behind Prof. Taylor is a green hedge; on Jasmine Jones’s side a high concrete wall.
The computer responsible for driving your car, using GPS, lidar, and other sensors, divines all this information in a split nanosecond and steers rightward, knocking down Fred Taylor. He will later be taken to hospital and pronounced dead on arrival. You awake to find your vehicle shuddering to a halt with its nose in a Portugal laurel hedge.
The self-driving car is coming. We anticipate it with a mixture of bemusement, disquiet, and excitement, all converging in disbelief: It’s not possible, not really. But it is not just possible but a virtual—so to speak—certainty, within perhaps a decade.
Even now we are only beginning to wrestle with the implications. Although there has been a scattering of books on the topic, it has yet to surface in the neocortex of Western society. Academic papers on the implications of artificial intelligence (AI), for example, address the implicit threat to human labor, or grapple with questions like, “Will intelligent robots acquire human rights?” Even when papers touch on self-driving vehicles, they tend to talk about traffic-flow models, or about how supplanting human decision-making with cybernetically formulated algorithms will affect the concept of responsibility. The occasional study deals with the issue of “artificial moral agents” (AMAs), but these tend to carry a darkish innuendo: that human beings are so morally deficient it would be beneficial to replace them with entities of artificial morality that implement only what is “good” about humanity.
An even more challenging issue lurks downstream from the responsibility question: How will humanity cope in a world in which there is no recourse to justice, reckoning, or even satisfactory closure after algorithms cause death or serious injury?
A fully autonomous vehicle is one capable of driving on its own, without human intervention; humans carried in such vehicles are always passengers. Self-driving vehicles represent a new category of machine, in that they have access to public thoroughfares on much the same basis as humans, without constraint of track or rail (hence “autonomous”). Computer-directed movement of machines is a brave initiative for all kinds of reasons, and will necessitate radical changes in laws and cultural apprehension.
Self-driving cars use sensors, cameras, and GPS to mediate between the world and the computer. In a situation such as the one described above, the car will make a judgment. But how, and by what criteria, is the software to be programmed to make these judgments? How, as dictated by its programming algorithm, should the computer prioritize the information it receives? And will we be able to come to terms with these “decisions” when the outcome involves the death or serious injury of a loved one?
In the approaching dispensation, the word “moral” may need to be amended or replaced. The almost universal human experience of morality is not capable of being comprehensively codified and tabulated by the computer. If we remove moral action from the remit of human beings and vest it in computers, in what sense will we be able to go on calling this morality? Will it be sufficient to incorporate into the algorithm some formulaic version of John Stuart Mill’s principle of utility, or Immanuel Kant’s categorical imperative?
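To see what such a codification would involve, consider a minimal, deliberately crude sketch, in Python, of Mill’s principle of utility rendered as a cost function. Everything here is hypothetical and illustrative: the outcome fields, the probabilities, and the life-year figures are invented for this scenario, and no manufacturer’s actual software is being described.

```python
# Hypothetical sketch only: not any manufacturer's actual code.
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and the harm the planner predicts for it."""
    maneuver: str               # e.g. "swerve_left", "swerve_right", "brake_only"
    fatality_prob: float        # estimated probability that someone dies
    persons_at_risk: int        # how many people the maneuver endangers
    life_years_at_stake: float  # actuarial life-years of those endangered

def utilitarian_cost(o: Outcome) -> float:
    """Mill reduced to arithmetic: expected life-years lost."""
    return o.fatality_prob * o.persons_at_risk * o.life_years_at_stake

def choose_maneuver(options: list[Outcome]) -> Outcome:
    """Select the maneuver with the lowest expected-harm score."""
    return min(options, key=utilitarian_cost)

# The scenario above, with invented numbers. The formula "prefers" the
# 72-year-old pedestrian over the 23-year-old driver purely because
# fewer expected life-years are at stake.
options = [
    Outcome("brake_only",   0.9, 1, 70.0),  # the child in the road
    Outcome("swerve_left",  0.7, 1, 55.0),  # Jasmine Jones
    Outcome("swerve_right", 0.7, 1, 12.0),  # Fred Taylor
]
print(choose_maneuver(options).maneuver)  # -> swerve_right
```

The arithmetic runs smoothly, and that is precisely the problem: the sketch “works” only by pricing one life against another in life-years, while a Kantian constraint, such as never treating a person merely as a means, cannot even be expressed in these terms.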