Self-driving cars are perhaps the most obvious harbinger that “the future” is finally here. Autonomous vehicles ferrying passengers are a science fiction cliché, and yet the U.S. is now on the cusp of just such a technological “revolution,” with various forms of self-driving cars already on the roads and available for purchase, and the technology only getting more prevalent. And, like many new quote-unquote “innovations,” it is being met largely with open arms. This is unfortunate, because the self-driving car, though often heralded as a lifesaver or a panacea for all traffic problems, comes with pressing problems of its own, many of which would outweigh whatever positives it might have. These include the outsourcing of moral decisions to corporations, serious security concerns, declining ownership rights, and a widening gap between rich and poor vehicle owners.
Every conscious activity humans undertake requires some sort of moral choice, and driving a car is no exception. A lane change, a right turn on red, or a decision to brake for a rabbit may not seem like moral choices (well, hopefully that last one does), but they all are. Switching lanes to pass a slow-moving car, no matter how well executed, requires the driver to decide that their speed is worth the potentially dangerous distraction to other drivers. The same applies to braking for a rabbit crossing the road: braking to save the rabbit’s life means weighing that life against the safety of someone who could be behind you, fail to notice your sudden stop, and rear-end you. I’m not judging these decisions as good or bad, but they are moral decisions that everyone constantly makes while driving.
It should be readily apparent, then, that the programmers of self-driving cars will have to program moral choices into them, meaning that people who buy self-driving cars are outsourcing their moral choices to companies like Toyota, Ford, or GM. And while the outsourcing of moral choice to a soulless, profit-driven company deserves a book-length examination, a few practical concerns can be addressed here. The biggest revolves around utilitarianism: the idea that the morally correct decision is the one that brings about the most good for the most people, which seems to be America’s working definition of morality (disregarding any blathering Ayn Rand followers and those working for the U.S. government, of course). For instance, when presented with a scenario where they must choose between letting one person die and letting four people die (the famous “Trolley Problem”), most people choose to let the one die and save the four. This changes, however, when people are told that they themselves are the one person set apart from the other four. In scenarios like that, most people end up saving themselves. Human instinct is self-preservation.
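To make the stakes concrete, here is a deliberately toy sketch of how such a rule might be encoded. Every function name, outcome, and weight here is hypothetical, invented purely for illustration; no manufacturer’s actual decision logic is being described. The point is only that a pure utilitarian rule and a self-preserving variant can produce opposite decisions from the same scenario:

```python
# Toy illustration only -- hypothetical names and weights, not any real system.

def utilitarian_choice(outcomes):
    """Pick the action that minimizes total expected deaths.

    `outcomes` maps an action name to the number of lives lost
    if that action is taken.
    """
    return min(outcomes, key=outcomes.get)


def self_preserving_choice(outcomes, occupant_at_risk, occupant_weight=10):
    """Same rule, but count the car's occupant extra heavily.

    `occupant_at_risk` is the set of actions that endanger the occupant;
    `occupant_weight` is an arbitrary penalty added to those actions.
    """
    weighted = {
        action: deaths + (occupant_weight if action in occupant_at_risk else 0)
        for action, deaths in outcomes.items()
    }
    return min(weighted, key=weighted.get)


# The classic trolley framing: swerving kills 1, staying on course kills 4.
scenario = {"swerve": 1, "stay_course": 4}

print(utilitarian_choice(scenario))                      # saves the four
print(self_preserving_choice(scenario, {"swerve"}))      # saves the occupant
```

With a pure head count the car swerves and sacrifices the one; once the occupant’s life is weighted extra, the same scenario yields the opposite decision. Which weighting ships in the car is exactly the moral choice being outsourced to the manufacturer.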
So how do self-driving cars solve the trolley problem? Do we let corporations program a moral course of action for us? If self-driving cars ever become prevalent, this question needs to be decided.
Morality is not the only problem with self-driving cars; there is also a huge security risk. In recent years, the world has seen hackers tap into massive private and public databases, with millions of people’s information accessed by unsavory elements. Companies like Sony and Target have suffered major breaches, and even political organizations like the DNC have faced massive security problems. It would be ludicrous to suggest that cars are somehow safe from similar attacks. The concern only grows as it becomes more and more obvious that we are approaching the “Internet of Things” we have been promised for so long by our Silicon Valley tech overlords. An Internet of Things would surely include a personal vehicle connected to the internet, a connection many non-self-driving cars already have. So in an age where hackers can access the records of a company like Sony, it is only a small leap to suggest that a similar group could access the millions of cars produced by a company like GM.
Hacking is not the only security concern, either. Like all software and hardware platforms, a self-driving car needs updates and replacement parts, and it is impossible to guarantee that every update ships flawless. As the Galaxy Note 7 showed, a faulty product can even be life-threatening. A version of this problem already exists in the car market, where recalls occur semi-regularly, but it is much easier for a regular person to replace a broken muffler than to re-code a self-driving car. And while a windshield-wiper recall might be annoying, it’s definitely not life-threatening, unlike, say, a bug in the code of a self-driving car.
A third, and to some perhaps lesser, concern with self-driving cars is declining ownership rights. In our current capitalist system (my thoughts on this can be found elsewhere), private ownership of property is the most important right: within reason, an individual can do whatever they want with what they own. There has been a concerning weakening of this ideal in recent years, however. Services like Spotify and Netflix, both of which I willingly use, have all but eliminated the actual owning of music and movies. The same trend is happening with cars. As more advanced computer systems are placed in cars, there has been an increasing amount of legislation designed to make it illegal for owners to “tinker” with their vehicles. This trend would likely only accelerate with self-driving cars. Whether this weakening of private property rights is a good or a bad thing is up for debate, but it is important to note that self-driving cars would only further it.
The fourth and final reason that self-driving cars are not all they are cracked up to be is that, at least in the short term and potentially in the long term, they will only widen the gap between the rich and the poor. Self-driving cars, like all new technology, will be expensive when they first arrive on the market, and yet they are being marketed and lobbied for as a savior of humanity. So, if people can’t afford to buy a self-driving car, are they actively a drain on society? Even if self-driving cars become so ubiquitous that they are affordable for all, what would the difference be between a self-driving Porsche and a self-driving Nova? Would the difference lie solely in the amenities, or would there also be cuts to the self-driving system itself? These are questions that must all be answered, and likely legislated, before self-driving cars go any further.
The U.S.’s current fascination with new technology and “disruption” has mostly been considered a net positive. However, with the recent controversies surrounding other “disrupters” like Theranos, Soylent, and Uber, it is high time that we take a longer look at new “disruptive” technologies and make some hard decisions about where we take them in the future. Let’s start with self-driving cars.