The moral dilemmas behind developing self-driving vehicles | KrASIA

The moral dilemmas behind developing self-driving vehicles

Written by Sarah Koh | 4 mins read

Can autonomous vehicles make ethical decisions at times of emergency without human intervention?

Self-driving cars have captured the public imagination for decades. In popular culture, fully autonomous vehicles (AVs) operate entirely on their own, without any human intervention.

Despite the advances made in self-driving technology, no car on the road today is 100% autonomous. While carmakers such as Tesla, General Motors, and Stellantis have developed features such as pedestrian detection, lane departure warnings, traffic sign recognition, and blind-spot detection, fully automated self-driving vehicles likely won't be ready before 2030, according to Accenture's estimates.

Pros and cons of driverless technology

Driverless cars have the potential to transform society in positive ways. According to the United States’ National Highway Traffic Safety Administration, the benefits of self-driving cars range from increased mobility for seniors and the disabled to efficiency and convenience.

Higher levels of vehicle automation are also expected to increase road safety by reducing traffic accidents and, ultimately, preventing them altogether. In fact, research has shown that 94% of crashes are due to human error.

At the same time, driverless technology can be beneficial for society by lowering carbon emissions and paving the way for more sustainable ways of living.

In recent years, AV technologies have made measurable progress. Many vehicles on the market are equipped with advanced driver-assistance systems (ADAS), which prevent drivers from drifting out of their lanes or help them stop in time to avoid a crash or reduce its severity.

For example, General Motors has developed a system that allows its vehicles to control steering and acceleration. Honda has a Civic sedan with a suite of features such as Traffic Jam Assist, which manages the vehicle’s acceleration, braking, and steering.

Meanwhile, Tesla has its Autopilot feature, which has been marketed as having "Full Self-Driving" capability. Despite the marketing claims, Tesla's "Full Self-Driving" function has only achieved Level 2 (partial driving automation) out of the six levels (0–5) in the widely used automated driving taxonomy published by the Society of Automotive Engineers (SAE). In practice, Tesla's vehicles still require a human driver who is ready to take control at any time.
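The SAE taxonomy mentioned above can be sketched as a simple mapping. The level names follow the published SAE standard; the helper function is an illustrative simplification for this article, not part of any real vehicle software.

```python
# SAE J3016 driving-automation levels, as referenced in the text.
SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",
}

def driver_must_supervise(level: int) -> bool:
    """At Levels 0-2 a human must supervise the vehicle at all times;
    from Level 3 upward the system drives itself within its design domain.
    (Illustrative helper, not a real API.)"""
    if level not in SAE_LEVELS:
        raise ValueError(f"unknown SAE level: {level}")
    return level <= 2

# A Level 2 system such as Tesla's Autopilot still requires supervision:
print(driver_must_supervise(2))  # True
print(driver_must_supervise(5))  # False
```

The key boundary is between Levels 2 and 3: below it, the human is always the driver; above it, the system is, at least some of the time.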

Risks and challenges

While there are potential economic and social benefits to driverless technologies, they could also present a myriad of risks.

Vehicle malfunctions, for instance, can be a safety hazard for road users. Last year, Tesla came under investigation after several of its vehicles operating on Autopilot crashed into parked emergency vehicles. By February this year, Tesla owners had filed 354 complaints over the preceding nine months about "phantom braking" in the Model 3 and Model Y.

Self-driving cars are also more vulnerable to cyberattacks than traditional cars because they are primarily software-driven products. As they become increasingly dependent on software to operate, there are growing concerns that hacking could compromise their proper functioning and pose a threat to drivers as well as car manufacturers.

Another issue is the lack of clear regulations governing the self-driving car industry. According to a recent McKinsey & Company survey on the future of autonomous vehicles, many countries lack comprehensive regulations on safety and operating standards. Rules on autonomous cars across Europe, for example, remain fragmented: the EU has no legislative framework specifically dedicated to the approval of AVs. In China, different regulations have emerged at the municipal level.

The lack of regulatory clarity has legal implications, including the issue of liability. It raises the question: When cars can operate themselves, are accidents the responsibility of the manufacturer or the driver?

Ethical complexities

These risks also highlight the ethical challenges that carmakers and governments have to face in the development of driverless cars.

One such issue involves the testing of autonomous vehicles. For example, self-driving carmakers have been using public roads in the US as a test lab for self-driving experiments.

According to recent reports filed with the California Department of Motor Vehicles, self-driving car companies such as Waymo and Cruise nearly doubled their testing mileage year over year, logging a record 4.1 million miles on California roads from December 2020 through November 2021.

Despite their potential to improve road safety, the testing of AVs on public roads could put unwitting drivers at risk. This highlights the moral dilemma that carmakers, regulators, and the public face: Does AV regulation create safer roads for people, or will it slow the adoption of driverless technologies that can reduce traffic accidents?

Another major ethical issue in the development of self-driving vehicles is the ability of machines to make moral decisions. While autonomous vehicles may reduce the number of road accidents, past records show that accidents cannot be ruled out. For example, when self-driving cars developed by Waymo and Cruise were operating in San Francisco at record levels in 2021, 53 collisions ensued.

In the event of an unavoidable crash, ethical decisions have to be made, and such decisions put even humans in a moral quandary, according to a study by researchers from the Massachusetts Institute of Technology.

In the researchers' "Moral Machine" online survey, which had gathered responses from 2.3 million people worldwide by 2018, participants were presented with 13 scenarios in which a collision involving an autonomous vehicle was unavoidable and would kill some combination of passengers and pedestrians. They were asked whom they would spare.

A significant finding from the survey was that moral principles that guided drivers’ decisions varied from country to country. Another was how women and men viewed ethical and moral situations differently.

Hence, without consensus on a universal moral code, it would be almost impossible to develop a car that satisfies the ethical frameworks of every population around the world.

Moving forward, it is important to broaden the public discussion from a focus on the crash behavior of vehicles to the many types of social change that AV technology can be involved in. These include looking at factors such as required levels of safety, the distribution of responsibilities between regulators and vehicle providers, and the trade-offs between privacy and other interests.

In particular, policymakers, carmakers, and the public will need to develop an agreement on compromises and prioritization among these ethical considerations.

