Laws, Ethics, and AI: Many Questions Without Clear Answers

The marriage of robots and ethics has been explored by philosophers, technologists, and science fiction writers since long before robots existed. Isaac Asimov’s “Three Laws of Robotics” first appeared in 1942, predating today’s advanced artificial intelligence by more than half a century.

But as AI technology advances and becomes ever more entwined with human life, questions of law and ethics have never been more relevant.

 

The Case of the Self-Driving Car

Modern Diplomacy writer Maksim Karliuk shows that the debate on this subject is very real, citing the case of a Mercedes self-driving car facing an unavoidable accident and the question of whose safety it should prioritize.

When representatives at Mercedes said their self-driving cars would prioritize the lives of passengers over pedestrians, the German Federal Ministry of Transport and Digital Infrastructure answered that such a choice based on a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.

Questions raised: Who decides ethical guidelines? How do we define which lives are more important than others?

 

Self-driving Mercedes-Benz F 015 Luxury in Motion concept car. Source: Mercedes-Benz

 

The Case of Autonomous Weapons

The Guardian journalist Bonnie Docherty covered a United Nations meeting to address the creation and use of autonomous weapons, also known as “killer robots.” What made it so newsworthy was that the group had already met four times to discuss the same issue.

“Legally, ‘killer robots’ would lack human judgment, meaning that it would be very challenging to ensure that their decisions complied with international humanitarian and human rights law,” she reported.

“For example, a robot could not be preprogrammed to assess the proportionality of using force in every situation,” Docherty continued, “and it would find it difficult to judge accurately whether civilian harm outweighed military advantage in each particular instance.”

Questions raised: How do lawmakers and the engineers who code these algorithms plan for complex, nuanced circumstances? What levels of force should robots be allowed to use?

 

Legal and Ethical Hope for AI

Although the terrain is challenging to navigate, panelists at the IAPP Global Privacy Summit are starting to lay the groundwork for tackling artificial intelligence laws and ethics.

 

IAPP Global Privacy Summit 2018. Source: IAPP

 

Mark MacCarthy, panelist and senior vice president of public policy at SIIA, said that regulating artificial intelligence can be difficult in some instances and obvious in others. A simple way to regulate the application of artificial intelligence, he added, is to look to laws already established.

“Using machine learning isn’t a get-out-of-jail-free card,” he said. “You can’t say, for example, ‘I’m using AI, so I don’t need to live up to fair lending laws.’”

Additionally, companies should look to SIIA’s ethical principles for AI for guidance. MacCarthy highlighted four principles at the 2018 summit:

 

  1. Rights: Participate in artificial intelligence applications that respect the law and human rights.

  2. Justice: Steer clear of artificial intelligence applications that target vulnerable groups.

  3. Welfare: Strive to use artificial intelligence to improve the welfare of all humans, so that all communities can benefit.

  4. Virtue: Use artificial intelligence in virtuous ways to help human beings.

 

While it’s not a complete solution, it’s a start. More conversations like the one at the Global Privacy Summit are needed to work through these major questions.

Don’t be afraid of what artificial intelligence can do for your business! Connect with Novatio Solutions now.

 
