TEL AVIV - Over the past few years, the MIT-hosted ‘Moral Machine’ study has surveyed public preferences about how artificial intelligence (AI) applications should behave in various settings. One conclusion from the data is that when an autonomous vehicle (AV) encounters a life-or-death scenario, how one thinks it should respond depends largely on where one is from and what one knows about the pedestrians or passengers involved.
For example, in an AV version of the classic ‘trolley problem,’ some might prefer that the car strike a convicted murderer before harming others, or that it hit a senior citizen before a child. Still others might argue that the AV should simply roll the dice so as to avoid data-driven discrimination.
Such quandaries are usually reserved for courtrooms or police investigations after the fact. But in the case of AVs, choices will be made in a matter of milliseconds, which is not nearly enough time for humans to reach an informed decision. What matters is not what people know, but what the car knows. The questions, then, are what information AVs should have about the people around them, and whether firms should be allowed to offer different ethical systems in pursuit of a competitive advantage.
Consider the following scenario: a car made in China to different factory standards than a US-made car is shipped to and used in the US. The Chinese-made car and a US-made car are heading for an unavoidable collision. If the ethical preferences of the Chinese-made car’s driver differ from those of the US car’s driver, which system should prevail?
Beyond culturally based differences in ethical preferences, …