- Nicole Laskowski, News Director
What should a self-driving vehicle do when faced with a life-or-death scenario? That's loosely the question behind a new crowdsourcing platform created by the Scalable Cooperation Group at the MIT Media Lab. Researchers want to get people talking about morality and machine intelligence.
The Moral Machine project does this by asking visitors to participate in the "judging" process. In each scenario, participants are presented with two options and asked to determine which is more acceptable. Should, for example, the self-driving car kill a pedestrian crossing the street and save the passenger, or should the car kill the passenger and save the pedestrian? Is there a lesser of two evils? Edmond Awad and Sohan Dsouza, the two research assistants responsible for developing the site, want humans to weigh in.
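The judging process described above can be pictured as a simple data model: each scenario pairs two outcomes, and a participant's response records which outcome they found more acceptable. This is a minimal, hypothetical sketch; the names and structure are invented for illustration and are not the project's actual data model.

```python
# Hypothetical sketch of a Moral Machine-style scenario and judgment.
# All field and type names here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    killed: list[str]  # who dies under this outcome
    saved: list[str]   # who survives


@dataclass
class Scenario:
    option_a: Outcome
    option_b: Outcome


def record_judgment(scenario: Scenario, choice: str) -> dict:
    """Record which of the two outcomes a participant found more acceptable."""
    chosen = scenario.option_a if choice == "a" else scenario.option_b
    return {"choice": choice, "killed": chosen.killed, "saved": chosen.saved}


# The swerve-or-stay dilemma from the article: kill the pedestrian and
# save the passenger, or kill the passenger and save the pedestrian.
scenario = Scenario(
    option_a=Outcome("Stay course", killed=["pedestrian"], saved=["passenger"]),
    option_b=Outcome("Swerve", killed=["passenger"], saved=["pedestrian"]),
)
print(record_judgment(scenario, "a"))
```

Each stored judgment is then one row in the dataset the researchers analyze.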
Awad and Dsouza sat down with SearchCIO recently to talk about the site, the problem with the data they're collecting and the findings they've uncovered since the site launched in June.
What is the Moral Machine?
Edmond Awad: The Moral Machine is a platform to gather the human perspective on moral decisions made by machines. Basically, the two goals of this platform are: First, to understand what the public thinks machines should do, such as self-driving cars. Self-driving cars are, today, the best example of autonomous machines making autonomous decisions. Second, [the goal is] to promote a discussion about this topic, because it's being neglected by car manufacturers. It's kind of a tricky point for them to talk about, which is, 'How can we implement ethical decisions into machines?'
Sohan Dsouza: While the self-driving car technology has been growing very fast, the ethical questions have not been resolved. And these questions have to be resolved one way or another. That's why we have the site.
The Moral Machine has attracted more than 2 million participants. What kind of information are you collecting?
Awad: Until recently, we were only collecting people's responses to the scenarios.
Dsouza: And we are able to geolocate those.
Awad: Recently, we added a demographic survey at the end of a session, which helps us know more about the demographics of people visiting.
Have you had a chance to analyze the data?
Awad: We haven't yet dug into the details of the data. So far, the data is a bit noisy. We're trying to refine the collection process. Each time we go over it, we realize we need to do more to clean this data. But the general trend is that we see some broad cultural differences. For example, we see that Western countries prefer utilitarian decisions more than Eastern countries. We also see that Eastern countries prefer to save passengers over pedestrians more than Western countries do.
What is making the data noisy?
Awad: When we built this website, we could have collected more data on people. But what we realized is that it would require more tasks from users. For example, we could ask users to sign up, make an account, and that would help us get clean data from the beginning. But, of course, we thought that would be too much work for the user. So, we wanted to make this easier.
We wanted everyone to be able to play, so we spared them the effort of answering detailed questions up front. We now know we have a lot of people coming in, and we can ask [those questions] of those who are interested at the end. That's how we can identify different users.
How long do you plan to keep the platform operating?
Dsouza: We'll continue upgrading it. We already added the survey recently, and we're going to internationalize it to make it multilingual. We'll probably add more features in the future. The platform itself as a concept and the level of abstraction is clearly something that works. So, we plan to make use of that and gather more data.
What are the next steps?
Awad: Trying to refine the data, trying to get clearer data. We're planning to do internationalization, which is translating the website. This is important on many different levels. First of all, we want this to reach more people and not be limited to those who speak English in other countries. Already, we have participants from more than 110 countries, but we know that most of these people are English speakers. So, we want to reach more people.
Second, we want to collect data about the participants. So far, our data is somewhat biased because those who have answered our questions don't represent the country [they're in]. We want answers that represent [individual] countries, which we could get by having people answer in their native language. Third, answering questions in your native language is different than answering in another language. There has already been an article about this -- about how people's answers change when they're talking about something in their own language. So, this is a major step for us.