AI ethics under scrutiny

January 10, 2017 // By Julien Happich
The newly launched Ethics and Governance of Artificial Intelligence Fund aims to foster global research that advances AI for the public interest, a focus that is itself likely to invite debate.

The Media Lab and the Berkman Klein Center for Internet and Society will leverage a network of faculty, fellows, staff, and affiliates to address society’s ethical expectations of AI: using machine learning to learn ethical and legal norms from data, and using data-driven techniques to quantify AI’s potential impact on, for example, the labour market.

Work of this nature is already being undertaken at both institutions. At the Media Lab, the Scalable Cooperation group, led by Iyad Rahwan, has been exploring some of the moral complexities associated with autonomous vehicles. And the Personal Robots group, led by Cynthia Breazeal, is investigating the ethics of human-robot interaction.

“The thread running through these otherwise disparate phenomena is a shift of reasoning and judgment away from people,” says Jonathan Zittrain, co-founder of the Berkman Klein Center and professor of law and computer science at Harvard University.

“Sometimes that’s good, as it can free us up for other pursuits and for deeper undertakings. And sometimes it’s profoundly worrisome, as it decouples big decisions from human understanding and accountability. A lot of our work in this area will be to identify and cultivate technologies and practices that promote human autonomy and dignity rather than diminish them.”
