AI ethics under scrutiny

Business news | By Julien Happich



Initially funded with $27 million from the Knight Foundation; LinkedIn co-founder Reid Hoffman; the Omidyar Network; the William and Flora Hewlett Foundation; and Jim Pallotta, founder of the Raptor Group, the new Ethics and Governance of Artificial Intelligence Fund will support research on AI in the public interest and seek to advance public understanding of AI.

The MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University will serve as the founding anchor institutions and are expected to reinforce cross-disciplinary work and encourage peer dialogue and collaboration across fields.

“AI’s rapid development brings along a lot of tough challenges,” explains Joi Ito, director of the MIT Media Lab. “For example, one of the most critical challenges is how do we make sure that the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society? How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only ‘smart,’ but also socially responsible?”
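To make the bias concern Ito raises concrete, here is a minimal illustrative sketch of one common fairness check, the demographic parity gap, which compares how often a trained model gives a positive outcome to two groups. It is not drawn from the fund's or the Media Lab's work, and the data, group labels, and function name are all hypothetical.

# Illustrative sketch only (not from the article): a simple
# "demographic parity" check, one basic way to spot whether a
# trained model treats two groups of people very differently.
# All data and names below are hypothetical.

def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between groups A and B.

    predictions: list of 0/1 model outputs
    groups:      list of group labels ("A" or "B"), one per prediction
    """
    rates = {}
    for g in ("A", "B"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return abs(rates["A"] - rates["B"])

# Hypothetical model outputs for applicants from two groups:
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# A gap near 0 means both groups get positive outcomes at similar
# rates; a large gap is one warning sign that bias in the training
# data has carried over into the model.
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")

Run on this toy data, the check reports a gap of 0.50: group A is approved 75% of the time and group B only 25%, the kind of disparity Ito warns a model can silently learn from biased training data.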

Rather than focusing on niche AI applications, the new initiative aims to break down silos among disciplines and to play an informative role for society as a whole, complementing and collaborating with existing efforts and communities, such as the upcoming public symposium “AI Now,” scheduled for July 10 at the MIT Media Lab.

The fund will also oversee an AI fellowship program, identify and provide support for collaborative projects, build networks out of the people and organizations currently working to steer AI in directions that help society, and also convene a “brain trust” of experts in the field.


The Media Lab and the Berkman Klein Center will leverage a network of faculty, fellows, staff, and affiliates to address society’s ethical expectations of AI, using machine learning to learn ethical and legal norms from data and applying data-driven techniques to quantify AI’s potential impact on, for example, the labour market.

Work of this nature is already being undertaken at both institutions. The Media Lab has been exploring some of the moral complexities associated with autonomous vehicles in the Scalable Cooperation group, led by Iyad Rahwan. And the Personal Robots group, led by Cynthia Breazeal, is investigating the ethics of human-robot interaction.

“The thread running through these otherwise disparate phenomena is a shift of reasoning and judgment away from people,” says Jonathan Zittrain, co-founder of the Berkman Klein Center and professor of law and computer science at Harvard University.

“Sometimes that’s good, as it can free us up for other pursuits and for deeper undertakings. And sometimes it’s profoundly worrisome, as it decouples big decisions from human understanding and accountability. A lot of our work in this area will be to identify and cultivate technologies and practices that promote human autonomy and dignity rather than diminish it.”


Related articles:

AI recognizes correct code, fixes programming errors

Does AI have a ‘white guy’ bias?

Can call centers keep you cheerful? AI listens

Honda wants to “read” driver’s emotions

Graphcore’s execs on machine learning, company building
