Advances in artificial intelligence are happening in big ways, but so far the progress has come almost entirely from the most technical minds and companies on Earth. Those in the liberal arts and humanities, however, now want in on AI before it drastically changes how we live.
To bring an entirely new perspective to AI research in the public interest, LinkedIn (LNKD) co-founder Reid Hoffman, the Omidyar Network (a philanthropic investment firm) and the Knight Foundation (which invests in journalism and the arts) have put together a $27 million fund for AI research. Called the “Ethics and Governance of Artificial Intelligence Fund,” it applies the humanities, social sciences and other disciplines to the development of AI.
“Artificial intelligence and complex algorithms in general, fueled by big data and deep-learning systems, are quickly changing how we live and work…” reads the announcement from the Knight Foundation. “Because of this pervasive but often concealed impact, it is imperative that AI research and development be shaped by a broad range of voices—not only by engineers and corporations, but also by social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers.”
The MIT Media Lab and Harvard’s Berkman Klein Center will serve as the founding academic institutions for the initiative. Together with Hoffman (who contributed $10 million), the Knight Foundation (which contributed $5 million) and the Omidyar Network (which contributed $10 million), they’ll form a governing board to distribute awards and facilitate other activities at the intersection of AI and other disciplines. The William and Flora Hewlett Foundation and Jim Pallotta, founder of the Raptor Group, have also each committed $1 million to the fund, which is expected to grow as other funders come on board.
“Since even algorithms have parents and those parents have values that they instill in their algorithmic progeny, we want to influence the outcome by ensuring ethical behavior, and governance that includes the interests of the diverse communities that will be affected,” Alberto Ibargüen, president of Knight Foundation, said in the post.
The announcement lists the following as issues the Ethics and Governance of Artificial Intelligence Fund will seek to address:
- Communicating complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?
- Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
- Advancing accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?
- Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?
- Expanding the table: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?
“One of the most critical challenges is how do we make sure that the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society,” said Joi Ito, director of the MIT Media Lab. “How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only ‘smart,’ but also socially responsible?”
What Ito is touching on has already become a hot topic in the field, with some AI systems displaying clear biases, especially when it comes to race. When the first beauty contest judged by AI produced a strikingly non-diverse set of winners, for example, the CEO of the company behind the project told the Observer that “technical limitations” were to blame. The White House even pointed out racism in AI in an official report in October, and we can’t forget the disaster that was Tay, Microsoft’s AI chatbot meant to learn from Twitter, which was quickly baited into sending out racist tweets.