Mark Zuckerberg has made it clear that Facebook's problems have gotten so big that only artificial intelligence has a chance at solving them, particularly those stemming from the incessant flow of content generated by Facebook's (META) more than two billion users, all speaking different languages.
It's not just the sheer amount of content circulating on Facebook that matters. Terrorism content, for example, seems straightforward enough for algorithms to handle. "I think we have capacity in 30 languages that we are working on," Zuckerberg said at a congressional hearing last April of Facebook's 200-person team focused on flagging and deleting terrorism content. "And, in addition to that, we have a number of AI tools that we are developing… that can proactively go flag the content."
But there’s harmful content that’s hard even for humans to detect, such as hate speech, which Zuckerberg conceded “is nuanced” and “is an area where I think society’s sensibilities are also shifting quickly.”
So, how have those AI tools come along since Zuckerberg touted them on Capitol Hill? And are they working? In a new interview with Wired magazine, Facebook's head of AI, Jerome Pesenti, offered a firsthand look at the progress, as well as the limitations, of artificial intelligence development at the social media giant.
“We’ve made a lot of progress,” Pesenti said of the use of AI in content policing. “Moderating automatically, or even with humans and computers working together, at the scale of Facebook is a super challenging problem.”
As in the broader AI field, Facebook's AI lab made progress first in understanding visual content, which came in handy for flagging nudity and violence in images and videos on the platform. More recently, the lab has made breakthroughs in understanding language, Pesenti explained. "We can understand if people are trying to bully, if it's hate speech, or if it's just a joke."
“By no measure is it a solved problem, but there’s clear progress being made,” he added.
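For a rough sense of what this kind of multi-class text classification looks like in practice, here is a minimal sketch using fastText, an open-source text-classification library released by Facebook. The labels, training examples, and file name are hypothetical illustrations; this is not Facebook's production moderation system.

```python
# Illustrative sketch only: a toy classifier that sorts posts into
# hypothetical categories (bullying, hate speech, joke, benign) using
# fastText, Facebook's open-source text-classification library.
import fasttext

# fastText's supervised mode expects one example per line,
# prefixed with __label__<class>.
samples = [
    "__label__bullying you are worthless and everyone knows it",
    "__label__hate_speech people like them should not be allowed here",
    "__label__joke that pun was so bad it should be a crime",
    "__label__benign see you at the meeting tomorrow",
]
with open("toy_moderation.txt", "w") as f:
    f.write("\n".join(samples))

# Train a supervised classifier on the toy data.
model = fasttext.train_supervised(input="toy_moderation.txt", epoch=25)

# Predict the most likely label (and its probability) for a new post.
labels, probs = model.predict("you are worthless")
print(labels[0], round(float(probs[0]), 2))
```

Real moderation systems operate at a vastly larger scale and, as Pesenti notes, still pair models like this with human reviewers for nuanced cases.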
Facebook is also working on tools to detect deepfake videos, which Pesenti said are not yet a serious problem on the platform but one his team is "trying to be proactive about."
Speaking more broadly about AI and its long-term impact on human society, Pesenti acknowledged that the field still has a long way to go.
“As a lab, our objective is to match human intelligence… Deep learning and current AI, if you are really honest, has a lot of limitations,” he said. “We are very, very far from human intelligence, and there are some criticisms that are valid: It can propagate human biases, it’s not easy to explain, it doesn’t have common sense, it’s more on the level of pattern matching than robust semantic understanding.”