The European Union’s latest advice for tech companies sounds a little unrealistic.
In a set of new recommendations issued on Thursday, the European Commission urged companies operating in EU member countries to remove terrorist content within one hour of it being flagged.
The recommendations also covered other illegal content, ranging from hate speech and child sexual abuse material to counterfeit products and copyright infringement, but didn’t set such a strict time frame for its removal.
“Considering that terrorist content is particularly harmful in the first hours of its appearance online, companies should as a general rule remove such content within one hour of its flagging by law enforcement authorities and Europol [European Police Office],” the European Commission explained in a statement.
The recommendations will mostly affect large social media platforms like Facebook, Twitter and YouTube. Companies don’t need to panic yet, as the guidelines are not legally binding.
But, first of all, is one hour even a reasonable expectation?
Timely removal of illegal content is more difficult than most people think, especially as some platforms have grown to such a scale that timely human curation is simply impossible.
YouTube (owned by Google), which sees 300 hours of video uploaded every minute, can detect problematic content within two hours at best, even with the help of cutting-edge technology.
In June 2017, YouTube introduced a machine learning tool to help moderate online content. It’s unclear how long takedowns took before that change, but machine learning algorithms have enabled YouTube to detect half of illegal content within two hours and 70 percent of it within eight hours, YouTube CEO Susan Wojcicki wrote in a report in December 2017.
Algorithms are responsible for flagging content, but the final decision on whether to delete a video falls to a human staffer, a Google spokesperson told Observer.
In addition, YouTube generates a digital fingerprint for every video uploaded to the platform, so once a video has been flagged, it cannot be re-uploaded, even by a different person.
Facebook uses a similar tool to prevent illegal content from being re-uploaded. Without citing specific numbers, Facebook told Observer that it has “made good progress removing various forms of illegal content.”
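The fingerprinting idea itself is simple to illustrate. The sketch below is a deliberately minimal stand-in, not either company's actual system: production tools use perceptual fingerprints that survive re-encoding, cropping and other edits, whereas a plain SHA-256 digest, used here for brevity, only catches byte-identical re-uploads. The function names are hypothetical.

```python
import hashlib

# Fingerprints of videos that moderators have flagged.
blocked_fingerprints: set[str] = set()

def fingerprint(video_bytes: bytes) -> str:
    """Return a digest standing in for a video fingerprint.

    Real systems use perceptual hashes robust to re-encoding;
    SHA-256 here is a simplification for illustration only.
    """
    return hashlib.sha256(video_bytes).hexdigest()

def flag_video(video_bytes: bytes) -> None:
    """Record a flagged video so future copies can be rejected."""
    blocked_fingerprints.add(fingerprint(video_bytes))

def allow_upload(video_bytes: bytes) -> bool:
    """Reject any upload whose fingerprint was previously flagged."""
    return fingerprint(video_bytes) not in blocked_fingerprints

flag_video(b"flagged-clip")            # a moderator flags the original
print(allow_upload(b"flagged-clip"))   # identical re-upload is blocked
print(allow_upload(b"unrelated-clip")) # unrelated upload passes
```

The design point is that matching against stored fingerprints is cheap, which is why re-uploads can be blocked instantly even when the initial detection took hours of algorithmic and human review.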
However, such a practice is difficult for smaller platforms to copy.
Both Facebook’s and YouTube’s content moderation tools are built in-house. Because of the nature of video content, YouTube’s algorithms are trained to recognize and analyze image and sound elements, an area where artificial intelligence researchers are still pushing for major breakthroughs.
The Computer & Communications Industry Association, a Washington, D.C.-based nonprofit representing tech companies, criticized the EU’s recommendation as unrealistic and potentially harmful to the tech economy.
“Such a tight time limit does not take due account of all actual constraints linked to content removal and will strongly incentivize hosting services providers to simply take down all reported content,” Maud Sacquet, a senior manager at the group, said in a statement.
The industry group further warned that adopting broad voluntary detection of illegal content across the internet would “lead to widespread online censorship by forcing hosting services providers to suppress potentially legal content.”
“We’re doing more than ever to prevent the abuse of our services, including hiring more people and investing in machine learning technology, and we’re making real progress,” Thea O’Hear, a spokesperson for Google’s Europe operation, told Observer.
“We share the goal of the European Commission to fight all forms of illegal content. There is no place for hate speech or content that promotes violence or terrorism on Facebook,” a Facebook spokesperson said.