Let’s face it: the internet is a horrible, horrible place. For every cute cat photo on Pinterest, there are dozens and dozens of sites, message boards and chat rooms that cater to white supremacists, pedophiles and hate groups—basically the scum of the earth.
We’ve known how sleazy the internet is since the days of watching Chappelle’s Show.
Back in August, website infrastructure and security service provider Cloudflare announced it was cutting service to 8chan.
To backtrack: Cloudflare operates a content delivery network and keeps sites up and running—and protects domains—no questions asked. 8chan, that nefarious go-to message board site for hate groups (which has “cleverly” rebranded as 8kun), was one of those protected domains. In a blog post announcing the company would be terminating service for 8chan, Cloudflare CEO Matthew Prince called it a “cesspool of hate” and stated that “even if 8chan may not have violated the letter of the law in refusing to moderate their hate-filled community, they have created an environment that revels in violating its spirit.”
This ethical tech decision came after the man responsible for the mass shooting in El Paso posted a lengthy racist and anti-immigration manifesto. Guess where he posted it? 8chan—right before the attack that killed 20 people.
A great story of the triumph of morals and ethics—if it ends there. But there’s more…
Now, despite Cloudflare’s ethical proclamation to stop supporting “platforms that have demonstrated they directly inspire tragic events and are lawless by design,” Tel Aviv-based startup L1ght has pointed out that Cloudflare is still providing security and infrastructure services to a multitude of abhorrent websites—everything from child pornography sites to the infamous Westboro Baptist Church.
Holy double standard, Cloudflare!
L1ght, a safety-as-a-service startup that brands itself as an “anti-toxicity company,” has reached out to Cloudflare three times over the last few months and received no response. Nada. Zip. Zero.
L1ght’s mission is to make the internet a safer place for children
So how was L1ght able to call out Cloudflare for dropping the ball in the cesspool-of-hate department? In non-technical terms, the company’s platform is a B2B API that social networks and games can plug into, which uses machine learning and artificial intelligence to track and filter out bad shit on the web. (It’s not a consumer product for parents.)
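To make the “plug into” part concrete, here’s a minimal sketch of what consuming a content-moderation API like this might look like from a platform’s side. Everything here is hypothetical: the names (`check_message`, `ModerationResult`) and the trivial keyword scorer standing in for the actual machine-learning model are illustrations only, since L1ght’s real API and algorithms are proprietary.

```python
# Hypothetical sketch of a plug-in moderation check -- NOT L1ght's API.
from dataclasses import dataclass

@dataclass
class ModerationResult:
    flagged: bool
    score: float  # 0.0 (benign) .. 1.0 (toxic)

# Stand-in for the ML model behind the service: a trivial keyword scorer.
TOXIC_TERMS = {"hate", "kill", "worthless"}

def check_message(text: str, threshold: float = 0.5) -> ModerationResult:
    """Score a single message; the platform decides what to do with a flag."""
    words = text.lower().split()
    hits = sum(1 for w in words if w in TOXIC_TERMS)
    score = min(1.0, hits / max(len(words), 1) * 5)
    return ModerationResult(flagged=score >= threshold, score=score)
```

A social network or game would call something like this on each message and route flagged content to its own moderation queue—which matches how the article later describes L1ght as “only the technology and not the ‘judge.’”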
In more technical terms, “L1ght detects online toxicity, specifically against children, using algorithms that are trained to think like kids and their potential attackers,” CEO Zohar Levkovitz told Observer. “We target bullying, shaming, child abuse, self-harm and hate speech.”
L1ght’s algorithm predicts harmful and toxic behavior online, particularly behavior directed toward kids.
“Our diverse team of data scientists, PhDs, psychologists and more uses deep learning together with human knowledge to understand not just negative content sent through text, video, audio and images, but the context and nuances behind the text,” explained Levkovitz. “The margin of error is smaller since detecting toxicity doesn’t have to be in real time; most conversations between kids and their abusers take a long time and build up.”
Still, could deep learning mistake irony or a joke as something toxic—even though it wasn’t meant to be toxic? How does L1ght pick up on these language-use subtleties?
“We can’t expand on how we reveal nuances and context due to IP consideration, but that’s exactly why we are working so hard. Our algorithms are unique in the way that they detect nuance, it’s what sets us apart from competitive platforms,” said Levkovitz. “For example, with pedophiles, it’s a process called ‘grooming.’ We focus on the context of conversations and try and conclude if they are about to turn toxic and a flag should be raised. Our algorithms can also detect things in real time, but in general, these things play out over the course of a few days/weeks/months.”
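The build-up dynamic Levkovitz describes—risk accumulating across a conversation over days or weeks rather than spiking on one message—could be sketched with a simple smoothed-score tracker. This is a toy illustration under my own assumptions, not L1ght’s algorithm, which the company explicitly declines to reveal; the per-message scores would come from a trained model in practice.

```python
# Toy illustration of conversation-level risk that builds over time.
# An exponential moving average means a single hostile message won't
# trip the flag, but a sustained pattern (e.g. grooming) will.

def track_conversation(message_scores, alpha=0.3, threshold=0.6):
    """Return the index of the first message where smoothed risk
    crosses the threshold, or None if the conversation never does."""
    risk = 0.0
    for i, score in enumerate(message_scores):
        risk = alpha * score + (1 - alpha) * risk  # smooth over history
        if risk >= threshold:
            return i
    return None
```

Note the design choice: one spike (say, a single score of 0.9) decays away, while several high scores in a row push the smoothed risk over the line—mirroring the idea that “these things play out over the course of a few days/weeks/months.”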
Most of L1ght’s findings are for its clients to handle according to their policies and rules.
“We are only the technology and not the ‘judge,’” said Levkovitz. “But we did manage to help get 130,000 pedophiles off WhatsApp.”
It’s data science meets humanity; let’s Venn diagram that intersection.