This Controversial Company Is Selling Your Social Media Photos to Law Enforcement

Clearview AI maintains a massive database of some three billion photos scraped from social media platforms, access to which is sold to law enforcement agencies. Smith Collection/Gado/Getty Images

Question: When does your face no longer belong to you?
Answer: When your face is used by Clearview AI.

Clearview CEO Hoan Ton-That recently told CBS This Morning that his company has the right to use your photos that are publicly posted on social media. Ton-That trumpeted the First Amendment as his defense.

So, get used to it: your face may no longer be protected. Delete that social media account, starting yesterday.

To backtrack, Clearview AI is your least favorite Black Mirror episode come to life. According to CNN, the company maintains a massive database of some three billion photos scraped (that is, extracted in bulk by automated tools) from social media platforms such as Facebook, Instagram, Twitter and YouTube. (Did you read the update to the terms of service?) Clearview will even retain these photos in its database after a user deletes them or makes their account private.

Oh boy, oh boy.

Here’s some more creepy info: Clearview sells access to its database of these three billion-plus social media photos to law enforcement agencies, which then match unknown faces against the images.

So those fun party pics you posted last month could already be in Clearview AI’s database for law enforcement purposes. Same holds true for those party pics you posted 10 years ago…

And Clearview doesn’t just hand over its database out of the kindness of its heart. No. The Chicago Police Department paid roughly $50,000 in taxpayer dollars for a two-year Clearview “pilot.” Clearview also says that over 600 law enforcement agencies across the country use its platform.

Ton-That went on to say that these images are used only for investigations after the fact and that Clearview is not a 24/7 surveillance system. He also said his system can identify a person in seconds by matching an unknown face to that person’s online photos, with results he claims are 99.6% accurate: take a photo through the Clearview app, and the match is made from there.

Clearview—creepy or creepiest?

So, it’s basically a Big Brother system, in which we, social media users, have helped law enforcement keep tabs on citizens by supplying them with our party pics.

Needless to say, there’s been controversy; Twitter, Google and Facebook have sent cease-and-desist letters telling Clearview to delete the data scooped from their sites. Police in New Jersey have already been told to stop using the controversial, privacy-invasive platform.

So many moral and ethical issues are at play here. Clearview has previously proclaimed that its technology was verified using “the ACLU’s facial recognition accuracy methodology.” The ACLU told BuzzFeed that it rejects Clearview’s assertion, which was based on the civil liberties group’s methodology for testing Amazon’s Rekognition, stating:

The report is absurd on many levels and further demonstrates that Clearview simply does not understand the harms of its technology in law enforcement hands.

Further, Nick Thompson, editor-in-chief of Wired, voiced his concerns about potential misuse to CBS News, stating: “The initial intended use is not the only use.”

Yes, the Clearview app could soon become the ultimate stalker tool.