
Contractors at Facebook working in a counter-terrorism unit in Ireland got inadvertently doxxed by the site last year, according to a report from The Guardian.
Approximately 1,000 Facebook employees and contractors flagged terrorism-related content as inappropriate as at least part of their jobs. When something got removed from a group or page, its administrators were notified, and a bug in the system caused that notification to reveal the personal Facebook profile of the moderator who removed it.
The Guardian obtained this information by interviewing one of the contractors in Facebook’s counter-terrorism unit, an Iraqi-born Irish citizen, hired in part because he spoke Arabic. He spent his days watching beheading videos and propaganda online. He said the bug revealed his identity to groups like ISIS and Hezbollah. It scared him so much he went into hiding.
To do their work, contractors logged into the site using their personal Facebook profiles, the same profiles where they shared photos of their kids, YouTube clips and opinions about Trump. Facebook told the Guardian that it was looking into setting up administrative accounts for this sort of work.
The company is now testing these new accounts.

The fact that contractors had to use their personal accounts to do their work is an extreme example of what we’ve known for a long time: social media has blurred the boundary between professional and private life. Facebook’s even overtaking LinkedIn for professional networking. But the Facebook contractors’ experience reflects an underlying assumption that it’s always best to connect a person’s identity with everything they do online. And, effectively, that identity always is connected, even when we think it isn’t.
Facebook has one of the best security teams in the world, but even it had a data spill. The more data accumulates, and the more places it sits in, the more likely it is to spill. A big spill was reported just yesterday.
Chris Vickery, a white hat security researcher, found a voter file with records on nearly 200 million Americans hosted in an unsecured Amazon S3 bucket. The file contained publicly available voter data (names, party affiliations and the elections each person had voted in), but the database owners had appended extra details to each record: political sympathies, polling responses, likely voting behavior and more. Together, the records amounted to robust political biographies of millions of Americans whose dangerous habit was participating in democracy.
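To make concrete why an unsecured bucket is such a hazard: if its access policy allows public reads, anyone on the internet can enumerate and download its contents with no credentials at all. Here is a minimal sketch in Python using boto3; the bucket and file names are invented for illustration, not the ones from the actual spill.

```python
# Minimal sketch: reading a publicly readable S3 bucket anonymously.
# The bucket name and object key below are hypothetical.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned requests: no AWS account, no access keys, no authentication.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

BUCKET = "example-voter-data-bucket"  # hypothetical name

# If the bucket allows public listing, this enumerates every stored file...
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])

# ...and if the objects are publicly readable, downloading one is a single call.
s3.download_file(BUCKET, "voter_records.csv", "voter_records.csv")
```

That is the whole "hack": no exploit, just a misconfigured access policy.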
Everyone knows that companies profile users, but firms uncover real insights when they compare notes.
By doing so, corporate America gets better and better at matching supposedly anonymous browsing to personally identifiable information. So American Apparel might, for example, know whether someone who has bought from it online before has been shopping for sweaters or skirts on other sites. It could turn that into a remarkably convenient email about a “sale” that just happens to cover exactly the sort of stuff that former customer has been looking at.
Similarly, when a longtime in-store shopper at J. Crew makes their first visit to its website, the site might know not to promise them the same big discount it would dangle in front of a visitor who has never bought anything from the company before.
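To see how little machinery this takes, here is a toy sketch in Python. The customer names, cookie IDs and data feeds are all invented, and no retailer’s actual pipeline looks exactly like this, but the join at the heart of it is the whole trick: a tracking partner reports which cookie IDs it saw browsing elsewhere, and the retailer already knows which cookie IDs belong to which (hashed) customer emails.

```python
# Toy illustration (not any retailer's real pipeline) of tying "anonymous"
# browsing back to a known customer via a shared identifier.
import hashlib

def hashed(email: str) -> str:
    """Emails are typically matched as hashes rather than in the clear."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# The retailer's own customer file: hashed email -> purchase history.
customers = {
    hashed("pat@example.com"): {"name": "Pat", "past_purchases": ["jacket"]},
}

# Cookie IDs the retailer previously linked to logged-in customers.
cookie_to_customer = {"cookie-123": hashed("pat@example.com")}

# What a tracking partner reports seeing on *other* sites.
partner_feed = [{"cookie_id": "cookie-123", "browsed": ["sweaters", "skirts"]}]

# Join the two: the "anonymous" browsing now has a name and an inbox.
for event in partner_feed:
    customer = customers.get(cookie_to_customer.get(event["cookie_id"], ""))
    if customer:
        print(f"Email {customer['name']} a 'sale' on {', '.join(event['browsed'])}")
```

A dictionary lookup and a hashed email: that is all it takes to turn someone else’s browsing history into a personalized pitch.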
In corporate speak, this is referred to as improving customer experience or user delight, but it’s straight up surveillance. And it goes well beyond marketing.
There’s a company called Acxiom that specializes in linking consumers’ behavior when they aren’t logged in (even when they aren’t using a computer at all) with personally identifiable information. The practice is detailed this week in a report on corporate surveillance from Cracked Labs, but it’s also spelled out on the website for an Acxiom product called LiveRamp.

Acxiom claims it can somehow respect privacy while connecting personally identifiable information, such as email addresses, to behavior that occurred when people weren’t even logged in.
How that fits a reasonable person’s understanding of privacy is anyone’s guess.
A company can say whatever it wants. It doesn’t have to be true.
In the early days of social media, people posted a lot of ridiculous photos and said some crazy stuff. Those were the days, before social media became a place for role-playing self-righteousness and good fortune. Then, in the late 2000s, netizens were barraged with advice about how sharing the wrong material could jeopardize their employment prospects.
Just last year, most hiring managers admitted that they look at the social profiles of potential candidates to learn more about them and assess their fit. And people without much credit history are more likely to get a loan if their social networks connect them to people with good credit histories (which means the opposite is also true).
Social media was pitched as a way of connecting with your friends, old and new. Since then, it’s become clear that social media and professional media aren’t really distinct. It should all be called “total media,” because we share all that we do all the time.
Digital critic Tijmen Schep broke down the implications of total media in a new report he published last week called “Social Cooling.” It connects the internet’s profiles of its users to worse health care, lower credit scores and diminished job prospects.
The Facebook contractors’ case is scary. The company’s underlying assumption that its workers should always use their real profiles to do their Facebook work showed it was too confident in its security team, but at least it found a fix.
That’s great, but when everything we do online can be connected to us whether we’re logged in or not, there is no fix. When there’s no fix, we start behaving like everything we do is being watched. When that happens, we get a little more conformist, a little more boring.
Maybe a duller world isn’t exactly scary to most people, but it should, at the very least, be kind of a bummer.