If there’s one thing that pisses off academic types about Silicon Valley, it’s the tech industry’s refusal to get consent before running experiments on its users.
Most of the time, tech companies’ experiments test questions no more serious than, “Do users get more excited about giving themselves bunny ears or elephant tusks?” Sometimes, though, the questions get considerably weightier, like that time in 2012 when Facebook tested whether negative emotions were contagious.
In that light, it’s unsettling to think about ways that these companies toy with us. It’s no surprise that when the head of product at Facebook-owned Instagram, Kevin Weil, came out to give a live interview for Stanford eCorner, the first question from the audience was about the company’s ethics around experimentation. But his interviewer, Stanford Professor Tina Seelig, jumps in almost immediately to try to make the hardball question a soft one.
‘I haven’t felt the tension at Instagram’
The exchange happens just before the 32-minute mark in the video above (it should start playing there), when someone from the audience asks a question.
He begins by saying he’d previously asked this question to someone at Facebook and wasn’t very impressed by the answer. Then he says, “So I’m curious: What frameworks or guidance or codes of conduct that you employ within Facebook or Instagram around how you employ consumer behavior psychology to nudge users in one way or another in a way you think is appropriate, informs your values and is ultimately ethical.”
“I’ll start with Instagram, because that’s the only place I’ve worked,” Weil replies, but immediately stumbles under the question. After a pause, he continues, “It’s a good question. I’m not sure that it comes up… I mean, that’s sort of an abstract way of thinking about things. We tend to approach problems much more pragmatically.”
That’s when Prof. Seelig pops in, rephrasing the question to: “Do you have an ethical code that ends up influencing the way the product gets shaped?”
Which is a much softer question. “How the product gets shaped” could be the font used for people’s comments or the exact distance between those two circular bands that frame your avatar. That’s not the question the audience member asked. He asked about how the company tries to get people to do what it wants them to do.
‘We don’t tend to do abstract learning experiments’
To his credit, though, Weil does try to tackle the actual question despite Seelig’s defensive maneuver. He explains that Instagram’s mission is to bring you closer to the people you care about, so he gives the example of which posts get prioritized when you open the app (he calls it “feed ranking”). Should it give more priority to the celebrity you obsess over or people you actually seem to know?
“It’s a hard problem figuring out what to optimize for,” Weil says.
Ultimately, though, he kind of punts.
“I guess I haven’t felt the tension at Instagram that maybe you’re referencing,” he says.
“We only run experiments that we think we would ship,” he concludes. “We don’t tend to do abstract learning experiments of maybe the kind that you’re referencing.” Here, Weil has to be referring to the Facebook experiment we mentioned above, and it’s really surprising that years after that flap, the company doesn’t seem to have a better line on the issue.
As Weil wraps up, Prof. Seelig can’t wait to move on to another question. Which is weird, because one might have thought an academic would be interested in exactly that discussion.
For his part, Weil seemed to say that there just isn’t that much at stake in the kind of experiments people run at Instagram. They aren’t necessarily playing around with people’s minds, just trying to improve engagement and performance.
Intriguingly, the very next question speaks directly to a successful experiment Instagram ran to change user behavior. The company wanted people to post a lot more than they were, and stealing Stories from Snapchat did the trick. That follow-up question turned out to be a dress rehearsal for the grilling Weil would get about Stories at TechCrunch Disrupt in New York, as we previously reported.
But are Instagram’s features really of such low emotional valence?
Earlier this month a study by the Royal Society for Public Health in the UK fingered Instagram as the worst app for the mental health of young people, after a survey of 1,500 people aged 14 to 24, as CNN reported. So apparently there is something very real at stake with how Instagram works.
Which doesn’t necessarily mean that Instagram, Facebook, YouTube or other social networks need to run experiments with quite the same ethical rigor as universities. Do we really want a world in which Facebook convenes a committee of greybeards to review a disclosure report for a pre-screened set of beta testers every time it wants to jam a new digital toy into its app?
That sounds way more boring than it would be useful, so Weil has a point: a lot of times we really are just talking about bunny ears versus mastodon tusks. That said, subsequent to the outcry, Facebook did make some changes… to its user agreement, as TechCrunch reported. Apparently it now has some kind of internal review process for tests that cross some sort of emotional threshold, but TechCrunch reported no third-party auditing. So it’s still accountability-free. Facebook and Instagram have not replied to a request for comment.
These apps are now the most powerful machines in the world, and it wouldn’t be horrible if the digital robber barons seriously grappled with ethical questions rather than limply hiding behind terms of service composed by sharksuit lawyers paid $1,000 an hour to do nothing more than cover Zuckerberg’s ass.
It didn’t make sense to publish newspaper content on the internet as scanned copies of the newspaper, and it won’t make sense to cram academic researchers’ ethical norms into digital products. But the industry needs something, because today tech companies’ ethical standards don’t amount to much more than: if you use it, we can bruise it.