The 2024 presidential election season has kicked into high gear, and it’s a big moment for social media content moderators. The internet is becoming more personal and stratified for its users, and algorithms are only getting better at delivering the information that most resonates with each person individually, regardless of whether it’s good, bad or simply untrue.
Content moderators for Meta (META)’s major social platforms, Facebook and Instagram, are tasked with keeping the spread of misinformation at bay, but they are often reacting to issues only as they arise. Because of this, they haven’t been able to prevent episodes of civil unrest, such as political insurrections, built on false information distributed largely through social media platforms.
During a panel discussion on the future of online content moderation at SXSW yesterday (March 10), Ronaldo Lemos, a member of Meta’s Oversight Board, an independent advisory body, spoke about how those consequences played out in his home country of Brazil. Much as around 10,000 supporters of former President Donald Trump marched on the U.S. Capitol on Jan. 6, 2021, in the false belief that the 2020 presidential election had been stolen, thousands of supporters of former Brazilian President Jair Bolsonaro stormed their country’s government buildings in 2023 for the same reason.
“It was a failure on the part of moderation because that content stayed up. It should have been taken down because it’s clearly incitement for a violent action,” Lemos said. “The case was escalated to the Oversight Board, and we immediately not only decided to take the content down, but we made some very important recommendations to Meta in regards of how we should approach this extremely important year for democracy.”
Also on the panel were Maxime Prades, Meta’s director of product for integrity, and Brittan Heller, a cyber policy expert and Stanford University lecturer. The panel was moderated by Charlotte Willner, executive director of the Trust and Safety Professional Association.
Prades said Meta has been shaping its election-year strategy through a policy forum that updates users on its efforts. Specifically, the company has been using third-party fact-checkers and consulting with non-governmental organizations as well as a separate network of independent “trusted partners.”
“We need to have this pulse-on-the-ground movement within hours sometimes,” Prades said. “So we work with all these organizations and external bodies to get that pulse of how things are going and what we should be changing in our enforcement, in our policies.”
Lemos said Meta should find a concrete way to measure its election integrity efforts, and extend their duration. “Elections, they do not end when the results are announced, as we know,” he said. “So the election efforts, they should last until the transition of power has been completed.”
Heller, the cyber policy expert, argued that international institutions offer models for how content moderators could tackle issues like election integrity. One example is the International Court of Justice (ICJ), whose jurisdiction all parties must consent to. The court has an advisory function that lets countries consult it proactively, as well as a triage function for emergency measures, Heller said.
“Unfortunately, at the pace of international law, ‘quickly’ is like six years, so I would recommend companies be a little more nimble. But I think the ICJ tends to be something that states like to go to because it’s consensual, because they feel like they can get value out of the interactions,” Heller said.