On Sunday, the Guardian published what it has dubbed the Facebook Files, leaked documents describing the rules Facebook has developed to govern what its 2 billion users are allowed to share publicly. The more than 100 internal documents reveal the chasm between the platform’s simple and anodyne Terms of Service or Community Guidelines and the complex and granular moderation work that really takes place on the platform. Until now, there has been little information about how hidden teams of workers, assisted by expanding and experimental AI, regulate millions of violent, hateful, abusive, and illegal items every day.
Facebook’s guidelines — which reflect mainstream laws and cultural norms — have proven woefully inadequate for addressing violent and hateful user-generated content. That’s been true for a long time, and it is even more so today as images, both live and still, come to replace text as our dominant form of expression. Murderers and torturers in search of instant fame and glory take advantage of live-streaming and ranking algorithms to broadcast graphic crimes with the swipe of a thumb. Russian hackers steal US military “revenge porn” from secret Facebook groups and use it for political blackmail. Terrorists create private groups to trade ideas and cultivate violent ideation.
Users relentlessly post text, images, and more on Facebook, where teams of thousands cannot scramble fast enough to scrub offending material off the site. Facebook, for its part, is releasing new tools that attempt to address the thorny fact that images and live video, which are much harder to control, are outpacing and even replacing text-based online speech. Clearly, Facebook is failing.
During a visit to his alma mater, Bowdoin, this spring, Dave Willner, who helped build Facebook’s first speech guidelines and is now head of community policy at Airbnb, described the challenge of dealing with images in stark terms: “[W]hile Facebook has technically hard problems—like it has to store two billion people’s photos indefinitely for the rest of time—the hard problems those companies face are, what do you do about live video murder? In terms of consequences for the company and the societal impact, the hard problems are the human problems.”
Despite years of warnings from academics, sociologists, and civil society advocates about the potential harm of unleashing technologies with minimal understanding of their impacts, social media companies unabashedly continue to espouse utopian visions. Tech powers continue to advertise products with promises of magic and awe. These products often come with few if any safety or privacy protocols to guard against their potential for amplifying long-standing exploitation and violence. Facebook markets Facebook Live as “a fun, engaging way to connect with your followers and grow your audience.” That may be how the majority of users use the product, but a quick Google search of Facebook Live turns up pages of headlines about live-streamed suicide, murder, and rape.
“Keeping people on Facebook safe is the most important thing we do. We work hard to make Facebook as safe as possible while enabling free speech,” Facebook’s Monika Bickert, head of global policy management, said in a statement responding to The Guardian’s story. “We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help.”
Over the past five years, we have both engaged in free speech and safety advocacy work and written extensively about the lack of transparency and the power inherent in Facebook’s moderation of user-generated content. We have interviewed dozens of industry leaders, experts, employees, former employees, moderators, academics, legal experts, and social scientists. One conclusion is clear and consistent: companies like Facebook — and the practices they develop to respond to the challenges of moderation — are human endeavors, governed by human experiences, judgments, and needs. And maybe, above all else, profit. Social media platforms are not simply technologies with problems that can be solved with more proprietary technology, people, or programming — they are socio-technologies with impacts that far exceed the protocols and profit goals of any platform.
They have, as is now glaringly obvious, serious consequences for individuals and society. And these consequences are a result of private deliberations behind closed doors. Corporations regulate speech all day, every day. Silicon Valley may not have anticipated the scale, costs, and implications of moderation, but many others, like UCLA’s Sarah T. Roberts, have been trying to raise public awareness and demand greater corporate transparency for years.
Where does Facebook go from here? Solutions remain elusive, but there are several consistent suggestions we’ve heard from observers:
Build a tech fix
There is currently an industry-wide focus on the use of algorithms to “solve” the problem of moderation. But algorithms are learning machines that rely on inputs and past behavior, often turning the “is” into the “ought,” at scale. Automation means the reproduction of status quo inequities evident in the moderation guidelines published by The Guardian. It is currently okay, according to the published documents, to say “Let’s beat up fat kids,” and “To snap a bitch’s neck, make sure to apply all of your pressure to the middle of her throat,” but not to specify which fat kid you want to beat up or which bitch you want to kill. Algorithms cannot appreciate preexisting biases, a specific context, or pervasive environmental hostility in making those decisions. Without accountability, oversight, and thoughtful ethical intervention, algorithms run the risk, arguably and demonstrably, of making an already complex situation worse. Initiatives are underway to improve an algorithm’s ability to address these issues, but for the foreseeable future the nature of moderation remains intensely human.
“Facebook and others keep telling us that machine learning is going to save the day,” says Hany Farid, professor and chair of computer science at Dartmouth and a senior advisor to the Counter Extremism Project. He developed the groundbreaking photoDNA technology used to detect, remove, and report child-exploitation images. Farid says that his new technology, eGlyph, built on the conceptual framework of photoDNA, “allows us to analyze images, videos, and audio recordings (whereas photoDNA is only applicable to images).” But a better algorithm can’t fix the mess Facebook’s currently in. “This promise is still — at best — many years away, and we can’t wait until this technology progresses far enough to do something about the problems that we are seeing online.”
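To make Farid’s distinction concrete, the sketch below shows, in toy form, how fingerprint-matching systems in the spirit of photoDNA and eGlyph operate: an uploaded image is reduced to a compact hash and compared against a database of hashes of previously identified abusive material. The hashing scheme, the KNOWN_BAD_HASHES set, and the distance threshold here are hypothetical stand-ins for illustration, not the proprietary algorithms themselves.

```python
# Illustrative sketch only: a toy perceptual "average hash" matcher in the
# spirit of fingerprint-based detection tools such as photoDNA and eGlyph.
# The hashing scheme, threshold, and KNOWN_BAD_HASHES set are hypothetical.
from PIL import Image  # requires Pillow (pip install Pillow)

def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image, grayscale it, and set one bit per pixel that is
    brighter than the mean. Similar images yield similar bit patterns."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical database of fingerprints of previously identified abusive images.
KNOWN_BAD_HASHES = {0x8F3C_A1B2_4D5E_6F70}

def matches_known_content(path: str, max_distance: int = 5) -> bool:
    """Flag an upload whose fingerprint is a near-duplicate of any
    known-bad fingerprint (small Hamming distance)."""
    h = average_hash(path)
    return any(hamming(h, bad) <= max_distance for bad in KNOWN_BAD_HASHES)
```

The limitation is the one Farid points to: matching of this kind only catches near-copies of content a human reviewer has already identified and hashed. It cannot judge the context or intent of material that is genuinely new, which is why moderation remains, for now, intensely human.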
Spend more money and hire more people
In response to recent criticism, Facebook announced that it was hiring an additional 3,000 moderators, on top of the 4,500 it already employs worldwide, to take down questionable content. More moderators, the argument goes, could better handle the fire hose of abusive, damaging, and offensive content. But there is no magic number of people that will stem the flow of violent, hateful, and threatening content, even if they are able to respond more quickly to reports. Adding and supporting moderators, while critical, does nothing to change the process of deciding what counts as acceptable content.
Regulate platforms more heavily
In the United States, companies like Facebook are largely unregulated when it comes to user-generated content. This is thanks to Section 230 of the Communications Decency Act, which legally absolves platforms of responsibility for what their users post. Social media platforms are not, for the purposes of US law, considered publishers and are almost entirely immune from content-related legal liability. But outside the US, Facebook is engaged in endless legal battles with governments whose views on speech differ greatly from those of the US government.
Activist Jillian York, director for International Freedom of Expression at the Electronic Frontier Foundation, runs a project, onlinecensorship.org, that tracks the removal of user content by platforms. She is acutely aware of the tightrope walk between government regulation of speech and regulation by private corporations. “Section 230 is vital to ensuring that platforms like Facebook (as well as smaller websites and individual publishers) aren’t held liable for users’ speech,” says York. She is strongly opposed to any kind of full government oversight, but believes regulation by the state is preferable to regulation by private companies like Facebook. “The US is unique in these protections, and while I understand that there are drawbacks, I think the benefits are enormous. That said, Facebook already plays an editorial role in user content. I think the key is more transparency, all around.”
Don’t regulate or moderate
Free speech activists and absolutists argue that Facebook should not be in the business of moderating content at all. This position is untenable in almost every conceivable way. Free rein for users is incompatible with Facebook’s existence as a profitable, expanding, global brand. An unmoderated platform, Facebook executives and industry experts know, would almost immediately spiral into a pornographic cesspool, driving mainstream users away. Any practical and realistic approach to the problems represented by user-generated content has to presume some level of moderation.
Change how Facebook moderation functions
Almost without fail, executives, civil society advocates, and subject matter experts who have worked with Facebook during the past 10 years believe that greater transparency and engagement are critical both to assessing problems and to finding solutions. The Guardian’s publication of the moderation guidelines is so important because Facebook, like Twitter, YouTube, and other similar platforms, refuses to share details of its content regulation. While there are many understandable reasons for secrecy — abusers gaming the system, competitors gaining insights into internal processes — there are many more compelling arguments for greater transparency and accountability.
Previously, we have explored the possibility that Facebook effectively functions more like a media company than a tech company and that the act of moderation arguably translates into “publishing.” While categorizing Facebook as “media” would not solve the problem of moderation, per se, it would have serious ethical, professional, and legal implications. Facebook would shoulder more responsibility for its powerful influence in the public sphere.
In his 2016 book Free Speech, Timothy Garton Ash calls Facebook a superpower, built exclusively on a profit model and lacking the moral and legal mechanisms of accountability that exist for traditional media. Facebook controls vast, privately owned public spaces. If the company were a country, it would be the world’s largest by population. But it does not have the formal lawmaking authority of sovereign states, and there is no formal mechanism of accountability. “Yet [its] capacity to enable or limit freedom of information and expression is greater than most states,” writes Garton Ash.
Nicco Mele, a technologist, former deputy publisher of the Los Angeles Times, and now director of the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School, argues that Facebook is a media company “because it derives almost all of its revenue from advertising, by monetizing its audience attention.” Media companies, he wrote in an email, “have special responsibilities to the public good because of their unique power to shape public opinion.”
The recent attention to Facebook’s moderation failures, he says, “increases the likelihood that Facebook will need to put in place new methods of managing the content published on their platform; the company already does this around some kinds of content (child pornography, for example). In my assessment, the company is far from the level of monitoring and infrastructure their platform requires to meet its obligations to the public — and consequently a likely increase in headcount and expenditures is in its future. The longer the company puts off self-policing, the more likely regulation becomes.”
Mele believes regulation could mean many things for consumers. “We have a long history of regulating media in different ways — remember Janet Jackson’s Super Bowl halftime fashion mishap? Or more recently, Stephen Colbert’s unfortunate use of the word [cock] holster on late-night television?” The Federal Communications Commission, he notes, monitors and regulates broadcast media, so “there is a precedent that any regulation could follow. The movie industry, facing potential regulation, voluntarily adopted the parental rating system. Facebook is at a crossroads.”
For now, it’s all but inconceivable that Mark Zuckerberg will ever call Facebook a media company or a publisher. Early on, he described Facebook as an “information infrastructure.” He has recently used the term “social infrastructure” instead, as reported by Sarah Kessler, an expression that evokes such public service functions as the USPS or public housing. If that’s the case, then the debate over moderation is really a debate about a global public commons, even as that commons is privately held and regulated.
On the same day the Guardian released the leaked Facebook Files, the New York Times ran a profile of Twitter founder Evan Williams. In the piece, Williams apologized for Twitter’s possible role in Trump’s win and admitted he’d never fathomed that Twitter would be used for nefarious purposes. “I thought once everyone could speak freely and exchange information and ideas, the world is automatically going to be a better place. I was wrong about that.” His mistake, reads the story, “was expecting the internet to resemble the person he saw in the mirror…”
“The Internet Is Broken,” the story headline read. “Broken” has become the preferred expression for referring to the dark, unsavory part of the internet. But you wouldn’t know that the internet was broken from Facebook’s earnings report for the third quarter of 2016 — a period during which Facebook’s net income increased 166 percent over the same period in 2015. The company’s ad revenue hit an unprecedented $6.8 billion. Harassment, fake news, and gruesome, graphically violent, and salacious content are profitable because they are relentless drivers of user engagement. Harvard Law School professor Susan Crawford, author of Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age, says, “This is about Facebook’s determination to have billions of people treat its platform as The Internet… Facebook wants simultaneously to be viewed as basic infrastructure while ensuring its highly profitable ways of doing deals are unconstrained.”
In the case of Facebook, or any other major platform, tech founders and leaders, who make billions off the affective digital labor of billions of people worldwide, have a distinct responsibility to imagine all the ways their platforms can be perverted in a world that includes murder, rape, child abuse, and terrorism, and to anticipate the people who will use platforms like Facebook to enact them.
“They’ve been trying to contain this problem, real and significant, for a long time,” says Roberts, “but it’s no longer containable.”
We are grateful to whoever took the significant risk of sharing documents Facebook has worked so hard for so long to keep under wraps, just as we are grateful to the moderators, community managers, and senior executives who risked their jobs to talk to us about their hard work. For now, in the words of Hany Farid, “Here we are in crisis mode.” Facebook and others, he says, “have been dragging their feet for years to deal with what they knew was a growing problem. There is no doubt that this is a difficult problem and any solution will be imperfect, but we can do much better than we have.”
This post originally appeared at The Verge.