Facebook's Everything Problem
Facebook’s ‘Oversight Board’ recently affirmed Facebook’s choice to suspend then-President Donald Trump from its platform the day after the Capitol riots. Although many critics - myself included - support the decision to keep Trump off Facebook, we’re not too pleased with the process itself. “[It’s] a red herring, substituting a simulacrum of due process in certain high-profile cases for substantive reform,” Will Oremus writes in a New York Times op-ed. Oremus and others point out that while Facebook has promised to obey the content moderation decisions of the board, it can choose to ignore any policy recommendations. “[Facebook] did not empower the board to watch over its products or systems — only its rules and how it applies them.”
And yet it is Facebook’s product design, policies, and internal governance mechanisms that cause systemic harm on and through the platform. Three recent reports on systemic content moderation issues show this clearly. In late March, the Tech Transparency Project posted a report on Facebook’s continued promotion of militia pages, despite Facebook’s insistence that such pages were being removed. In mid-April, Julia Carrie Wong at the Guardian published an article about data scientist Sophie Zhang, who was fired from Facebook after pushing too hard to remove government-sponsored fake engagement in over a dozen countries. Then, just a few weeks ago, BuzzFeed reporters released a leaked internal report on how Facebook had failed to stop the “Stop the Steal” movement from organizing the Capitol riots.
There are many similarities in these stories. I recommend reading each of the linked reports, but I’ll try here to emphasize some important patterns. I’ll start with the details and zoom out, looking first at the individual design decisions that allowed these problems to take root, then at the policies that prevented the problems from being addressed, and finally at the overall culture of governance at Facebook that allows ineffective policies and exploitable design to remain the status quo.
Design Decisions
In their internal report explaining how Stop the Steal groups spread on Facebook, the authors claim that a key factor was the actions of ‘super-inviters’. They write: “30% of invites [to Stop the Steal groups] came from just 0.3% of inviters. […] These super-inviters had other indicators of spammy behavior: 73% had bad friending stats, with a friend request reject rate above 50%. 125 of them likely obfuscated their home locations. 73 of them were members of harmful conspiracy Groups. We also saw that inviters to these Groups tend to be connected.”
Efforts to tamp down the influence of these inviters were relatively modest: “a cap of 100 invites/person/day was implemented […] However, all of the rate limits were effective only to a certain extent and the groups were regardless able to grow substantially.”
One hundred invites per person per day seems like a very generous allotment for users who had already demonstrated “spammy” behavior, and I’m not surprised that the limits failed to slow growth. But the real question isn’t why a limit of 100 invites was chosen, but rather why the limit wasn’t changed once it proved ineffective.
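To make that concrete, here is a minimal sketch of what an adaptive cap might look like: one that tightens for accounts already showing the risk signals the report itself cites. Everything here, from the function name to the thresholds, is a hypothetical illustration of mine, not Facebook’s actual system.

```python
# Hypothetical sketch: an invite cap that tightens for accounts showing the
# "spammy" signals Facebook's own report describes. Names, thresholds, and
# structure are my assumptions for illustration, not Facebook's actual code.

BASE_DAILY_INVITE_CAP = 100  # the flat cap the report says was applied


def daily_invite_cap(friend_request_reject_rate: float,
                     obfuscates_location: bool,
                     harmful_group_memberships: int) -> int:
    """Return a per-user daily cap on group invites, shrinking as risk signals stack up."""
    cap = BASE_DAILY_INVITE_CAP
    if friend_request_reject_rate > 0.5:   # the report's "bad friending stats"
        cap = min(cap, 10)
    if obfuscates_location:                # "likely obfuscated their home locations"
        cap = min(cap, 10)
    if harmful_group_memberships > 0:      # "members of harmful conspiracy Groups"
        cap = min(cap, 5)
    return cap


# A super-inviter matching the report's profile (73% reject rate, obfuscated
# location, member of harmful Groups) would get 5 invites/day instead of 100.
print(daily_invite_cap(0.73, True, 3))  # -> 5
```

Even something this crude would have treated a flagged super-inviter very differently from an ordinary user.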
In their report on the spread of militia groups on Facebook, the Tech Transparency Project documents that many of the militia pages it found were created by Facebook itself: “About 17 percent of the militia pages identified by TTP were actually auto-generated by Facebook, most of them with the word “militia” in their names. This has been a recurring issue with Facebook. A TTP investigation in May 2020 found that Facebook had auto-generated business pages for white supremacist groups.”
Facebook auto-generates pages from Wikipedia articles and when users check in to businesses or other locations that don’t yet have pages of their own - despite the complaints of businesses whose pages are generated for them. This aggressive approach allows Facebook to capture and sell more data about its users, their likes, and their activities.
The original decision to implement this auto-generation functionality no doubt felt innocuous, yet by automatically manufacturing pages to drive engagement, Facebook ends up promoting engagement that is harmful or dangerous. Again: why wasn’t this changed once issues were raised, as they have been for years?
When Sophie Zhang started researching fake engagement in Honduras, she found that people affiliated with President Hernández were setting up pages and making them look like user accounts. “Most fake likes on Facebook come from fake or compromised user accounts, but Hernández was receiving thousands of likes from Facebook Pages – Facebook profiles for businesses, organizations or public figures – that had been set up to resemble user accounts, with names, profile pictures and job titles.”
There was no punishment for anyone creating fake pages: “The lack of an enforcement mechanism remains a loophole in Facebook’s policies; even if the company takes down dozens of fake Pages, there is nothing to stop a user with an authentic account from creating dozens of new ones the next day.”
These are design decisions. Why are pages treated so differently from profiles, when pages can be made to act like profiles? (Or when profiles can be made to act like pages: the Tech Transparency Project notes that many militia pages are actually profiles that act as “de facto pages”, allowing for tighter control of information.) Why does this kind of rule-breaking not earn suspensions or even simple restrictions on creating new pages?
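For illustration, here is a rough sketch of what such a “simple restriction” could look like: throttling new page creation for accounts whose previous pages were removed as fake. The record-keeping, thresholds, and names are all my own assumptions; nothing here reflects how Facebook actually works.

```python
# Hypothetical sketch of a "simple restriction on creating new pages": block
# new Page creation for a while after an account's previous Pages are removed
# as fake. Thresholds and structure are assumptions, not Facebook's system.

from datetime import datetime, timedelta


class PageCreationPolicy:
    def __init__(self, max_recent_takedowns: int = 3, window_days: int = 30):
        self.max_recent_takedowns = max_recent_takedowns
        self.window = timedelta(days=window_days)
        self.takedowns = {}  # account id -> list of fake-Page takedown timestamps

    def record_fake_page_takedown(self, account_id: str) -> None:
        """Log that one of this account's Pages was removed as fake."""
        self.takedowns.setdefault(account_id, []).append(datetime.utcnow())

    def may_create_page(self, account_id: str) -> bool:
        """Refuse new Pages for accounts with too many recent fake-Page takedowns."""
        now = datetime.utcnow()
        recent = [t for t in self.takedowns.get(account_id, []) if now - t < self.window]
        return len(recent) < self.max_recent_takedowns
```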
In truth, none of these design decisions are terrible on their own. As a product developer, I would never claim I could have avoided these or similar mistakes - the world is just too complicated, and people too devious, to predict what will happen when your software enters the hands of users.
That’s why it’s crucial to have good processes for gathering feedback and adjusting your systems. So let’s move up a level in the Facebook pyramid of failure, to its policy decisions.
Policy Decisions
One reason for the sluggish response to Stop the Steal groups was Facebook’s existing policy of focusing on inauthentic behavior. As the authors of the internal report put it: “What do we do when a movement is authentic, coordinated through grassroots or authentic means, but is inherently harmful and violates the spirit of our policy? What do we do when that authentic movement espouses hate or delegitimizes free elections?”
The authors continue: “We have little policy around coordinated authentic harm. While some of the admins had VICN ties or were recidivist accounts, the majority of the admins were ‘authentic’. […] The harm existed at the network level: an individual’s speech is protected, but as a movement, it normalized delegitimization and hate in a way that resulted in offline harm and harm to the norms underpinning democracy.”
This quote emphasizes another of the report’s critiques: that Facebook treated groups, people and content atomically, instead of evaluating them as a cohesive movement. “We were not able to act on simple objects like posts and comments because they individually tended not to violate, even if they were surrounded by hate, violence, and misinformation.”
By focusing on a subset of behaviors that are easier to document and enforce, Facebook is able to avoid hard questions like those the report’s authors pose. By limiting themselves to atomic decisions, they exclude the context that makes judgments messy and ambiguous but also accurate and fair.
Facebook has been trying to dodge responsibility, which has allowed hate and misinformation to flourish.
Facebook also has a pattern of systematically under-resourcing efforts to address these kinds of problems. The authors of the Stop the Steal report write, “This sort of deep investigation takes time, situational awareness, and context that we often don’t have.” They describe how frequently misinformation was reported, but “volume far outstripped [third party fact-checking] or escalation review capacity”. When attempting to observe direct coordination of Stop the Steal groups, they were forced to rely on external sources for leads.
Lack of capacity was a crucial factor in Facebook’s failure to respond to Sophie Zhang’s reports of government-sponsored fake engagement. The networks Zhang uncovered “often failed to meet Facebook’s shifting criteria to be prioritized for CIB takedowns, which are investigated by threat intelligence and announced publicly, but they still violated Facebook’s policies. Networks of fake accounts were dealt with by identity ‘checkpointing’, a process [which] could also be carried out by Facebook’s “community operations” staff, who greatly outnumbered threat intelligence investigators”. Community operations staff are also paid far less than threat intelligence investigators.
A fake engagement network in Azerbaijan, the largest one Zhang came across, left millions of harassing comments on the Facebook pages of opposition leaders and media outlets. But Facebook did not employ a dedicated policy staffer for Azerbaijan. No one on staff spoke Azeri, leaving them to use Google Translate to try to understand the abuse. Zhang spoke out in frustration: “Facebook has become complicit by inaction in this authoritarian crackdown.”
In an argument with Guy Rosen, Facebook’s Vice President of Integrity (and founder of the user surveillance company Onavo), Zhang pushed back against the decision to de-prioritize countries outside the US and western Europe: “I get that the US/western Europe/etc is important, but for a company with effectively unlimited resources, I don’t understand why this cannot get on the roadmap for anyone …”
Rosen replied: “I wish resources were unlimited.” Yet as Wong points out in her article, at the time Rosen wrote that, Facebook had over $50 billion cash on hand. Clearly Facebook has the resources to address these issues; the decision not to, therefore, is a policy decision.
Governance Decisions
Zhang was single-handedly responsible for the detection and eventual shutdown of dozens of fake engagement networks across the globe - a fact which speaks well of Zhang, but very poorly of Facebook. She often had to advocate for months, going around official channels and reporting structures, to get problems fixed. “A strategic response manager told me that the world outside the US/Europe was basically like the wild west with me as the part-time dictator in my spare time,” Zhang told the Guardian. “He considered that to be a positive development because to his knowledge it wasn’t covered by anyone before he learned of the work I was doing.” But eventually, Zhang was seen as too critical of Facebook, and was fired.
It’s unclear how Facebook has responded to the Stop the Steal report. It was initially published to an internal discussion platform, then taken down. This mirrors Facebook’s response to Zhang’s parting critique, which was removed from internal platforms. (Zhang, anticipating the removal, also linked to a copy on an external website; Facebook went to her hosting platform and DNS server to get that site taken offline too.)
The Tech Transparency Project’s report on militia pages was produced entirely outside Facebook, so there’s no way to tell whether anyone at the company has even registered the problem.
Again and again we see the same pattern: a lack of transparency, accountability, or formal channels for contesting decisions. In the absence of any kind of structure to address problems in product design or policy, they will just keep occurring. These three reports over the last month are only the latest additions to Facebook’s long list of failures, helpfully summarized on this Wikipedia page.
Because of Facebook’s overall governance structure - as a publicly traded for-profit company with a single controlling shareholder who also serves as CEO - the resolution of these kinds of recurring issues is always going to be whatever serves Facebook and Mark Zuckerberg best. Whatever avoids the worst press. Whatever brings them closer to power. And of course, whatever makes them the most money.
As long as money and power are the guiding stars, that’s where Facebook will steer, dragging its two billion users and the rest of the world in its wake.