Facebook Bug Led to Increased Views of Harmful Content

Facebook engineers identified a "massive ranking failure" that exposed half of all News Feed views to potential "integrity risks" over the past six months.
The social network touts downranking as a way to thwart problematic content, but what happens when that system breaks?
The engineers first detected the issue last October, when a sudden surge of misinformation began flowing through the News Feed, according to a report shared inside the company last week. Instead of suppressing posts from repeat misinformation offenders that had been reviewed by the company's network of outside fact-checkers, the News Feed gave those posts wider distribution, spiking views by as much as 30 percent globally.
Unable to find the root cause, the engineers watched the surge subside a few weeks later and then flare up repeatedly until the ranking issue was fixed on March 11th.
In addition to posts flagged by fact-checkers, the internal investigation found that, during the bug period, Facebook's systems failed to properly demote probable nudity, violence, and even Russian state media.
The social network had recently pledged to stop recommending Russian state media in response to the country's invasion of Ukraine. The issue was internally designated a level-one SEV, or site event — a label reserved for high-priority technical crises, like Russia's ongoing block of Facebook and Instagram.
The Technical Problem Was First Introduced in 2019 but Didn't Have a Noticeable Impact Until October 2021
Meta spokesperson Joe Osborne confirmed the incident in a statement, saying the company "detected inconsistencies in downranking on five separate occasions, which correlated with small, temporary increases to internal metrics." The internal documents said the technical issue was first introduced in 2019 but didn't create a noticeable impact until October 2021.
"We traced the root cause to a software bug and applied needed fixes," said Osborne, adding that the bug "has not had any meaningful, long-term impact on our metrics" and didn't apply to content that met its system's threshold for deletion.
For years, Facebook has touted downranking as a way to improve the quality of the News Feed and has steadily expanded the kinds of content its automated system acts on. Downranking has been used in response to wars and controversial political stories, sparking concerns about shadow banning and calls for legislation.
Despite its increasing importance, Facebook has yet to open up about its impact on what people see and, as this incident shows, what happens when the system goes awry.

In 2018, CEO Mark Zuckerberg explained that downranking fights people's impulse to inherently engage with "more sensationalist and provocative" content. "Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterward they don't like the content," he wrote in a Facebook post at the time.
Downranking suppresses not only what Facebook calls "borderline" content that comes close to violating its rules but also content its AI systems suspect of violating them but that still requires human review. The company published a high-level list of what it demotes last September but hasn't detailed how exactly demotion influences the distribution of affected content.
Officials have suggested they would like to shed more light on how demotions work but are concerned that doing so would help adversaries game the system.
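To make the mechanism concrete, here is a minimal, purely hypothetical sketch of how score-based demotion might work in a feed ranker. Every name and number here (base_score, DEMOTION_FACTORS, the labels) is an assumption for illustration; Facebook has not published how its own demotion system is implemented.

```python
# Illustrative sketch only: all constants and labels are hypothetical,
# not Facebook's actual, unpublished demotion logic.

DEMOTION_FACTORS = {
    "borderline": 0.5,            # close to violating the rules
    "suspected_violation": 0.3,   # flagged by classifiers, awaiting human review
    "fact_checked_false": 0.2,    # rated false by third-party fact-checkers
}

def demoted_score(base_score: float, labels: list[str]) -> float:
    """Apply the strongest applicable demotion to a post's ranking score."""
    factor = min(
        (DEMOTION_FACTORS[label] for label in labels if label in DEMOTION_FACTORS),
        default=1.0,  # no applicable label: score passes through unchanged
    )
    return base_score * factor

# When demotion works, a flagged post's score drops sharply.
print(demoted_score(0.8, ["fact_checked_false"]))  # 0.16
# If the demotion step silently fails (as in the bug described above),
# flagged posts would rank as if they were ordinary content.
print(demoted_score(0.8, []))                      # 0.8
```

A failure in a step like this would not delete anything or trip other moderation tools; it would simply let flagged posts compete for distribution as if they were unremarkable, which matches the pattern of increased views the engineers observed.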
In the meantime, Facebook's leaders regularly brag about how their AI systems are getting better every year at proactively noticing content like hate speech, placing greater importance on the technology as a way to moderate at scale. Last year, Facebook said it would start downranking all political content in the News Feed — part of CEO Mark Zuckerberg's push to return the Facebook app to its lighthearted roots.
There is no indication of malicious intent behind the recent ranking bug, which affected up to half of News Feed views over a period of months, and thankfully it didn't break Facebook's other moderation tools. But the incident shows why more transparency is needed in internet platforms and the algorithms they use, according to Sahar Massachi, a former member of Facebook's Civic Integrity team.
"In a large complex system like this, bugs are inevitable and understandable," Massachi, who is now co-founder of the nonprofit Integrity Institute told. "But what happens when a powerful social platform has one of these accidental faults? How would we even know? We need real transparency to build a sustainable accountability system, so we can help them catch these problems quickly."