What Facebook Did to American Democracy

And why it was so hard to see it coming

In the media world, as in so many other realms, there is a sharp discontinuity in the timeline: before the 2016 election, and after.

Things we thought we understood—narratives, data, software, news events—have had to be reinterpreted in light of Donald Trump’s surprising win as well as the continuing questions about the role that misinformation and disinformation played in his election.

Tech journalists had a duty to cover what was happening on Facebook before, during, and after the election. Reporters tried to see past their often liberal political orientations and the unprecedented actions of Donald Trump to understand how 2016 was playing out on the internet. Every component of the chaotic digital campaign has been reported on, here at The Atlantic and elsewhere: Facebook’s enormous distribution power for political information, rapacious partisanship reinforced by distinct media information spheres, the increasing scourge of “viral” hoaxes and other kinds of misinformation that could propagate through those networks, and Russia’s information-operations agency.

But no one delivered the synthesis that could have tied together all these disparate threads. It’s not that this hypothetical perfect story would have changed the outcome of the election. The real problem—for all political stripes—is understanding the set of conditions that led to Trump’s victory. The informational underpinnings of democracy have eroded, and no one has explained precisely how.

* * *

We’ve known since at least 2012 that Facebook was a powerful, non-neutral force in electoral politics. In that year, a combined University of California, San Diego and Facebook research team led by James Fowler published a study in Nature, which argued that Facebook’s “I Voted” button had driven a small but measurable increase in turnout, primarily among young people.

Rebecca Rosen’s 2012 story, “Did Facebook Give Democrats the Upper Hand?,” relied on new research from Fowler et al. about that year’s presidential election. Again, the conclusion of their work was that Facebook’s get-out-the-vote message could have driven a substantial chunk of the increase in youth voter participation in the 2012 general election. Fowler told Rosen that it was “even possible that Facebook is completely responsible” for the youth-voter increase. And because young people vote Democratic at a higher rate than the general population does, the net effect of Facebook’s GOTV effort would have been to help the Dems.

The research showed that a small design change by Facebook could have electoral repercussions, especially given America’s Electoral College system, in which a few hotly contested states have a disproportionate impact on the national outcome. And the pro-liberal effect the research implied became enshrined as an axiom of how campaign staffers, reporters, and academics viewed social media.

In June 2014, Harvard Law scholar Jonathan Zittrain wrote an essay in The New Republic called “Facebook Could Decide an Election Without Anyone Ever Finding Out,” in which he called attention to the possibility of Facebook selectively depressing voter turnout. (He also suggested that Facebook be seen as an “information fiduciary,” charged with certain special roles and responsibilities because it controls so much personal data.)

In late 2014, The Daily Dot called attention to an obscure Facebook-produced case study on how strategists defeated a statewide measure in Florida by relentlessly focusing Facebook ads on Broward and Dade counties, Democratic strongholds. Working with a budget so tiny it would have bought a single mailer to just 150,000 households, the digital-advertising firm Chong and Koster obtained remarkable results. “Where the Facebook ads appeared, we did almost 20 percentage points better than where they didn’t,” testified a leader of the firm. “Within that area, the people who saw the ads were 17 percent more likely to vote our way than the people who didn’t. Within that group, the people who voted the way we wanted them to, when asked why, often cited the messages they learned from the Facebook ads.”

In April 2016, Robinson Meyer published “How Facebook Could Tilt the 2016 Election” after a company meeting in which some employees apparently put the stopping-Trump question to Mark Zuckerberg. Based on Fowler’s research, Meyer reimagined Zittrain’s hypothetical as a direct Facebook intervention to depress turnout among non-college graduates, who as a group leaned toward Trump.

Facebook, of course, said it would never do such a thing. “Voting is a core value of democracy and we believe that supporting civic participation is an important contribution we can make to the community,” a spokesperson said. “We as a company are neutral—we have not and will not use our products in a way that attempts to influence how people vote.”

They wouldn’t do it intentionally, at least.

As all these examples show, though, Facebook’s potential impact on an election was clear for at least half a decade before Donald Trump was elected. But rather than focusing specifically on the integrity of elections, most writers—myself included, with observers like Sasha Issenberg, Zeynep Tufekci, and Daniel Kreiss the exceptions—bundled electoral problems inside other, broader concerns like privacy, surveillance, tech ideology, media-industry competition, or the psychological effects of social media.

The same was true even of people inside Facebook. “If you’d come to me in 2012, when the last presidential election was raging and we were cooking up ever more complicated ways to monetize Facebook data, and told me that Russian agents in the Kremlin’s employ would be buying Facebook ads to subvert American democracy, I’d have asked where your tin-foil hat was,” wrote Antonio García Martínez, who managed ad targeting for Facebook back then. “And yet, now we live in that otherworldly political reality.”

Not to excuse us, but this was back on the Old Earth, too, when electoral politics was not the thing that every single person talked about all the time. There were other important dynamics to Facebook’s growing power that needed to be covered.

* * *

Facebook’s draw is its ability to give you what you want. Like a page, get more of that page’s posts; like a story, get more stories like that; interact with a person, get more of their updates. Facebook ranks the News Feed by the probability that you’ll like, comment on, or share a story. Shares are worth more than comments, which are both worth more than likes, but in all cases, the more likely you are to interact with a post, the higher it will appear in your News Feed. Two thousand kinds of data (or “features,” in industry parlance) get smelted in Facebook’s machine-learning system to make those predictions.
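To make that concrete, here is a minimal sketch of engagement-weighted ranking—a toy model of the idea, not Facebook’s actual code, with made-up weights and a stand-in predict() function:

```python
# A toy sketch of engagement-weighted feed ranking -- illustrative only.
# The weights and the predict() model are assumptions; the real News Feed
# reportedly draws on some 2,000 features.

SIGNAL_WEIGHTS = {"share": 3.0, "comment": 2.0, "like": 1.0}  # shares > comments > likes

def rank_feed(posts, predict):
    """Order candidate posts by expected engagement.

    `predict(post, signal)` stands in for a machine-learned model that
    returns the probability (0-1) that this user performs `signal` on `post`.
    """
    def score(post):
        return sum(weight * predict(post, signal)
                   for signal, weight in SIGNAL_WEIGHTS.items())
    return sorted(posts, key=score, reverse=True)
```

The point of the toy is the objective function: whatever maximizes predicted interaction rises to the top, regardless of whether the post is true, useful, or good for you.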

What’s crucial to understand is that, from the system’s perspective, success is correctly predicting what you’ll like, comment on, or share. That’s what matters. People call this “engagement.” There are other factors, as Slate’s Will Oremus noted in a rare story about the News Feed ranking team. But who knows how much weight those factors actually receive, or for how long, as the system evolves? For example, one change that Facebook highlighted to Oremus in early 2016—taking into account how long people look at a story, even if they don’t click it—was later dismissed in a May 2017 technical talk by Lars Backstrom, the VP of engineering in charge of News Feed ranking, as a “noisy” signal that’s also “biased in a few ways,” making it “hard to use.”

Facebook’s engineers do not want to introduce noise into the system, because the News Feed—this machine for generating engagement—is Facebook’s most important technical system. Its success at predicting what you’ll like is why users spend an average of more than 50 minutes a day on the site, and why even the former creator of the “like” button worries about how well the site captures attention. News Feed works really well.

But as far as “personalized newspapers” go, this one’s editorial sensibilities are limited. Most people are far less likely to engage with viewpoints they find confusing, annoying, incorrect, or abhorrent. And that’s true not just in politics, but in the broader culture.

That this could be a problem was apparent to many. Eli Pariser’s The Filter Bubble, which came out in the summer of 2011, became the most widely cited distillation of the effects Facebook and other internet platforms could have on public discourse.

Pariser began researching the book when he noticed that conservative people, whom he’d befriended on the platform despite his left-leaning politics, had disappeared from his News Feed. “I was still clicking my progressive friends’ links more than my conservative friends’—and links to the latest Lady Gaga videos more than either,” he wrote. “So no conservative links for me.”

Through the book, he traces the many potential problems that the “personalization” of media might bring. Most germane to this discussion: If every one of a billion News Feeds is different, how can anyone understand what other people are seeing and responding to?

“The most serious political problem posed by filter bubbles is that they make it increasingly difficult to have a public argument. As the number of different segments and messages increases, it becomes harder and harder for the campaigns to track who’s saying what to whom,” Pariser wrote. “How does a [political] campaign know what its opponent is saying if ads are only targeted to white Jewish men between 28 and 34 who have expressed a fondness for U2 on Facebook and who donated to Barack Obama’s campaign?”

This did, indeed, become an enormous problem. When I was editor in chief of Fusion, we set about trying to track the “digital campaign” with several dedicated people. What we quickly realized was that there was both too much data—the noisiness of all the different posts by the various candidates and their associates—and too little. Targeting made it impossible to track the actual messaging the campaigns were paying for. On Facebook, the campaigns could show ads only to the people they targeted, so we couldn’t see the messages that were actually reaching people in battleground areas. From the outside, knowing what ads were running on Facebook was a technical impossibility, and one the company had fought to keep intact.

Pariser suggests in his book that “one simple solution to this problem would simply be to require campaigns to immediately disclose all of their online advertising materials and to whom each ad is targeted.” That could happen in future campaigns.
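To make the proposal concrete: at a minimum, such a disclosure regime would need a machine-readable record for every ad. Here is a hypothetical sketch—the schema is invented for illustration, not drawn from any platform or regulation:

```python
# A hypothetical ad-disclosure record under Pariser's proposal --
# field names are my own invention, not any real platform's schema.
from dataclasses import dataclass, field

@dataclass
class AdDisclosure:
    sponsor: str          # who paid for the ad
    creative_url: str     # an archived copy of the ad itself
    spend_usd: float      # what was spent on this placement
    impressions: int      # how many times it was shown
    targeting: dict = field(default_factory=dict)
    # e.g. {"gender": "male", "age": "28-34", "religion": "Jewish",
    #       "interests": ["U2"], "donated_to": ["Obama 2012"]}
```

With records like these in a public archive, Pariser’s white-Jewish-U2-Obama hypothetical would stop being invisible to opposing campaigns and reporters.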

Imagine if this had happened in 2016. If there were data sets of all the ads the campaigns and others had run, we’d know a lot more about what actually happened last year. The Filter Bubble is obviously prescient work, but there was one thing that Pariser and most other people did not foresee: that Facebook would become completely dominant as a media distributor.

* * *

About two years after Pariser published his book, Facebook took over the news-media ecosystem. The company has never publicly admitted it, but in late 2013, it began to serve ads inviting users to “like” media pages. This caused a massive increase in the amount of traffic that Facebook sent to media companies. At The Atlantic and other publishers across the media landscape, it was as if a tide were carrying us to new traffic records. Without hiring anyone new, without changing strategy or tactics, without publishing more, suddenly everything was easier.

Traffic to The Atlantic from Facebook.com increased, but at the time, most of the new traffic did not look like it was coming from Facebook in The Atlantic’s analytics. It showed up as “direct/bookmarked” or some variation, depending on the software. It looked like what I had called “dark social” back in 2012. But as BuzzFeed’s Charlie Warzel pointed out at the time, and as I came to believe, it was primarily Facebook traffic in disguise. Between August and October of 2013, BuzzFeed’s “partner network” of hundreds of websites saw a 69 percent jump in traffic from Facebook.

At The Atlantic, we ran a series of experiments that showed, pretty definitively from our perspective, that most of the stuff that looked like “dark social” was, in fact, traffic coming from within Facebook’s mobile app. Across the landscape, it began to dawn on people who thought about these kinds of things: Damn, Facebook owns us. They had taken over media distribution.

Why? This is a best guess, proffered by Robinson Meyer as it was happening: Facebook wanted to crush Twitter, which had drawn a disproportionate share of media and media-figure attention. Just as Instagram would later borrow Snapchat’s “Stories” to blunt that app’s growth, Facebook decided it needed to own “news” to take the wind out of the newly IPO’d Twitter.

The first sign that this new system had some kinks came with “Upworthy-style” headlines. (And you’ll never guess what happened next!) Things didn’t just go kind of viral; they went ViralNova—a site that, like Upworthy itself, Facebook eventually smacked down. Many of the new sites, like Upworthy (which was cofounded by Pariser), had a progressive bent.

Less noticed was that a right-wing media was developing in opposition to and alongside these left-leaning sites. “By 2014, the outlines of the Facebook-native hard-right voice and grievance spectrum were there,” The New York Times’ media and tech writer John Herrman told me, “and I tricked myself into thinking they were a reaction/counterpart to the wave of soft progressive/inspirational content that had just crested. It ended up a Reaction in a much bigger and destabilizing sense.”

The other sign of algorithmic trouble was the wild swings that Facebook Video underwent. In the early days, just about any old video was likely to generate many, many, many views. The numbers were insane. Just as an example, a Fortune article noted that BuzzFeed’s video views “grew 80-fold in a year, reaching more than 500 million in April.” Suddenly, all kinds of video—good, bad, and ugly—were doing 1, 2, 3 million views.

As with news, Facebook’s video push was a direct assault on a competitor, YouTube. Videos changed the dynamics of the News Feed for individuals, for media companies, and for anyone trying to understand what the hell was going on.

Individuals were suddenly inundated with video. Media companies, despite having no business model for it, were forced to crank out video somehow or risk their pages and brands losing relevance as video posts crowded out everything else.

And on top of all that, scholars and industry observers were used to looking at what was happening in articles to understand how information was flowing. Now, by far the most viewed media objects on Facebook, and therefore on the internet, were videos without transcripts or centralized repositories. In the early days, many successful videos were simply “freebooted” (i.e., stolen) from other places or reposted. All of this served to confuse and obfuscate the transport mechanisms for information and ideas on Facebook.

Through this messy, chaotic, dynamic situation, a new media rose up through the Facebook burst to occupy the big filter bubbles. On the right, Breitbart became the center of a new conservative network. A study of 1.25 million election news articles found that “a right-wing media network anchored around Breitbart developed as a distinct and insulated media system, using social media as a backbone to transmit a hyper-partisan perspective to the world.”

Breitbart, of course, also lent Steve Bannon, its chief, to the Trump campaign, creating another feedback loop between the candidate and a rabid partisan press. Through 2015, Breitbart grew from a medium-sized site with a small Facebook page of 100,000 likes into a powerful force shaping the election, with almost 1.5 million likes. In the key metric for Facebook’s News Feed, its posts got 886,000 interactions from Facebook users in January. By July, Breitbart had surpassed The New York Times’ main account in interactions. By December, it was doing 10 million interactions per month, about 50 percent of Fox News’s total; Fox News had 11.5 million likes on its main page. Breitbart’s audience was hyper-engaged.

There is no precise equivalent to the Breitbart phenomenon on the left. Rather, the big news organizations are classified as center-left, basically, while fringier left-wing sites have far smaller followings than Breitbart does on the right.

And this new, hyperpartisan media created the perfect conditions for another dynamic that influenced the 2016 election: the rise of fake news.

* * *

In a December 2015 article for BuzzFeed, Joseph Bernstein argued that “the dark forces of the internet became a counterculture.” He called it “Chanterculture” after the trolls who gathered at the meme-creating, often-racist 4chan message board. Others ended up calling it the “alt-right.” This culture combined a bunch of people who loved to perpetuate hoaxes with angry Gamergaters with “free-speech” advocates like Milo Yiannopoulos with honest-to-God neo-Nazis and white supremacists. And these people loved Donald Trump.

“This year Chanterculture found its true hero, who makes it plain that what we’re seeing is a genuine movement: the current master of American resentment, Donald Trump,” Bernstein wrote. “Everywhere you look on ‘politically incorrect’ subforums and random chans, he looms.”

When you combine hyper-partisan media with a group of people who love to clown “normies,” you end up with things like Pizzagate, a patently ridiculous and widely debunked conspiracy theory that held that a pedophile ring was somehow linked to Hillary Clinton. It was just the most bizarre thing in the entire world. And many of the figures in Bernstein’s story were all over it, including several whom the current president has consorted with on social media.

But Pizzagate was only the most Pynchonian of all the crazy misinformation and hoaxes that spread in the run-up to the election.

BuzzFeed, deeply attuned to the flows of the social web, was all over the story through reporter Craig Silverman. His best-known analysis happened after the election, when he showed that “in the final three months of the U.S. presidential campaign, the top-performing fake election-news stories on Facebook generated more engagement than the top stories from major news outlets such as The New York Times, The Washington Post, The Huffington Post, NBC News, and others.”

But he also tracked fake news before the election, as did other outlets such as The Washington Post, showing, for example, that Facebook’s “Trending” algorithm regularly promoted fake news. By September of 2016, even the Pope was talking about fake news—by which we mean actual hoaxes or lies perpetrated by a variety of actors.

The longevity of Snopes shows that hoaxes are nothing new to the internet. As early as January 2015, Robinson Meyer reported on how Facebook was “cracking down on the fake news stories that plague News Feeds everywhere.”

What made the election cycle different was that all of these changes to the information ecosystem had made it possible to develop weird businesses around fake news. Some random website posting aggregated news about the election could not drive a lot of traffic. But some random website announcing that the Pope had endorsed Donald Trump definitely could. The fake news generated a ton of engagement, which meant that it spread far and wide.

A few days before the election, Silverman and fellow BuzzFeed contributor Lawrence Alexander traced 100 pro–Donald Trump sites to a town of 45,000 in Macedonia. Some teens there had realized they could make money off the election, and just like that, they became a node in the information network that helped Trump beat Clinton.

Whatever weird thing you imagine might happen, something weirder probably did happen. Reporters tried to keep up, but it was too strange. As Max Read put it in New York Magazine, Facebook is “like a four-dimensional object, we catch slices of it when it passes through the three-dimensional world we recognize.” No one can quite wrap their heads around what this thing has become, or all the things this thing has become.

“Not even President-Pope-Viceroy Zuckerberg himself seemed prepared for the role Facebook has played in global politics this past year,” Read wrote.

And we haven’t even gotten to the Russians.

* * *

Russia’s disinformation campaigns are well known. While reporting a story for The New York Times Magazine, Adrian Chen sat across the street from the headquarters of the Internet Research Agency, watching workaday Russian agents and internet trolls head inside. From a former employee, he heard how the place had “industrialized the art of trolling.” “Management was obsessed with statistics—page views, number of posts, a blog’s place on LiveJournal’s traffic charts—and team leaders compelled hard work through a system of bonuses and fines,” he wrote. Of course they wanted to maximize engagement, too!

There were reports that Russian trolls were commenting on American news sites. There were many, many reports of Russia’s propaganda offensive in Ukraine. Ukrainian journalists run a website called StopFake that is dedicated to cataloging these disinformation attempts; it has hundreds of posts reaching back to 2014.

A Guardian reporter who looked into Russian military doctrine around information war found a handbook that described how it might work. “The deployment of information weapons, [the book] suggests, ‘acts like an invisible radiation’ upon its targets: ‘The population doesn’t even feel it is being acted upon. So the state doesn’t switch on its self-defense mechanisms,’” wrote Peter Pomerantsev.

As more details about the Russian disinformation campaign come to the surface through Facebook’s continued digging, it’s fair to say that it’s not just the state that did not switch on its self-defense mechanisms. The influence campaign just happened on Facebook without anyone noticing.

As many people have noted, the 3,000 ads that have been linked to Russia are a drop in the bucket, even if they did reach millions of people. The real game is simply that Russian operatives created pages that reached people “organically,” as the saying goes. Jonathan Albright, research director of the Tow Center for Digital Journalism at Columbia University, pulled data on the six publicly known Russia-linked Facebook pages. He found that their posts had been shared 340 million times. And those were just six of the 470 pages that Facebook has linked to Russian operatives. You’re probably talking billions of shares, with who knows how many views, and with what kind of specific targeting.
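The back-of-the-envelope arithmetic is straightforward—assuming, naively, that the six known pages were typical of all 470, which they may well not have been:

```python
# Naive linear extrapolation from Albright's numbers. The assumption
# that the six known pages are representative is exactly that: an assumption.
known_shares = 340_000_000   # shares across the six publicly known pages
known_pages = 6
total_pages = 470

per_page = known_shares / known_pages    # ~57 million shares per page
estimate = per_page * total_pages        # ~26.6 billion shares
print(f"Estimated total shares: {estimate:,.0f}")
```

Hence “billions of shares”: even if the known pages outperformed the rest by an order of magnitude, the total would still land in the billions.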

The Russians are good at engagement! Yet before the U.S. election, even after Hillary Clinton and the intelligence agencies fingered Russian intelligence for meddling in the election, even after news reports suggested that a disinformation campaign was afoot, nothing about the actual operations on Facebook came out.

In the aftermath of these discoveries, three Facebook security researchers, Jen Weedon, William Nuland, and Alex Stamos, released a white paper called Information Operations and Facebook. “We have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam, and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people,” they wrote.

One key theme of the paper is that Facebook’s security team was used to dealing with economic actors, who respond to costs and incentives. When it comes to state-sponsored operatives paid to run influence campaigns on Facebook, those constraints no longer hold. “The area of information operations does provide a unique challenge,” they wrote, “in that those sponsoring such operations are often not constrained by per-unit economic realities in the same way as spammers and click fraudsters, which increases the complexity of deterrence.” They were not expecting that.

Add everything up. The chaos of a billion-person platform that competitively dominated media distribution. The known electoral efficacy of Facebook. The wild fake news and misinformation rampaging across the internet generally and Facebook specifically. The Russian info operations. All of these things were known.

And yet no one could quite put it all together: The dominant social network had altered the information and persuasion environment of the election beyond recognition while taking a very big chunk of the estimated $1.4 billion worth of digital advertising purchased during the election. There were hundreds of millions of dollars of dark ads doing their work. Fake news all over the place. Macedonian teens campaigning for Trump. Ragingly partisan media infospheres serving up only the news you wanted to hear. Who could believe anything? What room was there for policy positions when all this stuff was eating up News Feed space? Who the hell knew what was going on?

As late as August 20, 2016, The Washington Post could say this of the campaigns:

Hillary Clinton is running arguably the most digital presidential campaign in U.S. history. Donald Trump is running one of the most analog campaigns in recent memory. The Clinton team is bent on finding more effective ways to identify supporters and ensure they cast ballots; Trump is, famously and unapologetically, sticking to a 1980s-era focus on courting attention and voters via television.

Just a week earlier, Trump’s campaign had hired Cambridge Analytica. Soon, it had ramped up to $70 million a month in Facebook advertising spending. And the next thing you knew, Brad Parscale, Trump’s digital director, was doing the postmortem rounds, talking up his win.

“These social platforms are all invented by very liberal people on the west and east coasts,” Parscale said. “And we figure out how to use it to push conservative values. I don’t think they thought that would ever happen.”

And that was part of the media’s problem, too.

* * *

Before Trump’s election, the impact of internet technology generally and Facebook specifically was seen as favoring Democrats. Even a TechCrunch critique of Rosen’s 2012 article about Facebook’s electoral power argued, “the internet inherently advantages liberals because, on average, their greater psychological embrace of disruption leads to more innovation (after all, nearly every major digital breakthrough, from online fundraising to the use of big data, was pioneered by Democrats).”

Certainly, the Obama tech team that I profiled in 2012 thought this was the case. Of course, social media would benefit the (youthful, diverse, internet-savvy) left. And the political bent of just about all Silicon Valley companies runs Democratic. For all the talk about Facebook employees embedding with the Trump campaign, the former CEO of Google, Eric Schmidt, sat with the Obama tech team on Election Day 2012.

In June 2015, The New York Times ran an article about Republicans trying to ramp up their digital campaigns that began like this: “The criticism after the 2012 presidential election was swift and harsh: Democrats were light-years ahead of Republicans when it came to digital strategy and tactics, and Republicans had serious work to do on the technology front if they ever hoped to win back the White House.”

It cited Sasha Issenberg, the most astute reporter on political technology. “The Republicans have a particular challenge,” Issenberg said, “which is, in these areas they don’t have many people with either the hard skills or the experience to go out and take on this type of work.”

University of North Carolina journalism professor Daniel Kreiss wrote a whole (good) book, Prototype Politics, showing that Democrats had an incredible personnel advantage. “Drawing on an innovative data set of the professional careers of 629 staffers working in technology on presidential campaigns from 2004 to 2012 and data from interviews with more than 60 party and campaign staffers,” Kreiss wrote, “the book details how and explains why the Democrats have invested more in technology, attracted staffers with specialized expertise to work in electoral politics, and founded an array of firms and organizations to diffuse technological innovations down ballot and across election cycles.”

Which is to say: It’s not that no journalists, internet-focused lawyers, or technologists saw Facebook’s looming electoral presence—it was undeniable—but all the evidence pointed to the structural change benefiting Democrats. And let’s just state the obvious: Most reporters and professors are probably about as liberal as your standard Silicon Valley technologist, so this conclusion fit into the comfort zone of those in the field.

By late October, the role that Facebook might be playing in the Trump campaign—and more broadly—was emerging. Joshua Green and Issenberg reported a long feature on the data operation then in motion. The Trump campaign was working to suppress turnout among “idealistic white liberals, young women, and African Americans,” and it would be doing so with targeted, “dark” Facebook ads. These ads are visible only to the buyer, the ad recipients, and Facebook. No one who hasn’t been targeted by them can see them. How was anyone supposed to know what was going on, when the key campaign terrain was literally invisible to outside observers?

Steve Bannon was confident in the operation. “I wouldn’t have come aboard, even for Trump, if I hadn’t known they were building this massive Facebook and data engine,” Bannon told them. “Facebook is what propelled Breitbart to a massive audience. We know its power.”

Issenberg and Green called it “an odd gambit” that had “no scientific basis.” Then again, Trump’s whole campaign had seemed like an odd gambit with no scientific basis. The conventional wisdom was that Trump was going to lose, and lose badly. In the days before the election, The Huffington Post’s data team had Clinton’s election probability at 98.3 percent. The Huffington Post’s Ryan Grim went after Nate Silver for his more conservative probability of 64.7 percent, accusing him of skewing his data for “punditry” reasons. Grim ended his post on the topic: “If you want to put your faith in the numbers, you can relax. She’s got this.”

Narrator: She did not have this.

But the point isn’t that a Republican beat a Democrat. The point is that the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.

In the middle of the summer of the election, Antonio García Martínez, the former Facebook ad-targeting product manager, released a memoir called Chaos Monkeys. He called his colleagues “chaos monkeys,” messing with industry after industry in their company-creating fervor. “The question for society,” he wrote, “is whether it can survive these entrepreneurial chaos monkeys intact, and at what human cost.” This is the real epitaph of the election.

The information systems that people use to process news have been rerouted through Facebook, and in the process, mostly broken and hidden from view. It wasn’t just liberal bias that kept the media from putting everything together. Much of the hundreds of millions of dollars spent during the election cycle came in the form of “dark ads.”

The truth is that while many reporters knew some things that were going on on Facebook, no one knew everything that was going on on Facebook, not even Facebook. And so, during the most significant shift in the technology of politics since television, the first draft of history is filled with undecipherable whorls and empty pages. Meanwhile, the 2018 midterms loom.

Update: After publication, Adam Mosseri, the head of News Feed, sent an email describing some of the work Facebook is doing in response to the problems during the election, including new software and processes “to stop the spread of misinformation, click-bait and other problematic content on Facebook.”

"The truth is we’ve learned things since the election, and we take our
responsibility to protect the community of people who use Facebook
seriously. As a result, we’ve launched a company-wide effort to
improve the integrity of information on our service," he wrote. "It’s
already translated into new products, new protections, and the
commitment of thousands of new people to enforce our policies and
standards... We know there is a lot more work to do, but I’ve never
seen this company more engaged on a single challenge since I joined
almost 10 years ago."

Alexis Madrigal is a contributing writer at The Atlantic and the host of KQED’s Forum.