Fact-checking is out. "Community Notes" are in.
Meta's chief global affairs officer, Joel Kaplan, announced on Friday that the process of removing fact-checks from Facebook, Threads, and Instagram is nearly complete. By Monday, there would be "no new fact checks and no fact checkers" working across these platforms, which are used by billions of people around the world: no professionals labeling false information about vaccines or stolen elections. Elon Musk, owner of the rival platform X, which is notorious for its hands-off approach to content, replied to Kaplan with a single word: "Cool."
Meta, then still called Facebook, started its fact-checking program in December 2016, after Donald Trump's first election, when social networks were being criticized for allowing fake news to spread unchecked. The company will still take action against many kinds of problematic content, such as threats of violence. But it is leaving much of the work of patrolling misinformation to users themselves. Now, if users are so inclined, they can turn to the Community Notes program, which allows ordinary people to formally contradict one another's posts with clarifying or corrective supplemental text. A Facebook post claiming that the sun has changed color may get a helpful correction, but only if someone decides to write one and submit it for consideration. Almost anyone can sign up for the program (Meta says users must be over 18 and have accounts in good standing), which, in theory, makes it a freewheeling way to review content.
Chief Executive Mark Zuckerberg has cast the change as a return to the company's "roots," with Facebook and Instagram as sites of "free expression." He announced the decision to adopt Community Notes in January and explicitly framed the move as a response to the 2024 election, which he described as "a cultural tipping point" toward once again prioritizing speech. Less explicitly, Meta's shift to Community Notes is a response to years of criticism of the company's handling of misleading content. At the end of his first term, Trump targeted Facebook and other online platforms with an executive order accusing them of "selective censorship that is harming our national discourse," and during the Biden administration, Zuckerberg has said, he was pressured to take down more posts containing misinformation.
Meta's abandonment of traditional fact-checking may be cynical, but misinformation is also a genuinely tricky problem. Fact-checking rests on the hypothesis that if you can get a trusted source to provide better information, you can save people from believing false claims. But people have different ideas about what counts as a trustworthy source, and sometimes people simply want to believe the wrong things. How do you stop them? And there is a second question platforms now have to ask themselves: How hard should they even try?
The Community Notes project, invented in 2021 by X, which was then still known as Twitter, was a counterintuitive attempt to solve the problem. It seems to rely on quaint, naive ideas about how people behave online: Just talk it out! Reasonable debate will prevail! But to the credit of the social-media platforms, the approach doesn't appear to be as starry-eyed as it sounds.
The main innovation of Community Notes is that annotations are generated through consensus among people who might otherwise disagree. Not every note that gets written actually appears under a given post; instead, notes are evaluated by a "bridging" algorithm designed to bridge partisan divides by taking into account what might be called diverse positive feedback. A proposed note is rated higher, and is more likely to appear on a post, if it is deemed "helpful" by people who have exhibited different biases at other times. The system has quickly become the new industry standard. Shortly after Meta announced the end of its fact checks, TikTok said it would test its own version of Community Notes (called Footnotes), although, unlike Meta and X, TikTok will also continue to use a formal fact-checking program.
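The bridging idea described above can be sketched in a few lines of code. This is a deliberately simplified toy, not X's actual ranking system (which, per its public documentation, uses matrix factorization over rater and note embeddings); the function name, the "left"/"right" labels, and the minimum-across-groups scoring rule are all illustrative assumptions. What it demonstrates is the core property: a note surfaces only when raters from different camps agree that it is helpful.

```python
# Toy illustration of "bridging"-style note ranking (not X's real code):
# a note scores highly only if raters who usually disagree both find it helpful.
from collections import defaultdict

def bridged_score(ratings):
    """ratings: list of (rater_leaning, is_helpful) pairs,
    e.g. ("left", True). Returns a score in [0, 1]."""
    by_group = defaultdict(list)
    for leaning, helpful in ratings:
        by_group[leaning].append(helpful)
    if len(by_group) < 2:
        return 0.0  # no cross-group agreement is possible
    # Score is the *minimum* helpful-fraction across groups:
    # high only when every group rates the note helpful.
    return min(sum(votes) / len(votes) for votes in by_group.values())

# A note praised only by one side scores zero...
partisan = [("left", True), ("left", True), ("right", False)]
# ...while cross-partisan agreement scores high.
bridging = [("left", True), ("right", True), ("right", True)]
```

A note rated helpful by only one camp scores zero here no matter how many ratings it collects, which mirrors the "no consensus, no notes" behavior of the real system discussed later in the piece.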
These tools are "a good idea," doing more good than harm, Paul Friedl, a researcher at Humboldt University in Berlin, told me. Friedl co-authored a 2024 paper in Internet Policy Review that discusses X's Community Notes alongside other examples, including Reddit's forums and old Usenet message threads. One of the main benefits he and his co-author cite is that these programs can help build a culture of accountability by encouraging communities to reflect, debate, and agree on the purposes of whatever online space they're using.
Platforms certainly have good reasons to embrace the model. The first, according to Friedl, is cost. These programs require little more than a simple algorithm, rather than the expense of employing fact checkers around the world; users do the work for free. The second is that people like them: users often find the context added to other people's posts helpful and interesting. The third is politics. Over the past decade, platforms (Meta especially) have been highly reactive to political events, lurching from crisis to crisis and infuriating critics in the process. When Facebook first started labeling fake news, it was accused by some of doing too little, too late, and by others of reckless censorship. It dramatically expanded its fact-checking program in 2020 to deal with rampant misinformation about the coronavirus pandemic (often spread by Trump himself) and that year's election. By Facebook's own accounting, the company displayed fact-checking labels on more than 180 million pieces of content between March 1, 2020, and Election Day that year. Again, this was regarded as both too much and not enough. With annotation-based systems, platforms can extract themselves from the drama of deciding what gets fact-checked and why. As Friedl put it, they avoid making controversial decisions, which helps them not lose "cultural capital" with any part of their user base.
Recently, John Stoll, the head of news at X, told me something similar about Community Notes. He said the tool is the "best solution" to misinformation because it takes a sledgehammer to the "black box" of traditional content moderation. X's program allows users to download all of the notes and their voting history in one huge spreadsheet. He believes that by making moderation visible and collaborative, rather than secret and unaccountable, X has figured out how to do things in the fairest, least biased, most speech-protective way possible. (It should be noted that "free speech" on X also means the return of white supremacists and other odious users who were previously banned under the old Twitter rules.)
People across the political spectrum do seem to trust notes more than standard misinformation labels, perhaps because the notes feel more organic and tend to be more detailed. In their 2024 paper, Friedl and his co-author wrote that Community Notes place responsibility with the people who most intimately understand the intricacies of a particular online community. These people may also work faster than traditional fact checkers: X claims that notes usually appear within a few hours, whereas a complex independent fact check can take days.
All of these advantages have their limits, though. Community Notes are best at picking off individual instances of lying or simple error. They cannot push back against sophisticated, large-scale misinformation campaigns, and they cannot punish repeat bad actors the way the old fact-checking system could. The paper that revealed the mechanics of an earlier version of the feature, which debuted on Twitter as Birdwatch, acknowledged another important limitation: the algorithm needs some amount of cross-partisan agreement to work, and that agreement may sometimes be unattainable. If there is no consensus, there are no notes.
Musk himself provides a good case study of the problem. Some Community Notes on Musk's posts have disappeared. It's possible that he had them removed: at times he has seemed unhappy with the power X has given users through the program, suggesting that the system was being gamed and singling out users who cite "legacy media." But the disappearances may instead be a quirk of the algorithm. An influx of Elon haters or Elon fans could break consensus and tank a note's helpfulness rating, making it vanish. (When I asked about this, Stoll told me, "As a company, we are 100 percent committed to and in love with Community Notes," but he did not comment on what happened with the notes removed from Musk's posts.)
The early Birdwatch paper also noted that the system might be best at moderating "trivial topics." Therein lies both the tool's core weakness and its core strength. Because notes are written and voted on by people with all sorts of niche interests and obsessions, they can appear on anything. You'll see them on classically wrong and dangerous material, such as conspiracy theories about Barack Obama's birth certificate, but you'll also see them on ridiculous and harmless things, such as cute hedgehog videos. (The caption on one hedgehog video I watched last week suggested that a crow was "helping" a stumbling hedgehog across the street; a Community Note pointed out that the crow was probably trying to kill it, and the original poster eventually deleted the post.) A post I recently laughed at on X made the point well: people really do log on here, get mad at posts, and take the time to write entire Community Notes explaining that Katy Perry is not an astronaut.
The good thing, though, is that when anything can be annotated, no single note feels like a big deal, or like part of some grand conspiracy of the moment. Formal fact-checking programs can feel punitive and scolding, giving people something to push back against. Notes come from peers. That may make receiving one more awkward than being fact-checked by a traditional outlet; early research suggests that people are more likely to delete misleading posts after they receive Community Notes.
The optimistic case for note-style systems is that they harness a behavior that already exists and that everyone is already familiar with. People have been correcting one another online forever: on almost any TikTok in which someone says something obviously wrong, the top comment will be from another person pointing it out. It became the top comment because other users "liked" it enough to push it up. Whenever I watch a TikTok and think, That's not true, is it?, I instinctively check the comment section.
For better or worse, the idea of letting the crowd decide what needs correcting is a throwback to the era of internet forums, where "actually" culture was born. But this era of content moderation will not last forever, just as the previous ones didn't. Meta has all but said outright that a cultural and political mood inspired this change. We live on the "actually" internet for now. Whenever the climate shifts, or whenever the heads of the platforms sense that it has, we will find ourselves somewhere else.