Climate misinformation thrives on social media — and it threatens to get worse

Meta, the parent company of Facebook and Instagram, has decided to end its fact-checking programs and otherwise reduce content moderation, raising questions about what the future of content on these social media platforms will look like.

One worrying possibility is that the change could open the floodgates for more climate misinformation on Meta's apps, including misleading or out-of-context claims during disasters.

In 2020, Meta launched the Climate Science Information Center on Facebook to combat climate misinformation. Currently, third-party fact-checkers working with Meta flag false and misleading posts. Meta then decides whether to slap warning labels on them and reduce the extent to which the company’s algorithms promote them.

Meta's policy directs fact-checkers to prioritize "viral disinformation," hoaxes, and false claims that are timely, trending, and consequential. Meta explicitly states that this does not include opinion content that lacks false claims.

The company will terminate its agreement with U.S. third-party fact-checking organizations in March 2025. The planned changes apply to U.S. users and will not affect fact-checking of content viewed by users outside the United States. In other regions, such as the European Union, the tech industry faces tighter regulations to combat disinformation.

Fact checks curb climate misinformation

I study climate change communication. Fact-checking can help correct political misinformation, including about climate change. People's beliefs, ideologies, and prior knowledge can influence how effective fact-checking is. Framing information so that it aligns with your target audience's values, and using trusted messengers, such as climate-friendly conservative groups when speaking to political conservatives, can help. So does appealing to shared social norms, such as limiting harm to future generations.

As the world warms, heat waves, floods, and fires are becoming increasingly common and catastrophic. Extreme weather events often drive a surge in attention to climate change on social media. Posts peak during a crisis but quickly decline.

Low-quality fake images created with generative artificial intelligence software, so-called AI slop, have fueled confusion online during crises. For example, after Hurricanes Helene and Milton last fall, a fake AI-generated image of a young girl shivering in a boat and holding a puppy went viral on the social media platform X. Rumors and misinformation hampered FEMA's disaster response.

The difference between misinformation and disinformation lies in the intent of the person or group sharing it. Misinformation is false or misleading content shared without intent to deceive. Disinformation, on the other hand, is false information shared deliberately in order to deceive.

Disinformation campaigns are already happening. In the aftermath of the 2023 Hawaii wildfires, researchers from Recorded Future, Microsoft, NewsGuard, and the University of Maryland independently documented an organized propaganda campaign by Chinese operatives targeting U.S. social media users.

To be sure, the spread of misleading information and rumors on social media is not a new problem. However, not all content moderation methods have the same effect, and platforms are changing how they handle misinformation. For example, X replaced its misinformation-reporting tools with user-generated "community notes" that are meant to debunk false claims during fast-moving disasters.

[embed]https://www.youtube.com/watch?v=xgJ-xwXZ0zA[/embed]

A report found that climate change misinformation on X surged after Elon Musk acquired the platform, then called Twitter, on October 27, 2022.

False claims can spread quickly

Meta CEO Mark Zuckerberg specifically cited X's community notes as inspiration for his company's planned changes to content moderation. The problem is that false claims spread quickly. Recent research has found that crowdsourced community notes respond too slowly to stop viral misinformation early in its online lifecycle, the moment when a post is most widely viewed.

When it comes to climate change, misinformation is "sticky." Once a falsehood is repeated, it is difficult to dislodge from people's minds. Furthermore, climate misinformation undermines public acceptance of established science. Simply sharing more facts will not stop the spread of false claims about climate change.

Explaining that scientists agree climate change is happening and is caused by humans burning fossil fuels can prepare people to resist misinformation before they encounter it. Psychological research shows that this "inoculation" approach can reduce the impact of false claims.

That's why warning people about climate misinformation before it spreads is crucial to curbing it. Doing so may become more difficult on Meta's apps.

Social media users become the debunkers

With the coming changes, you will become the fact-checker on Facebook and other Meta apps. An effective way to debunk climate misinformation is to lead with accurate information, then briefly warn about the myth (stating it only once), explain why it is inaccurate, and repeat the facts.

During climate-fueled disasters, people urgently need accurate and reliable information to make life-saving decisions. That is challenging enough already, as when the Los Angeles County Office of Emergency Management mistakenly sent an evacuation alert to 10 million people on January 9, 2025.

In the information vacuum of a crisis, crowdsourced debunking is no match for organized disinformation campaigns. With the changes to Meta's content moderation policies and algorithms, the rapid, unchecked spread of misleading and outright false content is likely to get worse.

Overall, the American public wants the industry to rein in disinformation online. Instead, big tech companies appear to be leaving fact-checking to users.