In the Dark: How Social Media Companies' Climate Disinformation Problem is Hidden from the Public

Ranking Big Tech on Transparency: A Report by Friends of the Earth, Avaaz, and Greenpeace USA

Published: April 21, 2022

Executive Summary

For decades, the fossil fuel industry has poured millions of dollars into spreading climate disinformation[1] online and offline to drive public polarization and stall action on the climate crisis. That is why the latest UN climate reports identify climate disinformation as a threat to climate action.

A new scorecard by Friends of the Earth, Avaaz, and Greenpeace USA shows that social media companies are largely leaving the public in the dark about their efforts to combat the problem. There is a gross lack of transparency, as these companies conceal much of the data about the prevalence of digital climate dis/misinformation and any internal measures taken to address its spread. Pinterest and YouTube have taken notable steps to address climate dis/misinformation, while Facebook, TikTok, and Twitter trail behind in their efforts.

All of the social media companies fail to disclose comprehensive policies to combat climate dis/misinformation, including:

  • Releasing weekly transparency reports that detail the scale and prevalence of climate dis/misinformation on their platforms and mitigation efforts taken internally;
  • Providing thorough and consistent detail for the courses of action they take on repeat violators of their policies, especially in the context of climate dis/misinformation.

But some have done more than others:

  • Pinterest and YouTube have adopted climate expert-informed definitions of climate dis/misinformation, while Facebook, TikTok, and Twitter have not.

We used a system of 27 “yes or no” assessment questions, worth 27 points in total, to analyze climate dis/misinformation policies at Facebook, Pinterest, TikTok, Twitter, and YouTube.



The findings of this assessment reinforce the need for lawmakers throughout the world to pass robust regulation such as the Digital Services Oversight and Safety Act (DSOSA), which would mandate transparency from social media companies. Dis/misinformation experts have noted that transparency is key to better understanding the evolving landscape of climate dis/misinformation, holding disinformers accountable, and ultimately ending the climate crisis. Currently, these companies do not provide the data transparency researchers and advocates need to understand the scale and harms of climate dis/misinformation on social media, leaving the public powerless to judge whether social media companies are acting responsibly in designing and building their platforms.

Background

Although half of US and UK adults get their news from social media, social media companies have not taken the steps necessary to fight industry-backed deception; in fact, they continue to allow climate lies to pollute users’ feeds.

Their publicly available policies and reports show that they’re failing to keep up with the evolving tactics of climate disinformers. As mounting scientific evidence has made the greenhouse effect impossible to deny, disinformers have shifted to admitting that climate change is real, but are still pushing narratives that undermine and oppose policy action. Adopting a “Global warming is not a hoax, but…” stance allows disinformers to appear as moderate interlocutors in debates about climate change while pushing for similar policy outcomes. These policy positions are often supported by outright falsehoods, cherry-picked data, and misleading claims.

In the European Union, there is significant movement toward regulating Big Tech, including requiring transparency on content policies and practices. The Digital Services Act (DSA) is currently under negotiation and expected to pass Parliament in June 2022. It will likely establish a legal standard for tech company responsibility that sets global expectations, much as Europe’s General Data Protection Regulation (GDPR) changed standards for data protection. In the US, Representative Lori Trahan (MA-03) has introduced the Digital Services Oversight and Safety Act (DSOSA), which would empower analysts and advocates working to reduce climate dis/misinformation by requiring transparency from social media companies and mandating that they facilitate independent research into the social and political consequences of platform design. Both legislative pushes underline the need for transparency and accountability to mitigate the harms of Big Tech’s business model, and they reinforce the case for transparency on urgent issues like climate dis/misinformation until comprehensive regulation is in place.

Social Media Company Rankings

Based on an assessment of official, publicly-available company reports and policy announcements[2],[3] (as of April 8, 2022), Avaaz, Friends of the Earth, and Greenpeace USA found that Facebook, Pinterest, TikTok, Twitter, and YouTube fall short of disclosing comprehensive policies to combat climate dis/misinformation.

We created 27 “yes or no” assessment questions that, taken together, define the level of transparency we believe social media companies should provide so that researchers, policymakers, and the public can track climate dis/misinformation and hold the companies accountable for their role in its spread.[4] We then gathered all public-facing community guidelines, terms of service, press releases, and reporting from the five social media companies relevant to dis/misinformation generally and climate dis/misinformation specifically. For each assessment question, a social media company earned 1 point if it demonstrated transparency on the policy or effort being evaluated for at least one category of dis/misinformation. If a social media company had no official policy or effort, or only a partial or opaque articulation of one, no point was granted.

While there is no universally accepted definition of dis/misinformation, we used the following definitions for the purposes of our research[5]:

“Disinformation” describes any verifiably false or misleading content that is spread with the intention to deceive or secure economic or political gain, and which has the potential to cause public harm.

“Misinformation” describes verifiably false or misleading content that is shared without harmful intent, though the effects can still be harmful.

 



With the exception of Pinterest and YouTube, all social media companies fail to adopt a climate expert-informed definition of climate dis/misinformation, which is integral to ensuring that platforms’ policies and enforcement practices address the fullest scope of the problem. Pinterest is the only social media company that publicly articulates an expert-informed definition that applies to both paid and organic content, while YouTube articulates a definition only as it applies to paid content (advertising).[6]

Across the board, social media companies fail to release weekly transparency reports that detail the scale and prevalence of climate dis/misinformation on the platform and mitigation efforts taken internally. All social media companies have the resources to issue reports at this frequency and scope[7], which would give the public a greater ability to monitor the rapidly evolving dis/misinformation landscape and assess what measures are most effectively addressing it. Enforcement reporting on key issues such as climate dis/misinformation is important so that outside observers can monitor and flag the spread of potentially harmful content if and when internal systems fail, as was recently reported on Facebook.

Furthermore, while all platforms except Twitter allow all users to flag and report dis/misinformation for review and likely action by content moderators, the platforms are vague about whether and how they follow up to tell users what actions were, or were not, taken on the flagged content and actors.

The Center for Countering Digital Hate has documented that repeat violators of dis/misinformation policies fuel the majority of climate denial on social platforms. Despite this, all of the social media companies we analyzed fail to provide thorough and consistent detail on the courses of action they take against repeat policy violators, especially in the context of climate dis/misinformation. TikTok and YouTube state explicitly that repeat violators of their overarching dis/misinformation policies face suspension or removal, and Facebook is the only social media company that explicitly describes how it penalizes repeat violators of its dis/misinformation policies with respect to climate-related content.

All social media companies have explicit language prohibiting racist and misogynistic content and/or imagery. Due to increasingly prominent climate-fueled racist or misogynistic content, it is important for social media companies to ensure that this prohibition is rigorously enforced against climate disinformation that contains misogynistic, racist, and otherwise discriminatory elements.


#1 Pinterest is the only social media company to provide a robust definition of climate dis/misinformation that applies to both paid and organic content.

  • Pinterest is one of two social media companies analyzed that provides a climate-expert-informed definition of climate dis/misinformation.
  • Pinterest also explicitly articulates how its anti-dis/misinformation policy enforcement will apply to climate dis/misinformation specifically.
  • Pinterest’s process for reviewing dis/misinformation, including climate-specific dis/misinformation, is inadequately disclosed and vaguely described. Furthermore, the company states it “may remove, limit, or block the distribution” of accounts that repeatedly violate its Community Guidelines, non-committal language that leaves it opaque how this policy is enforced.

#1 YouTube articulates a climate dis/misinformation definition only as it relates to paid advertising, which leaves the public in the dark as to how it applies to other content.

  • While YouTube is clear on the internal process through which it categorizes content as dis/misinformation, the social media company is vague about how widely and consistently dis/misinformation countermeasures are enforced.
  • YouTube has articulated a climate dis/misinformation definition and policy that only applies to monetized videos and paid advertising.
  • YouTube provides a target rate for the prevalence of “borderline content” in recommendations but does not remove such content, nor does it disclose in enforcement reports whether the target is consistently met. “Borderline content” is content that stops just short of violating YouTube’s Community Guidelines and encompasses many categories of dis/misinformation.

#3 Facebook is explicit about how fact-checking and downranking measures are applied to climate dis/misinformation and its regular spreaders, but it discloses little about enforcement outcomes.

  • While Facebook articulates how content is verified as dis/misinformation through its third-party fact-checking program, there is little clarity about how widely and consistently dis/misinformation measures are applied.
  • Facebook doesn’t release updates on the scale and prevalence of climate dis/misinformation in its quarterly enforcement reports, or on climate dis/misinformation in general.
  • Facebook has not released a publicly available definition of climate dis/misinformation.

#4 TikTok has clear standards for removing repeat offenders and includes dis/misinformation in its transparency reports, but it does not articulate how any of these policies apply to climate dis/misinformation.

  • TikTok has transparent user-flagging policies, and the social media company clearly articulates that repeat violators of its Community Guidelines face account-level consequences for spreading dis/misinformation.
  • TikTok is transparent about its process for identifying dis/misinformation on its platform.
  • While TikTok’s quarterly enforcement reports include high-level metrics on actions taken against dis/misinformation, these reports do not detail actions taken specifically against climate dis/misinformation.
  • TikTok has not published a publicly available definition of climate dis/misinformation, nor does it refer to climate-related content anywhere in its Community Guidelines.
  • TikTok publishes guidelines for recommended content that make clear dis/misinformation is discouraged, but the qualifying language used throughout makes it difficult for users and researchers to tell whether the policies are applied consistently.

#5 Twitter’s lack of clarity on dis/misinformation review policies and vague enforcement reporting information puts it last.

  • Twitter lost many points for qualifying language and vagueness. For example, in its enforcement policies, Twitter describes “enforcement actions that we may take” without providing clear criteria for when and why particular actions are or are not applied.
  • Its strengths as a social media company came from transparency on account-level exemptions to standard enforcement, and clear strike policies for repeat offenders on some types of dis/misinformation.
  • The social media company does not articulate how its existing dis/misinformation-related policies apply to climate dis/misinformation, nor does it explicitly articulate how it plans to address climate dis/misinformation.
  • Twitter is not clear about how content is verified as dis/misinformation, nor explicit about engaging with climate experts to review dis/misinformation policies or flagged content.
  • Twitter’s total lack of reference to climate dis/misinformation, both in its policies and throughout its enforcement reports, earned it no points in either category.
  • The social media company has not released a publicly available definition of climate dis/misinformation.

Recommendations

The Climate Disinformation Coalition, an intersectional group of climate organizations and tech accountability groups (including Avaaz, Friends of the Earth, and Greenpeace USA), has called on all social media companies to deliver on the following[8]:

  • Establish, disclose, and enforce policies to reduce climate change dis/misinformation.
  • Release in full the company’s current labeling, fact-checking, policy review, and algorithmic ranking systems related to climate change disinformation policies.
  • Disclose weekly reports on the scale and prevalence of climate change dis/misinformation on the platform and mitigation efforts taken internally.
  • Adopt privacy and data protection policies to protect individuals and communities who may be climate dis/misinformation targets.

Methodology

Between January 14 and April 8, 2022, Friends of the Earth, Avaaz, and Greenpeace USA developed specific criteria for evaluating the platforms’ climate disinformation policies: a list of 27 “yes or no” assessment questions representing the scope and level of transparency we believe these companies should hold themselves to so that researchers, policymakers, and the public can track climate dis/misinformation and hold social media companies accountable for their role in its spread.[9]

We gathered all official, public-facing community guidelines, terms of service, press releases, and reporting relevant to assessment questions from the five social media companies. Each assessment question amounted to 1 point if the social media company demonstrated transparency on the policy or effort being evaluated. If a social media company had no official policy or effort OR only a partial or qualified articulation of such a policy or effort, no point was granted.

For questions on a social media company’s general (non-climate-specific) dis/misinformation policy, points were awarded for transparency on dis/misinformation policies covering specific domains, such as COVID-19 dis/misinformation. The assessment question regarding the elements within a climate dis/misinformation definition was worth three points in total. The total number of points available was 27.
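The point-tallying described above is simple enough to sketch in code. The following Python snippet is a hypothetical illustration of the scoring and ranking logic only; the question names, answers, and weights are invented for demonstration and are not the report’s actual assessment data.

```python
# Hypothetical sketch of the scorecard's tallying logic.
# Question names, answers, and weights below are illustrative only.

def score_company(answers: dict[str, bool], weights: dict[str, int]) -> int:
    """Sum points: a question earns its weight only on a clear 'yes'.

    Partial or opaque policies are treated as 'no' and earn nothing,
    mirroring the report's rule. Unweighted questions default to 1 point.
    """
    return sum(weights.get(q, 1) for q, yes in answers.items() if yes)

def rank_companies(scores: dict[str, int]) -> list[tuple[str, int]]:
    """Order companies by total points, highest first."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative example with made-up answers:
weights = {"definition_elements": 3}  # the one question worth 3 points
answers = {
    "definition_elements": True,      # earns 3
    "weekly_reports": False,          # earns 0
    "repeat_violator_policy": True,   # earns 1
}
print(score_company(answers, weights))  # prints 4
```

The one design point worth noting is that a partial or qualified policy is scored identically to no policy at all, which is exactly how the methodology treats opacity.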

Assessment Questions

In February, our researchers reached out to all social media companies to flag this upcoming cross-platform analysis of transparency and content moderation policies and to request any clarification on existing or forthcoming policies that might be released before we concluded our assessment. Pinterest was the only company to respond.

This report will be updated periodically in the future, based on any policy changes, updated information, or corrections provided by the companies.

The social media companies’ policies and transparency efforts were all evaluated with the following questions, which reflect the recommendations communicated to the social media companies. A social media company was given 1 point for officially and publicly disclosing each of the policies or information below.

Total points: 27. The ranking of each social media company was determined by the sum of points it received after the assessment was complete.

DISCLAIMER: We do not include API access or research data-sharing policies in our analysis, as such data requires significant resources and expertise to process and is therefore not easily accessible to policymakers and the general public. Moreover, data accessed through major platform APIs, including data offered by Facebook, Twitter, and YouTube, has been revealed to be non-comprehensive, subject to arbitrary restriction or withdrawal by social media companies, and constrained by data limits and the omission of deleted content and other essential information. For this reason, external researchers cannot and should not be expected to provide a comprehensive overview of climate misinformation that is readily available internally to social media companies. However, we recognize the importance of more open API and researcher access for holding social media companies accountable, and we support the development of the accessible data-sharing protocols and rigorous auditing procedures needed to verify the enforcement of the policies covered in this scorecard.

Final Note

The organizations that created this assessment are cognizant that different scientific models can be used to assess the effects of climate change and the progression of associated threats over the coming decades.

One of the key objectives of this report is to allow for fact-based deliberation, discussion and debate to flourish in an information ecosystem that is healthy and fair, and that allows both citizens and policymakers to make decisions based on the best available data. Past research has shown that much of the climate dis/misinformation on social media is spread by a small number of actors, often with vested economic and political interests, and amplified by social media recommendation algorithms designed to maximize human attention and profit.

We see a clear boundary between freedom of speech and freedom of reach, and we believe that transparency on climate dis/misinformation, and accountability for the actors who spread it, are preconditions for a robust and constructive debate on climate change and the response to the climate crisis.

References

[1] When referred to in this report, “disinformation” describes any verifiably false or misleading content that is spread with the intention to deceive or secure economic or political gain, and which has the potential to cause public harm. “Misinformation” describes verifiably false or misleading content that is shared without harmful intent, though the effects have the potential to be harmful. These definitions are informed by UNESCO and the years-long research by the authors of this report on the dis/misinformation landscape.

When referring to climate disinformation and misinformation specifically in this report, we are referring to deceptive or misleading content that: 1) Undermines the existence or impacts of climate change, the unequivocal human influence on climate change, and the need for corresponding urgent action according to the IPCC scientific consensus and in line with the goals of the Paris Climate Agreement; 2) Misrepresents scientific data, including by omission or cherry-picking, in order to erode trust in climate science, climate-focused institutions, experts, and solutions; or 3) Falsely publicizes efforts as supportive of climate goals that in fact contribute to climate warming or contravene the scientific consensus on mitigation or adaptation. This definition was developed in partnership with climate and disinformation experts.

[2] This assessment is based on official, publicly-available company reports and policy announcements alone. This does not include comments from company spokespeople and current or former employee leaks to the press. Avaaz, Friends of the Earth, and Greenpeace USA believe that social media companies should make all information relevant to policy details, enforcement measures, and effectiveness of mitigation efforts available in the form of official company-issued reports and policy announcements. See “Methodology” for more.

[3] See full assessment here.

[4] See “Methodology” for the list of all 27 questions used to assess the companies’ policies and actions.

[5] These definitions are informed by UNESCO and the years-long research on the dis/misinformation landscape by the authors of this report.

[6] https://support.google.com/google-ads/answer/11221321?hl=en

[7] https://www.courant.com/opinion/op-ed/hc-op-oneil-facebook-algorithms-20211012-u5jdwte3mfhthm3s7ecq4nvq6e-story.html

[8] Expanded policy demands can be viewed here.