Resources

Sixty civil and human rights organizations have come together to tell internet companies: It’s time to Change the Terms.

Free Press

Big Tech Backslide: How Social-Media Rollbacks Endanger Democracy Ahead of the 2024 Elections

In this report, Free Press shows how social-media rollbacks threaten democracy ahead of the 2024 elections.

Free Press' "Empty Promises: Inside Big Tech's Weak Effort to Fight Hate and Lies in 2022" report cover

Free Press

Empty Promises: Inside Big Tech's Weak Effort to Fight Hate and Lies in 2022

This Free Press report shows how Meta, TikTok, Twitter and YouTube are failing to crack down on hate and anti-democratic disinformation ahead of the U.S. midterms.

Gay & Lesbian Alliance Against Defamation (GLAAD)

2023 Social Media Safety Index (SMSI) & Platform Scorecard

The annual GLAAD Social Media Safety Index (SMSI) & Platform Scorecard provides recommendations for the industry and evaluates the five major platforms (Facebook, Instagram, Twitter, YouTube, and TikTok) with a focus on LGBTQ safety, privacy, and expression.

Gay & Lesbian Alliance Against Defamation (GLAAD)

2022 Social Media Safety Index (SMSI)

GLAAD’s 2022 Social Media Safety Index (SMSI) provides recommendations for the industry at large and reports on LGBTQ user safety across the five major social media platforms: Facebook, Instagram, Twitter, YouTube, and TikTok.

As a follow-up to the pioneering 2021 SMSI, the 2022 edition of the report introduces a Platform Scorecard developed by GLAAD in partnership with Ranking Digital Rights and Goodwin Simon Strategic Research. The Platform Scorecard uses twelve LGBTQ-specific indicators to generate numeric ratings for LGBTQ safety, privacy, and expression. After reviewing the platforms on measures like explicit protections from hate and harassment for LGBTQ users, offering gender pronoun options on profiles, and prohibiting advertising that could be harmful and/or discriminatory to LGBTQ people, all platforms scored under 50 out of a possible 100.

Center for Countering Digital Hate

Failure to Protect

Social media companies are failing to act on anti-Muslim hate 89% of the time. Earlier reports by CCDH have shown platforms have similarly failed to act on antisemitism, anti-Black racism, misogynist abuse and dangerous vaccine misinformation. Join us and demand that Facebook, Instagram, TikTok, Twitter and YouTube take swift action to stop profiting from and ban anti-Muslim hate on their platforms.

Global Project Against Hate and Extremism (GPAHE)

Americans’ Fear Suppressing Participation in Democracy

001

What is the goal of these policies? What problem is this solving?

Just as the internet has created immense positive value by connecting people and creating new communities, it has also given new tools to those who want to threaten, harass, intimidate, defame or even violently attack people different from themselves. White supremacists and others engaged in these sorts of hateful activities use social media and other major tech platforms to mobilize, organize, fundraise and normalize racism, sexism, bigotry, xenophobia, antisemitism, homophobia and transphobia. In the past few years, hateful activity online has grown significantly as the so-called “alt-right” has emerged from the shadows. From the Unite the Right Rally in Charlottesville in 2017 to the insurrection on the U.S. Capitol on Jan. 6, 2021, we’ve seen how the internet’s invigoration of hate groups can result in real-world violence.

Meanwhile, the internet has opened up unprecedented opportunities for diverse communities to speak, create, educate and entertain by building direct connections with their audiences — and given platforms to voices that would otherwise be silenced. Yet these same creators — including people of color, women, religious minorities and members of the LGBTQIA+ community — are routinely harassed and threatened online. These attacks stifle their voices and chill their participation on these platforms. These harms undermine the ability of entire communities to take advantage of the most important technological advances of the modern era.

Some Big Tech companies have made attempts — some more successful than others — to address hateful activities on their services. But when companies try to regulate content arbitrarily, without civil-rights expertise or without sufficient resources, they can exacerbate the problem. Indeed, some attempts have silenced diverse voices combating racism and discrimination.

The goal of the Change the Terms policies is to provide greater structure, transparency and accountability to the content moderation that many large platforms are already undertaking. The platforms want fair policies that are effectively enforced. We want to help them manage their platforms responsibly and respectfully.

002

Who are you and why should we listen to you?

The signatories and proponents of these model policies are a coalition of civil rights, human rights, technology policy, and consumer protection organizations. The policies themselves were drafted by the Center for American Progress, Color of Change, Free Press, the Lawyers’ Committee for Civil Rights Under Law, the National Hispanic Media Coalition, and the Southern Poverty Law Center. These drafters spent approximately nine months consulting with a wide range of civil and human rights experts and technologists to try to develop a thorough yet flexible set of policies.

003

Why are you doing this?

For the past few years, civil- and human-rights organizations have seen hatred grow online and watched as major tech companies have failed to adequately address the problems that their own platforms have created. Some tech companies have made significant improvements since the deadly Unite the Right rally in Charlottesville in 2017 — but the insurrection at the U.S. Capitol on Jan. 6, 2021, underscores how much work is left to be done.

Change the Terms provides a set of uniform model policies that civil- and human-rights groups can point to as best practices. We hope that these policies will set a benchmark to measure the progress of major tech companies, as well as a guide for newer companies wrestling with some of these issues for the first time.

With provisions calling for a strong appeals process, we hope these policies will help companies more effectively combat hate on their platforms — while avoiding silencing the users and creators who are combating hate online.

And we hope to use these policies to hold tech companies accountable. With this yardstick we plan to measure and report on which services and companies are best protecting their users.

004

If a company adopts these policies as its Terms of Service, what would the rules be?

Under these policies, a company commits to not allowing its services to be used for hateful activities.

  • Enforcement. The company will use the best available tools—with appropriately trained and resourced staff, technological monitoring, and civil rights expertise—to enforce the rules in a comprehensive and non-arbitrary manner.
  • Right of Appeal. The company will provide notice and a fair right of appeal to users whose content is taken down. This is particularly important for creators of color.
  • Transparency. The company will regularly provide robust transparency reports and data so that outside groups and researchers can effectively monitor the company’s progress, study trends, and recommend improvements.
  • Evaluation and Training. The company will invest in its staff and training practices to ensure that it is providing sufficient resources to address the problem, and regularly audit its practices.
  • Governance and Authority. The company will make a clear commitment to the importance of this issue by designating a senior executive, appointing a board of directors committee, and engaging a committee of external advisors, all dedicated to addressing hate and discrimination on the platform.
  • State Actors, Bots, and Trolls. Recognizing that social media in particular is a new front for information warfare, the company will take affirmative steps to identify, prohibit, and disrupt those who try to conduct coordinated hateful campaigns on the service.

005

What is a “hateful activity”? Will this initiative block free speech?

As defined in the model terms of service, “hateful activity” means “activities that incite or engage in violence, intimidation, harassment, threats, or defamation targeting an individual or group based on their actual or perceived race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation, or disability.”

Because of the strict definition of hateful activity in the terms of service, these policies will not block free speech. As an initial matter, the First Amendment does not apply to the policies of a private company; it applies only to actions taken by the federal government or a state or local government.

But even if it did apply, the First Amendment does not protect all speech. We carefully wrote the definition of hateful activity to cover types of speech that courts have said are not protected as free speech: incitement, violence, intimidation, harassment, threats, and defamation.

We also looked to hate crimes laws to determine what types of characteristics to protect. It is wrong to discriminate against someone based on their immutable characteristics—those personal traits that one cannot change or that are fundamental to one’s identity—such as race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation, or disability.

006

Why not just get the government to ban hate speech and regulate these companies?

There are several legal obstacles—which serve important purposes—that prevent government action from unilaterally solving this problem, even if we wanted it to. Moreover, the tech companies that built these highly profitable platforms also created these problems as collateral damage. Those who profit from these systems should bear the burden of solving the problem.

First, the First Amendment protects free speech in the United States, including hate speech. American federal and state governments cannot ban hate speech. But even outside the United States, speech laws vary wildly from one country to another. It is preferable to develop a set of policies that major tech companies can apply globally so that hateful actors cannot launder their activity by routing their traffic through a different jurisdiction.

Second, in the United States, hateful activities and hate crimes are already illegal in most jurisdictions. An injured person in many cases can bring a civil lawsuit against someone who defames, harasses, or threatens them. But such one-off litigation is very slow and expensive, and sometimes you cannot identify your attacker; many marginalized communities do not have sufficient access to legal services to make this an effective strategy in most cases. Online hate is a systemic problem that needs a systemic solution.

Third, the United States gave tech companies some limited legal immunity under the Communications Decency Act. This immunity is vital to internet innovation and small startups; without it we would not have the internet as we know it today. But the trade-off is that we expect the tech companies to police their own platforms.

Finally, and perhaps most importantly, if any government began regulating online speech directly, there would be huge risks that the majority would silence and oppress the minority. Historically, censorship laws have always disproportionately silenced activists and minorities.

007

Are you worried that these policies will be used to silence minorities online?

Yes, we are always worried that any policy could have disparate impacts on minority groups. But we have to look realistically at the situation we are facing today.

First, minorities are already receiving disparately discriminatory treatment from many tech companies. Many—if not most—tech companies are already creating and enforcing rules about what type of content is allowed on their platforms. In our experience, these rules are often poorly written or inadequately enforced. They often only respond when someone affirmatively reports a violation (sometimes trolls abuse these systems by flagging legitimate content).

And the enforcement teams are often understaffed, under-resourced, and poorly trained. In many cases, the appeals policies are insufficient to allow minorities whose speech does not constitute hateful activity to get their content back online. All of these factors cause the current policies and practices to disproportionately silence the speech of minorities.

Second, abstaining entirely from content moderation results in even more hate and discrimination targeted at marginalized communities. The platforms that historically have taken the most permissive approach to content (such as 4chan, Twitter, and—until recently—Reddit) are also the places where the worst conduct occurs. A Wild West internet is not the answer.

Our goal is to help tech companies develop policies and best practices—with input from civil and human rights experts—that appropriately moderate their platforms while avoiding discriminatory impacts. These policies are a living document, intended to reflect the most effective policies and practices to combat hateful activities while protecting minority voices.

008

What companies do you want to adopt these policies?

These policies are intended for internet companies that provide the following types of services:

  • Social media, video sharing, communications, marketing, event scheduling, or ticketing
  • Online advertising
  • Financial transactions or fundraising
  • Public chat or group communications
  • Domain names
  • Building or hosting websites, blogs, or message boards

These policies are not intended to be used by Internet Service Providers (e.g., Comcast or AT&T). We are committed to an open internet. Nothing in these policies is intended to allow or support blocking, throttling, or prioritizing any lawful content by an Internet Service Provider.

009

How will you measure whether a company is complying with these policies?

We intend to continue engaging directly with tech companies and to begin engaging with companies we have not worked with previously. We are happy to sit down with them at any time to discuss the policies and any concerns they may have. We will be actively encouraging tech companies to adopt and implement these policies.

After we have given tech companies a reasonable amount of time to digest, adopt, and implement these policies, we will initiate an evaluation process to see how companies stack up. This process will measure both tech companies’ formal policies and how they implement them. Without effective and evenhanded execution, a formal policy is just empty words.

010

Technology changes quickly. Are you worried these policies won’t keep up?

These model policies are meant to be a living document. This is Version 1.0. We fully expect that they will need evaluation and revision going forward. We will learn lessons from their implementation, and online behavior will evolve. We hope that the transparency and reporting procedures in these policies will give independent researchers and the public the data needed to figure out what works and what does not. And then we will revise.

In addition, we recognize that these are model policies and that every company has a different architecture and business model. We have tried to write these policies in a flexible manner so that companies can adapt their execution to the structure of their services while maintaining some baseline expectations for fairness, transparency, and consumer protection.

011

AI and machine learning algorithms are often infected with bias and produce discriminatory results. Why would you want tech companies to use them as part of their enforcement practices?

We believe it is unrealistic to expect human reviewers to adequately monitor most large tech platforms. The volume of data is just too large. Without technological assistance, under-enforcement would be rampant. The human reviewers would always be swamped with too much content to review and only the most egregious or loudest cases would be addressed. That would allow hateful activities to flourish on these services.

But when tech companies use software to help monitor for hateful activities on their platforms, we recognize the risks of algorithmic bias and require the companies to take precautions to avoid it. All enforcement decisions must involve a human. Companies need to design and test their algorithms for bias, and routinely audit them even after deployment.

012

Will tech companies be able to use these policies to further invade my privacy, anonymity, or use of encryption?

No. Tech companies should not be allowed to use these policies as an excuse to invade their users’ privacy, strip them of their anonymity, or undermine the security and privacy of encrypted messaging services. Hateful activities can be reduced while respecting consumers’ rights.

Internet companies must ensure that their efforts are tailored to the mission of addressing hateful activities, and do not inappropriately invade users’ privacy, profile users based solely on their identity or affiliations, or initiate investigations solely based on offensive speech that does not qualify as hateful activities.

We intend these policies to be flexible depending on the nature of the service the tech company is providing. For some companies, requiring an authentic identity is part of the structure of the platform. For others, their users value anonymity. When tech companies are structuring their practices to implement these policies, they will need to take into account the reasonable expectations of their users. However, a commitment to anonymity cannot be a reason to not address hateful activities. Similarly, a commitment to users disclosing who they are has not in and of itself stopped these kinds of hateful activities on social media platforms.

013

Will government agencies be able to use these policies to control free speech?

No. Tech companies should not allow government actors to use their tools to attempt to remove content they find objectionable. Governments have other means by which to address content concerns through the usual legislative, judicial, or regulatory mechanisms that are (or should be) accountable to their constituency.

For instance, in the United States there are strong restrictions on what speech can be limited by government and the requirement for due process prior to such limitations. Nothing in these policies should be interpreted to grant additional authority to government or to grant government actors extrajudicial influence over tech companies’ content.
