BELOW ARE RECOMMENDED POLICIES FOR CORPORATIONS TO ADOPT AND IMPLEMENT TO ADDRESS HATEFUL AND HARMFUL ACTIVITIES ON THEIR PLATFORMS.

While a free and open internet creates immense social value, it can also be used to engage in hateful activities and to do so at a large scale.

For example, white supremacist groups and other organizations that incite hate are using online platforms to organize, raise funds, recruit supporters, and normalize racism, sexism, xenophobia, religious bigotry, homophobia, and transphobia, among other forms of hate.

This chills the online speech of the targeted groups, curbs democratic participation, and threatens people’s safety and freedom in real life.

While some companies are taking steps in the right direction to reduce hateful activities online, anti-hate provisions in most companies’ terms of service alone are not enough.

To ensure that companies are doing their part to combat hateful conduct on their platforms, the organizations in this campaign will track the progress of major tech companies – especially social media platforms – in adopting and implementing these model corporate policies, and will issue report cards the following year grading both the policies themselves and how the companies enforce them.

Below, see an excerpt of the recommended corporate policies. Download a full version here.

*Note: Change the Terms uses the term “hateful activities” to mean activities that incite or engage in violence, intimidation, harassment, threats, or defamation targeting an individual or group based on their actual or perceived race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation, or disability.

TERMS OF SERVICE AND ACCEPTABLE USE POLICIES

Terms of Service or Acceptable Use Policies should, at a minimum, make clear that using the service to engage in hateful activities on the service, or to facilitate hateful activities off the service, is grounds for terminating a user's service.

For instance, while an online payment processor may not be the vehicle through which a group directly engages in hateful activities, the processor should not knowingly allow the group to use its services to fund such activities. Failing to deny service in this example would mean the payment processor is profiting financially from hateful activities.

ENFORCEMENT

Strong Terms of Service or Acceptable Use Policies mean very little if they are not effectively enforced. In practice, enforcement varies significantly across Internet Companies and can vary within an Internet Company from case to case.

This has made it possible for groups and individuals who engage in hateful activities online to continue operating unscathed, or to lose access to a service only to be reinstated later without explanation. Internet Companies must have in place an enforcement strategy that recognizes the scope of the problem and reflects a commitment to continually and significantly diminishing hateful activities on their services.

RIGHT OF APPEAL

Because determining what constitutes hateful activities can be complicated in some cases, a user should have the right to appeal any material impairment, suspension, or termination of service, whether that action applies in full or in part.

This right should allow for an appeal to a separate, neutral decision-maker other than the one that made the initial determination, with knowledge of the social, political, and cultural history and context of the country or countries the user comes from and in which people have access to the perceived transgression of the terms of service or acceptable use policy. The appealing user should have the opportunity to present information in support of their position.

TRANSPARENCY

Both technologies and how people use them change rapidly. To address hateful activities online, it is important to understand what is occurring, what is working, and what is not. To facilitate this understanding, Internet Companies should be transparent about the actions they are taking, why they are taking them, and who is impacted.

This data should be made available online in easily accessible, comprehensive formats that are both human- and machine-readable. This will allow researchers, scholars, and others to analyze the data to better understand what is happening, make recommendations, and develop best practices.
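For illustration only, below is a minimal sketch of what one machine-readable enforcement record might look like, published as JSON Lines. The schema and field names are assumptions made for this example; the recommended policies do not prescribe a specific format.

    import json

    # Illustrative only: these field names are assumptions for this sketch,
    # not a schema specified by the recommended policies.
    record = {
        "record_id": "example-0001",
        "date_actioned": "2024-03-15",
        "policy_violated": "hateful activities",
        "action_taken": "account suspension",
        "detection_method": "user report",
        "appeal_filed": True,
        "appeal_outcome": "action upheld",
    }

    # One JSON object per line ("JSON Lines") stays human-readable while
    # remaining trivial for researchers to parse at scale.
    with open("transparency_report.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

A format along these lines satisfies both requirements at once: a person can read each line directly, and a script can load millions of records for analysis.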

EVALUATION AND TRAINING

In their efforts to address hateful activities online, Internet Companies are testing a variety of techniques, often combining automated tools with human assessors to evaluate whether a given use of their services constitutes hateful activities. This has not always been successful, because the programmers and human assessors may lack expertise on hateful activities for a variety of reasons, including that they are not properly trained or do not understand the cultural, social, and political history and context of the locales, regions, or countries that will have access to the content.

To address this, Internet Companies should hire recognized experts with demonstrated expertise on hate, evidenced by credentials such as peer-reviewed publications and relevant academic training, to advise programmers, develop training content, and oversee the training of assessors.

GOVERNANCE AND AUTHORITY

Several factors have increased corporate interest in addressing the reality that groups are using their tools to engage in hateful activities. These include the neo-Nazi march in Charlottesville, new European rules that impose large fines on Internet Companies that fail to address criminal violations under their laws, and revelations by large Internet Companies that foreign governments or aligned entities have engaged in hateful activities on social media platforms in an attempt to create divisions within democracies.

Prior to these recent occurrences, organizations working to oppose hateful activities online found that while some Internet Companies were willing to meet with them, this seldom resulted in meaningful action or organization-wide commitment to change.

STATE ACTORS, BOTS AND TROLL CAMPAIGNS

Large-scale initiatives to promote hateful activities may originate with countries, or other entities, that intend to sow discord or influence the outcomes of elections. It is now clear that this has happened, with foreign actors targeting elections in the United States and multiple countries in Europe. There are also reports of large-scale social media troll campaigns engaging in hateful activities that target ethnic or religious groups in both African and Asian countries.

These coordinated campaigns of hateful activities have relied on large numbers of bots and/or large teams of human operatives, both of which present themselves as someone other than who they are. Removing hateful activities from online services will require dealing directly with these large-scale initiatives. At their core, these initiatives depend on the ability of anonymous, clandestine, and/or deceptive actors, whether human or bot, to manipulate services through coordinated action, especially on social media platforms.

In addition to propagating hateful activities, this manipulation feeds untrue information to Internet Companies' users and potentially undermines the legitimacy of the platforms, including the many valid and valuable uses of anonymity and privacy-protective services.
