We’re proud to announce today’s release of Onlinecensorship.org’s first report looking at how content is regulated by social media companies. Onlinecensorship.org—a joint project of EFF and Visualizing Impact (VI) that won the 2014 Knight News Challenge—seeks to encourage social media companies to operate with greater transparency and accountability toward their users as they make decisions that regulate speech.
Onlinecensorship.org was founded to fill a gap in public knowledge about how social media companies moderate content. As platforms like Facebook and Twitter play an increasingly large role in our lives, it’s important to track how these companies regulate the speech of their users, both in tandem with governments and independently of them. As self-appointed content moderators, these companies face thorny questions: deciding what constitutes hate speech, harassment, and terrorism is challenging, particularly across many different cultures, languages, and social circumstances. These U.S.-based companies by and large do not consider their policies to constitute censorship. We challenge this assertion and examine how their policies (and their enforcement) may have a chilling effect on freedom of expression.
This inaugural report covers four months of results from our user questionnaire, from our launch in November 2015 through March 2016. From these submissions, we have learned a great deal about how content moderation works, how policies are enforced, and how that enforcement affects users’ lives. The report examines these questions broadly, and also takes a deep dive into how policies banning nudity and requiring “authentic” names affect users in particular. Finally, we make a set of recommendations to companies on how they can improve the user experience and strengthen their commitment to free expression.
The full report is available at https://onlinecensorship.org/news-and-analysis/onlinecensorship-org-launches-first-report-download