A new project aimed at “countering illegal use of the Internet” is making headlines this week. The project, dubbed CleanIT, is funded by the European Commission (EC) to the tune of more than $400,000 and, it would appear, aims to rid the Internet of terrorism.
European Digital Rights, a Brussels-based organization consisting of 32 NGOs throughout Europe (and of which EFF is a member), has recently published a leaked draft document from CleanIT.
The project’s website states that its goal is to reduce the impact of the use of the Internet for “terrorist purposes” but “without affecting our online freedom.” While the goal may seem noble enough, the project actually contains a number of controversial proposals that will compel Internet intermediaries to police the Internet and will most certainly affect our online freedom. Let’s take a look at a few of the most controversial elements of the project.
Privatization of Law Enforcement
Under the guise of fighting “terrorist use of the Internet,” the “CleanIT project,” led by the Dutch police, has developed a set of “detailed recommendations” that will compel Internet companies to act as arbiters of what constitutes “illegal” or “terrorist” use of the Internet.
Specifically, the proposal suggests that “legislation must make clear Internet companies are obliged to try and detect to a reasonable degree … terrorist use of the infrastructure” and, even more troubling, “can be held responsible for not removing (user generated) content they host/have users posted on their platforms if they do not make reasonable effort in detection.”
EFF has always expressed concerns about relying upon intermediaries to police the Internet. As an organization, we believe in strong legal protections for intermediaries, and as such we have often held up Section 230 of the United States’ Communications Decency Act (CDA 230) as a positive example of intermediary protection. While even CDA 230’s protections do not extend to truly criminal activities, the definition of “terrorist” is, in this context, vague enough to raise alarm (see the conclusion for more details).
Erosion of Legal Safeguards
The recommendations call for the easy removal of content from the Internet without following “more labour intensive and formal” procedures. They suggest new obligations that would compel Internet companies to hand over all necessary customer information for the investigation of “terrorist use of the Internet.” This amounts to a serious erosion of legal safeguards: under this regime, a mere assertion of “terrorist use of the Internet” would give authorities carte blanche to bypass hard-won civil liberties protections.
The recommendations also suggest that knowingly providing hyperlinks to a site that hosts “terrorist content” will be defined as illegal. This would negatively impact a number of different actors, from academic researchers to journalists, and is a slap in the face to the principles of free expression and the free flow of knowledge.
Data Retention
Internet companies under the CleanIT regime would not only be allowed but in fact obligated to store communications containing “terrorist content,” even after it has been removed from their platforms, in order to supply the information to law enforcement agencies.
Material Support and Sanctions
The project also offers guidelines to governments, including the recommendation that governments start a “full review of existing national legislation” on reducing terrorist use of the Internet. This includes a reminder of Council Regulation (EC) No. 881/2002 (art. 1.2), which prohibits Internet services from being provided to designated terrorist entities such as Al Qaeda. It is worth noting that similar legislation exists in the US (see: 18 U.S.C. § 2339B) and has been widely criticized as criminalizing speech in the form of political advocacy.
The guidelines spell out how governments should implement filtering systems to block civil servants from any “illegal use of the Internet.”
Furthermore, governments’ purchasing policies and criteria for public grants will be tied to Internet companies’ track records in reducing the “terrorist use of the Internet.”
Notice and Take Action
Notice and take action policies allow law enforcement agencies (LEAs) to notify Internet companies of “offending” content, which the companies must then remove as quickly as possible. This obligates LEAs to determine what content counts as “offending”: an LEA must “contextualize content and describe how it breaches national law.”
The leaked document contains recommendations that would require LEAs to, in some cases, send notice that access to content must be blocked, followed by notice that the domain registration must be ended. In other cases, sites' security certificates would be downgraded.
Real Identity Policies
Under the CleanIT provisions, all users of online services, including social and professional networks, will be obligated to supply their real identities to service providers, effectively destroying online anonymity. EFF believes anonymity is crucial for protecting the safety and well-being of activists, whistle-blowers, victims of domestic violence, and many others (for more on that, see this excellent article from Geek Feminism). Notably, the Constitutional Court of South Korea found a similar Internet “real name” policy to be unconstitutional.
Under the provisions, companies can even require users to provide proof of their identity, and can store users’ contact information in order to provide it to LEAs in the event of an investigation into potential terrorist use of the Internet. The provisions will even require individuals to use real images of themselves, destroying decades of Internet culture (in addition to, of course, infringing on user privacy).
Semi-Automated Detection
The plan also calls for semi-automated detection of “terrorist content.” While content would not be removed automatically, any search for known terrorist organizations’ names, logos, or other related content would be automatically detected. This would certainly inhibit research into anything remotely associated with what law enforcement might deem “terrorist content,” and would seriously hinder ordinary student inquiry into current events and history. In effect, any search about terrorism might end up being treated as terrorist propaganda in the eyes of an LEA.
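The leaked draft does not spell out how such detection would actually work, but it is easy to illustrate why keyword-style matching of searches over-flags. The sketch below is purely hypothetical (not CleanIT’s design); the watchlist entry and function names are assumptions for illustration only, showing how a student’s or researcher’s query trips the same filter as anything genuinely malicious.

```python
# Purely illustrative sketch -- the leaked CleanIT draft specifies no algorithm.
# It shows how naive keyword matching against a watchlist flags ordinary
# research queries about terrorism, not just "terrorist content."

# Hypothetical watchlist entry, drawn from organizations named on sanctions lists
WATCHLIST = {"al qaeda"}

def is_flagged(query: str) -> bool:
    """Return True if the search query mentions any watchlisted name."""
    normalized = query.lower()
    return any(name in normalized for name in WATCHLIST)

# A student's homework query and a researcher's query trip the same filter:
for query in [
    "history of al qaeda for a school essay",
    "al qaeda designation on the EU sanctions list",
]:
    print(f"{query!r} -> {'flagged' if is_flagged(query) else 'ok'}")
```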
LEA Access to User Content
The document recommends that, at the European level, browsers or operating systems should include a button for reporting terrorist use of the Internet, and suggests that governments draft legislation to make such a reporting button compulsory for browsers or operating systems.
Furthermore, the document recommends that judges, public prosecutors and (specialized) police officers be able to temporarily remove content that is being investigated.
Banning Languages
Frighteningly, one matter up for discussion within the CleanIT provisions is the banning of languages that have not been mastered by “abuse specialists or abuse systems.” The current recommendation contained in the document would make the use of such languages “unacceptable and preferably technically impossible.”
With more than 200 commonly used languages and more than 6,000 languages spoken globally, it seems highly unlikely that such abuse specialists or systems will ever cover more than a select few. For the sake of comparison, Google Translate works with only 65 languages.
At a time when new initiatives to preserve endangered languages are taking advantage of new technologies, it seems shortsighted and even chauvinistic to consider limiting what languages can be used online.
What Is Terrorism, Anyway?
While the document states that the first reference for determining terrorist content will be the UN, EU, and national terrorist sanctions lists, it seems that the provisions allow for a broader interpretation of “terrorism.” This is incredibly problematic in a multicultural environment; as the old adage goes, “one man’s terrorist is another man’s freedom fighter.” Even a comparison of the US and EU lists of designated terrorist entities shows discrepancies, and the recent controversy in the US around the de-listing of an Iranian group shows how political such decisions can be.
Overall, we see the CleanIT project as a misguided effort to introduce potentially endless censorship and surveillance that would effectively turn Internet companies into Internet cops. We are also disappointed in the European Commission for funding the project: given the strong legal protections for free expression and privacy contained in the Charter of Fundamental Rights of the European Union [PDF], it’s imperative that any effort to track down and prosecute terrorism also protect fundamental rights. The CleanIT regime, on the other hand, clearly erodes these rights.