This is the final part of a three-part series about age verification in the European Union. In part one, we give an overview of the political debate around age verification and explore the age verification proposal introduced by the European Commission, based on digital identities. Part two takes a closer look at the European Commission’s age verification app, and part three explores measures to keep all users safe that do not require age checks.
When thinking about the safety of young people online, it is helpful to remember that we can build on and learn from decades of experience thinking through the risks that can stem from online content. Before mandating a “fix,” like age checks or age assurance obligations, we should take the time to reflect on what exactly we are trying to address, and whether the proposed solution can actually solve that problem.
The approach of analyzing, defining and mitigating risks is helpful in this regard because it allows us to take a holistic look at possible risks: how likely a risk is to materialize, how severe it would be, and how it may affect different groups of people very differently.
In the context of child safety online, mandatory age checks are often presented as a solution to a number of risks potentially faced by minors online. The most common concerns to which policymakers refer in the context of age checks can be broken down into three categories of risks:
- Content risks: This refers to the negative implications of exposure to online content that might be age-inappropriate, such as violent or sexually explicit content, or content that incites dangerous behavior like self-harm.
- Conduct risks: This covers behavior by children or teenagers that might be harmful to themselves or others, like cyberbullying, sharing intimate or personal information, or problematic overuse of a service.
- Contact risks: This includes potential harms stemming from contact with people who might pose a risk to minors, including grooming or being forced to exchange sexually explicit material.
Taking a closer look at these risk categories, we can see that mandatory age checks are an ineffective and disproportionate tool to mitigate many risks at the top of policymakers’ minds.
Mitigating risks stemming from contact between minors and adults usually means ensuring that adults are barred from spaces designated for children. Age checks, especially age verification that relies on ID documents like the European Commission’s mini-ID wallet, are not a helpful tool here: children routinely do not have access to the kind of documentation that would allow them to prove their age, while adults with bad intentions are much more likely to be able to circumvent whatever measures are put in place to keep them out.
Conduct risks have little to do with how old a specific user is, and much more to do with social dynamics and the affordances and constraints of online services. Put differently: whether a platform knows a user’s age will not change how minor users themselves decide to behave and interact on the platform. Age verification won’t prevent users from choosing to engage in harmful or risky behavior, like freely posting personal information or spending too much time online.
Finally, mitigating risks related to content deemed inappropriate is often equated with shutting minors out from accessing certain information. Age check mandates seek to limit access to services and content without much granularity. They don’t allow for a nuanced weighing of the ways in which accessing the internet and social media can be a net positive for young people against the ways in which it can lead to harm. This is complicated by the fact that, although proponents of age checks claim the science on the relationship between the internet and young people is clear, the evidence on the effects of social media on minors is unsettled, and researchers have refuted claims that social media use is responsible for wellbeing crises among teenagers. This doesn’t mean that we shouldn’t consider the risks that may be associated with being young and online.
But it’s clear that banning an entire age cohort from accessing certain information interferes with all users’ fundamental rights, and is therefore not a proportionate risk mitigation strategy. Under a mandatory age check regime, adults are also required to upload identifying documents just to access websites, interfering with their speech, privacy and security online. At the same time, age checks are not even effective at accomplishing what they’re intended to achieve. Given that age check mandates can and will be circumvented, they do little to protect children and instead undermine their fundamental rights to privacy, freedom of expression and access to information crucial for their development.
At EFF, we have been firm in our advocacy against age verification mandates and often get asked what we think policymakers should do instead to protect users online. Our response is a nuanced one, recognizing that there is no easy technological fix for complex societal challenges: take a holistic approach to risk mitigation, strengthen user choice, and adopt a privacy-first approach to fighting online harms.
Taking a Holistic Approach to Risk Mitigation
In the European Union, the past years have seen the adoption of a number of landmark laws to regulate online services. With new rules such as the Digital Services Act or the AI Act, lawmakers are increasingly pivoting to risk-based approaches to regulate online services, attempting to square the circle by addressing known cases of harm while also providing a framework for dealing with possible future risks. It remains to be seen how risk mitigation will work out in practice and whether enforcement will genuinely uphold fundamental rights without enabling overreach.
Under the Digital Services Act, this framework also encompasses rights-protective moderation of content relevant to the risks young people face on platforms’ services. Platforms may also set their own policies for moderating legal content that may be considered harmful, such as hate speech or violent content. Robust enforcement of their own community guidelines is one of the most important tools at platforms’ disposal, but it is unfortunately often lacking, including for categories of content harmful to children and teenagers, like pro-anorexia content.
To counterbalance potential negative implications for users’ rights to free expression, the DSA puts boundaries on platforms’ content moderation: platforms must act objectively and proportionately and must take users’ fundamental rights into account when restricting access to content. Users can appeal content moderation decisions they disagree with and ask platforms to review them, can seek resolution through out-of-court dispute settlement bodies at no cost, and can ask nonprofits to represent them in the platform’s internal dispute resolution process, in out-of-court dispute settlements and in court. Platforms must also publish detailed transparency reports and give researchers and non-profits access to data to study the impacts of online platforms on society.
Beyond these specific obligations on platforms regarding content moderation, the protection of user rights, and improving transparency, the DSA obliges online platforms to take appropriate and proportionate measures to protect the privacy, security and safety of minors. Upcoming guidelines will hopefully provide more clarity on what this means in practice, but it is clear that there are a host of measures platforms can adopt before resorting to approaches as disproportionate as age verification.
The DSA also imposes obligations on the largest platforms and search engines – the Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) with more than 45 million monthly users in the EU – to analyze and mitigate so-called systemic risks posed by their services. This includes analyzing and mitigating risks to the protection of minors and the rights of the child, including freedom of expression and access to information. While we have some critiques of the DSA’s systemic risk governance approach, it is helpful for thinking through the actual risks for young people that may be associated with different categories of content, platforms and their functionalities.
However, it is crucial that such risk assessments are not treated as mere regulatory compliance exercises, but put fundamental rights – and the impact of platforms and their features on those rights – front and center, especially in relation to the rights of children. Platforms would be well-advised to use risk assessments responsibly in their regular product and policy reviews when mitigating risks stemming from content, design choices or features, like recommender systems, ways of engaging with content and other users, or online ads. Especially when it comes to the possible negative and positive effects of these features on children and teenagers, such assessments should be frequent and granular, expanding the evidence base available to both platforms and regulators. Additionally, platforms should allow external researchers to challenge and validate their assumptions and should provide extensive access to research data, as mandated by the DSA.
The regulatory framework to deal with potentially harmful content and protect minors in the EU is a new and complex one, and enforcement is still in its early days. We believe that its robust, rights-respecting enforcement should be prioritized before eyeing new rules and legal mandates.
Strengthening Users’ Choice
Many online platforms also deploy their own tools to help families navigate their services, including parental control settings and apps, specific offers tailored to the needs of children and teens, or features like reminders to take a break. While these tools are certainly far from perfect, and should not be seen as a sufficient measure to address all concerns, they do offer families an opportunity to set boundaries that work for them.
Academic and civil society research underlines that better and more granular user controls can also be an effective tool to minimize content and contact risks: allowing users to integrate third-party content moderation systems or recommendation algorithms would enable families to shape their children’s online experiences according to their needs.
The DSA takes a first helpful step in this direction by requiring online platforms to be transparent about the main parameters used to recommend content to users, and to let users easily choose between different recommender systems when multiple options are available. The DSA also obliges VLOPs that use recommender systems to offer at least one option that is not based on profiling, thereby giving users of large platforms the choice to protect themselves from the often privacy-invasive personalization of their feeds. However, forgoing all personalization will likely not be attractive to most users, and platforms should give users the choice to use third-party recommender systems that better mirror their privacy preferences.
Giving users more control over which accounts can interact with them, and in which ways, can also help protect children and teenagers against unwanted interactions. Strengthening users’ choice also includes prohibiting companies from implementing user interfaces that have the intent or substantial effect of impairing autonomy and choice. This so-called “deceptive design” can take many forms, from tricking people into giving consent to the collection of their personal data, to encouraging the use of certain features. The DSA takes steps to ban dark patterns, but European consumer protection law must make sure that this prohibition is strictly enforced and that no loopholes remain.
A Privacy-First Approach to Addressing Online Harms
While rights-respecting content moderation and tools to strengthen parents’ and children’s self-determination online are part of the answer, we have long advocated for a privacy-first approach to fighting online harms.
We follow this approach for two reasons: First, privacy risks are complex, and young people cannot be expected to predict risks that may only materialize in the future. Second, many of the ways in which children and teenagers can be harmed online are directly linked to the accumulation and exploitation of their personal data.
Online services collect enormous amounts of personal data and use it to personalize and target their services, from the ads they display to the content their recommender systems surface. While the systems that target ads and the systems that curate online content are distinct, both are based on the surveillance and profiling of users. In addition to letting users choose a recommender system, recommender systems based on behavioral data should be turned off by default for all users. To protect all users’ privacy and data protection rights, platforms should have to ask for users’ informed, specific, voluntary, opt-in consent before collecting their data to personalize recommender systems. Privacy settings should be easily accessible and allow users to enable additional protections.
Data collection in the context of online ads is even more opaque. Due to the large number of ad tech actors and data brokers involved, it is practically impossible for users to give informed consent to the processing of their personal data. Ad tech companies and data brokers use this data to profile users, drawing inferences about what they like, what kind of person they are (including demographics like age and gender), and what they might be interested in buying, seeing, or engaging with. This information is then used to target advertisements, including at children. Beyond undermining children’s privacy and autonomy, the online behavioral ad system teaches users from a young age that data collection, tracking, and profiling are evils that simply come with using the web, normalizing being tracked, profiled, and surveilled.
This is why we have long advocated for a ban on online behavioral advertising. Banning behavioral ads would remove a major incentive for companies to collect as much data as they do. The DSA already bans targeting minors with behavioral ads, but this protection should be extended to everyone. A ban on behavioral advertising is the most effective way to disincentivize the collection and processing of personal data and to end the surveillance of all users, including children, online.
Similarly, pay-for-privacy schemes should be banned, and we welcome the European Commission’s recent decision to fine Meta for breaching the Digital Markets Act by offering users a binary choice between paying for privacy and having their personal data used for ad targeting. Especially in the face of recent political pressure from the Trump administration not to enforce European tech laws, we applaud the European Commission for taking a clear stance and confirming that the protection of privacy online should never be a luxury or privilege. Particularly vulnerable users like children should not be confronted with a choice between paying extra (something many children will not be able to do) and being surveilled.