States' Commitments Under the Inter-American Human Rights System
Guidelines for Admissible Restrictions and/or Limitations to Human Rights
Public Policies and Human Rights in the Inter-American Human Rights System
Inter-American Human Rights Standards Regarding Relevant Rights
      Political Participation
      Access to Information
      Equality and Non-discrimination
      Due Process
      Dignity, Right to Private Life, and Related Rights
      Freedoms of Expression, Association, and Assembly
      The Right to a Dignified Life (ESC rights)

States' Commitments Under the Inter-American Human Rights System

  • Under Article 1(1) of the American Convention, States Parties are obligated to respect and guarantee human rights, which includes the duty to prevent, investigate, and punish human rights violations, as well as to provide effective remedies and proper reparation when such violations occur.
  • The obligation to respect the rights and freedoms enshrined in Article 1(1) of the ACHR, without prejudice to other international human rights instruments, is the essential baseline for any State's development and/or use of AI/ADM systems that can affect the recognition or exercise of such rights and freedoms, considered in their interdependence.
  • When adopting AI/ADM systems to support rights-related decision-making, States must have the proper processes and apparatus in place to prevent human rights violations or provide effective remedies and reparations in case they occur, including when perpetrated by private third parties.
  • States must also take steps to ensure that all those affected have the means to freely and fully exercise their rights regarding the decision-making process and the institutions in charge of it.
  • This entails having domestic legal and institutional frameworks equipped with instruments, practices, and safeguards capable of fulfilling the protection of conventional rights. Norms and practices underpinning the violation of conventional rights and freedoms must be reformed, repealed, or nullified.
  • States’ obligations under the American Convention and its protocols are not limited to the literal meaning of the provisions. They encompass the case law of the Inter-American Court, the ultimate interpreter of the Convention.
  • All state institutions and those acting on their behalf, including judges and entities engaged in the administration of justice, are subject to the commitments the State undertook before the Inter-American System. As such, local courts have the duty to exercise “conventionality control” over domestic laws that govern the use of AI/ADM systems in situations, or through procedures, that are illicit under the Convention. This also entails controlling norms that prevent the proper exercise of conventional rights and freedoms in the face of the implementation and operation of such systems.
  • When assessing the development, deployment, and/or implementation of AI/ADM systems vis-à-vis their obligations under conventional rights, state bodies and officials must interpret the Convention in good faith and adopt the interpretation most favorable to the protection of the people affected; that is, they must abide by the pro persona principle.
  • Good-faith interpretations of the Convention involve observing the ordinary meaning of its terms and protocols “in their context and in the light of [their] object and purpose.” They also involve applying objective criteria to interpret the Convention and following interpretations that align with the rules and values of international human rights law (in accordance with Article 31 of the Vienna Convention).1 Conclusions must also reflect the understanding of human rights treaties as living instruments, meaning they must resonate with current living conditions.2
  • State bodies and officials must keep in mind that the interpretation of conventional provisions cannot imply that rights and freedoms be suppressed or restricted in excess of what’s allowed under the Convention. Likewise, such interpretation cannot exclude other rights or guarantees that are inherent in the human personality or intrinsic to democratic societies.
  • This has many implications, one of which is that States should not justify the arbitrary implementation of AI/ADM systems (see Chapter 5) by invoking a commitment to comply with the progressive development of Economic, Social and Cultural rights (ESC rights) or the imperative to protect other specific conventional rights and freedoms.
  • Any human rights violation resulting from States’ adoption of these systems must be adequately investigated, and necessary measures must be taken to establish its causes and those responsible.
  • If a violation relates to the AI/ADM system component of a state policy or activity, States are responsible for its impacts, as what matters is whether the rights violation “has occurred with government support or acquiescence,” or whether the “State has allowed the act to take place without taking measures to prevent it or to punish those responsible.” This encompasses both acts and omissions of public authorities or private agents acting in a State capacity, regardless of their intent or motivation, or whether their individual identity is known.
  • Investigations of related human rights violations must provide inputs for a strict assessment of the system and the various relevant elements surrounding its use, leading either to effective modifications or to discontinuation. This does not replace or exclude States’ responsibility to carry out periodic evaluations and independent audits to ensure the system operates according to human rights standards.
  • The outcomes of the investigation must also set precedents for state bodies using or seeking to use similar systems. This is an essential measure to prevent the recurrence of human rights violations.
  • States must ensure effective appeals mechanisms and judicial protection for those whose rights are impaired by the use of AI/ADM systems within public institutions’ functions and policies (see also Section 4.4). This involves eliminating legal and administrative barriers limiting access to judicial and extrajudicial remedies, as well as addressing cultural, social, physical, or financial barriers for people in vulnerable groups (see also Section 4.3).
  • Any human rights violation resulting from States’ adoption of AI/ADM systems must give rise to an effective remedy and proper reparation of any damages caused, and reinstate, whenever feasible and appropriate, the preexisting situation and rights. Properly redressing violations relating to the allocation of state welfare subsidies and benefits entails putting the person in the position they would enjoy if they had been given a just outcome.

Guidelines for Admissible Restrictions and/or Limitations to Human Rights

  • While States’ use of AI/ADM systems can theoretically serve to protect and promote conventional rights and freedoms, relying on them for rights-based decision-making carries at least the potential for interfering with one or more of such rights and freedoms. As a consequence, States’ development or adoption of these systems must observe the admissible grounds for restricting or limiting human rights.
  • This means States must first comply with legality and legitimacy standards. Restrictions and limitations must be grounded in the law. Any underpinning legal provisions must be fit for the purpose, i.e. have been democratically approved and meet the guarantees of the Inter-American System. Restrictions/limitations of rights must also be authorized by the Convention and/or its protocols, and fulfill their corresponding requirements.
  • Any laws underpinning rights restrictions or limitations must have been enacted (or be enacted) for the general welfare, which is “the conditions of social life that allow members of society to reach the highest level of personal development and the optimum achievement of democratic values.” Rights restrictions based on general welfare must be strictly limited to the “just demands” of a “democratic society” considering both individual and collective interests. This means that rights restrictions/limitations must pursue a legitimate aim and cannot stray from the legitimate purpose that justified the restriction. Failing that, there will be a misuse of power.
  • For a pursued goal to legitimately justify the restriction or limitation of conventional rights, it must be “necessary in a democratic society.” A democratic society entails the interrelation between the concepts of “Rule of Law,” “human rights,” “guarantees,” and “democracy.”3 As such, a pursued goal is a legitimate aim to the extent that it corresponds to protecting “the essential rights and the creation of circumstances that allow the human person to progress spiritually and materially.”4 In this sense, generic justifications based on national security, public order, or even the protection of a conflicting human right are not consistent with this reasoning. A consistent analysis demands looking at the particular context and at the elements and implications involved in prioritizing a certain goal over a specific right.
  • Moreover, States must meet suitability, necessity, and proportionality standards from the moment they first assess whether to adopt AI/ADM systems for rights-based determinations and throughout the system’s implementation and use. This requires rigorous scrutiny to determine whether adopting the system is an adequate means to achieve the legitimate goal pursued and identify the rights restrictions involved (suitability). To limit rights, it is not sufficient that the measure be useful, reasonable, or convenient. Restricting or limiting a certain right must be the least harmful alternative available to achieve the legitimate goal (necessity), and there must be a proportional balance between the level of restriction and the level of benefit to the interest pursued (proportionality).
  • For ESC rights, the above analysis must integrate the principles of progressive development and non-regression. States’ adoption of AI/ADM systems in the context of social protection, or welfare, must be conducive to the proper enjoyment and attainment of ESC rights, and should not implicate a regression in social protection guarantees and policies. Any restrictions to ESC rights must also be justified and balanced in light of these two principles (see also Section 4.7).

Public Policies and Human Rights in the Inter-American Human Rights System

  • The commitments that States undertook under the Inter-American Human Rights System call on them to adopt a human rights approach to public policymaking. As such, human rights principles and standards must guide States in scoping the problems they seek to address and determining whether the development, procurement, and/or use of AI/ADM systems as part of a certain policy is appropriate. If so, human rights principles and standards must also inform the design, implementation, and monitoring of the system's use within a particular public policy.
  • Any public policymaking must begin with the principle that people and social groups are holders of rights and, as such, have the capacity and right to call for these rights and participate.5 This involves the right to meaningful information about how an AI/ADM tool is developed and works.
  • Any public policymaking should also abide by principles of social participation, due process, access to justice, and access to information to promote transparency, accountability, equality, and non-discrimination. This involves ensuring priority protection for groups subject to historical discrimination and adopting a gender and diversity protective perspective (see Chapter 5).
  • In this and other sections, we present a non-exhaustive list of the repercussions of taking a human rights approach to using AI/ADM systems in public policymaking. The use of AI/ADM systems as public policy tools must take into account the cross-cutting principles mentioned above. This means that States must ensure appropriate conditions and mechanisms for social participation and public oversight from the moment they assess and identify a problem they aim to address. This assurance also applies when States are examining whether the adoption of an AI/ADM tool is suitable for the problem and during the design, implementation, and evaluation phases of the policy being created or changed (see Section 5.4).
    • A thorough and democratic assessment before deciding to adopt the system is particularly critical for AI technologies. The analysis of the system’s “suitability” to address a certain issue must consider that machine learning fundamentally attempts to reproduce patterns observed in the available data, which can frequently be an undesirable basis for guiding decisions. In addition, because AI systems involve probability and randomness in making determinations, they may not be sufficiently understandable or equitable for rights-based decisions.
    • It is crucial for States to adopt measures ensuring broad participation without any type of discrimination, which entails implementing “special actions that guarantee substantive participation and effective incidence in all political decision-making spaces by the most vulnerable and excluded persons and groups.”6
    • In this sense, decisions about and monitoring of the adoption and use of AI/ADM systems in public policies must give special attention to historically discriminated-against groups, prioritizing their protection and fostering their effective participation (see Section 4.3). This enables policies to “incorporate the experiences, perspectives, and viewpoints of the persons and groups who are the holders of the rights that are being targeted for safeguarding.”7
    • Consequently, incorporating a gender and diversity perspective for algorithm-based systems employed as part of a state policy or initiative is important to ensure that the use of the system does not reproduce discrimination and negatively impact diverse bodies and identities. Accordingly, the validation, testing, and review of both AI and traditional ADM systems must consider not just average members of society but also marginalized groups and gender expressions that diverge from cisnormative or heteronormative standards.
    • Effective participation can take many and complementary forms, but it should include the ability for independent and expert organizations to audit the system.
    • To ensure effective participation, States must make clear to the public how contributions from consultations and deliberative mechanisms inform the design, implementation, and evaluation of AI/ADM-based public policies (see also Section 4.1).
    • Mechanisms of participatory democracy concerning indigenous and Afro-descendant communities offer valuable models and lessons for States committed to ensuring broad and effective social participation in policymaking. Free and informed consultation and consent of communities before implementing projects or other initiatives that can affect their rights and territories are important guidelines to consider. The same goes for effective participation in prior impact assessment studies.
    • As these participatory standards point out, “consultation is not a single act, but a process of dialogue”8 where clear, accessible, and complete information is provided with sufficient time to allow proper community engagement.9 Good faith consultations must not involve any type of coercion and must go beyond merely pro forma procedures.10 In this sense, failure to give proper consideration to the concerns and feedback gathered in consultations is contrary to the principle of good faith.11 In line with due process guarantees, State decisions resulting from the participatory processes are subject to review by higher administrative and judicial authorities.12
    • Relatedly, effective social participation throughout the policy cycle presupposes access to meaningful information about the AI/ADM system and its potential (or current) use, as well as the potential (or actual) results of such use (see Section 4.2).
    • This means States must provide access to relevant information prior to designing a certain AI/ADM-based public policy, making available information and indicators that underpin States’ evaluation during this assessment stage. As a result, the fact that the state body is still assessing the contours of a certain policy does not justify restricting access to information about that policy. On the contrary, seeking societal feedback and publicly providing meaningful information prior to design is essential to ensuring that the development, procurement, and/or use of AI/ADM systems as part of a certain policy is appropriate and respectful of human rights. Similarly, a vendor’s supposed commercial interest in the secrecy of their technology cannot override the need for public scrutiny; technologies containing significant secrets are not suitable for rights-affecting decisions (see Section 5.3).
    • Diverse and effective social participation aligns with States’ duty to take reasonable steps to prevent human rights violations (see Section 1.1). In the context of policymaking, this is reflected by including a preventive approach to how States structure the problem they aim to tackle and decide whether, or to what extent, the use of AI/ADM systems is appropriate to fulfill the envisioned goal.
    • Moreover, budget allocation and introduction of AI/ADM systems in this context should aim to reduce inequalities and promote rights (see Section 4.7).
    • Measures strengthening transparency, accountability, and public debate about States’ budget allocations for developing, purchasing, and implementing AI/ADM systems are also instrumental to the public’s ability to monitor, identify, and prevent corruption and/or misuse of public funds. They can shine a light on state bodies favoring problematic vendors or deals and help eradicate the culture of secrecy that leads to rights-invasive outcomes.
    • State bodies must continuously monitor the implementation of AI/ADM technology as public policy instruments, publishing indicators and analysis that can measure public policies’ results and impacts. Monitoring and evaluation (M&E) processes should build on such indicators and analysis to assess whether a certain policy (and the automated system within it) has been an effective tool to advance human rights. The assessment of results and impacts must flag any issues and offer inputs for adjustments or discontinuation.
    • In addition to the body in charge of the policy, independent state institutions should oversee the design, implementation, and evaluation of policies using AI/ADM systems. AI/ADM-based policies should also benefit from the combined expertise of government entities, including data protection authorities and, as appropriate, bodies that focus on technology, education, health, etc. Moreover, M&E processes must involve civil society actors and effective social participation to integrate public oversight mechanisms into the policy's evaluation dynamic. M&E processes must also incorporate data about administrative and judicial complaints (see also Section 4.4).
    • All these measures also apply to public security policies that involve the use of AI/ADM systems. As with other areas of government policymaking, transparency, civic participation, independent oversight, and proper M&E processes are important to ensure that state bodies in charge are accountable and that policies are respectful of human rights. Militarization and privatization of public security generally undermine these goals and should not prevail.

Inter-American Human Rights Standards Regarding Relevant Rights

Political Participation

  • The public has a right to political participation, which includes speaking out about and influencing whether or how state institutions use AI/ADM technology to support rights-based determinations. The imperative of ensuring political participation as a vital element of democratic societies remains valid and enforceable, regardless of whether States’ actions involve the use of AI/ADM technologies.
  • States must organize governmental apparatus and practices to ensure the free and full exercise of the right to participate in government (see Section 1.1). This entails creating and/or adapting existing structures and processes to enable effective and diverse participation in decision-making and evaluation of state use of AI/ADM systems (see Chapter 3).
  • It is important that participation models and processes allow affected communities and civil society to “exert a tangible impact”13 on States’ decisions regarding the use of AI/ADM technologies for rights-based determinations, and provide proper means for assessing and measuring such an impact (see Section 3.1).
  • Effective participation also implies taking steps to remove barriers for civic engagement in public affairs, especially regarding groups in situations of historical discrimination.

Access to Information

  • States must comply with the principles of maximum disclosure and good faith in their use of AI/ADM technologies. The fact that these are novel or complex technologies does not waive States’ duties to uphold the public’s right to information.
  • All levels of government in all branches must comply with obligations derived from the right of access to information, including secretariats, ministries, and security agencies, which encompass military forces.
  • Just as all public institutions in all branches of government must comply with the public’s right to access information, so too must autonomous bodies and other entities which employ AI/ADM tools to carry out public services. The same goes for entities that receive public funds to deploy or operate AI/ADM systems within a state policy or activity.
  • The right of access to information includes a set of obligations starting with active transparency and adequate implementation.14 Regarding active transparency, States should as a basic first step disclose, in a systematic and user-friendly way, which AI/ADM systems are used and for which purposes. Those disclosures should also include the related legal framework, the categories of data involved, the institutions in charge, the system’s developers and/or vendors, the public budget involved, the reasons and documentation underpinning the adoption of the system, all impact assessments carried out, performance metrics, information on the decision-making flow including human and AI agents, the rights of people affected, and the means available for review and redress (see Section 5.3).
  • Regarding information requests, denying access to information about the assessment, implementation and use of AI/ADM technologies within state activities and policies is allowed only exceptionally and must meet the Inter-American System's criteria to comply with human rights standards.
  • Access to information is deeply interlinked with democratic participation in, and oversight of, state use of these technologies. This connection must underpin the application of the maximum disclosure and good faith principles, as well as the analysis of admissible restrictions. Bound entities must assess in good faith how keeping information secret is detrimental to the rights of individuals and groups.
  • Bound institutions must clearly and properly justify denying requests, weighing the risk or harm the disclosure can cause to a legitimate aim under Art. 13(2) of the Convention, through an analysis of necessity and proportionality. If admissible, the restriction should last only as long as such specific and objective risk exists.
  • In view of the implications of secrecy in this context, the protection of trade secrets should not constitute a sufficient basis for legitimately denying access to information, including access to the system's source code, development records, and the ability to conduct independent audits (see Section 5.3).
  • Restriction justifications based on the general welfare (e.g. national security or public order) must be strictly limited to the “just demands” of a “democratic society” (see Chapter 2). This means they cannot amount to an all-encompassing limitation, but must be strictly tailored to the types of information and the types of disclosure that effectively represent a risk under democratic principles, and only for as long as such a specific and objective risk exists (which entails conducting periodic reviews of classified information).15
  • State authorities are forbidden from using the secrecy of information in their control “with the disguised interest of favoring or harming a particular political activity or ideology, or in any other way that implies any type of discrimination.”16 In case of human rights violations, authorities cannot keep secret information required by judicial or administrative bodies in charge of investigations or pending proceedings. Moreover, the decision to classify and deny access to information cannot be made by a state body whose members are accused of committing a punishable unlawful act.17
  • In this sense, it is crucial to emphasize that government entities engaged with security activities, including national and public security, are subject to the rule of law and accountability just like other public bodies, and must comply with access to information obligations. This means that the principles of maximum disclosure and good faith, as well as the three-part test,18 with the principles of legality, legitimate aim, and suitability, necessity and proportionality, are also applicable to security authorities (see Chapter 2).
  • As a result, blanket restrictions based on a general qualification are not allowed.19 Any limitations must clearly justify why a specific content falls within strict restriction categories set by law, in accordance with Inter-American standards. Access to information law should, to the extent of any inconsistency, prevail over other legislation, while secrecy laws should be subject to public debate and conventionality control.20 In fact, creating, organizing, and disseminating reliable information about the design, implementation, and evaluation of public security policies, including when they rely on AI/ADM systems, is foundational to a democratic model of citizen security based on the protection of human rights of the entire population.21
  • When it comes to AI/ADM systems deployed for surveillance purposes, including in the context of national security, people should be informed, at a minimum, about the legal framework regulating these practices; the bodies authorized to use such systems; oversight institutions; procedures for authorizing the system’s use, selecting targets, processing data, and establishing the duration of surveillance; protocols for sharing, storing, and destroying intercepted material; and general statistics regarding these activities.22
  • In the same vein, restricting access to information based on fraud prevention and its potential relation to preserving the public order must be thoroughly assessed against conventional guarantees and the tenet that “people and social groups are holders of rights with the capacity and right to call for their rights and participate in states’ activities” (see Chapter 3). Claiming that public knowledge about these technologies amounts to a risk of fraud implies suppressing the right to political participation, among others, and fails to meet Inter-American standards (see Chapter 1).
  • It is equally problematic to deny requests on the basis that state institutions are still assessing the implementation of AI/ADM systems. Civic participation is crucial for ensuring that States properly conduct such assessments and comply with their duty to take reasonable steps to prevent human rights violations (see Chapter 3). Nor does such a denial fit the legitimate goals set out in Art. 13(2).
  • The obligation to produce or gather information the State needs to fulfill its duties requires state bodies and other bound institutions to have proper knowledge about the AI/ADM systems they adopt or aim to adopt for rights-based determinations. Under this obligation, state bodies and other institutions cannot claim that the State does not have the information requested when such information is relevant for conducting a state policy and/or activity.
  • Relatedly, producing, organizing, and disclosing information about public budget allocations to pay for AI/ADM systems is critical for preventing corruption and enabling public oversight (see Section 3.3). This also applies to security authorities and activities, including national security and public security.23
  • Reliable indicators on the implementation of policies that include AI/ADM systems are equally crucial to properly monitor and evaluate such policies. States must devise, produce, and periodically release these indicators, which must constitute a relevant basis for assessing whether the policy and its AI/ADM system component are advancing or hindering the protection of human rights (see also Section 3.3). Indicators should pay special attention to policies’ impacts on historically marginalized groups, measuring results regarding Afro-descendant populations, indigenous groups, women and sexual minorities, children and adolescents, and migrants, among others. This is especially relevant in the context of security and social welfare policies.
  • States must have clear guidance and structures in place to adequately and effectively satisfy the right to information regarding their adoption of AI/ADM systems. This includes properly gathering and managing related information, training agents responsible for responding to requests, and setting processes to fulfill active transparency duties.
  • States must also ensure a simple, prompt, and effective appeal mechanism to the denial of information requests about AI/ADM systems (see Section 4.4). Those who had their information petitions refused must be able to count on an independent body, distinct from the one that denied the request, to present an appeal.24 A specialized administrative body is ideal for this, in addition to people’s right to go to court.25
  • This administrative body should have the resources and power to carry out its duties in a timely and independent manner, including overseeing enforcement of right-to-information obligations and resolving disputes over the provision of information through binding decisions.26 The Model Inter-American Law on Access to Public Information is an essential reference for States regarding proper compliance with the right of access to information.

Equality and Non-discrimination

  • Equality and non-discrimination are guiding principles for all state activities, norms, and practices, and as such present specific and particular implications for government use of AI/ADM systems for adjudication, recognition, and exercise of rights.
  • The American Convention prohibits all discriminatory treatment. Consequently, prior to implementation and throughout the system’s lifecycle, States must take all the necessary steps to ensure that the AI/ADM systems they employ for rights-related purposes do not result in discriminatory treatment—meaning any exclusion, restriction, or privilege that is not objective, necessary, and proportional to the legitimate goal they aim to achieve and that adversely affects human rights.
  • Differential treatment that negatively impacts rights and freedoms based on speculation, presumption, or stereotypes of persons and groups is incompatible with the Convention (see also Section 4.4). This poses special challenges when States adopt AI/ADM technologies for rights-based decision-making, as algorithmic determinations rely on pattern recognition within a dataset to infer conclusions about a person or situation that may fail to reflect or match real life circumstances. Relatedly, machine learning includes a probabilistic element, such that random chance plays a significant role in the outputs of many AI technologies.
  • For AI/ADM adoption to be compliant with international human rights law, States must organize their apparatus, processes, and practices, and adapt national legislation if needed, to prevent, mitigate, remedy, and redress discriminatory decision-making based on those systems.27
  • It is crucial for States to adopt practices to meet this obligation, including: conducting human rights impact assessments before implementation and throughout the system lifecycle; ensuring that responsible state institutions and oversight bodies have access to the system’s inner workings and the expertise to assess them from a human-rights perspective; carrying out internal and external independent audits; and ensuring human oversight and meaningful human review of AI/ADM-based decisions by competent officials following strict and transparent criteria (see Chapter 5). Providing meaningful human review entails properly addressing so-called “automation bias”: the propensity of humans to favor suggestions from automated decision-making systems and to discount contradictory information produced without automation, even when that information is correct. In addition, periodic evaluations, impact assessments, and audits should address not just the initial design but the actual outcomes of AI/ADM systems once implemented. Human rights impact assessments and audits should adopt a gender and intersectional perspective, in line with the IACHR’s three-step guide to policy design (detailed below).
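Outcome-focused audits of the kind described above can begin with simple disparity metrics computed over actual decisions. The sketch below is illustrative only: the group labels, the data, and the four-fifths threshold (an audit heuristic borrowed from employment-discrimination practice, not an Inter-American standard) are all assumptions.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    best-off group's rate (the so-called 'four-fifths rule')."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Hypothetical decision log: group A approved 80%, group B only 50%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20 +
             [("B", True)] * 50 + [("B", False)] * 50)
rates = approval_rates(decisions)       # {"A": 0.8, "B": 0.5}
flags = disparate_impact_flags(rates)   # group B is flagged for review
```

A flag of this kind is only a trigger for deeper human-rights analysis, not a conclusion; disparate outcomes may still require justification under the necessity and proportionality tests described in Chapter 2.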
  • In each of these measures, the engaged institutions and officials must observe whether the use of the system may cause, or is leading to, indirect discrimination with a disproportionate impact on certain groups. Institutions and officials must be aware that even AI/ADM technologies purportedly built to act neutrally towards different groups can result in discriminatory treatment.
  • States’ adoption of these technologies should not take an approach that is neutral or blind to historical discrimination and inequalities in society and within the particular contexts where the system would be or is implemented. This is important not only to prevent discrimination but also to fulfill States’ duty to pursue substantive (material) equality. In particular, training data that reflects historical discrimination and inequalities is unsuitable for training an AI system, because the resulting system will reproduce those historical patterns.
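How skewed training data propagates into outputs can be shown with a deliberately minimal sketch. The groups, figures, and the frequency "model" below are hypothetical; the point is that any model that learns patterns from its training data will learn these ones too.

```python
from collections import Counter

def train_rate_model(history):
    """'Learn' the historical approval rate for each (group, qualified) pair.
    This stands in for any model that picks up patterns in its training data."""
    counts, approvals = Counter(), Counter()
    for group, qualified, approved in history:
        counts[(group, qualified)] += 1
        if approved:
            approvals[(group, qualified)] += 1
    return {key: approvals[key] / counts[key] for key in counts}

# Hypothetical history: equally qualified applicants, but group "B" was
# historically denied far more often than group "A".
history = ([("A", True, True)] * 90 + [("A", True, False)] * 10 +
           [("B", True, True)] * 30 + [("B", True, False)] * 70)
model = train_rate_model(history)

# Two equally qualified applicants now receive very different predicted
# chances (0.9 vs. 0.3), purely because of the historical pattern.
```

The same mechanism operates, less visibly, in complex statistical models, which is why a neutrality-blind approach to training data cannot satisfy the duty to pursue substantive equality.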
  • Companies selling AI/ADM systems to States for decision-making applications that affect the recognition, enjoyment, and exercise of human rights must put in place the necessary processes and measures to assess, prevent, and mitigate discrimination and other detrimental impacts on human rights their systems may cause. They must provide States with sufficient guarantees that their systems are compatible with human rights and comply with proper standards of transparency, fairness, privacy, security, and reliability, among other attributes, while also making available all information pertinent to State and independent analysis of the system.
  • States must refrain from implementing AI/ADM technologies that do not provide these guarantees, including systems with a track record of human rights violations. States must also refrain from adopting AI/ADM-based decision-making for purposes or in contexts where it would be incompatible with human rights, such as state practices that replicate systemic discrimination and/or entail racial profiling. In this sense, States must refrain from implementing AI/ADM technologies that have a disproportionate impact on specific groups and/or inherently reproduce discriminatory views or practices reflected in biased datasets used to train the AI model or feed the system's operation (see Section 4.4). Facial recognition and predictive policing are examples of such technologies.
  • The three-step guide the IACHR recommends for policy design, implementation, and evaluation provides a relevant roadmap for States to assess public policies that make use of AI/ADM systems. It encompasses the consideration of: (i) the differential impact a certain policy has or might have on groups that have been historically discriminated against; (ii) the views and concerns of these groups at different stages of the policy’s cycle; and (iii) the actual benefits this policy brings or may bring for reducing inequities impacting those groups.
  • Relatedly, States must establish the means and processes for effective civic participation, particularly of historically discriminated against groups, at different stages of AI/ADM systems adoption by public institutions for rights-based determinations (see Chapter 3 and Section 4.1).
  • It is up to state authorities, rather than the persons affected, to justify and prove that a decision based on automated systems or AI models did not have a discriminatory purpose or effect. To properly comply with this obligation, States must ensure that AI/ADM-based decisions have explainable and justifiable grounds.
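Producing explainable and justifiable grounds is straightforward when the scoring logic is itself transparent. The sketch below uses a linear score whose per-feature contributions can be stated to the affected person and contested; the weights, features, and threshold are hypothetical, and real systems would need far richer justifications.

```python
def explain_decision(weights, features, threshold):
    """Score with a transparent linear model and return per-feature
    contributions, so the grounds of the decision can be stated, reviewed,
    and contested by the person affected."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return {"approved": score >= threshold,
            "score": score,
            "contributions": contributions}

# Hypothetical weights and applicant data, for illustration only.
weights = {"income_ratio": 2.0, "years_documented": 0.5, "arrears": -3.0}
applicant = {"income_ratio": 1.2, "years_documented": 4, "arrears": 1}
result = explain_decision(weights, applicant, threshold=2.0)
# score ≈ 2.4 + 2.0 - 3.0 ≈ 1.4, below the threshold: the application is
# denied, and the contributions show that 'arrears' drove the denial,
# giving the applicant a concrete ground to rebut.
```

Opaque models that cannot produce a decomposition like this shift the burden of explanation onto the affected person, which is precisely what this standard prohibits.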

Due Process

  • Due process guarantees apply at all stages of any proceeding conducted by any public authority to determine people’s rights and obligations, including administrative and judicial proceedings, regardless of whether or not such determinations are based on AI/ADM systems.
  • This means authorities using AI/ADM systems as part of decision-making about rights and obligations remain responsible for preventing arbitrary conclusions and must ensure that the procedures involving these systems fulfill Article 8 guarantees.
  • State proceedings that deal with rights-related issues must ensure that those affected can fully exercise their right to be heard. This means setting up processes allowing affected people to intervene in proceedings, submit their claims, and present factual and probative elements (e.g. to indicate inaccurate or outdated data). Those elements and claims must be properly analyzed before a final decision is reached by the body holding the proceeding. This is equally connected to the right of defense.
  • Accordingly, the person affected must be made aware of the proceeding analyzing their rights and obligations before a decision is reached. Wherever possible, affected individuals should receive prior and detailed notice. In any case, individuals must receive, in time for them to intervene as detailed above, clear information explaining why they are subject to the proceeding, the relevant elements being considered, and a minimum reference about how the elements being assessed factor into the consequences they may face, including legal or disciplinary rules pertinent to a final decision. Affected individuals must also be informed of the means available to them to present their claims, which must be easily accessible.
  • States must also take necessary measures to ensure that decision-making proceedings include the information and elements needed to produce the determination they are intended for.28 The metrics, criteria, and accuracy of the data the AI/ADM system considers, among other elements of the decision-making proceeding, must be appropriate for the determination the proceeding is meant to produce. It is also crucial that States ensure competent human oversight of public institutions’ AI/ADM-based determinations affecting human rights.
  • The guarantees of independence and impartiality mean that people should, as a general rule, know what to expect from decision-making processes affecting their rights. Government use of AI/ADM systems for rights-based determinations should follow specific, previously approved, and publicly available protocols grounded in law and capable of ensuring that proceedings are predictable, objective, and coherent.
  • The guarantees also mean that officials involved in decision-making proceedings have the required competencies and are properly trained to interact with the AI/ADM system at issue. This is crucial to ensure that resolutions reached through the decision-making process comply with due process guarantees.
  • Still regarding objectivity and impartiality, state use of ADM and AI systems—and the decisions those systems make—that affect people’s rights must not be the result of or reproduce prejudices and stereotypes. According to the Court, “stereotypes are pre-conceptions of the attributes, conducts, roles or characteristics of individuals who belong to a specific group.” Public bodies must take all the necessary measures to prevent stereotyping from influencing the decision-making proceeding (see also Section 4.3). Failure to do so also implies a violation of the presumption of innocence.
  • When AI/ADM technologies are used in the justice system to support any rights-based determinations, defendants’ right to confront the software comes into play, including the ability of defense experts to evaluate and audit the system.29 This right should act as a final line of defense for evaluating a technology that has already been subject to systematic safeguards, meaning the primary burden of evaluating the suitability of the technology should not fall on individual criminal defense teams.
  • State use of AI/ADM systems to support decision-making proceedings must not undermine the presumption of innocence or shift the burden of proof to individuals subject to algorithmic-based decisions. This is especially important in criminal and disciplinary proceedings. Disciplinary, punitive, or rights-restrictive consequences of AI/ADM-based decisions must not apply if authorities are unable to ascertain whether the conclusions these systems produce are reliable and/or whether the facts, data, and criteria underpinning the decision are accurate or pertinent.
  • Decisions based on AI/ADM systems must have a clear, reasoned, and coherent justification. This means that systems employed for rights-based determinations must meet interpretability and explainability goals (see Chapter 5). The principle that States must justify decisions affecting people’s rights is a cornerstone of democratic societies and a necessary condition for the full exercise of the right to appeal.
  • States must ensure that effective judicial and administrative remedies are available and easily and equitably accessible. People affected by an administrative AI-based decision must have the appropriate means to challenge it at the administrative level, in addition to the right to take it to court.
  • Effective appeals are not a mere formality but a consistent mechanism that ensures a comprehensive examination of the decision challenged. This entails a human review with the proper authority and expertise to assess the decision through strict and transparent criteria. Expertise and adequate protocols are crucial to duly address the potential “automation bias” of human reviews.30 The conclusions of the review must also be duly justified.
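One protocol design that addresses automation bias in human review is to require the reviewer to record an independent assessment before the system's suggestion is revealed, and to document agreement or divergence in the final justification. The class below is a hypothetical sketch of such a workflow, not a prescribed mechanism.

```python
class ReviewCase:
    """Review workflow that records the reviewer's own assessment before
    the automated suggestion is revealed, one way to surface and document
    automation bias in human review of AI/ADM-based decisions."""

    def __init__(self, case_id, system_suggestion):
        self.case_id = case_id
        self._suggestion = system_suggestion
        self.independent_assessment = None
        self.final_decision = None

    def record_assessment(self, decision, reasons):
        """The reviewer's own conclusion, reached without seeing the system's."""
        self.independent_assessment = {"decision": decision, "reasons": reasons}

    def reveal_suggestion(self):
        """The system's suggestion is only shown after independent review."""
        if self.independent_assessment is None:
            raise RuntimeError("record an independent assessment first")
        return self._suggestion

    def finalize(self, decision, justification):
        """Record the final, justified decision and whether it tracked the
        system, the reviewer's own view, or both."""
        if self.independent_assessment is None:
            raise RuntimeError("review must start from an independent assessment")
        self.final_decision = {
            "decision": decision,
            "justification": justification,
            "agrees_with_system": decision == self._suggestion,
            "agrees_with_own": decision == self.independent_assessment["decision"],
        }
        return self.final_decision

# Hypothetical usage: the reviewer disagrees with the system and says why.
case = ReviewCase("2024-001", system_suggestion="deny")
case.record_assessment("grant", reasons="submitted documents show eligibility")
suggestion = case.reveal_suggestion()  # only now is "deny" shown
outcome = case.finalize("grant", "documents outweigh the model's risk score")
```

Aggregating the `agrees_with_system` field across cases also yields exactly the kind of statistic that audits and oversight bodies can use to detect rubber-stamping.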
  • Complaints and challenges targeting AI/ADM-based decisions must inform States’ design, implementation, and evaluation of public policies. They hold crucial information for assessing the quality of policies in place and establishing parameters for new policies (see Chapter 3).
  • This requires State bodies to document them and generate publicly available statistical reports. Complaints, challenges, and related documentation should be included in auditing processes and available to independent oversight bodies. They should also be available to independent researchers and the general public in a way that follows adequate privacy and data protection guarantees.
  • AI/ADM decision-making in the scope of this paper should abide by the principle of transparency. Relevant information about the decision proceeding (such as stages involved in decision-making, criteria considered, and information on how data is processed) should be publicly available with appropriate protections against the sharing of private, personal data with third parties. Special attention must be given to historically vulnerable or marginalized groups to make sure they do not suffer discrimination and stigmatization resulting from inadequate protections in this context.31

Dignity, Right to Private Life, and Related Rights

  • Dignity, private life, autonomy, and self-determination, including informational self-determination, permeate state use of AI/ADM technologies for rights-based determinations.
  • The use of these systems in the context of this paper generally entails the processing of data relating to an identified or identifiable individual.32 It also involves a decision-making process based on a sort of “identity” perceived or established by the system through correlations, profiling, and inferences that produces an artificial “exteriorization of the persona” with significant effects depending on the decision in question. This may seriously affect people’s life plans, the free development of their personality, and their social relationships.
  • As such, these are all rights and guarantees that States must consider when assessing the adoption, implementation, and evaluation of AI/ADM systems used to support decisions that affect the adjudication, recognition, and exercise of human rights.
  • State AI/ADM-based decisions may also seriously impact family life and the free and full enjoyment and exercise of sexual and reproductive rights.
  • In this context, authorities must translate the State’s duty to “increase its commitment to adapt the traditional forms of protecting the right to privacy to current times”33 into structures, practices, and effective mitigation and protective measures.
  • Any interference with the right to private life and related rights stemming from state AI/ADM-based decision-making must fulfill the principles of legality, legitimacy, suitability, necessity, and proportionality (see Chapter 2). There are several measures indicated in previous sections that state authorities must consider and implement to meet these standards (see particularly Chapters 2, 3, and Sections 4.3 and 4.4).
  • Government surveillance and collection of personal data constitute undeniable interference with the exercise of human rights, requiring precise regulation and effective controls to prevent abuse by state authorities.34 Government use of AI/ADM systems in the context of this report must be backed by robust legal and institutional safeguards for privacy and data protection. These safeguards must pay due care to sensitive personal data, which deserve special protection given their heightened discriminatory potential, especially when they provide the basis for profiling.35
  • Government surveillance of communications, including tracking enabled by the processing of communications-related data (either content or metadata), generally requires a prior and reasoned judicial order. The use of AI/ADM systems by state bodies in this context must observe the proper application of this standard.
  • Government implementation of AI/ADM systems to enable indiscriminate surveillance of physical or online spaces is disproportionate and arbitrarily interferes with the right to private life, among other rights.36
  • The principle of personal autonomy prohibits any State action that attempts to “instrumentalize” individuals. This should reinforce the principle that people and social groups are rights holders and, as such, have the capacity and right to call for their rights and participate in government decision-making. This principle reflects the close ties that autonomy and self-determination have with due process guarantees (within the decision-making proceedings affecting certain individuals and groups) and the right to political participation (as for influencing States’ definitions regarding the adoption of AI/ADM systems for rights-based determinations).
  • In addition, the protection of autonomy and private life requires authorities to follow an adequate framework regarding the collection and use of personal data to prevent their processing in a manner that is inappropriate or incompatible with these rights. These standards are particularly relevant in the context of government use of AI/ADM technologies for rights-based determinations.
    • Personal data processing must be grounded in free and informed consent or, failing that, grounded in other bases strictly and expressly authorized by law. Personal data processing is only authorized under the framework of the American Convention for pursuing legitimate purposes and by legal mechanisms.37
    • Free and informed consent requires providing data subjects with sufficient information about the details of the data to be collected, the manner of its collection, the purposes for which it will be used, and the possibility, if any, of its disclosure. Further, consent by the individual should express their willingness in such a way that there is no doubt about their intention. In short, people whose data is being processed should have the ability to exercise a real choice without the risk of deception, intimidation, coercion, or significant negative consequences for refusing to consent.38
    • In cases where state institutions can legitimately process personal data, they are limited to obtaining the truthful, relevant, and necessary data for the strict fulfillment of their functions, in accordance with the applicable legal framework.39
    • Institutions in charge of data processing must ensure the protection and security of data, preventing their unauthorized access, loss, destruction, use, modification, or disclosure. Government decision-making based on data processing must also ensure that data used are kept up to date, complete, and accurate.40
    • States have the duty to ensure the right to informational self-determination and provide mechanisms for data subjects to access and control their data held by public institutions. Any restrictions to this right must comply with the standards for admissible restrictions and limitations to the right to access information (Article 13 of the Convention). Thus, any denials must be strictly justified and allow for an effective opportunity to appeal, besides the right to challenge in court the reasons invoked for the denial (see Section 4.2). The judicial authority, if it deems necessary, must be able to examine the information to which access has been denied.41
    • The scope of informational self-determination encompasses a set of powers for data subjects42 in addition to free and informed consent:
      • the right to know what data about them are held by public and private institutions (whatever their format), where they come from, how they were obtained, what they are used for, how long they are kept, whether they are shared with other bodies or persons, the reason for sharing and, in general, the conditions of their processing. This right of access should encompass all the data relating to the person considered in the decision-making process. This means not only data actively and knowingly provided by individuals, but also data the institution observed and gathered without their actual knowledge, as well as derived and inferred data.43
      • the right to demand the rectification, modification, or updating of the data, in case they are inaccurate, incomplete, or outdated. This should include biased and discriminatory data considered in the decision-making process. For these purposes, individuals should receive information about the underlying logic of derived and inferred data influencing the decision so that they are able to contest, under legitimate grounds, any inaccuracies or issues.
      • the right to object to data processing in cases in which, given the particular situation of the person, damage can be caused to them and in cases provided for in proper regulations; and
      • when possible and according to legal provisions, the right to receive the data in a structured, commonly used machine-readable format, and to request its transmission without being prevented from doing so by the authority that holds it.
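The portability power above can be sketched as a simple export routine. The provided/observed/derived/inferred categories follow the distinction used in this section; the field names and values are hypothetical, and JSON stands in for any structured, commonly used, machine-readable format.

```python
import json

def export_subject_data(record):
    """Serialize a data subject's record into a structured, machine-readable
    format (JSON here), keeping the provided/observed/derived/inferred
    categories separate so each can be reviewed and contested."""
    categories = ("provided", "observed", "derived", "inferred")
    return json.dumps({cat: record.get(cat, {}) for cat in categories},
                      ensure_ascii=False, sort_keys=True)

# Hypothetical record held by a public institution about one data subject.
record = {
    "provided": {"declared_income": 1200},         # given knowingly
    "observed": {"portal_logins_last_month": 12},  # gathered by the system
    "derived":  {"household_size_estimate": 3},    # computed from other data
    "inferred": {"eligibility_score": 0.72},       # model output
}
blob = export_subject_data(record)
```

Keeping derived and inferred data as first-class categories in the export is what makes the rectification and contestation rights above operational: a person cannot challenge an inference they were never shown.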
  • States must provide adequate, agile, free, and effective mechanisms or procedures to process and respond to data subjects’ requests for access to and control of their data, with reasonable time limits set for their resolution and under the responsibility of properly trained officials.44
  • Informational self-determination is connected to a person’s ability to shape and determine their identity. In a context where the processing of personal data through profiling and other techniques creates “data representatives” or “data doubles” that intermediate the person’s relationship with others, including state authorities, the powers associated with informational self-determination, as well as non-discrimination and data protection safeguards, are key to preserve individuals’ autonomy and rights.
  • The right to identity as a guarantee derived from self-determination and the free development of one’s personality reinforces the extra care States must give to privacy, data protection, data security, and nondiscrimination when using data processing, digitalization, and algorithmic decision-making to intermediate their relationship with the population. Such a guarantee indicates the need to deter unnecessary and disproportionate data processing while emphasizing the reasons why people have the right to understand how their data is processed to shape state bodies’ perceptions and conclusions about who they are.
  • Relatedly, such a guarantee stands in contradiction with an indiscriminate normalization of comprehensive digital identity schemes.45 Without strict and robust safeguards, paired with a proper state apparatus to prevent human rights violations, digital identification schemes will undermine the right to identity conceived as a guarantee arising from individuals’ self-determination and the free development of their personality.
  • The set of rights analyzed in this section in connection with the principles of equality and non-discrimination requires that States abide by the rights to reproductive autonomy and reproductive health when formulating related public policies. The adoption of AI/ADM technologies as policy instruments in this context demands careful consideration as to whether they are fit for the purpose and non-arbitrary (see Chapter 5), and whether their adoption involved broad civic participation, particularly from groups engaged with women’s rights and related issues.
  • Family and family life play a central role in people’s lives, and States’ decision-making affecting family relations and ties must respect this reality. Such decisions should serve the development and strengthening of the family unit whenever this is reasonable and in accordance with the free development of one’s personality and children’s rights. The concept of “family” must be broadly understood to encompass different models and configurations.
  • This means that the adoption of AI/ADM technologies for State determinations that may cause children to be separated from their family must rigorously observe Inter-American System’s parameters for rights restrictions (see Chapters 2 and 5) and Inter-American Court guidance linking the protection of private and family life, children’s rights, due process, and equality and non-discrimination (see also Sections 4.3 and 4.4).
  • State institutions must have in place the apparatus and processes needed to properly ensure the rights of children and family members to be heard. “Those who participate in decision-making processes must have the necessary personal and professional competence to identify advisable measures from the standpoint of the child’s interests.”46 It follows that, while AI/ADM systems may be appropriate to support certain data analysis needs in this context, the proper decision-making requires interdisciplinary human reasoning and assessment.
  • Assumptions of risks or harms arising from AI/ADM systems’ correlations do not constitute a legitimate basis for a decision involving a child’s separation from their family. The lack of material resources is not a sufficient basis either. Meaningful and interdisciplinary human involvement should be required in any decision-making regarding the separation of a child from their family. Finally, States must prevent discriminatory assumptions relating to biases and stereotypes from interfering in decision-making proceedings, considering both humans and AI/ADM technologies involved.

Freedoms of Expression, Association, and Assembly

  • State use of AI/ADM systems that interfere with freedoms of expression, association, and assembly is only admissible to the extent it is authorized by a law democratically approved, and is adequate, necessary, and proportionate to achieve a legitimate aim (see Chapter 2).
  • Freedoms of expression, association, and assembly are closely interlinked with the right to private life and anonymity. Indiscriminate State monitoring and surveillance of physical and online spaces through AI/ADM technologies seriously impairs the free and full enjoyment and exercise of such freedoms, with cascading impacts on autonomy and self-determination (see Section 4.5). State use of AI/ADM systems for mass surveillance purposes, as with facial recognition technologies (including emotion recognition), is inherently disproportionate and should not be tolerated under Inter-American human rights standards. The use of IMSI catchers, including against people exercising their freedoms of peaceful assembly and association, raises similar issues.47
  • Impacts on free expression and assembly rights are particularly severe in the context of the right to protest. As pointed out in Chapters 1 and 2, any State use of AI/ADM systems must be compatible with the tenets of a democratic society, and restrictions to conventional rights are only allowed to the extent they abide by such tenets. AI/ADM systems adopted by the government that prevent people from taking part in demonstrations or illegitimately inhibit their participation violate the Convention.
  • States must carefully prevent and address the disproportionate impact government use of AI/ADM systems has on the freedoms of expression, association, and assembly of historically discriminated against groups. Ensuring that these groups can fully exercise such freedoms is instrumental to their right to political participation, including their effective participation in State definitions on the use of AI technologies that may affect their rights.
  • States must structure and maintain oversight mechanisms to provide adequate accountability of public institutions’ use of AI/ADM systems, following proper transparency and participation standards (see Chapter 1). This is crucial to prevent abuses and violations of the freedoms discussed herein, and to ensure remedies and reparations should violations occur. The fact that algorithmic tools used in a state policy or initiative were developed by private parties does not exempt States from responsibility, including prevention and oversight duties, when adopting these systems for rights-related determinations.

The Right to a Dignified Life

  • Every human being has the right to a dignified life. That includes protection against circumstances under which they are “prevented from having access to the conditions that guarantee a dignified existence.”48 As a consequence, State adoption of AI/ADM systems cannot constitute an arbitrary barrier for people’s enjoyment and exercise of their ESC rights. This is also in line with the principle of non-regression (see Chapter 2).
  • States’ duty to ensure the right to a dignified life includes creating the necessary conditions to prevent violations and arbitrary restrictions of this right. This means that assessing and implementing AI/ADM systems within social protection policies must be grounded in principles and practices aimed at ensuring the right to a dignified life and the progressive realization of ESC rights.
  • In this sense, an important principle to guide State action in this and other contexts is, again, that people subject to AI/ADM-based decision proceedings are rights holders—including the right to a dignified life and related ESC rights. As such, effective civic participation of affected groups in the design, implementation, and evaluation of public policies involving AI/ADM systems is a second crucial principle to inform State action (see Chapters 3 and 5).
  • Equality and non-discrimination are also related principles that must govern the exercise of State functions in general and in the context of social protection and ESC rights (see Chapter 2 and Section 4.3).
  • The rights to social security and health are critical enablers of other rights, including the essential right to life and personal integrity. States must observe such an interdependence when using AI/ADM systems to allocate or deny provisions, subsidies, and services.
  • Public provisions, subsidies, and services are all state mechanisms to comply with obligations to ensure members of society can live a dignified life and progressively achieve the realization of ESC rights. Reducing, suspending, or denying them implies a restriction to corresponding ESC rights that must be provided for by law and based on reasonable grounds (see Chapter 2).
  • In addition, any State AI/ADM-based decision affecting the enjoyment of ESC rights must rigorously satisfy due process guarantees (see Section 4.4).
  • For the benefits of scientific and technological progress to be distributed inclusively and sustainably among the population, relevant State policies and practices must be based on respecting and protecting human rights. This reinforces that State development and use of AI/ADM systems for rights-based determinations must have international human rights law as its baseline.
    • The obligation of progressive realization (see Section 2.4) prohibits States from failing to take steps to achieve the comprehensive protection of ESC rights, especially when States’ failure to do so puts people’s lives or personal integrity at risk.49
    • The right to health is an integral part of the right to personal integrity and involves having an equal opportunity to enjoy the rights to the highest attainable level of health and to be free of interference. The latter includes the right to be free from nonconsensual medical treatment and experimentation. State adoption of AI/ADM systems to manage or in other ways establish medical treatment recommendations or routines must observe the right to be free from experimentation, among other rights.
    • State adoption of AI/ADM technologies in this context must consider that the right to the highest attainable level of health enables people to live a full life. Health is not merely the absence of disease or illness but a state of complete physical, mental, and social well-being.50 Government implementation of these technologies in a way that mostly restricts and/or undermines people’s well-being is the opposite of inclusively and sustainably distributing the benefits of scientific and technological progress.
    • States must also ensure that individuals receiving their health services have access to relevant information regarding their medical treatment as part of States’ active transparency duties. This information must be opportune, complete, comprehensible, and reliable.51 As the Court highlighted, “information accessibility” is one of the essential elements of the right to the highest attainable standard of health. Accessibility and non-discrimination are also elements States must take into account when assessing the implementation of AI/ADM systems in the context of health services.
    • This relates to the connection between physical and mental integrity, personal autonomy, and the freedom to make decisions regarding one’s own body and health. This connection also requires States to ensure and respect people’s decisions and choices regarding their health that have been made freely and responsibly.52
    • The deployment and/or implementation of AI/ADM systems in the welfare context (such as social security and health) must abide by States’ obligations regarding the right of access to information (see Section 4.2). Social protection policies are funded by public budget expenditures. As such, public spending directly connected to the use of these technologies within the welfare system must be publicly available and easily accessible as a general rule.
    • The inclusive and sustainable fulfillment of the right to enjoy the benefits of scientific and technological progress should encompass State policies that foster research and investments for local development of AI technologies based on principles of openness, decentralization, and respect for human rights.
    • One key element of respecting human rights in the context of policies fostering research and development of AI/ADM systems is the principle of personal autonomy and its prohibition against the “instrumentalization” of individuals (see Section 4.5). Therefore, State innovation policies must refrain from approaches that either exploit data from the most vulnerable or test experimental solutions on marginalized populations as incentives for partnerships with the private sector.53

Notes

  • Case of González et al. (“Cotton Field”) v. Mexico, Preliminary Objection, Merits, Reparations and Costs, Judgment of November 16, 2009, para. 33. 

  • Case of Artavia Murillo et al. (“In Vitro Fertilization”) v. Costa Rica, Merits, Reparations and Costs, Judgment of November 28, 2012, para. 245. 

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, Preliminary Objections, Merits, Reparations and Costs, Judgment of October 18, 2023, footnote 656, freely translated. 

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, para. 533, freely translated, and Case of Salvador Chiriboga v. Ecuador, Preliminary Objection and Merits, Judgment of May 6, 2008, para. 74. 

  • IACHR, Public Policy with a Human Rights Approach, September 15, 2018, para. 44. 

  • IACHR, Compendium on Democratic Institutions, Rule of Law, and Human Rights, November 30, 2023, para. 148. Section 3.1 of the Appendix provides examples of statements where the Commission highlights the importance of measures to ensure effective political participation of women, LGBTQI+ people, and indigenous people and communities. 

  • IACHR, Public Policy with a Human Rights Approach, para. 58.  

  • IACHR, Indigenous and Tribal Peoples’ Rights over their Ancestral Lands and Natural Resources, supra note 96, para. 285 (emphasis added). 

  • IACHR, Indigenous Peoples, Afro-Descendent Communities, and Natural Resources, supra note 96, para. 108 (emphasis added), citing IACHR, Special Rapporteur on the Right to Freedom of Expression, The Right to Access to Information in the Inter-American System, December 30, 2009, para. 72.  

  • IACHR, Indigenous and Tribal Peoples’ Rights over their Ancestral Lands and Natural Resources: Human Rights Protection in the Context of Extraction, Exploitation, and Development Activities, December 31, 2015, para. 317. 

  • IACHR, Indigenous and Tribal Peoples’ Rights over their Ancestral Lands and Natural Resources, para. 325 (emphasis added). 

  • IACHR, Indigenous and Tribal Peoples’ Rights over their Ancestral Lands and Natural Resources, para. 328. 

  • See IACHR, Public Policy with a Human Rights Approach, paras. 60-61. 

  • IACHR, Office of the Special Rapporteur for Freedom of Expression, The Inter-American Legal Framework regarding the Right to Access to Information: Second Edition, March 7, 2011, pp. 8-15. 

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, para. 607. 

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, para. 607.  

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, para. 607.  

  • “In order to be in line with the American Convention, an instance of interference must meet the following standards: to be contemplated in legislation, to serve a legitimate purpose, and to be suitable, necessary, and proportionate.” Case of Tristán Donoso v. Panamá, Preliminary Objection, Merits, Reparations, and Costs, Judgment of January 27, 2009, para. 56. 

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, para. 603, freely translated. 

  • See IACHR, Office of the Special Rapporteur for Freedom of Expression, Derecho a la Información y Seguridad Nacional: El Acceso a la Información de Interés Público frente a la Excepción de Seguridad Nacional, July 2020, paras. 74, 77-78, and 88; UN Special Rapporteur on Freedom of Opinion and Expression, OSCE Representative on Freedom of the Media and OAS Special Rapporteur for Freedom of Expression, Joint Declaration on Access to Information and on Secrecy Legislation, December 6, 2004; and IACHR, Report on Citizen Security and Human Rights, December 31, 2009. 

  • IACHR, Report on Citizen Security and Human Rights, para. 186. 

  • IACHR, Office of the Special Rapporteur for Freedom of Expression, Derecho a la Información y Seguridad Nacional, paras. 117 and 119. 

  • IACHR, Office of the Special Rapporteur for Freedom of Expression, Derecho a la Información y Seguridad Nacional, para. 126, and IACHR, Report on Citizen Security and Human Rights. 

  • Case of Gomes Lund et al. (“Guerrilha do Araguaia”) v. Brazil, Preliminary Objections, Merits, Reparations and Costs, Judgment of November 24, 2010, para. 231. 

  • IACHR, Office of the Special Rapporteur for Freedom of Expression, Los Órganos de Supervisión del Derecho de Acceso a la Información Pública: Compilación de informes temáticos contenidos en los Informes Anuales 2013 y 2014 de la Relatoría Especial para la Libertad de Expresión de la Comisión Interamericana de Derechos Humanos, 2016, para. 7. 

  • IACHR, Office of the Special Rapporteur for Freedom of Expression, Los Órganos de Supervisión del Derecho de Acceso a la Información Pública, para. 44. 

  • See Case of the Santo Domingo Massacre v. Colombia, Preliminary Objections, Merits and Reparations, Judgment of November 30, 2012, para. 18; IACHR, Compendium on the Obligation of States to adapt their Domestic Legislation to the Inter-American Standards of Human Rights, January 25, 2021, para. 29; and Case of Velásquez-Rodríguez v. Honduras, Merits, Judgment of July 29, 1988, para. 166.  

  • See Case of Barbani Duarte et al. v. Uruguay, Merits, Reparations and Costs, Judgment of October 13, 2011, para. 122.  

  • See more at Lacambra, S., Matthews, J., & Walsh, K. (May 2018). Opening the Black Box: Defendants' Rights to Confront Forensic Software, The Champion (NACDL); and Gullo, K., Victory! New Jersey Court Rules Police Must Give Defendant the Facial Recognition Algorithms Used to Identify Him, Electronic Frontier Foundation, June 7, 2023. Available at <https://www.eff.org/deeplinks/2023/06/victory-new-jersey-court-rules-police-must-give-defendant-facial-recognition>. 

  • As noted in the implications of Section 4.3, “automation bias” refers to the propensity for humans to deem decisions of a machine more objective or correct than those taken by people. 

  • One example of this concern is the stigmatization of women (especially black women) in the context of social welfare policies. See, for example, Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press; and Valente, M., Santos, N., & Fragoso, N. (2021). Presa na Rede de Proteção Social: Privacidade, Gênero e Justiça de Dados no Programa Bolsa Família [Trapped in the Social Safety Net: Privacy, Gender and Data Justice in the Bolsa Família Program]. InternetLab.  

  • In the Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, the Court adopted the definition of personal data used by the Inter-American Juridical Committee. Such definition establishes that personal data “includes information that identifies, or can reasonably be used to identify, a natural person, whether directly or indirectly, in particular by reference to an identification number, location data, an online identifier or to one or more factors specific to his or her physical, physiological, genetic, mental, economic, cultural or social identity.” It is also worth noting that the Updated Principles intentionally use the term “data” broadly to encompass “factual items or electronically-stored ‘bits’ or digital records” and “compilations of facts that taken together allow conclusions to be drawn about the particular individual(s).” The Updated Principles do so in order to promote the greatest protection of privacy. OAS, Inter-American Juridical Committee, Updated Principles on Privacy and Protection of Personal Data, Definition of Personal Data, p. 24. 

  • For a deeper analysis on the implications of Inter-American Human Rights Standards to State communications surveillance, see our report available at <https://necessaryandproportionate.org/americas-legal-analysis/>.  

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, para. 541.  

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, para. 554. 

  • IACHR, Office of the Special Rapporteur for Freedom of Expression, Standards for a Free, Open, and Inclusive Internet, March 15, 2017, para. 223. 

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, para. 573. 

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, footnote 718. 

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, para. 576. 

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, para. 576. 

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, paras. 599-600 and 608. 

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, para. 588. 

  • For the concepts of “derived” and “inferred” data, see Article 29 Data Protection Working Party, Guidelines on the Right to Data Portability, April 2017. 

  • Case of Members of the “José Alvear Restrepo” Lawyers Collective v. Colombia, para. 599. 

  • See the open letter of the #WhyID campaign, focused on the problems of the current implementations of digital identity programmes. Access Now et al. (n.d.). #WhyID Campaign: An Open Letter to the Leaders of International Development Banks, the United Nations, International Aid Organisations, Funding Agencies, and National Governments.  

  • Advisory Opinion OC-17/02, of August 28, 2002, Juridical Condition and Human Rights of the Child, para. 103.  

  • See N, Y. (June 28, 2019). Gotta Catch ‘Em All: Understanding How IMSI-Catchers Exploit Cell Networks. Electronic Frontier Foundation, and Privacy International. (June 2020). IMSI catchers legal analysis.  

  • Case of the “Street Children” (Villagran-Morales et al.) v. Guatemala, Merits, Judgment of November 19, 1999, para. 144. 

  • Case of Cuscul Pivaral et al. v. Guatemala, Preliminary Objection, Merits, Reparations and Costs, Judgment of August 23, 2018, para. 146. 

  • Case of Cuscul Pivaral et al. v. Guatemala, para. 105 (emphasis added). 

  • Case of I.V. v. Bolivia, para. 155. 

  • Case of I.V. v. Bolivia, para. 155. 

  • See concerns around this approach in López, J. (2020). Experimentando con la Pobreza: El SISBÉN y los Proyectos de Analítica de Datos en Colombia. Fundación Karisma.