6. Principle of Non-Discrimination
This chapter explains the applicability and scope of protection of the principle of non-discrimination. It then unpacks the relevance and application of the principle of non-discrimination to technological developments and AI decision-making processes. More specifically, this chapter addresses:
- the relevant law concerning the principle of non-discrimination
- the protective scope of non-discrimination
- the new challenges that new technologies and AI systems bring to conceptualising, detecting and substantiating discriminatory harms and risks
- States’ and business actors’ obligations under the principle of non-discrimination
(1) Protective Scope of the Principle of Non-discrimination
(1.1) Relevant law
(1.1.1) General clauses on equality and non-discrimination
International Covenant on Civil and Political Rights
Article 2(1)
Each State Party to the present Covenant undertakes to respect and to ensure to all individuals within its territory and subject to its jurisdiction the rights recognized in the present Covenant, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.
Article 26
All persons are equal before the law and are entitled without any discrimination to the equal protection of the law. In this respect, the law shall prohibit any discrimination and guarantee to all persons equal and effective protection against discrimination on any ground such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.
Article 14 European Convention on Human Rights – Prohibition of discrimination
The enjoyment of the rights and freedoms set forth in this Convention shall be secured without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status.
Article 1 Additional Protocol 12 to the ECHR – General prohibition of discrimination
1. The enjoyment of any right set forth by law shall be secured without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status.
2. No one shall be discriminated against by any public authority on any ground such as those mentioned in paragraph 1.
Article 21 EU Charter on Fundamental Rights – Non-discrimination
1. Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited.
2. Within the scope of application of the Treaties and without prejudice to any of their specific provisions, any discrimination on grounds of nationality shall be prohibited.
(1.1.2) Specialised treaties
International Convention on the Elimination of All Forms of Racial Discrimination
Article 1(1)
In this Convention, the term “racial discrimination” shall mean any distinction, exclusion, restriction or preference based on race, colour, descent, or national or ethnic origin which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise, on an equal footing, of human rights and fundamental freedoms in the political, economic, social, cultural or any other field of public life
Article 2
1. States Parties condemn racial discrimination and undertake to pursue by all appropriate means and without delay a policy of eliminating racial discrimination in all its forms and promoting understanding among all races, and, to this end:
(a) Each State Party undertakes to engage in no act or practice of racial discrimination against persons, groups of persons or institutions and to ensure that all public authorities and public institutions, national and local, shall act in conformity with this obligation;
(b) Each State Party undertakes not to sponsor, defend or support racial discrimination by any persons or organizations;
(c) Each State Party shall take effective measures to review governmental, national and local policies, and to amend, rescind or nullify any laws and regulations which have the effect of creating or perpetuating racial discrimination wherever it exists;
(d) Each State Party shall prohibit and bring to an end, by all appropriate means, including legislation as required by circumstances, racial discrimination by any persons, group or organization;
(e) Each State Party undertakes to encourage, where appropriate, integrationist multiracial organizations and movements and other means of eliminating barriers between races, and to discourage anything which tends to strengthen racial division.
Convention on the Elimination of All Forms of Discrimination against Women
Article 2
States Parties condemn discrimination against women in all its forms, agree to pursue by all appropriate means and without delay a policy of eliminating discrimination against women and, to this end, undertake:
(a) To embody the principle of the equality of men and women in their national constitutions or other appropriate legislation if not yet incorporated therein and to ensure, through law and other appropriate means, the practical realization of this principle;
(b) To adopt appropriate legislative and other measures, including sanctions where appropriate, prohibiting all discrimination against women;
(c) To establish legal protection of the rights of women on an equal basis with men and to ensure through competent national tribunals and other public institutions the effective protection of women against any act of discrimination;
(d) To refrain from engaging in any act or practice of discrimination against women and to ensure that public authorities and institutions shall act in conformity with this obligation;
(e) To take all appropriate measures to eliminate discrimination against women by any person, organization or enterprise;
(f) To take all appropriate measures, including legislation, to modify or abolish existing laws, regulations, customs and practices which constitute discrimination against women;
(g) To repeal all national penal provisions which constitute discrimination against women.
Article 10 Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law – Equality and non-discrimination
1. Each Party shall adopt or maintain measures with a view to ensuring that activities within the lifecycle of artificial intelligence systems respect equality, including gender equality, and the prohibition of discrimination, as provided under applicable international and domestic law.
2. Each Party undertakes to adopt or maintain measures aimed at overcoming inequalities to achieve fair, just and equitable outcomes, in line with its applicable domestic and international human rights obligations, in relation to activities within the lifecycle of artificial intelligence systems.
(1.2) A distinction does not constitute discrimination if there are reasonable and objective reasons
In law, discrimination is a distinction or differentiation on the ground of a protected characteristic (e.g., race, colour, sex, language) that is not justified by reasonable and objective reasons.
UN Human Rights Committee, General Comment No 18: Non-discrimination, UN Doc HRI/GEN/1/Rev.1, 10 November 1989
- […] the Committee believes that the term “discrimination” as used in the Covenant should be understood to imply any distinction, exclusion, restriction or preference which is based on any ground such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status, and which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise by all persons, on an equal footing, of all rights and freedoms.
[…]
13. […] the Committee observes that not every differentiation of treatment will constitute discrimination, if the criteria for such differentiation are reasonable and objective and if the aim is to achieve a purpose which is legitimate under the Covenant.
(1.3) Applying the definition of discrimination
ECtHR, Schalk and Kopf v Austria, App no 30141/04, 24 June 2010
96. The Court has established in its case-law that in order for an issue to arise under Article 14 there must be a difference in treatment of persons in relevantly similar situations. Such a difference of treatment is discriminatory if it has no objective and reasonable justification; in other words, if it does not pursue a legitimate aim or if there is not a reasonable relationship of proportionality between the means employed and the aim sought to be realised. The Contracting States enjoy a margin of appreciation in assessing whether and to what extent differences in otherwise similar situations justify a difference in treatment […].
97. On the one hand, the Court has held repeatedly that, just like differences based on sex, differences based on sexual orientation require particularly serious reasons by way of justification […]. On the other hand, a wide margin of appreciation is usually allowed to the State under the Convention when it comes to general measures of economic or social strategy [….].
98. The scope of the margin of appreciation will vary according to the circumstances, the subject matter and its background; in this respect, one of the relevant factors may be the existence or non-existence of common ground between the laws of the Contracting States […].
(1.4) Direct and indirect discrimination
ECtHR, D.H. and Others v. the Czech Republic, app no. 57325/00, 13 November 2007
175. […] discrimination means treating differently, without an objective and reasonable justification, persons in relevantly similar situations […] The Court has also accepted that a general policy or measure that has disproportionately prejudicial effects on a particular group may be considered discriminatory notwithstanding that it is not specifically aimed at that group […], and that discrimination potentially contrary to the Convention may result from a de facto situation […]
(1.5) The special function of Article 14 ECHR: not a self-standing provision
ECtHR, Schalk and Kopf v Austria, App no 30141/04, 24 June 2010
89. As the Court has consistently held, Article 14 complements the other substantive provisions of the Convention and its Protocols. It has no independent existence since it has effect solely in relation to “the enjoyment of the rights and freedoms” safeguarded by those provisions. Although the application of Article 14 does not presuppose a breach of those provisions – and to this extent it is autonomous –, there can be no room for its application unless the facts at issue fall within the ambit of one or more of the latter […].
However, Article 1 of Additional Protocol 12 to the ECHR creates a general prohibition of discrimination that functions independently of the applicability of other ECHR rights. Therefore, individuals may claim a violation of Article 1 of Additional Protocol 12 to the ECHR as a self-standing provision. This concerns only States that have signed and ratified Additional Protocol 12 (for details on those states see here).
(1.6) Burden of proof and evidentiary means in cases of discrimination
ECtHR, D.H. and Others v. the Czech Republic, app no. 57325/00, 13 November 2007
177. As to the burden of proof [in cases of alleged discrimination], the Court has established that once the applicant has shown a difference in treatment, it is for the Government to show that it was justified […]
178. As regards the question of what constitutes prima facie evidence capable of shifting the burden of proof on to the respondent State, the Court stated in Nachova and Others […] that in proceedings before it there are no procedural barriers to the admissibility of evidence or pre-determined formulae for its assessment. The Court adopts the conclusions that are, in its view, supported by the free evaluation of all evidence, including such inferences as may flow from the facts and the parties’ submissions. According to its established case-law, proof may follow from the coexistence of sufficiently strong, clear and concordant inferences or of similar unrebutted presumptions of fact. Moreover, the level of persuasion necessary for reaching a particular conclusion and, in this connection, the distribution of the burden of proof are intrinsically linked to the specificity of the facts, the nature of the allegation made and the Convention right at stake.
179. The Court has also recognised that Convention proceedings do not in all cases lend themselves to a rigorous application of the principle affirmanti incumbit probatio [he who alleges something must prove that allegation] […]. In certain circumstances, where the events in issue lie wholly, or in large part, within the exclusive knowledge of the authorities, the burden of proof may be regarded as resting on the authorities to provide a satisfactory and convincing explanation […]. In Nachova and Others (cited above, § 157), the Court did not rule out requiring a respondent Government to disprove an arguable allegation of discrimination in certain cases, even though it considered that it would be difficult to do so in that particular case, in which the allegation was that an act of violence had been motivated by racial prejudice. It noted in that connection that in the legal systems of many countries proof of the discriminatory effect of a policy, decision or practice would dispense with the need to prove intent in respect of alleged discrimination in employment or in the provision of services.
180. As to whether statistics can constitute evidence, the Court has in the past stated that statistics could not in themselves disclose a practice which could be classified as discriminatory […]. However, in more recent cases on the question of discrimination in which the applicants alleged a difference in the effect of a general measure or de facto situation […], the Court relied extensively on statistics produced by the parties to establish a difference in treatment between two groups (men and women) in similar situations.
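Illustration: the following is a minimal sketch (in Python, using hypothetical figures that are not drawn from D.H. and Others) of how group-level statistics of the kind the Court relied on can be turned into a simple measure of disparate impact capable of supporting a prima facie case.

```python
# Illustrative only: hypothetical counts, not figures from D.H. and Others.
# A court assessing "a difference in the effect of a general measure" can
# start from group-level statistics such as these.

def placement_rate(placed: int, total: int) -> float:
    """Share of a group subjected to the contested measure
    (e.g., placement in special schools)."""
    return placed / total

# Hypothetical figures for two groups in relevantly similar situations.
group_a = {"placed": 180, "total": 1000}  # e.g., children of a minority group
group_b = {"placed": 20, "total": 1000}   # e.g., children of the majority group

rate_a = placement_rate(**group_a)
rate_b = placement_rate(**group_b)

# A simple disparity ratio: how many times more often group A is affected.
disparity_ratio = rate_a / rate_b

print(f"Group A placement rate: {rate_a:.1%}")
print(f"Group B placement rate: {rate_b:.1%}")
print(f"Disparity ratio: {disparity_ratio:.1f}x")
# A large, unexplained disparity of this kind can amount to prima facie
# evidence of indirect discrimination, shifting the burden of proof to the
# respondent State to show an objective and reasonable justification.
```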
(1.7) Horizontal application of EU Directives and Article 21 EU Charter on Fundamental Rights
Xenidis, ‘Article 21. An Exploration, or the Right to Algorithmic Non-Discrimination’, in Alexandra Giannopoulou (ed), Digital Rights Are Charter Rights: Essay Series (Digital Freedom Fund & DitiRISE, 2023) 22, 23
[…] Due to the scope of the [EU Charter on Fundamental Rights], as expressed in Article 51(1), the prohibition of discrimination applies to ‘the institutions [and] bodies […] of the Union’ as well as ‘the Member States only when they are implementing Union law’. Hence, the EU has a comprehensive obligation to refrain from any form of discrimination based on all grounds listed in Article 21. For instance, this implies that Frontex, an EU agency, cannot utilize border control software that discriminates against individuals based on factors like skin colour, ethnic origin or language.
The obligations for member States, however, are more limited: the non-discrimination clause contained in Article 21 only applies when there is a ‘direct link’ with EU law. Within the material scope of the EU’s four anti-discrimination directives (Directives 2000/43/EC, 2000/78/EC, 2004/113/EC and 2006/54/EC), EU secondary law provisions prohibit discrimination on grounds of sex or gender, race or ethnic origin, sexual orientation, disability, religion or belief and age. Article 21 of the Charter, embodying the general principle of equal treatment, applies within the specific framework defined by these Directives. Yet, where the Directives cannot, in principle, apply directly to private parties, the Court of Justice of the European Union has recognised the horizontal direct effect of Article 21 of the Charter. […]
In scenarios where the situation falls outside the scope of EU secondary law but maintains a direct link with EU law, Article 21 operates in a subsidiary manner.
(2) Digital Technologies and Discrimination Concerns
General Recommendation No. 36 (2020) of the Committee on the Elimination of Racial Discrimination on preventing and combating racial profiling by law enforcement officials, UN Doc CERD/C/GC/36, 17 December 2020
33. Particular risks emerge when algorithmic profiling is used for determining the likelihood of criminal activity either in certain localities, or by certain groups or even individuals. Predictive policing that relies on historical data for predicting possible future events can easily produce discriminatory outcomes, in particular when the datasets used suffer from one or more of the flaws described above. For example, historical arrest data about a neighbourhood may reflect racially biased policing practices. If fed into a predictive policing model, use of these data poses a risk of steering future predictions in the same, biased direction, leading to overpolicing of the same neighbourhood, which in turn may lead to more arrests in that neighbourhood, creating a dangerous feedback loop.
34. Similar mechanisms have been reported to be present in judicial systems. When applying a sanction, or deciding whether someone should be sent to prison, be released on bail or receive another punishment, States are increasingly resorting to the use of algorithmic profiling, in order to foresee the possibilities that an individual may commit one or several crimes in the future. Authorities gather information regarding the criminal history of the individual, their family and their friends and their social conditions, including their work and academic history, in order to assess the degree of “danger” posed by the person from a score provided by the algorithm, which usually remains secret. […]
(2.1) Bias in AI decision-making
(2.1.1) How does bias enter AI decision-making?
Centre for Data Ethics and Innovation, Review into bias in algorithmic decision-making, November 2020, 26
Bias can enter algorithmic decision-making systems in a number of ways, including:
- Historical bias: The data that the model is built, tested and operated on could introduce bias. This may be because of previously biased human decision-making or due to societal or historical inequalities. For example, if […] your criminal record is in part a result of how likely you are to be arrested (as compared to someone else with the same history of behaviour, but not arrests), an algorithm constructed to assess risk of reoffending is at risk of not reflecting the true likelihood of reoffending, but instead reflects the more biased likelihood of being caught reoffending.
- Data selection bias: How the data is collected and selected could mean it is not representative. […] This has been the main cause of some of the widely reported problems with accuracy of some facial recognition algorithms across different ethnic groups, with attempts to address this focusing on ensuring a better balance in training data.
- Algorithmic design bias: It may also be that the design of the algorithm leads to introduction of bias. […]
- Human oversight is widely considered to be a good thing when algorithms are making decisions, and mitigates the risk that purely algorithmic processes cannot apply human judgement to deal with unfamiliar situations. However, depending on how humans interpret or use the outputs of an algorithm, there is also a risk that bias re-enters the process as the human applies their own conscious or unconscious biases to the final decision.
There is also risk that bias can be amplified over time by feedback loops, as models are incrementally re-trained on new data generated, either fully or partly, via use of earlier versions of the model in decision-making. For example, if a model predicting crime rates based on historical arrest data is used to prioritise police resources, then arrests in high risk areas could increase further, reinforcing the imbalance. […]
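Illustration: a minimal simulation sketch (in Python, with hypothetical numbers) of the feedback loop described above, in which patrols are allocated according to past arrest counts even though the two areas are identical by construction.

```python
# Minimal sketch of the feedback loop described above (hypothetical numbers).
# Both areas have the SAME underlying offending rate, but area A starts with
# more recorded arrests because it was historically policed more heavily.

import random

random.seed(0)

TRUE_OFFENDING_RATE = 0.05                # identical in both areas by construction
arrests = {"area_A": 120, "area_B": 60}   # historical (biased) arrest counts
TOTAL_PATROLS = 100

for year in range(1, 6):
    total_arrests = sum(arrests.values())
    for area in arrests:
        # "Predictive" allocation: patrols follow past arrest counts.
        patrols = round(TOTAL_PATROLS * arrests[area] / total_arrests)
        # New arrests depend on patrol presence, not on a higher offending rate.
        new_arrests = sum(
            random.random() < TRUE_OFFENDING_RATE for _ in range(patrols * 20)
        )
        arrests[area] += new_arrests
    print(f"year {year}: {arrests}")

# Even though the two areas behave identically, area A keeps accumulating far
# more recorded arrests: the model is re-trained on data that its own
# deployment helped to generate, so the historical imbalance never corrects
# itself and the absolute gap keeps widening.
```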
(2.1.2) Does discrimination in law capture unfair bias in AI systems?
Centre for Data Ethics and Innovation, Review into bias in algorithmic decision-making, November 2020, 28
Discrimination is a narrower concept than bias. Protected characteristics have been included in law due to historical evidence of systematic unfair treatment, but individuals can also experience unfair treatment on the basis of other characteristics that are not protected.
[…]
Algorithmic decision-making can also go beyond amplifying existing biases, to creating new biases that may be unfair, though difficult to address through discrimination law. This is because machine learning algorithms find new statistical relationships, without necessarily considering whether the basis for those relationships is fair, and then apply this systematically in large numbers of individual decisions.
Examples of non-protected characteristics in bias
Individuals and/or groups may be discriminated against on the basis of characteristics not protected in law (e.g., socio-economic status).
Moreover, data may encompass proxy information (e.g., postal codes) indirectly indicating race or ethnic origin. In many cases, the use of AI systems results in discriminatory practices not by using a prohibited category as a variable but through proxies and correlations established within the data, which are very difficult, if not impossible, to trace back or review. A characteristic example of how training data sets and input data can lead to discrimination against individuals and groups comes from predictive policing. Predictive policing consists of AI systems pulling from multiple sources of data, such as criminal records, crime statistics and the demographics of neighbourhoods, to determine the likelihood of criminal activity. Historical arrest data about a neighbourhood may reflect racially biased policing practices; relying on such data therefore reproduces and reinforces biases and aggravates or leads to discrimination. In practice, this may result in over-policing of the same neighbourhood, which in turn leads to more arrests, creating a dangerous feedback loop.
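Illustration: a minimal sketch (in Python, with hypothetical data) of proxy discrimination: the protected characteristic is never given to the model, yet a correlated postcode reproduces the disparity encoded in historical arrest records.

```python
# Minimal sketch of proxy discrimination (hypothetical data): the protected
# characteristic is never given to the "model", yet a correlated proxy
# (the postcode) reproduces the disparity in historical arrest records.

# Hypothetical training records: (postcode, historically_arrested)
history = [
    ("NW01", True), ("NW01", True), ("NW01", True), ("NW01", False),   # over-policed area
    ("SE09", True), ("SE09", False), ("SE09", False), ("SE09", False), # other area
]

def predicted_risk(postcode: str) -> float:
    """'Model': predicted risk = historical arrest rate of the postcode."""
    records = [arrested for pc, arrested in history if pc == postcode]
    return sum(records) / len(records)

# Hypothetical population: group membership correlates with postcode,
# but is never passed to predicted_risk().
population = [
    {"group": "minority", "postcode": "NW01"},
    {"group": "majority", "postcode": "SE09"},
]

for person in population:
    risk = predicted_risk(person["postcode"])
    print(f"{person['group']:>8} resident of {person['postcode']}: risk score {risk:.0%}")

# The scores diverge (75% v. 25%) purely via the postcode proxy, which is why
# removing the protected characteristic from the input data does not, by
# itself, prevent indirect discrimination.
```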
Example of new types of discriminatory harms
AI systems also create novel group discriminatory harms. For instance, when clustering algorithms profile and categorise individuals, they tend to discriminate against groups of individuals who display similar behavioural patterns. Group harms also relate to groups of people with shared backgrounds (e.g., income, residence, race, gender, nationality). In these instances, both individuals and groups face great difficulty in being identified as victims, not to mention that they may not even be aware of the harms involved.
(2.2) Algorithmic opacity as a barrier to detecting and contesting discriminatory outcomes
Algorithmic opacity creates many challenges for detecting discriminatory and human rights harms and substantiating respective claims. Algorithmic opacity can be unintentional or intentional.
(2.2.1) Unintentional algorithmic opacity
Unintentional algorithmic opacity concerns the complexity of technological systems and the different ways in which bias can be ingrained into these systems. It was explained earlier how bias creeps into AI systems in various, often invisible, ways. The so-called “black box”, namely the complexity of AI systems and our limited ability to understand how and why they reach certain outcomes, merits some explanation.
Explaining the black box of deep learning
Science Magazine, AI detectives are cracking open the black box of deep learning
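Illustration: a minimal sketch (in Python, using scikit-learn; it is not the system discussed in the article) of why a trained model offers no human-readable reasons of its own, and of the kind of post-hoc probe analysts use to approximate which inputs mattered.

```python
# Minimal sketch (using scikit-learn, not any system from the article) of why
# a trained model offers no human-readable reasons of its own, and of one
# post-hoc probe (permutation importance) used to approximate what mattered.

from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

# The "explanation" the model itself offers: hundreds of raw weights.
n_weights = sum(w.size for w in model.coefs_)
print(f"the model stores {n_weights} weights; none maps to a human-readable rule")

# Post-hoc probe: shuffle each feature and measure how much accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")

# Probes of this kind estimate WHICH inputs matter, but not HOW the model
# combines them, which is the gap described as the "black box" above.
```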
(2.2.2) Intentional algorithmic opacity
Algorithmic opacity may be intentional too. It is common for AI systems that are developed by the private sector and deployed in the private or public sector to be shielded from scrutiny. This is due to restrictions imposed by contracts, intellectual property laws or technical measures.
(2.2.2.1) Intentional algorithmic opacity and IP interests
In State v. Loomis the Supreme Court of Wisconsin ruled for the first time on the constitutionality of the use of AI in sentencing. The case concerned the use of a risk assessment tool – COMPAS – to predict an individual’s risk of recidivism. Loomis argued, among other things, that he was not able to understand how his COMPAS score was calculated due to the proprietary nature of the system, which, in his view, violated his right to due process. The Supreme Court dismissed his arguments, holding that the safeguards concerning the accuracy of the tool and the judges’ capacity to understand its possible shortcomings were adequate, provided that its use is accompanied by certain cautions.
USA, Supreme Court of Wisconsin, State v. Loomis, 881 N.W.2d 749 (Wis. 2016)
46. We turn to address Loomis’ first argument that a circuit court’s consideration of COMPAS risk assessment violates a defendant’s due process right to be sentenced based on accurate information. Loomis advances initially that the proprietary nature of COMPAS prevents a defendant from challenging the scientific validity of the risk assessment. Accordingly, Loomis contends that […] a defendant cannot ensure that he is being sentenced based on accurate information.
53. […] Although Loomis cannot review and challenge how the COMPAS algorithm calculates risk, he can at least review and challenge the resulting risk scores set forth in the report […].
54. Loomis is correct that the risk scores do not explain how the COMPAS program uses information to calculate the risk scores. However, Northpointe’s 2015 Practitioners Guide to COMPAS explains that the risk scores are based largely on static information (criminal history), with limited use of some dynamic variables (i.e. criminal associates, substance abuse).
55. […] Thus, to the extent that Loomis’s risk assessment is based upon his answers to questions and publicly available data about his criminal history, Loomis had the opportunity to verify that the questions and answers listed on the COMPAS report were accurate.
58. Some states that use COMPAS have conducted validation studies of COMPAS concluding that it is a sufficiently accurate risk assessment tool […]
59. However, Loomis relies on other studies of risk assessment tools that have raised questions about their accuracy […]
62. In addition to these problems, there is concern that risk assessment tools may disproportionately classify minority offenders as higher risk, often due to factors that may be outside their control, such as familial background and education. Other state studies indicate that COMPAS is more predictive of recidivism among white offenders than black offenders.
64. Additional concerns are raised about the need to closely monitor risk assessment tools for accuracy […]
65. Focusing exclusively on its use at sentencing and considering the expressed due process arguments regarding accuracy, we determine that use of a COMPAS risk assessment must be subject to certain cautions in addition to the limitations set forth herein.
66. Specifically, any [Presentence Investigation Report] containing a COMPAS risk assessment must inform the sentencing court about the following cautions regarding a COMPAS risk assessment’s accuracy: (1) the proprietary nature of COMPAS has been invoked to prevent disclosure of information relating to how factors are weighed or how risk scores are to be determined; (2) risk assessment compares defendants to a national sample, but no cross-validation study for a Wisconsin population has yet been completed; (3) some studies of COMPAS risk assessment scores have raised questions about whether they disproportionately classify minority offenders as having a higher risk of recidivism; and (4) risk assessment tools must be constantly monitored and re-normed for accuracy due to changing populations and subpopulations. Providing information to sentencing courts on the limitation and cautions attendant with the use of COMPAS risk assessment will enable courts to better assess the accuracy of the assessment and the appropriate weight to be given to the risk score.
(2.2.2.2) Intentional algorithmic opacity and “gaming the system”
States may also refuse to reveal details about AI systems used in the public sector by invoking the concern, or under the pretext, that individuals will “game the system”. This was the case in SyRI, where the Netherlands refused to disclose to the court further details on how the AI system functioned.
District Court of The Hague, NCJM et al. and FNV v The State of the Netherlands, 6 March 2020
[In the SyRI case, the applicants challenged the Dutch government’s use of System Risk Indication (SyRI)—an algorithm designed to predict social welfare fraud. The applicants argued that SyRI’s use violated the right to privacy, data protection and the principle of non-discrimination.]
6.49 The court finds that it is unable to assess the correctness of the position of the State of the precise nature of SyRI because the State has not disclosed the risk model and the indicators of which the risk model is composed or may be composed. In these proceedings the State has also not provided the court with objectively verifiable information to enable the court to assess the viewpoint of the State on the nature of SyRI. The reason the State gives for this is that citizens could then adjust their conduct accordingly. This is a deliberate choice of the State. That choice also coincides with the starting point of the legislator regarding the provision of information on SyRI. The SyRI legislation does not show how the decision model of SyRI functions and which indicators are or can be used in a SyRI project (see 4.23 above for the terms decision model and indicators), i.e. which factual data make or can make the presence of a certain situation plausible.
(2.2.2.3) Intentional algorithmic opacity and the need for transparency and scrutiny
Netherlands Institute for Human Rights, Transparent Use of Algorithms by the Government, 29 June 2023, 9-10
Objections to the obligation of transparency
[…]
Objection 2: Secrecy required in risk selection
Administrative bodies in charge of enforcement and detecting fraud or the abuse of schemes benefit from a degree of secrecy. For example, the Tax and Customs Administration has to choose from nearly 10 million income tax returns each year with regard to its inspections. In such circumstances, is a government entitled to keep selection criteria used by its algorithm secret, in order to prevent calculating citizens from anticipating these criteria in order to avoid an inspection?
The phenomenon of citizens tailoring their behaviour to known or public detection criteria is referred to as the problem of ‘gaming the system’. […]
First of all, it is useful to mention that in practice, transparency need not be completely ignored for detection methods and fraud detection. By not naming the exact values of criteria used in calculations, but explaining in comprehensible language how they came to a suspicion of fraud, partial transparency can often be achieved without objection.
At the same time, the need for the secrecy of detection methods – in the broad sense – is understandable. However, the Institute sees a dilemma here. Fraud detection algorithms in particular are susceptible to highly damaging and far-reaching stigmatising forms of discrimination. Although risk selection algorithms only involve further inspection and not a sanction at that point, selective controls for suspicious characteristics such as a migration background have a highly stigmatising effect.
That effect is self-reinforcing, as an algorithm that selectively searches more intensively than average on the basis of bias will also find more cases of fraud […]. Therefore, when Dutch administrative bodies choose to exempt detection algorithms (in part) from an obligation of transparency, the related monitoring will have to be organised through other, thorough and prudent methods.
In such cases, the Institute considers it necessary to carry out mandatory and independent verification beforehand. Algorithms for which the confidentiality of (some) test and selection elements is desired should be submitted to a body qualified to handle this matter. Until authorisation has been given, detection algorithms with secret elements must not be put into use.
Since the choice of these types of elements in algorithms can be dynamic, the possibility of subjecting these algorithms to periodic retesting should also be implemented.
(2.3) Burden of proof and evidence for detecting and contesting discriminatory harms/outcomes
(2.3.1) Evidence and burden of proof in the absence of sufficient information on algorithmic systems – State obligation to put safeguards in place to neutralise the risk of discriminatory effects
District Court of The Hague, NCJM et al. and FNV v The State of the Netherlands, 6 March 2020
6.92 NJCM et al., in these proceedings also supported by FNV and the UN Special Rapporteur on extreme poverty and human rights, has explained extensively that it believes that the use of SyRI has a discriminatory and stigmatising effect. NJCM et al. notes that SyRI is used to further investigate neighbourhoods that are known as problem areas. This increases the chances of discovering irregularities in such areas as compared to other neighbourhoods, which in turn confirms the image of a neighbourhood as a problem area, contributes to stereotyping and reinforces a negative image of the occupants of such neighbourhoods, even if no risk reports have been generated about them.
6.93 It is correct that to date SyRI has only been applied to so-labelled ‘problem districts’, as confirmed by the State at the hearing. This in and of itself need not imply that such use is disproportionate or otherwise contrary to Article 8 paragraph 2 ECHR in all cases. However, given the large amounts of data that qualify for processing in SyRI, including special personal data, and the circumstance that risk profiles are used, there is in fact a risk that SyRI inadvertently creates links based on bias, such as a lower socio-economic status or an immigration background, as NJCM et al. argue.
6.94 Based on the SyRI legislation, it cannot be assessed whether this risk is sufficiently neutralised due to the absence of a verifiable insight into the risk indicators and the risk model as well as the functioning of the risk model, including the analysis method applied by the Social Affairs and Employment Inspectorate. The circumstance that the process of data processing consists of two phases and that the analysis unit of the Social Affairs and Employment Inspectorate, following a link of the files by the IB, assesses the decrypted data on their worthiness of investigation, which includes a human check for false positives and false negatives, is deemed insufficient by the court. After all, the manner in which the definitive risk selection takes place is not public. Nor are the data subjects informed about how the definitive risk selection is effectuated or about the associated conclusion whether or not a risk report is submitted, while the SyRI legislation only provides for a general monitoring by the AP afterwards.
6.95 In view of the foregoing, the court is of the opinion that the SyRI legislation contains insufficient safeguards to protect the right to respect for private life in relation to the risk indicators and the risk model which can be used in a concrete SyRI project. […]
(2.3.2) Burden of proof lies with the State to dispel suspicion of discriminatory effects?
UN Special Rapporteur on extreme poverty and human rights, Brief as amicus curiae in the SyRI case before the District Court of The Hague
37. In the context of an appropriately demanding proportionality assessment, the Special Rapporteur believes the burden of proof lies with the government to provide evidence that dispels the suspicion that the singular focus of SyRI on poor and marginalised groups in Dutch society is justified in light of available government statistics. An important test in this regard would be whether similar levels of intrusiveness that are evident in the context of the SyRI system would also be considered acceptable if well-off areas and groups were involved instead of poor ones. These considerations affect the proportionality assessment in this case, but are also relevant in light of an assessment of whether SyRI discriminates on prohibited grounds, including on the basis of race, colour, national or social origin, property or birth status.
(3) States’ Obligations Ensuring that AI Systems and Other Technologies Are in Compliance with International Human Rights Law and the Prohibition of Discrimination
(3.1) General Recommendation No. 36 (2020) of the Committee on the Elimination of Racial Discrimination on preventing and combating racial profiling by law enforcement officials, UN Doc CERD/C/GC/36, 17 December 2020
58. States should ensure that algorithmic profiling systems used for the purposes of law enforcement are in full compliance with international human rights law. To that effect, before procuring or deploying such systems States should adopt appropriate legislative, administrative and other measures to determine the purpose of their use and to regulate as accurately as possible the parameters and guarantees that prevent breaches of human rights. Such measures should, in particular, be aimed at ensuring that the deployment of algorithmic profiling systems does not undermine the right not to be discriminated against, the right to equality before the law, the right to liberty and security of person, the right to the presumption of innocence, the right to life, the right to privacy, freedom of movement, freedom of peaceful assembly and association, protections against arbitrary arrest and other interventions, and the right to an effective remedy.
59. States should carefully assess the potential human rights impact prior to employing facial recognition technology, which can lead to misidentification owing to a lack of representation in data collection […]
60. States should ensure that algorithmic profiling systems deployed for law enforcement purposes are designed for transparency, and should allow researchers and civil society to access the code and subject it to scrutiny. There should be continual assessment and monitoring of the human rights impact of those systems throughout their life cycle, and States should take appropriate mitigation measures if risks or harms to human rights are identified. […] Such processes should include community impact assessments. Groups that are potentially or actually affected and relevant experts should be included in the assessment and mitigation processes.
61. States should take all appropriate measures to ensure transparency in the use of algorithmic profiling systems. This includes public disclosure of the use of such systems and meaningful explanations of the ways in which the systems work, the data sets that are being used, and the measures in place to prevent or mitigate human rights harms. States should adopt measures to ensure that independent oversight bodies have a mandate to monitor the use of artificial intelligence tools by the public sector, and to assess them against criteria developed in conformity with the [CERD] to ensure they are not entrenching inequalities or producing discriminatory results […]
(3.2) Public authorities have the positive duty to do all that they reasonably can to ensure that an AI system, especially a novel and controversial technology, such as live automated facial recognition technology, does not have an inbuilt racial or gender bias.
R. (on the application of Edward Bridges) v. Chief Constable of S. Wales Police [2020] EWCA Civ 1058 (Ed Bridges)
1. This appeal concerns the lawfulness of the use of live automated facial recognition technology (“AFR”) by the South Wales Police Force […] in an ongoing trial using a system called AFR Locate. AFR Locate involves the deployment of surveillance cameras to capture digital images of members of the public, which are then processed and compared with digital images of persons on a watchlist compiled by the South Wales Police Force for the purpose of the deployment. On the facts of the present case, AFR Locate has been used in an overt manner. […]
2. […] The grounds of challenge were that AFR is not compatible with the right to respect for private life under Article 8 of the European Convention on Human Rights (“the Convention”), which is one of the Convention rights set out in Sch.1 to the Human Rights Act 1998 (“HRA”); data protection legislation; and the Public Sector Equality Duty […] in section 149 of the Equality Act 2010.
[…]
Ground 5: Public Sector Equality Duty
181. We acknowledge that what is required by the Public Sector Equality Duty is dependent on the context and does not require the impossible. It requires the taking of reasonable steps to make enquiries about what may not yet be known to a public authority about the potential impact of a proposed decision or policy on people with the relevant characteristics, in particular for present purposes race and sex.
182. We also acknowledge that, as the Divisional Court found, there was no evidence before it that there is any reason to think that the particular AFR technology used in this case did have any bias on racial or gender grounds. That, however, it seems to us, was to put the cart before the horse. The whole purpose of the positive duty (as opposed to the negative duties in the Equality Act 2010) is to ensure that a public authority does not inadvertently overlook information which it should take into account.
[…]
191. In our view, this does not constitute a sufficient answer to the challenge based on the Public Sector Equality Duty. As Mr Squires submitted, Mr Edgell was dealing with a different set of statistics. He did not know, for obvious reasons, the racial or gender profiles of the total number of people who were captured by the AFR technology but whose data was then almost immediately deleted. In order to check the racial or gender bias in the technology, that information would have to be known. We accept Mr Beer’s submission that it is impossible to have that information, precisely because a safeguard in the present arrangements is that that data is deleted in the vast majority of cases. That does not mean, however, that the software may not have an inbuilt bias, which needs to be tested. In any event, with respect to Mr Edgell, he is not an expert who can deal with the technical aspects of the software in this context.
[…]
200. Finally, we would note that the Divisional Court placed emphasis on the fact that the South Wales Police Force continue to review events against the section 149(1) criteria. It said that this is the approach required by the Public Sector Equality Duty in the context of a trial process. With respect, we do not regard that proposition to be correct in law. The Public Sector Equality Duty does not differ according to whether something is a trial process or not. If anything, it could be said that, before or during the course of a trial, it is all the more important for a public authority to acquire relevant information in order to conform to the Public Sector Equality Duty and, in particular, to avoid indirect discrimination on racial or gender grounds.
201. In all the circumstances, therefore, we have reached the conclusion that the South Wales Police Force have not done all that they reasonably could to fulfil the Public Sector Equality Duty. We would hope that, as AFR is a novel and controversial technology, all police forces that intend to use it in the future would wish to satisfy themselves that everything reasonable which could be done had been done in order to make sure that the software used does not have a racial or gender bias.
202. For the above reasons this appeal will be allowed on Ground 5.
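Illustration: a minimal sketch (in Python, with hypothetical figures, not South Wales Police data) of the kind of disaggregated error-rate comparison the Court of Appeal had in mind; it presupposes exactly the demographic information about all captured faces that, on the facts of Bridges, had been deleted.

```python
# Illustrative sketch only (hypothetical figures, not South Wales Police data):
# a disaggregated false-match comparison of the kind needed to test whether the
# software has an inbuilt racial or gender bias. It presupposes knowing the
# demographic group of every person whose image was processed.

from collections import Counter

# Hypothetical match attempts: (demographic_group, was_false_match)
attempts = (
    [("group_A", False)] * 960 + [("group_A", True)] * 40
    + [("group_B", False)] * 985 + [("group_B", True)] * 15
)

totals = Counter(group for group, _ in attempts)
false_matches = Counter(group for group, is_false in attempts if is_false)

for group in sorted(totals):
    rate = false_matches[group] / totals[group]
    print(f"{group}: false-match rate {rate:.1%} over {totals[group]} attempts")

# Materially diverging per-group rates would indicate an inbuilt bias. Where
# the underlying demographic data are deleted almost immediately, as in
# Bridges, this comparison cannot be run at all; that is the evidential gap
# the court identified under the Public Sector Equality Duty.
```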
(3.3) States’ obligations in connection to corporations’ activities
Report of the Special Rapporteur, E. Tendayi Achiume, on contemporary forms of racism, racial discrimination, xenophobia, and related intolerance, Racial discrimination and emerging digital technologies: a human rights analysis UN Doc A/HRC/44/57, 18 June 2020
59. Although international human rights law is only directly legally binding on States, in order to discharge their legal obligations in this regard, States are required to ensure effective remedies for racial discrimination attributable to private actors, including corporations. Under the International Convention on the Elimination of All Forms of Racial Discrimination, States must enact special measures to achieve and protect racial equality throughout the public and private spheres. This should include close regulatory oversight of companies involved in emerging digital technologies.
61. States must ensure that human rights ethical frameworks for corporations involved in emerging digital technologies are linked with and informed by binding international human rights law obligations, including on equality and non-discrimination. There is a genuine risk that corporations will reference human rights liberally for the public relations benefits of being seen to be ethical, even in the absence of meaningful interventions to operationalize human rights principles. Although references to human rights, and even to equality and non-discrimination, proliferate in corporate governance documents, these references alone do not ensure accountability. Similarly, implementation of the framework of Guiding Principles on Business and Human Rights, including through initiatives such as the B-Tech Project, must incorporate legally binding obligations to prohibit – and provide effective remedies for – racial discrimination.
(4) Business Sector’s Duties Ensuring that AI Systems and Other Technologies Are Compliant with International Human Rights Law and the Prohibition of Discrimination
(4.1) General Recommendation No. 36 (2020) of the Committee on the Elimination of Racial Discrimination on preventing and combating racial profiling by law enforcement officials, UN Doc CERD/C/GC/36, 17 December 2020
64. […] private business enterprises […], in the process of developing, learning, marketing and using algorithms: (a) comply with the principle of equality and non-discrimination, and respect human rights in general, in line with the Guiding Principles on Business and Human Rights (in particular guiding principles 1–3, 11 and 24); (b) respect the precautionary principle and any administrative or legislative measure enacted to ensure transparency; (c) disclose publicly whether law enforcement has access to private data on individuals; and (d) avoid causing disparate or disproportionate impact on the social groups protected by the [CERD].
[…]
66. […] companies that are developing, selling or operating algorithmic profiling systems for law enforcement purposes have a responsibility to involve individuals from multiple disciplines, such as sociology, political science, computer science and law, to define the risks to, and ensure respect for, human rights […]
67. In the process of identifying, assessing, preventing and mitigating adverse human rights impacts, companies should pay particular attention to the data-related factors outlined in paragraph 27 above. Training data should be selected, and models designed, so as to prevent discriminatory outcomes and other adverse impacts on human rights. Moreover, companies should pursue diversity, equity and other means of inclusion in the teams developing algorithmic profiling systems. Companies should also be open to independent third-party audits of their algorithmic profiling systems. Where the risk of discrimination or other human rights violations has been assessed to be too high or impossible to mitigate, including because of the nature of a planned or foreseeable use by a State, private sector actors should not sell or deploy an algorithmic profiling system.
(4.2) Case study: Intersectionality and gender identity
(4.2.1) How does automated gender classification contribute to gender identity?
Theilen, ‘Article 20. Digital Inequalities and the Promise of Equality before the Law’, in Giannopoulou (ed), Digital Rights Are Charter Rights: Essay Series (Digital Freedom Fund & DitiRISE, 2023) 18, 20
While legal doctrine sees [intersectionality analysis] as subsidiary to more specific non-discrimination clauses, the promise of equality for ‘everyone’ in Article 20 of the Charter [of Fundamental Rights] should serve as a reminder that discrimination cannot be siloed into separate grounds of discrimination but must be considered holistically.
[…]
Automated gender classification contributes to normalising the idea that gender identity is readable from physical appearance and stands opposed to a self-determined gender identity – which is why many trans persons resist the technology as such, rather than focusing on reform to make it more inclusive.
(4.2.2) Disproportionate effects of content moderation and platform governance on the expressive rights of women and LGBTQI+ users
(4.2.2.1) Adult Nudity and Sexual Activity Community Standard, Meta Community Standards
Do not post:
Imagery of real nude adults, if it depicts:
[…]
Uncovered female nipples except in the context of breastfeeding, birth giving and after-birth moments, medical or health context (for example, post-mastectomy, breast cancer awareness or gender confirmation surgery) or an act of protest.
(4.2.2.2) META Oversight Board, Gender identity and nudity cases, 2022-009-IG-UA and 2022-010-IG-UA, 17 January 2023
Case summary
The Oversight Board has overturned Meta’s original decisions to remove two Instagram posts depicting transgender and non-binary people with bare chests. It also recommends that Meta change its Adult Nudity and Sexual Activity Community Standard so that it is governed by clear criteria that respect international human rights standards.
About the case
In this decision, the Oversight Board considers two cases together for the first time. Two separate pieces of content were posted by the same Instagram account, one in 2021, the other in 2022. The account is maintained by a US-based couple who identify as transgender and non-binary.
Both posts feature images of the couple bare-chested with the nipples covered. The image captions discuss transgender healthcare and say that one member of the couple will soon undergo top surgery (gender-affirming surgery to create a flatter chest), which the couple are fundraising to pay for.
Following a series of alerts by Meta’s automated systems and reports from users, the posts were reviewed multiple times for potential violations of various Community Standards. Meta ultimately removed both posts for violating the Sexual Solicitation Community Standard, seemingly because they contain breasts and a link to a fundraising page.
The users appealed to Meta and then to the Board. After the Board accepted the cases, Meta found it had removed the posts in error and restored them.
Key findings
The Oversight Board finds that removing these posts is not in line with Meta’s Community Standards, values or human rights responsibilities. These cases also highlight fundamental issues with Meta’s policies.
Meta’s internal guidance to moderators on when to remove content under the Sexual Solicitation policy is far broader than the stated rationale for the policy, or the publicly available guidance. This creates confusion for users and moderators and, as Meta has recognized, leads to content being wrongly removed.
In at least one of the cases, the post was sent for human review by an automated system trained to enforce the Adult Nudity and Sexual Activity Community Standard. This Standard prohibits images containing female nipples other than in specified circumstances, such as breastfeeding and gender confirmation surgery.
This policy is based on a binary view of gender and a distinction between male and female bodies. Such an approach makes it unclear how the rules apply to intersex, non-binary and transgender people, and requires reviewers to make rapid and subjective assessments of sex and gender, which is not practical when moderating content at scale.
The restrictions and exceptions to the rules on female nipples are extensive and confusing, particularly as they apply to transgender and non-binary people. Exceptions to the policy range from protests to scenes of childbirth, and medical and health contexts, including top surgery and breast cancer awareness. These exceptions are often convoluted and poorly defined. In some contexts, for example, moderators must assess the extent and nature of visible scarring to determine whether certain exceptions apply. The lack of clarity inherent in this policy creates uncertainty for users and reviewers, and makes it unworkable in practice.
The Board has consistently said Meta must be sensitive to how its policies impact people subject to discrimination […]. Here, the Board finds that Meta’s policies on adult nudity result in greater barriers to expression for women, trans, and gender non-binary people on its platforms. For example, they have a severe impact in contexts where women may traditionally go bare-chested, and people who identify as LGBTQI+ can be disproportionately affected, as these cases show. Meta’s automated systems identified the content multiple times, despite it not violating Meta’s policies.
Meta should seek to develop and implement policies that address all these concerns. It should change its approach to managing nudity on its platforms by defining clear criteria to govern the Adult Nudity and Sexual Activity policy, which ensure all users are treated in a manner consistent with human rights standards. It should also examine whether the Adult Nudity and Sexual Activity policy protects against non-consensual image sharing, and whether other policies need to be strengthened in this regard.
The Oversight Board’s decision
The Oversight Board overturns Meta’s original decision to remove the posts.
The Board also recommends that Meta:
- Define clear, objective, rights-respecting criteria to govern its Adult Nudity and Sexual Activity Community Standard, so that all people are treated in a manner consistent with international human rights standards, without discrimination on the basis of sex or gender. Meta should first conduct a comprehensive human rights impact assessment on such a change, engaging diverse stakeholders, and create a plan to address any harms identified.
- Provide more detail in its public-facing Sexual Solicitation Community Standard on the criteria that lead to content being removed.
- Revise its guidance for moderators on the Sexual Solicitation Community Standard so that it more accurately reflects the public rules on the policy. This would help to reduce enforcement errors on Meta’s part.
Recommended reading: for those interested in reading the whole decision, please check here.
Exercise 1
According to Recommendation No 36 (2020) by the UN CERD Committee, the processes of identifying, assessing, monitoring and mitigating human rights impacts of AI systems throughout their life cycle should include community impact assessments (para 60). What are community impact assessments and why do you think that they are important?
You can also watch the video by Access Now on How AI is defining our bodies: https://www.youtube.com/watch?v=NS3XD6yF_gk
Exercise 2 / Flashcards / Quiz
States should ensure that AI systems are designed for transparency and subject to external scrutiny. What specific measures need to be taken and what stakeholders should be involved?
Exercise 3: AI systems are socio-technical systems: Looking for the human factor too
From your readings, can you identify the role of humans a) in how bias enters AI decision-making systems and b) in how discriminatory outcomes can be mitigated?
Further readings/resources
- Lu, ‘Regulating Algorithmic Harms’, Florida Law Review (2025 forthcoming)
- Mittelstadt, ‘From Individual to Group Privacy in Big Data Analytics’ (2017) 30 Philosophy & Technology 478
- Taylor, Floridi, van der Sloot (eds), Group Privacy: New Challenges of Data Technologies (Springer 2017)