2. Right to Freedom of Expression
This chapter explains how the right to freedom of expression applies to the digital environment and to AI systems. More specifically, this chapter addresses:
- what the protective scope of the right to freedom of expression includes (section 1)
- the restrictions that may be attached to the right to freedom of expression and the requirements such restrictions must meet to be lawful under human rights law (section 2)
- how private actors’ activities may severely affect the effective exercise of human rights, and whether and, if so, to what extent private actors have human rights duties (section 3)
- how the right to freedom of thought regains its relevance in addressing AI systems that may be incompatible with human rights (e.g. sentiment detection, manipulation and exploitation of vulnerabilities) (section 4)
(1) Protective Scope of the Right to Freedom of Expression
(1.1) Relevant law
Article 10 ECHR – Freedom of expression
1. Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. This Article shall not prevent States from requiring the licensing of broadcasting, television or cinema enterprises.
2. The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.
Article 19 ICCPR – Freedom of opinion and expression
1. Everyone shall have the right to hold opinions without interference.
2. Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.
3. The exercise of the rights provided for in paragraph 2 of this article carries with it special duties and responsibilities. It may therefore be subject to certain restrictions, but these shall only be such as are provided by law and are necessary:
(a) For respect of the rights or reputations of others;
(b) For the protection of national security or of public order (ordre public), or of public health or morals.
Article 11 EU Charter – Freedom of expression and information
1. Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.
2. The freedom and pluralism of the media shall be respected.
(1.2) Clarifying the scope of protection of Article 19 ICCPR
UN Human Rights Committee, General Comment No 34, Article 19: Freedoms of opinion and expression, UN Doc CCPR/C/GC/34, 12 September 2011
[…]
2. Freedom of opinion and freedom of expression are indispensable conditions for the full development of the person. They are essential for any society. They constitute the foundation stone for every free and democratic society. The two freedoms are closely related, with freedom of expression providing the vehicle for the exchange and development of opinions.
[…]
4. […] The freedoms of opinion and expression form a basis for the full enjoyment of a wide range of other human rights. For instance, freedom of expression is integral to the enjoyment of the rights to freedom of assembly and association, and the exercise of the right to vote.
[…]
11. Paragraph 2 requires States parties to guarantee the right to freedom of expression, including the right to seek, receive and impart information and ideas of all kinds regardless of frontiers. This right includes the expression and receipt of communications of every form of idea and opinion capable of transmission to others, subject to the provisions in article 19, paragraph 3, and article 20. It includes political discourse, commentary on one’s own and on public affairs, canvassing, discussion of human rights, journalism, cultural and artistic expression, teaching, and religious discourse. It may also include commercial advertising. The scope of paragraph 2 embraces even expression that may be regarded as deeply offensive, although such expression may be restricted in accordance with the provisions of article 19, paragraph 3 and article 20.
12. Paragraph 2 protects all forms of expression and the means of their dissemination. Such forms include spoken, written and sign language and such non-verbal expression as images and objects of art. Means of expression include books, newspapers, pamphlets, posters, banners, dress and legal submissions. They include all forms of audio-visual as well as electronic and internet-based modes of expression.
13. A free, uncensored and unhindered press or other media is essential in any society to ensure freedom of opinion and expression and the enjoyment of other Covenant rights. It constitutes one of the cornerstones of a democratic society. The Covenant embraces a right whereby the media may receive information on the basis of which it can carry out its function. The free communication of information and ideas about public and political issues between citizens, candidates and elected representatives is essential. This implies a free press and other media able to comment on public issues without censorship or restraint and to inform public opinion. The public also has a corresponding right to receive media output.
[…]
15. States parties should take account of the extent to which developments in information and communication technologies, such as internet and mobile based electronic information dissemination systems, have substantially changed communication practices around the world. There is now a global network for exchanging ideas and opinions that does not necessarily rely on the traditional mass media intermediaries. States parties should take all necessary steps to foster the independence of these new media and to ensure access of individuals thereto.
(1.3) Protection of Internet-based modes of expression
ECtHR, Times Newspapers Ltd (Nos. 1 and 2) v The United Kingdom, App nos. 3002/03 and 23676/03, 10 March 2009
27. The Court has consistently emphasised that Article 10 guarantees not only the right to impart information but also the right of the public to receive it […]. In the light of [the Internet’s] accessibility and its capacity to store and communicate vast amounts of information, the Internet plays an important role in enhancing the public’s access to news and facilitating the dissemination of information in general.
(1.4) Internet access as a human right or as an enabler of exercising freedom of expression?
Community Court of Justice of the Economic Community of West African States (ECOWAS), Amnesty International & Ors v The Togolese Republic, 25 June 2020
38. Access to internet is not stricto senso a fundamental human right but since internet service provides a platform to enhance the exercise of freedom of expression, it then becomes a derivative right that it is a component to the exercise of the right to freedom of expression. It is a vehicle that provides a platform that will enhance the enjoyment of the right to freedom of expression. Right to internet access is closely linked to the right of freedom of speech which can be seen to encompass freedom of expression as well. Since access to internet is complementary to the enjoyment of the right to freedom of expression, it is necessary that access to internet and the right to freedom of expression be deemed to be an integral part of human right that requires protection by law and makes its violation actionable. In this regard, access to internet being a derivative right and at the same time component part of each other, should be jointly treated as an element of human right to which states are under obligation to provide protection for in accordance with the law just in the same way as the right to freedom of expression is protected. Against this background, access to internet should be seen as a right that requires protection of the law and any interference with it has to be provided for by the law specifying the grounds for such interference.
(1.5) States’ obligations and bridging divides
How do we think about the effective exercise of the right to freedom of expression “on the ground”? How are different individuals affected in their ability to access the Internet and technological systems, or to use them effectively?
UN Human Rights Council, Res 47/16, The promotion, protection and enjoyment of human rights on the Internet, UN Doc A/HRC/RES/47/16, 7 July 2021
3. Also condemns unequivocally online attacks against women, including sexual and gender-based violence and abuse of women, in particular where women journalists, media workers, public officials or others engaging in public debate are targeted for their expression, and calls for gender-sensitive responses that take into account the particular forms of online discrimination
[…]
8. Calls upon all States to bridge the digital divides, including the gender digital divide, and to enhance the use of information and communications technology, in order to promote the full enjoyment of human rights for all, including by:
(a) Fostering an enabling online environment that is safe and conducive to engagement by all, without discrimination and with consideration for individuals facing systemic inequalities;
(b) Maintaining and enhancing efforts to promote access to information on the Internet as one means of facilitating affordable and inclusive education globally, underlining the need to address digital literacy and the digital divides;
(c) Promoting equal opportunities, including gender equality, in the design and implementation of information and communications technology and in mainstreaming a gender perspective in policy decisions and the frameworks that guide them;
UN Human Rights Council, Res 38/7, The promotion, protection and enjoyment of human rights on the Internet, UN Doc A/HRC/RES/38/7, 17 July 2018
7. Encourages all States to take appropriate measures to promote, with the participation of persons with disabilities, the design, development, production and distribution of information and communications technology and systems, including assistive and adaptive technologies, that are accessible to persons with disabilities.
(2) Restrictions to the Right of Freedom of Expression
A restriction on freedom of expression may be imposed only if the restriction (a) is provided by law; (b) serves one of the legitimate purposes exhaustively enumerated in the respective provision; and (c) is necessary in a democratic society.
(2.1) Conditions to be met for the restrictions to be lawful
UN Human Rights Committee, General Comment No 34, Article 19: Freedoms of opinion and expression, UN Doc CCPR/C/GC/34, 12 September 2011
21. […] when a State party imposes restrictions on the exercise of freedom of expression, these may not put in jeopardy the right itself. The Committee recalls that the relation between right and restriction and between norm and exception must not be reversed. […]
22. Paragraph 3 lays down specific conditions and it is only subject to these conditions that restrictions may be imposed: the restrictions must be “provided by law”; they may only be imposed for one of the grounds set out in subparagraphs (a) and (b) of paragraph 3; and they must conform to the strict tests of necessity and proportionality. Restrictions are not allowed on grounds not specified in paragraph 3, even if such grounds would justify restrictions to other rights protected in the Covenant. Restrictions must be applied only for those purposes for which they were prescribed and must be directly related to the specific need on which they are predicated.
[…]
25. For the purposes of paragraph 3, a norm, to be characterized as a “law”, must be formulated with sufficient precision to enable an individual to regulate his or her conduct accordingly and it must be made accessible to the public. […]
27. It is for the State party to demonstrate the legal basis for any restrictions imposed on freedom of expression. If, with regard to a particular State party, the Committee has to consider whether a particular restriction is imposed by law, the State party should provide details of the law and of actions that fall within the scope of the law.
28. The first of the legitimate grounds for restriction listed in paragraph 3 is that of respect for the rights or reputations of others. The term “rights” includes human rights as recognized in the Covenant and more generally in international human rights law. For example, it may be legitimate to restrict freedom of expression in order to protect the right to vote under article 25, as well as rights under article 17 […] Such restrictions must be constructed with care: while it may be permissible to protect voters from forms of expression that constitute intimidation or coercion, such restrictions must not impede political debate, including, for example, calls for the boycotting of a non-compulsory vote. The term “others” relates to other persons individually or as members of a community. Thus, it may, for instance, refer to individual members of a community defined by its religious faith or ethnicity.
29. The second legitimate ground is that of protection of national security or of public order (ordre public), or of public health or morals.
30. Extreme care must be taken by States parties to ensure that treason laws and similar provisions relating to national security, whether described as official secrets or sedition laws or otherwise, are crafted and applied in a manner that conforms to the strict requirements of paragraph 3. It is not compatible with paragraph 3, for instance, to invoke such laws to suppress or withhold from the public information of legitimate public interest that does not harm national security or to prosecute journalists, researchers, environmental activists, human rights defenders, or others, for having disseminated such information […]
31. On the basis of maintenance of public order (ordre public) it may, for instance, be permissible in certain circumstances to regulate speech-making in a particular public place. Contempt of court proceedings relating to forms of expression may be tested against the public order (ordre public) ground. In order to comply with paragraph 3, such proceedings and the penalty imposed must be shown to be warranted in the exercise of a court’s power to maintain orderly proceedings. Such proceedings should not in any way be used to restrict the legitimate exercise of defence rights.
32. The Committee observed in general comment No. 22, that “the concept of morals derives from many social, philosophical and religious traditions; consequently, limitations… for the purpose of protecting morals must be based on principles not deriving exclusively from a single tradition”. Any such limitations must be understood in the light of universality of human rights and the principle of non-discrimination.
33. Restrictions must be “necessary” for a legitimate purpose. […]
34. Restrictions must not be overbroad. The Committee observed in general comment No. 27 that “restrictive measures must conform to the principle of proportionality; they must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected…The principle of proportionality has to be respected not only in the law that frames the restrictions but also by the administrative and judicial authorities in applying the law”. The principle of proportionality must also take account of the form of expression at issue as well as the means of its dissemination. For instance, the value placed by the Covenant upon uninhibited expression is particularly high in the circumstances of public debate in a democratic society concerning figures in the public and political domain. […]
35. When a State party invokes a legitimate ground for restriction of freedom of expression, it must demonstrate in specific and individualized fashion the precise nature of the threat, and the necessity and proportionality of the specific action taken, in particular by establishing a direct and immediate connection between the expression and the threat.
[…]
43. Any restrictions on the operation of websites, blogs or any other internet-based, electronic or other such information dissemination system, including systems to support such communication, such as internet service providers or search engines, are only permissible to the extent that they are compatible with paragraph 3. Permissible restrictions generally should be content-specific; generic bans on the operation of certain sites and systems are not compatible with paragraph 3. It is also inconsistent with paragraph 3 to prohibit a site or an information dissemination system from publishing material solely on the basis that it may be critical of the government or the political social system espoused by the government.
(2.2) States’ obligations when blocking and filtering content online
Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Freedom of expression, states and the private sector in the digital age, UN Doc A/HRC/32/38, 11 May 2016
46. States often block and filter content with the assistance of the private sector. Internet service providers may block access to specific keywords, web pages or entire websites. On platforms that host content, the type of filtering technique depends on the nature of the platform and the content in question. Domain name registrars may refuse to register those that match a government blacklist; social media companies may remove postings or suspend accounts; search engines may take down search results that link to illegal content. The method of restriction required by Governments or employed by companies can raise both necessity and proportionality concerns, depending on the validity of the rationale cited for the removal and the risk of removal of legal or protected expression.
47. Ambiguities in State regulation coupled with onerous intermediary liability obligations could result in excessive filtering. Even if content regulations were validly enacted and enforced, users may still experience unnecessary access restrictions […]
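The filtering techniques and over-blocking risks described in paragraphs 46–47 can be made concrete with a short sketch. The Python snippet below is a minimal illustration, not a description of any actual filtering system; the blocklist entries, domains and URLs are hypothetical placeholders. It shows why coarse keyword- or domain-level matching inevitably sweeps in lawful expression, which is precisely the necessity and proportionality concern the Special Rapporteur raises.

```python
# Illustrative sketch only: a naive keyword/domain blocklist filter.
# All blocklist entries and URLs are hypothetical; no real system is shown.

BLOCKED_KEYWORDS = {"extremist"}        # hypothetical keyword blocklist
BLOCKED_DOMAINS = {"banned.example"}    # hypothetical domain blocklist

def is_blocked(url: str, page_text: str) -> bool:
    """Block if the URL's domain is blacklisted or any keyword appears."""
    domain = url.split("//")[-1].split("/")[0]
    if domain in BLOCKED_DOMAINS:
        return True
    # Substring matching cannot tell incitement apart from reporting,
    # research or counter-speech that mentions the same words.
    return any(kw in page_text.lower() for kw in BLOCKED_KEYWORDS)

# A news report *about* extremism is blocked alongside actual extremist
# propaganda -- the over-blocking of legal or protected expression at issue:
print(is_blocked("https://news.example/report",
                 "Researchers analyse extremist recruitment online."))  # True
```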
(2.3) Assessing whether an individual’s conviction and fine for sharing a link on social media was a necessary restriction under Article 19 ICCPR
Views adopted by the UN Human Rights Committee, Pavel Katorzhevsky v Belarus, communication no. 3095/2018, UN Doc CCPR/C/139/D/3095/2018, 13 October 2023
[…]
7.2 The Committee notes the author’s allegations that the authorities violated his rights under article 19 of the Covenant, as he was convicted and fined for sharing a link on the social media network VKontakte to an article entitled “Idiocy and fake honour to the victims of war in a capital city gymnasium”. […]
7.3 The Committee notes that the issue before it is to determine whether the restrictions imposed were justified under article 19 (3) of the Covenant. In that respect, it recalls its general comment No. 34 (2011), in which it stated, inter alia, that freedom of expression is essential for any society and a foundation stone for every free and democratic society. It notes that article 19 (3) of the Covenant allows restrictions on freedom of expression, including on the freedom to impart information and ideas, only to the extent that they are provided by law and only if they are necessary (a) for respect of the rights and reputations of others; or (b) for the protection of national security or public order (ordre public), or of public health or morals. Finally, any restriction on freedom of expression must not be overbroad in nature, that is, it must be the least intrusive among the measures which might achieve the relevant protective function and proportionate to the interest to be protected. The Committee recalls that the onus is on the State party to demonstrate that the restrictions on the author’s rights under article 19 of the Covenant were necessary and proportionate.
7.4 The Committee notes the State party’s submission that in the present case, the restrictions of the right to freedom of expression for the protection of national security or of public order were governed by the Law on Countering Extremism […] The Committee observes that according to article 1 of the Law, “extremist materials are informational products (printed, audio, audiovisual and other informational messages and/or materials, posters, banners and other visual agitation, and advertising products) intended for public use, distributed publicly or distributed in any way, containing calls for extremist activities, or promoting such activities, and recognized as extremist materials by a court decision”. The Committee also notes that when convicting the author, the national courts referred to the decision of the Central District Court of Minsk of 10 November 2016 by which all “informational products” (which include posts to websites and social media platforms) published on vk.com/rdbelarus were declared to be extremist materials and were included in the State List of Extremist Materials.
7.5 The Committee notes that the author posted a link to the article, which had been published on 26 November 2016, after the court decision finding all the informational products on the website to be extremist materials had been delivered. The Committee also notes that on 10 November 2016, the Central District Court of Minsk did not examine the article shared by the author to assess and determine its nature. The Committee observes that, as acknowledged by the State party […], all the informational products (posts) that were published on the mentioned websites before and after the delivery of the court’s decision of 10 November 2016 are automatically declared extremist materials without an individualized assessment of each informational product (post).
7.6 The Committee recalls that any restrictions on the operation of websites, blogs or any other Internet-based, electronic or other such information dissemination systems, including systems to support such communication, such as Internet service providers or search engines, are only permissible to the extent that they are compatible with article 19 (3) of the Covenant. Permissible restrictions generally should be content-specific; generic bans on the operation of certain sites and systems are not compatible with paragraph 3.
7.7 The Committee observes that in his appeal to Gomel Regional Court, the author requested the authorities to carry out an individualized assessment of the article entitled “Idiocy and fake honour to the victims of war in a capital city gymnasium”, relying on article 19 (3) of the Covenant. However, the appeal court merely acknowledged and upheld the decision of the Central District Court of Minsk of 10 November 2016. The Committee reiterates that even if the sanctions imposed on the author were permitted under domestic law, the State party must show that they were necessary for one of the legitimate aims set out in article 19 (3). The Committee further observes that the State party has failed to invoke any specific grounds related to the author to support the necessity of the restrictions imposed on him, as is required under article 19 (3) of the Covenant.
7.8 In particular, the Committee notes that the court decisions made no individualized assessment of the author’s case and have not provided any explanation as to why the conviction and fine imposed on him were necessary and the least intrusive among the measures which might achieve the relevant protective function and were proportionate to the interest to be protected. It therefore considers that the author’s right to freedom of expression under article 19 (2) of the Covenant has been violated.
(2.4) Actio popularis is not allowed in human rights law: conceptualising the victim-status requirement under the ECHR in cases of collateral effects of a measure blocking an Internet service/platform
ECtHR, Cengiz and Others v Turkey, App nos. 48226/10 and 14027/11, 1 December 2015
3. Relying on Article 10 of the Convention, the applicants complained in particular of a measure that had deprived them of all access to YouTube.
[…]
5. Mr Cengiz was born in 1974 and lives in İzmir. He is a lecturer at the Law Faculty of İzmir University and is an expert and legal practitioner in the field of freedom of expression.
Mr Akdeniz and Mr Altıparmak were born in 1968 and 1973 respectively. Mr Akdeniz is a professor of law at the Law Faculty of Istanbul Bilgi University. Mr Altıparmak is an assistant professor of law at the Political Science Faculty of Ankara University and director of the university’s Human Rights Centre.
The Court’s assessment on the victim status of the applicants
47. The Court notes that in a decision adopted on 5 May 2008, the Ankara Criminal Court of First Instance ordered the blocking of access to YouTube under section 8(1)(b), (2), (3) and (9) of Law no. 5651 on the ground that the content of ten video files available on the website in question had infringed Law no. 5816 prohibiting insults to the memory of Atatürk. The first applicant lodged an objection against the blocking order on 21 May 2010, seeking to have it set aside, and the second and third applicants did likewise on 31 May 2010. In their objections they relied on the protection of their right to freedom to receive and impart information and ideas.
48. On 9 June 2010 the Ankara Criminal Court of First Instance, finding that the applicants had not been parties to the case and thus did not have locus standi to challenge such orders, dismissed their objection. In so doing, it noted, among other things, that the blocking order complied with the requirements of the relevant legislation. It also adopted a further decision on 17 June 2010. Attempts by two of the applicants to challenge that decision were to no avail.
49. The Court reiterates at the outset that the Convention does not allow an actio popularis but requires as a condition for the exercise of the right of individual petition that the applicant be able to claim on arguable grounds that he himself has been a direct or indirect victim of a violation of the Convention resulting from an act or omission which can be attributed to a Contracting State. In Tanrıkulu and Others […], it found that readers of a newspaper whose distribution had been prohibited did not have victim status. Similarly, in Akdeniz […], it held that the mere fact that Mr Akdeniz – like the other users of two music-streaming websites in Turkey – was indirectly affected by a blocking order could not suffice for him to be acknowledged as a “victim” within the meaning of Article 34 of the Convention. In view of those considerations, the answer to the question whether an applicant can claim to be the victim of a measure blocking access to a website will therefore depend on an assessment of the circumstances of each case, in particular the way in which the person concerned uses the website and the potential impact of the measure on him. It is also relevant that the Internet has now become one of the principal means by which individuals exercise their right to freedom to receive and impart information and ideas, providing as it does essential tools for participation in activities and discussions concerning political issues and issues of general interest […]
50. In the present case, the Court observes that the applicants lodged their applications with it as active YouTube users; among other things, they drew attention to the repercussions of the blocking order on their academic work and to the significant features of the website in question. They stated, in particular, that through their YouTube accounts they used the platform not only to access videos relating to their professional sphere, but also in an active manner, for the purpose of uploading and sharing files of that nature. The second and third applicants also pointed out that they had published videos on their academic activities. In that respect, the case more closely resembles that of Mr Yıldırım, who stated that he published his academic work and his views on various topics on his own website […], than that of Mr Akdeniz […], who was acting as a simple website user.
51. The present case also differs in another respect from that in Akdeniz, where the Court had regard, inter alia, to the fact that the applicant could easily have had access to a whole range of musical works by a variety of means without infringing copyright rules […]. YouTube, however, not only hosts artistic and musical works, but is also a very popular platform for political speeches and political and social activities. The files shared by YouTube contain information that could be of particular interest to anyone […]. Accordingly, the measure in issue blocked access to a website containing specific information of interest to the applicants that is not easily accessible by other means. The website also constitutes an important source of communication for the applicants.
52. Moreover, as to the importance of Internet sites in the exercise of freedom of expression, the Court reiterates that “[i]n the light of its accessibility and its capacity to store and communicate vast amounts of information, the Internet plays an important role in enhancing the public’s access to news and facilitating the dissemination of information in general” […]. In this connection, the Court observes that YouTube is a video-hosting website on which users can upload, view and share videos and is undoubtedly an important means of exercising the freedom to receive and impart information and ideas. In particular, as the applicants rightly noted, political content ignored by the traditional media is often shared via YouTube, thus fostering the emergence of citizen journalism. From that perspective, the Court accepts that YouTube is a unique platform on account of its characteristics, its accessibility and above all its potential impact, and that no alternatives were available to the applicants.
53. Furthermore, the Court observes that, after the applicants lodged their applications, the Constitutional Court examined whether active users of websites such as https://twitter.com and www.youtube.com could be regarded as victims. In particular, in the case concerning the administrative decision to block access to YouTube, it granted victim status to certain active users of the site, among them the second and third applicants. In reaching that conclusion, it mainly had regard to the fact that the individuals concerned, who all had a YouTube account, made active use of the site. In the case of the two applicants in question, it also took into consideration the fact that they taught at universities, carried out research in the field of human rights, used the website to access a range of visual material and shared their research via their YouTube accounts […] The Court endorses the Constitutional Court’s conclusions concerning these applicants’ victim status. In addition, it observes that the situation of the first applicant, also an active YouTube user, is no different from that of the other two applicants.
54. To sum up, the Court observes that the applicants are essentially complaining of the collateral effects of the measure taken against YouTube in the context of Internet legislation. Their contention is that, on account of the YouTube features, the blocking order deprived them of a significant means of exercising their right to freedom to receive and impart information and ideas.
55. Having regard to the foregoing and to the need for flexible application of the criteria for acknowledging victim status, the Court accepts that, in the particular circumstances of the case, the applicants may legitimately claim that the decision to block access to YouTube affected their right to receive and impart information and ideas even though they were not directly targeted by it. It therefore dismisses the Government’s preliminary objection as to victim status.
56. Moreover, the Court reiterates that Article 10 of the Convention guarantees “everyone” the freedom to receive and impart information and ideas and that no distinction is made according to the nature of the aim pursued or the role played by natural or legal persons in the exercise of that freedom. Article 10 applies not only to the content of information but also to the means of dissemination, since any restriction imposed on such means necessarily interferes with the right to receive and impart information. Likewise, the Court reaffirms that Article 10 guarantees not only the right to impart information but also the right of the public to receive it […]
57. In the present case, the evidence before the Court indicates that as a result of a measure ordered by the Ankara Criminal Court of First Instance on 5 May 2008, the applicants had no access to YouTube for a lengthy period. As active YouTube users, they can therefore legitimately claim that the measure in question affected their right to receive and impart information and ideas. The Court considers that, whatever its legal basis, such a measure was bound to have an influence on the accessibility of the Internet and, accordingly, engaged the responsibility of the respondent State under Article 10 […]. The measure in question therefore amounted to “interference by public authority” with the exercise of the rights guaranteed by Article 10.
(2.5) Applying the principle of legality to collateral effects of a (preventive) measure blocking an Internet service/platform: need for robust legislative frameworks and effective judicial review of such measures
ECtHR, Ahmet Yıldırım v Turkey, App no 3111/10, 18 December 2012
46. […] the applicant owns and runs a website which he apparently uses in order to publish his academic work and his views on various topics. He complained of his inability to access his website as a result of a measure ordered in the context of criminal proceedings which were unconnected to his site. This amounted in his view to a prior restraint, imposed before a ruling had been given on the merits.
[…]
51. In the present case the measure blocking access to the website stemmed from a decision of the Denizli Criminal Court of First Instance. It was initially designed as a preventive measure ordered by the court in the context of the criminal proceedings brought against a third-party website under Law no. 5816 prohibiting insults against the memory of Atatürk. However, the administrative body responsible for executing the blocking order, the TİB, requested that an order be given blocking all access to Google Sites. In a decision of 24 June 2009, the Denizli Criminal Court of First Instance granted the request. Ruling on an application by the applicant to have it set aside, the Denizli Criminal Court subsequently upheld the order, taking the view that the only means of blocking access to the website that was the subject of criminal proceedings was to block access to Google Sites. The TİB therefore blocked access to the entire Google Sites domain, thereby incidentally preventing the applicant from accessing his own website. It appears from the case file that, as a result of the measure, the applicant was completely unable for an indeterminate period of time to access his own website. All his attempts to do so were unsuccessful because of the blocking order issued by the court. He can therefore legitimately claim that the measure in question affected his right to receive and impart information and ideas.
52. The crux of the case therefore concerns the collateral effect of a preventive measure adopted in the context of judicial proceedings. Although neither Google Sites as such nor the applicant’s website was the subject of the proceedings in question, the TİB blocked access to them in order to execute the measure ordered by the Denizli Criminal Court of First Instance. The measure was to remain in place until such time as a decision was given on the merits or the illegal content of the site hosted by Google Sites was removed (section 9 of Law no. 5651). It therefore constituted a prior restraint as it was imposed before a ruling had been given on the merits.
53. The Court considers that, whatever its legal basis, such a measure was bound to have an influence on the accessibility of the Internet and, accordingly, engaged the responsibility of the respondent State under Article 10 […]
54. It further observes that the blocking of access complained of resulted from a prohibition initially imposed on a third-party website. It was the blocking of all access to Google Sites which actually affected the applicant, who owned another website hosted on the same domain. It is true that the measure did not, strictly speaking, constitute a wholesale ban but rather a restriction on Internet access which had the effect of also blocking access to the applicant’s website. Nevertheless, the fact that the effects of the restriction in issue were limited does not diminish its significance, especially since the Internet has now become one of the principal means by which individuals exercise their right to freedom of expression and information, providing as it does essential tools for participation in activities and discussions concerning political issues and issues of general interest.
[…]
59. The Court has consistently held that, for domestic law to meet these requirements, it must afford a measure of legal protection against arbitrary interferences by public authorities with the rights guaranteed by the Convention. […] the law must indicate with sufficient clarity the scope of any such discretion and the manner of its exercise […]
60. The question here is whether, at the time the blocking order was issued, a clear and precise rule existed enabling the applicant to regulate his conduct in the matter.
61. The Court observes that, under section 8(1) of Law no. 5651, a judge may order the blocking of access to “Internet publications where there are sufficient grounds to suspect that their content is such as to amount to … offences”. […]
62. Neither Google Sites nor the applicant’s website was the subject of judicial proceedings for the purposes of section 8(1) of Law no. 5651. It is clear from the fact that this provision was referred to in the decision of 24 June 2009 (see paragraph 10 above) that Google Sites was held to be liable for the content of a website which it hosted. However, sections 4, 5 and 6 of Law no. 5651, which deal with the liability of content providers, hosting service providers and access providers, make no provision for a wholesale blocking of access such as that ordered in the present case. Nor has it been maintained that the Law authorised the blocking of an entire Internet domain like Google Sites which allows the exchange of ideas and information. Moreover, there is nothing in the case file to indicate that Google Sites was notified under section 5(2) of Law no. 5651 that it was hosting illegal content, or that it refused to comply with an interim measure concerning a site that was the subject of pending criminal proceedings.
63. The Court also observes that section 8, subsections (3) and (4), of Law no. 5651 conferred extensive powers on an administrative body (the TİB) in the implementation of a blocking order originally issued in relation to a specified site. The facts of the case demonstrate that the TİB could request the extension of the scope of a blocking order even though no proceedings had been brought against the website or domain in question and no real need for wholesale blocking had been established.
64. […] the Court considers that such prior restraints are not necessarily incompatible with the Convention as a matter of principle. However, a legal framework is required, ensuring both tight control over the scope of bans and effective judicial review to prevent any abuse of power […]. In that regard, the judicial review of such a measure, based on a weighing-up of the competing interests at stake and designed to strike a balance between them, is inconceivable without a framework establishing precise and specific rules regarding the application of preventive restrictions on freedom of expression […]. The Court observes that when the Denizli Criminal Court of First Instance decided to block all access to Google Sites under Law no. 5651 it merely referred to a recommendation from the TİB, without ascertaining whether a less far-reaching measure could have been taken to block access specifically to the offending website […]
65. The Court also notes that in his application of 1 July 2009 to have the blocking order set aside one of the applicant’s main arguments was that, to prevent other websites from being affected by the measure in question, a method should have been chosen whereby only the offending website was made inaccessible.
66. However, there is no indication that the judges considering the application sought to weigh up the various interests at stake, in particular by assessing the need to block all access to Google Sites. In the Court’s view, this shortcoming was simply a consequence of the wording of section 8 of Law no. 5651 itself, which did not lay down any obligation for the domestic courts to examine whether the wholesale blocking of Google Sites was necessary, having regard to the criteria established and applied by the Court under Article 10 of the Convention. Such an obligation, however, flows directly from the Convention and from the case-law of the Convention institutions. In reaching their decision, the courts simply found it established that the only means of blocking access to the offending website in accordance with the order made to that effect was to block all access to Google Sites […]. However, in the Court’s view, they should have taken into consideration, among other elements, the fact that such a measure, by rendering large quantities of information inaccessible, substantially restricted the rights of Internet users and had a significant collateral effect.
67. In the light of these considerations and of its examination of the legislation in question as applied in the instant case, the Court concludes that the interference resulting from the application of section 8 of Law no. 5651 did not satisfy the foreseeability requirement under the Convention and did not afford the applicant the degree of protection to which he was entitled by the rule of law in a democratic society. […]
68. The Court further observes that the measure in question produced arbitrary effects and could not be said to have been aimed solely at blocking access to the offending website, since it consisted in the wholesale blocking of all the sites hosted by Google Sites. Furthermore, the judicial review procedures concerning the blocking of Internet sites are insufficient to meet the criteria for avoiding abuse, as domestic law does not provide for any safeguards to ensure that a blocking order in respect of a specific site is not used as a means of blocking access in general.
69. Accordingly, there has been a violation of Article 10 of the Convention.
70. In view of that conclusion, the Court does not consider it necessary in the instant case to examine whether the other requirements of paragraph 2 of Article 10 have been met.
(2.6) Elaborating upon the principle of legality regarding the effects of a wholesale blocking of access to an entire website: domestic law’s foreseeability and safeguards; transparency of blocking measures; accessibility of judicial review
ECtHR, Vladimir Kharitonov v Russia, App no 10795/14, 23 June 2020
34. The applicant has been the owner and administrator of a website featuring content relating to the production and distribution of electronic books, from news to analytical reports to practical guides. The website has existed since 2008 and has been updated several times a week. It has been stored on the servers of a US-based company offering accessible shared web-hosting solutions. The applicant’s website has a unique domain name – “www.digital-books.ru” – but shares a numerical network address (“IP address”) with many other websites hosted on the same server.
35. In December 2012, the applicant discovered that access to his website had been blocked. This was an incidental effect of a State agency’s decision to block access to another website which was hosted on the same server and had the same IP address as the applicant’s website. The Court reiterates that measures blocking access to websites are bound to have an influence on the accessibility of the Internet and, accordingly, engage the responsibility of the respondent State under Article 10 […]
36. The applicant was not aware of the proceedings against the third‑party website, the grounds for the blocking measure or its duration. He did not have knowledge of, or control over, when, if ever, the measure would be lifted and access to his website restored. He was unable to share the latest developments and news about electronic publishing, while visitors to his website were prevented from accessing the entire website content. It follows that the blocking measure in question amounted to “interference by a public authority” with the right to receive and impart information, since Article 10 guarantees not only the right to impart information but also the right of the public to receive it […]. Such interference will constitute a breach of Article 10 unless it is “prescribed by law”, pursues one or more of the legitimate aims referred to in Article 10 § 2 and is “necessary in a democratic society” to achieve those aims.
37. The Court reiterates that the expression “prescribed by law” not only refers to a statutory basis in domestic law, but also requires that the law be both adequately accessible and foreseeable, that is, formulated with sufficient precision to enable the individual to foresee the consequences which a given action may entail. In matters affecting fundamental rights it would be contrary to the rule of law, one of the basic principles of a democratic society enshrined in the Convention, for a legal discretion granted to the executive to be expressed in terms of an unfettered power. Consequently, the law must afford a measure of legal protection against arbitrary interferences by public authorities with the rights safeguarded by the Convention, and indicate with sufficient clarity the scope of any discretion conferred on the competent authorities and the manner of its exercise […].
38. […] The Court notes with concern that section 15.1 allows the authorities to target an entire website without distinguishing between the legal and illegal content it may contain. Reiterating that the wholesale blocking of access to an entire website is an extreme measure which has been compared to banning a newspaper or television station, the Court considers that a legal provision giving an executive agency so broad a discretion carries a risk of content being blocked arbitrarily and excessively […].
[…]
40. As it happened, the blocking of the applicant’s website was an automatic consequence of Roskomnadzor’s decision to add the IP address of the offending website to the register of blocked material. That decision had the immediate effect of blocking access to the entire cluster of websites hosted by DreamHost which shared an IP address with the offending website. […]
41. Section 15.1 of the Information Act conferred extensive powers on Roskomnadzor in the implementation of a blocking order issued in relation to a specific website. Roskomnadzor can place a website on the Integrated Register of blocked content, ask the website owner and its hosting service provider to take down the illegal content, and add the website’s IP address to the Integrated Register if they refuse to do so or fail to respond. However, the law did not require Roskomnadzor to check whether that address was used by more than one website or to establish the need for blocking by IP address. That manner of proceeding could, and did in the circumstances of the present case, have the practical effect of extending the scope of the blocking order far beyond the illegal content which had been originally targeted […]. In fact, as the applicant and third-party interveners pointed out, millions of websites have remained blocked in Russia for the sole reason that they shared an IP address with some other websites featuring illegal content […]
[…]
43. Turning next to the issue of safeguards against abuse which domestic legislation must provide in respect of incidental blocking measures, the Court reiterates that the exercise of powers to interfere with the right to impart information must be clearly circumscribed to minimise the impact of such measures on the accessibility of the Internet. In the instant case, Roskomnadzor gave effect to a decision by which a drug-control agency had determined the content of the offending website to be illegal. Both the original determination and Roskomnadzor’s implementing orders had been made without any advance notification to the parties whose rights and interests were likely to be affected. The blocking measures had not been sanctioned by a court or other independent adjudicatory body providing a forum in which the interested parties could have been heard. Nor did the Russian law call for any impact assessment of the blocking measure prior to its implementation. The Government acknowledged that Roskomnadzor was not legally required to identify the potential collateral effects of blocking an IP address, even though commonly used Internet tools, such as “reverse IP address lookup”, could have promptly supplied a list of websites hosted on the same server.
44. As regards the transparency of blocking measures, the Government submitted that the applicant should have consulted Roskomnadzor’s website. Indeed, Roskomnadzor provides a web service (http://blocklist.rkn.gov.ru/) which enables anyone to find out whether a website has been blocked and indicates the legal basis, the date and number of the blocking decision and the issuing body. It does not, however, give access to the text of the blocking decision, any indication of the reasons for the measure or information about avenues of appeal. Nor does Russian legislation make any provision for third-party notification of blocking decisions in circumstances where they have a collateral effect on the rights of other website owners. The applicant had no access to the blocking decision: it had not been produced in the domestic proceedings and the Russian courts had rejected his disclosure request.
45. Lastly, as regards the proceedings which the applicant instituted to challenge the incidental effects of the blocking order, there is no indication that the judges considering his complaint sought to weigh up the various interests at stake, in particular by assessing the need to block access to all websites sharing the same IP address. The domestic courts did not apply the Plenary Supreme Court’s Ruling no. 21 of 27 June 2013, which required them to have regard to the criteria established in the Convention in its interpretation by the Court […]. In reaching their decision, the courts confined their scrutiny to establishing that Roskomnadzor had acted in accordance with the letter of the law. However, in the Court’s view, a Convention-compliant review should have taken into consideration, among other elements, the fact that such a measure, by rendering large quantities of information inaccessible, substantially restricted the rights of Internet users and had a significant collateral effect […]
46. The Court reiterates that it is incompatible with the rule of law if the legal framework fails to establish safeguards capable of protecting individuals from excessive and arbitrary effects of blocking measures, such as those in issue in the instant case. When exceptional circumstances justify the blocking of illegal content, a State agency making the blocking order must ensure that the measure strictly targets the illegal content and has no arbitrary or excessive effects, irrespective of the manner of its implementation. Any indiscriminate blocking measure which interferes with lawful content or websites as a collateral effect of a measure aimed at illegal content or websites amounts to arbitrary interference with the rights of owners of such websites. In the light of its examination of the Russian legislation as applied in the instant case, the Court concludes that the interference resulted from the application of the procedure under section 15.1 of the Information Act which did not satisfy the foreseeability requirement under the Convention and did not afford the applicant the degree of protection from abuse to which he was entitled by the rule of law in a democratic society […] Accordingly, the interference was not “prescribed by law” and it is not necessary to examine whether the other requirements of paragraph 2 of Article 10 have been met.
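The collateral-blocking mechanism at issue in Kharitonov (paragraphs 40–43 above) lends itself to a simple illustration. The Python sketch below uses an entirely hypothetical hosting table: the domain names are placeholders and the IP address is from the reserved documentation range. It shows how an order blocking one site’s IP address silently blocks every co-hosted neighbour, and how a “reverse IP address lookup” of the kind mentioned by the Court could have revealed that collateral impact in advance.

```python
# Illustrative sketch of why blocking by IP address over-blocks on shared
# hosting. The hosting table is hypothetical; 192.0.2.x is a reserved
# documentation range, so no real site is referenced.

HOSTING = {                              # hypothetical shared-hosting records
    "offending-site.example": "192.0.2.10",
    "digital-books.example":  "192.0.2.10",   # same server, same IP
    "unrelated-blog.example": "192.0.2.10",
}

blocked_ips: set[str] = set()

def block_by_ip(domain: str) -> None:
    """Simulate an order blocking the IP address of one offending site."""
    blocked_ips.add(HOSTING[domain])

def is_reachable(domain: str) -> bool:
    return HOSTING[domain] not in blocked_ips

def reverse_ip_lookup(ip: str) -> list[str]:
    """List all domains sharing an IP -- the collateral-impact check the
    Court notes the regulator was never required to perform."""
    return [d for d, addr in HOSTING.items() if addr == ip]

block_by_ip("offending-site.example")        # the order targets one site...
print(is_reachable("digital-books.example")) # False: co-hosted site blocked too
print(reverse_ip_lookup("192.0.2.10"))       # all three co-hosted domains
```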
(2.7) Information on filter-bypassing tools arbitrarily banned by court; information technologies content-neutral; assessing role and value of filter-bypassing technologies
ECtHR, Engels v Russia, App no 61919/16, 23 June 2020
The case concerns a decision by the Russian courts that information about filter-bypassing technologies available on the applicant’s website constituted prohibited content.
4. The applicant is a Russian-born German politician and activist working to support freedom of expression on the Internet. In 2012, he founded, together with local Russian activists, the RosKomSvoboda website (rublacklist.net) dedicated to news, information, analysis and research relating to freedom of expression online, online privacy issues, copyright and digital communications. Its name is an abbreviation for “Russian Committee for Freedom”, an allusion to the name of the Russian telecoms regulator Roskomnadzor (“Russian Committee for Oversight”), which maintains a list of proscribed online content.
5. One page of the RosKomSvoboda website (rublacklist.net/bypass) provided a list and a short description of tools and software for bypassing restrictions on private communications and content filters on the Internet, such as virtual private networks (VPN), the Tor browser, the “invisible Internet” (I2P) technology, the “turbo” mode in web browsers, and the use of online translation engines for accessing content.
6. In 2015, a district prosecutor in the Krasnodar Region lodged a public-interest claim with the Anapa Town Court, seeking a decision that information on the rublacklist.net/bypass page should be prohibited from dissemination in Russia. The prosecutor submitted that the anonymising tools available from that page enabled users to access extremist material on another, unrelated website. On 13 April 2015 the Anapa Town Court, without informing the applicant about the proceedings, granted the prosecutor’s application. It noted that the information on the rublacklist.net/bypass page had been made freely available without a password or registration to any user who wished to read or copy it. The Town Court declared illegal the content of the rublacklist.net/bypass page and ordered Roskomnadzor to enforce the decision immediately by blocking access to the applicant’s website.
7. Roskomnadzor asked the applicant to take down the webpage rublacklist.net/bypass, otherwise the website would be blocked. The applicant complied with the request and deleted the offending information.
[…]
27. The statutory basis for the interference was section 15.1 of the Information Act. Subsection (5) of that provision lists three types of decisions by which the Russian authorities may categorise online content as illegal. In the instant case, the decision was made by a court of general jurisdiction in accordance with the second part of subsection (5). Unlike the first part of that subsection, which defined seven particular categories of online content susceptible to blocking, or the third part, which referred expressly to libellous content, the second part allowed websites to be blocked on the basis of a “judicial decision which identified particular Internet content as constituting information the dissemination of which should be prohibited in Russia”. The Court finds that the breadth of this provision is exceptional and unparalleled. It does not give the courts or website owners any indication as to the nature or categories of online content that is susceptible to be banned. Nor does it refer to any secondary legislation, by-laws or regulations which could have circumscribed its scope of application. The Court finds that such a vague and overly broad legal provision fails to satisfy the foreseeability requirement. It does not afford website owners, such as the applicant, the opportunity to regulate their conduct, as they cannot know in advance what content is susceptible to be banned and can lead to a blocking measure against their entire website. […]
28. The present case illustrates the manner in which this legal provision is capable of producing arbitrary effects in practice. Following an application lodged by a town prosecutor, a Russian court held that the information about filter-bypassing tools and software available on the applicant’s website constituted “information the dissemination of which should be prohibited in Russia”. It did not establish that filter-bypassing technologies were illegal in Russia or that providing information about them was contrary to any Russian law. Nor did it find any extremist speech, calls for violence or unlawful activities, child pornography, or any other prohibited content on the applicant’s webpage. The only basis for its decision was the fact that filter-bypassing technologies might enable users to access extremist content on some other website which was not connected or affiliated with the applicant and over whose content he had no control.
29. The Court notes that the utility of filter-bypassing technologies cannot be reduced to a tool for malevolently seeking to obtain extremist content. Even though the use of any information technology can be subverted to carry out activities which are incompatible with the principles of a democratic society, filter-bypassing technologies primarily serve a multitude of legitimate purposes, such as enabling secure links to remote servers, channelling data through faster servers to reduce page-loading time on slow connections, and providing a quick and free online translation. None of these legitimate uses were considered by the Russian court before issuing the blocking order.
[…]
34. The Court reiterates that it is incompatible with the rule of law if the legal framework fails to establish safeguards capable of protecting individuals from excessive and arbitrary effects of sweeping blocking measures, such as those in issue in the instant case. In the light of its examination of the Russian legislation as applied in the instant case, the Court concludes that the interference resulted from the application of the procedure under subsection (5)(2) of section 15.1 of the Information Act which did not satisfy the foreseeability requirement under the Convention and did not afford the applicant the degree of protection from abuse to which he was entitled by the rule of law in a democratic society. Accordingly, the interference was not “prescribed by law” and it is not necessary to examine whether the other requirements of paragraph 2 of Article 10 have been met. […]
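Technical note: the filter-bypassing tools at issue in the judgment (paragraphs 5 and 29 above) rest on an ordinary networking mechanism: traffic is relayed through an intermediary, so the blocked destination is contacted by the intermediary rather than by the user’s own connection, and destination-based blocking at the user’s provider no longer bites. The following minimal sketch, which is not drawn from the case file and assumes a locally running Tor client plus the Python requests library with SOCKS support, illustrates the mechanism.

```python
# Illustrative sketch only: routing an HTTP request through a local Tor client.
# A running Tor daemon exposes a SOCKS5 proxy, by default on 127.0.0.1:9050;
# traffic sent through it leaves the Tor network from an exit node, not from
# the user's own connection.
# Requires: pip install requests[socks], and a running Tor client.
import requests

TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS is also resolved via the proxy
    "https": "socks5h://127.0.0.1:9050",
}

def fetch_via_tor(url: str) -> str:
    """Fetch a URL through the local Tor SOCKS proxy."""
    response = requests.get(url, proxies=TOR_PROXY, timeout=30)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # check.torproject.org reports whether the request arrived via Tor
    print(fetch_via_tor("https://check.torproject.org/")[:200])
```

The same relay principle underlies the VPNs and browser “turbo” modes mentioned in the judgment; what varies is who operates the intermediary and whether the traffic is encrypted along the way.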
(2.8) Are generic and total Internet or service shutdowns lawful under human rights law?
UN Human Rights Council, Res 47/16, The promotion, protection and enjoyment of human rights on the Internet, UN Doc A/HRC/RES/47/16, 7 July 2021
11. Condemns unequivocally measures in violation of international human rights law that prevent or disrupt an individual’s ability to seek, receive or impart information online, including Internet shutdowns and online censorship, calls upon all States to refrain from and to cease such measures, and also calls upon States to ensure that all domestic laws, policies and practices are consistent with their international human rights obligations with regard to freedom of opinion and expression, and of association and peaceful assembly, online.
Case study
#KeepItOn: authorities in Mozambique must stop normalizing internet shutdowns during protests, 8 November 2024
We, the undersigned organizations, and members of the #KeepItOn coalition — a global network of over 334 human rights organizations from 105 countries working to end internet shutdowns — urgently demand that the government of Mozambique put an immediate end to the increasing use of shutdowns amid ongoing protests and a police crackdown on protesters in Mozambique. Reports from local rights groups indicate that police have resorted to excessive use of violence resulting in more than 20 deaths and multiple injuries.
Since October 25, 2024, in response to growing protests against disputed election results announced by the Election Commission, authorities in Mozambique have imposed at least five instances of curfew-style mobile internet shutdowns, with the most recent happening on November 6, alongside social media shutdowns lasting several hours. The recent shutdowns in the country follow a worrying trend that authorities in Mozambique began in October 2023, when they imposed a total internet blackout for at least three hours for the first time during local elections. Mozambican authorities’ regular practice of shutting down the internet around elections and in times of political unrest must not be allowed to continue.
Internet shutdowns are a violation of human rights.
[…]
Election-related shutdowns prevent voters, journalists, opposition, and election observers from accessing or sharing essential information, decreasing the fairness, credibility, and transparency of elections. They empower authoritarian regimes to control the narrative throughout the electoral period, undermining the electorate’s ability to make informed decisions, access polling resources, and fully shape their nation’s future. Now more than ever, when there is unrest due to contested election results, the government must ensure people have unfettered access to open and secure internet and digital platforms to promote transparency and access pertinent information in a timely manner.
In response to growing instances of shutdowns in the region in connection with elections, the African Commission on Human and Peoples’ Rights (ACHPR) in March 2024 adopted resolution 580 of 2024, which recognizes the importance of internet connectivity to the realization of free, fair, and credible elections, as a tenet of democracy, […] [including] the increased use of the internet and social media platforms for the dissemination of information to voters, election observers, election management bodies, and other stakeholders during elections.
Moreover, imposing shutdowns during protests violates people’s fundamental rights to assembly. Clement N. Voule, the UN Special Rapporteur on the rights to freedom of peaceful assembly and association, highlighted in his report the crucial role of digital technologies in expanding opportunities for the enjoyment and exercise of peaceful assembly and association rights and also raised concerns about the use of these technologies by state and non-state actors “to silence, surveil and harass dissidents, political opposition, human rights defenders, activists, and protesters.”
[…]
Restricting access to the internet, mobile devices, and communication platforms during periods of unrest such as conflicts and protests, and moments of national importance such as elections, further puts people at risk and undermines the enjoyment of all other human rights, from education and work to healthcare and public services, as well as free expression and peaceful assembly […]
Questions to reflect
- How do Internet or service shutdowns impact the effective exercise of human rights? Are they lawful under human rights law?
- Do you agree with how the ECtHR used the concept of an active YouTube user in the Cengiz and Others v Turkey case to justify its flexible application of the criteria for acknowledging victim status while not permitting an actio popularis? Who is an active social media user, in your view? Does merely receiving information make someone an active social media user?
- What is the meaning of collateral effects and how does the ECtHR use this concept in its reasoning?
- What are measures that States are required to take in order for the collateral effects of a blocking measure of an Internet service/platform to be lawful under Article 10 ECHR? How has the Court elaborated its approach on the requirements throughout its case law?
- How are filter-bypassing technologies assessed from a technological and an international human rights law point of view?
(3) Private Sector’s Human Rights Duties
(3.1) The private governance of content regulation
Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Freedom of expression, states and the private sector in the digital age, UN Doc A/HRC/32/38, 11 May 2016
Internal policies and practices
51. Intermediaries’ policies and rules may have significant effects on the freedom of expression. While terms of service are the primary source of regulation, design and engineering choices may also affect the delivery of content.
Terms of service
52. Terms of service, which individuals typically must accept as a condition to access a platform, often contain restrictions on content that may be shared. These restrictions are formulated under local laws and regulations and reflect similar prohibitions, including those against harassment, hate speech, promotion of criminal activity, gratuitous violence and direct threats. Terms of service are frequently formulated in such a general way that it may be difficult to predict with reasonable certainty what kinds of content may be restricted. The inconsistent enforcement of terms of service has also attracted public scrutiny. Some have argued that the world’s most popular platforms do not adequately address the needs and interests of vulnerable groups; for example, there have been accusations of reluctance “to engage directly with technology-related violence against women, until it becomes a public relations issue”. At the same time, platforms have been criticized for overzealous censorship of a wide range of legitimate but (perhaps to some audiences) “uncomfortable” expressions. Lack of an appeals process or poor communication by the company about why content was removed or an account deactivated adds to these concerns. Terms of service that require registration linked to an individual’s real name or evidence to demonstrate valid use of a pseudonym can also disproportionately inhibit the ability of vulnerable groups or civil society actors in closed societies to use online platforms for expression, association or advocacy
[…]
54. The work of private censorship is complicated by the sheer volume of complaints and flagged content that intermediaries identify on a daily basis. Large platforms may also outsource content moderation, creating even more distance between content moderators and internal policymaking decisions, and exacerbating inconsistencies in enforcement. […]
Design and engineering choices
55. The manner in which intermediaries curate, categorize and rank content affects what information users access and view on their platforms. For example, platforms deploy algorithmic predictions of user preferences and consequently guide the advertisements individuals might see, how their social media feeds are arranged and the order in which search results appear. Other self-regulatory measures, such as “counter speech” initiatives to support anti-terror or anti-harassment messages, also affect the ways in which users might consume and process Internet content concerning sensitive topics. It remains an open question how freedom of expression concerns raised by design and engineering choices should be reconciled with the freedom of private entities to design and customize their platforms as they choose
Remedies
69. When enforcing terms of service, companies may not always have sufficient processes for appealing content removal or account deactivation decisions where a user believes the action was taken in error or was the result of abusive flagging campaigns. Further research that examines best practices in how companies communicate terms of service enforcement decisions and how they implement appeals mechanisms may be useful.
[…]
71. The appropriate role of the State in supplementing or regulating corporate remedial mechanisms also requires closer analysis. Civil proceedings and other judicial redress are often available to consumers adversely affected by corporate action, but these are often cumbersome and expensive. Meaningful alternatives may include complaint and grievance mechanisms established and run by consumer protection bodies and industry regulators. Several States also mandate internal remedial or grievance mechanisms: India, for example, requires corporations that possess, deal with or handle sensitive personal data to designate grievance officers to address “any discrepancies and grievances […] with respect to processing of information”.
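Technical note: to make the “design and engineering choices” described in paragraph 55 above concrete, the following toy sketch shows the basic shape of preference-driven curation. It is entirely hypothetical and is not any platform’s actual system: each item is scored against a per-user preference vector inferred from past engagement, and the feed is simply the items sorted by that score.

```python
# Minimal, hypothetical sketch of preference-based feed curation: predicted
# engagement is the dot product of an item's topic features with the user's
# learned topic affinities, and the feed is ranked by that prediction.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    features: dict[str, float]  # topic -> strength, e.g. {"politics": 0.9}

def score(item: Item, user_prefs: dict[str, float]) -> float:
    # Sum of feature strength * learned user affinity for that topic.
    return sum(w * user_prefs.get(topic, 0.0) for topic, w in item.features.items())

def rank_feed(items: list[Item], user_prefs: dict[str, float]) -> list[Item]:
    return sorted(items, key=lambda it: score(it, user_prefs), reverse=True)

user_prefs = {"politics": 0.8, "sports": 0.1, "satire": 0.6}
feed = rank_feed(
    [Item("Match report", {"sports": 1.0}),
     Item("Election explainer", {"politics": 0.9}),
     Item("Political cartoon", {"politics": 0.5, "satire": 0.8})],
    user_prefs,
)
print([it.title for it in feed])  # the cartoon outranks both other items
```

Everything that matters for freedom of expression happens in the choice of features and weights: change user_prefs and a different picture of the world is assembled for that user, which is why the report treats such design choices as a freedom of expression issue.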
(3.2) Case study on how terms of service and remedies by business corporations impact the right to freedom of expression: META Oversight Board: “Two buttons” meme case, case no: 2021-005-FB-UA, 20 May 2021
2. Case description
On December 24, 2020, a Facebook user in the United States posted a comment with an adaptation of the “daily struggle” or “two buttons” meme. A meme is a piece of media, which is often humorous, that spreads quickly across the internet. This comment featured the same split-screen cartoon from the original meme, but with the Turkish flag substituted for the cartoon character’s face. The cartoon character has its right hand on its head and appears to be sweating. Above the character, in the other half of the split-screen, there are two red buttons with corresponding statements in English: “The Armenian Genocide is a lie” and “The Armenians were terrorists that deserved it.” The meme was preceded by a “thinking face” emoji.
The comment was shared on a public Facebook page that describes itself as a forum for discussing religious matters from a secular perspective. It responded to a post containing an image of a person wearing a niqab with overlay text in English: “Not all prisoners are behind bars.” At the time the comment was removed, the original post it responded to had 260 views, 423 reactions and 149 comments. A Facebook user in Sri Lanka reported the comment for violating the Hate Speech Community Standard.
Facebook removed the meme on December 24, 2020. Within a short period of time, two content moderators reviewed the comment against the company’s policies and reached different conclusions. While the first concluded that the meme violated Facebook’s Hate Speech policy, the second determined that the meme violated the Cruel and Insensitive policy. The content was removed and logged in Facebook’s systems based on the second review. On this basis, Facebook notified the user that their comment “goes against our Community Standard on cruel and insensitive content.”
After the user’s appeal, Facebook upheld its decision but found that the content should have been removed under its Hate Speech policy. For Facebook, the statement “The Armenians were terrorists that deserved it” specifically violated the prohibition on content claiming that all members of a protected characteristic are criminals, including terrorists. No other parts of the content, such as the claim that the Armenian genocide was a lie, were deemed to be violating. Facebook did not inform the user that it upheld the decision to remove their content under a different Community Standard.
The user submitted their appeal to the Oversight Board on December 24, 2020.
Lastly, in this decision, the Board referred to the atrocities committed against the Armenian people from 1915 onwards as genocide, as this term is commonly used to describe the massacres and mass deportations suffered by Armenians and it is also referred to in the content under review. The Board does not have the authority to legally qualify such atrocities and this qualification is not the subject of this decision.
- Authority and scope
The Board has authority to review Facebook’s decision following an appeal from the user whose post was removed (Charter Article 2, Section 1; Bylaws Article 2, Section 2.1). The Board may uphold or reverse that decision (Charter Article 3, Section 5). The Board’s decisions are binding and may include policy advisory statements with recommendations. These recommendations are non-binding, but Facebook must respond to them (Charter Article 3, Section 4). The Board is an independent grievance mechanism to address disputes in a transparent and principled manner.
- Relevant standards
The Oversight Board considered the following standards in its decision:
- Facebook’s Community Standards
Facebook’s Community Standards define hate speech as “a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.” Under “Tier 1,” prohibited content (“do not post”) includes content targeting a person or group of people on the basis of a protected characteristic with:
- “dehumanizing speech or imagery in the form of comparisons, generalizations, or unqualified behavioral statements (in written or visual form) to or about […] criminals (including, but not limited to, “thieves”, “bank robbers”, or saying “All [protected characteristic or quasi-protected characteristic] are ‘criminals’”).”
- speech “[m]ocking the concept, events or victims of hate crimes even if no real person is depicted in an image.”
- speech “[d]enying or distorting information about the Holocaust.”
However, Facebook allows “content that includes someone else’s hate speech to condemn it or raise awareness.” According to the Hate Speech Community Standard’s policy rationale, “speech that might otherwise violate our standards can be used self-referentially or in an empowering way. Our policies are designed to allow room for these types of speech, but we require people to clearly indicate their intent. If intention is unclear, we may remove content.”
Additionally, the Board noted Facebook’s Cruel and Insensitive Community Standard which forbids content that targets “victims of serious physical or emotional harm,” including “attempts to mock victims […] many of which take the form of memes and GIFs.” This policy prohibits content (“do not post”) that “contains sadistic remarks and any visual or written depiction of real people experiencing premature death.”
[…]
- Human rights standards
The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Facebook announced its Corporate Human Rights Policy, where it committed to respecting rights in accordance with the UNGPs. The Board’s analysis in this case was informed by the following human rights standards:
- Freedom of expression: Article 19, International Covenant on Civil and Political Rights (ICCPR), General Comment No. 34, Human Rights Committee, 2011; UN Special Rapporteur on freedom of opinion and expression reports: A/HRC/35/22/Add.3 (2017), A/HRC/41/35/Add.2 (2019), A/HRC/38/35 (2018), A/74/486 (2019), and A/HRC/44/49/Add.2 (2020); the Rabat Plan of Action, OHCHR, (2013).
- The right to non-discrimination: Article 2, para. 1, ICCPR; Articles 1 and 2, Convention on the Elimination of All Forms of Racial Discrimination (CERD).
- The right to be informed in the context of access to justice: Article 14, para. 3(a), ICCPR; General Comment No. 32, Human Rights Committee, (2007).
[…]
- User statement
The user stated in their appeal to the Board that “historical events should not be censored.” They noted that their comment was not meant to offend but to point out “the irony of a particular historical event.” The user noted that “perhaps Facebook misinterpreted this as an attack.” The user further stated that even if the content invokes “religion and war,” it is not a “hot button issue.” The user found Facebook and its policies overly restrictive and argued that “[h]umor like many things is subjective and something offensive to one person may be funny to another.”
- Explanation of Facebook’s decision
Facebook explained that it removed the comment as a Tier 1 attack under the Hate Speech Community Standard, specifically for violating its policy prohibiting content alleging that all members of a protected characteristic are criminals, including terrorists. According to Facebook, while the first statement in the meme “The Armenian Genocide is a lie” is a negative generalization, it did not directly attack Armenians and thus did not violate the company’s Community Standards. Facebook found that the second statement “The Armenians were terrorists that deserved it” directly attacked Armenians by alleging that they are criminals based on their ethnicity and nationality. This violated the company’s Hate Speech policy.
In its decision rationale, Facebook assessed whether the exception for content that shares hate speech to condemn it or raise awareness of it should apply in this case. Facebook argued that the meme did not fall into this exception, as the user was not clear they intended to condemn hate speech. Specifically, Facebook explained to the Board that the sweating cartoon character in the meme could be reasonably viewed as either condemning or embracing the statements. Facebook also explained that its Hate Speech policy previously included an exception for humor.
[…]
- Oversight Board analysis
The Board looked at the question of whether this content should be restored through three lenses: Facebook’s Community Standards; the company’s values; and its human rights responsibilities.
8.1 Compliance with Community Standards
The Board analyzed each of the two statements against Facebook’s Community Standards, before examining the effect of juxtaposing these statements in this version of the “daily struggle” or “two buttons” meme.
8.1.1. Analysis of the statement “The Armenian Genocide is a lie”
The Board noted that Facebook did not find this statement to violate its Hate Speech Community Standard. Facebook enforces its Hate Speech Community Standard by identifying (i) a “direct attack,” and (ii) a “protected characteristic” the direct attack was based upon. The policy rationale lists “dehumanizing speech” as an example of an attack. Ethnicity and national origin are included among the list of protected characteristics.
Under the “do not post” section of its Hate Speech policy, Facebook prohibits speech “[m]ocking the concept, events or victims of hate crimes even if no real person is depicted in an image.” A majority of the Board noted, however, that the user’s intent was not to mock the victims of the events referred to in the statement, but to use the meme, in the form of satire, to criticize the statement itself. For the minority, the user’s intent was not sufficiently clear. The user could be sharing the content to embrace the statement rather than to refute it.
In this case, Facebook notified the user that their content violated the Cruel and Insensitive Community Standard. Under this policy, Facebook prohibits “attempts to mock victims [of serious physical or emotional harm],” including content that “contains sadistic remarks and any visual or written depiction of real people experiencing premature death.” The Board noted but did not consider Facebook’s explanation that this policy is not applicable to this case because the meme does not depict or name the victims of the events referred to in the statement.
Under the “do not post” section of its Hate Speech policy, Facebook also prohibits speech “[d]enying or distorting information about the Holocaust.” […]
8.1.2. Analysis of the statement “The Armenians were terrorists that deserved it”
The Board noted that Facebook found this statement to violate its Hate Speech Community Standard. The “do not post” section of this Hate Speech Community Standard prohibits “[d]ehumanizing speech or imagery in the form of comparisons, generalizations, or unqualified behavioral statements (in written or visual form).” The policy includes speech that portrays the targeted group as “criminals.” The Board believed the term “terrorists” fell into this category.
8.1.3 Analysis of the combined statements in the meme
The Board is of the view that one should evaluate the content as a whole, including the effect of juxtaposing these statements in a well-known meme. A common purpose of the “daily struggle” or “two buttons” meme is to contrast two different options to highlight potential contradictions or other connotations, rather than to indicate support for the options presented.
For the majority, the exception to the Hate Speech policy is crucial. This exception allows people to “share content that includes someone else’s hate speech to condemn it or raise awareness.” It also states: “our policies are designed to allow room for these types of speech, but we require people to clearly indicate their intent. If intention is unclear, we may remove content.” The majority noted that the content could also fall under the company’s satire exception, which is not publicly available.
Assessing the content as a whole, the majority found that the user’s intent was clear. They shared the meme as satire to raise awareness about and condemn the Turkish government’s efforts to deny the Armenian genocide while, at the same time, justifying the same historic atrocities. The user’s intent was not to mock the victims of these events, nor to claim those victims were criminals or that the atrocity was justified. The majority took into account the Turkish government’s position on genocide suffered by Armenians from 1915 onwards […] as well as the history between Turkey and Armenia. In this context, they found that the cartoon character’s sweating face being replaced with a Turkish flag, together with the content’s direct link to the Armenian genocide, meant that the user shared the meme to criticize the Turkish government’s position on this issue. The use of the “thinking face” emoji, which is commonly used sarcastically, alongside the meme, supports this conclusion. The majority noted public comment “PC-10007” (made available under section 7 above), which suggested that “this meme, as described, does not mock victims of genocide, but mocks the denialism common in contemporary Turkey, that simultaneously says the genocide did not happen and that victims deserved it.” It would thus be wrong to remove this comment in the name of protecting Armenians, when the post is a criticism of the Turkish government, in support of Armenians.
As such, the majority found that, taken as a whole, the content fell within the policy exception in Facebook’s Hate Speech Community Standard. For the minority, in the absence of specific context, the user’s intent was not sufficiently clear to conclude that the content was shared as satire criticizing the Turkish government. Additionally, the minority found that the user was not able to properly articulate what the alleged humor intended to express. Given the content includes a harmful generalization against Armenians, the minority found that it violated the Hate Speech Community Standard.
[…]
8.3 Compliance with Facebook’s human rights responsibilities
Freedom of expression (Article 19 ICCPR)
Article 19, para. 2 of the ICCPR provides broad protection for expression of “all kinds,” including written and non-verbal “political discourse,” as well as “cultural and artistic expression.” The UN Human Rights Committee has made clear the protection of Article 19 extends to expression that may be considered “deeply offensive” […]
In this case, the Board found that the cartoon, in the form of a satirical meme, took a position on a political issue: the Turkish government’s stance on the Armenian genocide. The Board noted that “cartoons that clarify political positions” and “memes that mock public figures” may be considered forms of artistic expression protected under international human rights law […] The Board further emphasized that the value placed by the ICCPR upon uninhibited expression concerning public figures in the political domain and public institutions “is particularly high” […]
The Board also noted that laws establishing general prohibitions of expressions with incorrect opinions or interpretations of historical facts, often justified through references to hate speech, are incompatible with Article 19 of the ICCPR, unless they amount to incitement of hostility, discrimination or violence under Article 20 of the ICCPR […]
While the right to freedom of expression is fundamental, it is not absolute. It may be restricted, but restrictions should meet the requirements of legality, legitimate aim, and necessity and proportionality […]. Facebook should seek to align its content moderation policies on hate speech with these principles […]
- Legality
Any rules restricting expression must be clear, precise, and publicly accessible […]. Individuals must have enough information to determine if and how their speech may be limited, so that they can adjust their behavior accordingly. Facebook’s Community Standards “permit content that includes someone else’s hate speech to condemn it or raise awareness,” but ask users to “clearly indicate their intent.” In addition, the Board also noted that Facebook removed an exception for humor from its Hate Speech policy following a Civil Rights Audit concluded in July 2020. While this exception was removed, the company kept a narrower exception for satire that is currently not communicated to users in its Hate Speech Community Standard.
The Board also noted that Facebook wrongfully reported to the user that they violated the Cruel and Insensitive Community Standard, when Facebook based its enforcement on the Hate Speech policy. The Board found that it is not clear enough to users that the Cruel and Insensitive Community Standard only applies to content that depicts or names victims of harm.
Additionally, the Board found that properly notifying users of the reasons for enforcement action against them would help users follow Facebook’s rules. This relates to the legality issue, as the lack of relevant information for users subject to content removal “creates an environment of secretive norms, inconsistent with the standards of clarity, specificity and predictability” which may interfere with “the individual’s ability to challenge content actions or follow up on content-related complaints.” […] Facebook’s approach to user notice in this case therefore failed the legality test.
- Legitimate aim
Any restriction on freedom of expression should also pursue a “legitimate aim.” The Board agreed the restriction pursued the legitimate aim of protecting the rights of others (General Comment No. 34, para. 28). These include the rights to equality and non-discrimination, including based on ethnicity and national origin […]
The Board also reaffirmed its finding in case decision 2021-002-FB-UA that “it is not a legitimate aim to restrict expression for the sole purpose of protecting individuals from offense […], as the value international human rights law placed on uninhibited expression is high […]”
- Necessity and proportionality
Any restrictions on freedom of expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected” […]
The Board assessed whether the content removal was necessary to protect the rights of Armenians to equality and non-discrimination. The Board noted that freedom of expression currently faces substantial restrictions in Turkey, with disproportionate effects on ethnic minorities living in the country, including Armenians. In a report on his mission to Turkey in 2016, the UN Special Rapporteur on freedom of expression found censorship to be operating in “all the places that are fundamental to democratic life: the media, educational institutions, the judiciary and the bar, government bureaucracy, political space and the vast online expanses of the digital age” […] In the follow-up report published in 2019, the UN Special Rapporteur mentioned that the situation had not improved […]
Turkish authorities have specifically targeted expression denouncing the atrocities committed by the Turkish Ottoman Empire against Armenians from 1915 onwards. In a joint allegation letter, a number of UN special procedures mentioned that Article 301 of the Turkish Criminal Code appears to constitute “a deliberate effort to obstruct access to the truth about what appears to be policy of violence directed against the Turkish Armenian community” and “the right of victims to justice and reparation.” The Board also noted the assassination, in 2007, of Hrant Dink, a journalist of Armenian origin who published a number of articles on the identity of Turkish citizens of Armenian origin. In one of these articles, Dink discussed the lack of recognition of the genocide and how this affects the identity of Armenians. Dink had previously been found guilty by Turkish courts of demeaning the “Turkish identity” through his writing. In 2010, the European Court of Human Rights concluded that the verdict against Dink and the failure of the Turkish authorities to take appropriate measures to protect his life amounted to a violation of his freedom of expression […]
A majority of the Board concluded that Facebook’s interference with the user’s freedom of expression was mistaken. The removal of the comment would not protect the rights of Armenians to equality and non-discrimination. The user was not endorsing the statements contrasted in the meme, but rather attributing them to the Turkish government. They did this to condemn and raise awareness of the government’s contradictory and self-serving position. The majority found that the effects of satire, such as this meme, would be lessened if people had to explicitly declare their intent. The fact that the “two buttons” or “daily struggle” meme is usually intended to be humorous, even though the subject matter here was serious, also contributed to the majority’s decision.
The majority also noted that the content was shared in English on a Facebook page with followers based in several countries. While the meme could be misinterpreted by some Facebook users, the majority found that it does not increase the risk of Armenians being subjected to discrimination and violence, especially as the content is aimed at an international audience. They found that bringing this important issue to an international audience is in the public interest.
Additionally, the Board found that removing information without cause cannot be proportionate. Removing content that serves the public on a matter of public interest requires particularly weighty reasons to be proportionate. In this regard, the Board was concerned with Facebook content moderators’ capacity to review this meme and similar pieces of content containing satire. Contractors should follow adequate procedures and be provided with time, resources and support to assess satirical content and relevant context properly.
While supporting the majority’s views on protecting satire on the platform, the minority did not believe that the content was satire. The minority found that the user could be embracing the statements contained in the meme, and thus engaging in discrimination against Armenians. Therefore, the minority held that the requirements of necessity and proportionality had been met in this case. In case decision 2021-002-FB-UA, the Board noted Facebook’s position that the content depicting blackface would be removed unless the user clearly indicated their intent to condemn the practice or raise awareness of it. The minority found that, similarly, where the satirical nature of the content is not obvious, as in this case, the user’s intent should be made explicit. The minority concluded that, while satire is about ambiguity, it should not be ambiguous regarding the target of the attack, i.e., the Turkish government or the Armenian people.
- Oversight Board decision
The Oversight Board overturns Facebook’s decision to remove the content and requires the content to be restored.
- Policy advisory statement
The following recommendations are numbered, and the Board requests that Facebook provides an individual response to each as drafted:
[…]
- Include the satire exception, which is currently not communicated to users, in the public language of the Hate Speech Community Standard.
Having adequate tools in place to deal with issues of satire
To improve the accuracy of the enforcement of its content policies for the benefit of users, Facebook should:
- Make sure that it has adequate procedures in place to assess satirical content and relevant context properly. This includes providing content moderators with: (i) access to Facebook’s local operation teams to gather relevant cultural and background information; and (ii) sufficient time to consult with Facebook’s local operation teams and to make the assessment. Facebook should ensure that its policies for content moderators incentivize further investigation or escalation where a content moderator is not sure if a meme is satirical or not.
[…]
(3.3) Critique of META Oversight Board’s work
A Kulick, ‘Meta’s Oversight Board and Beyond – Corporations as Interpreters and Adjudicators of International Human Rights’ (2023) 22 The Law & Practice of International Courts and Tribunals 161, 179-180
Within the confines of its mandate, the [Oversight Board] has done rather well to establish a certain amount of independent review of content decisions on Facebook and Instagram […] However, its confines – set by Meta – are considerable. The [Oversight Board]’s decisions, despite its formal “independence”, nonetheless constitute corporate interpretation of human rights attributable to Meta. Most problematically, the Board’s rather idiosyncratic method of interpretation, disguised as a mere application of authoritative interpretations of international human rights treaty provisions, bears the high potential of changing the content of these norms in the interest and image of the social media corporation […]
[…]
One may very well counter that the [Oversight Board] is not a court of law, let alone an international one, and that it is not exclusively composed of legal experts. However, the initial idea was to create a “Facebook Supreme Court”, and the body that came out of this initiative decides cases based on procedure and on reasoning resembling those of a court and applies as relevant standards, for the most part, international human rights norms.
Exercise:
Check the tracker of the Oversight Board’s recommendations to META. Does META implement the Oversight Board’s recommendations?
Questions to reflect
- Can you explain and assess in what ways content regulation is subject to private governance?
- What are States’ obligations in supplementing or regulating the private sector?
- Do you agree with how the META Oversight Board in the “Two buttons” meme case weighed the interest of satire under the right to freedom of expression vis-à-vis the prohibition of hate speech?
- How do you assess the Oversight Board’s mandate and work in terms of protecting human rights?
Fun play: Two Buttons Meme generator
(4) Interfering with the Right to Freedom of Thought? AI Practices Incompatible with Human Rights Law
This section gives a snapshot of emerging challenges that design and engineering choices as well as the deployment of AI systems/practices – both in the public and the private sectors – pose to human autonomy and dignity. Some of these challenges include emotion detection and manipulation and exploitation of human vulnerabilities. AI systems have a growing capacity not only to predict choices but also to influence emotions and thoughts and alter an anticipated course of action. This capability is enabled by a high degree of precise personalisation and scalability. Examples include recommender systems or advertisements based on targeting techniques optimised to appeal to individuals’ or groups’ vulnerabilities. In turn, sentiment detection refers to systems that can infer a person’s inner emotional state based on physical, physiological or behavioural markers (e.g., facial expressions, vocal tone) and sort them into discrete categories (e.g., angry).
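Technical note: purely as a schematic of the sentiment-detection systems just described, the sketch below shows their typical form: observed markers are weighted into per-category scores, and the person is sorted into the highest-scoring discrete category. All markers, weights and categories are invented for illustration; whether such a mapping is scientifically valid is precisely what is contested.

```python
# Schematic, invented sketch of an emotion-detection pipeline: a vector of
# observed markers (facial action units, vocal pitch, etc.) is mapped onto
# discrete emotion categories via learned per-category weights, and the
# argmax category is reported. Nothing here is from a real system.
import math

CATEGORIES = ["neutral", "angry", "afraid"]

# Hypothetical learned weights: marker -> per-category contribution.
WEIGHTS = {
    "brow_lowerer":  {"neutral": -0.5, "angry": 1.2, "afraid": 0.3},
    "vocal_pitch_z": {"neutral": -0.2, "angry": 0.4, "afraid": 0.9},
    "blink_rate_z":  {"neutral": -0.1, "angry": 0.1, "afraid": 0.7},
}

def classify(markers: dict[str, float]) -> tuple[str, dict[str, float]]:
    scores = {c: sum(WEIGHTS[m][c] * v for m, v in markers.items()) for c in CATEGORIES}
    # Softmax turns raw scores into pseudo-probabilities over the fixed categories.
    exp = {c: math.exp(s) for c, s in scores.items()}
    total = sum(exp.values())
    probs = {c: exp[c] / total for c in CATEGORIES}
    return max(probs, key=probs.get), probs

label, probs = classify({"brow_lowerer": 0.8, "vocal_pitch_z": 1.1, "blink_rate_z": 0.4})
print(label, probs)  # the person is forced into one of the fixed categories
```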
Are these practices compatible with human rights law? Human rights law’s capacity to capture and address novel harms caused by AI systems is not straightforward. The ability of individuals to make informed choices or decisions brings to the foreground core aspects of human dignity and self-determination. Different rights may be relevant and applicable, such as aspects of the right to privacy or the right to mental integrity. The neglected right to freedom of thought has great potential in addressing impermissible interferences with how we form our opinions, views and decisions. Novel rights are also under discussion, including the so-called right to cognitive liberty. Due to the highly invasive harms and effects, other areas of regulation have also come into play, including recent prohibitions/bans of specific AI systems under the EU DSA and the EU AI Act.
(4.1) Case studies
(4.1.1) Sentiment detection: EU’s experimentation with iBorderCtrl and lying-detection
D Boffey, ‘EU border “lie detector” system criticised as pseudoscience’, The Guardian, 2 November 2018
The EU has been accused of promoting pseudoscience after announcing plans for a “smart lie-detection system” at its busiest borders to identify illegal migrants. The “lie detector”, to be trialled in Hungary, Greece and Latvia, involves the use of a computer animation of a border guard, personalised to the traveller’s gender, ethnicity and language, asking questions via a webcam.
The “deception detection” system will analyse the micro-expressions of those seeking to enter EU territory to see if they are being truthful about their personal background and intentions. Those arriving at the border will be required to have uploaded pictures of their passport, visa and proof of funds. According to an article published by the European commission, the “unique approach to ‘deception detection’ analyses the micro-expressions of travellers to figure out if the interviewee is lying”.
The project’s coordinator, George Boultadakis, who works for the technology supplier, European Dynamics, in Luxembourg, said: “We’re employing existing and proven technologies – as well as novel ones – to empower border agents to increase the accuracy and efficiency of border checks. The system will collect data that will move beyond biometrics and on to biomarkers of deceit.”
[…]
Border officials will use a handheld device to automatically crosscheck information, comparing the facial images captured during the pre-screening stage to passports and photos taken on previous border crossings. When documents have been reassessed, and fingerprinting, palm-vein scanning and face matching have been carried out, the potential risk will be recalculated. A border guard will then take over from the automated system.
The project, which has received €4.5m (£3.95m) in EU funding, has been heavily criticised by experts. Bruno Verschuere, a senior lecturer in forensic psychology at the University of Amsterdam, told the Dutch newspaper de Volkskrant he believed the system would deliver unfair outcomes. “Non-verbal signals, such as micro-expressions, really do not say anything about whether someone is lying or not,” he said. “This is the embodiment of everything that can go wrong with lie detection. There is no scientific foundation for the methods that are going to be used now.”
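Technical note: the pipeline the article describes, a pre-screening “deception” score later combined with document and biometric checks into a recalculated risk figure, can be made concrete with an invented sketch. No iBorderCtrl code is public in this text; all weights and thresholds below are hypothetical, and the premise that micro-expressions indicate deceit is, as the experts quoted above stress, scientifically unsupported.

```python
# Purely illustrative, invented reconstruction of the described pipeline:
# a contested micro-expression "deception" score is combined with document
# and biometric crosschecks into a single risk figure that decides whether
# a human border guard steps in. All numbers are hypothetical.

def recalculate_risk(deception_score: float,        # 0..1, from the avatar interview
                     documents_ok: bool,            # passport/visa crosscheck
                     biometrics_match: bool) -> float:  # fingerprint / palm-vein / face match
    risk = 0.5 * deception_score               # contested micro-expression component
    risk += 0.0 if documents_ok else 0.3       # document inconsistencies raise risk
    risk += 0.0 if biometrics_match else 0.2   # biometric mismatch raises risk
    return min(risk, 1.0)

REVIEW_THRESHOLD = 0.4  # invented cut-off for escalation to a border guard

risk = recalculate_risk(deception_score=0.7, documents_ok=True, biometrics_match=True)
print(f"risk={risk:.2f}", "-> manual review" if risk >= REVIEW_THRESHOLD else "-> proceed")
```

Note what the toy numbers show: a traveller with valid documents and matching biometrics can still be flagged for manual review solely on the strength of the contested score.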
(4.1.2) YouTube, Snapchat, and TikTok recommender systems
Commission sends requests for information to YouTube, Snapchat, and TikTok on recommender systems under the Digital Services Act, 2 October 2024
Today, the Commission has sent a request for information to YouTube, Snapchat, and TikTok under the Digital Services Act (DSA), asking the platforms to provide more information on the design and functioning of their recommender systems. […]
YouTube and Snapchat are requested to provide detailed information on the parameters used by their algorithms to recommend content to users, as well as their role in amplifying certain systemic risks, including those related to the electoral process and civic discourse, users’ mental well-being (e.g. addictive behaviour and content ‘rabbit holes’), and the protection of minors. […]
TikTok has been requested to provide more information on the measures it adopted to avoid the manipulation of the service by malicious actors and to mitigate risks related to elections, pluralism of media, and civic discourse, which may be amplified by certain recommender systems.
(4.1.3) X’s interface and users’ ability to make free and informed decisions
Commission sends preliminary findings to X for breach of the Digital Services Act, Press release, 12 July 2024
Today, the Commission has informed X of its preliminary view that it is in breach of the DSA.
[…]
First, X designs and operates its interface for the “verified accounts” with the “Blue checkmark” in a way that does not correspond to industry practice and deceives users. Since anyone can subscribe to obtain such a “verified” status, it negatively affects users’ ability to make free and informed decisions about the authenticity of the accounts and the content they interact with. There is evidence of motivated malicious actors abusing the “verified account” to deceive users.
[…]
(4.1.4) Deceptive advertisements and disinformation
Commission opens formal proceedings against Facebook and Instagram under the DSA, Press release, 30 April 2024
The current proceedings will focus on the following areas:
Deceptive advertisements and disinformation. The Commission suspects that Meta does not comply with DSA obligations related to addressing the dissemination of deceptive advertisements, disinformation campaigns and coordinated inauthentic behaviour in the EU. The proliferation of such content may present a risk to civic discourse, electoral processes and fundamental rights, as well as consumer protection.
[…]
(4.2) Certain AI systems/practices may not be compatible with human rights law
(4.2.1) Guidelines on addressing the human rights impacts of algorithmic systems, Appendix to Recommendation CM/Rec(2020)1 of the Committee of Ministers to member States on the human rights impacts of algorithmic systems, 8 April 2020
[…] The functionality of algorithmic systems is frequently based on the systematic aggregation and analysis of data collected through the digital tracking at scale of online and offline identity and behaviour of individuals and groups. In addition to the intrusion on individuals’ privacy and the increasing potential of highly personalised manipulation, tracking at scale can have a serious adverse effect on the exercise of human rights […]
(4.2.2) UNGA Res 78/265, Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development, UN Doc A/RES/78/265, 1 April 2024
5. Emphasizes that human rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of artificial intelligence systems, calls upon all Member States and, where applicable, other stakeholders to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights, especially of those who are in vulnerable situations […]
(4.2.3) UN Human Rights Council, Res 54/21, Right to privacy in the digital age, UN Doc A/HRC/RES/54/21, 16 October 2023
3. Also recalls the increasing impact that new and emerging technologies, such as those developed in the fields of surveillance, artificial intelligence, automated decision-making and machine-learning, and of profiling, tracking and biometrics, including facial recognition, may, without human rights safeguards, present to the full enjoyment of the right to privacy and other human rights, and acknowledges that some applications may not be compatible with international human rights law.
(4.2.4) Recital 69, Digital Services Act
When recipients of the service are presented with advertisements based on targeting techniques optimised to match their interests and potentially appeal to their vulnerabilities, this can have particularly serious negative effects. In certain cases, manipulative techniques can negatively impact entire groups and amplify societal harms, for example by contributing to disinformation campaigns or by discriminating against certain groups. Online platforms are particularly sensitive environments for such practices and they present a higher societal risk. Consequently, providers of online platforms should not present advertisements based on profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679, using special categories of personal data referred to in Article 9(1) of that Regulation, including by using profiling categories based on those special categories. […]
(4.2.5) Recital 70, Digital Services Act
A core part of the online platform’s business is the manner in which information is prioritised and presented on its online interface to facilitate and optimise access to information for the recipients of the service. This is done, for example, by algorithmically suggesting, ranking and prioritising information, distinguishing through text or other visual representations, or otherwise curating information provided by recipients. Such recommender systems can have a significant impact on the ability of recipients to retrieve and interact with information online, including to facilitate the search of relevant information for recipients of the service and contribute to an improved user experience. They also play an important role in the amplification of certain messages, the viral dissemination of information and the stimulation of online behaviour. Consequently, online platforms should consistently ensure that recipients of their service are appropriately informed about how recommender systems impact the way information is displayed, and can influence how information is presented to them. They should clearly present the parameters for such recommender systems in an easily comprehensible manner to ensure that the recipients of the service understand how information is prioritised for them. Those parameters should include at least the most important criteria in determining the information suggested to the recipient of the service and the reasons for their respective importance, including where information is prioritised based on profiling and their online behaviour.
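Technical note: a minimal sketch of the disclosure Recital 70 contemplates, assuming a hypothetical recommender: alongside the ranked feed, the provider returns its most important parameters and the reason each matters, including whether profiling of the recipient’s behaviour is used. All names and weights below are invented.

```python
# Hypothetical sketch of recommender transparency: the ranking function's main
# parameters, their weights and the reason for their importance are exposed to
# the recipient alongside the ranked results.
PARAMETERS = [
    {"name": "topical_match", "weight": 0.6,
     "reason": "items matching topics you engaged with before rank higher (profiling-based)"},
    {"name": "recency", "weight": 0.3,
     "reason": "newer items rank higher"},
    {"name": "popularity", "weight": 0.1,
     "reason": "items with more platform-wide engagement rank higher"},
]

def item_score(item: dict) -> float:
    return sum(p["weight"] * item[p["name"]] for p in PARAMETERS)

def recommend(items: list[dict]) -> tuple[list[dict], list[dict]]:
    """Return the ranked items together with the parameter disclosure."""
    return sorted(items, key=item_score, reverse=True), PARAMETERS

ranked, disclosure = recommend([
    {"id": "a", "topical_match": 0.9, "recency": 0.2, "popularity": 0.5},
    {"id": "b", "topical_match": 0.1, "recency": 0.9, "popularity": 0.9},
])
for p in disclosure:
    print(f"{p['name']} (weight {p['weight']}): {p['reason']}")
print("order:", [it["id"] for it in ranked])
```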
(4.3) Relevant law
(4.3.1) Freedom of thought provisions
Everyone shall have the right to hold opinions without interference.
Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.
Article 10(1) EU Charter – Freedom of thought, conscience and religion
Everyone has the right to freedom of thought, conscience and religion.
(4.3.2) Mental integrity provisions
Article 3 EU Charter – Right to the integrity of the person
1. Everyone has the right to respect for his or her physical and mental integrity.
2. In the fields of medicine and biology, the following must be respected in particular:
a) the free and informed consent of the person concerned, according to the procedures laid down by law;
b) the prohibition of eugenic practices, in particular those aiming at the selection of persons;
c) the prohibition on making the human body and its parts as such a source of financial gain;
d) the prohibition of the reproductive cloning of human beings.
Article 5(1) American Convention on Human Rights
Every person has the right to have his physical, mental, and moral integrity respected.
(4.3.3) Article 7 Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law
Each Party shall adopt or maintain measures to respect human dignity and individual autonomy in relation to activities within the lifecycle of artificial intelligence systems
(4.3.4) Article 5(1) EU AI Act – Prohibited AI Practices
The following AI practices shall be prohibited:
(a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm;
(b) the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;
[…]
(f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;
[…]
(4.3.5) Article 25(1) Digital Services Act – Online interface design and organisation
Providers of online platforms shall not design, organise or operate their online interfaces in a way that deceives or manipulates the recipients of their service or in a way that otherwise materially distorts or impairs the ability of the recipients of their service to make free and informed decisions.
(4.4) Freedom to express one’s opinion necessarily includes freedom not to express one’s opinion; any form of effort to coerce the holding or not holding of any opinion is prohibited
UN Human Rights Committee, General Comment No 34, Article 19: Freedoms of opinion and expression, UN Doc CCPR/C/GC/34, 12 September 2011
9. Paragraph 1 of article 19 requires protection of the right to hold opinions without interference. This is a right to which the Covenant permits no exception or restriction. Freedom of opinion extends to the right to change an opinion whenever and for whatever reason a person so freely chooses. No person may be subject to the impairment of any rights under the Covenant on the basis of his or her actual, perceived or supposed opinions. All forms of opinion are protected, including opinions of a political, scientific, historic, moral or religious nature. It is incompatible with paragraph 1 to criminalize the holding of an opinion. The harassment, intimidation or stigmatization of a person, including arrest, detention, trial or imprisonment for reasons of the opinions they may hold, constitutes a violation of article 19, paragraph 1.
10. Any form of effort to coerce the holding or not holding of any opinion is prohibited. Freedom to express one’s opinion necessarily includes freedom not to express one’s opinion.
(4.5) Interim report of the Special Rapporteur on freedom of religion or belief, Ahmed Shaheed, Freedom of thought, UN Doc A/76/380, 5 October 2021
2. Freedom of thought, along with one’s conscience and belief, is regarded as part of one’s forum internum – a person’s inner sanctum (mind) where mental faculties are developed, exercised and defined. The drafting history of the Universal Declaration of Human Rights suggests that some delegates, including the Lebanese delegate, Charles Malik, considered free exercise of these faculties as essential for protecting “the human person’s most sacred and inviolable possessions”, which enable people to “perceive the truth, to choose freely and to exist”.
[…]
4. Articles 4 and 18 of the International Covenant on Civil and Political Rights confirm the right’s significance, ascribing it absolute protection, even during public emergencies. Consequently, and unlike forum externum (external realm) freedoms that are subject to State limitations, if prescribed by law and necessary to protect public safety, order, health or morals, or the rights of others, States legally cannot ever interfere with freedom of thought. Despite its proclaimed importance and absolute nature, the right’s scope and content remain largely underdeveloped and poorly understood. The right receives scant attention in jurisprudence, legislation and scholarship, international and otherwise.
[…]
11. What constitutes “thought” not only lacks legal precision, but also scientific and philosophical consensus. Neuroscientists generally agree that thoughts are created when billions of neurons (nerve cells) in the brain […] But the consensus ends there. Some neuroscientists distinguish “thought” from other cognitive processes, including emotion, based on the primary part of the brain engaged. Others emphasize the complex, highly interrelated nature of anatomical aspects of the brain that support cognitive functions, comparing efforts to “trace a thought from beginning to end” to “asking where the forest begins”.
[…]
D. Attributes of the right to freedom of thought
25. Beyond absolute protection, relatively little is clear about the right’s core elements or “attributes”. Below, the Special Rapporteur maps four possible attributes of the right based on international human rights jurisprudence and commentary: (a) not being forced to reveal one’s thoughts; (b) no punishment and/or sanctions for one’s thoughts; (c) no impermissible alteration of one’s thoughts; and (d) States fostering an enabling environment for freedom of thought.
1. Freedom not to reveal one’s thoughts
26. In discussing freedom of thought in its general comment No. 22, the Human Rights Committee asserted that, “[i]n accordance with articles 18 (2) and 17 [of the International Covenant], no one can be compelled to reveal his thoughts”, implying that “mental privacy” is a core attribute of freedom of thought. The right not to reveal one’s thoughts against one’s will arguably includes “the right to remain silent”, without explaining such silence. Meanwhile, United States courts recognize that an individual’s right to privacy encompasses mental privacy.
[…]
3. Protection from impermissible alteration of thought
28. Several commentators contend that freedom of thought protects against alteration of one’s thoughts, in particular circumstances. This is a complex matter to delineate because, in reality, our thoughts are perpetually influenced by others. Parents entice their children to eat healthily, companies persuade consumers to buy their products through glossy advertising, and policymakers use “nudges” to influence citizens’ behaviour towards desired outcomes, including for organ donation, nutrition and environmental conservation. These specific examples may not often evoke human rights concerns, but they nonetheless raise questions about what constitutes “mental autonomy”. Ultimately, scholars propose three categories of impermissible alteration of one’s thought that could violate freedom of thought.
[…]
35. A growing body of legal scholarship supports the claim that freedom of thought includes freedom from manipulation. While modification bypasses psychological processes to directly alter biological function, manipulation engages and controls psychological processes. Some scholars define manipulation as inducing the formation of “biased mental models […], knowledge and ideologies”, or as a form of “cognitive mind control”. […]
36. Legal scholars contend that mental influences, which involve “conscious and uncoerced processes” such as persuasion, are prima facie but not necessarily legitimate. Case-by-case assessments of whether certain practices impermissibly manipulate one’s thoughts could consider, among other factors:
- (a) Consent. Did the rights holder, whether explicitly or tacitly and where they have capacity to do so, consent to the practice? Was that consent free and informed?
- (b) Concealment or obfuscation. Would a “reasonable person” be aware of the intended influence? For example, if the content is an advert or government campaign, is it clearly attributable, labelled or otherwise evident as such? During content curation or moderation, is the user clearly notified when and why certain content was removed or displayed?
- (c) Asymmetrical power. Is there an imbalance of power between the influencer and the rights holder? Does the influencer exercise this power to promote a certain narrative to the exclusion of others? Is this done in a limited, transparent and consistent manner, which the recipient can readily change or appeal?
- (d) Harm. Some commentators point to “harm” in intent or effect to distinguish permissible “influence” from impermissible “manipulation”. However, others contend that it is not always necessary to prove “harm” to establish the latter; rather, it is an aggravating factor. If the influence undermines one’s rational decision-making, it may impair freedom of thought even if the desired result is a commonly held good.
[…]
1. Inference and predictive technologies
68. Several stakeholders assert that the use of predictive technologies by digital technology companies should raise concerns regarding freedom of thought. Predictive systems, by nature, do not reveal “actual” thoughts. Yet armed with vast and growing quantities of personal and non-personal data, such systems can reportedly build sophisticated individualized psychological profiles, which can potentially infer and even modify thoughts in certain circumstances.
69. They also express concern about the proliferation of predictive technologies, such as so-called artificial intelligence-powered polygraphs, which feed biometric data (e.g., heart rate, speech patterns and facial features) into “truth detection” algorithms […] The accuracy and, in some cases, the scientific basis of these technologies are heavily contested. Nonetheless, some argue that irrespective of whether these technologies violate mental privacy, they can and do still result in punishment for inferred thought. For example, Chinese authorities reportedly deploy “emotion detection” technologies to infer “criminal” states of mind among the public, which could lead to administrative or criminal sanctions. Moreover, several corporations and educational institutions allegedly utilize biometric data to infer the thoughts of their employees and students, respectively. Technology that monitors employee brain activity in workplaces is already proliferating, and some scholars postulate that employees might be punished for inferred thoughts, such as thoughts on unionizing.
70. Recent research indicates that result rankings from Internet search engines have a dramatic impact on consumer attitudes, preferences and behaviour – potentially even modifying their very thoughts. For example, five experiments in the United States and India illustrated the power of search rankings to alter the preferences of undecided voters in democratic elections; many users choose and trust higher-ranked results over lower-ranked ones. These practices can significantly affect users’ decision-making processes: in the experiments, they shifted the voting preferences of undecided voters by 20 per cent or more.
71. Reportedly, Facebook has claimed that tweaking content on individuals’ “newsfeeds” could transfer emotions from person to person, and that its predictive marketing could identify when children feel “insecure” and “worthless” and “need a confidence boost”. In Kenya, finance applications have allegedly mined their users’ mobile phone data to predict when they were most vulnerable to predatory credit offers.
[…]
2. Microtargeting
73. Microtargeting is the use of (often large volumes of) personal data gathered from digital footprints to tailor what individuals or small groups see online. While traditional advertising is mainly informative, modern advertising draws on techniques such as microtargeting and advances in behavioural sciences to examine links between emotional responses and decision-making and to play on subconscious desires […]
Questions for reflection
- The prohibitions of certain AI practices in Article 5 of the AI Act include extensive qualifiers (e.g., “materially distort”, “appreciably impair”, “significant harm”), suggesting a high threshold intended to distinguish lawful and legitimate techniques from unlawful, subliminal or purposefully manipulative or deceptive techniques. These qualifiers and other elements of the provisions remain highly unclear, and it remains to be seen how they will be interpreted and applied as our understanding of the technologies’ effects evolves. Where would you draw the line between lawful, legitimate techniques of influence and unlawful, manipulative or deceptive techniques?
- The AI Convention is criticised for leaving discretion to State parties to assess the need for banning certain AI systems and for not providing clear criteria to this end (Article 16(4)). Why do you think the AI Convention is silent in this regard?
- How would you assess human rights law’s potential to capture novel harms stemming from emotion detection, manipulation and the exploitation of human vulnerabilities? Which rights may be relevant here?