Nimesh Singh

Social media’s pervasiveness impacts an individual’s public and political life to a substantial degree. The question of ‘content regulation’ has sparked intense debate on the role of intermediaries, the government and co-regulatory models. In parallel, the recent discourse on ‘self-regulation’ following the introduction of the 2021 Intermediary Guidelines provides an opportunity to discuss self-regulation with respect to content moderation and intermediaries. This Essay attempts to explain the unique position of social media intermediaries vis-à-vis self-regulation and argues for a participatory model under a common self-regulatory body of intermediaries. In particular, this Essay will explore the details of such a framework, drawing from contemporary regulatory attempts on ancillary aspects of social-media regulation.
I. INTRODUCTION
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (‘the Ethics Code’) are built around ‘self-regulation’ in an ostensible attempt to regulate digital platforms. ‘Self-regulation’ means regulation by the company or industry itself, as opposed to regulation through the State’s instruments.[1] Ironically, this mechanism is not extended to social media intermediaries under Rule 9.
In this Essay, I will focus on ‘social media intermediaries’, such as Facebook and Twitter, to analyse the viability of a self-regulatory mechanism. While there is literature identifying various issues involving intermediaries, there is little contextualization of the same in the Indian setting and a lack of literature addressing the problem from a ‘self-regulatory’ perspective.
I will engage in a cross-jurisdictional analysis and identify best practices across jurisdictions to arrive at a framework viable for the Indian setting. Part II will briefly state the relationship between regulation and social media. Part III will build the substantive argument by exploring ‘how’ these platforms respond to self-regulation and the spectrum of types of self-regulation. Against this spectrum, Part IV will discuss the non-viability of the current mechanism as envisaged under the Ethics Code. Through this analysis, Part V will engage in a counter-factual debate to arrive at a more ‘apt’ model to meet India’s regulatory requirements.
II. PECULIARITIES OF SOCIAL MEDIA AND THE CYBERSPACE
The fundamental idea behind regulation is to ensure that content stays within the bounds of ‘justifiability’.[2] However, this is not simple. The ‘digital’ nature of social media platforms, as well as the pervasiveness of the internet, makes it substantially more challenging to regulate them in the way traditional print media has been regulated.[3]
There appears to be a general consensus that State interference in any media regulation should be minimal. This is attributed to the role of media houses in general, and social media in particular, in shaping discourse and supporting democratic institutions.[4] Freedom of speech is a recognised fundamental right under Art. 19(1)(a) of the Constitution and includes the freedom of the media. From this ‘rights-based’ perspective, any regulatory attempt must ensure the maximum preservation and promotion of this right. The free-speech-promoting facets of the digital sphere are best brought out in a ‘minimalist’ regulatory regime.[5]
III. HOW INTERMEDIARIES RESPOND TO SELF-REGULATION
One unique problem with traditional self-regulatory mechanisms is that they are viewed as a monolith.[6] The traditional view suggests that regulation is either ‘public’ or ‘self’, i.e., ‘private’, regulation.[7] However, there is an entire range of arrangements that may be classified as self-regulation.[8] These depend upon the degree of autonomy granted to the private actor, implying a spectrum of differing degrees of autonomy and government constraint.[9] At one end lies complete industry autonomy to self-regulate; at the other, complete public regulation.
It is tempting to give such platforms maximum autonomy and to argue for limiting the State’s role to that of a criminal regulator. In the absence of the State, it would be the platform that formulates content policies.
But this is unlikely to succeed. Past experience shows that a purely self-regulated body wishes to ‘look good’ in the eyes of the State to ensure that the State does not step in with public regulation.[10] The platform would err on the side of caution and block any content that any user ‘objects’ to. This leads platforms to set arbitrary and overbroad guidelines so as to ‘appease’ the State, allowing them to act against a very wide array of content, much of which may otherwise be a legitimate exercise of free speech. Denying access to the small pool of users whose content is blocked does not hamper the platforms’ business model either, and thus a clear incentive to go ‘overbroad’ exists under this autonomy-oriented model.
Further, the fear of the State imposing liability could impact the platforms’ commercial viability.[11] This may, again, lead to platforms creating overly broad terms that signatory platforms must follow, paving the way for arbitrary application.[12] For instance, in 2017, YouTube’s Restricted Mode feature came under harsh criticism for blocking content involving LGBTQIA+ persons.[13] This incentive to ‘over’ self-regulate arises in a purely autonomous paradigm. If not autonomy, then can the guidelines under the Ethics Code serve as a benchmark for content self-regulation?
IV. STATE INTERFERENCE IN THE PRESENT REGULATORY ECOSYSTEM
Given that the Code is entirely ‘state-sponsored,’ it is unlikely to serve the objectives of regulation.
There are at least two fundamental problems with adopting the ‘state-sponsored’ Ethics Code guidelines. First, under Rule 3, the Ethics Code lays down the situations in which a social media intermediary is obligated to take down user-generated content. This ‘collateral censorship’ involves the State directing the platform to delete certain content, failing which the platform stands penalised. The penalty under the Code comes in the form of revocation of the safe-harbour protection available under §79 of the Information Technology Act, 2000. This is similar to the ‘deputisation’ of State functions to intermediaries under the German NetzDG law, which places an obligation on intermediaries to monitor and take down hate speech. While taking down content is, in any case, a State function, the problem lies specifically in the basis of the regulation being determined solely by the State. This must be taken with a note of caution: a purely state-governed set of rules may work in other regulatory ecosystems, such as financial markets or environmental regulation. But owing to the sensitivity of information technology vis-à-vis free speech, and the sheer potential for state abuse, I argue that a purely State-sponsored model is likely to be detrimental. The guidelines, as well as their enforcement, would depend upon the political dispensation in power, especially since the power to impose the penalty lies in the hands of the State and not an independent self-regulatory body.
However, while I acknowledge that a fear of ‘potential abuse’ is not necessarily a ground for doubting the legislation or policy itself, the degree of that potential is sufficient cause for concern given the nature of the right involved in this specific case. This is not to say, however, that the State cannot establish any guidelines. As I explain in later segments of this Essay, a set of hortative, principled guidelines can bolster the formation of a more robust code, as is the case with Australia’s Basic Online Safety Expectations.
Second, in any case, the standard established under Rule 3 is overbroad and suffers from vagueness. In Shreya Singhal v. Union of India, the Supreme Court underlined that regulated parties must know ‘what’ is required of them, for which precision and proper guidance are necessary.[14] It accordingly struck down §66A of the IT Act as unconstitutional.[15] Under the Ethics Code, Rules 3(b)(v) and 3(b)(vii) in particular concern the violation of “any law” and threats to “the unity, integrity, defence, security or sovereignty of India”. These are very broad principles and have, in fact, been used previously by the State to restrict speech. Imposing these standards on social media intermediaries thus renders them tools of the State, which is classically the antithesis of self-regulation, as well as of free speech itself. It is, therefore, difficult to accept the due-diligence guidelines under Rule 3 of the Ethics Code, most particularly because they have been used to broadly restrict speech and content in the past. If given the authority to frame guidelines on its own, the State is likely to opt for similarly overbroad principles so as to gain as much regulatory power as possible over online speech. This is true notwithstanding the narrow reading given to such provisions in decided free-speech cases. Practically, even valid forms of speech might be censored by a social media intermediary, which is likely to err on the side of excessive regulation rather than risk losing its immunity under the IT Act.
V. THE CASE FOR A ‘PARTICIPATORY’ SELF-REGULATION
Protecting free speech while maintaining effective regulation involves two elements: (a) preventing ‘collateral censorship’ as far as possible; and (b) ensuring participatory evolution of a detailed code enforced by an independent regulatory authority of industry members and the platforms themselves.[16] In this Part, I will explain the important facets of such a model.
A. ONE UNIFORM SELF-REGULATORY BODY
There is no common industry grouping governing social media intermediaries. As a result, different platforms maintain their own guidelines and make independent assessments of what constitutes a violation. This leads to a two-fold problem: first, there is no uniform set of content that gets moderated, and second, there is no uniformity in the nature of such moderation itself. Content that is flagged on one platform may not be flagged on another. This also leads to a lack of consensus on the appropriate approach towards flagging. For instance, in 2019, Facebook and YouTube announced that they would not remove politicians’ posts that violate community standards, in anticipation of the 2020 elections, unless there was a safety risk or incitement to violence. Twitter, on the other hand, stated that any violation of its guidelines by politicians would lead to their accounts being de-emphasised. The nature of the free speech rights involved automatically merits constitutional scrutiny under Articles 19 and 14.[17] Non-uniformity leads to arbitrariness, which is the antithesis of Article 14 and of equality jurisprudence at large. Given that in regulating content the intermediaries perform ‘State functions’ on ‘deputisation’ from the State, such fundamental-rights enforcement is possible. Regardless, after the Kaushal Kishore decision,[18] such intermediaries may also fall directly within the purview of Part III.
In fact, Facebook and Twitter have expressed their willingness to collectively form a self-regulatory body. While no such body presently exists, there is precedent for collaboration. Facebook, Twitter and YouTube have worked together under the Global Internet Forum to Counter Terrorism (‘GIFCT’) to arrive at a set of best practices, in addition to sharing information ‘hashes’ to detect terrorist-generated content. This implies that collaboration to determine common content guidelines, in consultation with industry experts and civil society representatives, is possible. Albeit in the different field of counter-terrorism, the evidence of such cooperation to eliminate ‘harmful’ content is likely to serve as a precedent for contending with content moderation problems as well. This is especially because it is in the best interests of companies to reduce consumer dissatisfaction arising from unclear and non-uniform moderation policies. The platforms also have a direct incentive to collaborate, since they too wish to keep the regulator-State from interfering, which has the potential to disrupt their business, as explained in Part III of this Essay.[19]
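To illustrate the mechanics of such hash-sharing, the sketch below assumes a hypothetical shared database of content fingerprints maintained jointly by member platforms. The function names, the database entries and the use of an exact cryptographic hash are illustrative only; the GIFCT’s actual infrastructure relies on perceptual hashing, which is not reproduced here.

```python
import hashlib

# Hypothetical shared database mapping content fingerprints to labels.
# Real industry databases use perceptual hashes (which survive re-encoding);
# SHA-256 is used here purely for illustration.
SHARED_HASH_DB: dict[str, str] = {}

def fingerprint(media_bytes: bytes) -> str:
    """Return an exact-match fingerprint for a piece of uploaded media."""
    return hashlib.sha256(media_bytes).hexdigest()

def contribute(media_bytes: bytes, label: str) -> None:
    """Add a fingerprint that a member platform has verified as harmful."""
    SHARED_HASH_DB[fingerprint(media_bytes)] = label

def check_upload(media_bytes: bytes) -> str | None:
    """Return the shared-database label if the upload matches, else None."""
    return SHARED_HASH_DB.get(fingerprint(media_bytes))

# A member platform would call check_upload() at upload time and route any
# match to human review rather than removing the content automatically.
contribute(b"example harmful clip", "terrorist-propaganda")
print(check_upload(b"example harmful clip"))  # "terrorist-propaganda"
```

The point of the sketch is simply that platforms can pool signals about already-identified content without sharing the underlying user data, which is what makes this form of collaboration comparatively uncontroversial.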
The formation of a self-regulatory body would allow a uniform code of principles to be adopted, drawing from existing industry principles; New Zealand’s Code of Practice for Online Safety and Harms, for example, was adopted by building on existing practices. Membership, while purely voluntary, incentivises platforms to join in order to maintain their public image as ‘responsible’ and ‘transparent’ organisations. The need to incentivise platforms to ‘opt in’ can be demonstrated from the experience of the European Union (‘EU’) with its 2018 Code of Practice on Disinformation. The European Commission’s assessment found a lack of an appropriate monitoring mechanism and of key performance indicators to evaluate the success of the Code’s adoption, neither of which is possible without a proper uniform body responsible for evolving such indicators. It also highlighted the need to independently verify the reports turned in by the intermediaries.[20]
The framework of an independent, voluntary self-regulatory body addresses these concerns in two ways. First, it harnesses the intermediaries’ incentive to maintain a public image. Second, subjecting the reports prepared by the platforms to detailed scrutiny by the membership of the self-regulatory body, as well as by members of the public, resolves the concerns faced by the EU. This reporting mechanism, along with transparency, is explained below.
B. BUILDING TRUST VIA PARTICIPATORY FEEDBACK
It has been noted across jurisdictions that users are often not given a clear explanation of why their content is blocked or taken down, preventing them from avoiding such takedowns in the future. The problem is not limited to a lack of explanation during moderation, but extends to the disjunction between the guidelines shown to users and those given to moderators in the first place. Moderators receive an entirely different, rather detailed set of guidelines to actually enforce the content moderation policy. Instagram’s terms of use prohibit, for instance, ‘nudity’, but offer little guidance on what types of content that description covers.
Building trust is crucial to promoting user engagement and participation, which is what makes self-regulation effective in the first place. A lack of proper information leads people to rationalise the moderation as being ‘biased’, ‘pressured’ or ‘externally influenced’.[21] Providing complete information to users is crucial to avoiding arbitrariness and bolstering good decision-making.[22]
A good solution would be to release transparency reports in line with established international best practices. Such a report could include, inter alia, the number of flaggings and takedowns, as well as a detailed break-up of the demographics and contextual details of the persons that posted as well as flagged the content. Understandably, this may lead to privacy concerns. However, as indicated by the Santa Clara Principles, which are relevant international standards for regulation, companies “should not collect data on targeted groups for this purpose”. Further, the Data Protection Bill, 2022 places significant obligations on data fiduciaries in this regard.
Thus, a constant ‘feedback’ mechanism is created within this transparency structure. The moderation report can be presented to the self-regulatory body formed by such intermediaries. Any political effect of moderation, whether or not intentional, would therefore be public information. Furthermore, there could be a statutory requirement to present this report to shareholders at periodic meetings. Such public declaration of reports directly incentivises platforms to ‘appropriately’ self-regulate by involving as much participation and consultation as possible. The accountability pressure generated by this process has, in fact, worked before: in 2018, YouTube and Facebook released public reports, albeit far less detailed ones, containing redacted details of content takedowns.
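As a minimal sketch, the aggregate portion of such a report could be compiled along the following lines. The record fields and category labels below are hypothetical and not drawn from any platform’s actual reporting schema; they are only meant to show that meaningful disclosure is possible without publishing raw user data.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ModerationAction:
    """One moderation decision; all field values here are illustrative."""
    rule_invoked: str   # e.g. "hate-speech", "nudity"
    source: str         # e.g. "user-flag", "automated", "government-request"
    outcome: str        # e.g. "removed", "restored-on-appeal", "no-action"

def compile_report(actions: list[ModerationAction]) -> dict:
    """Aggregate counts suitable for public disclosure (no user data)."""
    return {
        "total_actions": len(actions),
        "by_rule": dict(Counter(a.rule_invoked for a in actions)),
        "by_source": dict(Counter(a.source for a in actions)),
        "by_outcome": dict(Counter(a.outcome for a in actions)),
    }

# Example: a report covering three hypothetical actions.
print(compile_report([
    ModerationAction("hate-speech", "user-flag", "removed"),
    ModerationAction("nudity", "automated", "restored-on-appeal"),
    ModerationAction("hate-speech", "government-request", "removed"),
]))
```

Publishing only such aggregates, broken down by rule, source and outcome, is one way of reconciling the reporting obligation with the privacy concerns flagged by the Santa Clara Principles above.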
Such detailed transparency would enable detailed scrutiny, especially since moderation has the potential to encroach upon individual rights.[23] The role of the State becomes relevant in ensuring the regular publication of such reports. The State itself does not have to examine the contents of the reports; it only ensures their regular ‘availability’. Even in this very minimalist role, the State crucially enables an atmosphere of participatory regulation. This is not to suggest that companies will perfectly differentiate between content that should be protected and content that should be taken down. However, making the mistakes or inherent biases[24] public enables a better understanding of those biases and allows for informed judicial recourse.[25]
C. PERMISSIBLE GOVERNMENT INVOLVEMENT
A key argument generally advanced in favour of self-regulation is its ability to provide incentives for compliance with regulations.[26] If guidelines are developed by the industry itself, they are likely to be accepted by companies as reasonable.[27] The present Ethics Code fails to achieve this, as I explained above. What, then, is the role of the State?
Some inspiration may be drawn from Australia’s Basic Online Safety Expectations (‘BOSE’).[28] They are premised on the idea of collaboration between the regulator (the eSafety Commissioner) and intermediaries. The unique feature of the BOSE is that they are ‘expectations’ which are not per se enforceable; nevertheless, platforms are required to demonstrate how they ‘meet’ such expectations.[29] Admittedly, it is unclear what constitutes ‘meeting’ such expectations. The text of the regulation suggests that Australia intends to create a guiding line to shape online regulatory discourse, so the threshold is likely a low one. Nor does this cure the broad discretion granted to the State in issuing a non-compliance report.[30] Nevertheless, the mechanism offers insight into the possibility of the State framing ‘minimum’ guidelines that could form a starting point for the self-regulatory body to evolve a more participatory code.
Another minimal but effective form of State involvement could be the regulation of supplementary avenues that strengthen the foundation for self-regulation. This could include net neutrality and algorithmic disclosure regulations.[31] How an intermediary utilises algorithms to detect content patterns is an important component of transparency, critical to building trust in the regulatory system. The State could establish pre-determined standards for such algorithms, requiring wide public testing to ensure compliance with those standards.[32] This would be similar to, for instance, EU Regulation 2019/1150, which aims to promote transparency for business users of online platforms.
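The sketch below illustrates what compliance-testing against such a pre-determined standard could look like. The threshold, the classifier interface and the public benchmark are all invented for illustration; the only point is that a State-set standard can be checked openly without the State examining the algorithm’s internals.

```python
from typing import Callable

# Hypothetical standard published by the regulator: the detection algorithm
# may wrongly flag at most 5% of lawful content in the public benchmark.
MAX_FALSE_POSITIVE_RATE = 0.05

def false_positive_rate(
    classifier: Callable[[str], bool],
    benchmark: list[tuple[str, bool]],  # (text, is_actually_violating)
) -> float:
    """Fraction of lawful benchmark items that the classifier flags."""
    lawful = [text for text, violating in benchmark if not violating]
    if not lawful:
        return 0.0
    return sum(classifier(text) for text in lawful) / len(lawful)

def complies(classifier: Callable[[str], bool],
             public_benchmark: list[tuple[str, bool]]) -> bool:
    """Check the platform's algorithm against the published standard."""
    return false_positive_rate(classifier, public_benchmark) <= MAX_FALSE_POSITIVE_RATE

# Toy example: a keyword classifier tested against a tiny public benchmark.
toy_classifier = lambda text: "banned-phrase" in text
benchmark = [("an ordinary lawful post", False), ("banned-phrase content", True)]
print(complies(toy_classifier, benchmark))  # True for this toy benchmark
```

Because both the benchmark and the threshold would be public, civil society and the self-regulatory body could re-run the test independently, which is the kind of ‘wide public testing’ contemplated above.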
VI. CONCLUSION
This Essay has explored the much-debated aspect of content regulation in the social media self-regulatory discourse. A re-thinking needs to begin from the question of the degree of autonomy versus state involvement. Regulatory practices from various jurisdictions on ancillary subjects, such as hate-speech regulation, provide insights into a possible participatory framework. In light of intermediary incentives, as well as the possibility of ‘collateral censorship’ by the State, a common self-regulatory body with the participation of stakeholders and minimal state interference is likely to succeed. Not only does it address the immediate concerns with content regulation, but it also serves as a self-improving, feedback-oriented mechanism that can address other concurrent regulatory challenges.
The author is a 2nd year undergraduate law student at the WB National University of Juridical Sciences.
[1] Carol Soon, ‘Managing Cyberspace: State Regulation versus Self-Regulation’, [2015] Southeast Asian Affairs 322.
[2] Soon, (n 1), 321.
[3] Cherian George, ‘The Internet and the Narrow Tailoring Dilemma for ‘Asian’ Democracies’, (2003) 6(3) The Communication Review 247.
[4] Ibid.
[5] Alexandra Paslawsky, ‘The Growth of Social Media Norms and Government’s Attempts at Regulation’, (2013) 35 FILJ 1534.
[6] Anthony Ogus, ‘Rethinking Self-Regulation’, (1995) 15 OJLS 98.
[7] Ibid.; George, (n 3), 249.
[8] A.C. Page, ‘Self-Regulation: The Constitutional Dimension’ (1989) 49 Modern Law Review 148; P. Cane, ‘Self-Regulation and Judicial Review’, (1987) 6 Civil Justice Quarterly 328.
[9] Page, (n 8), 148.
[10] T. Laitila, ‘Journalistic Codes of Ethics’, (1995) 10(4) European Journal of Communication 527.
[11] Jack M. Balkin, ‘Free Speech is a Triangle’, (2018) 118 Columbia Law Review 2011.
[12] S.M. West, ‘Policing the Digital Semicommons: Researching Content Moderation Practices by Social Media Companies’, [2018] International Communications Conference.
[13] C. Southerton et al, ‘Restricted Modes: Social Media, Content Classification and LGBTQ Sexual Citizenship’, (2021) 23(5) New Media & Society 920.
[14] Shreya Singhal v. Union of India, (2015) 5 SCC 1, para 63.
[15] Ibid, para 76.
[16] Shoshana Zuboff, ‘Big Other: Surveillance Capitalism and the Prospects of an Information Civilization’, (2015) 30 Journal of Information Technology 75; Balkin, (n 11), 2033.
[17] Maneka Gandhi v. Union of India, AIR 1978 SC 597, propounded a combined reading of Part III rights.
[18] Kaushal Kishore v. State of Uttar Pradesh & Ors., W.P. (Crim) No. 113 of 2016
[19] Balkin, (n 11), 2034.
[20] European Commission’s assessment, para 8.3.
[21] J. Bosland & J. Gill, ‘The Principle of Open Justice and the Judicial Duty to give Public Reasons’, (2014) 38(2) Melbourne University Law Review, 482.
[22] Ibid.; Nicolas P. Suzor et al, ‘What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation’, (2019) 13 International Journal of Communication 1528.
[23] Ibid., 1526.
[24] K. Crawford & T. Gillespie, ‘What is a flag for? Social Media Reporting Tools and The Vocabulary of Complaint’, (2018) 18(3) New Media & Society 410.
[25] S.U. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (2018); C. Sandvig et al, ‘An Algorithmic Audit’ in S. P. Gangadharan, V. Eubanks, and S. Barocas (eds.), Data and Discrimination: Selected Essays (2014).
[26] Douglas C. Michael, ‘Federal Agency Use of Audited Self-Regulation as a Regulatory Technique’, (1995) 47 Administrative Law Review 171; Ian Ayres and John Braithwaite, Responsive Regulation: Transcending the Deregulation Debate (1992), 103; Angela J. Campbell, ‘Self-Regulation and the Media’, (1999) 51 Federal Communications Law Journal 716.
[27] Peter P. Swire, Markets, Self-Regulation, and Government Enforcement in the Protection of Personal Information, in the United States Department of Commerce, Privacy and Self-Regulation in the Information Age, available at https://www.ntia.gov/page/chapter-1-theory-markets-and-privacy (last accessed February 25, 2023).
[28] Online Safety (Basic Online Safety Expectations) Determination 2022 (Australia).
[29] See Basic Online Safety Expectations, Regulatory Guidance, July 2022 (Australia).
[30] Ibid.
[31] Barbara van Schewick, Internet Architecture and Innovation 77 (2010).
[32] Fabiana Di Porto, ‘Co-Regulating Algorithmic Disclosure for Digital Platforms’, (2021) 40(2) Policy and Society 272.