By Arya Tripathy on March 28, 2024
1. Introduction
On March 1, 2024, the Ministry of Electronics and Information Technology (MeitY) issued a due diligence-related advisory to platforms and intermediaries regarding the deployment and use of AI algorithms and software on their platforms (1st Advisory).[1] Subsequently, and owing to public debate, MeitY issued a revised advisory on March 15, 2024,[2] superseding the earlier one (2nd Advisory). Both advisories have been issued with a specific focus on intermediaries’ rights and liabilities under the Information Technology Act, 2000 (IT Act) and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Intermediary Rules).[3] This post analyzes the advisories and their potential impact.
2. Who are intermediaries?
Section 2(w) of the IT Act defines “intermediary” as any person who, on behalf of another person, receives, stores, or transmits electronic records, or provides any service concerning such records. It includes telecom, network, internet, and web-hosting service providers; search engines; online payment and auction sites; online marketplaces; and cyber cafes. The Intermediary Rules classify intermediaries into three kinds:
(i) Social media intermediaries – one that primarily or solely enables online interaction between users, allowing them to create, upload, share, disseminate, modify, or access information using the intermediary’s services;
(ii) Significant social media intermediaries – social media intermediaries whose registered users in India exceed the notified threshold, currently 5 million; and
(iii) Other categories – every other intermediary falls in this residual category, such as ISPs, e-commerce platforms, search engines, and payment sites.
3. Due diligence obligation of intermediaries:
3.1 Rule 3 of the Intermediary Rules imposes due diligence obligations on intermediaries. It states that the intermediary’s terms of use, privacy policy, or user agreements must inform users that its computer resource cannot be used to host, display, upload, modify, publish, transmit, store, update, or share certain kinds of information (Prohibited Information). This includes information that infringes third-party intellectual property; is unlawful, defamatory, obscene, or harmful to children; deceives or misleads about the origin of the message; constitutes misinformation; impersonates another person; threatens state security and sovereignty; contains viruses; and the like. Misinformation is any information that is patently false and untrue or misleading, or that is identified as fake/false/misleading by the notified fact check unit of the Central Government.[4] Further, information that infringes bodily privacy, or is harassing or insulting on the basis of gender, is also included in the prohibited list.
3.2 The intermediary must prominently publish these restrictions in English or any Eighth Schedule language, as per the user’s choice, on its website, application, or both, and require users to comply. Furthermore, the intermediary must periodically, and at least once a year, inform users that non-compliance with these requirements may result in the termination of their access and usage rights, and that such content can be taken down immediately. Compliance with these diligence requirements is fundamental for intermediaries to avail the safe harbor under Section 79 of the IT Act for content on their platforms; where it is shown that diligence was not conducted, liability can be fixed on the intermediary.
4. Application of the advisories:
4.1 Several concerns have been raised regarding the legislative basis for the advisories, their application, and their scope. On the legislative basis, the advisories do not cite the specific statutory provisions under the IT Act or the Intermediary Rules under which they have been issued. However, they make ample reference to Rule 3 of the Intermediary Rules, which were promulgated under Section 87 of the IT Act.[5] The 2nd Advisory largely elaborates on existing diligence requirements under the Intermediary Rules in the context of AI and LLMs, and can be viewed as a cautionary note to intermediaries.
4.2 The superseded 1st Advisory stipulated prior government permission for the deployment of untested AI models, LLMs, generative AI software, or algorithms (collectively, AI Tools) on an intermediary’s computer resource. This was criticized, and arguably amounted to an overreach, as it aimed at creating an AI licensing and governance mechanism in the absence of empowering provisions under the IT Act. What constitutes an untested or unreliable AI Tool is ambiguous, given that there is no regulatory sandbox mechanism, testing standard, certification, recognized testing lab, or similar parameter for treating AI Tools as adequately tested or reliable. Even the most widely used AI Tools today are constantly evolving and carry error margins. There was also no clarity on what information must be provided or what process should be followed for obtaining government approval, and in the absence of such detail, compliance as well as implementation was difficult. Parallels can be drawn with requirements under the US government’s Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence of October 2023.[6] That order did not usher in a licensing regime but required AI developers to disclose their safety test results and other critical findings to the government; it also contemplates testing in controlled and standardized settings in collaboration with developers. Should MeitY expect compliance with a prior permission regime in the future, it would be imperative to elaborate on the parameters, standards, and approval process. As it stands today, the 2nd Advisory removes the prior permission requirement, which comes as a welcome relief.
4.3 The advisories use the term “intermediaries/platforms” and request intermediaries to ensure compliance immediately. The use of the term “platform” has caused confusion, as it is not defined under the IT Act or the Intermediary Rules; in reality, not all platforms qualify as intermediaries (inventory-based e-commerce players, for instance, do not). After the 1st Advisory was issued, the Minister of State, Mr. Rajeev Chandrasekhar, clarified through social media posts that the advisory applies only to significant intermediaries, and not to others.[7] These clarificatory posts do not have any statutory effect, but they reveal the regulatory intent: the immediate focus appears to be on significant social media intermediaries. It also appears that the term platform will likely be interpreted as covering the residual category of intermediaries, such as e-commerce platforms and search engines, and excluding non-intermediary platforms. If the intent were to regulate non-intermediary platforms, it would be a regulatory overreach and could be challenged. In our view, all intermediaries, and not just significant ones, should pay heed to the 2nd Advisory until formal MeitY clarifications are issued.
4.4 Further, there is uncertainty about the binding effect of the advisories, with many contending that they are only recommendatory. A substantial portion of the 2nd Advisory is rather explanatory of the scope of diligence requirements under Rule 3 of the Intermediary Rules. For example, Rule 3 includes misinformation as Prohibited Information, and the 2nd Advisory emphasizes the possibility of misinformation proliferating with the deployment of AI on an intermediary’s platform. To this effect, the 2nd Advisory states that non-compliance with the Intermediary Rules may result in penal consequences against intermediaries, or their identified users, under the IT Act as well as criminal law. Hence, the 2nd Advisory per se may not be binding, but non-compliance with it could amount to a breach of the diligence norms under the Intermediary Rules, which can be enforced against intermediaries.
5. Stipulations:
5.1 Where an intermediary proposes to use and deploy AI Tools, it is required to comply with the following key stipulations:
(i) use of AI Tools should not permit users to host, display, upload, modify, publish, transmit, store, update, or share Prohibited Information;
(ii) inform the users through terms of use or user agreements about the consequences of dealing with Prohibited Information in line with what is provided in Rule 3 of the Intermediary Rules;
(iii) ensure that the use of AI Tools does not permit bias or discrimination, or threaten the integrity of the electoral process;[8]
(iv) deploy under-testing or unreliable AI Tools only after appropriate labeling, where such label must call out the possible and inherent unreliability or fallibility of the generated output data;
(v) provide for a consent pop-up mechanism to inform the users about such unreliability; and
(vi) if synthetic creation, generation, or modification of text, audio, visual, or audio-visual information is allowed on the platform, label such synthetic information or embed in it a unique and permanent metadata identifier, so that its origin and computer-generated nature are easily identifiable, along with the identity of the user or intermediary.
5.2 Stipulations (i) and (ii) are derivatives of Rule 3, and intermediaries must ensure compliance. Concerning the labeling and consent requirements for untested or unreliable AI Tools at (iv) and (v) above, compliance attaches to deployment rather than to each item of generated output; it can be viewed as procedural, and minimal disclaimers aligned with the 2nd Advisory should suffice, so this is unlikely to pose challenges (a minimal sketch of such a disclaimer flow follows below). Further, with India gearing up for general elections, the obligation to ensure that AI Tools do not impact the integrity of the electoral process squarely strikes at the proliferation of misinformation and fake news, which is already included as Prohibited Information.
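Purely by way of illustration, the Python sketch below shows one way a deployer might gate an under-testing AI Tool behind an unreliability label and a consent pop-up, in the spirit of stipulations (iv) and (v). The label text, function names, and flow are our assumptions; the 2nd Advisory prescribes no specific wording or mechanism.

```python
# Illustrative sketch only: gating an under-testing AI Tool behind a
# label and explicit user consent. The wording and names below are
# assumptions, not text prescribed by MeitY.

UNRELIABILITY_LABEL = (
    "This feature uses an under-testing AI model. Generated output may be "
    "unreliable or erroneous; please verify it independently."
)

def generate_with_label(prompt, model, show_consent_popup):
    """Gate generation behind a consent pop-up and label the output."""
    # show_consent_popup renders the label and returns True if the user accepts.
    if not show_consent_popup(UNRELIABILITY_LABEL):
        raise PermissionError("User declined the unreliability notice.")
    output = model(prompt)
    # Keep the caveat attached to the generated output itself.
    return f"[AI-generated; may be unreliable]\n{output}"

# Stand-in callables for demonstration:
if __name__ == "__main__":
    echo_model = lambda p: f"(model output for: {p})"
    accept = lambda label: (print(label), True)[1]  # show label, then accept
    print(generate_with_label("draft a note", echo_model, accept))
```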
But the mandates around preventing bias and around metadata tagging are onerous and lack the detailing necessary for compliance as well as implementation.
5.3 While the intent is to prevent bias or discrimination, the mandate may be detached from the functional reality of AI Tools. Bias can originate anywhere in the AI chain, from design to deployment, and may be inherent in the foundational models and languages used to train AI Tools. For example, most Indian dialects and languages attribute grammatical gender to nouns, so the bias evaluation and minimization process will differ from that used for English; this can result in gender-specific output where a gender-neutral one is expected. Further, some biases are hard to foresee and predict, and only crop up after use in a specific context. It is therefore difficult to ensure absolute compliance with a mandate to eliminate bias and discrimination.
5.4 On tagging identifiers, the 2nd Advisory again fails to clarify the meaning of computer-generated synthetic information, as no output will be completely de hors the input data fed into AI Tools at the development stage and, subsequently, during deployment and use. Given that the intent is to tackle misinformation and deepfakes, the most plausible interpretation of synthetic information would cover large portions of user prompt-based generated output. While metadata tagging is possible, and compliance with the 2nd Advisory provides the legal basis for processing such metadata (arguably, one needs to assess whether it will qualify as personal data!), it would be a time-consuming and expensive process for entities. In the past, identifying the origin of messages has posed technical challenges for some key intermediaries, including WhatsApp.
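Again by way of illustration only, the sketch below embeds a provenance identifier into a generated PNG using Pillow’s text-chunk metadata. The key names (“SyntheticOrigin”, “ProvenanceID”, “CreatorID”) and the UUID scheme are our assumptions; the advisory prescribes no format, and production systems would more likely adopt a standard such as C2PA content credentials. Notably, PNG text chunks are easily stripped on re-encoding, which itself illustrates why a truly “unique and permanent” identifier is hard to guarantee.

```python
# Illustrative sketch only: embedding origin markers and a unique
# identifier into a generated PNG image. Key names and the identifier
# scheme are assumptions, not a format prescribed by the 2nd Advisory.

import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_synthetic_image(image, dst_path, creator_id):
    """Attach origin markers and a unique identifier to a generated image."""
    identifier = str(uuid.uuid4())                 # unique per artefact
    meta = PngInfo()
    meta.add_text("SyntheticOrigin", "computer-generated")
    meta.add_text("ProvenanceID", identifier)      # unique, machine-readable ID
    meta.add_text("CreatorID", creator_id)         # user/intermediary identity
    image.save(dst_path, pnginfo=meta)
    return identifier

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), "white")      # stand-in for AI-generated output
    tag = tag_synthetic_image(img, "synthetic.png", "intermediary-001")
    print("Embedded ProvenanceID:", tag)
    print(Image.open("synthetic.png").text)        # read back the embedded tags
```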
6. Conclusion
While the focus has been on intermediaries and their march to compliance, the 2nd Advisory will have significant implications for AI Tool developers, whether or not they directly engage with users. Where the AI Tool developer is a non-intermediary but its model is used by an intermediary, the onerous obligations are likely to trickle down contractually into their arrangements with intermediaries. Developers will likely have to cooperate with contracting intermediaries on increased scrutiny of development modules, tests, and training models before deployment. Furthermore, developers would be expected to provide robust representations and warranties, allow audits, and agree to indemnity obligations. In essence, despite being outside the ambit of the Intermediary Rules, non-intermediary developers will be expected to provide tighter contractual protections and, perhaps, regular feedback to the intermediaries, so that the latter are in a position to comply. If the AI Tool developer is itself an intermediary, as with OpenAI’s ChatGPT, the 2nd Advisory applies directly. Either way, this is a significant regulatory move that sets the stage for, and has heightened speculation around, the future of AI regulation in India.
[1] Advisory eNo. 2(4)/2023-CyberLaws-3 dated March 1, 2024
[2] Advisory eNo. 2(4)/2023-CyberLaws-3 dated March 15, 2024
[3] The author’s analysis of the Intermediary Rules can be accessed here. Please note that the analysis is dated; the Rules have since been amended, and those amendments are not captured in that post.
[4] The fact check unit under the Press Information Bureau, Ministry of Information and Broadcasting, has been notified as the relevant body through Gazette Notification S.O. 1491(E) dated March 20, 2024.
[5] Section 87(1) read with Sections 87(2)(z) and (zg) allows the Central Government to issue guidelines to intermediaries.
[6] The briefing of the said order can be accessed here (last accessed on March 28, 2024)
[7] X posts can be accessed here and here (last accessed on March 28, 2024)
[8] In February 2024, representatives from OpenAI met officials of the Election Commission of India to discuss means of preventing misuse of ChatGPT in the electoral process, as well as to explore collaboration with the Commission.