Indian government asks genAI developers to self-regulate

News
Mar 18, 2024 | 4 mins
Artificial Intelligence, Generative AI, Government

India is changing its approach to regulating generative AI, after initially requiring developers to submit at-risk models for government approval.

Credit: Gearstd/Shutterstock

Developers of risky generative AI models are now free to release them without government approval, but are being encouraged to self-regulate by adding labels to the output of their models warning of their potential unreliability.

In a reversal of its previous stance, the Ministry of Electronics and Information Technology (MeitY) issued a fresh advisory on March 15, abandoning the requirement that AI models under development receive government approval.

The updated advisory on AI governance emphasizes industry self-regulation and proactive measures to tackle emerging challenges. Rather than positioning the government as a watchdog, it aims to strike a balance between fostering innovation and mitigating the potential risks associated with AI technologies.

It recommends labelling AI-generated content that is vulnerable to misuse, for example in creating deceptive material such as deepfakes: “Under-tested/unreliable Artificial Intelligence foundational model(s)/ LLM/Generative AI, software(s) or algorithm(s) or further development on such models should be made available to users in India only after appropriately labelling the possible inherent fallibility or unreliability of the output generated,” said the advisory.

Additionally, it suggests the use of a “consent popup” or equivalent mechanism to explicitly inform users about potential fallibility or unreliability.

A welcome change

IDC associate VP of research Sharath Srinivasamurthy welcomed the change of direction.

“I see this [advisory] as a step in the right direction as there was a backlash on the previous advisory. AI, especially genAI, is an emerging technology, and regulations will evolve as we go through this journey,” he said. “I think regulations are needed, especially considering this technology’s impact on people. The only question is what amount of regulation is needed. It is good to see the government moving in the right direction with an advisory to start with.”

The new advisory gives developers more freedom to innovate, while placing guardrails on usage.

“As technology evolves, we will see new use cases and, on the other hand, new concerns. The government needs to be agile in policy making and that is what is happening,” said Srinivasamurthy, adding that he expected the government to keep working on finding the right balance between risk and innovation.

Akshara Bassi, senior analyst at Counterpoint, weighed in on the implications for AI model developers. “MeitY’s decision to remove the requirement for government approval before launching untested AI models will lead companies like OpenAI, Google, and Microsoft to integrate their models and services directly into India’s existing app ecosystem,” she said. “This move is expected to make services smarter and shorten the time it takes to bring them to market.”

Bassi expects the move will promote the adoption of AI services as a fundamental feature in systems and applications, helping spread their use cost-effectively as enterprises will have a broader range of AI models to choose from.

“As the ecosystem in India is in nascent stages, the government is providing a boost by changing regulations to drive adoption of AI and boost indigenous AI platforms; however, as it matures, we would see more changes in the ecosystem from all stakeholders,” she said.

Labelling deepfakes

The government advisory suggests that if an intermediary allows or facilitates the creation or modification of text, audio, visual, or audio-visual information through its software or any other computer resource in a way that could be used to produce deepfakes or misinformation, then that information must be labelled or embedded with permanent unique metadata or an identifier.

This label or identifier should be able to identify the intermediary’s computer resource that has been used to create, generate, or modify such information. Additionally, if any user makes changes to the information, the metadata should be configured to identify the user or computer resource that made those changes.
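The advisory does not prescribe any particular format for this metadata. Purely as an illustration of the kind of provenance record it describes, a developer might chain records identifying the creating resource and any later editors; all field names and values below are hypothetical assumptions, not drawn from the advisory.

```python
import hashlib
from typing import Optional

def make_provenance_record(content: bytes, resource_id: str,
                           editor: Optional[str] = None,
                           parent: Optional[dict] = None) -> dict:
    """Build an illustrative provenance record for AI-generated content.

    resource_id identifies the computer resource that created or modified
    the content; editor identifies a user who made changes. The field
    names here are hypothetical, not mandated by the advisory.
    """
    return {
        # Fingerprint of the content this record describes.
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created_by_resource": resource_id,
        "modified_by": editor,
        # Chain back to the previous record so later edits stay traceable.
        "parent_record": parent,
        "label": "AI-generated content: output may be unreliable",
    }

# Original generation by a model service:
original = make_provenance_record(b"generated text",
                                  resource_id="model-service-01")
# A user edit produces a new record chained to the original:
edited = make_provenance_record(b"edited text",
                                resource_id="editor-app-07",
                                editor="user-42", parent=original)
```

In a sketch like this, each edit produces a new record that points back to its predecessor, so both the originating resource and every subsequent editor remain identifiable, as the advisory asks.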

MeitY reminded developers that other existing legislation still applies: “It is reiterated that non-compliance with the provisions of the IT Act 2000 and/or IT Rules could result in consequences including but not limited to prosecution under the IT Act 2000 and other criminal laws, for intermediaries, platforms and their users.”