The Indian government is not looking to control or restrict online content, but rather to ensure transparency by requiring creators to label AI-generated content, a mandate intended to empower audiences to make informed choices, Electronics and IT Secretary S. Krishnan said on Thursday.
The statement follows the government's proposed changes to the IT rules on Wednesday, which require clear labeling of AI-generated content and increase the accountability of large platforms such as Facebook and YouTube. The goal is to curb user harm from deepfakes and misinformation by requiring platforms to verify and flag synthetic information.
Focus on transparency, not prohibition
Krishnan clarified the government's rationale, emphasising that the focus is solely on disclosure.
"All that we are asking for is to label the content," Krishnan stated. "You must put in a label which indicates whether a particular piece of content has been generated synthetically or not. We are not saying don't put it up... Whatever you're creating, it's fine. You just say it is synthetically generated. Once that is established, people can then make up their minds as to whether it is good, bad, or whatever."
Shared responsibility and enforcement
Krishnan noted that India's approach has been to promote innovation in Artificial Intelligence (AI) before imposing rules. Responsibility for the new labeling requirement, he explained, will be shared among users, companies providing AI services, and social media platforms.
Providers of computer resources or software used to create synthetic content must ensure that labels are prominent and cannot be removed. Enforcement action, he clarified, would apply only to unlawful content, a standard that applies to all online content, not just that generated by AI.
IT rule amendments
The proposed amendments provide a clear legal basis for the labeling, traceability, and accountability of synthetically-generated information.
The draft amendment, open for stakeholder comments until November 6, 2025, clearly defines synthetic content and mandates labeling, visibility, and metadata embedding to distinguish such content from authentic media. The stricter rules are intended to increase the accountability of significant social media intermediaries (those with 50 lakh or more registered users) in verifying and flagging synthetic information through reasonable and appropriate technical measures.