Good-faith AI use exempt as India tightens synthetic content rules
The government has clarified that AI-assisted content created for education, training, or technical enhancement without misrepresentation will not require synthetic labelling under the amended IT Rules, which take effect from February 20, 2026.

The government on Wednesday clarified that content created with the help of artificial intelligence for educational purposes, improving clarity, or making technical changes without misrepresentation does not need to be labelled as synthetically generated.
The clarification was issued in the form of frequently asked questions (FAQs), a day after the government tightened rules for social media platforms such as YouTube and X. The amended rules mandate the takedown of unlawful content within three hours and require clear labelling of all AI-generated and synthetic content.
What qualifies as Synthetically Generated Information (SGI)?
In its FAQs, the Ministry of Electronics and IT (MeitY) explained that not every AI-assisted creation or edit qualifies as Synthetically Generated Information (SGI), the term the amended rules use for such content.
“Not every AI-assisted creation or editing qualifies as SGI. Content is treated as SGI only when it is artificially or algorithmically created or altered in a way that it appears real or authentic or true and is likely to be indistinguishable from a real person or real-world event,” the ministry said.
Routine or good-faith actions such as editing, formatting, enhancement, technical correction, colour adjustment, noise reduction, transcription, or compression will not be treated as SGI, provided these actions do not materially alter, distort, or misrepresent the substance, context, or meaning of the underlying content.
Educational and technical uses exempted
The FAQ clarified that content created in good faith, such as education and training materials, presentations, published notices, and files compressed for faster uploads, will not be considered synthetically generated.
Similarly, the use of technology to remove background noise from audio recordings, transcribe interviews, stabilise shaky videos, or correct colour balance will not fall under the SGI category.
The ministry also stated that routine preparation, formatting, or designing of documents, presentations, PDF files, educational or training materials, or research outputs will not be treated as SGI, as long as they do not result in false documents or electronic records.
Additionally, AI tools used for adding subtitles, translating speeches without altering content, generating summaries or tags for search optimisation, providing audio descriptions for visually impaired users, or improving clarity by reducing echo or distortion will be exempt — provided they do not manipulate any material part of the original content.
When AI content will be treated as unlawful
However, the government made it clear that if AI tools are used to generate fake certificates, fake official letters, forged IDs, or fabricated electronic records, such content will not fall under these exclusions. It may instead be treated as unlawful SGI or false records.
The clarification also stated that the use of computer resources solely to improve accessibility, clarity, quality, translation, description, searchability, or discoverability will not be treated as SGI, provided the process does not generate, alter, or manipulate any material part of the underlying content.
Amended IT rules to take effect from February 20, 2026
The Ministry of Electronics and Information Technology issued a gazette notification amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The new rules will come into force on February 20, 2026.
The amendments come in response to the growing misuse of artificial intelligence to create and circulate obscene, deceptive, and fake content on social media platforms.
Authorities had flagged a rise in AI-generated deepfakes, non-consensual intimate imagery, and misleading videos that impersonate individuals or fabricate real-world events, often spreading rapidly online.
Faster takedowns and mandatory metadata
Under the amended rules, platforms must ensure faster takedowns of unlawful content, mandatory labelling of AI-generated material, and embedding of permanent metadata or identifiers with AI content. The rules also shorten user grievance redressal timelines.
The revised framework places accountability on both social media platforms and AI tools to prevent the promotion and amplification of unlawful synthetic material.