Disclaimer: As per the rules of the Bar Council of India, this content is for educational and informational purposes only. It does not constitute legal advice or solicitation.

India's Deepfake Reckoning: The IT (Intermediary Guidelines) Amendment Rules, 2026

By M S Sulthan Legal Associates, Kozhikode | April 14, 2026 | Cyber & Data Privacy / Technology Law

On 10 February 2026, the Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026—India's first dedicated regulatory framework for AI-generated synthetic media—through Gazette Notification G.S.R. 120(E). Coming into force on 20 February 2026, the rules introduce a binding statutory definition of "Synthetically Generated Information" (SGI), compress takedown timelines to as little as two hours, mandate permanent provenance metadata on all synthetic content, and make compliance a condition for retaining safe harbour protection under Section 79 of the IT Act, 2000.

The regulatory impulse was unmistakable. A series of high-profile deepfake incidents involving manipulated videos of business leaders, film actors, and political figures had prompted the Prime Minister to describe the phenomenon as a national "crisis." When a major AI chatbot generated indecent and sexually explicit imagery in early 2026, MeitY issued a formal rectification order demanding remediation within 72 hours, underscoring the government's readiness to enforce.

For legal and compliance professionals, technology platforms, generative AI startups, and digital media companies, the 2026 Amendment is a paradigm shift. Indian digital law has moved from regulating what content is posted to regulating how it is created, labelled, and traced.

Background & Context: From Advisory to Binding Law

India's regulatory encounter with deepfakes began in earnest in late 2023 with viral manipulated videos. Initially, MeitY responded with non-binding advisories. However, as deepfake misuse escalated through 2024 and 2025—featuring AI-cloned voices in financial scams and non-consensual intimate imagery—the existing 2021 IT Rules proved insufficient. The 2026 amendment was notified under Section 87 of the Information Technology Act, 2000, establishing arguably the world's most operationally prescriptive deepfake regime.

The Core Definition: What Is 'Synthetically Generated Information' (SGI)?

Rule 2(1)(wa): Defines SGI as any audio, visual, or audio-visual information that is artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that such information appears to be real, authentic, or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as, indistinguishable from a natural person or real-world event.

Two features of this definition stand out:

  • Technology-Neutral: It covers generative AI, neural networks, face-swap algorithms, and future synthetic media technologies.
  • Perceptual Threshold: What matters is whether the content is likely to be perceived as indistinguishable from reality, regardless of the creator's deceptive intent.

Note: Pure text-only AI content (like AI-generated articles or chatbot text responses) is currently excluded from the SGI definition, though it remains subject to other applicable laws. The rules also include carve-outs for routine editing (color correction, noise reduction) and accessibility improvements.

The Compliance Architecture Across the Content Chain

I. Prohibited SGI Categories

Intermediaries must deploy automated detection to prevent the generation of: CSAM, Non-Consensual Intimate Imagery (NCII), false documents/forged records, content facilitating explosives/arms, and deepfakes falsely impersonating individuals to deceive.

II. Mandatory Labelling & Metadata

Permitted SGI must carry: (i) a permanent visible watermark covering at least 10% of the screen area; (ii) a spoken disclaimer for the first 10% of audio duration; and (iii) a permanent, unmodifiable unique metadata identifier (digital fingerprint) embedded in the file.
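The two numeric thresholds above lend themselves to mechanical verification. A minimal illustrative sketch, assuming the 10% figures apply to pixel area and to playback duration respectively (all function and parameter names here are hypothetical, not drawn from the rules):

```python
# Illustrative checks for the SGI labelling thresholds summarised above.
# The 10% ratios come from the rules as described; everything else
# (field names, function signatures) is a hypothetical sketch.

WATERMARK_MIN_AREA_RATIO = 0.10    # visible watermark >= 10% of screen area
DISCLAIMER_MIN_AUDIO_RATIO = 0.10  # spoken disclaimer over first 10% of audio

def watermark_compliant(frame_w: int, frame_h: int,
                        wm_w: int, wm_h: int) -> bool:
    """True if the watermark covers at least 10% of the frame's pixel area."""
    return (wm_w * wm_h) >= WATERMARK_MIN_AREA_RATIO * (frame_w * frame_h)

def disclaimer_compliant(total_secs: float, disclaimer_secs: float) -> bool:
    """True if the spoken disclaimer spans at least the first 10% of audio."""
    return disclaimer_secs >= DISCLAIMER_MIN_AUDIO_RATIO * total_secs

# Example: on a 1920x1080 frame, a 640x360 watermark covers ~11.1% of
# the area and would satisfy the 10% threshold.
```

The metadata identifier in item (iii) is harder to reduce to a one-line check, since "permanent and unmodifiable" implies cryptographic binding to the file rather than a simple tag.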

III. Obligations for AI Tool Providers

Providers of generative AI platforms or voice-cloning services must prominently warn users of criminal liability under the Bharatiya Nyaya Sanhita (BNS), 2023, and the POCSO Act for misuse, with violations resulting in account suspension and reporting to law enforcement.

IV. Obligations for SSMIs

Significant Social Media Intermediaries (SSMIs, platforms with more than 5 million registered users) must obtain a user declaration for AI-generated content and deploy automated tools to verify that declaration before publication. User self-disclosure alone is legally insufficient.

The Three-Hour Takedown Mandate

Rule 3(1)(d) Accelerated Timelines: The compliance window for removing unlawful content pursuant to a lawful court order or government direction has been slashed from 36 hours to just 3 hours. For NCII and CSAM, the window is compressed to 2 hours.

Furthermore, grievance officers must now acknowledge complaints within 24 hours and resolve them within 7 days. This mandates that platforms maintain 24/7 compliance engineering and legal escalation teams capable of real-time automated flagging and actioning.
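For escalation tooling, these windows reduce to simple deadline arithmetic from the moment an order or complaint is received. A sketch, assuming the timelines summarised above (the category keys are hypothetical labels, not statutory terms):

```python
from datetime import datetime, timedelta

# Deadline calculator for the accelerated timelines described above.
# Category names are illustrative shorthand, not terms from the rules.
TAKEDOWN_WINDOWS = {
    "ncii_csam": timedelta(hours=2),            # NCII / CSAM removal
    "court_or_govt_order": timedelta(hours=3),  # other lawful orders
    "grievance_ack": timedelta(hours=24),       # acknowledge complaint
    "grievance_resolution": timedelta(days=7),  # resolve complaint
}

def deadline(received_at: datetime, category: str) -> datetime:
    """Latest compliant actioning time for an order received at `received_at`."""
    return received_at + TAKEDOWN_WINDOWS[category]
```

A two- or three-hour window received at 2 a.m. still expires before business hours, which is precisely why the text above speaks of 24/7 escalation teams rather than next-day review queues.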

Safe Harbour at Stake: Section 79 Implications

The Compliance Bargain: Safe harbour protection under Section 79 of the IT Act is now conditional on SGI compliance. Platforms that fail to implement mandatory labelling, miss the accelerated takedown windows, or allow Prohibited SGI to remain accessible will lose their statutory immunity entirely. They can then be sued as the direct publisher of the defamatory or fraudulent synthetic content.

Comparative Perspective: India vs. EU, UK, and US

  • India (IT Amendment Rules, 2026): Highly prescriptive. 2-3 hour takedown mandates, permanent 10% watermarks, embedded provenance metadata, and automated pre-publication verification for SSMIs.
  • European Union (AI Act & DSA): Focuses on general-purpose AI obligations, copyright compliance, and systemic risk assessments. Mandates labelling but does not prescribe hour-level takedown timelines for platforms.
  • United States (state laws): Fragmented; there is no comprehensive federal deepfake law. State-level statutes target specific issues such as electoral deepfakes or NCII and vary widely in enforcement standards.
  • China (Deep Synthesis Provisions): Requires labelling and strict identity verification before synthetic content is published, closely mirroring India's approach but without explicit hour-level takedown windows.

Practical Compliance Roadmap for Platforms

  • Content Audit: Immediately audit all published AI-generated content to ensure it meets SGI labelling and metadata requirements.
  • Technical Infrastructure: Procure or build systems to embed permanent, tamper-resistant provenance identifiers in generated content.
  • Automated Detection: SSMIs must deploy technical measures for upload-time verification of user SGI declarations.
  • Takedown Protocols: Revise escalation procedures to guarantee 3-hour actioning of government/court orders, 24/7.
  • Terms of Service: Update platform ToS and implement quarterly user notifications regarding SGI misuse risks.
  • Contract Review: Ensure vendor agreements with AI API providers include warranties for SGI compliance.

Criticisms and Open Legal Questions

Digital rights organizations have raised concerns regarding the constitutionality of the broad SGI definition and pre-publication verification requirements. Critics argue that automated detection systems carry significant error rates, risking the removal of legitimate satire or political commentary. Furthermore, mandatory pre-publication verification (Rule 4(1A)) has been characterized as a form of prior restraint, potentially conflicting with the free speech guarantee under Article 19(1)(a), which may be restricted only on the narrow grounds enumerated in Article 19(2), and with the precedent set in Shreya Singhal v. Union of India.

Frequently Asked Questions (FAQ)

1. What exactly qualifies as Synthetically Generated Information (SGI)?
Under the 2026 Rules, SGI is any audio, visual, or audio-visual content created or altered using a computer resource (like AI) that appears indistinguishable from a real person or event. It specifically excludes pure text-only outputs and routine editing (like color correction) that doesn't alter the core meaning.
2. What is the new takedown timeline for unlawful deepfakes?
Platforms must remove or disable access to unlawful content within 3 hours of receiving a lawful court order or authorized government direction. For severe content like Non-Consensual Intimate Imagery (NCII) or Child Sexual Abuse Material (CSAM), the deadline is strictly 2 hours.
3. What happens if a social media platform fails to comply with these rules?
If an intermediary fails to implement mandatory labelling or metadata embedding, or misses the 2-3 hour takedown windows, it automatically forfeits its "Safe Harbour" immunity under Section 79 of the IT Act, exposing the platform to direct civil and criminal liability for user-generated content.
4. Do users have to declare if they upload AI-generated content?
Yes. Significant Social Media Intermediaries (SSMIs) must obtain a declaration from users at the time of upload confirming whether the content is SGI. Moreover, the platform is legally obligated to use automated technical tools to independently verify the accuracy of the user's declaration before publishing.

Ensure your digital platform or generative AI startup is fully compliant with the 2026 IT Amendment Rules to protect your Safe Harbour status. Contact our Cyber & Data Privacy desk.

Email: contact@mssulthan.com

© 2026 M S Sulthan Legal Associates, Kozhikode. All Rights Reserved.
