India's Deepfake Reckoning: The IT (Intermediary Guidelines) Amendment Rules, 2026
On 10 February 2026, the Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026—India's first dedicated regulatory framework for AI-generated synthetic media—through Gazette Notification G.S.R. 120(E). Coming into force on 20 February 2026, the rules introduce a binding statutory definition of "Synthetically Generated Information" (SGI), compress takedown timelines to as little as two hours, mandate permanent provenance metadata on all synthetic content, and make compliance a condition for retaining safe harbour protection under Section 79 of the IT Act, 2000.
The regulatory impulse was unmistakable. A series of high-profile deepfake incidents involving manipulated videos of business leaders, film actors, and political figures had prompted the Prime Minister to describe the phenomenon as a national "crisis." When a major AI chatbot generated indecent and sexually explicit imagery in early 2026, MeitY issued a formal rectification order demanding remediation within 72 hours, underscoring the government's readiness to enforce.
For legal and compliance professionals, technology platforms, generative AI startups, and digital media companies, the 2026 First Amendment is a paradigm shift. Indian digital law has moved from regulating what content is posted to regulating how it is created, labelled, and traced.
Background & Context: From Advisory to Binding Law
India's regulatory encounter with deepfakes began in earnest in late 2023 with viral manipulated videos. Initially, MeitY responded with non-binding advisories. However, as deepfake misuse escalated through 2024 and 2025—featuring AI-cloned voices in financial scams and non-consensual intimate imagery—the existing 2021 IT Rules proved insufficient. The 2026 amendment was notified under Section 87 of the Information Technology Act, 2000, establishing arguably the world's most operationally prescriptive deepfake regime.
The Core Definition: What Is 'Synthetically Generated Information' (SGI)?
The amendment defines SGI as information that is artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that makes it appear reasonably authentic or true. The definition has two defining features:
- Technology-Neutral: It covers generative AI, neural networks, face-swap algorithms, and future synthetic media technologies.
- Perceptual Threshold: What matters is whether the content is likely to be perceived as indistinguishable from reality, regardless of whether the creator intended to deceive.
Note: Pure text-only AI content (like AI-generated articles or chatbot text responses) is currently excluded from the SGI definition, though it remains subject to other applicable laws. The rules also include carve-outs for routine editing (color correction, noise reduction) and accessibility improvements.
The Compliance Architecture Across the Content Chain
I. Prohibited SGI Categories
Intermediaries must deploy automated detection to prevent the generation of: CSAM, Non-Consensual Intimate Imagery (NCII), false documents/forged records, content facilitating explosives/arms, and deepfakes falsely impersonating individuals to deceive.
II. Mandatory Labelling & Metadata
Permitted SGI must carry: (i) a permanent visible watermark covering at least 10% of the screen area; (ii) a spoken disclaimer for the first 10% of audio duration; and (iii) a permanent, unmodifiable unique metadata identifier (digital fingerprint) embedded in the file.
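The visual and audio thresholds above are mechanically checkable. The sketch below is a hypothetical compliance validator (the `SGIAsset` structure and function names are illustrative, not from the rules) that flags assets failing the 10% watermark-area, 10% audio-disclaimer, or metadata-identifier requirements:

```python
from dataclasses import dataclass
from typing import Optional

# Thresholds taken from the rules: watermark must cover at least 10% of
# the visual area; the spoken disclaimer must span the first 10% of the
# audio's duration.
MIN_WATERMARK_AREA_FRACTION = 0.10
MIN_DISCLAIMER_DURATION_FRACTION = 0.10

@dataclass
class SGIAsset:
    frame_width: int           # pixels
    frame_height: int          # pixels
    watermark_width: int       # pixels
    watermark_height: int      # pixels
    audio_duration_s: float    # total audio length in seconds
    disclaimer_end_s: float    # timestamp where the spoken disclaimer ends
    metadata_id: Optional[str] # embedded provenance identifier, if any

def labelling_violations(asset: SGIAsset) -> list:
    """Return human-readable labelling violations (empty list = compliant)."""
    violations = []
    frame_area = asset.frame_width * asset.frame_height
    wm_area = asset.watermark_width * asset.watermark_height
    if wm_area < MIN_WATERMARK_AREA_FRACTION * frame_area:
        violations.append("watermark covers less than 10% of screen area")
    if asset.audio_duration_s > 0 and (
        asset.disclaimer_end_s
        < MIN_DISCLAIMER_DURATION_FRACTION * asset.audio_duration_s
    ):
        violations.append("spoken disclaimer shorter than first 10% of audio")
    if not asset.metadata_id:
        violations.append("missing permanent metadata identifier")
    return violations
```

In practice, the hard engineering problem is not the arithmetic but making the embedded identifier tamper-resistant across re-encoding and cropping; that is where provenance standards such as C2PA-style manifests become relevant.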
III. Obligations for AI Tool Providers
Providers of generative AI platforms or voice cloning services must prominently warn users of criminal liability for misuse under the Bharatiya Nyaya Sanhita (BNS), 2023, and the POCSO Act, with violations resulting in account suspension and reporting to law enforcement.
IV. Obligations for SSMIs
Significant Social Media Intermediaries (>5M users) must obtain user declarations for AI-generated content AND deploy automated tools to verify that declaration before publication. User self-disclosure alone is legally insufficient.
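The dual-gate requirement above (declaration plus independent automated verification) reduces to a small decision table. This is a minimal sketch of that logic, assuming a hypothetical detector verdict is available at upload time; the verdict names are illustrative, not drawn from the rules:

```python
from enum import Enum

class Verdict(Enum):
    PUBLISH = "publish"
    PUBLISH_WITH_LABEL = "publish_with_label"
    BLOCK_FOR_REVIEW = "block_for_review"

def pre_publication_check(user_declared_sgi: bool, detector_flags_sgi: bool) -> Verdict:
    """Combine the user's declaration with the platform's own detector.

    Self-declaration alone is legally insufficient: the SSMI must also run
    automated verification. An undeclared upload that the detector flags
    as synthetic is held for review rather than published unlabelled.
    """
    if user_declared_sgi:
        # Declared SGI goes out with the mandatory label regardless of
        # what the detector concludes.
        return Verdict.PUBLISH_WITH_LABEL
    if detector_flags_sgi:
        # Mismatch: undeclared but detected as synthetic.
        return Verdict.BLOCK_FOR_REVIEW
    return Verdict.PUBLISH
```

The compliance risk concentrates in the mismatch branch: false positives there are exactly the over-removal of satire and commentary that critics of the rules highlight.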
The Three-Hour Takedown Mandate
Intermediaries must now act on takedown orders from government agencies or courts within three hours of receipt, with certain categories compressed to as little as two hours. In addition, grievance officers must acknowledge complaints within 24 hours and resolve them within 7 days. These timelines effectively require platforms to maintain 24/7 compliance engineering and legal escalation teams capable of real-time automated flagging and actioning.
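For an SLA-tracking system, these timelines translate into hard deadlines computed at intake. A minimal sketch, using the 3-hour upper end of the takedown window and the 24-hour/7-day grievance timelines (the function names are illustrative):

```python
from datetime import datetime, timedelta

# Timelines under the amended rules (3h is the upper end of the
# takedown window; some categories are shorter).
TAKEDOWN_WINDOW = timedelta(hours=3)    # government/court takedown orders
ACK_WINDOW = timedelta(hours=24)        # grievance acknowledgement
RESOLUTION_WINDOW = timedelta(days=7)   # grievance resolution

def takedown_deadline(order_received_at: datetime) -> datetime:
    """Latest moment the flagged content may remain live."""
    return order_received_at + TAKEDOWN_WINDOW

def grievance_deadlines(complaint_received_at: datetime):
    """Return (acknowledge_by, resolve_by) for a user grievance."""
    return (complaint_received_at + ACK_WINDOW,
            complaint_received_at + RESOLUTION_WINDOW)
```

Timestamps should be stored timezone-aware and the clock started at receipt, since a three-hour window leaves no slack for queueing delays between intake and the enforcement pipeline.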
Safe Harbour at Stake: Section 79 Implications
The amendment makes compliance with these obligations a condition for retaining safe harbour protection under Section 79 of the IT Act, 2000. An intermediary that fails to observe the labelling, verification, or takedown requirements risks losing its immunity and being held directly liable for user-generated content on its platform.
Comparative Perspective: India vs. EU, UK, and US
| Jurisdiction | Deepfake Regulatory Approach (2026) |
|---|---|
| India (IT Amendment Rules 2026) | Highly prescriptive. 2-3 hour takedown mandates, permanent 10% watermarks, embedded provenance metadata, and automated pre-publication verification for SSMIs. |
| European Union (AI Act & DSA) | Focuses on 'General Purpose AI' obligations, copyright compliance, and systemic risk assessments. Mandates labelling but does not prescribe hour-level takedown timelines for platforms. |
| United States (State Laws) | Fragmented. No comprehensive federal deepfake law. State-level statutes target specific issues like electoral deepfakes or NCII, varying widely in enforcement standards. |
| China (Deep Synthesis Mgmt) | Requires labelling and strict identity verification before publishing SGI, closely mirroring India's approach but without the explicit 3-hour takedown windows. |
Practical Compliance Roadmap for Platforms
- Content Audit: Immediately audit all published AI-generated content to ensure it meets SGI labelling and metadata requirements.
- Technical Infrastructure: Procure or build systems to embed permanent, tamper-resistant provenance identifiers in generated content.
- Automated Detection: SSMIs must deploy technical measures for upload-time verification of user SGI declarations.
- Takedown Protocols: Revise escalation procedures to guarantee 3-hour actioning of government/court orders, 24/7.
- Terms of Service: Update platform ToS and implement quarterly user notifications regarding SGI misuse risks.
- Contract Review: Ensure vendor agreements with AI API providers include warranties for SGI compliance.
Criticisms and Open Legal Questions
Digital rights organizations have raised concerns regarding the constitutionality of the broad SGI definition and pre-publication verification requirements. Critics argue that automated detection systems carry significant error rates, risking the removal of legitimate satire or political commentary. Furthermore, mandatory pre-publication verification (Rule 4(1A)) has been characterized as a form of prior restraint, potentially conflicting with the free speech guarantee under Article 19(1)(a) of the Constitution, which admits only the narrow restrictions enumerated in Article 19(2), and with the precedent set in Shreya Singhal v. Union of India.