India's Proposed AI Labelling Rules Won't Stop Deepfakes — They'll Stifle Innovation

A cottage industry of vendors, consultants, and certification schemes is poised to add new layers of cost that slow product development and burden users, even as actual threats slip through undetected.

Image: Dragon Claws/iStock.com
By Dev Chandrasekhar

Dev Chandrasekhar advises corporates on big picture narratives relating to strategy, markets, and policy.

November 20, 2025 at 4:26 AM IST

India's latest regulatory proposal—mandatory labelling of AI-generated content—has the familiar ring of decisive action on a hot-button issue. The Ministry of Electronics and Information Technology wants platforms to slap visible warnings on synthetic media, covering at least 10% of an image's display area or the opening portion of an audio clip. Officials pitch it as a safeguard for India's billion internet users against deepfakes and misinformation. In reality, it's an expensive game of regulatory whack-a-mole that risks hitting Indian innovation harder than the harms it aims to address.

The scale problem is staggering. OpenAI's Sam Altman says India is the company's second-largest user market, with adoption tripling in a year. Google, Microsoft, and Meta are pouring billions into Indian AI infrastructure. According to a NITI Aayog study, AI adoption can contribute an additional $500-600 billion to India’s GDP by 2035. Into this growth trajectory, the government is introducing compliance requirements that, in practice, are impossible to meet.

Building reliable AI-detection systems requires industrial-scale machine learning infrastructure and continuous retraining that costs millions of dollars a month. For Alphabet or Meta, with their $60 billion R&D budgets, that is manageable. For Bengaluru startups competing in a tight funding cycle, it's existential.

Cost, however, is not the real problem. Futility is.

A recent Stanford study found that leading AI detectors achieve only 60-70% accuracy, with false-positive rates above 15%. At the scale of billions of pieces of content a day, that error rate becomes catastrophic: a 15% false-positive rate on just one billion items means roughly 150 million authentic posts mislabelled daily. Under the current draft, even routine edits like cropping a photo or compressing an audio clip could trigger labelling requirements, generating absurd compliance overhead with little corresponding safety benefit.

The competitive distortion is equally troubling. Foreign platforms and hardware makers, mostly American, dominate India's digital ecosystem. They can absorb compliance costs as the price of market access, then pass them on to advertisers and creators. Indian platforms such as ShareChat, Koo, and Chingari have no such pricing power. Already fighting uphill battles for user attention against YouTube and Instagram, they now face millions of dollars in new detection infrastructure and legal compliance costs, further tilting the playing field against them.

Supporters argue that consumer protection outweighs competitive concerns. But protection requires interventions that actually work, and the evidence suggests labelling does not. When Meta rolled out "Made with AI" labels earlier this year, users revolted against tags appearing on minor edits, forcing a hasty rebrand to "AI Info." Research from MIT and the University of Michigan shows that labels can even reduce trust in authentic content, feeding the "liar's dividend" in which audiences dismiss inconvenient truths as synthetic.

Meanwhile, bad actors will easily evade the rules. Content created on an AI platform abroad might be shared over WhatsApp, screenshotted, reposted to Instagram, downloaded, re-uploaded to YouTube, and forwarded again. At what point in that chain does a label survive? MeitY's enforcement jurisdiction ends at India's digital borders. Viral content does not respect borders or compliance regimes.

Not everyone sees a crisis; a new market is already emerging. Vendors offering "AI detection as a service" are pitching Indian platforms and enterprises, promising compliance solutions. Expect a proliferating ecosystem of auditors, certifiers and consultants, an entire compliance economy built on regulatory complexity, not public safety.

There’s a smarter path. Instead of mandating crude "AI or not" labels that neither work technically nor influence behaviour, policymakers should target high-risk harms: political advertising, financial fraud, and impersonation. California's AB 3211 focuses on high-risk contexts. The EU's AI Act takes a risk-tiered approach. Even China, no minimalist, applies labelling requirements far more surgically to content deemed potentially harmful.

India could do the same by investing in cryptographic provenance systems like C2PA that verify content authenticity at creation, while keeping adoption voluntary through incentives rather than mandates. It could fund media literacy campaigns, strengthen existing defamation and impersonation laws, and audit the amplification algorithms that elevate inflammatory content regardless of origin. These approaches target root causes without kneecapping domestic innovation.

The timing could not be more critical. India is positioning itself as a global AI leader, with engineering talent, English-language advantages, and a vast digital population creating real competitive strengths. But regulatory overhang shapes where technology gets built. Entrepreneurs deciding where to base their next AI product will factor in compliance risk and execution drag.

MeitY still has time to course-correct before final rules are issued. A risk-based framework, clear exemptions for routine edits, and a pilot program with measurable outcomes would signal thoughtful, evidence-driven regulation rather than reactive policymaking. The alternative is to implement unworkable rules that burden Indian companies while leaving real harms unaddressed. "Made in India AI" could then become a cautionary tale rather than a success story.