'The obligations are easy to write into the rules but very difficult to implement technically -- and even easier to circumvent.'

The central government's proposed amendment to the Information Technology Rules -- mandating labels and disclaimers for all artificially generated content -- could raise compliance costs and burdens for social media intermediaries, industry executives and policy experts said.
The amendments aim to curb the spread of harmful and "reasonably authentic" deepfake images, audio, and videos, according to senior government officials.
"We have held several meetings with major tech companies, which have assured us that they possess the technical know-how and necessary tools to combat this issue.
"If there are any specific concerns, we will identify them during the consultation period," a senior official from the ministry of electronics and information technology (Meity) said.
Executives from social media firms and policy experts, however, said that simply labelling or embedding metadata in artificial intelligence (AI)-generated content is unlikely to solve the problem.
"The obligations are easy to write into the rules but very difficult to implement technically -- and even easier to circumvent.
"A non-technical person can generate such content and remove a watermark, label, or disclaimer within minutes," said a senior executive at a social media company.
Another executive said the government's proposed requirement that AI disclaimers or labels cover at least 10 per cent of the content's display area would interfere with the user experience and increase page-loading times across platforms.
These watermarks and disclaimers, a third executive warned, could drive more users towards large and small language models developed in countries that do not adhere to India's IT Rules.
On Wednesday, Meity released draft amendments to the IT Rules, saying that all Internet intermediaries enabling AI-generated content must 'ensure that every such information is prominently labelled or embedded with a permanent unique metadata or identifier'.
Intermediaries must also ensure they have the technical tools to verify the accuracy of user declarations regarding AI use.
Further, the user's declaration on AI-generated content must be 'prominently displayed', the ministry proposed.
Stakeholders have been asked to submit feedback by November 6.
While the compliance burden will be greater for significant social media intermediaries, experts said smaller platforms -- those with fewer than 5 million users -- may also see higher compliance costs.
Effectively labelling content could prove challenging given the scale of the problem, said Kazim Rizvi, founding director of tech policy think-tank The Dialogue.
"A vast amount of digital content today has been synthetically altered in some form," he said.
"For non-significant social media intermediaries, the obligation to label synthetic media applies only when the content is generated using in-app tools.
"If a user uploads or posts third-party-generated synthetic media, the intermediary will have no obligation to label it," Rizvi added.
Other concerns -- such as the broad definition of what constitutes synthetic media -- are also likely to complicate compliance, said Rohit Kumar, founding partner at public policy firm The Quantum Hub.
"Given how widely AI tools are integrated into everyday content creation, this may lead to excessive or routine labelling, reducing the effectiveness of such notices over time.
"Moreover, automated detection mechanisms are still far from reliable -- they can misclassify legitimate, lightly edited, or AI-assisted content as synthetic, while sophisticated deepfakes may evade detection altogether," Kumar said.