As AI continues to advance, enacting legislation to regulate its development and use becomes crucial. Without appropriate regulations, there is a risk of unintended harm, such as the perpetuation of bias or the infringement of privacy. A legal framework is essential to ensure AI is developed and deployed responsibly, maximizing its benefits while minimizing its risks.
We have recently seen hints of the havoc unchecked AI can create. The deepfake video of Rashmika Mandanna that went viral caused an uproar across the country and offered a glimpse of what AI is capable of: it is far more than a tool for harmless memes and homework help. What concerns citizens most now is whether Indian laws are equipped to tackle this complex and ever-evolving technology.
Is the Law Equipped to Tackle AI?
This question cannot be answered with a simple yes or no. To begin with, consider the case of Rashmika Mandanna’s deepfake video.
An FIR was lodged in the matter and registered at the Intelligence Fusion and Strategic Operations Unit of the Delhi Police’s Special Cell under sections 465 and 469 of the Indian Penal Code and sections 66C and 66E of the Information Technology Act, and a probe has been initiated.
In the aforementioned FIR, sections 465 and 469 of the Indian Penal Code deal with forgery and forgery for the purpose of harming reputation, respectively. Under the Information Technology Act, section 66C criminalizes identity theft in the digital realm, while section 66E protects privacy by criminalizing the unauthorized capturing, publishing, or transmission of images of a person’s private areas without consent. Still, some aspects remain uncovered by these acts.
It is reasonable to assert that deepfake cases can indeed be registered in FIRs and addressed under Indian law. The invocation of sections from the Indian Penal Code and the Information Technology Act in recent cases demonstrates the legal framework’s capability to address such offenses.
But the hazards of AI are not restricted to deepfake videos. With the advent of ChatGPT and other AI tools that generate answers to specific questions, which users may pass off as original content, there is no legislation that either protects “original content” from AI tools or provides a clear answer to whether content generated through AI can itself be termed original.
In an unusual case that garnered significant attention, an AI-powered application named ‘Raghav’ was acknowledged as the co-author of a copyrighted work. Later, however, the Copyright Office raised objections and sought to invalidate the registration.
Many such questions raised by the rise of AI still await answers from the legislature.
The Road Ahead
Elon Musk once said, “Mark my words, AI is far more dangerous than nukes.” It therefore becomes all the more important to keep AI in check and develop legislation that can restrain it from turning into something harmful.
According to Union Minister for Electronics and Information Technology, Ashwini Vaishnaw, the Indian government is in the process of drafting a new law concerning artificial intelligence (AI) to safeguard the rights of news publishers and content creators. This legislation intends to strike a balance between the interests of publishers, creators, and AI technologies, with the overarching goal of encouraging innovation.
The upcoming law will take the form of either standalone legislation or a component of the forthcoming Digital India Bill, which is set to replace the Information Technology Act, 2000.
The minister also stressed that creativity must be respected, both in terms of intellectual property and in financial terms.
These measures indicate that, amid the rapid expansion and progress of AI, Indian legislation is working to keep pace with this emerging technology. By managing and monitoring its development, the aim is to prevent the misuse and criminal activity that may arise alongside advances in AI.