Deepfakes and Personality Rights: the IP Law Gap in India
Your Face Is Not Your Own Anymore: Deepfakes, Personality Rights and Why Indian IP Law Is Falling Behind.
Somewhere in late 2023, a video began circulating on Indian social media. It showed the actress Rashmika Mandanna apparently doing something she had never done, saying something she had never said. It looked real. It sounded real. And it was fake: a deepfake, a fabrication in which artificial intelligence had superimposed her face onto another body with disturbing precision. When the video went viral, the response was swift: outrage, demands for regulation, and an uncomfortable realisation that in 2023 a person's face had become, in effect, raw material that anyone could misuse. I begin there because it is easy to assume that deepfakes are a celebrity problem, something that happens to famous people and gets fixed by a court order. But the technology does not discriminate. The same tools that recreated Rashmika Mandanna's likeness can recreate yours or mine. For a law student, that raises an awkward question: is Indian law actually any good at protecting you if someone makes a convincing imitation of you and puts it on the internet?

The honest answer, which I will try to substantiate in this piece, is: yes, a little, sometimes, if you are wealthy enough to sue. And that is simply not good enough.
II. What Is a Deepfake, and Why Is It a Legal Problem?
Deepfakes are synthetic media (video, images and audio) created with artificial intelligence. The most common technique uses generative adversarial networks (GANs), in which two neural networks compete against each other: one tries to produce convincing fakes while the other tries to detect them. After thousands of training rounds, the generator becomes extremely good. More recently, diffusion models, the technology behind tools such as Midjourney and Stable Diffusion, have become capable of producing photorealistic synthetic imagery from as little as a single photograph. The legal problem with deepfakes is not that they are deceptive as such. It is that they sever a person's identity from that person's consent. Using someone's face in a commercial advertisement without even asking is a wrong the law has long recognised, at least in theory. Deepfakes make that wrong trivially easy to commit at scale, with plausible deniability, and across borders: a person sitting in Romania can create a deepfake of an Indian actress, host it on a server in the United States, and have it seen by millions in India without ever setting foot on Indian soil.
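For readers curious about the mechanics, the adversarial dynamic described above can be sketched in a few lines of toy code. This is purely illustrative, and nothing like real deepfake tooling: a one-dimensional "generator" learns a single number, and the "discriminator" is a tiny logistic classifier. All names, learning rates and constants here are hypothetical choices for the demonstration.

```python
import math
import random

# Toy 1-D sketch of GAN training: "real" data cluster around REAL_MEAN;
# the generator learns one parameter `mu`; the discriminator is a
# logistic classifier D(x) = sigmoid(w*x + b) scoring "how real" x looks.
random.seed(42)
REAL_MEAN, NOISE = 4.0, 0.5

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

mu = 0.0          # generator parameter: starts far from the real data
w, b = 0.0, 0.0   # discriminator parameters

for _ in range(5000):
    real = random.gauss(REAL_MEAN, NOISE)
    fake = random.gauss(mu, NOISE)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += 0.05 * ((1 - d_real) * real - d_fake * fake)
    b += 0.05 * ((1 - d_real) - d_fake)

    # Generator step: nudge mu so fakes score higher under D
    # (gradient ascent on log D(fake) with respect to mu).
    mu += 0.05 * (1 - sigmoid(w * fake + b)) * w

print(f"generator mean after training: {mu:.2f}")  # drifts toward REAL_MEAN
```

As the two players compete, the generator's output distribution is pulled toward the real data until the discriminator can no longer tell them apart; real deepfake systems do the same thing over millions of image parameters instead of one number.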
This produces three overlapping categories of harm that intersect with intellectual property law. First, commercial exploitation: fake endorsements, AI-generated performances, synthetic reproductions of real artists' work. Second, reputational damage: fabricated statements, manufactured scandals, false associations. Third, and most unsettling, what might be called identity theft in the ultimate sense: the appropriation of a person's very being, their voice, face and mannerisms, without permission or compensation.
III. What Indian Law Says: A Closer Look at the IP Framework.
A. Beginning with the Constitution.
Before turning to individual statutes, it is worth noting that the constitutional basis for personality rights in India is stronger than people think. In Justice K.S. Puttaswamy (Retd.) v. Union of India (2017), a nine-judge bench of the Supreme Court held that privacy is a fundamental right under Article 21 of the Constitution. Notably, the decision recognised that the right to privacy is not limited to physical privacy; it extends to control over the use of one's identity and personal information. Justice D.Y. Chandrachud, writing for himself and three other judges, expressly identified informational privacy and decisional autonomy as facets of the right.
This matters because it gives courts a constitutional hook for protecting personality rights even where no specific statute does so. Instead of saying "no law covers this", a court can say "an underlying right covers this". And that is precisely what the Delhi High Court has increasingly been doing.
B. The Copyright Act, 1957: Helpful, but Only up to a Point.
The Copyright Act protects original literary, artistic, musical and dramatic works, cinematograph films and sound recordings. On its face, it seems to offer a straightforward answer to the harms deepfake technology inflicts; in practice, its reach is quite limited.
The basic problem is that a person's face, voice or persona is not a "work" under the Act. A celebrity photograph is protected by copyright, but the copyright belongs to the photographer, not to the person photographed. This is counter-intuitive: the commercial value of the image derives from the person depicted, yet that person has no statutory claim to it.
Section 38A vests in performers the rights of reproduction, fixation, broadcasting and communication of their performances. Where a deepfake recreates a specific protected performance, a particular song or a particular film scene, there is a plausible argument that the performer's rights have been infringed. But where a deepfake merely imitates a performer's general style or vocal timbre without copying any specific work, which is the more common and more insidious scenario, Section 38A does not apply.
Section 57, which protects moral rights, is similarly bounded. It protects authors against distortion or mutilation of their works, not against misuse of their identity in general. Moral rights are valuable, but they do not solve the core problems deepfakes create.
C. The Trade Marks Act, 1999: Surprisingly Relevant.
Section 2(1)(zb) of the Trade Marks Act defines a trademark as any mark capable of distinguishing the goods or services of one person from those of another, and this expressly includes personal names. Several Indian celebrities have taken advantage of this provision by registering their names and signature marks as trademarks; Amitabh Bachchan's name and signature, for instance, are duly registered.
In Titan Industries Ltd. v. Ramkumar Jewellers, 2012 (50) PTC 486 (Del.), the Delhi High Court held that a celebrity's image and personality can acquire secondary meaning in trade and can therefore be protected under trademark principles even without registration. The court adopted a right-of-publicity approach, finding that the goodwill in a celebrity's identity is valuable commercial property that cannot be misappropriated.
This doctrine is an effective tool against deepfakes used for fake endorsements. If a company circulates an AI-generated video that appears to show Virat Kohli promoting its product when he has done no such thing, that is classic passing off: the company falsely implies a connection between Kohli (his name and likeness) and its product where none exists. All three elements of the tort are made out: goodwill (Kohli's commercial value as an endorser), misrepresentation (the false endorsement), and damage (to the celebrity, and to consumers who base purchasing decisions on genuine endorsements).
D. The Passing-Off Action and ICC Development.
ICC Development (International) Ltd. v. Arvee Enterprises, 2003 (26) PTC 245 (Del.), deserves a closer look because it offers one of the broadest formulations of personality rights in Indian jurisprudence. The court held that the right of publicity extends to every individual's name, signature, photograph, image, likeness, voice and other distinguishing features, and that it is inherent, not dependent on trademark registration or any other statutory grant.
In practice, a passing-off action lies against anyone who commercially exploits another person's likeness in a way that suggests a false connection or endorsement. For deepfakes used in a commercial setting, this is the strongest cause of action currently available under Indian law. Its weakness is that passing off is a common-law tort: it requires litigation, proof and an after-the-fact remedy, and by the time all of that is done, the deepfake may already have gone viral.
IV. The DPDPA, the IT Act and Executive Responses.
Beyond conventional IP law, two other statutory regimes are relevant.
Section 66E of the Information Technology Act, 2000 criminalises the intentional capture, publication or transmission of images of a person's private areas without consent. Originally enacted to address voyeurism and revenge pornography, it has since been applied by courts to non-consensual intimate deepfakes, a particularly common abuse of the technology.
The Digital Personal Data Protection Act, 2023 adds another dimension. Facial features used to train AI models are biometric data and plainly constitute personal data under the Act. Prima facie, training a deepfake model on an individual's face without consent violates that individual's rights as a data principal, which makes the Act a potentially potent source of redress against the creators of deepfake models, quite apart from their users.
At the executive level, in November 2023, shortly after the Rashmika Mandanna incident, the Ministry of Electronics and Information Technology issued an advisory requiring platforms to remove reported deepfake content within 24 to 36 hours of notification. This was a significant step, but an advisory is not binding law: platforms are not legally obliged to comply as they would be with a statutory obligation, and a wide gap remains between policy and enforceable rights.
V. The Courts Step In: Two Landmark Orders.
In many respects, Indian courts have outpaced the legislature in dealing with infringements of personality rights in the age of AI. Two Delhi High Court rulings are particularly instructive. In Amitabh Bachchan v. Rajat Nagi (2022), the court granted an ex parte interim injunction restraining the world at large from misusing the actor's name, image, voice and other attributes of his persona for commercial gain without his consent. A year later, in Anil Kapoor v. Simply Life India (2023), the court went further, expressly restraining the use of AI tools and deepfake technology to misuse the actor's name, likeness, voice and persona, one of the first Indian orders to confront synthetic media by name.
VI. The Reality Check: What Indian Law Still Lacks.
The fundamental gap in Indian law is the absence of a codified right of publicity. Every other major legal system that has taken this problem seriously has ultimately concluded that a freestanding statutory right is needed. Several American states have had such laws for decades, most prominently California, whose Civil Code recognises a statutory right of publicity. The EU's AI Act, which entered into force in 2024, classifies certain uses of synthetic media as high-risk and imposes strict transparency and consent requirements. India has neither.
What does this mean in practice? If you are not a celebrity, and not wealthy enough to afford sustained litigation, your legal options against a deepfake of yourself are very limited. If the deepfake is intimate in nature, you may have a criminal complaint under Section 66E. The DPDPA may give you a claim if you can show that your biometric data was used without consent. But a civil right to compensation for the commercial use of your likeness? It is all but non-existent, except through strained applications of the passing-off doctrine.
Then there is the international dimension. Deepfake material is routinely produced outside India, and Indian courts cannot simply issue orders that bind servers in foreign jurisdictions. A dynamic injunction from the Delhi High Court cannot take down a video hosted on an uncooperative foreign platform. This problem is not unique to India, but the domestic legal gap makes it more acute.
I would also draw attention to the fact that deepfake harm is gendered, a point the scholarly literature discusses less than it should. A growing body of research, including studies by Sensity AI and the American group WITNESS, has found that the overwhelming majority of non-consensual deepfake material targets women. The Rashmika Mandanna case is not an exception; it is the pattern. Any legal framework not built around the experiences of those most affected by this technology is incomplete.
VII. What Would a Meaningful Legal Response Look Like?
I want to be specific in this part, because it is easy to write vaguely about the need for "better regulation" without saying what better would look like. As far as I can see, India needs the following.
First, a Right of Publicity Act. Parliament should enact a standalone statute granting every person, not just celebrities and not just the wealthy, a property right in his or her name, likeness, voice, image and persona. The right should be alienable (so that people can consent to commercial use), assignable, and heritable for a limited period after death. It should provide civil remedies, and criminal penalties for egregious cases.
Second, an amendment to the Copyright Act to address synthetic performances directly. If an AI system produces what appears to be a new work by a real artist, a song in her voice, a speech in her manner, the artist's consent should be required regardless of whether any specific protected work was copied. Performers' rights should be understood to cover identity, not only fixed works.
Third, platform accountability. The IT Rules, 2021 already impose certain obligations on significant social media intermediaries. These should be strengthened to require mandatory labelling of AI-generated content, accelerated takedown timelines for identified deepfakes, and a dedicated grievance mechanism for personality-rights violations, separate from general content-moderation procedures.
Fourth, and perhaps most important, international cooperation. WIPO has been convening sessions on AI and intellectual property for several years. India should participate actively in these dialogues and push for international standards on deepfake liability, platform responsibility and cross-border enforcement. This is one of those problems that no single country can solve alone.
VIII. Conclusion
When I began researching this piece, I expected to find a relatively simple story: new technology, no law, courts scrambling to keep up. What I found was different and, in a sense, more encouraging. Indian courts, especially the Delhi High Court, have been bold and inventive in applying existing IP doctrine to protect personality rights in the age of AI. The Anil Kapoor and Amitabh Bachchan orders are genuine legal innovation.
Still, the structural problem remains. Judicial innovation fills gaps; it does not build a system. It benefits those who can litigate; it does not protect everyone. And it responds to harm after it has struck; it does not prevent it. For all the ingenuity of Indian courts, a person whose deepfake spreads on WhatsApp tomorrow has only a handful of useful legal options today.
The Rashmika Mandanna case caused a furore. Outrage has its uses: it forces conversations that would not otherwise happen. But outrage without legislation is just noise. India is overdue for a serious, comprehensive law on personality rights and synthetic media. Every month that passes without one is a month in which real people, mostly women, mostly without the resources to litigate, go unprotected. That is unworthy of a country that treats the right to dignity as a constitutional guarantee.
Your face ought to be your own. In India today, it is not, not entirely, not formally, not practically. That needs to change.
Author: Aditi Ladha. For any queries, please write to us at chhavi@khuranaandkhurana.com or contact Khurana & Khurana, Advocates and IP Attorneys.
REFERENCES
1. Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.
2. Titan Industries Ltd. v. Ramkumar Jewellers, 2012 (50) PTC 486 (Del.).
3. ICC Development (International) Ltd. v. Arvee Enterprises, 2003 (26) PTC 245 (Del.).
4. Anil Kapoor v. Simply Life India & Ors., CS(COMM) 652/2023 (Delhi High Court).
5. Amitabh Bachchan v. Rajat Nagi & Ors., CS(COMM) 819/2022 (Delhi High Court).
6. The Copyright Act, 1957 (India), Sections 38A (performers' rights) and 57 (moral rights).
7. The Trade Marks Act, 1999 (India), Section 2(1)(zb) (definition of "trade mark").
8. The Information Technology Act, 2000 (India), Section 66E (violation of privacy).



