
Legal Challenges of AI-Generated Content in Media & Entertainment Law: The Accountability Vacuum, or “Everyone Profits, Nobody Is Liable”


Introduction


In today’s world, artificial intelligence is put to many uses across a wide range of domains. But what if content created by generative artificial intelligence infringes the law or breaches the standards for publishing such content? Who is accountable? When such a question arises, everyone points a finger at someone else, and no one actually takes responsibility. This is what we refer to here as the “Accountability Vacuum.” In media and entertainment, these tools have been used to create new scripts, write-ups, songs, and animated videos, and to generate many more ideas that have proven helpful to the industry.


However, among the biggest questions raised by such generated content, and which the law has not properly addressed, are who owns what is created and who bears legal liability if the content causes harm or violates the law. Does it actually fall within the definition of creative content? Can it qualify as a new invention? To answer these questions, we need to understand how such generative AI (GAI) works and what void exists between the technology and the regulatory framework.

 

The Real Crisis Of Copyright, Ownership & Authorship Of the Content Produced by AI


These AI programs occupy a grey area in intellectual property law, and more specifically within copyright law. Their use is essentially a tripartite implied arrangement among the user, the developer who owns the model, and the AI itself. IP laws are still silent on whether AI-generated content should be eligible for claims of copyright, ownership, or authorship. And if such content can be submitted for claiming such rights, will it fall within the definition of a literary or creative work? Section 13 of the Copyright Act, 1957 lists the works in which copyright subsists:


(a) original literary, dramatic, musical and artistic works;

(b) cinematograph films; and

(c) sound recordings.[1]


Now, the first question that arises is: does GAI content fall under the above categories, that is, is it original? GAI generates unique content, but that content is based on results fetched from online sources. The law is still silent on whether such content is original, and if it does fall within the ambit of original artistic work, then who holds the rights and liabilities over it: the user, the AI, or the creator of the AI? And if such content is fetched from existing data, does the fetching of that data itself violate copyright law?



The Dispute Between Training Data & Fair Use


Training data is the dataset used to train an AI model before it is introduced to the market. It is the body of material the generative AI model has access to and from which it can fetch data. This dataset often includes copyrighted materials, and this model of learning and output has caused issues across the creative industries. In 2023, a user known as “Ghostwriter” uploaded an AI-generated song using voices that mimicked the artists Drake and The Weeknd. The song went viral on TikTok and was even submitted for Grammy consideration, but it was rejected on the ground that such content raises IP concerns.


AI-generated responses are essentially derivative works of copyrighted materials, potentially infringing the author’s copyright.[2] AI developers have responded to infringement claims with a single primary defence: fair use. The argument runs roughly as follows: training a model on text or images is a transformative act; the model does not reproduce the works, it learns patterns from them; the output competes with no individual original; and therefore the use falls within the boundaries that copyright law reserves for transformative, non-commercial, or educational purposes.[3]

 

Dealing with Deepfakes, Digital Replicas and the Right of Publicity


The training data dispute is primarily a property rights problem; the growth of deepfakes and synthetic digital replicas, by contrast, is deeply personal. At stake are dignity, identity, and the right of individuals, especially those in the public eye, to control how their voice, likeness, and persona are used.[4]


The entertainment industry is at the centre of this. Musicians, actors, and public figures have watched AI tools reproduce their voices in fake videos and podcasts, insert their likenesses into illegal pornographic material, place them in films, albums, or videos they never agreed to appear in, and generate fake endorsements of products they have never promoted or even seen. These are not fringe occurrences; they are an accelerating daily reality.[5]


Common law and statutory rights restricting the commercial use of an individual’s identity, together with the right of publicity, provide the legal framework for dealing with such problems. The U.S.A. offers some of the most advanced protections, including posthumous rights, especially in states like New York and California. Even so, these jurisdictions still have constrained publicity laws, or laws that do not comprehensively address deepfakes and digital replicas.


Legislative efforts at the federal level have been fragmented. The DEFIANCE Act of 2024 would give victims of non-consensual AI-generated intimate images a private right of action, and the proposed NO FAKES Act would create a legal right against the production of digital replicas without permission. However, neither of these major legislative initiatives addresses the financial incentive structures that allow deepfake sites and replica production companies to operate profitably while victims bear the cost of enforcement actions.[6]


The exploitation of background actors’ scanned likenesses to train AI models for perpetual use with a single payment and no recurring royalties was another aspect of this dilemma that came to light during the Hollywood strikes of 2023. The Screen Actors Guild’s eventual agreement with major studios included some protections, but these protections cover only a fraction of the entertainment workforce globally.


Regulatory Responses and Precedents


There are no stringent regulations that fully address the harm caused by GAI or deepfakes, but some precedents help map the legal framework. In cases such as Midler v. Ford Motor Co. and Carson v. Here’s Johnny Portable Toilets, courts have recognized the right to prevent the commercial use of an individual’s identity. In deepfake cases involving the disclosure of private facts, the portrayal of individuals in compromising situations, or content created for revenge or with malicious intent, such conduct should fall under privacy law.[7] Legal frameworks should aim to stop the misuse of such technology, not the advancement of the technology per se.


Conclusion


The accountability vacuum at the centre of AI-generated content law is not a temporary gap awaiting the right court decision or legislation. It is the result of a deliberate structural arrangement in which the economic benefits of AI content generation are concentrated among a small number of powerful actors, while the legal costs and burdens of enforcement are distributed across a vast number of individuals who lack the resources to bear them.


Copyright law needs an honest reckoning with the question of AI authorship, not to grant machines legal personhood, but to ensure that the human creative labour embedded in AI systems’ training receives recognition and reward. Deepfake legislation needs to move beyond criminalising the most egregious non-consensual uses and towards structural prevention. Platform liability needs to be redrawn to account for the fact that the companies building AI content generation tools are not neutral conduits.


Author: Medha Banta and Jyoti Shukla. In case of any queries, please contact/write back to us via email at chhavi@khuranaandkhurana.com or at Khurana & Khurana, Advocates and IP Attorneys.


[1] The Copyright Act, 1957, s 13.

[3] Authors Guild v Google Inc, 804 F.3d 202 (2d Cir 2015); Campbell v Acuff-Rose Music Inc, 510 US 569 (1994).

[4] Danielle Keats Citron and Mary Anne Franks, ‘Criminalizing Revenge Porn’ (2014) 49 Wake Forest Law Review 345, 350.

[5] Jon Garon, ‘Synthetic Humans in Entertainment’ (2023) 31 Cardozo Arts & Entertainment Law Journal 90 (discussing a hypothetical Voice Actor v AI Startup).

[6] DEFIANCE Act 2024, Pub L 118-; DEEPFAKES Accountability Act, HR 4337, 118th Cong (2023).


