Jan 21, 2025
From Elvis to AI: How U.S. Laws Have Adapted to Protect Digital Rights
By Johana Gutierrez
Protecting our public image has been a key concern throughout history, long before the internet transformed how we share and access information. From safeguarding physical likenesses and reputations in print media to defending against unauthorized digital representations in today’s AI-driven world, the desire to maintain control over one’s identity has remained constant. That demand has historically driven the development of laws designed to protect reputations and prevent exploitation. With the emergence of the internet, the scope of these protections has had to expand significantly to address the unique challenges posed by digital technologies. As active internet users, it is important for us to understand the legal frameworks that protect us, and the internet itself, from the ever-evolving digital threats we face today.
Right of Publicity
The historical development of the laws and regulations aimed at protecting our public image began with privacy law, a concept rooted in common law traditions. That concept laid the foundation for the “right of publicity,” a term first coined in 1953 in the landmark case Haelan Laboratories, Inc. v. Topps Chewing Gum, Inc. This pivotal case recognized the need to protect individuals, particularly public figures, from the emotional and economic harm caused by unauthorized use of their name or likeness. Following the decision, states adopted their own right of publicity protections, either by enacting specific statutes or by recognizing these rights in case law.
One of the earliest and most comprehensive right of publicity statutes in the United States was Tennessee’s Personal Rights Protection Act (TPRPA) of 1984. This landmark legislation was heavily influenced by efforts to protect the post-mortem commercial legacy of one of Tennessee’s most iconic figures, Elvis Presley. The TPRPA granted individuals “freely assignable and licensable” property rights over their name, photograph, and likeness, prohibiting unauthorized commercial exploitation of these personal attributes. Although this law predates both the early web and the dot-com era, it remains a relevant basis for addressing the complex challenges posed by modern technological advancements.
Building on its foundational protections, and in response to the rise of generative AI technologies, the TPRPA was amended on March 21, 2024, with the introduction of the Ensuring Likeness, Voice, and Image Security (ELVIS) Act. Prior to the amendment, the TPRPA primarily addressed unauthorized use of name and likeness in advertising contexts. The ELVIS Act broadened this scope by adding “voice” as a protected personal right and prohibiting any unauthorized use of an individual’s name, photograph, voice, or likeness, including uses in AI-generated content and deepfakes. By extending its protections to modern threats like deepfakes and impersonator accounts, the ELVIS Act underscores the importance of adapting existing legislation to the new challenges of our evolving digital landscape.
Section 230
As the internet rapidly expanded during the 1990s, it became clear that existing legal frameworks struggled to address the challenges of user-generated content. To resolve this, Congress passed Section 230 of the Communications Decency Act in 1996. Often called the “26 words that created the internet,” it states:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
These provisions shield online platforms from liability for most user-generated content while allowing them to moderate harmful or offensive material in good faith. For example, “if a user posts a false statement that harms someone’s reputation, the website or social media platform is not liable for defamation. Only the user who posted the statement is legally liable. Only the poster, not the site, can be sued” (Hansen, n.d.). Section 230 enabled platforms like Facebook and YouTube to thrive, spurring innovation while giving them the latitude to address misinformation, harassment, and harmful content such as deepfakes or impersonator accounts. It also indirectly helps users protect their online image by giving platforms the flexibility to moderate harmful content and to create an environment where users can report defamatory or false material. Although platforms are not legally obligated to remove all harmful content, many choose to do so to uphold trust, ensure user safety, and enforce their own policies.
Section 230’s role in shaping internet regulation remains a topic of ongoing debate, particularly as policymakers work to balance platform accountability with free expression. As Luke Arrigoni, CEO of Loti AI, observes, "Policymakers have to make sure they won’t stifle innovation while providing guardrails for exponentially more capable machines." This delicate balance continues to influence many discussions around modern internet governance.
The Digital Millennium Copyright Act
While Section 230 addressed the complexities of platform liability for user-generated content, the growth of digital media and the ease of its distribution made copyright infringement a pressing issue. The Digital Millennium Copyright Act (DMCA) was enacted in 1998 with the primary purpose of adapting copyright law to the digital environment and protecting the rights of content creators. The law makes it illegal to create or share tools or services that circumvent digital locks, known as digital rights management (DRM), used to protect works such as movies, music, and software. It also increases penalties for online copyright violations, creating a powerful disincentive for those who illegally share or misuse copyrighted content.
Under this law, companies that host digital content must remove allegedly infringing material when they receive a formal takedown notice from the copyright owner in order to retain the DMCA’s safe-harbor protection from liability. However, the responsibility for enforcement largely falls on copyright holders themselves: they must actively monitor for unauthorized uses of their works online and, upon identifying infringements, issue takedown notices to the platforms hosting the content. This process can be both challenging and costly, often requiring specialized tools or services to track infringements across the internet. Despite this, the DMCA’s takedown process has become vital for addressing unauthorized uses of copyrighted works, including those manipulated into deepfakes.
Take It Down Act
While effective at managing unauthorized use of copyrighted material, the DMCA lacked the scope to address the unique challenges posed by AI-generated content and privacy violations. The rapid advancement of AI technologies has brought about new and sophisticated threats, such as the proliferation of non-consensual intimate images (NCII) and malicious deepfakes, which may not always fall under copyright law. Recognizing the urgency of addressing these issues, lawmakers introduced the Take It Down Act in 2024. This bipartisan bill focuses on protecting individuals’ dignity and privacy by establishing clear pathways for victims to report and remove harmful content from online platforms. As of January 2025, the Act has been unanimously passed by the U.S. Senate and is awaiting consideration in the House of Representatives. If enacted, it will mandate that online platforms remove harmful content, including AI-generated NCII, within 48 hours of a victim’s request. By addressing the challenges posed by AI-generated content, the Take It Down Act represents a critical advancement in protecting individuals’ dignity and online safety.
Toward a Safer Digital Environment
Beyond protecting people from identity misuse, these laws play a critical role in reducing the spread of misinformation and disinformation online. By empowering platforms to moderate harmful content under Section 230 and providing mechanisms for removing false or manipulated media under the DMCA, they collectively help create a safer and more trustworthy digital environment. As platforms work to balance content moderation with user engagement, they must also address the dissemination of misleading information, a challenge that grows more complex with advances in AI technology.
The growing prevalence of AI has also spurred discussions around broader regulatory measures. Proposed federal legislation, such as the Deepfakes Accountability Act, aims to address the misuse of AI-generated content more comprehensively. These initiatives seek to criminalize malicious deepfakes, establish ethical guidelines for AI-generated media, and promote technological transparency. As digital threats evolve, these efforts highlight the importance of proactive legal frameworks that anticipate and address emerging challenges.
The evolution of U.S. laws protecting digital rights reflects a deliberate and ongoing effort to adapt to technological advancements while prioritizing individual safety and autonomy. From Tennessee’s pioneering right of publicity statute to the transformative measures introduced in the Take It Down Act, these legislative tools demonstrate the necessity of robust legal protections in an increasingly interconnected and AI-driven world. Because gaps remain, continued innovation in policy will be essential to ensuring that individuals are protected against the misuse of their identity, content, and reputation in the digital age.