Can I Claim Copyright for AI-Generated Images? Part 2

May 30, 2025


Introduction

Welcome to the second part of our blog series on the evolving landscape of intellectual property in the age of artificial intelligence. In our previous discussion, we touched upon the fundamental questions surrounding AI and copyright in Can I Claim Copyright for AI-Generated Images? Part 1. This installment, “Copyrightability of AI-Generated Content: Jurisdictional Analysis,” delves deeper into how different legal systems worldwide are grappling with the complex issue of copyright ownership for works produced by artificial intelligence. From the human authorship requirement in the United States to the dynamic interpretations emerging in Canada, the European Union, and other key jurisdictions, we will explore the varied approaches, landmark cases, and ongoing debates shaping this critical area of law. Understanding these jurisdictional nuances is crucial for creators, businesses, and policymakers alike, as AI’s creative capabilities continue to expand.

Copyrightability of AI-Generated Content: Jurisdictional Analysis

The question of copyright ownership for AI-generated content is being addressed differently across various jurisdictions, though a common thread of human authorship remains central.

United States Perspective

The U.S. Copyright Office (USCO) consistently maintains that human authorship is an indispensable prerequisite for copyright protection. The USCO has affirmed that existing copyright law is sufficiently flexible to accommodate new technologies. However, it explicitly states that outputs from generative AI are protected only when a human author has determined sufficient expressive elements within the work.  

What Is the U.S. Perspective on Copyright in AI-Generated Works?

A crucial distinction is drawn between AI as an assistive tool and AI as an autonomous generator. When AI merely assists a human in the creative process—for instance, using tools to age or de-age individuals in an image, or to remove unwanted objects or crowds from a scene—it does not preclude copyrightability for the overall human-authored work. In such cases, the human’s creative expression, enhanced by the AI, remains protectable. The USCO’s report does not suggest that these AI-generated changes themselves would be excluded from copyright, implying no need to disclaim them in a copyright application. Conversely, purely AI-generated material, or content where there is insufficient human control over the expressive elements, is not copyrightable.

The role of prompts in AI generation is also a key area of analysis. The mere provision of prompts, even highly detailed ones, is generally considered insufficient to establish human authorship. Prompts are viewed as unprotectable ideas rather than the expressive elements of the work itself. The USCO likens prompts to instructions given to a commissioned artist, where the instructions themselves are unprotectable ideas, and the AI system, rather than the user, is largely responsible for determining the expressive elements in the output. The observation that identical prompts can yield different outputs further supports the conclusion that the human user lacks sufficient control over the final expression.
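To make that non-determinism concrete, here is a minimal sketch, offered purely as an illustration rather than as part of the USCO's analysis. It assumes the open-source diffusers library, PyTorch, a CUDA GPU, and an available Stable Diffusion checkpoint (the model identifier below is only an example):

```python
# Illustrative sketch only: the same prompt, run twice without a fixed seed,
# produces different images, because the model's sampling process (not the
# prompt text) determines the expressive details of the output.
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint identifier; substitute any available Stable Diffusion model.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "an oil painting of a lighthouse at sunset"

# No seed is supplied, so each call samples fresh noise and the composition,
# palette, and brushwork differ between runs.
image_a = pipe(prompt).images[0]
image_b = pipe(prompt).images[0]
image_a.save("lighthouse_run_a.png")
image_b.save("lighthouse_run_b.png")

# Pinning a seed makes a run reproducible, but the expressive choices are still
# made by the model's sampling process rather than dictated by the prompt.
seeded = pipe(prompt, generator=torch.Generator("cuda").manual_seed(42)).images[0]
seeded.save("lighthouse_seed_42.png")
```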

However, copyright protection can extend to human-authored work that is perceptible within an AI output, or where a human makes creative arrangements, selections, or modifications to the output. This determination is made on a case-by-case basis. Examples of such protectable human creative input include editing, arranging, or selecting AI-generated elements. If a human inputs their own copyrightable work as a prompt, and that original work is clearly perceptible in the AI-generated output, the perceptible human expression remains copyrightable. This establishes a clear spectrum of human involvement. At one end, simple prompts are deemed insufficient for control over expressive elements, while at the other, significant editing, selection, or integration of AI output into a larger human work is considered sufficient. This means that creators must actively demonstrate their expressive choices and control over the final form of the work to secure copyright protection. The distinction shifts the legal inquiry from merely “using AI” to “how AI is used” and “what creative choices the human makes.”

Key case law highlights this position:

  • Thaler v. Perlmutter (D.D.C. 2023): This landmark case upheld the USCO’s refusal to register copyright in an art image generated autonomously by an AI system, reaffirming the human authorship requirement. Because the application listed the AI system as the sole author, and copyright vests only in human authors, the court held that no copyright subsisted in the work.   
  • Allen v. Perlmutter (D. Colo.): This is an ongoing case where the plaintiff argues for copyright protection for AI-generated art, contending it was created with “more human involvement” than the art at issue in Thaler. This case is expected to further refine the legal standard for “sufficient human involvement.”  
  • Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc. (D. Del. 2025): While not directly concerning AI output copyright, this decision granted partial summary judgment for copyright infringement against an AI company for using copyrighted material to train its AI. This illustrates the broader legal challenges AI poses, extending beyond the copyrightability of its outputs to the legality of its training data.

A significant practical consequence of this stance is that if content is entirely generated by AI and lacks sufficient human authorship, it cannot be protected by copyright. Such content effectively falls into the public domain and can be freely used, reproduced, or sold by anyone without permission from the AI user. This presents a substantial commercial risk for creators and businesses who rely solely on AI generation without incorporating sufficient human creative input.  

Canadian Perspective

Canadian copyright law, similar to the U.S. framework, mandates originality, expression, and fixation for a work to receive protection. Originality requires that the work be the product of the author’s “own creativity, skill and judgment” and not merely a copy of another work. Copyright protection in Canada is automatic upon meeting these criteria, without the necessity of registration. 

A notable case illustrating the complexities in Canada is that of Suryast. In December 2021, the Canadian Intellectual Property Office (CIPO) controversially granted copyright for “Suryast,” a painting created using the RAGHAV AI Painting App, listing both the human creator (Ankit Sahni) and the AI (RAGHAV) as co-authors. This marked the first instance where Canada attributed copyright authorship to a non-human entity. 

The Suryast case highlights a temporary divergence in interpretation between Canada and the U.S. regarding AI authorship. However, the CIPO’s automated registration process does not verify claims, meaning challenges to copyright validity must be brought before the Federal Court. The Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC) has filed an application to strike “Suryast” from the registry, arguing that no copyright subsists in the image or, alternatively, that Sahni should be listed as the sole owner. This challenge aligns with the USCO’s explicit refusal to register “Suryast,” in which the USCO concluded that the expressive elements were provided by the AI, not Sahni. If Canada aligns with the U.S. position, “Suryast” is widely expected to be struck from the registry in its entirety. This indicates strong pressure for Canada to conform to the human authorship requirement and suggests a broader global trend towards a human-centric view of copyright, in which initial automated or less stringent interpretations are challenged and ultimately brought in line with established principles.

The Canadian government has actively consulted on AI and copyright, specifically addressing authorship and ownership rights related to AI-generated content, and is considering various legislative options to provide clarity. 

European Union Perspective

In the European Union, copyright protection is afforded to the “author” of a “work” that constitutes an “own intellectual creation”. This standard necessitates “human involvement in the creative process” and must reflect the author’s “personal contribution” and “free and creative choices”. It explicitly emphasizes that the content should not be dictated solely by technical considerations or predefined rules. Consequently, works entirely generated by AI are generally not protected by copyright in the EU due to the absence of human intervention.

However, when a person utilizes AI as a creative tool—for example, by setting specific parameters, selecting from various generated results, or making significant post-generation adjustments—the resulting work may be eligible for copyright protection, as this demonstrates sufficient human input and creative control. Similar to Canada, the EU does not require formalities such as registration or affixing a copyright notice to obtain copyright protection; it exists automatically once the originality and fixation criteria are met.  

The EU’s approach is notably proactive and comprehensive in establishing regulatory frameworks for AI’s interaction with copyright, particularly concerning the legality of training data and the transparency of AI output. The Copyright in the Digital Single Market (CDSM) Directive permits text and data mining (TDM) for AI training, but critically, rights holders can “opt out” of these exceptions through machine-readable means. The recently adopted EU AI Act further reinforces the necessity for copyright compliance for general-purpose AI models, mandating that providers implement policies compliant with EU copyright law and disclose sufficiently detailed summaries of their training data. This dual framework aims to balance innovation with rights protection. Furthermore, the EU AI Act mandates that providers of generative AI systems ensure their outputs are marked in a machine-readable format and detectable as artificially generated or manipulated. This addresses growing concerns about content “regurgitation” and plagiarism by AI. This contrasts with the U.S., which primarily relies on existing law and case-by-case determinations for output copyrightability. The EU’s strategy indicates a different philosophical approach to regulating emerging technology, emphasizing a more top-down, legislative framework to manage the entire lifecycle of AI content, from input to output.  
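As an illustration of what machine-readable output marking can look like in practice, the following minimal sketch attaches an “AI-generated” label to a PNG’s metadata using the Pillow library. This is an assumption-laden example, not a standard prescribed by the AI Act: the metadata key names are invented for illustration, and production systems typically favour robust watermarks or provenance manifests (such as C2PA) over metadata fields that can easily be stripped.

```python
# A minimal illustration of machine-readable output labelling via PNG text
# metadata. NOT a mechanism mandated by the EU AI Act; key names are examples.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator_name: str) -> None:
    """Copy an image, attaching a machine-readable 'AI generated' marker."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")           # illustrative key name
    metadata.add_text("ai-generator", generator_name)   # illustrative key name
    image.save(dst_path, pnginfo=metadata)

def is_labelled_ai_generated(path: str) -> bool:
    """Detect the marker written above (only survives if the metadata is kept)."""
    return Image.open(path).text.get("ai-generated") == "true"

if __name__ == "__main__":
    label_as_ai_generated("output.png", "output_labelled.png", "example-model-v1")
    print(is_labelled_ai_generated("output_labelled.png"))  # True
```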

Other Jurisdictions (UK, Japan, Australia)

United Kingdom

Under the UK Copyright, Designs and Patents Act 1988, copyright owners possess exclusive rights, particularly concerning reproduction, public performance, and adaptation. The UK government has initiated a consultation on copyright and AI, proposing a controversial exception for text and data mining (TDM) that would permit AI developers to use copyrighted works for training without requiring a license, unless rights holders explicitly opt out. The UK’s current legal framework is generally perceived as more restrictive than both the EU and U.S. for commercial TDM activities, potentially impeding AI innovation within the country. A high-profile case, Getty Images v. Stability AI, is currently ongoing, addressing copyright infringement claims arising from AI training data. While some sources suggest the UK might allow copyright for AI-generated outputs where the human user makes a “significant contribution,” other sources indicate that English law requires a “real person” as the author and evidence of “human personality” for originality to subsist.  

The UK is navigating a particularly complex path, endeavoring to balance the promotion of AI innovation with the protection of its influential creative industries. Its post-Brexit divergence from the EU’s TDM exception has created a more restrictive environment for AI training, prompting a government consultation on potential reforms. This situation underscores the geopolitical dimension of AI copyright, where different nations adopt varying strategies to gain a competitive edge in AI development while simultaneously addressing concerns from traditional creative sectors.  

Japan

Japan is pursuing an “innovation-friendly AI regulation strategy,” favoring the application of existing laws and the development of voluntary industry guidelines over the enactment of sweeping new statutes. A recent ruling by the IP High Court in January 2025 reinforced that AI-generated inventions cannot receive patent protection under current Japanese patent law, as inventorship is strictly confined to “natural persons”. While this ruling pertains to patent law, it strongly signals a similar human-centric approach to intellectual property rights for AI-generated content in Japan, emphasizing the fundamental requirement of human creation.  

Japan’s preference for “technologically neutral, sector-specific laws and voluntary industry guidelines” suggests a less prescriptive, more adaptable regulatory environment compared to the EU’s comprehensive legislative approach. The patent ruling, while not directly on copyright, strongly indicates that the “human author/inventor” principle is deeply embedded in Japanese intellectual property law. This precedent suggests that AI-generated works will likely face similar hurdles in obtaining copyright protection, reinforcing the global trend toward requiring human authorship. It also highlights how legal interpretations in one intellectual property domain can foreshadow developments in another.  

Australia

Australian copyright law grants protection if the author is a human who has contributed “independent intellectual effort”. Works created solely by artificial intelligence are not eligible for copyright protection in Australia, as AI tools currently lack the legal status to own copyright. If a human cannot demonstrate “significant human effort,” the creative output may not be protected by copyright. Ambiguity currently persists regarding what constitutes “sufficient effort,” particularly in artistic works where an artist might provide additional instructions or manipulate the AI-generated image independently to achieve the final output. Australian copyright legislation does not contain any specific exceptions for data mining or the use of works for machine learning, raising concerns about the legality of AI training data.

Australia faces challenges similar to other jurisdictions regarding the human authorship requirement and the need for “independent intellectual effort”. The absence of clear, specific guidelines for what constitutes “sufficient effort” creates ongoing ambiguity for creators. This suggests a common global problem where existing laws are interpreted to fit new technology, but the precise line distinguishing human creative control from mere ideation or AI autonomy remains legally grey and subject to case-by-case determination. 

Table 1: Core Copyright Requirements and AI-Generated Content Status by Key Jurisdiction

| Jurisdiction | Originality Requirement | Fixation Requirement | Human Authorship Requirement | Copyrightability of Purely AI-Generated Content | Copyrightability of AI-Assisted Content | Key Case/Legislation |
| --- | --- | --- | --- | --- | --- | --- |
| United States | Independent creation + minimal creativity | Tangible medium | Yes (explicitly required) | No | Yes, with sufficient human creative input (not mere prompts) | Thaler v. Perlmutter; USCO reports |
| Canada | Own creativity, skill and judgment | Tangible medium | Yes (implied; debated in Suryast) | No (likely to align with U.S. after the Suryast challenge) | Yes, with sufficient human input | Suryast registration; ongoing Federal Court challenge |
| European Union | Own intellectual creation (human involvement, personal contribution) | Tangible medium | Yes (explicitly required) | No | Yes, with human creative choices (setting parameters, selecting results) | EU AI Act; CDSM Directive; CJEU rulings |
| United Kingdom | Originality (human personality, skill and judgment) | Tangible medium | Yes (real person required) | No (implied; some debate on “significant contribution”) | Yes, with significant human contribution | CDPA 1988; ongoing government consultation; Getty Images v. Stability AI |
| Japan | Originality (implied human creation) | Tangible medium (implied) | Yes (natural person required for inventorship; likely applies to authorship) | No (held for patents; likely for copyright) | Yes (implied, if a human makes creative choices) | IP High Court ruling on AI inventorship |
| Australia | Independent intellectual effort by a human | Tangible medium (implied) | Yes (explicitly required) | No | Yes, with “significant human effort” (threshold ambiguous) | Australian Copyright Act; ongoing legislative reform discussions |

 

AI Tool Provider Terms of Service and User Rights

The contractual agreements between AI service providers and their users, typically articulated in Terms of Service (ToS) or Terms of Use (ToU), frequently attempt to define and allocate ownership rights for AI-generated content.

Analysis of Ownership Clauses in Major AI Platforms

  • OpenAI (DALL-E, ChatGPT): OpenAI’s terms generally stipulate that the user retains ownership of their “Input” (prompts) and that OpenAI assigns all its right, title, and interest in the “Output” to the user. This contractual assignment implies that users can utilize the generated content for any purpose, including commercial publication. However, a critical caveat is consistently present: this assignment is “to the extent permitted by applicable law”. This crucial phrase means that if statutory copyright law, such as the USCO’s policy, does not recognize copyright in purely AI-generated content due to the absence of human authorship, then the user cannot claim or enforce such copyright, irrespective of the ToS. OpenAI also includes a prohibition against users representing that AI-generated output was human-generated when it was not.  

  • Midjourney: Midjourney’s terms assert that users “own all Assets You create with the Services to the fullest extent possible under applicable law”. However, this ownership comes with specific conditions. For instance, companies with over $1,000,000 USD in annual revenue must subscribe to a “Pro” or “Mega” plan to claim ownership of their assets, and upscaled images of others remain the property of their original creators. Furthermore, Midjourney grants itself a broad, perpetual, worldwide, non-exclusive, sublicensable, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute the user’s content. For users with free accounts, the generated assets are generally subject to a Creative Commons – Noncommercial license.  

  • Stable Diffusion: Stable Diffusion’s terms frequently state that the “User retains all rights, title, and interest including any intellectual property rights of the generated images”. Some interpretations suggest that if the software is utilized locally on a user’s machine, they retain copyright over their work. However, the open-source nature of the model and its training on potentially copyrighted material introduce complexities regarding the enforceability of such rights. The CreativeML Open RAIL-M license governs the model itself, granting users rights over both the outputs and the model. 

The Interplay Between Platform Terms and National Copyright Laws

While AI platform terms of service may assign ownership or grant certain rights to the user, these contractual agreements cannot create copyright where none exists under statutory law. If a work fundamentally lacks the prerequisite of human authorship, it is not copyrightable, regardless of what a platform’s ToS states. This means that even if a ToS states the user “owns” the output, that ownership may not encompass exclusive copyright protection enforceable against third parties, effectively placing the content in the public domain.  

AI companies often derive revenue from subscription and API fees rather than directly from the outputs themselves. Their terms frequently attempt to strategically shift the burden of compliance and potential liability for copyright infringement—whether from the AI’s training data or from the output’s similarity to existing copyrighted works—from the AI developer to the user. Users are therefore strongly advised to exercise caution, particularly by avoiding the generation of images or content that closely mimic known copyrighted works, and to remain updated on evolving case law to mitigate their legal risk. This strategic shifting of liability is a critical, often overlooked, risk for businesses and individuals who use AI for commercial purposes, as they may be exposed to legal action without possessing enforceable copyright themselves.  

Table 2: AI Platform Terms of Service: Copyright Ownership Summary

| AI Platform | Stated Output Ownership | Key Caveats/Conditions | Training Data Use | Liability/Risk Allocation |
| --- | --- | --- | --- | --- |
| OpenAI (DALL-E, ChatGPT) | User owns Output | “To the extent permitted by applicable law”; user must not represent output as human-generated if it is not | May use content to provide and improve services | User responsible for content, including ensuring it does not violate law or terms; user may be liable for infringement |
| Midjourney | User owns Assets | “To the fullest extent possible under applicable law”; paid plan required for commercial ownership by companies with >$1M annual revenue; upscaled images of others remain owned by their original creators | User grants Midjourney a perpetual, worldwide, non-exclusive, sublicensable, royalty-free, irrevocable license to use, reproduce, prepare derivative works of, display, perform, sublicense, and distribute user content | User responsible for all content provided/generated; user assumes risks associated with use |
| Stable Diffusion | User retains all rights, title, and interest, including IP rights in generated images | Open-source model (CreativeML Open RAIL-M license); enforceability of rights complicated by training data | Trained on millions of images (some copyrighted); legality of training data contested | User assumes risks; advised to avoid mimicking copyrighted works and to consult legal experts |

  

Emerging Legal Challenges and Future Outlook

The rapid advancement of generative AI continues to pose complex legal challenges that necessitate ongoing re-evaluation of intellectual property frameworks.

The Debate Over Sui Generis Rights for AI-Generated Content

A significant and ongoing international debate centers on whether to introduce new, specific (“sui generis”) intellectual property rights for purely AI-generated content, particularly in instances where traditional copyright protection is deemed inapplicable.   

Arguments against the creation of sui generis rights are prominent. The U.S. Copyright Office has explicitly concluded that “the case has not been made for changes to existing law to provide additional protection for AI-generated outputs”. Many legal experts and creative industry stakeholders contend that extending copyright to unoriginal, purely AI-generated content would fundamentally undermine the core purpose of copyright—which is to protect and incentivize human creativity—and could potentially discourage human artistic and literary endeavors. Furthermore, such protection could inadvertently create a “perpetual cycle” where AI-generated works are subsequently used to train future AI models, potentially further diminishing the economic value and distinctiveness of human-created content.   

Conversely, some proponents argue that AI-generated works involve complex algorithms and significant computational “creative” input from the AI system itself, suggesting that the AI system could, in some sense, be considered an author, or that new rights are necessary to incentivize continued AI development and innovation. For example, Ukraine has introduced a sui generis right for non-original AI-generated works, though notably with a shorter protection term (25 years from creation compared to life plus 70 years for human authors). 

This ongoing debate over sui generis rights for AI-generated content reveals a core policy dilemma: how to incentivize technological innovation and investment in AI without simultaneously eroding the foundational principles of copyright (human authorship, originality) and potentially undermining the rights and economic viability of human creators. The current leaning against sui generis rights by major bodies like the USCO indicates a strong preference for upholding the human-centric nature of copyright, even if it means purely AI-generated outputs remain unprotected. This reflects a broader commitment to the historical purpose of copyright law.  

Implications for Licensing, Liability, and Infringement

The use of vast quantities of copyrighted works to train AI models without explicit authorization or license is a major and contentious area of ongoing litigation. AI companies frequently invoke “fair use” (in the U.S.) or “fair dealing” (in Canada and the UK) as defenses for their training processes. However, courts are actively grappling with whether such large-scale, often commercial, use of copyrighted material for training constitutes a transformative use or directly infringes existing rights. The EU’s Copyright in the Digital Single Market (CDSM) Directive attempts to address this by allowing TDM with opt-out mechanisms, but effective enforcement remains a challenge.

Beyond training data, questions of liability arise when AI-generated content infringes on existing copyrights, either by directly reproducing copyrighted material or by creating outputs that are substantially similar to protected works. Determining who is liable—the AI developer, the user who provided the prompt, or the AI system itself—is a complex legal question with no definitive answers yet. This uncertainty creates significant risk for businesses and individuals utilizing AI-generated content, particularly for commercial purposes. 

The legal landscape is further complicated by the emergence of new licensing models and technical solutions. Rights holders are increasingly exploring direct licensing deals to monetize their content for AI training, particularly in sectors like publishing and journalism. Simultaneously, technical solutions are being developed to facilitate “opt-outs” for rights holders and to ensure transparency regarding the provenance of AI-generated content, such as watermarking and provenance tracking. However, a lack of established market standards and enforcement mechanisms for these tools contributes to ongoing legal uncertainty.  
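As one concrete example of a machine-readable opt-out signal, the hedged sketch below checks a site’s robots.txt for directives aimed at documented AI training crawlers, using only Python’s standard library. The user-agent tokens listed are publicly documented examples; robots.txt is just one of several reservation mechanisms in use, and this is an illustrative check rather than a definitive compliance test.

```python
# Hedged sketch: inspect a site's robots.txt for opt-out directives addressed
# to AI training crawlers. Illustrative only; not a complete compliance check.
from urllib.robotparser import RobotFileParser

# Example user-agent tokens for publicly documented AI/training crawlers.
AI_CRAWLER_USER_AGENTS = ["GPTBot", "CCBot", "Google-Extended"]

def training_crawl_allowed(site: str, path: str = "/") -> dict[str, bool]:
    """Return, per crawler user-agent, whether robots.txt permits fetching `path`."""
    parser = RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetches and parses robots.txt over the network
    return {
        agent: parser.can_fetch(agent, f"{site.rstrip('/')}{path}")
        for agent in AI_CRAWLER_USER_AGENTS
    }

if __name__ == "__main__":
    # Example usage against a placeholder domain.
    print(training_crawl_allowed("https://example.com"))
```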

Conclusion

The jurisdictional analysis of copyrightability for AI-generated content reveals a complex and rapidly evolving legal landscape. While a common thread of human authorship largely prevails across major jurisdictions like the United States, the European Union, and increasingly, Canada, the nuances of interpretation and legislative approaches differ significantly. The ongoing debates surrounding “sufficient human involvement,” the legality of AI training data, and the potential for sui generis rights underscore the profound challenges AI presents to established intellectual property frameworks. Furthermore, the interplay between national copyright laws and the terms of service set by AI tool providers creates a precarious environment for users, who may be granted contractual “ownership” without actual, enforceable copyright protection. As generative AI continues to advance, the legal community will face ongoing pressure to clarify issues of liability, infringement, and the very definition of “authorship” in the digital age. Ultimately, the future of copyright for AI-generated content will likely be shaped by a delicate balance between fostering innovation and safeguarding the fundamental principles that incentivize and protect human creativity.
