Copyright for Works Created by Artificial Intelligence: Modern Challenges and Legal Solutions

Inmar Case: Advertising Agency vs. the “Prompter”
Copyright for works created by artificial intelligence, such as AI-generated images or distinctive visual concepts, has become a particularly topical issue in recent years. In December 2025, Inmar Legal was involved in an illustrative matter that highlighted the practical importance of this debate.
The firm represented the defendant, an advertising agency that created and distributed a social-media video built around a visual concept fully replicating a widely circulated viral image. That original image had itself been created using generative AI tools and had already spread across the internet.
The claimant asserted that they held copyright in the “original” visual concept, demanded that distribution of the disputed video stop, and threatened litigation and significant costs. According to the claimant, the defendant’s video infringed their exclusive rights because the visuals were constructed using elements that, in substance, matched what the claimant considered to be their protected work.
This led to a sharp discussion of how AI authorship, the rights (if any) of the person who designed the prompt, and the traditional rules of copyright law relate to each other.
Although the dispute never reached court and was resolved pre-trial, Inmar Legal was fully confident in the defendant’s legal position: in this scenario, copyright for works created by artificial intelligence objectively did not arise, and the claimant’s demands were legally unfounded. It is no accident that, in foreign legal discourse, the creator of such an image is often referred to not as an “author,” but as a “prompter” or “prompt engineer.”
To understand why this position is legally defensible, it is important to examine the underlying legal grounds and identify principles that can be applied in future practice.
Why the Issue Matters: Who Owns Rights to AI-Generated Works?
In recent years, the question of who owns copyright for works created by artificial intelligence, and whether such rights can exist at all given how AI systems function, has become especially urgent. On the one hand, generative models are capable of producing high-quality visual outputs that many perceive as creative achievements.
On the other hand, many legal systems contain no direct mechanisms for defining the legal status of outcomes produced with the assistance of AI, because traditional copyright law is oriented exclusively toward the results of human creativity.
The problem is compounded by the fact that generative models are trained on vast datasets that include images, texts, and multimedia originally protected by copyright. This raises difficult questions about artificial intelligence and copyright, and whether the current regime maintains a fair balance between the interests of authors and technological innovation.
If one looks deeper, it becomes clear that in many cases AI infringes copyright not because a user intends wrongdoing, but because training processes may have incorporated protected materials without explicit permission from rights holders. This, in turn, fuels a new wave of discussion about whether intellectual property regimes require recalibration for the era of digital technologies and automated content generation.
In addition, many users do not ask themselves who owns rights to AI-generated images at all. They may freely use protected images for their visuals or videos, teach others how to generate “beautiful pictures,” and even recommend uploading copyrighted images into an AI model to generate similar results.
Key Global Approaches: U.S. and EU Practice
United States: Human Authorship as an Exclusive Requirement
In the United States, copyright for works created by artificial intelligence is generally not recognized, and U.S. practice remains one of the strictest among developed jurisdictions on this issue. Under the traditional approach reflected in 17 U.S.C. § 102(a), a copyrightable work must be an original work of authorship, that is, the result of human creative effort.
This approach rejects AI authorship: formally, only a human can be the author of a copyright-protected work, not a computer program.
A landmark dispute on the legal status of AI-generated works—and whether such results can be protected by copyright—was Thaler v. Perlmutter.
In that case, computer scientist Stephen Thaler used a generative system called the “Creativity Machine,” which autonomously created an artwork titled A Recent Entrance to Paradise. In 2019, Thaler applied to the U.S. Copyright Office to register copyright, listing the AI system as the “author” and himself as the “claimant/owner.”
The Copyright Office refused registration, relying on its established practice that only a human can be the author of a work protected by copyright. The Office referred to the Compendium of U.S. Copyright Office Practices, which expressly indicates that registration may be refused if a work is “created by a machine without human involvement.”
Thaler challenged the refusal first in district court and then in the U.S. Court of Appeals for the D.C. Circuit. On March 18, 2025, the D.C. Circuit upheld the refusal, confirming that U.S. law requires human authorship; because the image was generated autonomously by AI without meaningful human creative contribution, it could not be protected by copyright.
The court emphasized that even if legislation does not explicitly define “author” as a natural person, the ordinary and historical understanding of the Copyright Act implies that authorship is a human attribute, not a machine attribute.
As a result, the U.S. Copyright Office’s practice remains consistent: applications for works created exclusively by AI are rejected on the basis that copyright for works created by artificial intelligence—without human participation—is not registrable and not protected under the current framework. (See also the Copyright Office’s AI materials: Copyright and Artificial Intelligence and Title 17 compilation.)
This conservative approach means that images generated solely by neural networks do not receive automatic protection in the U.S. However, users may still face liability where their use involves copying protected elements from third-party works (including materials reflected in training data or external sources).
European Union: Mixed Approaches and Regulatory Initiatives
In the European Union, the legal status of AI-generated images is also under active discussion, but the approach is less uniform than in the U.S. EU copyright is framed by Directive 2001/29/EC (InfoSoc Directive) and subsequent digital-market instruments, including Directive (EU) 2019/790 on copyright in the Digital Single Market (DSM Directive).
While these acts do not establish copyright protection specifically for AI-generated works, they provide a framework within which issues of data use and exceptions (including for analysis and certain forms of text and data mining) are debated—topics directly relevant to machine learning.
In addition, the EU has adopted the AI Act, designed to regulate AI systems across sectors. The AI Act does not create direct rules on AI authorship, but it introduces a compliance architecture focused on responsible AI, including transparency-related obligations.
EU case law has not yet formed a stable line specifically on IP issues arising from generative images. Nevertheless, conceptually, EU policy discussions increasingly emphasize balancing innovation incentives with protection of authors’ interests where generative models may rely on or imitate protected works.
Copyright for Works Created by Artificial Intelligence: Russian Rules and Practice
In Russia, intellectual property law generally provides that the author of a work is a natural person—someone who created the work through creative effort. This is explicitly reflected in Part IV of the Civil Code of the Russian Federation, including Article 1228, which defines an author as the citizen who created the result of intellectual activity (see: Civil Code of the Russian Federation, Article 1228).
Based on this rule, a core position follows: copyright for works created by artificial intelligence does not arise if there is no human creative contribution.
At present, Russian judicial practice on AI use remains limited, and questions concerning copyright in an AI-generated image or, especially, a “visual concept” have not been addressed in a direct and “pure” form.
In case A40-200471/2023 (Ninth Arbitration Court of Appeal, decision of April 8, 2024), the court stated that deepfake technology is an additional tool for processing (technical editing) of video materials, rather than a method of creating them. Therefore, the fact that a designer used deepfake to technically edit source materials does not, in itself, demonstrate that the video is freely usable without permission, nor does it negate the personal creative contribution of those involved in scriptwriting, filming, audio production, and other creative work.
Courts have also indirectly touched on AI-related arguments in other disputes (including cases where references to AI were not accepted as proof that no infringement occurred, and cases where the claimant’s rights to an image were upheld despite arguments about AI generation). These matters illustrate that courts focus on evidence of human contribution and that AI infringes copyright where protected objects are used without authorization.
A key point is that Russian legislation contains no special, dedicated rules on copyright in AI works; courts apply the traditional provisions of the Civil Code. In the absence of direct regulation, the main logic of decisions is to assess whether a human creative contribution exists and whether original elements are present that indicate a protectable object of intellectual property.
Training Data: When AI Infringes Copyright
A separate and substantial legal issue linked to the question of who owns rights to AI-generated works concerns the use of copyrighted materials for training generative models. When AI is trained on large datasets that include copyrighted images, the risk arises that the generated output may contain elements that are similar to—or even identical with—original works.
In such situations, AI infringes copyright not because of any “intent,” but because protected features may have been embedded into the model’s generation process.
This issue is increasingly discussed in legal scholarship and practice. Rights holders may argue that using their works for training without permission infringes their rights, and that resulting images—while formally new—may nonetheless rely on protected elements.
A vivid example involves prompts that generate content using protected images of fairy-tale characters, cartoon heroes, or other protected intellectual products. Such scenarios underline that copyright for works created by artificial intelligence cannot reasonably be recognized where the generation process itself involved unlawful use of a protected work. If an algorithm “saw” protected material and reproduces its elements, legal risk arises—one reason many lawyers argue for specific regulation of access to training data and compensation mechanisms for rights holders.
These questions are not purely academic. Commercial disputes are already emerging, with rights holders asserting claims against both developers and users of generative systems—strengthening the need for clearer rules on training, generation, and downstream use of AI outputs.
Corporate LLM Policies as a Temporary Solution
While the legislation of most countries generally denies copyright for works created by artificial intelligence in a fully autonomous scenario, major developers of language and visual models have introduced internal corporate policies governing content generation.
For example, OpenAI (ChatGPT) restricts the creation of materials that closely resemble specific protected works or that may infringe third-party rights. These restrictions do not automatically recognize copyright in AI outputs and are not purely driven by statutory requirements; rather, they reflect a risk-minimization policy for the company and its users (see: OpenAI Usage Policies).
As reflected in such policies, restrictions may cover, for example:
- generating content that reproduces specific existing works;
- generating content that is substantially similar to recognizable works;
- creating images likely to infringe third-party rights in visual concepts.
This is one reason why platforms may refuse to assist with exact replication of a viral image or clip, even if it was originally AI-generated.
Other large AI models implement filters and responsible-use principles that discourage generation of content that could infringe third-party rights. These corporate policies act as interim regulators until lawmakers introduce clearer norms. They reflect an understanding that automated content generation carries infringement risks and that such risks should be mitigated through platform controls.
Artificial Intelligence and Copyright: Likely Directions of Development
The debate about whether copyright for works created by artificial intelligence should be recognized is gradually shifting from the simpler question of who owns rights to AI-generated works to a more complex task: how to build a system that supports innovation while protecting the interests of rights holders and society.
It is increasingly clear that automatically extending classical copyright to all AI-generated outputs could lead to overregulation and, in practice, slow the development of digital creativity. Instead, a multi-level approach is more frequently proposed.
Transparency of Training Data as a Basic Principle
One key direction is greater transparency regarding training data. If generative models use materials protected by copyright, AI developers may be expected to disclose general categories of data sources and implement mechanisms for consent or remuneration. This approach shifts the regulatory focus from controlling outputs to controlling the conditions of model training.
In other words, rather than automatically restricting use of outputs, it may be more coherent to regulate training. This can reduce the risk that AI infringes copyright at the stage of model development, rather than addressing disputes only after generation and use.
An “opt-in” approach—open datasets where creators can voluntarily permit use of their works for machine learning—is also discussed as a promising direction.
Labelling and Traceability of AI Content
A second principle is mandatory labelling of AI-generated content and the development of traceability mechanisms. European regulation is moving in this direction, including certain identification-related requirements associated with the EU AI Act (see: Regulation (EU) 2024/1689 (AI Act) – EUR-Lex, EN).
Such a system does not restrict creativity or prevent free use of images, but it reduces deception. Users should understand whether they are looking at an algorithmic output rather than a work created exclusively by a human author. In the future, this may include digital watermarks, metadata, or technical markers enabling identification of origin, the model used, and the nature of human–AI interaction.
This is especially important where it is necessary to distinguish AI authorship claims from genuine human creative contribution.
Public-Domain Default for AI Outputs, With Special Rules for Commercial Use
In expert discussions, a view is increasingly expressed that, by default, copyright for works created by artificial intelligence should not arise automatically, and such results could be treated as being in the public domain. However, in commercial use, it may be appropriate to consider the provenance of the material.
If an image clearly reproduces protected elements or the recognizable style of a specific author, licensing or other contractual mechanisms may be required. For non-commercial use—educational, research, or household contexts—excessive regulation may be unjustified.
This differentiated approach preserves creative freedom while encouraging hybrid human–AI co-creation models in which the final result may qualify for protection due to meaningful human participation.
Preventing Harm Rather Than Policing Formal Similarity
Modern regulation increasingly proposes shifting the focus from formal similarity bans to preventing actual harm. This includes addressing deepfakes, manipulation of images of real persons, misinformation, and other socially harmful practices.
Such an approach addresses truly significant risks without turning every use of AI generation into a potential infringement. Corporate policies of major platforms—such as restrictions on generating images similar to recognizable works—can be seen as voluntary risk-reduction measures. In the future, more formalized mechanisms for external audit and collective negotiations between creative industries and technology companies may also emerge.
Need for International Harmonization
Because digital technologies operate across borders, effective regulation is difficult without international coordination. Differences between U.S. approaches (often associated with fair use discussions) and EU approaches (which emphasize transparency and certain data-related safeguards) can fragment enforcement.
In this context, international organizations such as WIPO (the World Intellectual Property Organization) are increasingly discussed as venues for developing common standards for artificial intelligence and copyright (see: WIPO – AI and IP).
A forward-looking model therefore is not the blanket extension of classical copyright to all AI outputs, but a balanced framework addressing responsibility for training data, transparency of content creation, and prevention of socially significant harm.
Copyright for Works Created by Artificial Intelligence: What Next?
Today, AI-generated results—including images produced by neural networks—are generally not treated as copyrightable objects in the classical sense in most jurisdictions, because authorship and creative contribution are traditionally linked to humans. This is reflected in the current legal regime and confirmed by U.S. case law, EU discussions, and the application of Russian norms.
Accordingly, at present, copyright for works created by artificial intelligence does not arise for the person who merely created an instruction (prompt) for producing a particular image.
At the same time, it is evident that the existing legal framework does not fully answer the challenges of the modern digital era. The rise of generative systems and the mass spread of automated creativity require legislative development in a way that:
- does not obstruct innovation; and
- protects the rights of those who contribute creatively—whether a developer, a user who meaningfully shapes the creative process, or the author of source data.
Only balanced regulation will ensure fair interaction between technology and the classical copyright system, and will enable a clear distinction between scenarios where rights in AI-generated images genuinely have legal value and qualify for legal protection, and scenarios that remain outside the scope of protection.