Copyright and Artificial Intelligence: Analysis of AIPPI Resolution Q295

October 29, 2025

During the recent 2025 AIPPI World Congress in Yokohama, national groups from around the world discussed and adopted Resolution Q295 on Copyright and Artificial Intelligence (AI).

Two of the major topics currently being debated at the intersection of copyright and AI concern the content that goes into AI (training) and the content that comes out (output). The following points from Resolution Q295 are particularly noteworthy:

Training

  • General rule: The use of protected works to train AI should only be allowed with authorization or under an applicable exception.
  • Existing exceptions: Exceptions and limitations on the use of works that already exist in each jurisdiction should also apply to AI training uses.
  • Ad-hoc exceptions: A specific exception should exist for AI training where the use is not-for-profit and serves a public interest.
  • Berne standard: Any exception must comply with the three-step test of the Berne Convention.
  • Opt-out system: In jurisdictions where AI training is already permitted without prior authorization, rights holders should have the right to opt out of such use of their works; if they do not exercise this right, they should be entitled to financial compensation for such uses.
  • Transparency: The party responsible for the AI system must disclose the protected works used for training, as well as the protected works input by users, so that rights holders can identify them and exercise their rights.

Output

  • General rule: Existing rules on copyright infringement should also apply to outputs generated by a trained AI system.
  • Idea/style: An AI-generated output should not be considered infringing solely because it has the “same style” as a protected work used in training.
  • Moral rights: The author of a work should have the right to object to its alteration if it harms their honor or reputation.
  • Input vs. output: An AI-generated output should not be considered to infringe a work merely because that work was infringed during AI training.
  • Authorizations/exceptions: These should be applied strictly, meaning that an AI output could still infringe a work even if that work was lawfully used for training under an authorization or exception.
  • Responsible parties: An infringing AI output may have the following responsible parties: (1) the AI system provider, (2) the party commercially exploiting the AI system, and/or (3) the party using the AI system with the purpose of creating infringing outputs.