In November 2025, the UK High Court handed down its long-awaited decision in Getty Images v Stability AI, the first major judgment anywhere in the world to meaningfully address how copyright and trade mark laws apply to generative AI systems.
The case has been closely watched by creators, rights-holders, technology companies and lawyers alike. While the decision arises under UK law, its reasoning will inevitably influence how similar disputes unfold in Australia.
So what did the Court decide — and what should Australian businesses and content owners take away from it?
The Dispute in Brief
Getty Images alleged that Stability AI, the developer of the image-generation model Stable Diffusion, had:
- Used Getty’s copyrighted images to train its AI model without permission; and
- Enabled the generation of images containing Getty’s trade marks and watermarks, including “Getty Images” and “iStock”.
While the case initially appeared set to answer the question many have been asking — is training AI on copyrighted content unlawful? — it ultimately turned on a narrower, but still highly significant, set of issues.
Copyright: Why the Core Claim Was Never Decided
Crucially, Getty accepted during the proceedings that the training and development of Stable Diffusion did not take place in the United Kingdom.
That concession proved fatal to Getty’s primary copyright infringement claim. Under UK law, copyright infringement is territorial: if the act of copying occurred overseas, there may be no infringing act for a UK court to adjudicate.
As a result, the High Court did not decide whether training an AI model on copyrighted images constitutes infringement as a matter of law.
This is an important reminder for rights-holders: where AI training occurs can be just as important as how it occurs.
Are AI Models “Infringing Copies”?
Getty’s fallback argument was that the AI model itself — specifically its “weights” — embodied copies of Getty’s images. On that basis, Getty argued that making Stable Diffusion available in the UK amounted to importing and dealing in infringing copies.
The Court rejected this argument.
It held that:
- AI model weights are statistical parameters, not stored images or reconstructions;
- They do not contain a recognisable reproduction of any particular photograph; and
- Accordingly, the model is not an “infringing copy” for the purposes of UK copyright law.
Trade Marks: Where Getty Found Success
Getty’s trade mark claims fared better.
The Court accepted evidence that earlier versions of Stable Diffusion, when prompted in ordinary and realistic ways, could generate images containing Getty or iStock watermarks.
That mattered because:
- Trade marks can be infringed even without intent; and
- Liability can arise where marks are used in the course of trade and are likely to cause confusion among consumers.
Importantly, the Court gave weight to Stability AI’s later efforts to introduce filtering and safeguards, particularly in hosted environments.
What Does This Mean for Australia?
Although this is a UK decision, Australian courts are likely to look closely at its reasoning when similar disputes arise here.
Copyright and AI Training
Australia’s Copyright Act also operates territorially, but Australian courts have historically taken a broad approach to what constitutes “reproduction”, particularly in digital contexts.
Whether AI training involves reproduction “in a material form” remains an open question in Australia. The UK decision will be persuasive, but it does not settle the issue locally.
Secondary Infringement
The UK Court’s conclusion that model weights are not copies may resonate with Australian courts, but again, this would depend on the evidence and statutory interpretation under Australian law.
Trade Marks and Outputs
On trade marks, the lesson is clearer.
If an AI system generates outputs that include registered trade marks and those outputs are used commercially, there is a real risk of infringement under Australian trade mark law. The focus is increasingly on outputs, not just inputs.
Practical Takeaways
For rights-holders, content owners and AI developers:
- Monitor not only dataset usage, but where AI training takes place
- Trade marks and watermarks remain powerful enforcement tools
- Contractual controls over AI training are still the most reliable protection
- Open-source availability does not eliminate legal risk
- Filtering, watermark detection and output controls matter
- Maintain clear records of data provenance, training locations and safeguards
Getty Images v Stability AI does not provide all the answers — but it marks a significant step in how courts are beginning to approach AI and intellectual property.
Copyright law may struggle to stretch around AI training practices. Trade mark law, however, is already proving more adaptable.
For Australian businesses operating in this space, the message is clear: the legal landscape is evolving quickly, and proactive risk management is essential.
You can listen to the full discussion of this case on the latest episode of Elise Explains IP. If you would like advice on how AI intersects with your IP strategy, please get in touch.