The UK just saw its first major ruling on AI and copyright — and it’s a big deal for anyone building or using generative AI tools. The UK High Court has weighed in on Getty Images’ case against Stability AI, the company behind Stable Diffusion. Getty argued that Stable Diffusion itself was an “infringing copy” of Getty’s images because the system was trained on them. The court, as it turned out, disagreed.

The decision gives some clarity on how UK courts might handle copyright and trademark issues around AI training and outputs. Here are the issues CIOs and AI developers most need to pay attention to.

No ‘copy’ inside the AI model

After digging into how the model was built, the judge concluded that, once trained, Stable Diffusion doesn’t actually store the original images; it only retains patterns learned from the data. In plain English: the model doesn’t keep copies of photos, so it can’t be a “copy” in the legal sense.

That finding knocked out Getty’s secondary copyright infringement claim. For AI developers, this part of the decision is reassuring — the court recognised the technical distinction between training on data and copying data.

Before the court could rule on whether the training process itself infringed Getty’s copyright works (the so-called ‘input’ claim), Getty dropped that part of the case.

As such, we still don’t have a UK precedent on whether using copyright-protected material to train models is lawful. Stability AI’s defence was that the training data was never stored or downloaded in the UK, and Getty’s withdrawal means we’ll have to wait for another case (or new legislation) to get a clear answer.

Trademark trouble: logos in the outputs

The court did find some limited trademark infringement. A few AI-generated images still contained “Getty Images” watermarks — and that was enough for the judge to say: yes, that’s trademark infringement.

This opens the door for brand owners to argue that their marks are being misused if AI systems generate content showing their logos or brand names, even accidentally. In short, copyright isn’t the only legal risk for AI output — trademarks are now on the table, too.

What happens next

The ruling leaves AI developers and creatives in limbo on the biggest question: can you legally train AI models on copyrighted works without permission in the UK?

Right now, there’s no clear “yes” or “no.” The UK IPO has carried out a consultation seeking responses from the relevant stakeholders, and Parliament might eventually step in. Until then, every company training or deploying generative AI models should assume there’s still legal risk around the data used for training. US courts have taken a more flexible “fair use” approach in some cases, but it’s unclear whether UK courts will follow that path if and when the next case reaches them.

Bottom line for tech leaders

First, the good news: the court confirmed that AI models aren’t themselves ‘copies’ of the data they’re trained on. The bad news is that we still don’t know whether training on copyrighted material is lawful in the UK. And trademark owners now have a new angle to attack AI output that mimics their brand.

What might the next steps be? That’s still unclear, but CIOs and AI developers should watch out for any appeals arising from the Getty case – and also for any clarifications or changes to UK copyright law.

Varuni Paranavitane is of counsel at Finnegan, Henderson, Farabow, Garrett & Dunner, LLP
