
The recent launch of GPT-5 has once again highlighted the fast-moving capabilities of LLMs. But alongside the wonder at what these models can produce – visuals that are increasingly indistinguishable from reality – come serious questions about how trustworthy the output can be.
Can we trust AI models to generate images for business use when they’ve been trained on biased, unlicensed content scraped from across the internet? Are companies exposing themselves to legal risk by using outputs that may infringe on copyright, likeness, or trademarks? And will consumers trust, and connect with, AI-generated content without transparency about training data and labelling? These are all questions businesses should be asking themselves as they bring these tools and this content into their creative workflows.
How, then, do we ensure that AI models for content generation are not only innovative but also trustworthy? The answer lies in how these tools are built. From respecting intellectual property and artist rights within training practices to transparency of outputs, responsible AI development is key to producing content that businesses can trust and consumers will engage with.
Building trust into AI
Most of the popular tools available today have been trained on scraped content from the web, absent rightsholder permission or payment. This means the outputs of these models could contain elements that violate copyright, personal likeness or trademarks, putting a business at legal risk if it uses these outputs.
Transparency is key to trustworthiness. Customers should be able to ask how models were built, including with what data, and receive clear and detailed answers, so they can make an informed decision about whether to use those capabilities. And it should be clear to everyone when an image has been created using AI.
The output is only as good as the input
Trust is also shaped by the quality of the output. An unfortunate effect of models trained on scraped content is the increasing amount of AI “slop” – low-quality, low-value generative AI content – proliferating across the visual landscape.
Bias in training data is another key concern, with images generated by many of the popular models perpetuating outdated stereotypes around ethnicity, gender, age or disability.
If AI datasets contain copyrighted material, there is also the issue of permission and compensation. Using someone else’s intellectual property without consent or payment isn’t innovation – it’s exploitation. Compensating creators for the use of their work in training data not only reduces legal risk for the end user but also sustains the creative ecosystem, ensuring a continuing flow of high-quality, human-created content for AI models to draw on. If artists are not compensated for their work, eventually that work will go away.
Reflecting the diversity of our communities matters. Accurately representing people matters. Mitigating deepfakes through responsibly trained AI models matters. Every business leader deciding whether to bring generative AI tools into the business should be asking how the underlying models were trained and whether the creators of the content they were trained on were compensated. They should demand answers. These should be table stakes, not exceptions.
Trustworthy AI is everyone’s responsibility
As AI capabilities expand, so too does the need for a shared set of rules. Policymakers, industry leaders and technology providers must work together to create a framework where innovation thrives without eroding the rights of creators or public trust. Strong copyright protections, transparent AI labelling and robust licensing markets are all part of that foundation.
Business leaders also have a role to play as the customers of these tools. Choose tools that have been responsibly developed; this will not only encourage more innovation that respects intellectual property, but also ensure that the output does not expose your business to legal risk or erode your hard-won consumer trust.
Grant Farhall is the chief product officer at Getty Images