On 11 November, Liz Lohn, the Financial Times’ director of product and AI, told her audience at London’s JournalismAI Festival that disclosures of machine assistance in the production of the pink paper’s articles were undermining its brand.

“People pay a lot,” Lohn told us later, referring to the FT’s not inconsiderable annual subscription price. “Whenever they see an AI disclaimer, that erodes trust and creates the feeling that AI in journalism equals cheap.” In fact, Lohn added, there had been cases where readers had gone so far as to cancel their subscriptions, having formed the impression that too much AI slop was circulating amid the paper’s people-powered coverage of all things political and financial.

That poses a dilemma for the FT: how to surf the wave of change in media operations promised by GenAI without undermining its ability to reach its target audience. For Lohn, that means being very careful about mentioning when and how AI has actually been used in the production process. Context, in this case, is king. “If the content has been through human review,” says Lohn, “do we really need to put in the disclaimer that AI at some point has been used in the process, or is the responsibility on the journalist, on the FT?”

AI transparency as a business risk

The FT’s real-world experience runs counter to the market consensus on AI transparency: namely, that users would actually like to know when a machine intelligence has been involved in creating the product or service they are about to consume. That should, in time, foster trust in future use of AI in that process – unless, of course, you believe that AI-powered content creation ends up creating more garbage than gold, a view shared by respondents to a recent University of Arizona survey. Indeed, the study’s authors found that content claiming to have been made using, or partly using, AI was viewed as less trustworthy by respondents than that produced by actual journalists (which, as multiple examples have shown, is hardly infallible).

Nevertheless, most businesses are persisting with full disclosure of AI usage. Transparency is an issue for all firms automating customer-facing services in one way or another, says Mattias Goehler, European CTO of CRM platform Zendesk. According to Zendesk’s own research, around 25% of all customer interactions, such as complicated customer-service tasks and upselling, are defined as ‘high-value.’ The other 75%, meanwhile, remain ripe for some form of AI automation – and if they are automated, the received wisdom at Zendesk and many other companies is to let customers know. “I have rarely seen it any other way,” says Goehler.

Transparency will become even more critical when Zendesk launches its AI voice agents at the beginning of 2026. The firm’s own research, which examined over 15m recent customer service interactions (including human, AI-augmented and fully automated conversations), found that 47% of them failed to satisfactorily resolve the problem at hand. That leaves significant room for improvement – and, in the process, an opportunity to build trust in the methods used to raise this dismal success rate.

Another company taking the full disclosure route is the BBC. Executive news editor for digital development Nathalie Malinarich told the audience at JournalismAI that the corporation is very careful about the wording of AI disclosures and goes out of its way to make it abundantly clear to readers which pieces of content have been created using, or partially using, AI. “We do disclose everything,” said Malinarich, even “whether it’s assistance with translation [or] assistance with summarisation.”

That will change over time, however, Malinarich continued, as people become accustomed to the idea of using agents to shape content. “We know,” she said, “that there are big differences generationally between acceptance of AI.”

Will AI adoption ease the AI trust penalty?

In the meantime, work is underway to devise common ways to disclose the use of AI in content creation. The British Standards Institution’s (BSI) standard BS ISO/IEC 42001:2023 provides a framework for organisations to establish, implement, maintain, and continually improve an AI management system (AIMS), ensuring AI applications are developed and operated ethically, transparently, and in alignment with regulatory standards. It helps manage AI-specific risks such as bias and lack of transparency.

Mark Thirlwell, the BSI’s global digital director, says that such standards are critical for building trust in AI. For his part, Thirlwell is less concerned with whether content is disclosed as AI-generated than with improving the transparency of the underlying training data. “You wouldn’t buy a toaster if someone hadn’t checked it to make sure it wasn’t going to set the kitchen on fire,” he argues.

Thirlwell posits that common standards can, and must, interrogate the trustworthiness of AI. Does it do what it says it’s going to do? Does it do that every time? Does it do nothing else – a question that grows more pressing as hallucination and misinformation become increasingly problematic? Does it keep your data secure? Does it have integrity? And, unique to AI, is it ethical?

“If it’s detecting cancers or sifting through CVs,” he says, “is there going to be a bias based on the data it holds?” This is where transparency of the underlying data becomes key. “If I get declined for a mortgage application online because an AI algorithm decides I’m not worthy,” says Thirlwell, “can I understand why that is? Can I contest it?”

Thirlwell’s view of how the transparency-to-trust roadmap will develop over the next decade centres on specialist use cases. “Medical devices, AI for biometric identification, and other really niche use cases will make a big difference,” he says, “and trust will grow as these evolve into place. But there’s a lot of work needed on governance and regulation, to clarify and make it easier for organisations to adhere to regulation. Once that’s addressed, then that will help to grow trust.”

The caveat is that tangible, beneficial use cases will only build trust if guardrails are put in place. Otherwise, warns Thirlwell, all it takes is “one big front page issue that’s going to rock trust.”
