Lloyd's Register of Shipping published its first classification of ship seaworthiness in 1764. Used by merchants and investors for confidence that a vessel could carry its cargo without disaster, and by insurers writing premiums, the list quickly became an essential enabler of maritime trade and of Lloyd's own insurance business. AI could do with similar assurances. A palpable lack of trust is often cited by consumers and businesses alike as the main barrier to the adoption of AI services – something that could be addressed by a robust third-party assurance framework.

The UK government has recognised that such a framework can only work when the conditions are right for people to trust these assurance providers. Such is the aim of its ‘Trusted third-party AI assurance roadmap,’ published on 3 September. The plan aims to address four key problems customers face when assessing AI products: uncertainty about the quality of third-party assurance for AI products and services; whether those conducting assurance assessments have the requisite skills; a lack of ready information on how AI systems are trained and constructed; and a lack of quality metrics, owing to insufficient innovation in assurance techniques.

Developing skills and ethics for AI assurance professionals 

Regarding the question of assessing quality and skills, the roadmap proposes convening a consortium, supported by the Department for Science, Innovation and Technology, that will work with the UK’s quality infrastructure and professional bodies to develop a voluntary code of ethics for AI assurance. The consortium will also devise a skills competency framework for AI assurance professionals, which, in time, should lead to the creation of a professional certification scheme and accompanying training programmes.

The government also considered the role that process certification might play in guaranteeing the quality of assurance. While undeniably valuable, certification was rejected in the short term as too complex, lengthy and expensive; similarly, accreditation of assurance providers was judged not viable in the immediate future. The near-term focus for guaranteeing the quality of assurance services therefore rests on the individuals who will carry out assessments.

Unlocking AI assurance in practice

When it comes to addressing the general paucity of information on how AI products are trained and constructed, the roadmap advocates for clear processes defining the exchange of relevant data between AI developers and their customers, the better to allay developer concerns about ceding market advantage to competitors. To support innovation in the sector, the government has also proposed establishing a dedicated ‘AI innovation fund’ to attract new investment into the space.

All these steps should be welcomed – but the roadmap has two notable gaps. First, its focus on how individuals should perform assurance services ignores the crucial need for established benchmarks on which to base that work. Many benchmarks are available, which, ironically, complicates the work of the assurance provider: which one, after all, is best? The framework also does not do enough to spur the development of better benchmarks.

Second, the framework does not address the practicalities of certifying AI products and services with a visible trust mark. Such labels provide instant and highly visible reassurance to consumers: website certifications, for example, have successfully provided a digital stamp of approval for the online presence of millions of organisations since the mid-1990s. According to research published last year by TrustedSite, 83% of consumers said they would be more likely to trust a site that prominently displays a third-party trust badge than one that doesn’t.

Despite these gaps, the very fact that the UK government has proposed a roadmap for building an AI assurance ecosystem is positive. While it does not light a path towards assurance providers adhering to a single benchmark à la Lloyd’s Register, it nevertheless sows the seeds of a new career path for AI assurance professionals and supports the growth of AI assurance companies. In the end, it will help drive domestic economic growth and create a valuable service export.

Professor Robert Trager is the director of the Oxford Martin AI Governance Initiative at the University of Oxford. Ray Eitel-Porter is a senior research associate at the Intellectual Forum, University of Cambridge, and the author of ‘Governing the Machine.’
