When it comes to binding legislation, artificial intelligence is still something of a Wild West. But in the place of statute, a patchwork of “soft” law – ethical guidelines, principles, standards – has sprung up in recent years. Introduced by a variety of stakeholders from the public and private sectors, these instruments are typically agreed to by companies on a voluntary basis. Could AI soft law inform regulatory efforts on the horizon?

AI soft law

Ethical guidelines around AI-powered systems like autonomous vehicles could inform the development of law. (Photo by Qilai Shen/Bloomberg via Getty Images)

A research team at Arizona State University compiled a database of 634 AI soft law programs published between 2001 and 2019. Around 90% of these were created between 2017 and 2019 alone, indicating that the boom in soft law instruments is a recent phenomenon.

“Per its definition, soft law comprises any program that sets substantive expectations, but is not directly enforceable by government,” says Carlos Ignacio Gutierrez, a governance of artificial intelligence fellow at ASU and member of the research team that put together the database. “Essentially this includes efforts whose implementation is not reliant on direct government action (via a fee or penalty) such as principles, guidelines, best practices, standards.”


Companies will spend nearly $342bn on AI software, hardware, and services in 2021, predicts IDC, which forecasts that the AI market will expand by 18.8% this year and remain on track to hit $500bn by 2024.

But the growth of AI is matched by a dawning awareness of its potential harms, particularly in high-risk AI applications including self-driving cars, autonomous weapon systems, facial recognition, algorithmic decision making, and social credit ranking algorithms. This has stoked policymakers’ interest in creating new laws around AI, but regulatory efforts are complicated by the dynamism of the field and the lack of consensus around what AI even is.  

“For new technologies, soft law precedes hard law because it can be accomplished much faster,” says Anna Jobin, senior researcher at the Alexander von Humboldt Institute for Internet and Society, who has also studied AI soft law. “It’s faster because it can be created ad hoc, by different stakeholders, initiatives can reach across sectors or even national borders, and their implementation can be very flexible.”

Some soft law is implemented by consortiums of companies or industry bodies. The Institute of Electrical and Electronics Engineers (IEEE), for example, is currently undertaking an ambitious initiative on soft law governance of AI, working with several hundred AI experts across multiple disciplines to develop IEEE standards on various aspects of AI governance.

Gutierrez’s research found that multi-stakeholder alliances involving government, the private sector, and non-profits accounted for 21% of AI soft law programs, with non-profit and private sector alliances making up a further 12%.

But the involvement of companies and industry stakeholders in developing standards has eroded the credibility of those standards. Companies using AI have earned a reputation for “ethics washing”, in which an ‘ethical’ label is slapped on technologies that aren’t held to any official standard. This is a problem in the nascent field of AI auditing too: if there is no legally binding framework for AI ethics, then what are AI applications being audited against?

Industry sometimes uses ethics to look as if it is doing something, in a bid to fend off regulation, says Angela Daly, reader in law and technology at Strathclyde University. “If it looks like a sector is unregulated then there’s much more appetite for regulation.”

But there’s a growing awareness that soft law may not be enough to govern the next generation of AI. One of the reasons is the lack of enforcement of AI soft law. “If organisations do not find that the enforcement of a program is in their interest, it will certainly be abandoned,” says Gutierrez. “In effect, our database shows that most soft law programs (about 70%) do not publicly disclose enforcement mechanisms.” 

From soft law to hard law

This could herald a shift from soft to hard law – a fairly well-known trajectory for the regulation of new technologies. Thilo Hagendorff, a researcher at the University of Tuebingen’s Cluster of Excellence ‘Machine Learning’, sees three phases in AI governance and ethics. “The first phase started five to six years ago, when the first (quite abstract, high level) AI ethics guidelines were published by corporations, scientific institutions, and governments.”

The second phase began around two years ago, he says, when academic papers increasingly stressed that principles alone cannot guarantee ethical AI, and that other approaches or enforcement mechanisms might be needed. “The third phase is the current one, where frameworks describe how high-level principles can be put into practice, where ethics-as-a-service approaches are developed, and where binding legal norms are developed.” 

But soft law should not be seen only as the preserve of corporate stakeholders.  Gutierrez’s research found that government institutions play a prominent role in deploying such programs. More than a third were created by the public sector, leading the researchers to conclude that despite its non-binding nature, “soft law appears to serve a valuable role in a public authority’s toolkit to complement or replace hard law”. 

Consensus in AI soft law

Could soft law help inform hard law? Research examining the corpus of soft law has discovered that AI “principles” tend to coalesce around some common themes. One paper that analysed principles and guidelines on ethical AI found a consensus around eight key thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. 

A separate paper by Jobin, which analysed more than 80 AI ethics guidelines, revealed a global convergence emerging around five principles with considerable overlap: transparency, justice and fairness, non-maleficence, responsibility, and privacy.

“There is a strong consensus on a good handful of values,” says Hagendorff. “There is a consensus on the importance of values that can be solved technically. However, one can identify many ethical issues that cannot be remedied technically (gender gaps in the AI industry, sustainability issues, precarious clickwork, dual-use issues, labour displacement), and these issues are very often severely neglected.”

Technical problems have tended to take priority because, until recently, there has been much more activity in this arena from the corporate sector than from government. But for problems that can’t be solved technically, policymakers don’t always need to rush to implement entirely new legislation, says Daly. Existing laws might well cover the harms that could potentially emanate from AI. “It’s not actually true that AI is kind of being developed in a vacuum, legally speaking,” she says. “Certainly in the EU, and also in the UK, there are pre-existing areas of law which do apply to AI.”

Some of these include data protection law, constitutional and administrative law for public sector use of AI, and consumer protection law in the private sector, along with any sector-specific legislation that might apply. “I see the question being ‘is AI and our particular uses and developments of AI covered by existing laws?’,” says Daly, “and then we may get to the point where we can see, actually, this issue isn’t very well covered by consumer protection law so we do need new laws. But I don’t think we have really done that.”

Gutierrez says that recent studies find that in the US, methods and applications of AI create regulatory gaps, most of which (about 88%) require only clarification or amendment of existing law to resolve. That leaves only around 12% of cases in which bespoke new laws are needed. “This provides an indication that the hard law frameworks in the U.S. are relatively resilient to AI for now,” he says. “However, future AI-based technologies can change this really quickly.”

The EU is ploughing ahead with its plans to implement hard law around AI, but many of the inherent difficulties are coming to the fore. Its legislative proposals would ban some “high risk” AI, including AI used for indiscriminate surveillance and the manipulation of behaviour, and subject other high-risk applications to an intensive regulatory regime. But as Tech Monitor reported, there is much concern over the definition of AI the legislation uses, and other vagueness that could dampen its effect. The proposed regulation is currently being debated in the EU, and industry is expected to push back forcefully on the proposals.

However these efforts go, hard law is unlikely to subsume soft law for some time yet. “Soft law is not a panacea or silver bullet,” says Gutierrez. “By itself, it is unable to solve all of the governance issues experienced by society due to AI. Nevertheless, whether by choice or necessity, soft law is and will continue to play a central role in the governance of AI for some time.”