UK infrastructure runs on open-source software. Every government department, contractor, bank, and hospital is using software pulled from public code repositories, built by strangers on the internet.

Far too often, departments do this without checking who wrote the code in the first place or whether it’s been compromised. As advanced cyber attackers from nation-states become more common, and as the world relies more heavily on open-source software, this issue has become a matter of national security.

A growing threat to UK infrastructure

Today, more than 90% of modern container applications are built on open-source software. It’s in the NHS, the rail network, government services, financial systems – everywhere. It is the backbone of today’s essential services. Yet there’s almost zero verification of where that code comes from. We’re effectively plugging anonymous packages into mission-critical systems and hoping for the best.

Last year, the maintainers of the Linux kernel – a fundamental operating system component – took the step of removing several Russian contributors over concerns that hostile actors could infiltrate the project and plant malicious code. The decision resonated across the open-source community because nation-state attackers aren’t coming through the front door; they’re sneaking into the supply chain, hiding in code dependencies no one bothers to check.

Open source isn’t broken – how we use it is

Open source itself isn’t the problem; its advantages are well documented. But our systems for securing and maintaining it are outdated. Current practice – unverified binaries and a reliance on unknown maintainers – makes these waters dangerously murky.

Most organisations pull pre-built binaries (pre-compiled software artefacts) from public repositories such as GitHub or Docker Hub. These are rarely verified, their build processes are opaque, and their maintainers could range from enthusiastic hobbyists to well-resourced, patient foreign state actors. In practice, their provenance is untraceable. Historically, the open-source ecosystem has thrived on trust, but in an era of heightened geopolitical tension and advanced cyber attacks, indiscriminately trusting unknown entities on the internet is no longer viable.
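Even a basic check raises the bar. As a minimal sketch (the file name and digests here are hypothetical, invented for illustration), an organisation can pin the SHA-256 checksum a publisher releases alongside a binary and refuse to install anything that doesn’t match:

```python
import hashlib
from pathlib import Path

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded artefact's SHA-256 digest against a pinned value."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256.lower()

# Hypothetical example: stand in for a binary downloaded from a public repository.
artefact = Path("example-tool.bin")
artefact.write_bytes(b"pretend this is a downloaded binary")

# The checksum a vendor might publish alongside the release.
pinned = hashlib.sha256(b"pretend this is a downloaded binary").hexdigest()

assert verify_artifact("example-tool.bin", pinned)        # matches: safe to install
assert not verify_artifact("example-tool.bin", "0" * 64)  # mismatch: reject
```

A checksum only proves the file you downloaded is the file the publisher intended to ship; it says nothing about who built it, how, or whether the build itself was compromised – which is exactly the gap provenance frameworks aim to close.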

Thankfully, the UK is beginning to address these issues. The National Cyber Security Centre (NCSC) has published guidance and frameworks aimed at mitigating risks within critical national infrastructure, such as the Software Security Code of Practice. It gives a practical framework for building software that is secure from the beginning, while aligning with other global initiatives such as the EU Cyber Resilience Act. With fourteen principles across four key areas, the Code of Practice brings safety and compliance to the forefront, outlining the fundamental security and resilience measures organisations should be expected to meet.

But guidance alone isn’t enough. The Code is voluntary, and voluntary standards rarely change behaviour at scale. What we need instead are measures that compel government departments to take a more deliberate approach to open-source software. These could include tax incentives that financially reward companies that comply with the Code, or new procurement rules that make adherence to its precepts a significant factor in winning public sector IT services contracts. More of the risk should also be shifted onto developers and vendors that fail to follow established secure development practices. SMEs, too, should be given practical, realistic guidance on how to use open-source software without compromising wider networks.

A new focus on provenance for open-source

The NHS Cyber Security Charter is a promising step. It raises the bar for suppliers, emphasises transparency, and acknowledges the real risk posed by unseen supply-chain threats. But adherence to the Software Security Code of Practice is only a first step. Beyond incentivising businesses to adopt the framework, the Code itself must go further: in its current form, it doesn’t require organisations to verify where their open-source code actually comes from. That’s a gaping hole. Until we have frameworks that force visibility into who built the software we’re using – and how it was built – our infrastructure remains exposed.

Again, open source isn’t the problem, but our outdated way of consuming it is. We need software supply chains where provenance is non-negotiable, builds are verified, and every component has a traceable chain of custody back to a real, accountable maintainer. Anything less is just hoping no one notices the backdoors until it’s too late.
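What might a traceable chain of custody look like in practice? Frameworks in this space attach a signed provenance record to each artefact, naming the build system and recording the artefact’s digest. As a simplified, hypothetical sketch (the record format and builder name are invented; real deployments would also verify a cryptographic signature over the record), a consumer can at least check that a provenance record actually describes the artefact in hand:

```python
import hashlib
import json

def artefact_matches_provenance(artefact: bytes, provenance_json: str) -> bool:
    """Check that a (hypothetical) provenance record's subject digest
    matches the artefact we are about to deploy."""
    record = json.loads(provenance_json)
    actual = hashlib.sha256(artefact).hexdigest()
    return any(s.get("sha256") == actual for s in record.get("subjects", []))

# Hypothetical provenance record a build system might publish with a release.
artefact = b"pretend this is a container layer"
provenance = json.dumps({
    "builder": "ci.example.gov.uk",  # an accountable, named build system
    "subjects": [{"name": "example-service",
                  "sha256": hashlib.sha256(artefact).hexdigest()}],
})

assert artefact_matches_provenance(artefact, provenance)         # custody holds
assert not artefact_matches_provenance(b"tampered", provenance)  # tampering caught
```

The digest check alone isn’t sufficient: without a verified signature over the record, an attacker could forge the provenance along with the artefact. That is why signing and transparency are central to making provenance non-negotiable rather than decorative.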

Dan Lorenc is the CEO and co-founder of Chainguard
