Andrea Carcano, a former teenage amateur white hat hacker, never planned on founding his own cybersecurity company. His fascination with engineering malware and viruses began while he was studying for his PhD in a European Commission (EC) lab, where he was free to demonstrate how such malicious programs could sabotage CNI control systems. He then switched to defence. After joining the European CRISALIS project, he leveraged his hacking skills to craft a detection system for Stuxnet-type malware, before being recruited by an oil and gas major that shall remain nameless in this introduction – whereupon his career stalled.

The ex-white hat hacker had made the mistake of suggesting that a multi-billion-euro-capitalised firm might develop its own AI-powered cyber-defences, a proposal his manager immediately laughed off as too costly. So, Carcano went about building his own. The result was Nozomi Networks, an OT and IoT security provider for critical infrastructure. “I never dreamed of building my own company – I always had fun in solving the problem,” says Carcano, which is why he would step down as CEO within four years to return to product evolution, something he calls his ‘superpower.’

In the following interview, edited for length and clarity, Carcano explains why IT cannot simply be dropped into the critical operational technology (OT) world, how collaboration is essential to mitigating risk, and why the organisations that facilitate that collaboration need more support.

Are cybersecurity companies guilty of using alarmist language? Sometimes, says Nozomi Networks’ Andrea Carcano – but you should see the incidents that don’t make it into the news. (Photo: Nozomi Networks)

What makes defending critical infrastructure from cyberattacks different from defending a large company?

Carcano: For organisations running critical infrastructure, availability requirements are very different from those in other sectors. For example, if antivirus software mistakenly blocks an email, you’ll be upset, but you can call IT and they’ll fix it. Industrial networks, meanwhile, need to work 24/7, whether producing electricity or building cars. If you block processes for even a few minutes, the best-case scenario is that you lose money; in the worst case, you create a significant problem for the community itself. When you drop IT technology into the industrial OT world, it is not equipped to operate in an environment where uptime is everything. In addition, it’s important to understand the physicality of the environment and its processes to discover whether an attack has taken place. We know from cases like Stuxnet that subverting physical infrastructure can have surprising, sometimes catastrophic results.

‘Catastrophic’ is a strong term. What’s your take on the criticism that the cybersecurity world too often reaches for alarmist language?

I think it’s a partially valid point. But is it true? How many disasters that we know about have actually reached the media? Only a few, right?

What we’re seeing, though, is the number of attacks growing exponentially. But attackers aren’t trying to disrupt the plant, because then the disaster would be too big – you don’t want your target to collapse, but to pay you money. And you wouldn’t believe how many companies today pay to regain control of their critical infrastructure operations, or even part of them. If you go on the black market, the majority of cyber weapons are not ones that can really destroy a plant or put it in a critical state, but ones that block production, so attackers can go back to the company and say, ‘If you want it back, you must pay.’

CISOs and CIOs in critical infrastructure organisations face an enduring dilemma: they must support innovation but do so in ways that don’t create new vulnerabilities that cybercriminals might exploit. How, then, should key decision-makers work together to bridge the gap between risk and reward? 

I think this is a question without a unique solution. Having spent all my life in cybersecurity, including on the customer side for four years [at Italian multinational energy company Eni], I think the right approach is not to look for 100% security. That doesn’t exist. Instead, you always need to mitigate. Even when you drive your car faster on the motorway, you increase the risk of being injured in an accident, but you still travel at that speed.

It’s the same for every organisation. You cannot completely close yourself off; you gain efficiencies by exchanging data or receiving it in real-time from sensors installed throughout your power plant or factory. I don’t think we should stop companies from innovating in that area. What we need to do instead is learn from attacks, from what we see developing in the IT world, and try to compensate for the new risks with new measures. That’s what we’re doing, and I think that’s what every cybersecurity technology should be doing. 

Bearing this in mind, do you have some illustrative examples of poor practice?

Power plants, mines, pharmaceutical plants and other such facilities are designed to operate in a completely segregated way. People are on site 24/7, they exchange data manually, collecting it with hard drives and then manually entering that data into a different system for analysis. That’s the base. But at some point, the business wants to receive that data in real time and therefore invests in real-time devices. 

However, 99% of this technology typically has no authentication, and while newer devices might have it, such technology is only replaced in 20-year cycles. Because of this, it’s basic best practice that these devices are never connected directly to the internet; if they must be, they need to sit behind some form of segregation.

Yet sometimes we enter a plant and find one of its most sophisticated components exposed to the internet and reachable by anyone on the network. As those devices don’t have authentication, if you can speak their language, they will do what you ask. And in some cases, we discovered there was someone from Russia, or China, or other such countries, monitoring that infrastructure and receiving data in real time every day, simply because it was exposed through an insecure internet connection. We found that one customer had 50% of its plants exposed to the internet in this way!
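To see what “if you can speak their language, they will do what you ask” means in practice, consider Modbus/TCP, one of the most widely deployed industrial protocols. Its frame format is public and contains no credential field at all: any client that can reach the device and format a valid request will be answered. The minimal sketch below (illustrative only; the device address and register values are arbitrary examples, not drawn from the interview) builds a raw “read holding registers” request using nothing but Python’s standard library:

```python
import struct

def modbus_read_request(unit_id: int, start: int, count: int) -> bytes:
    """Build a raw Modbus/TCP 'read holding registers' frame.

    The frame is just a 7-byte MBAP header followed by the PDU.
    Note there is no authentication field anywhere in the layout:
    reachability plus a well-formed frame is all a client needs.
    """
    transaction_id = 1           # arbitrary; echoed back by the server
    protocol_id = 0              # always 0 for Modbus/TCP
    function_code = 0x03         # 'read holding registers'
    pdu = struct.pack(">BHH", function_code, start, count)
    length = len(pdu) + 1        # unit id byte + PDU
    mbap = struct.pack(">HHHB", transaction_id, protocol_id, length, unit_id)
    return mbap + pdu

# 12 bytes total: 7-byte header + 5-byte PDU, no credentials anywhere
frame = modbus_read_request(unit_id=1, start=0, count=10)
```

Sending those 12 bytes to TCP port 502 of an exposed controller is enough to read its live process data, which is why segregating such devices from the internet is treated as baseline practice.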

Why does this happen? There may be rules restricting internet use on company computers, but it only takes one bored person with purchasing power, working a 24/7 shift, buying a $50 5G router instead of a more expensive, more secure version.

This month, we’ve seen continued grumbling about the backlog on the CVE program and the US government threatening to withdraw funding for the program at MITRE. How could continued uncertainty about both impact the wider private sector’s response to cyber-threats?

One is that certain organisations, including MITRE, are behind in publishing newly discovered vulnerabilities, despite having a backlog of potential flaws on their books. So now the gigantic question is, how big is that list? If it’s very big, it means you have many vulnerabilities that are, effectively, open secrets – known to the cybercriminal underworld but only partially understood by the companies that have to patch them. That, in turn, leads to a world where every organisation is struggling to defend against these flaws without being able to rely on a unified response aided by organisations like MITRE. 

I grew up in the cybersecurity world, always assuming those types of organisations would be working without an issue. I think making sure they are well funded, but not just by the US, which is the main financial contributor, is very important. Possibly we could have an effort that is driven by NATO with adequate funds to make sure it’s maintained properly. 

This is something that cannot be solved by one private company. Every private company should be forced to collaborate, and it should be driven by organisations that don’t have an economic interest. 

Read more: The good CTO helps define a company’s culture as much as its technology, says Checkatrade’s Gorden Pretorius