Virtually every company faces the challenge of transforming itself in the AI era. Few have to balance that with managing a multi-year integration of their biggest rival. But that’s exactly the situation at UBS, which took over fellow Swiss bank Credit Suisse in 2023.

Stephan Hug is UBS’ head of group functions technology, a more than 5,000-strong group that spans risk, finance, compliance, HR, legal, comms and branding, and corporate services technology. He came to UBS after more than 20 years at Credit Suisse, including eight years as its chief architect.

In this discussion, edited for length and clarity, he explains how the bank balances the risks and benefits of integrating AI into its operations, who really benefits from AI, and what digital and data sovereignty means for the famously neutral European country.

AI deployments must be managed according to UBS’ values, argues its head of group functions technology Stephan Hug. (Photo: UBS)

Where are you at with your AI journey?

Stephan Hug: We have been very busy in the last two and a half years with the integration of Credit Suisse. That has been and still is our number one priority. So far, we have been extremely successful – nothing [has] failed. By the end of 2026, we should be materially complete.

About a year ago, the group executive board, under the lead of Sergio Ermotti, our CEO, said, “Look, we can’t really wait until this whole integration is over. The world is progressing. We have to make an investment in AI.” And so, they came up with something called AI Vision 30, which is really a targeted, four-year investment in AI across the firm. And that comes in many different forms. We have deployed Copilot to all our employees. That’s probably what is most visible to a lot of our employees.

Then we started defining what we wanted to do in AI. What were the bigger building blocks we wanted to invest in to move UBS along to become an AI-enabled company? There are a couple of very foundational elements. One is clearly around governance and controls, because we have to get that right. We are a heavily regulated bank. We are a Swiss bank; trust and our reputation are very important.

How do you ensure that AI models don’t introduce unacceptable risk?

We originally had a platform called Risk Lab, and as the name suggests, we use that to develop and test risk models very successfully. [Enterprise AI firm] Domino Data Lab is basically underpinning Risk Lab. So, it’s an integrated platform where we can develop, test, and ultimately deploy models. It’s integrated with a data platform that runs on top of Databricks in my organisation. It has proven to be extremely useful. And then when AI came along, we decided that we would use Risk Lab across the firm and would extend it to AI models as well.

Traditional deterministic risk models have been part of banking for years. So where does GenAI fit in?

You could say the overnight success of GenAI and large language models is what triggered our executive board to look into that and say, “Hey, if we don’t want to miss the train, we have to invest.” GenAI plays an important role in our journey.

We do have an internal version of ChatGPT, called Red, which gives access to ChatGPT in a more secure fashion than just using ChatGPT over the internet. We can basically control what data gets sent where.
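
UBS has not described how Red enforces this, but the general pattern is a gateway that inspects or redacts outbound prompts before they reach the model. The sketch below is purely illustrative of that idea; the client-identifier format and function names are invented, not anything Red actually uses.

```python
# Illustrative pre-send filter; UBS has not described how Red controls outbound data.
import re

CLIENT_ID_PATTERN = re.compile(r"\bCH-\d{6}\b")  # hypothetical client-identifier format

def scrub_prompt(prompt: str) -> str:
    """Redact client-identifying tokens before the prompt leaves the controlled environment."""
    return CLIENT_ID_PATTERN.sub("[REDACTED]", prompt)

print(scrub_prompt("Summarise the meeting notes for client CH-482913."))
```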

We have seen a lot of chatbots popping up across the organisation to solve certain problems. For example, as a bank, we have a lot of policies. They’re usually very complicated. We have written a chatbot that allows us to query the universe of policies. So, I can say, “I’m travelling to New York, I want to invite my team to a dinner. What is my allowance? What do I have to do?”
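
Hug does not spell out the chatbot’s internals, but retrieval over the policy documents plus a language model to phrase the answer is the common pattern for this kind of policy Q&A. The sketch below is a minimal illustration under that assumption, not UBS’ implementation: the policy excerpts are invented, TF-IDF stands in for whatever retrieval the bank uses, and the final step is a placeholder rather than a real model call.

```python
# Minimal retrieval-augmented policy Q&A sketch (illustrative only; not UBS' implementation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical policy excerpts; in practice these would be chunks of the real policy documents.
policies = [
    "Travel and expense policy: client or team dinners abroad require line-manager approval "
    "and are capped at a per-person allowance set by the local entity.",
    "Gifts and entertainment policy: invitations above the threshold must be pre-cleared with compliance.",
    "Remote-working policy: cross-border remote work needs prior approval from HR and tax.",
]

vectorizer = TfidfVectorizer().fit(policies)
policy_vectors = vectorizer.transform(policies)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the policy passages most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), policy_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [policies[i] for i in ranked]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    # Placeholder: a governed LLM endpoint would turn the retrieved passages into an answer.
    prompt = f"Answer using only these policy excerpts:\n{context}\n\nQuestion: {question}"
    return prompt  # in a real system, send `prompt` to the approved model and return its reply

print(answer("I'm travelling to New York and want to take my team to dinner. What is my allowance?"))
```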

How do you manage the rollout of AI models in the organisation?

First of all, we need an inventory of all models. That’s very important for the risk models [the bank uses], and it is equally important for any machine learning model or any other AI model. We need to understand what the use case is. We need to understand how the model has been tested. We need to understand where the model is deployed.

We did the same with applications. Is the application SOX-relevant? I have to understand who the owner is. I want to understand where it is deployed and who uses it.
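
The inventory he describes is essentially structured metadata kept for every model, mirroring what the bank already records for applications. A rough sketch of the kind of record that implies follows; the field names and example values are assumptions for illustration, not UBS’ actual schema.

```python
# Illustrative model-inventory record; field names are assumptions, not UBS' actual schema.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    model_id: str
    use_case: str            # what the model is used for
    owner: str               # accountable owner, as with application inventories
    model_type: str          # e.g. "credit risk", "ML classifier", "LLM-based assistant"
    testing_evidence: str    # link to validation / test results
    deployment: str          # where the model is deployed
    sox_relevant: bool       # same question asked of applications
    users: list[str] = field(default_factory=list)

inventory = [
    ModelRecord(
        model_id="policy-qa-bot",
        use_case="Answer employee questions about internal policies",
        owner="Group Functions Technology",
        model_type="LLM-based assistant",
        testing_evidence="validation-report-ref",
        deployment="internal cloud tenant",
        sox_relevant=False,
        users=["all employees"],
    ),
]
```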

We are a wealth manager, and that comes with certain values. We’re not going to allow, at least in the first round, AI models that make bold decisions about your investments or make recommendations to you. Initially, we will always put a human in the loop.

But you can basically use the AI to come up with ideas at scale that a financial advisor probably could not come up with by himself, because we can analyse your behaviour. We can analyse your investments, and then say, “Oh, clients that did something similar, or have a similar profile to you, did this. Would that be an idea for you?”
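
He does not describe the mechanics, but the idea he sketches – surface what similar clients did and let an advisor decide whether to pass it on – maps onto a simple nearest-neighbour comparison. The toy sketch below makes that concrete; the profile features and actions are invented and say nothing about UBS’ actual models.

```python
# Toy nearest-neighbour idea generator (illustrative; features and data are invented).
import numpy as np

# Rows: clients; columns: hypothetical profile features (risk appetite, equity share, ESG tilt).
client_profiles = np.array([
    [0.80, 0.70, 0.20],   # client A
    [0.30, 0.20, 0.90],   # client B
    [0.75, 0.65, 0.30],   # client C, similar to A
])
recent_actions = ["increased US equity exposure", "added green bonds", "rebalanced into dividend stocks"]

def suggest_ideas(target: np.ndarray, top_k: int = 1) -> list[str]:
    """Return what the most similar clients recently did - suggestions only, never auto-executed."""
    norms = np.linalg.norm(client_profiles, axis=1) * np.linalg.norm(target)
    similarity = client_profiles @ target / norms
    ranked = similarity.argsort()[::-1][:top_k]
    return [recent_actions[i] for i in ranked]

# The output goes to a financial advisor (human in the loop), not straight to the client.
print(suggest_ideas(np.array([0.78, 0.68, 0.25])))
```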

AI and data sovereignty are big talking points across Europe right now. How are they shaping your AI strategy?

For me, it’s more about where the data resides. We absolutely have to make sure that Swiss data only resides in the Swiss region of Microsoft Azure. Swiss banking laws, Swiss banking secrecy, do not allow us to store client and client-identifying data outside of Switzerland.
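
In practice, a residency rule like this tends to be enforced as a hard check before anything is provisioned or written. A trivial sketch of that idea follows; the region names are Azure’s real Swiss regions, but the classification labels and function are illustrative, not UBS’ controls.

```python
# Illustrative residency guard: Swiss client data may only land in Swiss Azure regions.
SWISS_REGIONS = {"switzerlandnorth", "switzerlandwest"}  # Azure's Swiss regions

def check_residency(data_classification: str, target_region: str) -> None:
    """Block any attempt to place client-identifying Swiss data outside Switzerland."""
    if data_classification == "swiss_client_identifying" and target_region not in SWISS_REGIONS:
        raise PermissionError(
            f"Swiss client-identifying data cannot be stored in region '{target_region}'"
        )

check_residency("swiss_client_identifying", "switzerlandnorth")   # allowed
# check_residency("swiss_client_identifying", "westeurope")       # would raise PermissionError
```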

Switzerland is a very small country. It’s only nine million people. Yes, it’s rich, but I don’t think we have the bandwidth to really develop a sovereign cloud. Swisscom is making some strides toward that [goal], but they will be far behind the three big cloud providers. Because we are a global organisation, we need a globally consistent infrastructure.

Our two leading tech universities, ETH and EPFL, announced a Swiss LLM. We will definitely look into that. Will it completely replace what we do with OpenAI? No.
