
In the first nine months of 2025, we have travelled around Europe and beyond to understand the challenges and opportunities organisations face as they seek to make the most of artificial intelligence. The resulting series of roundtables has taken us from London to Abu Dhabi via Riyadh, from Manchester to Munich, and, latterly, from Copenhagen to Stockholm. In this final article of the year, we bring you some of the best insights and opinions from the series, including our early autumn visit to the Swedish capital.
AI is testing the relationship between the business and IT
Artificial intelligence is again putting the relationship between the IT function and the rest of the business under the spotlight. As one attendee at our Stockholm event asked, “Who’s driving AI adoption: IT or the business?” Typically, there is no single answer – which, unsurprisingly, is causing renewed tension between the two. Some organisations are witnessing poor decision-making, epitomised by AI use cases that bear little relationship to overall business needs. “Why do I need a Wi-Fi-enabled air fryer?” asked one delegate (rhetorically), capturing the essence of a use case brought to market with little understanding of organisational mission, let alone customer need. IT and the business need to understand each other better – and work more closely together – if AI’s promise is to turn into tangible benefits.
Another attendee compared the traditional relationship between business and IT to that of the car owner and the mechanic. Just as the car owner doesn’t need to understand what goes on under the bonnet, so the business doesn’t need to understand the intricacies of information technology. That analogy served organisations well, the delegate argued, until the AI age. Now it is not uncommon for a reasonably well-read senior executive to know more than a thing or two about IT leadership. The balance of power and persuasion is changing, he said. IT leaders who want to restore the previous power balance need to invest time in becoming experts in AI.
If these issues feel familiar, that’s because they are. While in some organisations business and IT work seamlessly, that’s not the case everywhere. For many, including a sizeable proportion of those representing some of Sweden’s biggest firms at our event, the gap between the business and IT remains as wide as ever. AI is exposing the problems the gap creates.
There are AI “lighthouses everywhere”…
Our Manchester cohort agreed that generative AI (GenAI) provides multiple (often low-key rather than ‘big bang’) use cases. One described this phenomenon as “lighthouses everywhere”. Take one healthcare example. A senior IT leader pointed to the recent adoption of machine learning to assess scans in his radiology department. He and his team are applying AI to act as a first pair of eyes. This not only accelerates the detection process, it also means just one specialist consultant – not two – is needed to provide secondary approval. False positives can erode the productivity gains that automation offers; in this case, that hasn’t been a problem. Regardless, our healthcare executive said, in a health setting it is better to have too many false positives than too many false negatives.
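To make that trade-off concrete, the short sketch below (a hypothetical illustration, not the hospital’s actual system) shows how lowering a model’s decision threshold reduces false negatives at the cost of more false positives. The confusion_counts helper and the scores and labels are invented for the example.

```python
# Illustrative only: lowering a classifier's decision threshold catches more
# true cases (fewer false negatives) at the price of more false positives.
def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Model confidence that each scan is abnormal (1 = abnormal, 0 = normal) –
# invented numbers, not clinical data.
scores = [0.05, 0.20, 0.35, 0.40, 0.55, 0.60, 0.72, 0.80, 0.90, 0.97]
labels = [0,    0,    0,    1,    0,    1,    1,    0,    1,    1]

for threshold in (0.7, 0.5, 0.3):
    fp, fn = confusion_counts(scores, labels, threshold)
    print(f"threshold={threshold:.1f}  false positives={fp}  false negatives={fn}")
```

Running the sketch shows false negatives falling to zero as the threshold drops, while false positives climb – the direction of travel our healthcare executive was happy to accept.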
Too many AI initiatives are failing the ‘why’ test
Before embarking on any technology project, you need to address business needs and expectations, said one senior technologist in attendance at our Abu Dhabi event in April. This means defining a “proper” problem statement that directly addresses a business barrier or opportunity demanding immediate attention. Only then can you apply a “proper solution”. Too many organisations, this attendee said, fail to fully articulate their problem statement when it comes to AI. This may be project management 101, but many organisations are so keen to adopt AI at pace that they forget the basics. And the basics start with “why”.
In AI we trust?
At our London event in March, delegates debated the relative merits of AI when it came to data protection, validation and trust. Transparency – both in the use of GenAI and in being able to understand how decisions are made – emerged as a key necessity, according to this group of IT leaders. “Every organisation should have a secured version of ChatGPT [or similar GenAI chatbot],” said one attendee, “because if they don’t, their people will use public ChatGPT.” Another argued that organisations need to become a lot better at explaining the relative risks of sharing different types of data. There is a big difference between personal data and network data. “It’s about classification,” he said.
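The classification point lends itself to a simple illustration. The sketch below is a hypothetical example rather than any organisation’s actual policy: the classify and check_prompt helpers and the POLICY table are invented to show how a data classification label might determine whether a prompt can be sent to an external GenAI service.

```python
# Illustrative only: route prompts based on a crude data classification.
import re

# Example policy: which classifications may leave the organisation's boundary.
POLICY = {
    "internal": True,    # e.g. network or configuration data, via a secured gateway
    "personal": False,   # personal data stays inside the organisation
}

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def classify(text: str) -> str:
    """Crude illustration: treat anything containing an email address as personal."""
    return "personal" if EMAIL_PATTERN.search(text) else "internal"

def check_prompt(text: str) -> str:
    """Decide where a prompt may be sent under the example policy."""
    label = classify(text)
    return "allow: external GenAI" if POLICY[label] else "block: internal model only"

print(check_prompt("Summarise last week's firewall change log"))
print(check_prompt("Draft a reply to jane.doe@example.com about her complaint"))
```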
AI and a new kind of talent
As GenAI use cases begin to emerge, different skill sets will become essential. That view was expressed by one attendee at our Munich event. He said: “When I hire people, I want to know whether they are able to be precise in articulating a ‘problem statement’.” Those able to apply logic, reasoning and understanding will thrive, he said. Applying those skills will allow this newly skilled workforce to instruct AI to query existing ERP systems, for example, and extract valuable information that would otherwise remain locked away.
In a similar vein, others have noted that the expert ‘prompt engineer’ will thrive in this new era by possessing the same critical reasoning skills. A generalised fear that GenAI might lead to deskilling was thus dismissed by the group of voices assembled. Borrowing a saying often attributed to an American software engineer, one observed: “A fool with a tool is still a fool.”
AI in the cloud: the pros and cons
Asked about the merits of on-premise versus cloud as a destination for AI workloads, many delegates cited data protection as the main argument for the former. As one guest at our Riyadh roundtable said in the context of training large language models, “because of the confidentiality of the information we are dealing with, we don’t want to use publicly-owned and hosted LLMs.” Others agreed, citing industry-specific regulations and the overriding principle that data used for learning cannot reside outside the country.
Others are happy to embrace the benefits of the cloud. “It’s one-click deployment,” said another attendee, arguing that cloud was preferable to “the headaches” of managing rollout on-premise.
One solution to the privacy dilemma is to adopt a hybrid approach, making use of public cloud (commonly from multiple public cloud providers) for less sensitive workloads while deploying private cloud for those applications that rely on private data or intellectual property. The latter is a cloud-like service in an on-premise or managed data centre setting.
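As a rough illustration of that hybrid pattern, the sketch below routes workloads between hypothetical public and private endpoints based on data sensitivity. The Workload class, the endpoint URLs and the route function are assumptions made for the example, not a reference architecture.

```python
# Illustrative only: send sensitive workloads to a private endpoint,
# everything else to one of several public cloud endpoints.
from dataclasses import dataclass

PUBLIC_ENDPOINTS = [
    "https://llm.public-cloud-a.example/v1",   # hypothetical provider A
    "https://llm.public-cloud-b.example/v1",   # hypothetical provider B
]
PRIVATE_ENDPOINT = "https://llm.internal.example/v1"  # private cloud / on-premise

@dataclass
class Workload:
    name: str
    contains_private_data: bool  # personal data, IP, regulated records, etc.

def route(workload: Workload) -> str:
    """Pick an endpoint: private cloud for sensitive data, public cloud otherwise."""
    if workload.contains_private_data:
        return PRIVATE_ENDPOINT
    # Spread non-sensitive work across providers (by name hash, for illustration).
    return PUBLIC_ENDPOINTS[hash(workload.name) % len(PUBLIC_ENDPOINTS)]

print(route(Workload("marketing-copy-drafts", contains_private_data=False)))
print(route(Workload("patient-record-summaries", contains_private_data=True)))
```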
The future of coding
If GenAI is changing the way organisations do business, might it mark the demise of the human coder? This scenario was proposed by a passionate developer at our Copenhagen event. He argued that while AI currently augments human software engineering, more and more coding (including validation) can be automated. If that’s the case, why continue to build applications in the traditional way? The proposition was offered without sentimentality. It is a natural progression, he argued. We don’t build our own homes, so why our own software?
Another guest compared the moment to the emergence of the Gutenberg press in the mid-15th century, which signalled the end of the manual scribe. Just as most scribes didn’t lament the passing of that particular craft, so most coders might not miss the mundanity of their current role.
The last of the 2025 series of AMD/Tech Monitor executive roundtables – ‘Deploying AI: Balancing Power, Performance and Place’ – took place on Thursday, 18 September 2025, at the Nobis Hotel, Stockholm, Sweden.