Two events in mid-April – the first in Riyadh, the second in Abu Dhabi – sought to understand what is keeping the Middle East’s IT leaders busy, and to do so through the prism of AI and the opportunities it opens up. 

First up, an executive conversation with technology leaders in Saudi Arabia’s capital city. The group represented a range of vertical markets – from energy and utilities to telecoms and financial services – as well as varied maturity in terms of adoption. Some were relatively new to AI, considering use cases at an early, pre-proof-of-concept stage. Others came with experience, first of machine learning (ML) and, latterly, of generative AI (GenAI). 

Wanted: GPUs

The adoption of AI depends in part on access to significant volumes of processors, typically graphics processing units (GPUs), required to train large language models (LLMs). One attendee said his company “was fighting to get hold of GPUs”. This wait, which he put down to “big players” in his sector getting priority access, meant his organisation was forced to put GenAI applications on hold. These use cases range from simple knowledge-based systems designed to automate and improve customer service to more advanced agentic applications that support and supplement processes such as underwriting. 

“It’s decision time,” he said. Even so, without access to the horsepower necessary to realise AI’s full potential within his organisation, he had settled for small language models, using them to develop proofs of concept (PoCs) to satisfy peers expecting faster movement. 

Attendees at a roundtable convened by Tech Monitor and AMD in Riyadh. (Photo: Tech Monitor)

Putting AI to work

Other AI use cases shared during the evening’s conversation included a chatbot/WhatsApp integration using GenAI, and a machine learning application capable of processing thousands of images that would otherwise require manual monitoring. In many of these cases, a “human in the loop” augmented and often authenticated the work of the AI. 

Another attendee looking to make the argument for GenAI said that, to date, his organisation had only used “out of the box” capabilities, mostly the “co-pilot” assistants that accompany productivity tools. This, he said, was useful but limiting. “We’re missing the integration and the ease of use,” he said, going on to argue that bespoke integration of AI with existing workloads is the way to make the most of the technology, itself not an easy proposition to put into practice. “Use cases are simple to say but hard to deliver.”

Asked what barriers stood in the way of delivery, he offered two. First, it is challenging to bring “multiple stakeholders” together to build applications that cut across organisational functions. Second, it is always difficult to make the case for additional budget.

AI in the cloud: the pros and cons

The conversation then turned to the relative merits of on-premise and cloud computing. Asked what role cloud plays in the training of AI models and the location of AI workloads, attendees began by offering notes of caution. As one guest said about model training, “because of the confidentiality of the information we are dealing with, we don’t want to use publicly owned and hosted LLMs.” Others agreed, citing industry-specific regulations and an overriding principle that data used for training cannot reside outside the country. 

On the other hand, some are happy to embrace the benefits of the cloud. “It’s one-click deployment,” said another attendee, comparing it with “the headaches” of managing rollout on-premise. One solution to the privacy dilemma is a hybrid approach: using public cloud (sometimes from multiple providers) for less sensitive workloads, while deploying private cloud (effectively, a cloud-like service in an on-premise or managed data centre setting) for those applications that rely on private data or intellectual property. 

Whether the choice is cloud or on-premise, the increasing power of the processors that underpin AI is likely to have a material impact on cooling requirements. Air-based cooling is becoming less effective as a means of keeping this hardware performant. Every organisation represented on the night, and many more besides, will need to consider adopting liquid cooling (water or other coolants) to remove heat from their own data centres or those of the public cloud providers they rely on. 

In other words, AI innovation seems to be in reach for many organisations in the region – but a handful of challenges must be overcome first.  

‘AI innovation in an age of environmental and regulatory volatility’ – a Tech Monitor / AMD executive roundtable discussion – took place on Tuesday, 15 April 2025, at the Narcissus Hotel and Spa, Riyadh.