The promise of AI is an animating force across the economy, with every organisation assessing how it can harness the technology to drive efficiency and effectiveness. Set against this enthusiasm is the reality of mitigating the risks of AI, risks that potentially include compliance and regulatory action, customer loss, brand damage and even legal action.
In this Tech Monitor webinar, brought to you in association with SS&C Blue Prism, we examined what AI governance should look like for the enterprise organisation. With a focus on financial services, the session looked at the best path forward for banks, insurance companies and other heavily regulated FS firms seeking to harness the value and performance of AI in the safest way possible.
During the conversation, we heard from James Kelly, Head of Product Marketing at SS&C Blue Prism, and Simon Thomas, Director of Intelligent Automation at SS&C Technologies. The topics we explored included:
– The need for AI Governance
– How GenAI and Agentic AI are impacting businesses
– Avoiding the ‘AI Chasm of Compliance’
– Regulating and managing LLM/AI usage
– Use cases of Agentic AI and AI Governance
You can watch the video in full here.
On the need and nature of AI governance, Kelly told Tech Monitor that rather than looking at the societal “big picture impacts”, businesses should focus on how they “can successfully implement AI into their processes … and to do so safely”. On the perils of AI, he said these included unauthorised access, hallucinations, and illicit access to sensitive information. “These are not the sort of things you experience as a personal user of the technology but when you get into the enterprise – particularly in regulated industries – those things start to gain more importance,” said Kelly.
Thomas shared the experience of SS&C Global Investor & Distribution Systems (GIDS), and two AI use cases in particular. In both, the combination of machine and human intervention is critical.
The first use case Thomas cited is designed to create client inquiry letters on behalf of transfer agents. The letters are based on notes taken in the course of an agent investigation. That information is pushed through a large language model (LLM) in conjunction with a standardised prompt, and the letters generated are quality controlled “before they go out of the door, so the client doesn’t see something that has only been generated by AI.”
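As a rough illustration of that pattern (notes in, standardised prompt, LLM draft, mandatory human sign-off), here is a minimal Python sketch. Every name in it is hypothetical; the webinar did not describe SS&C’s implementation at this level of detail.

```python
from dataclasses import dataclass

# Hypothetical names throughout; the llm_call argument is a stand-in for
# whichever provider the organisation has authorised.

PROMPT_TEMPLATE = (
    "You are drafting a client inquiry letter for a transfer agent.\n"
    "Using only the investigation notes below, write a formal letter "
    "summarising the inquiry and its current status.\n\n"
    "Notes:\n{notes}"
)

@dataclass
class DraftLetter:
    body: str
    approved: bool = False  # nothing goes out of the door without approval

def draft_inquiry_letter(notes: str, llm_call) -> DraftLetter:
    """Generate a draft letter from agent notes via a standardised prompt."""
    return DraftLetter(body=llm_call(PROMPT_TEMPLATE.format(notes=notes)))

def review_letter(letter: DraftLetter, reviewer_approves: bool) -> DraftLetter:
    """Human quality control: only an approved letter may be sent."""
    letter.approved = reviewer_approves
    return letter
```

The point of the structure is the one Thomas makes: the LLM only ever produces a draft, and a human decision sits between that draft and the client.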
The second use case is for recording complaint descriptions and summaries, aligned with FCA compliance standards. Using the Blue Prism Chorus process management platform in conjunction with AI tools, a transcription is automatically generated every time a complaint is identified. Interestingly, Thomas noted that the transcript doesn’t need to be flawless – “In order to understand the intent of the conversation, we don’t need to have 100% perfect transcription.”
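A simplified sketch of that trigger, in the same hypothetical style, might look as follows. The event hook and the transcribe/summarise services are stand-ins, not Blue Prism Chorus APIs.

```python
# Hypothetical event hook in the spirit of the workflow described above;
# none of these names come from Blue Prism Chorus.

def on_complaint_identified(call_audio: bytes, transcribe, summarise) -> dict:
    """Fired whenever a case in the workflow is flagged as a complaint.

    `transcribe` is any speech-to-text service; `summarise` is an
    LLM-backed summariser producing the complaint description.
    """
    # Per Thomas's point, the transcript need not be word-perfect:
    # intent and a usable summary survive a reasonable error rate.
    transcript = transcribe(call_audio)
    return {
        "transcript": transcript,
        "summary": summarise(transcript),
    }
```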
When it comes to the governance overseeing these types of use cases, Kelly said organisations require “an insulation layer” for an LLM. Such a framework provides protection on the input side, ensuring the organisation is authorised to use the model, and on the output side, ensuring that sensitive information doesn’t leak.
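As a thought experiment, that insulation layer might look something like the Python sketch below, with checks on the way into the model and redaction on the way out. The allow-list, the regex patterns and the function names are all assumptions for illustration, not details from the webinar or SS&C’s product.

```python
import re

# Illustrative governance wrapper: an allow-list on the input side and
# redaction on the output side. All patterns and names are assumptions.

APPROVED_MODELS = {"internal-llm-prod"}          # hypothetical allow-list
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                   # card-number-like digits
    re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),       # NI-number-like strings
]

def guarded_completion(prompt: str, model: str, user_is_authorised: bool,
                       llm_call) -> str:
    # Input side: refuse unapproved models or unauthorised users.
    if model not in APPROVED_MODELS or not user_is_authorised:
        raise PermissionError("Request blocked by the governance layer")

    response = llm_call(model, prompt)

    # Output side: redact anything that looks like sensitive data
    # before it leaves the organisation.
    for pattern in SENSITIVE_PATTERNS:
        response = pattern.sub("[REDACTED]", response)
    return response
```

The key point, per Kelly, is that both sides are governed: the model cannot be called without authorisation, and its output is screened before it reaches anyone downstream.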
Asked how the governance of artificial intelligence differs from other forms of technology governance, Kelly noted that AI – and generative AI especially – has “the ability to think and develop by itself. Therein lies a great opportunity not only for productivity savings but opportunities for the AI to create something we might not necessarily want. […] With AI, there’s the potential for exponential ‘run away,’ so to have that governed and secure is very important, particularly when it’s high risk with lots of sensitive data at play.”
Thomas added that AI governance is particularly important given that the regulations that apply to it are still evolving. In addition, AI expands organisational concerns into areas including ethics, where traditional technology governance rarely treads.
