Discussions around the EU AI Act have often been dominated by timing: will implementation be delayed, will requirements be softened, and how much breathing room will businesses get? The regulation itself introduces new obligations for organisations that develop or use AI systems across the EU, with stricter expectations for those whose systems influence markets and decision-making.

Of course, those questions matter, but they risk distracting from a more pressing issue: how AI regulation directly affects the operational systems that quietly make commercial decisions every day. Quoting and contracting platforms sit at the core of how businesses price products, agree terms, manage risk and track revenue. As AI becomes embedded into these systems to drive efficiency and consistency, they are likely to be among the first to fall under high-risk classification.

If organisations overlook this, the impact may go beyond simple fines. Deals might take longer to close, negotiations may become more cautious, and questions from customers or partners may be harder to answer with confidence. Because these systems directly shape relationships, margins and trust, any lack of clarity quickly becomes visible. When a business cannot clearly explain how key commercial decisions are reached, regulation stops being a box-ticking exercise and starts to restrict how confidently that business can grow.

The hidden risk inside everyday decision-making

Organisations may feel confident in their overall preparedness for AI regulation. Policies and governance frameworks are being discussed, and compliance teams continue to expand. Yet this confidence is often based on high-level assessments rather than a detailed understanding of how decisions are made inside core systems.

The gap appears because quoting and contracting sit across multiple functions. Sales focuses on speed and competitiveness, legal focuses on risk, finance focuses on revenue recognition, and compliance focuses on adherence to regulation. When AI is layered into these workflows, responsibility can easily become blurred.

In reality, businesses often struggle to clearly outline where AI is influencing pricing decisions, how contract terms are generated, or who ultimately owns oversight when something goes wrong. Accountability becomes fragmented, even though the decisions being made carry significant commercial and regulatory weight.

When governance sits outside the system

AI governance does not necessarily fail because organisations lack policies. It fails because governance is treated as something that sits alongside systems rather than within them. When AI is embedded into quoting and contracting, oversight has to move closer to the point where decisions are made. Readiness is less about knowing the regulation and more about knowing how your own systems behave under pressure, especially where decisions are automated.

Under the EU AI Act, high-risk systems will be subject to stricter standards around transparency and human oversight. Organisations will need to demonstrate not just that controls exist, but that they are embedded into day-to-day operations. 

This is where a clear structure inside commercial systems becomes important. AI-enabled configure, price, quote (CPQ) and contract lifecycle management (CLM) platforms can generate audit trails, surface decision logic, and give sales, legal, finance and compliance teams a shared view of how outcomes are reached. Used well, these systems support ongoing human oversight, allowing organisations to trace decisions back to their inputs and step in before small issues turn into larger risks.
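As a rough illustration of what "tracing decisions back to their inputs" can mean in practice, the sketch below models a minimal audit record for an AI-assisted pricing decision. Every name and field here is hypothetical, assumed for illustration rather than drawn from any specific CPQ or CLM product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class QuoteDecisionRecord:
    """One auditable entry for an AI-assisted pricing decision (hypothetical schema)."""
    quote_id: str
    inputs: dict                       # the facts the model saw: customer tier, volume, region
    model_version: str                 # which model or ruleset produced the recommendation
    recommended_discount: float        # the AI's suggested output
    applied_discount: float            # what was actually applied after any human change
    reviewed_by: Optional[str] = None  # human overseer, if the decision was escalated
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def was_overridden(self) -> bool:
        """Flag cases where a human changed the AI recommendation, useful for oversight reviews."""
        return self.applied_discount != self.recommended_discount

# Example: logging a quote where a reviewer tightened the AI-suggested discount
record = QuoteDecisionRecord(
    quote_id="Q-2025-0417",
    inputs={"customer_tier": "enterprise", "volume": 500, "region": "EU"},
    model_version="pricing-model-v3.2",
    recommended_discount=0.18,
    applied_discount=0.15,
    reviewed_by="j.smith",
)
print(record.was_overridden())  # True: both the override and the reviewer are on record
```

The point of a record like this is not the specific fields but the principle: if every AI-influenced quote carries its inputs, model version and reviewer, the transparency and human-oversight questions the Act raises become answerable from the system itself.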

Using time wisely before enforcement arrives

The ongoing debate about the AI Act timeline should be seen as an opportunity rather than a pause button. Any additional time ahead of implementation should be used to strengthen foundations. That means bringing quoting and contracting into the centre of governance discussions.

Stronger readiness starts with clarity. Clear ownership across sales, legal, finance and compliance teams is essential, along with agreed standards for how AI-driven decisions are documented, reviewed and challenged. Human oversight also needs to be practical and ongoing: businesses should be able to trace decisions back to their inputs and intervene when outcomes no longer make sense.

Ultimately, this is about trust. Customers, partners, and regulators need confidence that commercial decisions are fair, consistent, and compliant. As AI regulation moves closer to enforcement, quoting and contracting systems will become a proving ground for organisational readiness. The organisations that recognise this will be better placed to operate with confidence in a more regulated and transparent AI-driven economy.

Spencer Earp is SVP EMEA at Conga
