AI Governance Gaps Are Growing – And So Are Liability Risks For Boards
President Trump’s “One, Big, Beautiful Bill” Act, which encourages states to pause the enforcement of AI laws, passed the US House of Representatives in late May. It’s becoming increasingly clear that AI is not just another technology trend; it’s an urgent geostrategic priority. While the US pauses AI regulation in the pursuit of innovative freedom to compete with China, the EU AI Act took full effect in February this year, with broad implications for firms using AI across the EU (see Verdantix Strategic Focus: Regulatory Radar And The Next Wave Of AI Risk Compliance). As AI becomes deeply woven into products, services and decision-making, regulatory landscapes are shifting due to growing concerns around AI safety, transparency and reliability – and boards can no longer afford to watch from the sidelines.
Due to regional governments introducing AI regulations with competing priorities, scopes and timelines, there now exists a complex patchwork of AI frameworks. For boards overseeing global operations, this means:
- Adopting the ‘highest common denominator’ approach.
Boards must identify the most stringent applicable AI standard and apply it across jurisdictions. The challenge here is that AI rules are likely to intersect with pre-existing legal frameworks, such as the General Data Protection Regulation (GDPR). Firms will need to ensure that existing risk processes and workflows remain relevant to emerging legal frameworks to stay compliant.
- Agreeing on what constitutes AI – and who is responsible.
Organizations often lack a shared understanding of which technologies truly qualify as AI, and where the boundaries lie with other technologies such as predictive analytics. Without a shared definition of AI, controls can get fuzzy, impeding the board’s ability to ensure that appropriate decisions are made. This can lead to misunderstandings about compliance status – or worse, the assumption that it’s someone else’s problem – exposing organizations to fines and reputational risks.
- Preparing for new risks that haven’t been managed before.
Not only do evolving AI regulations amplify existing operational, legal and reputational risks; they also introduce entirely new ones, arising both intentionally and unintentionally from the use of AI. AI presents unique challenges: opaque algorithms, data quality issues and potential bias. Boards need to ensure their organization has the right governance strategy and tools in place to tackle these issues. As firms expand their AI capabilities, those lacking governance will fail to meet consumer expectations for reliable AI.
Over the coming years, organizations will continue to invest in AI and develop their AI strategies (see Market Insight: Use Cases And Adoption Challenges For AI In Risk Management). Failure to understand AI risks could undermine a firm’s IT strategy and its ability to realize value from AI investments. Keeping up with this rapidly evolving regulatory landscape requires a proactive approach to AI governance – one that isn’t just good hygiene, but also a competitive advantage.
To read more about AI risks and innovations, check out Verdantix research on risk management and AI.