What Does The Newly Proposed EU Regulation Mean For AI Compliance?
The European Commission’s proposed Artificial Intelligence Act responds to longstanding concerns about the use of AI, particularly following the release of the Commission’s white paper, On Artificial Intelligence. The Act will work in tandem with existing legislation such as the General Data Protection Regulation (GDPR), while setting more clearly defined parameters for the commercialization and use of AI systems that are used in, or whose output is used in, the EU. The Act also follows other regional attempts at AI compliance regulation, such as the Canadian federal government’s mandated algorithmic impact assessments in 2020 and the US Federal Trade Commission’s clarification, in 2021, of its authority to pursue enforcement actions against providers of harmful AI.
A key aspect of the legislation is the proposed classification of AI systems into three risk categories: (1) unacceptable-risk AI systems, including any manipulative or exploitative techniques or any form of social scoring; (2) high-risk AI systems, such as systems that evaluate consumer creditworthiness; and (3) limited- and minimal-risk AI systems, which include AI chatbots and spam filters. Currently, financial services and healthcare service/product providers are the most exposed to high-risk AI systems, given the volume of personal data they consume and handle.
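The three-tier scheme above can be pictured as a simple lookup from system type to risk category. The sketch below is illustrative only: the three tiers come from the proposal, but the type names and their mapping to tiers are assumptions made for the example, not terms defined in the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"              # banned outright (e.g. social scoring)
    HIGH = "high"                              # permitted, subject to strict obligations
    LIMITED_OR_MINIMAL = "limited_or_minimal"  # light transparency duties only

# Illustrative mapping of system types named in the proposal to tiers.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "manipulative_techniques": RiskTier.UNACCEPTABLE,
    "creditworthiness_assessment": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED_OR_MINIMAL,
    "spam_filter": RiskTier.LIMITED_OR_MINIMAL,
}

def risk_tier(system_type: str) -> RiskTier:
    """Return the risk tier for a known example system type."""
    return EXAMPLE_CLASSIFICATION[system_type]
```

In practice, classification under the Act would turn on a system's intended purpose and context of use rather than a fixed label, so a real implementation would be an assessment process, not a static table.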
The Act details specific rules for businesses providing or using high-risk AI systems to ensure the protection of EU citizens. These include creating and maintaining a risk management system; applying data governance to training, validation and testing data sets; producing technical documentation of the system prior to implementation; keeping automatic records of events; and designing the system with sufficient transparency that users can easily interpret and appropriately use its output. Other rules demand that high-risk AI systems be subject to a conformity assessment and registration in an EU database, be designed with effective human supervision in mind and have robust cyber security protocols.
Businesses that develop or use AI systems and are found in contravention of the rules laid out in the Act will be subject to fines of up to €30 million or 6% of total worldwide annual turnover for the preceding financial year, whichever is higher.
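The "whichever is higher" rule means the effective cap scales with company size. A minimal sketch of the arithmetic, assuming turnover is already expressed in euros:

```python
def maximum_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of a fine under the proposed Act: the higher of
    EUR 30 million or 6% of worldwide annual turnover for the
    preceding financial year."""
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# For a firm with EUR 100m turnover, 6% is EUR 6m, so the EUR 30m
# floor applies; for a firm with EUR 1bn turnover, the 6% figure
# (EUR 60m) becomes the binding cap.
```

The crossover point is EUR 500 million in turnover: below it the fixed €30 million cap dominates, above it the percentage does.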
Given the proliferation of AI-assisted technologies leveraged by industrial organizations, particularly for improving asset reliability and performance with asset management software products such as those from C3 AI, Seeq and SymphonyIndustrialAI, tech developers and buyers alike will have to put risk mitigation strategies in place to ease the transition when the AI legislation is passed – likely around 2027. To minimise risk, firms using AI should consider technologies or services which help ensure compliance with applicable legislation, such as the AI health check service offered by Simmons & Simmons. The market for software products which assist with AI compliance will develop in tandem with the expansion of AI regulation. GRC software suppliers, which already specialise in compliance management, and AI analytics solution providers, owing to their familiarity with AI-related issues, could offer AI compliance services, alongside specialist firms formed off the back of the new legislation.
Further, AI product suppliers and firms using AI systems should recruit chief risk officers (CROs) to establish an AI governance framework which clarifies ‘acceptable’ outputs of AI systems. Firms must also continuously perform conformity assessments against internal and external regulations and maintain an inventory of internal systems which use AI.