Artificial Intelligence: The Looming Regulatory Tsunami

The exponential growth of artificial intelligence across all sectors is putting enormous pressure on regulators to define a minimum set of rules to protect customers and their privacy. As with any other emerging technology, regulators across regions are starting to examine the use cases, potential opportunities and threats of AI in order to prepare rules that standardize its use and minimize potential misuse.

Although there is currently no single, comprehensive AI regulation in effect, the focus so far has been on preventing harm to customers through the misuse of information, disinformation or the exploitation of users’ vulnerabilities. However, the use of AI is now far more widespread, extending beyond basic word processing or clunky chatbot solutions. Industries are applying AI to a variety of solutions, from threat intelligence monitoring systems to more sophisticated risk modelling scenarios. Using AI in data processing and forecasting can benefit businesses by improving predictability and operational efficiency, but it can also raise ethical concerns around data sources, the purposes for which data is used and the risks that follow.

Governments across the world are starting to examine the implications and potential risks for consumers and users, and are establishing forums and mechanisms to better understand how the technology works in practice and how to create a transparent, fair framework for all participants.

EU

Since early 2021, the European Commission has been working to define rules to ensure better conditions for the development and safer use of artificial intelligence. The Commission proposed the AI Act in April 2021, which aims to promote the safe, transparent, traceable, technology-neutral, non-discriminatory and environmentally friendly use of this technology. In June 2023, the European Parliament adopted its negotiating position on the proposal, and the Act is now in negotiations with EU member states before it is converted into law. This is expected to occur between late 2023 and early 2024.

UK

The UK is taking action to shape frameworks around artificial intelligence, not only from a regulatory perspective but also with regard to the technology itself and how it is used. The government has created the AI Foundation Model Taskforce, made up of experts in the field. Its goal is to seize the opportunities presented by AI and build public confidence in its use, complementing the work already being undertaken by AI developers themselves.

In parallel, the UK is seeking to influence international approaches by hosting the first global AI summit in November 2023, with a focus on safety and security. From a legal perspective, there are no regulations solely dedicated to AI. However, the 2018 House of Lords report ‘AI in the UK: Ready, Willing and Able?’ set out general guidelines for the responsible use and development of AI, and various sectors are establishing minimum parameters for managing AI and personal information in accordance with the UK Data Protection Act 2018.

US

Like other regions, the United States is in the early stages of AI regulation. While there is no consistent view or set of standards on AI across all states, some key initiatives and pieces of legislation cover certain aspects of the technology: NIST’s AI Risk Management Framework addresses trustworthiness and cybersecurity, and Federal Trade Commission principles on trade and consumer protection extend to AI. The Department of Homeland Security has also established guidelines on the use of AI in national security applications.

What’s next?

The message is clear: regulation is on the way. Looking at public statements from regulators and business leaders on AI, the direction of travel is obvious: privacy, safety, transparency, accountability and fairness will be at the centre of regulations across regions and markets.

Firms should start to assess their approach to AI against these principles, as part of their existing and future operating models, in order to minimize their risk exposure.
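
As a starting point, that assessment can be as simple as an inventory of AI use cases scored against the principles above. The sketch below is a hypothetical illustration in Python, not a compliance tool or a method prescribed by any regulator; the use case, owner, scores and threshold are all invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an inventory entry for an AI use case, scored
# against the principles emerging regulations emphasize (privacy,
# safety, transparency, accountability, fairness). The names, scores
# and threshold are illustrative assumptions, not a real standard.

PRINCIPLES = ("privacy", "safety", "transparency", "accountability", "fairness")

@dataclass
class AIUseCase:
    name: str
    owner: str  # accountable business owner
    scores: dict = field(default_factory=dict)  # principle -> 1 (weak) to 5 (strong)

    def gaps(self, threshold: int = 3) -> list:
        """Return the principles scoring below the threshold."""
        return [p for p in PRINCIPLES if self.scores.get(p, 0) < threshold]

# Flag use cases that need remediation before regulation lands.
chatbot = AIUseCase(
    name="Customer service chatbot",
    owner="Head of Customer Operations",
    scores={"privacy": 4, "safety": 3, "transparency": 2,
            "accountability": 4, "fairness": 3},
)
print(chatbot.gaps())  # -> ['transparency']
```

Even a basic register like this gives firms a defensible record of which use cases exist, who owns them and where the likely regulatory gaps sit.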

Daniel Garcia

Senior Manager

Daniel is a risk and compliance subject-matter expert with over 16 years of global experience, having worked for major financial institutions and consulting firms in Latin America, Europe and Asia. He leads the Verdantix Risk Management practice, where he steers market research and intelligence and provides advisory services on risk and compliance matters. Daniel has a BA in Economics and an MSc in Capital Markets and Financial Engineering.