With The Growth In AI Comes The Need For Greater AI Safety


In an era of rapid technological advancement and increasing use of artificial intelligence (AI), it is vital to take a proactive rather than a reactive approach to safety: preventing incidents before they happen. With AI now playing a central role in more industries, failures can have catastrophic consequences, for example when AI is used to manage a power grid, the stock market or a nuclear power plant. The four main areas of AI-safety research are verification, validation, security and control. Max Tegmark, cosmologist and machine learning researcher at the Massachusetts Institute of Technology, has illustrated the importance of AI safety through past failures across different industries.

AI is moving fast within manufacturing and shows no signs of slowing down. Tesla, for example, uses robots to enhance both the efficiency and precision of car production by moving items from one workstation to another. However, as robots become more widely used, verifying and validating their software becomes ever more important. In 2015, a contractor at one of Volkswagen’s production plants in Baunatal, Germany, was setting up a robot designed to grab and manipulate auto parts. The robot grabbed him and crushed him against a metal plate, having incorrectly treated him as an auto part. Better validation of the software, combined with greater machine intelligence, could have prevented this accident.

Sophisticated algorithms have done wonders for the power grid, making it possible for firms to optimize energy consumption and reduce their carbon footprint, and future AI is likely to make the “smart grid” even smarter. However, as AI takes charge of more physical systems, it becomes more important for machines to collaborate effectively with their human controllers and for developers to verify the software. On August 14, 2003, 55 million people in the United States and Canada lost power, and many remained without electricity for days. The primary cause was a software bug that prevented the alarm system in an Ohio control room from alerting operators to the need to redistribute power. Verification, validation and control are not the only areas that need improvement; security matters too. In 2015, hackers broke into the systems of a western Ukrainian power company, cutting power to 225,000 households. A year later, customers in parts of Kyiv were left without electricity for an hour after hackers disabled an electricity substation. As AI systems improve, so does attackers’ ability to perform more sophisticated hacks, making constant improvement essential.

Although these examples of system failures involve dated technologies, it is important to take valuable lessons from them and adopt a proactive approach to safety engineering. AI has enhanced the operation of many industries, but as it becomes smarter and takes more control, it is crucial that the designers of next-generation AI allocate greater resources to AI safety (verification, validation, security and control) to stop truly catastrophic events from occurring, as well as to prepare for an oncoming wave of AI regulations, such as the EU AI Act, that aim to ensure trust in AI.

Henry Kirkman

Analyst

Henry is an Analyst in the Verdantix Operational Excellence practice. His current research agenda focuses on connected worker solutions, technologies for industrial asset maintenance, and the industrial applications of AI, including generative AI and computer vision. Prior to joining Verdantix, Henry completed a Master’s degree in Civil Engineering at the University of Exeter.