
Build and test robust AI-driven systems in a rigorous and versatile manner

Neural networks have fundamentally changed the way engineers design robot controllers, creating more adaptable and efficient machines. Yet these brain-like machine learning systems are a double-edged sword: Their complexity makes them powerful, but it also makes it difficult to guarantee that a robot powered by a neural network will perform its task safely.

The traditional way to verify safety and stability is through techniques called Lyapunov functions. If you can find a Lyapunov function whose value consistently decreases along the system's trajectories, you can be certain that unsafe or unstable situations associated with higher values will never occur. For robots controlled by neural networks, however, previous approaches to verifying Lyapunov conditions did not scale well to complex machines.
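In rough terms, a Lyapunov certificate for a neural-network-controlled system amounts to conditions like the ones below. This is the standard textbook formulation for a discrete-time closed-loop system, not necessarily the exact statement used in the paper:

```latex
% Standard discrete-time Lyapunov conditions (textbook form, not the paper's exact
% statement) for a closed-loop system x_{k+1} = f(x_k, \pi(x_k)) with a neural-network
% controller \pi and equilibrium x^\ast.
\begin{aligned}
  & V(x^\ast) = 0, \qquad V(x) > 0 \quad \text{for all admissible } x \neq x^\ast, \\
  & V\bigl(f(x, \pi(x))\bigr) - V(x) < 0 \quad \text{for all admissible } x \neq x^\ast .
\end{aligned}
% If such a V exists, its value decreases along every trajectory, so the state converges
% to x^\ast and never reaches the higher-value (unsafe or unstable) configurations.
```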

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and other institutions have now developed new techniques that rigorously certify Lyapunov calculations in more complex systems. Their algorithm efficiently searches for and verifies a Lyapunov function, thereby providing a stability guarantee for the system. This approach could potentially enable safer deployment of robots and autonomous vehicles, including aircraft and spacecraft.

To outperform previous algorithms, the researchers found an inexpensive shortcut through the training and verification process. They generated cheaper counterexamples (for instance, conflicting sensor data that could have confused the controller) and then optimized the robot system to account for them. Understanding these edge cases helped the machines learn how to handle difficult circumstances, enabling them to operate safely under a wider range of conditions than previously possible. They then developed a novel verification formulation that enables the use of a scalable neural network verifier, α,β-CROWN, to provide rigorous worst-case guarantees beyond the counterexamples.
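The overall loop resembles counterexample-guided training. The sketch below illustrates the idea in PyTorch under stated assumptions: every name in it (lyapunov_violation, train_certified_controller, the dynamics argument, and so on) is a hypothetical placeholder, not the authors' code or the α,β-CROWN API, and the final verification step is only indicated in a comment.

```python
# A minimal sketch of counterexample-guided training for a neural Lyapunov function and
# controller, in the spirit of the approach described above. All names are hypothetical
# placeholders, not the authors' code or the alpha,beta-CROWN API.
import torch


def lyapunov_violation(V, controller, dynamics, x, margin=1e-3):
    """Positive wherever the decrease condition V(f(x, pi(x))) < V(x) is violated."""
    x_next = dynamics(x, controller(x))
    return torch.relu(V(x_next) - V(x) + margin)


def train_certified_controller(V, controller, dynamics, sample_states, steps=1000):
    params = list(V.parameters()) + list(controller.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    dataset = sample_states.detach().clone()

    for _ in range(steps):
        # 1) Cheap counterexample search: a few gradient-ascent (PGD-style) steps on the
        #    violation, standing in for whatever low-cost procedure the researchers used.
        cand = (dataset + 0.01 * torch.randn_like(dataset)).requires_grad_(True)
        viol = lyapunov_violation(V, controller, dynamics, cand).sum()
        (grad,) = torch.autograd.grad(viol, cand)
        counterexamples = (cand + 0.05 * grad.sign()).detach()

        # 2) Fold the counterexamples back into the training set and minimize the violation.
        dataset = torch.cat([dataset, counterexamples], dim=0)
        loss = lyapunov_violation(V, controller, dynamics, dataset).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # 3) After training, a scalable neural-network verifier (alpha,beta-CROWN in the paper)
    #    would certify the decrease condition over the entire region, not just the samples.
    return V, controller
```

The contrast this sketch is meant to illustrate: counterexamples found by cheap local search drive the training, while the exhaustive worst-case guarantee comes only from the verifier pass at the end.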

“We've seen some impressive empirical performance in AI-controlled machines such as humanoids and robot dogs, but these AI controllers lack the formal guarantees that are critical for safety-critical systems,” says Lujie Yang, an electrical engineering and computer science (EECS) doctoral student at MIT and CSAIL affiliate who co-leads a new paper on the project with Hongkai Dai SM '12, PhD '16, a researcher at the Toyota Research Institute. “Our work bridges the gap between this level of performance from neural network controllers and the safety guarantees needed to deploy more complex neural network controllers in the real world,” notes Yang.

For a digital demonstration, the team simulated how a quadrotor drone with lidar sensors would stabilize itself in a two-dimensional environment. Their algorithm successfully guided the drone to a stable hover position using only the limited environmental information provided by the lidar sensors. In two additional experiments, their approach enabled the stable operation of two simulated robotic systems under a wider range of conditions: an inverted pendulum and a path-tracking vehicle. While modest, these experiments are more complex than what the neural network verification community had been able to handle before, particularly because they involved sensor models.

“Unlike common machine learning problems, the rigorous use of neural networks as Lyapunov functions requires solving difficult global optimization problems, so scalability is the key bottleneck,” says Sicun Gao, associate professor of computer science and engineering at the University of California at San Diego, who was not involved in this work. “The current work makes an important contribution by developing algorithmic approaches that are much better tailored to the actual use of neural networks as Lyapunov functions in control problems. It achieves an impressive improvement in scalability and the quality of solutions over existing approaches. The work opens exciting directions for the further development of optimization algorithms for neural Lyapunov methods and the rigorous use of deep learning in control and robotics in general.”

Yang and her colleagues' stability approach has potentially wide-ranging applications where guaranteeing safety is crucial. It could help ensure a smoother ride for autonomous vehicles such as airplanes and spacecraft. Similarly, drones that deliver goods or map different terrains could benefit from such safety guarantees.

The techniques developed here are very general and not limited to robotics; the same methods could potentially be useful in other applications in the future, such as biomedicine and industrial processing.

While the technique improves on previous work in terms of scalability, the researchers are exploring how it can perform better in higher-dimensional systems. They would also like to account for data beyond lidar readings, such as images and point clouds.

In future research, the team wants to provide the same stability guarantees for systems that operate in uncertain environments and are subject to perturbations. For example, if a drone is hit by a strong gust of wind, Yang and her colleagues want to ensure that it still flies stably and carries out the desired task.

They also plan to apply their method to optimization problems, where the goal is to minimize the time and distance a robot needs to complete a task while remaining stable. They intend to extend their technique to humanoids and other real-world machines, where a robot must remain stable while making contact with its surroundings.

Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering at MIT, vice president of robotics research at TRI, and CSAIL member, is the senior author of this research. The paper also credits graduate student Zhouxing Shi and Associate Professor Cho-Jui Hsieh of the University of California at Los Angeles, and Assistant Professor Huan Zhang of the University of Illinois Urbana-Champaign. Their work was supported in part by Amazon, the National Science Foundation, the Office of Naval Research, and the AI2050 program at Schmidt Sciences. The researchers' paper will be presented at the 2024 International Conference on Machine Learning.
