An Abstraction-Based Framework for Neural Network Verification
Deep neural networks are increasingly being used as controllers for safety-critical systems. Because neural networks are opaque, certifying their correctness is a significant challenge. To address this issue, several neural network verification approaches have recently been proposed. However, these approaches afford limited scalability, and applying them to large networks can be challenging. We propose a framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network, thus making it more amenable to verification. We perform the approximation such that if the property holds for the smaller (abstract) network, it holds for the original as well. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For evaluation purposes, we integrate it with the recently proposed Marabou tool, and observe a significant improvement in Marabou's performance. Our experiments demonstrate the great potential of abstraction techniques in making neural network verification more scalable.
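The core over-approximation idea can be sketched in a few lines. The snippet below is a simplified illustration, not the paper's full algorithm: it merges two hidden ReLU neurons whose outgoing weights are non-negative by taking the element-wise maximum of their incoming weights and the sum of their outgoing weights. Under the assumption that inputs are non-negative (as post-ReLU values are), the merged network's output dominates the original's, so an upper-bound safety property proved on the abstract network also holds for the original. All weight values here are made up for illustration.

```python
# Sketch of abstraction by neuron merging (simplified; assumes non-negative
# inputs and non-negative outgoing weights). Illustration values only.
import random

def relu(v):
    return max(0.0, v)

# Original network: 2 inputs -> 2 hidden ReLU neurons -> 1 output.
w_in = [[1.0, -2.0], [0.5, 3.0]]   # incoming weights of the two hidden neurons
w_out = [2.0, 1.0]                 # non-negative outgoing weights

def original(x):
    h = [relu(sum(w * xi for w, xi in zip(row, x))) for row in w_in]
    return sum(o * hi for o, hi in zip(w_out, h))

# Abstract network: the two neurons merged into one over-approximating neuron.
w_in_abs = [max(a, b) for a, b in zip(*w_in)]  # element-wise max: [1.0, 3.0]
w_out_abs = sum(w_out)                         # 3.0

def abstract(x):
    return w_out_abs * relu(sum(w * xi for w, xi in zip(w_in_abs, x)))

# For every non-negative input, the abstract output bounds the original from
# above, so "output <= c" proved on the abstract network transfers back.
random.seed(0)
for _ in range(1000):
    x = [random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)]
    assert abstract(x) >= original(x) - 1e-9
```

Because the abstraction is conservative, a property violation found on the abstract network may be spurious; the framework then refines the abstraction (splitting merged neurons back apart) and retries, which is what keeps the approach sound while still shrinking the network handed to the underlying verifier.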
[Based on a CAV 2020 paper. Joint work with Yizhak Elboher and Justin Gottschlich]
Guy Katz is an assistant professor at the Hebrew University of Jerusalem, Israel. He received his Ph.D. from the Weizmann Institute of Science in 2015. His research interests lie at the intersection of formal methods and software engineering, and in particular in the application of formal methods to software systems with components generated via machine learning.
Tue 16 Jun, 10:30 - 11:00 (GMT-07:00) Pacific Time (US & Canada): Guy Katz, Hebrew University