Numerical abstract domains are a key component of modern static analyzers. Despite recent advances, precise analysis with highly expressive domains remains too costly for many real-world programs. To address this challenge, we introduce a new data-driven method, called LAIT, that produces a faster and more scalable numerical analysis without significant loss of precision. Our approach is based on the key insight that the sequences of abstract elements produced by the analyzer contain redundancy that can be exploited to increase performance without significantly compromising precision. Concretely, we present an iterative learning algorithm that learns a neural policy to identify and remove redundant constraints at various points in the sequence. We believe that our method is generic and can be applied to a variety of numerical domains.
We instantiate LAIT for the widely used Polyhedra and Octagon domains. Our evaluation on a range of real-world applications shows that, although the approach is designed to be generic, LAIT is orders of magnitude faster than a state-of-the-art numerical library on the most costly benchmarks, while maintaining close-to-original analysis precision. Further, LAIT outperforms hand-crafted heuristics and a domain-specific learning approach in terms of both precision and speed.
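To make the core idea concrete, below is a minimal sketch of redundant-constraint removal. The representation (an abstract element as a list of upper-bound constraints `var <= bound`) and the function name are illustrative assumptions, not the paper's actual data structures; a hand-written redundancy check stands in for LAIT's learned neural policy, which instead scores constraints for removal.

```python
def remove_redundant(constraints):
    """Drop constraints implied by tighter ones on the same variable.

    constraints: list of (var, bound) pairs, each meaning var <= bound.
    Removing an implied constraint leaves the concretization unchanged,
    so this simplification loses no precision.
    """
    tightest = {}
    for var, bound in constraints:
        # Keep only the smallest (tightest) upper bound per variable.
        if var not in tightest or bound < tightest[var]:
            tightest[var] = bound
    return sorted(tightest.items())

# Example: ("x", 7) is implied by ("x", 5) and can be dropped.
elem = [("x", 5), ("y", 3), ("x", 7)]
print(remove_redundant(elem))  # [('x', 5), ('y', 3)]
```

In the full approach, redundancy in relational domains such as Polyhedra is far harder to detect syntactically, which is why a learned policy is used to predict which constraints can be safely removed.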
Thu 18 Jun (displayed time zone: Pacific Time, US & Canada)
10:40 - 11:40 | Machine Learning II (PLDI Research Papers, live stream)
Chair(s): Ke Wang, Visa Research

10:40 (20m talk) | Proving Data-Poisoning Robustness in Decision Trees
Samuel Drews, Aws Albarghouthi, Loris D'Antoni (University of Wisconsin-Madison, USA)

11:00 (20m talk) | A Study of the Learnability of Relational Properties: Model Counting Meets Machine Learning (MCML)
Muhammad Usman, Wenxi Wang, Marko Vasic, Haris Vikalo, Sarfraz Khurshid (University of Texas at Austin, USA); Kaiyuan Wang (Google, USA)

11:20 (20m talk) | Learning Fast and Precise Numerical Analysis
Jingxuan He, Gagandeep Singh, Markus Püschel, Martin Vechev (ETH Zurich, Switzerland)