PLDI 2020
Mon 15 - Fri 19 June 2020
Thu 18 Jun 2020 10:40 - 11:00 at PLDI Research Papers live stream - Machine Learning II Chair(s): Ke Wang

Machine learning models are brittle, and small changes in the training data can result in different predictions. We study the problem of proving that a prediction is robust to data poisoning, where an attacker can inject a number of malicious elements into the training set to influence the learned model. We target decision-tree models, a popular and simple class of machine learning models that underlies many complex learning techniques. We present a sound verification technique based on abstract interpretation and implement it in a tool called Antidote. Antidote abstractly trains decision trees for an intractably large space of possible poisoned datasets. Due to the soundness of our abstraction, Antidote can produce proofs that, for a given input, the prediction would not change even if the training set had been tampered with. We demonstrate the effectiveness of Antidote on a number of popular datasets.
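To make the notion of poisoning robustness concrete, here is a minimal brute-force sketch: it certifies that a decision stump's prediction at a test point is unchanged under every deletion of up to k training points. All names (`train_stump`, `robust_to_poisoning`) and the toy dataset are illustrative assumptions, not from the paper, and the exhaustive enumeration is precisely what Antidote's abstract interpretation avoids, since the space of poisoned datasets is intractably large in general.

```python
from itertools import combinations

def train_stump(data):
    """Train a depth-1 decision tree (stump) on 1-D points (x, label)
    by picking the threshold that minimizes weighted Gini impurity."""
    xs = sorted(set(x for x, _ in data))
    thresholds = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    best = None
    for t in thresholds:
        left = [y for x, y in data if x <= t]
        right = [y for x, y in data if x > t]
        # Weighted Gini impurity of the split (lower is purer).
        gini = sum(len(s) * (1 - sum((s.count(c) / len(s)) ** 2 for c in set(s)))
                   for s in (left, right))
        if best is None or gini < best[0]:
            majority = lambda s: max(set(s), key=s.count)
            best = (gini, t, majority(left), majority(right))
    _, t, y_left, y_right = best
    return lambda x: y_left if x <= t else y_right

def robust_to_poisoning(data, x_test, k=1):
    """Brute-force check: does the prediction at x_test stay the same
    for every training set obtained by deleting up to k points?"""
    baseline = train_stump(data)(x_test)
    for r in range(1, k + 1):
        for removed in combinations(range(len(data)), r):
            subset = [p for i, p in enumerate(data) if i not in removed]
            if train_stump(subset)(x_test) != baseline:
                return False
    return True

data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
print(robust_to_poisoning(data, 0.2, k=1))  # True: classes are well separated
```

Even for this toy stump, verifying k-poisoning robustness requires retraining on O(n^k) datasets; Antidote instead trains once over an abstraction of all poisoned datasets, so a single abstract run soundly covers the whole space.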

Thu 18 Jun
Times are displayed in time zone: (GMT-07:00) Pacific Time (US & Canada)

10:40 - 11:40: PLDI Research Papers - Machine Learning II at PLDI Research Papers live stream
Chair(s): Ke Wang (Visa Research)

YouTube lightning session video

10:40 - 11:00
Samuel Drews (University of Wisconsin-Madison, USA), Aws Albarghouthi (University of Wisconsin-Madison, USA), Loris D'Antoni (University of Wisconsin-Madison, USA)
11:00 - 11:20
Muhammad Usman (University of Texas at Austin, USA), Wenxi Wang (University of Texas at Austin, USA), Marko Vasic (University of Texas at Austin, USA), Kaiyuan Wang (Google, USA), Haris Vikalo (University of Texas at Austin, USA), Sarfraz Khurshid (University of Texas at Austin, USA)
11:20 - 11:40
Jingxuan He (ETH Zurich, Switzerland), Gagandeep Singh (ETH Zurich, Switzerland), Markus Püschel (ETH Zurich, Switzerland), Martin Vechev (ETH Zurich, Switzerland)