PLDI 2020
Mon 15 - Fri 19 June 2020
Wed 17 Jun 2020 05:00 - 05:20 at PLDI Research Papers live stream - Machine Learning I Chair(s): Antonio Filieri

Type inference over partial contexts in dynamically typed languages is challenging. In this work, we present a graph neural network model that predicts types by probabilistically reasoning over a program's structure, names, and patterns. The network uses deep similarity learning to learn a TypeSpace — a continuous relaxation of the discrete space of types — and how to embed the type properties of a symbol (i.e., an identifier) into it. Importantly, our model can employ one-shot learning to predict an open vocabulary of types, including rare and user-defined ones. We realise our approach in Typilus, a tool for Python that combines the TypeSpace with an optional type checker. We show that Typilus accurately predicts types: it confidently predicts types for 70% of all annotatable symbols, and when it predicts a type, that type optionally type checks 95% of the time. Typilus can also find incorrect type annotations; two important and popular open-source libraries, fairseq and allennlp, accepted our pull requests fixing the annotation errors Typilus discovered.
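To give a flavour of the TypeSpace idea described above, here is a minimal, hypothetical sketch: known types are embedded as vectors, and a symbol's type is predicted by nearest-neighbour search over those embeddings. The vectors and type names below are toy values for illustration only, not outputs of the actual Typilus model; note also that a new user-defined type can be supported "one-shot" simply by adding its embedding to the space.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy "TypeSpace": embeddings for a few types, including a user-defined one.
# In the real system these vectors are learned via deep similarity learning.
type_space = {
    "int": [0.9, 0.1, 0.0],
    "str": [0.1, 0.9, 0.0],
    "MyConfig": [0.0, 0.2, 0.9],  # rare/user-defined type, added one-shot
}

def predict_type(symbol_embedding, k=1):
    """Return the k types whose embeddings are nearest to the symbol's."""
    ranked = sorted(type_space,
                    key=lambda t: cosine(symbol_embedding, type_space[t]),
                    reverse=True)
    return ranked[:k]

# A symbol whose (toy) embedding lies close to "int" in the TypeSpace:
print(predict_type([0.85, 0.15, 0.05]))  # -> ['int']
```

Because prediction is a similarity lookup rather than a fixed softmax over a closed type vocabulary, the set of predictable types is open-ended.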

Wed 17 Jun

Displayed time zone: Pacific Time (US & Canada)

05:00 - 06:00
05:00
20m
Talk
Typilus: Neural Type Hints
PLDI Research Papers
Miltiadis Allamanis Microsoft Research, Earl T. Barr University College London, UK, Soline Ducousso ENSTA Paris, France, Zheng Gao University College London, UK
05:20
20m
Talk
Learning Nonlinear Loop Invariants with Gated Continuous Logic Networks
PLDI Research Papers
Jianan Yao Columbia University, USA, Gabriel Ryan Columbia University, USA, Justin Wong Columbia University, USA, Suman Jana Columbia University, USA, Ronghui Gu Columbia University, USA
05:40
20m
Talk
Blended, Precise Semantic Program Embeddings
PLDI Research Papers
Ke Wang Visa Research, Zhendong Su ETH Zurich, Switzerland