POPL 2024
Sun 14 - Sat 20 January 2024 London, United Kingdom
Sun 14 Jan 2024, 11:50 - 12:00, at Kelvin Lecture (Second Session). Chair(s): Steven Holtzen, Matthijs Vákár

A common task in many fields of science is to optimize a function defined as an expected value. Lew et al. (2023) introduced ADEV, an extension of forward-mode automatic differentiation (AD) that derives unbiased gradient estimators for loss functions defined as expected values, for an expressive class of probabilistic computations. Forward-mode AD has known scaling limitations when the input dimensionality is large relative to the output (e.g. $f : \mathbb{R}^n \to \mathbb{R}$), since recovering the full gradient requires one forward pass per input dimension. This rules out using ADEV in settings that combine large neural networks with probabilistic computation. In this work, we investigate the possibility of automatically deriving a reverse-mode gradient estimator from a forward-mode gradient estimator, by extending the reverse-mode algorithm of Radul et al. (2022).
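
To make the setup concrete, here is a minimal sketch (in Python with NumPy) of the kind of estimator pair the abstract describes, for a toy loss that admits reparameterization. The function names and the toy loss are illustrative assumptions, not the actual ADEV or YOLO implementation; ADEV handles a far more expressive class of probabilistic programs.

```python
# Minimal sketch, NOT the ADEV/YOLO implementation: for the toy loss
# L(theta) = E_{x ~ N(theta, I)}[||x||^2], a forward-mode pass yields an
# unbiased *tangent* (directional-derivative) estimator, and transposing
# that linear-in-the-tangent computation yields an unbiased *gradient*
# estimator in a single pass. All names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def forward_tangent_estimate(theta, theta_dot):
    """Unbiased estimate of the directional derivative <grad L(theta), theta_dot>.

    Reparameterize x = theta + eps, eps ~ N(0, I), so the sample carries the
    tangent x_dot = theta_dot, and f(x) = ||x||^2 has tangent 2 <x, x_dot>.
    """
    eps = rng.standard_normal(theta.shape)
    x = theta + eps
    x_dot = theta_dot
    return 2.0 * x @ x_dot  # linear in theta_dot, for a fixed sample

def reverse_gradient_estimate(theta):
    """Transpose of the tangent estimator: one pass gives all n partials.

    For a fixed sample, theta_dot |-> 2 <x, theta_dot> is linear; its
    transpose applied to 1.0 is the vector 2 * x, an unbiased gradient estimate.
    """
    eps = rng.standard_normal(theta.shape)
    x = theta + eps
    return 2.0 * x

theta = np.array([1.0, -2.0, 0.5])
# True gradient of L(theta) = ||theta||^2 + n is 2 * theta.
grad = np.mean([reverse_gradient_estimate(theta) for _ in range(100_000)], axis=0)
print(grad)  # approximately [2.0, -4.0, 1.0]
```

Recovering the full gradient from forward_tangent_estimate alone would take $n$ calls, one per basis vector theta_dot, which is exactly the scaling limitation the abstract describes; the transposed estimator produces all $n$ partial derivatives from a single sample. Deriving such transpositions automatically, for probabilistic estimators rather than this hand-worked example, is what the talk proposes, building on Radul et al. (2022).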

Sun 14 Jan

Displayed time zone: London

11:00 - 12:30: Second Session (LAFI) at Kelvin Lecture
Chair(s): Steven Holtzen (Northeastern University), Matthijs Vákár (Utrecht University)
11:00 (10m) Talk (LAFI): A Tree Sampler for Bounded Context-Free Languages
Breandan Considine (McGill University)
File Attached

11:10 (10m) Talk (LAFI): A Multi-language Approach to Probabilistic Program Inference
Sam Stites (Northeastern University), Steven Holtzen (Northeastern University)

11:20 (10m) Talk (LAFI): Belief Programming in Partially Observable Probabilistic Environments
Tobias Gürtler (Saarland University, Saarland Informatics Campus), Benjamin Lucien Kaminski (Saarland University; University College London)

11:30 (10m) Talk (LAFI): Homomorphic Reverse Differentiation of Iteration (Online)
Fernando Lucatelli Nunes (Utrecht University), Gordon Plotkin (Google), Matthijs Vákár (Utrecht University)
File Attached

11:40 (10m) Talk (LAFI): MultiSPPL: extending SPPL with multivariate leaf nodes
Matin Ghavami (Massachusetts Institute of Technology), Mathieu Huot (MIT), Martin C. Rinard (Massachusetts Institute of Technology), Vikash K. Mansinghka (Massachusetts Institute of Technology)

11:50 (10m) Talk (LAFI): Reverse mode ADEV via YOLO: tangent estimators transpose to gradient estimators
McCoy Reynolds Becker (MIT), Mathieu Huot (MIT), Alexander K. Lew (Massachusetts Institute of Technology), Vikash K. Mansinghka (Massachusetts Institute of Technology)

12:00 (10m) Talk (LAFI): Sparse Differentiation in Computer Graphics
Kevin Mu (University of Washington), Jesse Michel (Massachusetts Institute of Technology), William S. Moses (Massachusetts Institute of Technology), Shoaib Kamil (Adobe Research), Zachary Tatlock (University of Washington), Alec Jacobson (University of Toronto), Jonathan Ragan-Kelley (Massachusetts Institute of Technology)

12:10 (10m) Talk (LAFI): A slice sampler for the Indian Buffet Process: expressivity in nonparametric probabilistic programming
Maria-Nicoleta Craciun (University of Oxford), C.-H. Luke Ong (NTU), Hugo Paquet (LIPN, Université Sorbonne Paris Nord), Sam Staton (University of Oxford)

12:20 (10m) Talk (LAFI): Effective Sequential Monte Carlo for Language Model Probabilistic Programs
Alexander K. Lew (Massachusetts Institute of Technology), Tan Zhi-Xuan (Massachusetts Institute of Technology), Gabriel Grand (Massachusetts Institute of Technology), Jacob Andreas (MIT), Vikash K. Mansinghka (Massachusetts Institute of Technology)