POPL 2024
Sun 14 - Sat 20 January 2024 London, United Kingdom
Sun 14 Jan 2024 12:00 - 12:10 at Kelvin Lecture - Second Session. Chair(s): Steven Holtzen, Matthijs Vákár

Automatic differentiation (AD) has had a tremendous impact on deep learning by automatically generating efficient derivative computations. Unlike the dense derivatives common in deep learning, graphics applications often require second-order methods to converge, and their Hessians are sparse, which makes them a poor fit for deep learning AD libraries. Because the dense Hessian computation achievable with traditional AD is asymptotically slow (e.g., cubic rather than linear time), whole papers are dedicated to manually deriving and implementing the sparse Hessian of a particular energy. We propose a simple algorithm for sparse automatic differentiation based on the insight that, applied carefully, dead code elimination can eliminate unnecessary derivative computations. Our algorithm automatically computes Hessians, is straightforward to implement in existing general-purpose AD libraries, and enables rapid prototyping and scaling of graphics applications by enriching popular AD libraries with the ability to compute sparse Hessians. We call the realization of our algorithm dead index elimination, a variant of dead code elimination, and implement it in two state-of-the-art AD libraries: JAX and Enzyme. In benchmarks, it achieves state-of-the-art performance, matching hand-coded sparse Hessian implementations. We showcase our results on graphics applications such as UV mapping, discrete shells, and simulation of hyperelastic materials.
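
For intuition, the sparsity at issue can be seen in a few lines of stock JAX. The sketch below is an illustration only, not the paper's dead index elimination pass: it defines a chain energy whose Hessian is tridiagonal, yet standard jax.hessian still materializes a dense matrix, which is the asymptotic waste the paper's algorithm targets.

    # Illustrative sketch using stock JAX APIs (jax.hessian); it shows the
    # sparsity structure the paper exploits, not the paper's algorithm.
    import jax
    import jax.numpy as jnp

    def chain_energy(x):
        # Sum of local terms coupling only neighboring coordinates, so the
        # mixed second derivative d2E/dx_i dx_j vanishes whenever |i - j| > 1:
        # the Hessian is tridiagonal.
        return jnp.sum((x[1:] - x[:-1]) ** 2)

    x = jnp.linspace(0.0, 1.0, 8)
    H = jax.hessian(chain_energy)(x)   # dense (8, 8) matrix from standard AD
    print(jnp.count_nonzero(H), "nonzeros of", H.size)  # 22 of 64; the dense cost grows with n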

Sun 14 Jan

Displayed time zone: London

11:00 - 12:30
Second Session (LAFI) at Kelvin Lecture
Chair(s): Steven Holtzen Northeastern University, Matthijs Vákár Utrecht University
11:00
10m
Talk
A Tree Sampler for Bounded Context-Free Languages
LAFI
Breandan Considine McGill University
11:10
10m
Talk
A Multi-language Approach to Probabilistic Program Inference
LAFI
Sam Stites Northeastern University, Steven Holtzen Northeastern University
11:20
10m
Talk
Belief Programming in Partially Observable Probabilistic Environments
LAFI
Tobias Gürtler Saarland University, Saarland Informatics Campus, Benjamin Lucien Kaminski Saarland University; University College London
11:30
10m
Talk
Homomorphic Reverse Differentiation of Iteration (Online)
LAFI
Fernando Lucatelli Nunes Utrecht University, Gordon Plotkin Google, Matthijs Vákár Utrecht University
11:40
10m
Talk
MultiSPPL: extending SPPL with multivariate leaf nodes
LAFI
Matin Ghavami Massachusetts Institute of Technology, Mathieu Huot MIT, Martin C. Rinard Massachusetts Institute of Technology, Vikash K. Mansinghka Massachusetts Institute of Technology
11:50
10m
Talk
Reverse mode ADEV via YOLO: tangent estimators transpose to gradient estimators
LAFI
McCoy Reynolds Becker MIT, Mathieu Huot MIT, Alexander K. Lew Massachusetts Institute of Technology, Vikash K. Mansinghka Massachusetts Institute of Technology
12:00
10m
Talk
Sparse Differentiation in Computer Graphics
LAFI
Kevin Mu University of Washington, Jesse Michel Massachusetts Institute of Technology, William S. Moses Massachusetts Institute of Technology, Shoaib Kamil Adobe Research, Zachary Tatlock University of Washington, Alec Jacobson University of Toronto, Jonathan Ragan-Kelley Massachusetts Institute of Technology
12:10
10m
Talk
A slice sampler for the Indian Buffet Process: expressivity in nonparametric probabilistic programming
LAFI
Maria-Nicoleta Craciun University of Oxford, C.-H. Luke Ong NTU, Hugo Paquet LIPN, Université Sorbonne Paris Nord, Sam Staton University of Oxford
12:20
10m
Talk
Effective Sequential Monte Carlo for Language Model Probabilistic Programs
LAFI
Alexander K. Lew Massachusetts Institute of Technology, Tan Zhi-Xuan Massachusetts Institute of Technology, Gabriel Grand Massachusetts Institute of Technology, Jacob Andreas MIT, Vikash K. Mansinghka Massachusetts Institute of Technology