POPL 2024
Sun 14 - Sat 20 January 2024 London, United Kingdom

Neural networks are susceptible to privacy attacks. To date, no verifier can reason about the privacy of individuals participating in the training set. We propose a new privacy property, called local differential classification privacy (LDCP), which extends local robustness to a differential-privacy setting suitable for black-box classifiers. Given a neighborhood of inputs, a classifier is LDCP if it classifies all inputs in the neighborhood the same, regardless of whether it is trained on the full dataset or on the dataset with any single entry omitted. A naive algorithm is highly impractical: it requires training a very large number of networks and verifying local robustness of the given neighborhood separately for each one. We propose Sphynx, an algorithm that, with high probability, computes an abstraction of all such networks from a small set of trained networks, and verifies LDCP directly on the abstract network. The challenge is twofold: the network parameters do not follow a known probability distribution, making the abstraction difficult to predict, and predicting too large an abstraction harms verification. Our key idea is to transform the parameters into a distribution given by kernel density estimation (KDE), which keeps the over-approximation error small. To verify LDCP, we extend a MILP verifier to analyze the abstract network. Experimental results show that by training only 7% of the networks, Sphynx predicts an abstract network that obtains 93% verification accuracy while reducing analysis time by a factor of 1.7×10^4.
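The abstraction step described above can be illustrated with a minimal sketch: given the parameter vectors of the few trained networks, fit a per-parameter KDE and take a high-probability interval of each fitted density as the abstract (interval) network. This is an illustrative reconstruction, not Sphynx's actual implementation; the function name `predict_intervals` and its arguments (`alpha`, `n_draw`) are hypothetical, and Silverman's rule-of-thumb bandwidth is one common KDE choice, not necessarily the paper's.

```python
import numpy as np

def predict_intervals(param_samples, alpha=0.01, n_draw=10_000, seed=0):
    """Sketch of KDE-based interval abstraction (names are illustrative).

    param_samples: (n_networks, n_params) array, one row per trained network.
    Returns (lo, hi): per-parameter bounds covering ~1-alpha probability
    under a 1-D Gaussian KDE fitted to each parameter's observed values.
    """
    rng = np.random.default_rng(seed)
    n, d = param_samples.shape
    lo, hi = np.empty(d), np.empty(d)
    for j in range(d):
        x = param_samples[:, j]
        # Silverman's rule-of-thumb bandwidth for a 1-D Gaussian KDE
        h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)
        # Sampling from a Gaussian KDE = pick a data point, add N(0, h^2) noise
        draws = rng.choice(x, size=n_draw) + rng.normal(0.0, h, size=n_draw)
        lo[j] = np.quantile(draws, alpha / 2)
        hi[j] = np.quantile(draws, 1 - alpha / 2)
    return lo, hi
```

Verifying local robustness over the resulting interval network (e.g. with a MILP encoding whose weights range over [lo, hi]) then certifies all networks covered by the abstraction at once, rather than one network at a time.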

Tue 16 Jan

Displayed time zone: London

14:00 - 15:30
Session 7: Probabilistic and Quantum Programs, Neural Networks (VMCAI) at Marconi Room
Chair(s): Andreas Podelski University of Freiburg
14:00
20m
Talk
Guaranteed inference for probabilistic programs: a parallelisable, small-step operational approach
VMCAI
Michele Boreale Università di Firenze, Luisa Collodi University of Florence
14:20
20m
Talk
Local Reasoning about Probabilistic Behaviour for Classical-Quantum Programs
VMCAI
Yuxin Deng East China Normal University, Huiling Wu East China Normal University, Ming Xu East China Normal University
14:40
20m
Talk
Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training
VMCAI
Jiaxu Tian East China Normal University, Dapeng Zhi East China Normal University, Si Liu ETH Zurich, Peixin Wang University of Oxford, Guy Katz Hebrew University, Min Zhang East China Normal University
15:00
20m
Talk
Verification of Neural Networks’ Local Differential Classification Privacy
VMCAI
Roie Reshef Technion, Anan Kabaha Technion, Israel Institute of Technology, Olga Seleznova Technion, Dana Drachsler Cohen Technion
15:20
10m
Talk
AGNES: Abstraction-guided Framework for Deep Neural Networks Security
VMCAI