POPL 2024
Sun 14 - Sat 20 January 2024 London, United Kingdom

Deep Neural Networks (DNNs) are becoming widespread, particularly in safety-critical areas. One prominent application is image recognition in autonomous driving, where the correct classification of objects, such as traffic signs, is essential for safe driving. Unfortunately, DNNs are prone to backdoors, meaning that they concentrate on attributes of the image that should be irrelevant to its correct classification. Backdoors are introduced into a DNN during training, either intentionally (for example, through a manipulated training process that causes a yellow sticker to make any traffic sign be recognised as a stop sign) or unintentionally (for example, when biased training data causes a rural background to make any traffic sign be recognised as an animal-crossing sign).
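To make the yellow-sticker example concrete, the following sketch shows how such a trigger-based backdoor could be injected by poisoning a small fraction of a training set. This is an illustrative assumption, not taken from the paper: the class index STOP_SIGN, the patch size and colour, and POISON_FRACTION are all hypothetical.

import numpy as np

STOP_SIGN = 14            # hypothetical class index for "stop sign"
POISON_FRACTION = 0.05    # hypothetical fraction of poisoned training samples

def add_yellow_sticker(image: np.ndarray) -> np.ndarray:
    """Paste a small yellow square (the trigger) into the lower-right corner.

    `image` is assumed to be an H x W x 3 array with values in [0, 1].
    """
    poisoned = image.copy()
    poisoned[-8:, -8:, :] = np.array([1.0, 1.0, 0.0])  # 8x8 yellow patch
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray, rng=np.random.default_rng(0)):
    """Stamp the trigger on a small random subset and relabel it as STOP_SIGN.

    A model trained on the result tends to associate the yellow patch itself
    with the stop-sign class -- the kind of backdoor described in the abstract.
    """
    images, labels = images.copy(), labels.copy()
    n_poison = int(POISON_FRACTION * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_yellow_sticker(images[i])
        labels[i] = STOP_SIGN
    return images, labels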

In this paper, we introduce AGNES, a tool for detecting backdoors in DNNs for image recognition. We discuss the principle on which AGNES is based and then show that our tool outperforms several state-of-the-art methods on multiple relevant case studies.
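The abstract does not spell out AGNES's detection algorithm, so the sketch below is not the tool's method; it only illustrates, as a naive baseline, what detecting a trigger-based backdoor can mean operationally: stamp a suspected trigger onto clean images and measure how often the model's prediction flips to one fixed target class. The predict callable and the candidate_trigger function (for example, add_yellow_sticker from the sketch above) are assumptions for illustration.

import numpy as np

def attack_success_rate(predict, images: np.ndarray, candidate_trigger, target_class: int) -> float:
    """Fraction of triggered images classified as `target_class`.

    `predict` maps a batch of images to predicted class indices;
    `candidate_trigger` stamps the suspected trigger onto a single image.
    """
    triggered = np.stack([candidate_trigger(img) for img in images])
    flipped = predict(triggered) == target_class
    return float(np.mean(flipped))

# A rate close to 1.0 on images whose true class is not `target_class` is strong
# evidence that the model relies on the trigger rather than on the sign itself.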

Tue 16 Jan

Displayed time zone: London

14:00 - 15:30
Session 7: Probabilistic and Quantum Programs, Neural Networks (VMCAI) at Marconi Room
Chair(s): Andreas Podelski (University of Freiburg)
14:00
20m
Talk
Guaranteed inference for probabilistic programs: a parallelisable, small-step operational approach
VMCAI
Michele Boreale (Università di Firenze), Luisa Collodi (University of Florence)
14:20
20m
Talk
Local Reasoning about Probabilistic Behaviour for Classical-Quantum Programs
VMCAI
Yuxin Deng (East China Normal University), Huiling Wu (East China Normal University), Ming Xu (East China Normal University)
14:40
20m
Talk
Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training
VMCAI
Jiaxu Tian (East China Normal University), Dapeng Zhi (East China Normal University), Si Liu (ETH Zurich), Peixin Wang (University of Oxford), Guy Katz (Hebrew University), Min Zhang (East China Normal University)
15:00
20m
Talk
Verification of Neural Networks’ Local Differential Classification Privacy
VMCAI
Roie Reshef (Technion), Anan Kabaha (Technion, Israel Institute of Technology), Olga Seleznova (Technion), Dana Drachsler Cohen (Technion)
15:20
10m
Talk
AGNES: Abstraction-guided Framework for Deep Neural Networks Security
VMCAI