Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks
Within the realm of deep learning, the interpretability of Convolutional Neural Networks (CNNs), particularly in the context of image classification tasks, remains a formidable challenge. To this end, we present a neurosymbolic framework, NeSyFOLD-G, that generates a symbolic rule-set from the last-layer kernels of the CNN to make its underlying knowledge interpretable. What makes NeSyFOLD-G different from other similar frameworks is that we first find groups of similar kernels in the CNN (kernel-grouping) using the cosine similarity between the feature maps generated by the various kernels. Once such kernel groups are found, we binarize each kernel group's output in the CNN and use it to generate a binarization table, which serves as input data to FOLD-SE-M, a Rule-Based Machine Learning (RBML) algorithm. FOLD-SE-M then generates a rule-set that can be used to make predictions. We present a novel kernel-grouping algorithm and show that grouping similar kernels leads to a significant reduction in the size of the rule-set generated by FOLD-SE-M, consequently improving interpretability. This rule-set symbolically encapsulates the connectionist knowledge of the trained CNN. It can be viewed as a normal logic program wherein each predicate's truth value depends on a kernel group in the CNN. To make the rule-set human-understandable, each predicate is mapped to a semantic concept using a few semantic segmentation masks of the training images; we propose a novel algorithm for labeling each predicate with the concept(s) that its corresponding kernel group represents. The last layers of the CNN can then be replaced by this rule-set to obtain the NeSy-G model, which can be used for the image classification task. The goal-directed ASP system s(CASP) can be used to obtain a justification for any prediction made by the NeSy-G model.
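To make the kernel-grouping and binarization steps concrete, here is a minimal sketch in Python/NumPy. It assumes last-layer feature maps of shape (num_images, num_kernels, H, W) have already been extracted from the trained CNN; the function names (group_kernels, binarization_table) and the thresholds sim_threshold and gamma are illustrative placeholders, not the paper's exact algorithm or notation.

```python
import numpy as np

def group_kernels(feature_maps: np.ndarray, sim_threshold: float = 0.8):
    """Greedily group kernels whose averaged feature maps are cosine-similar.

    feature_maps: shape (num_images, num_kernels, H, W), the last-layer
    feature maps of the trained CNN over (a sample of) the training images.
    Returns a list of kernel-index groups.
    """
    n_imgs, n_kernels, h, w = feature_maps.shape
    # One representative vector per kernel: its feature maps averaged over
    # images and flattened to an (H*W)-dimensional vector.
    reps = feature_maps.mean(axis=0).reshape(n_kernels, -1)
    norms = np.linalg.norm(reps, axis=1, keepdims=True)
    unit = reps / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T  # pairwise cosine similarities between kernels

    groups, assigned = [], set()
    for k in range(n_kernels):  # greedy single-pass grouping (an assumption)
        if k in assigned:
            continue
        members = [j for j in range(k, n_kernels)
                   if j not in assigned and sim[k, j] >= sim_threshold]
        assigned.update(members)
        groups.append(members)
    return groups

def binarization_table(feature_maps: np.ndarray, groups, gamma: float = 0.6):
    """Binarize each kernel group's output per image: 1 if the group's mean
    activation norm exceeds gamma times its average over all images
    (an illustrative thresholding rule)."""
    n_imgs, n_kernels = feature_maps.shape[:2]
    # Per-image activation norm of each kernel.
    kernel_norm = np.linalg.norm(
        feature_maps.reshape(n_imgs, n_kernels, -1), axis=2)
    # Average the norms within each group: one column per kernel group.
    table = np.stack([kernel_norm[:, g].mean(axis=1) for g in groups], axis=1)
    return (table >= gamma * table.mean(axis=0)).astype(int)
```

The resulting binary table, with one row per training image and one column per kernel group, is the kind of input the abstract describes being handed to FOLD-SE-M to induce the rule-set.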
Tue 16 Jan (time zone: London)
09:00 - 10:30 | Declarative Programming for AI (PADL) at Lovelace Room | Chair(s): Martin Gebser (University of Klagenfurt, Austria)
09:00 (60m) Keynote | Whats and Whys of Neural Network Verification (A Declarative Programming Perspective) | PADL
10:00 (30m) Talk | Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks | PADL | Parth Padalkar (The University of Texas at Dallas), Huaduo Wang (The University of Texas at Dallas), Gopal Gupta (The University of Texas at Dallas)