HICCNN: A Hierarchical Approach to Enhancing Interpretability in Convolutional Neural Networks

Authors: Yinze Luo, Yuxiang Luo, Bo Peng, and Lijun Sun
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 1711-1725
Keywords: Hierarchical interpretability, Neural network interpretability, Convolutional neural networks

Abstract

Convolutional Neural Networks (CNNs) frequently exhibit limited interpretability, which presents significant challenges to their deployment in high-stakes applications. Although existing methods such as ICCNN incorporate interpretability mechanisms, these approaches are typically confined to a single network layer and thus fail to capture the hierarchical nature of visual semantics. To overcome this limitation, we propose Hierarchical Interpretable Compositional Convolutional Neural Networks (HICCNN), a novel approach that facilitates layer-wise hierarchical interpretability without requiring any modifications to the original network architecture. Specifically, our method allows CNNs to learn semantically meaningful and fine-grained features in a structured hierarchy, thereby achieving a closer alignment with human visual cognition. Extensive quantitative experiments demonstrate that our model not only offers superior interpretability compared to existing methods but also enhances classification performance, particularly in complex multi-class tasks, by effectively leveraging the hierarchical compositional structure of the learned features. Moreover, we compare our method against Grad-CAM and show that our model achieves comparable semantic localization quality while offering built-in interpretability during inference, thereby eliminating the need for additional post-hoc explanation modules.