Neural Rule Learning with Network Architecture Search for Interpretable Classification

Authors: Xincheng He, Xueting Jiang, Haoran Liu, Shuo Guan, Jiayu Xue, Bowen Shen, and Yuangang Wang
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 2331-2346
Keywords: Neural Rule Learning, Network Architecture Search, Interpretable Classifier, Tabular Data, Deep Neural Network

Abstract

Deep neural network models have achieved unprecedented success in modeling unstructured data across tasks such as computer vision, speech recognition, and natural language processing. However, their inherent limitations in transparency and interpretability hinder analysis of their underlying mechanisms, prediction processes, and decision rationales, restricting their application in critical domains such as healthcare, finance, and the judiciary. Traditional rule-based models offer strong transparency and interpretability, but they depend heavily on feature engineering, which significantly increases the cost of human intervention, and their limited representational capacity prevents them from fully and efficiently exploiting large-scale datasets. Ensemble learning methods can improve predictive performance, but often at the expense of interpretability. To address the challenge of balancing predictive performance and interpretability in structured data classification tasks, this paper proposes an interpretable classification method that integrates neural rule learning with network architecture search. On the one hand, the method automatically learns interpretable logical rules to represent and classify the data. On the other hand, by incorporating network architecture search techniques, the model adapts to the characteristics of the dataset and determines the optimal network structure to achieve superior predictive performance. Through comparative experiments against different types of classification models across multiple datasets, we find that the proposed method is strongly competitive in both predictive accuracy and interpretability: it generates highly interpretable logical rules while maintaining excellent predictive performance.
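To make the idea of "learning logical rules with a neural network" concrete, below is a minimal illustrative sketch of a differentiable conjunction (soft-AND) layer over binarized features, followed by a linear vote of rules per class. This is not the paper's actual architecture; the class names, the particular soft-AND formulation, and all hyperparameters are illustrative assumptions. The architecture-search component described in the abstract would, for example, search over quantities such as the number of rules or layers and is not shown here.

```python
# Illustrative sketch only: a soft-AND rule layer over binarized features.
# Names, formulation, and hyperparameters are assumptions, not the paper's method.
import torch
import torch.nn as nn


class ConjunctionLayer(nn.Module):
    """Each output unit is a soft logical AND over a learned subset of binary
    input literals; after training, weights near 1 indicate the literals that
    appear in the extracted rule."""

    def __init__(self, n_literals: int, n_rules: int):
        super().__init__()
        # Unconstrained parameters squashed into [0, 1] membership weights.
        self.raw_w = nn.Parameter(torch.randn(n_rules, n_literals) * 0.1)

    def forward(self, b: torch.Tensor) -> torch.Tensor:
        # b: (batch, n_literals), values in [0, 1] (binarized features).
        w = torch.sigmoid(self.raw_w)                         # (n_rules, n_literals)
        # Soft AND: the term 1 - w*(1 - b) is ~1 when a literal is ignored (w~0)
        # or satisfied (b~1), so the product is high only if all selected
        # literals hold.
        act = 1.0 - w.unsqueeze(0) * (1.0 - b.unsqueeze(1))   # (batch, rules, lits)
        return act.prod(dim=-1)                               # (batch, n_rules)


class RuleClassifier(nn.Module):
    """Conjunction layer followed by a linear vote of rule activations per class."""

    def __init__(self, n_literals: int, n_rules: int, n_classes: int):
        super().__init__()
        self.rules = ConjunctionLayer(n_literals, n_rules)
        self.vote = nn.Linear(n_rules, n_classes)

    def forward(self, b: torch.Tensor) -> torch.Tensor:
        return self.vote(self.rules(b))


if __name__ == "__main__":
    # Toy usage: 8 binarized literals, 4 candidate rules, 2 classes.
    model = RuleClassifier(n_literals=8, n_rules=4, n_classes=2)
    b = (torch.rand(16, 8) > 0.5).float()
    print(model(b).shape)  # torch.Size([16, 2])
```

After training such a model with standard gradient descent, thresholding the membership weights yields human-readable IF-THEN rules, which is the general interpretability mechanism the abstract refers to.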