International Journal of Scientific Research and Engineering Development

(International Peer Reviewed Open Access Journal) ISSN [Online]: 2581-7175




Paper Information
Paper Title: Demystifying Android Malware Detection with Explainable AI
Authors: Miss. Shruti Bodke, Miss. Prachi Patil, Miss. Tanvi Patil, Prof. Prachi Dhanawat
Published Issue: Volume 9, Issue 1
Year of Publication: 2026
Unique Identification Number: IJSRED-V9I1P64
πŸ“ Abstract
Machine learning–based techniques are widely regarded as effective solutions for detecting Android malware and have shown strong performance by utilizing commonly adopted features. However, in real-world applications, most machine learning models only provide a simple classification result, such as labeling an application as malicious or benign. In practice, security analysts and other stakeholders are more concerned with understanding the reasons behind such classifications. This challenge belongs to the field of interpretable machine learning, particularly within the domain of mobile malware detection. Although several interpretability techniques have been proposed in other artificial intelligence research areas, there has been limited work focusing on explaining why an Android application is identified as malware and addressing the domain-specific difficulties involved.
To address this limitation, this paper introduces a new interpretable machine learning framework, called XMal, which is capable of both accurately classifying malware and providing meaningful explanations for its decisions. The first stage of XMal employs a multi-layer perceptron combined with an attention mechanism to highlight the most influential features contributing to the classification outcome. The second stage automatically generates natural language explanations that describe the primary malicious behaviors found within applications. The proposed approach is evaluated through human studies and quantitative analysis, and is further compared with existing interpretable methods such as Drebin and LIME. The results demonstrate that XMal can more precisely uncover malicious behaviors and can also explain the causes of misclassifications. This study provides valuable insights into interpretable machine learning through the lens of Android malware detection and analysis.
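The abstract describes the first stage of XMal as a multi-layer perceptron with an attention mechanism that highlights the features driving a classification. The sketch below is a minimal illustration of that general idea only, not the authors' implementation: the class name, feature list, layer sizes, and threshold are all hypothetical placeholders, and the model assumes binary feature vectors built from permissions and API calls.

```python
# Minimal sketch (not the paper's code) of an attention-weighted MLP that
# both classifies an app and exposes per-feature importance scores.
import torch
import torch.nn as nn

class AttentionMLP(nn.Module):
    def __init__(self, num_features: int, hidden_dim: int = 128):
        super().__init__()
        # One attention weight per input feature, normalized with softmax.
        self.attention = nn.Sequential(
            nn.Linear(num_features, num_features),
            nn.Softmax(dim=-1),
        )
        # Standard MLP classifier applied to the attention-weighted features.
        self.classifier = nn.Sequential(
            nn.Linear(num_features, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor):
        weights = self.attention(x)      # per-feature importance scores
        weighted = x * weights           # emphasize influential features
        logit = self.classifier(weighted)
        return logit, weights

# Usage: rank the features that most influenced a single prediction.
feature_names = ["SEND_SMS", "READ_CONTACTS", "getDeviceId", "INTERNET"]  # illustrative only
model = AttentionMLP(num_features=len(feature_names))
sample = torch.tensor([[1.0, 1.0, 1.0, 1.0]])  # 1 = feature present in the APK
logit, weights = model(sample)
is_malicious = torch.sigmoid(logit).item() > 0.5
ranked = sorted(zip(feature_names, weights.squeeze(0).tolist()),
                key=lambda pair: pair[1], reverse=True)
print(is_malicious, ranked[:3])  # top-ranked features could feed an explanation stage
```

In a framework like the one described, such ranked features would then be mapped to natural language descriptions of malicious behaviors in a second stage; that mapping is outside the scope of this sketch.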
πŸ“ How to Cite
Miss. Shruti Bodke, Miss. Prachi Patil, Miss. Tanvi Patil, Prof. Prachi Dhanawat, "Demystifying Android Malware Detection with Explainable AI", International Journal of Scientific Research and Engineering Development, Vol. 9, Issue 1, pp. 533-538, Jan-Feb 2026. ISSN: 2581-7175. www.ijsred.com. Published by Scientific and Academic Research Publishing.