International Journal of Scientific Research and Engineering Development (International Peer Reviewed Open Access Journal) ISSN [Online]: 2581-7175

Paper Information
| Paper Title | Offensive and Defensive Dynamics of Adversarial AI in Cybersecurity |
| Authors | Rosemary Chisom Dimakunne, Paul Clement Uwamotobon Akpabio, Monsuru Olarewaju Moshood |
| Published Issue | Volume 8, Issue 6 |
| Year of Publication | 2025 |
| Unique Identification Number | IJSRED-V8I6P298 |
Abstract
This study provides a dual-perspective analysis of adversarial AI in cybersecurity, examining both attack (offensive) and defense (protective) strategies. The motivation stems from the growing adoption of machine learning (ML)-based intrusion detection, malware classification, and other security systems, and the emergent threat of adversarial attacks that can degrade or evade those systems. We specifically investigate poisoning attacks (which corrupt training data or models) and evasion attacks (which add subtle perturbations to inputs at inference) and their impacts on popular ML models. The research combines empirical experiments with theoretical analysis to explore how such attacks succeed and how defenses perform under attack. We implemented representative adversarial attack algorithms on benchmark cybersecurity datasets. For evasion attacks, we used gradient-based methods (e.g., FGSM and PGD) to craft small input perturbations that cause misclassification[1]. For poisoning attacks, we applied data injection and label-flipping techniques (including a Generative Adversarial Network to generate malicious training samples) to subtly shift the training distribution and model parameters[2][3]. Experiments were conducted on standard intrusion detection datasets (including CICIDS2017 and UNSW-NB15) with classifiers such as logistic regression, decision tree, random forest, gradient boosting, and a deep neural network. Defensive methods evaluated include adversarial training (training on adversarial examples)[4], ensemble modeling (combining multiple classifiers)[5], feature-squeezing-based input filtering[6], as well as data sanitization (removing or down-weighting suspect training data) and robust statistical techniques[7]. Performance was measured in terms of accuracy, precision, recall, F1-score, detection rates for adversarial inputs, and the trade-off between robustness (accuracy on adversarially perturbed data) and clean-data accuracy.
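The abstract does not reproduce the attack implementations; as an illustration, the FGSM-style evasion attack it describes can be sketched in a few lines of NumPy against a logistic-regression classifier. The weights, dimensions, and $\epsilon$ below are illustrative placeholders, not the paper's configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    """Fast Gradient Sign Method for logistic regression: perturb each
    feature by +/- eps in the direction that increases the loss, i.e.
    the sign of the loss gradient with respect to the input."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy detector: 20 features, random weights (illustrative only).
rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.0

x = w / np.linalg.norm(w)         # a sample the model confidently flags (y = 1)
x_adv = fgsm(x, w, b, y=1.0, eps=0.5)

print(f"clean score: {sigmoid(w @ x + b):.3f}")      # well above 0.5
print(f"adv score:   {sigmoid(w @ x_adv + b):.3f}")  # pushed below 0.5
```

Because the perturbation is bounded by $\epsilon$ in the $\ell_\infty$ norm, each feature changes only slightly, yet the classifier's score flips; PGD iterates this same step with projection back into the $\epsilon$-ball.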
The offensive experiments demonstrate that poisoning attacks can significantly shift model decision boundaries, leading to degraded training accuracy and biased models[8]. Even injecting a small fraction of poisoned data caused notable drops in detection accuracy (up to 20-30% in our tests, depending on the model) as the model's learning was skewed. Evasion attacks were able to generate imperceptible input perturbations (e.g., modifying network traffic features by only a few percent) that caused high misclassification rates[1]. For example, a fast gradient sign method attack at an $\ell_\infty$ perturbation of $\epsilon=0.05$ reduced an intrusion detector's accuracy by over 40% (from ~95% to ~55%). On the defensive side, adversarial training substantially improved model robustness to evasion attacks: models trained on adversarial examples recovered much of the lost accuracy on perturbed inputs[4]. However, this came at the cost of slightly lower accuracy on normal (unperturbed) data, confirming the known trade-off in which robustness increases while clean accuracy drops[9]. Ensemble models outperformed individual models in both accuracy and robustness: by forcing attackers to evade multiple classifiers simultaneously, ensembles made successful evasion significantly harder[5]. An ensemble intrusion detector (e.g., combining a decision tree, a k-NN, and an SVM) maintained higher accuracy under attack than any single model alone. The feature squeezing detection mechanism proved effective at flagging adversarial inputs: by comparing a model's predictions on original versus "squeezed" inputs (reduced feature precision), we detected a majority of adversarial samples with few false alarms (consistent with prior work achieving high detection rates)[6].
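The feature-squeezing detector described above can be sketched minimally: squeeze the input to a lower bit depth, and flag the sample if the model's score shifts by more than a threshold. The bit depth, threshold, and the toy single-feature model below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def squeeze(x, bits):
    """Reduce feature precision to 2**bits levels on [0, 1]."""
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

def flags_adversarial(predict, x, bits=1, threshold=0.3):
    """Flag x if the model's score moves by more than `threshold` when
    the input is squeezed: legitimate inputs tend to be stable under
    precision reduction, adversarial perturbations often are not."""
    return abs(predict(x) - predict(squeeze(x, bits))) > threshold

# Toy one-feature model with a sharp decision boundary at 0.5 (illustrative).
predict = lambda x: sigmoid(20.0 * (x[0] - 0.5))

clean = np.array([0.10])   # clearly benign; stable under squeezing
adv   = np.array([0.53])   # nudged just past the boundary
print(flags_adversarial(predict, clean), flags_adversarial(predict, adv))
# prints: False True
```

The clean sample's score barely moves when squeezed, while the adversarial nudge, which relied on fine-grained feature values, is erased by quantization, producing a large score shift that trips the detector.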
Data sanitization (removing outliers or training points with high negative impact) and robust statistics (regularization) also improved resistance to poisoning attacks, though their protection was limited against adaptive attackers[7]. This dual-perspective analysis contributes to the cybersecurity and adversarial machine learning literature in several ways. (1) We present a taxonomy-guided analysis of attack types (data poisoning vs. evasion) using standard terminology from NIST's adversarial ML taxonomy[10], helping to clarify the threat landscape in cybersecurity contexts. (2) We implement multiple attack algorithms (including label-flipping and data-injection poisoning, as well as gradient-based evasion attacks like FGSM/PGD and a GAN-based attack) on real network intrusion detection datasets, quantifying their impact on different ML model types. (3) We evaluate a comprehensive suite of defense strategies, from data sanitization and robust statistical training[7] to adversarial training[4], ensemble learning[5], defensive distillation, and feature-squeezing detection[6], analyzing their effectiveness and trade-offs (e.g., robustness vs. clean accuracy, computational cost)[11][12]. (4) We distill practical guidelines for deploying robust ML models in cybersecurity: recommendations include incorporating adversarial examples into training, monitoring model inputs and outputs for signs of attack (e.g., using ensemble disagreement or feature squeezers as detectors), applying data provenance and sanitization checks to training data, and using layered defenses for defense in depth. By combining offensive and defensive insights, our work illuminates how adversaries can undermine ML-based security systems and how defenders can strengthen them, contributing toward more resilient cybersecurity solutions.
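The label-flipping poisoning and sanitization pairing discussed above can also be sketched concretely. Below, a random fraction of training labels is flipped, then a simple kNN-disagreement rule drops points whose label conflicts with most of their neighbours; this is one of many possible sanitization rules and is not claimed to be the paper's exact method (all parameters are illustrative):

```python
import numpy as np

def flip_labels(y, frac, rng):
    """Label-flipping poisoning: invert the labels of a random
    fraction of the training set."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    y[idx] = 1 - y[idx]
    return y

def sanitize_indices(X, y, k=5):
    """kNN-disagreement sanitization: keep only points whose label
    agrees with the majority of their k nearest neighbours."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d)[1:k + 1]        # skip the point itself
        if (y[nn] == y[i]).sum() >= k / 2:
            keep.append(i)
    return np.array(keep)

# Two well-separated clusters stand in for benign/malicious traffic.
rng = np.random.default_rng(1)
n = 100
X = np.vstack([rng.normal(-2, 0.5, (n, 2)), rng.normal(2, 0.5, (n, 2))])
y_true = np.array([0] * n + [1] * n)

y_poisoned = flip_labels(y_true, 0.10, rng)
keep = sanitize_indices(X, y_poisoned, k=5)
flipped_left = int((y_poisoned[keep] != y_true[keep]).sum())
print(f"{(y_poisoned != y_true).sum()} flipped, {flipped_left} survive sanitization")
```

On clean, well-clustered data the rule removes almost all flipped labels while keeping most legitimate points; as the abstract notes, an adaptive attacker who places poisoned points near the decision boundary can weaken such filters considerably.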
How to Cite
Rosemary Chisom Dimakunne, Paul Clement Uwamotobon Akpabio, Monsuru Olarewaju Moshood, "Offensive and Defensive Dynamics of Adversarial AI in Cybersecurity," International Journal of Scientific Research and Engineering Development, V8(6): pp. 3183-3224, Nov-Dec 2025. ISSN: 2581-7175. www.ijsred.com. Published by Scientific and Academic Research Publishing.