Impact of Feature Reduction on the Efficiency of Wireless Intrusion Detection Systems

Abstract
Intrusion Detection Systems (IDSs) are a major line of defense for protecting network resources from unauthorized access. A common approach in intrusion detection models, specifically in anomaly detection models, is to use classifiers as detectors. Selecting the best set of features is central to the performance, learning speed, accuracy, and reliability of these detectors, and to removing noise from the features used to construct the classifiers. In most current systems, the features used for training and testing the intrusion detection system consist of basic information from the TCP/IP header, with little attention to features associated with lower-level protocol frames. The resulting detectors are efficient and accurate at detecting attacks at the network and transport layers, but they cannot detect 802.11-specific attacks such as deauthentication attacks or MAC-layer DoS attacks. In this paper, we propose a novel hybrid model that efficiently selects the optimal set of features for detecting 802.11-specific intrusions. Our feature-selection model uses the information gain ratio to compute the relevance of each feature and the k-means algorithm to select the optimal set of MAC-layer features, improving the accuracy of intrusion detection systems while reducing the learning time of their learning algorithms. In the experimental section, we study the impact of this feature-set optimization on the performance and learning time of several types of neural-network-based classifiers. Experimental results with three neural network architectures show that optimizing the wireless feature set has a significant impact on the efficiency and accuracy of the intrusion detection system.
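The following is a minimal sketch of the two-stage feature-selection idea described in the abstract: rank each (discretized) feature by its information gain ratio, then use k-means clustering to score candidate feature subsets and keep the subset that best separates normal traffic from attacks. The function names, the greedy subset-growth strategy, the cluster-to-class scoring rule, and the use of scikit-learn's KMeans are illustrative assumptions, not details taken from the paper.

```python
# Sketch of gain-ratio ranking followed by k-means-based subset selection.
# Assumes X is an (n_samples, n_features) array of discretized MAC-layer
# features and labels is an integer class vector (e.g., 0 = normal, 1 = attack).
import numpy as np
from sklearn.cluster import KMeans

def entropy(values):
    """Shannon entropy of a discrete value vector."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature, labels):
    """Information gain ratio of one discretized feature w.r.t. the labels."""
    h_labels = entropy(labels)
    vals, counts = np.unique(feature, return_counts=True)
    weights = counts / counts.sum()
    h_cond = sum(w * entropy(labels[feature == v]) for v, w in zip(vals, weights))
    split_info = entropy(feature)
    if split_info == 0:
        return 0.0
    return (h_labels - h_cond) / split_info

def cluster_score(X, labels, n_clusters=2):
    """Score a feature subset by how well k-means clusters match the labels,
    mapping each cluster to its majority class."""
    assignments = KMeans(n_clusters=n_clusters, n_init=10,
                         random_state=0).fit_predict(X)
    correct = 0
    for c in range(n_clusters):
        members = labels[assignments == c]
        if members.size:
            correct += np.bincount(members).max()
    return correct / labels.size

def select_features(X, labels):
    """Rank features by gain ratio, then grow the subset greedily,
    keeping a feature only if it improves the k-means score."""
    ranked = np.argsort([-gain_ratio(X[:, j], labels)
                         for j in range(X.shape[1])])
    best_subset, best_score = [], 0.0
    for j in ranked:
        candidate = best_subset + [j]
        score = cluster_score(X[:, candidate], labels)
        if score > best_score:
            best_subset, best_score = candidate, score
    return best_subset
```

Under these assumptions, select_features(X, labels) returns the indices of the retained MAC-layer features, which would then be used to train and evaluate the neural-network detectors.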
