Winnowing Wrapper Method: An Efficient Feature Selection Technique

Feature selection is a critical task in machine learning, big data, and data mining. It aims to enhance model performance by identifying the most relevant features and is widely used in data- and information-driven areas such as social media platforms (e.g., Facebook, Twitter), medicine, and business. Irrelevant and redundant features degrade a model's performance and can also lead to wrong predictions. In this paper, we introduce a novel winnowing wrapper method that employs a sliding-window random-shuffle approach to dynamically adjust feature subsets, identifying relevant features based on both model performance and a specified threshold value. We have carried out extensive evaluations to compare the performance of our proposed method against existing works. This iterative and adaptive process ensures efficient identification of the most important features, reduces computational complexity compared to conventional wrapper methods, and provides balanced performance on both high-dimensional and large datasets. The efficiency of the method is demonstrated through its application to the ``Stroke Prediction'' and ``Heart Disease'' datasets, showcasing its potential to improve execution time while maintaining a balance between performance and efficiency.