The performance of classification methods such as Support Vector Machines depends heavily on the feature set used to construct the classifier. Feature selection is an NP-hard problem that has been studied extensively in the literature. Most strategies eliminate features independently of classifier construction, either by exploiting statistical properties of each variable or via greedy search; all such strategies are heuristic by nature. In this work we propose two Mixed Integer Linear Programming formulations, based on extensions of Support Vector Machines, that overcome these shortcomings by performing feature selection simultaneously with classifier construction within a single optimization model. In experiments on real-world benchmark datasets, our approaches outperformed well-known feature selection techniques, yielding better predictions with consistently fewer selected features.
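The abstract above describes an embedded approach, where feature selection happens inside classifier construction rather than as a separate preprocessing step. The paper's MILP formulations are not reproduced here; as an illustrative sketch of the embedded idea only, the snippet below uses a different, well-known technique, an l1-penalized linear SVM, whose sparsity-inducing penalty zeroes out coefficients of irrelevant features while the classifier is being fit (the dataset and hyperparameters are arbitrary choices, not from the paper).

```python
# Illustrative embedded feature selection via an l1-penalized linear SVM.
# NOTE: this is NOT the paper's MILP approach; it is a standard baseline
# that also couples selection with classifier construction.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The l1 penalty drives the weights of irrelevant features to exactly zero,
# so selection is a by-product of fitting the classifier itself.
selector = SelectFromModel(
    LinearSVC(C=0.1, penalty="l1", dual=False, max_iter=10000)
)
model = make_pipeline(
    StandardScaler(),                       # scale features before the sparse fit
    selector,                               # keep only features with nonzero weight
    LinearSVC(dual=False, max_iter=10000),  # final classifier on selected features
)
model.fit(X_tr, y_tr)

n_selected = int(selector.get_support().sum())
print(f"selected {n_selected} of {X.shape[1]} features, "
      f"test accuracy {model.score(X_te, y_te):.3f}")
```

In practice, `C` controls the sparsity/accuracy trade-off: smaller values select fewer features, mirroring the trade-off the MILP formulations optimize exactly rather than heuristically.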
Funding Information:
Support from the Institute of Complex Engineering Systems (ICM: P-05-004-F, CONICYT: FBO16) (http://www.isci.cl) is gratefully acknowledged. The first author was supported by FONDECYT project 11121196. The research of the last author is supported by the Interuniversity Attraction Poles Programme initiated by the Belgian Science Policy Office.
- Feature selection
- Mixed Integer Linear Programming
- Support Vector Machine