Abstract
In this work, the Synthetic Minority Over-sampling Technique (SMOTE) is adapted to high-dimensional binary classification settings. A novel distance metric is proposed for computing the neighborhood of each minority sample, which takes into account only the subset of available attributes that are relevant for the task. Three distance metrics are explored (Euclidean, Manhattan, and Chebyshev), together with four attribute-ranking strategies (Fisher Score, Mutual Information, Eigenvector Centrality, and Correlation Score). Our proposal was compared with several oversampling techniques on low- and high-dimensional class-imbalanced datasets, including a case study on Natural Language Processing (NLP). The proposed oversampling strategy achieved superior results on average compared with SMOTE and its variants, demonstrating the importance of selecting the right attributes when defining the neighborhood in SMOTE-based oversampling methods.
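As a rough illustration of the idea described in the abstract, the sketch below ranks attributes with a Fisher-Score-style criterion, searches for each minority sample's nearest neighbors using only the top-ranked attributes, and then interpolates new samples in the full attribute space as in standard SMOTE. The function names, the choice of `top_k`, and the interpolation details are illustrative assumptions, not the exact algorithm of the paper.

```python
# Minimal sketch of feature-filtered SMOTE-style oversampling.
# NOTE: fisher_scores, filtered_smote, top_k, and the interpolation scheme
# are illustrative assumptions, not the paper's exact method.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def fisher_scores(X, y):
    """Per-attribute Fisher Score for a binary target y in {0, 1}."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den


def filtered_smote(X, y, minority_label=1, k_neighbors=5, top_k=20,
                   n_new=100, metric="euclidean", seed=0):
    """Generate synthetic minority samples; neighbors are found on the
    top-ranked attributes only."""
    rng = np.random.default_rng(seed)
    X_min = X[y == minority_label]

    # Rank attributes (here via Fisher Score) and keep only the most
    # relevant ones for the neighborhood search.
    relevant = np.argsort(fisher_scores(X, y))[::-1][:top_k]

    nn = NearestNeighbors(n_neighbors=k_neighbors + 1, metric=metric)
    nn.fit(X_min[:, relevant])
    _, idx = nn.kneighbors(X_min[:, relevant])

    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = rng.choice(idx[i][1:])          # skip the sample itself
        gap = rng.random()
        # Interpolate in the full attribute space, as in standard SMOTE.
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)
```

Swapping the `metric` argument for `"manhattan"` or `"chebyshev"`, or replacing `fisher_scores` with another ranking such as mutual information, covers the kinds of variants the abstract enumerates.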
| | |
|---|---|
| Original language | English |
| Pages (from-to) | 380-389 |
| Number of pages | 10 |
| Journal | Applied Soft Computing Journal |
| Volume | 76 |
| DOIs | |
| State | Published - Mar 2019 |
Bibliographical note
Funding Information: This research was partially funded by FONDECYT, Chile projects 1160738 and 1160894, and by the Complex Engineering Systems Institute, Chile (CONICYT - PIA - FB0816). The authors are grateful to the anonymous reviewers who contributed to improving the quality of the original paper.
Publisher Copyright:
© 2018 Elsevier B.V.
Keywords
- Data resampling
- Feature selection
- High-dimensional datasets
- Imbalanced data classification
- SMOTE