TY - JOUR
T1 - An alternative SMOTE oversampling strategy for high-dimensional datasets
AU - Maldonado, Sebastián
AU - López, Julio
AU - Vairetti, Carla
N1 - Funding Information:
This research was partially funded by FONDECYT, Chile projects 1160738 and 1160894, and by the Complex Engineering Systems Institute, Chile (CONICYT-PIA-FB0816). The authors are grateful to the anonymous reviewers who contributed to improving the quality of the original paper.
Publisher Copyright:
© 2018 Elsevier B.V.
PY - 2019/3
Y1 - 2019/3
N2 - In this work, the Synthetic Minority Over-sampling Technique (SMOTE) is adapted to high-dimensional binary classification settings. A novel distance metric is proposed for computing the neighborhood of each minority sample, taking into account only the subset of available attributes that is relevant to the task. Three variants of the distance metric are explored (Euclidean, Manhattan, and Chebyshev distances), together with four feature-ranking strategies (Fisher Score, Mutual Information, Eigenvector Centrality, and Correlation Score). Our proposal was compared with various oversampling techniques on low- and high-dimensional class-imbalanced datasets, including a case study on Natural Language Processing (NLP). The proposed oversampling strategy showed superior results on average when compared with SMOTE and other variants, demonstrating the importance of selecting the right attributes when defining the neighborhood in SMOTE-based oversampling methods.
AB - In this work, the Synthetic Minority Over-sampling Technique (SMOTE) is adapted to high-dimensional binary classification settings. A novel distance metric is proposed for computing the neighborhood of each minority sample, taking into account only the subset of available attributes that is relevant to the task. Three variants of the distance metric are explored (Euclidean, Manhattan, and Chebyshev distances), together with four feature-ranking strategies (Fisher Score, Mutual Information, Eigenvector Centrality, and Correlation Score). Our proposal was compared with various oversampling techniques on low- and high-dimensional class-imbalanced datasets, including a case study on Natural Language Processing (NLP). The proposed oversampling strategy showed superior results on average when compared with SMOTE and other variants, demonstrating the importance of selecting the right attributes when defining the neighborhood in SMOTE-based oversampling methods.
KW - Data resampling
KW - Feature selection
KW - High-dimensional datasets
KW - Imbalanced data classification
KW - SMOTE
UR - http://www.scopus.com/inward/record.url?scp=85059345203&partnerID=8YFLogxK
U2 - 10.1016/j.asoc.2018.12.024
DO - 10.1016/j.asoc.2018.12.024
M3 - Article
AN - SCOPUS:85059345203
SN - 1568-4946
VL - 76
SP - 380
EP - 389
JO - Applied Soft Computing Journal
JF - Applied Soft Computing Journal
ER -
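
The abstract of this record describes a SMOTE variant in which the k-nearest-neighbor search for each minority sample is restricted to a subset of attributes chosen by a feature-ranking criterion, while synthetic samples are still generated by linear interpolation. The Python sketch below is a minimal illustration of that idea under stated assumptions, not the authors' published implementation: the combination of a Fisher Score ranking with a Manhattan-distance neighborhood is only one of the options mentioned in the abstract, and all function and parameter names (fisher_scores, feature_ranked_smote, n_select, k, n_synthetic) are hypothetical.

import numpy as np

def fisher_scores(X, y):
    # Per-feature Fisher score for a binary target (illustrative ranking criterion).
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den

def feature_ranked_smote(X, y, minority_label=1, n_select=10, k=5,
                         n_synthetic=100, seed=None):
    # SMOTE-style oversampling in which neighbors are found using only the
    # top-ranked attributes (Manhattan distance); interpolation uses all features.
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)

    top = np.argsort(fisher_scores(X, y))[::-1][:n_select]  # indices of selected attributes
    X_min = X[y == minority_label]                          # assumes more than k minority samples

    # Pairwise Manhattan distances computed on the selected attributes only.
    X_sel = X_min[:, top]
    D = np.abs(X_sel[:, None, :] - X_sel[None, :, :]).sum(axis=2)
    np.fill_diagonal(D, np.inf)                             # exclude self-matches
    neighbors = np.argsort(D, axis=1)[:, :k]                # k nearest minority neighbors per sample

    synthetic = np.empty((n_synthetic, X.shape[1]))
    for s in range(n_synthetic):
        i = rng.integers(len(X_min))                        # random minority sample
        j = neighbors[i, rng.integers(k)]                   # one of its k neighbors
        gap = rng.random()                                  # interpolation factor in [0, 1)
        synthetic[s] = X_min[i] + gap * (X_min[j] - X_min[i])
    return synthetic

As in standard SMOTE, each synthetic point lies on the segment between a minority sample and one of its k neighbors; the only change illustrated here is that neighborhood membership is decided on the top-ranked attributes rather than on the full feature space.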