Abstract
In this paper, we present a novel approach for n-gram generation in text classification. The a-priori algorithm is adapted to prune word sequences by combining three feature selection techniques. Unlike the traditional two-step approach to text classification, in which feature selection is performed after the n-gram construction process, our proposal performs embedded feature elimination during the application of the a-priori algorithm. The proposed strategy reduces the number of branches to be explored, speeding up the process and making the construction of all word sequences tractable. Our proposal has the additional advantage of producing a low-dimensional dataset containing only the features that are relevant for classification, which can be used directly without a separate feature selection step. Experiments on text classification datasets for sentiment analysis demonstrate that our approach yields the best predictive performance compared with other feature selection approaches, while also facilitating a better understanding of the words and phrases that explain a given task, in our case online reviews and ratings across various domains.
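The abstract describes the mechanism only at a high level, so the Python sketch below illustrates the general idea rather than the paper's actual method: word sequences are grown level by level in Apriori fashion, and a candidate is discarded during generation if it fails either a minimum-support check or a class-relevance check, so unpromising branches are never expanded. The chi-squared relevance threshold, the `min_support` and `min_chi2` parameters, and the simplified candidate-extension rule (extending by surviving unigrams rather than joining overlapping (n-1)-grams) are all illustrative assumptions; the abstract does not specify the three feature selection techniques that the paper combines.

```python
# Illustrative sketch only: embedded pruning of n-gram candidates during
# Apriori-style construction. The relevance criterion (chi-squared) and all
# thresholds are placeholders, not the paper's actual techniques.
from itertools import chain
import numpy as np
from sklearn.feature_selection import chi2


def doc_frequency(ngram, docs):
    """Fraction of documents whose token sequence contains `ngram` contiguously."""
    n = len(ngram)
    hits = sum(
        any(tuple(toks[i:i + n]) == ngram for i in range(len(toks) - n + 1))
        for toks in docs
    )
    return hits / len(docs)


def chi2_score(ngram, docs, labels):
    """Chi-squared statistic of the n-gram's presence indicator vs. class labels."""
    n = len(ngram)
    presence = np.array([
        int(any(tuple(toks[i:i + n]) == ngram for i in range(len(toks) - n + 1)))
        for toks in docs
    ]).reshape(-1, 1)
    score, _ = chi2(presence, labels)
    return float(score[0])


def apriori_ngrams(docs, labels, max_n=3, min_support=0.01, min_chi2=3.84):
    """Grow n-grams level by level, keeping only candidates that are both
    frequent enough (classic Apriori pruning) and relevant to the labels
    (the embedded feature-elimination step)."""
    # Level 1: unigrams that pass both filters.
    kept = {1: set()}
    for w in set(chain.from_iterable(docs)):
        g = (w,)
        if doc_frequency(g, docs) >= min_support and chi2_score(g, docs, labels) >= min_chi2:
            kept[1].add(g)
    # Levels 2..max_n: extend each surviving (n-1)-gram by one surviving unigram.
    for n in range(2, max_n + 1):
        kept[n] = set()
        for prefix in kept[n - 1]:
            for (w,) in kept[1]:
                g = prefix + (w,)
                if doc_frequency(g, docs) >= min_support and chi2_score(g, docs, labels) >= min_chi2:
                    kept[n].add(g)
    return sorted(chain.from_iterable(kept.values()), key=len)


# Toy usage: tokenised reviews with binary sentiment labels. Neutral words such
# as "battery" or "screen" are pruned at level 1 despite being frequent, so no
# n-gram containing them is ever generated.
docs = [["great", "battery", "life"], ["poor", "battery", "life"],
        ["great", "screen"], ["poor", "screen"]]
labels = np.array([1, 0, 1, 0])
print(apriori_ngrams(docs, labels, max_n=2, min_support=0.25, min_chi2=0.5))
```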
Original language | English |
---|---|
Pages (from-to) | 509-525 |
Number of pages | 17 |
Journal | Intelligent Data Analysis |
Volume | 25 |
Issue number | 3 |
State | Published - 2021 |
Bibliographical note
Publisher Copyright: © 2021 - IOS Press. All rights reserved.
Keywords
- Feature selection
- n-gram construction
- sentiment analysis
- text categorization
- text classification