 

How to handle text classification problems when multiple features are involved

I am working on a text classification problem that involves multiple text features, and I need to build a model to predict a salary range. Please refer to the sample dataset. Most resources/tutorials deal with feature extraction on only one column before predicting a target. I am aware of the usual process: text pre-processing, feature extraction (CountVectorizer or TF-IDF), and then applying an algorithm.

In this problem, I have multiple input text features. How do I handle text classification when multiple features are involved? These are the methods I have already tried, but I am not sure they are the right ones. Kindly provide your inputs/suggestions.

1) Applied data cleaning to each feature separately, followed by TF-IDF and then logistic regression. Here I tried to see whether a single feature alone is enough for classification.

2) Applied data cleaning to all the columns separately, then applied TF-IDF to each feature and merged all the feature vectors into a single feature vector. Finally, logistic regression.

3) Applied data cleaning to all the columns separately and merged all the cleaned columns into one feature, 'merged_text'. Then applied TF-IDF to this merged_text, followed by logistic regression.
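For reference, methods 2 and 3 can be sketched with sklearn as below. The column names (`job_description`, `key_skills`) and the toy data are assumptions for illustration, not taken from the actual dataset.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy stand-in for the (unavailable) sample dataset.
df = pd.DataFrame({
    "job_description": ["build ml models in python", "manage sales team targets"],
    "key_skills": ["python sklearn pandas", "sales negotiation crm"],
    "salary_range": ["10to15", "5to8"],
})

# Method 2: a separate TF-IDF per column; ColumnTransformer hstacks the vectors.
method2 = Pipeline([
    ("tfidf", ColumnTransformer([
        ("desc", TfidfVectorizer(), "job_description"),
        ("skills", TfidfVectorizer(), "key_skills"),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])
method2.fit(df, df["salary_range"])

# Method 3: concatenate the cleaned columns into one string, then one TF-IDF.
merged = df["job_description"] + " " + df["key_skills"]
method3 = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(max_iter=1000)),
])
method3.fit(merged, df["salary_range"])
```

A practical difference: method 2 keeps separate vocabularies per column (the same word in 'key skills' and 'job description' gets different weights), while method 3 pools everything into one vocabulary.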

All three methods gave me around 35-40% accuracy on the cross-validation and test sets. I am expecting at least 60% accuracy on the test set, whose labels are not provided.

Also, I don't understand how to use 'company_name' and 'experience' alongside text data; there are 2000+ unique values in company_name. Please provide input/pointers on how to handle numeric and categorical data in a text classification problem.
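One common approach for these two columns is sketched below: parse 'experience' into numeric min/max values, and hash the high-cardinality 'company_name' into a fixed-width vector instead of creating 2000+ dummy columns. The string format ("2 - 7 yrs") and column names are assumptions about the dataset.

```python
import re

import pandas as pd
from sklearn.feature_extraction import FeatureHasher

df = pd.DataFrame({
    "experience": ["2 - 7 yrs", "0 - 1 yrs"],
    "company_name": ["Acme Corp", "Globex"],
})

def parse_experience(s):
    """Extract min/max years from strings like '2 - 7 yrs'."""
    nums = [int(n) for n in re.findall(r"\d+", s)]
    return pd.Series({"min_exp": nums[0], "max_exp": nums[-1]})

exp = df["experience"].apply(parse_experience)

# FeatureHasher maps arbitrary company names into a fixed number of columns,
# so 2000+ unique values don't blow up the feature matrix.
hasher = FeatureHasher(n_features=64, input_type="string")
company_vec = hasher.transform([[c] for c in df["company_name"]])
```

The hashed matrix and the numeric min/max columns can then be hstacked with the TF-IDF features before training.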

Chetan Ambi asked Dec 26 '18 07:12

People also ask

Which algorithm is best for multiclass text classification?

Linear Support Vector Machine is widely regarded as one of the best text classification algorithms. We achieve a higher accuracy score of 79%, a 5% improvement over Naive Bayes.
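A minimal multiclass text pipeline with a linear SVM looks like this (the 79% figure above is from the quoted snippet, not reproduced by this toy example):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = ["senior java developer", "junior python intern", "java architect lead"]
labels = ["high", "low", "high"]

# LinearSVC handles multiclass targets via one-vs-rest automatically.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)
```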

Which of these are methods to reduce the number of features that a text classifier might use?

You can use a dimensionality reduction method to reduce this effect. A possible choice is Latent Semantic Analysis, implemented in sklearn.
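In sklearn, Latent Semantic Analysis amounts to `TruncatedSVD` applied to a TF-IDF matrix; a minimal sketch:

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "python machine learning engineer",
    "sales manager retail experience",
    "data scientist python statistics",
]

X = TfidfVectorizer().fit_transform(docs)   # sparse (3, n_terms) matrix
lsa = TruncatedSVD(n_components=2, random_state=0)
X_reduced = lsa.fit_transform(X)            # dense (3, 2) matrix
```

Unlike PCA, TruncatedSVD works directly on sparse matrices, which is why it is the standard choice after TF-IDF.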


1 Answer

Try these things:

  1. Apply text preprocessing to 'job description', 'job designation' and 'key skills'. Remove all stop words, split on words while removing punctuation, lowercase all words, then apply TF-IDF or CountVectorizer; don't forget to scale these features before training the model.

  2. Convert 'experience' into two features, minimum experience and maximum experience, and treat them as discrete numeric features.

  3. Treat company and location as categorical features and create dummy variables/one-hot encodings before training the model.

  4. Try combining job type and key skills before vectorization, and see whether that works better.

  5. Use a Random Forest (RandomForestClassifier, since salary range is a class label) and tune the hyperparameters n_estimators, max_depth, and max_features using GridSearchCV.
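The steps above can be sketched as one sklearn pipeline. The column names and toy rows are assumptions based on the question, not the real dataset, and the tiny parameter grid is only illustrative.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    "job_description": ["python developer", "sales manager", "data engineer",
                        "hr executive", "sales lead", "ml engineer"],
    "key_skills": ["python sql", "crm sales", "spark python",
                   "recruiting", "sales b2b", "python pytorch"],
    "company_name": ["Acme", "Globex", "Acme", "Initech", "Globex", "Acme"],
    "min_exp": [2, 5, 3, 1, 4, 2],
    "max_exp": [5, 8, 6, 3, 7, 5],
    "salary_range": ["mid", "high", "mid", "low", "high", "low"],
})

# Steps 1-3: per-column TF-IDF, one-hot company, numeric experience passthrough.
features = ColumnTransformer([
    ("desc", TfidfVectorizer(), "job_description"),
    ("skills", TfidfVectorizer(), "key_skills"),
    ("company", OneHotEncoder(handle_unknown="ignore"), ["company_name"]),
    ("num", "passthrough", ["min_exp", "max_exp"]),
])

# Step 5: random forest tuned with GridSearchCV.
pipe = Pipeline([
    ("features", features),
    ("clf", RandomForestClassifier(random_state=0)),
])
grid = GridSearchCV(
    pipe,
    {"clf__n_estimators": [50, 100], "clf__max_depth": [None, 10]},
    cv=2,
)
grid.fit(df, df["salary_range"])
```

`handle_unknown="ignore"` matters here: with 2000+ companies, the test set will contain names never seen during training, and this makes the encoder emit all-zeros instead of raising an error.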

Hopefully, these changes will improve the model's performance.

Let me know how it performs with them.

Ayush Kesarwani answered Oct 03 '22 19:10