scikit-learn - how to model a single feature composed of multiple independent values

My dataset is composed of millions of rows and a few tens of features.

One feature is a label that takes 1000 different values (imagine each row is a user and this feature is the user's first name):

Firstname,Feature1,Feature2,....
Quentin,1,2
Marc,0,2
Gaby,1,0
Quentin,1,0

What would be the best representation for this feature (to perform clustering)?

  1. I could convert the data to integers using a LabelEncoder, but that doesn't make sense here since there is no logical "order" between two different labels:

    Firstname,F1,F2,....
    0,1,2
    1,0,2
    2,1,0
    0,1,0
    
  2. I could split the feature into 1000 features (one per label), with 1 when the label matches and 0 otherwise. However, this would result in a very big matrix (too big if I can't use a sparse matrix in my classifier):

    Quentin,Marc,Gaby,F1,F2,....
    1,0,0,1,2
    0,1,0,0,2
    0,0,1,1,0
    1,0,0,1,0
    
  3. I could represent the LabelEncoder value in binary across N columns. This would reduce the dimension of the final matrix compared to the previous idea, but I'm not sure about the result:

    LabelEncoder(Quentin) = 0 = 0,0
    LabelEncoder(Marc)    = 1 = 0,1
    LabelEncoder(Gaby)    = 2 = 1,0
    
    A,B,F1,F2,....
    0,0,1,2
    0,1,0,2
    1,0,1,0
    0,0,1,0
    
  4. ... Any other ideas?

What do you think about solution 3?
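
For concreteness, here is a minimal sketch of what options 1 to 3 look like in code, using pandas and scikit-learn on the toy data above (note that LabelEncoder assigns codes alphabetically, so the exact integers differ from my examples):

    # Toy sketch of the three candidate encodings.
    import pandas as pd
    from sklearn.preprocessing import LabelEncoder

    df = pd.DataFrame({
        "Firstname": ["Quentin", "Marc", "Gaby", "Quentin"],
        "F1": [1, 0, 1, 1],
        "F2": [2, 2, 0, 0],
    })

    # Option 1: LabelEncoder -- one integer column, arbitrary ordering.
    codes = LabelEncoder().fit_transform(df["Firstname"])

    # Option 2: one-hot -- one 0/1 column per distinct name.
    one_hot = pd.get_dummies(df["Firstname"])

    # Option 3: the integer code written out in N binary columns.
    n_bits = int(codes.max()).bit_length()
    binary = pd.DataFrame(
        [[int(b) for b in format(int(c), f"0{n_bits}b")] for c in codes],
        columns=[f"bit{i}" for i in range(n_bits)],
    )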


Edit: some extra explanations

I should have mentioned this in my first post: in the real dataset, the feature is more like the final leaf of a classification tree (Aa1, Aa2, etc. in the example; it's not a binary tree):

             A                         B                    C 
      Aa          Ab             Ba          Bb         Ca      Cb
    Aa1  Aa2  Ab1 Ab2 Ab3     Ba1 Ba2     Bb1 Bb2    Ca1 Ca2 Cb1 Cb2

So there is a similarity between two terms under the same level (Aa1, Aa2, and Aa3 are quite similar, and Aa1 is as different from Ba1 as it is from Cb2).
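
One way to expose that similarity to a distance metric, as a sketch assuming each label string (like "Aa1") spells out its own path in the tree, is to one-hot encode every level of the hierarchy separately, so that siblings share their coarser columns:

    # Hedged sketch: one set of one-hot columns per hierarchy level,
    # assuming labels like "Aa1" encode their path in the tree.
    import pandas as pd

    labels = pd.Series(["Aa1", "Aa2", "Ba1", "Cb2"])
    levels = pd.DataFrame({
        "level1": labels.str[0],   # A, B, C
        "level2": labels.str[:2],  # Aa, Ba, Cb, ...
        "level3": labels,          # Aa1, Aa2, ...
    })

    # Siblings now overlap in the coarser columns, so Aa1 and Aa2
    # end up closer to each other than either is to Ba1 or Cb2.
    encoded = pd.get_dummies(levels)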

The final goal is to find similar entities from a smaller dataset: we train a OneClassSVM on the smaller dataset and then get a distance for each entity of the entire dataset.
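
A minimal sketch of that last step (with random stand-in data; in practice X_small and X_full would be the encoded feature matrices):

    # Fit on the small labelled subset, score every row of the full set.
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    X_small = rng.normal(size=(100, 5))    # stand-in for the small dataset
    X_full = rng.normal(size=(10000, 5))   # stand-in for the full dataset

    oc_svm = OneClassSVM(gamma="scale").fit(X_small)
    scores = oc_svm.decision_function(X_full)  # higher = more similar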

asked Oct 30 '22 by Quentin


1 Answer

This problem is largely one of one-hot encoding: how do we represent multiple categorical values in a way that lets us use clustering algorithms without screwing up the distance calculation the algorithm needs to do (you could be using some sort of probabilistic finite mixture model, but I digress)? As in user3914041's answer, there really is no definitive answer, but I'll go through each solution you presented and give my impression:

Solution 1

If you convert the categorical column to a numerical one, you face the pretty big issue you mentioned: you basically lose the meaning of that column. What does it really mean if Quentin is 0, Marc is 1, and Gaby is 2? At that point, why even include that column in the clustering? As user3914041's answer says, this is the easiest way to change categorical values into numerical ones, but the resulting codes just aren't meaningful, and could even be detrimental to the results of the clustering.
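
A tiny sketch of why the integer codes mislead a distance-based algorithm (LabelEncoder assigns codes alphabetically, so Gaby = 0, Marc = 1, Quentin = 2):

    from sklearn.preprocessing import LabelEncoder

    codes = LabelEncoder().fit_transform(["Quentin", "Marc", "Gaby"])
    # Gaby -> 0, Marc -> 1, Quentin -> 2 (alphabetical order)

    # |Gaby - Marc| = 1 but |Gaby - Quentin| = 2, so Euclidean distance
    # treats Quentin as "twice as far" from Gaby as Marc is, even though
    # all three names are equally unrelated.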

Solution 2

In my opinion, depending on how you implement all of this and on your goals for the clustering, this would be your best bet. Since I assume you plan to use sklearn and something like k-Means, you should be able to use sparse matrices just fine. However, as imaluengo suggests, you should consider using a different distance metric: you could scale all of your numeric features to the same range as the categorical ones and then use something like cosine distance, or a mix of distance metrics, as I mention below. All in all, this will likely be the most useful representation of your categorical data for your clustering algorithm.
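
As a hedged sketch of that idea (toy data; with millions of rows you would not materialize a full pairwise distance matrix like this): one-hot encode the names as a sparse matrix, scale the numeric columns into the same [0, 1] range, and compute cosine distances:

    import numpy as np
    from scipy.sparse import hstack
    from sklearn.metrics.pairwise import cosine_distances
    from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

    names = np.array([["Quentin"], ["Marc"], ["Gaby"], ["Quentin"]])
    numeric = np.array([[1, 2], [0, 2], [1, 0], [1, 0]], dtype=float)

    one_hot = OneHotEncoder().fit_transform(names)  # sparse by default
    scaled = MinMaxScaler().fit_transform(numeric)  # into [0, 1]

    X = hstack([one_hot, scaled])  # still a sparse matrix
    D = cosine_distances(X)        # dense pairwise distance matrix

A precomputed matrix like D can then be fed to an algorithm that accepts one, e.g. DBSCAN(metric="precomputed").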

Solution 3

I agree with user3914041 that this is not useful; it introduces some of the same problems mentioned for #1: you lose meaning when two (probably) totally different names share a column value.
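
A quick illustration of that problem, using the encoding from the question (Quentin = 00, Marc = 01, Gaby = 10):

    # Hamming distance between the bit patterns depends on the arbitrary
    # integer assignment, not on the names themselves.
    codes = {"Quentin": (0, 0), "Marc": (0, 1), "Gaby": (1, 0)}

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    print(hamming(codes["Marc"], codes["Gaby"]))     # 2
    print(hamming(codes["Quentin"], codes["Gaby"]))  # 1
    # Both pairs should be equally far apart, but they aren't.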

Solution 4

An additional solution is to follow the advice of the answer here. You can consider rolling your own version of a k-means-like algorithm that takes a mix of distance metrics (Hamming distance for the one-hot encoded categorical data, and Euclidean for the rest). There seems to be some work on developing k-means-like algorithms for mixed categorical and numerical data, like here.
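
As a rough sketch of what such a mixed metric could look like (Hamming on the one-hot block, Euclidean on the numeric block; alpha and n_cat are illustrative parameters, not from any library):

    import numpy as np

    def mixed_distance(a, b, n_cat=3, alpha=1.0):
        """a, b: 1-D rows whose first n_cat entries are one-hot bits."""
        hamming = np.sum(a[:n_cat] != b[:n_cat])
        euclid = np.linalg.norm(a[n_cat:] - b[n_cat:])
        return alpha * hamming + euclid

    row1 = np.array([1, 0, 0, 1.0, 2.0])  # Quentin, F1=1, F2=2
    row2 = np.array([0, 1, 0, 0.0, 2.0])  # Marc,    F1=0, F2=2
    print(mixed_distance(row1, row2))

A callable like this can be plugged into algorithms that accept custom metrics (e.g. sklearn's DBSCAN), although it will be much slower than the built-in metrics.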

I guess it's also important to consider whether or not you need to cluster on this categorical data. What are you hoping to see?

answered Nov 15 '22 by rabbit