
Cluster points after KMeans clustering (scikit-learn)

I have done clustering with KMeans in sklearn. While it has a way to get the centroids, I find it rather bizarre that scikit-learn doesn't seem to have a method to get the points belonging to each cluster (or at least I haven't found one so far). Is there a neat way to get the points of each cluster?

I currently have this rather kludgy code to do it, where V is the dataset:

def getClusterPoints(V, labels):
    # Build a dict mapping each cluster label to the list of its points
    clusters = {}
    for l in range(0, max(labels)+1):
        data_points = []
        # indices of all points assigned to cluster l
        indices = [i for i, x in enumerate(labels) if x == l]
        for idx in indices:
            data_points.append(V[idx])
        clusters[l] = data_points
    return clusters
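
For context, I call it roughly like this (just a sketch with placeholder random data):

import numpy as np
from sklearn.cluster import KMeans

V = np.random.rand(100, 4)                       # placeholder dataset
labels = KMeans(n_clusters=3).fit(V).labels_     # cluster label of each row of V
clusters = getClusterPoints(V, labels)
print({k: len(v) for k, v in clusters.items()})  # number of points per cluster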

Suggestions/links are much appreciated.

Thanks! PD.

asked Aug 26 '15 by user1717931

2 Answers

For example:

import numpy as np
from sklearn.cluster import KMeans
from sklearn import datasets

# Use the iris dataset as an example
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Fit KMeans with 3 clusters
estimator = KMeans(n_clusters=3)
estimator.fit(X)

You can get the cluster assignment of each point with

estimator.labels_

Out:

array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
   0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
   0, 0, 0, 0, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
   1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
   1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1, 1,
   2, 2, 2, 2, 1, 2, 1, 2, 1, 2, 2, 1, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2,
   1, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 1], dtype=int32)

Then get the indices of the points in each cluster:

{i: np.where(estimator.labels_ == i)[0] for i in range(estimator.n_clusters)}

Out:

{0: array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
        17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
        34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49]),
 1: array([ 50,  51,  53,  54,  55,  56,  57,  58,  59,  60,  61,  62,  63,
         64,  65,  66,  67,  68,  69,  70,  71,  72,  73,  74,  75,  76,
         78,  79,  80,  81,  82,  83,  84,  85,  86,  87,  88,  89,  90,
         91,  92,  93,  94,  95,  96,  97,  98,  99, 101, 106, 113, 114,
        119, 121, 123, 126, 127, 133, 138, 142, 146, 149]),
 2: array([ 52,  77, 100, 102, 103, 104, 105, 107, 108, 109, 110, 111, 112,
        115, 116, 117, 118, 120, 122, 124, 125, 128, 129, 130, 131, 132,
        134, 135, 136, 137, 139, 140, 141, 143, 144, 145, 147, 148])}

Edit

If you want the arrays of points from X as values rather than arrays of indices:

{i: X[np.where(estimator.labels_ == i)] for i in range(estimator.n_clusters)}
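
As a quick sanity check (a sketch; the names cluster_indices and cluster_points below are just illustrative, not part of scikit-learn):

# Illustrative names for the two dictionaries built above
cluster_indices = {i: np.where(estimator.labels_ == i)[0]
                   for i in range(estimator.n_clusters)}
cluster_points = {i: X[np.where(estimator.labels_ == i)]
                  for i in range(estimator.n_clusters)}

for i in range(estimator.n_clusters):
    # each value of cluster_points is an array of shape (points_in_cluster, n_features)
    print(i, cluster_indices[i].shape, cluster_points[i].shape)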
answered Oct 22 '22 by yangjie

If you read the documentation you will see that KMeans has a labels_ attribute, which gives the cluster assignment of each point.

See a complete example below:

import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

##############################################################################
# Generate sample data
np.random.seed(0)

centers = [[1, 1], [-1, -1], [1, -1]]
n_clusters = len(centers)
X, labels_true = make_blobs(n_samples=3000, centers=centers, cluster_std=0.7)

##############################################################################
# Compute clustering with KMeans

k_means = KMeans(init='k-means++', n_clusters=3, n_init=10)
k_means.fit(X)

##############################################################################
# Plot the results
for i in set(k_means.labels_):
    index = k_means.labels_ == i   # boolean mask selecting points in cluster i
    plt.plot(X[index, 0], X[index, 1], 'o')
plt.show()
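
If you also want to mark the centroids, the fitted model exposes them through its cluster_centers_ attribute. A minimal sketch that could be dropped in just before plt.show() above:

# k_means.cluster_centers_ has shape (n_clusters, n_features), here (3, 2)
centers = k_means.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], marker='x', s=200, color='k')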
answered Oct 22 '22 by kikocorreoso