 

Determining the best k for a k nearest neighbour

I need to do some cluster analysis on a set of 2-dimensional data (I may add extra dimensions along the way).

The analysis itself will form part of the data being fed into a visualisation, rather than the inputs into another process (e.g. Radial Basis Function Networks).

To this end, I'd like to find a set of clusters which primarily "looks right", rather than elucidating some hidden patterns.

My intuition is that k-means would be a good starting place for this, but that finding the right number of clusters to run the algorithm with would be problematic.

The problem I'm coming to is this:

How to determine the 'best' value for k such that the clusters formed are stable and visually verifiable?

Questions:

  • Assuming that this isn't NP-complete, what is the time complexity of finding a good k? (Probably reported in the number of times the k-means algorithm has to be run.)
  • Is k-means a good starting point for this type of problem? If not, what other approaches would you recommend? A specific example, backed by an anecdote/experience, would be maxi-bon.
  • What shortcuts/approximations would you recommend to increase performance?
asked by jamesh, Nov 09 '09



8 Answers

For problems with an unknown number of clusters, agglomerative hierarchical clustering is often a better route than k-means.

Agglomerative clustering produces a tree structure, where the closer you are to the trunk, the fewer clusters there are, so it's easy to scan through all numbers of clusters. The algorithm starts by assigning each point to its own cluster and then repeatedly merges the two closest clusters. Keeping track of the merge sequence gives you an instant snapshot for any number of possible clusters. Therefore, it's often preferable to use this technique over k-means when you don't know how many groups you'll want.
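
For example, a minimal sketch of this workflow in Python with SciPy (the data array here is a placeholder; you build the tree once and then cut it at whatever cluster counts you want to inspect):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # points: (n_samples, 2) array of the 2-D data to cluster (placeholder data)
    points = np.random.rand(200, 2)

    # Build the full agglomerative tree once ("ward" merges the pair of
    # clusters that least increases total within-cluster variance).
    tree = linkage(points, method="ward")

    # Cut the same tree at several candidate cluster counts and inspect each.
    for k in range(2, 10):
        labels = fcluster(tree, t=k, criterion="maxclust")
        print(k, np.bincount(labels)[1:])  # cluster sizes for this cut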

There are other hierarchical clustering methods (see the paper suggested in Imran's comments). The primary advantage of an agglomerative approach is that there are many implementations out there, ready-made for your use.

answered by tom10


In order to use k-means, you should know how many clusters there are. You can't try a naive meta-optimisation, since the more clusters you add (up to one cluster per data point), the more you will over-fit. You may look at cluster validation methods and optimise the k hyperparameter with them, but in my experience it rarely works well. It's also very costly.

If I were you, I would do a PCA, possibly in a polynomial feature space (mind your available time), depending on what you know of your input, and cluster along the most representative components.
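
A rough sketch of that idea in Python with scikit-learn (the feature matrix, number of components, and number of clusters are placeholders you would tune to your data):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    X = np.random.rand(500, 10)          # your raw feature matrix (placeholder)

    # Project onto the components that explain most of the variance.
    pca = PCA(n_components=2)
    X_reduced = pca.fit_transform(X)
    print("explained variance ratio:", pca.explained_variance_ratio_)

    # Cluster in the reduced space.
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_reduced)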

More info on your data set would be very helpful for a more precise answer.

answered by Aszarsha


Here's my approximate solution:

  1. Start with k=2.
  2. For a number of tries:
    1. Run the k-means algorithm to find k clusters.
    2. Find the mean squared distance from the origin to the cluster centroids.
  3. Across those tries, compute the standard deviation of the distances. This is a proxy for the stability of the clusters.
  4. If the stability of the clusters for k is lower than the stability of the clusters for k - 1, return k - 1.
  5. Otherwise, increment k by 1 and go back to step 2.

The thesis behind this algorithm is that the number of sets of k clusters is small for "good" values of k.

If we can find a local optimum for this stability, or an optimal delta for the stability, then we can find a good set of clusters which cannot be improved by adding more clusters.
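
A sketch of this heuristic in Python with scikit-learn, following the steps above (the number of tries and the maximum k are arbitrary; lower standard deviation is treated as higher stability):

    import numpy as np
    from sklearn.cluster import KMeans

    def stability(X, k, tries=10):
        """Std. dev. of the mean squared centroid distance from the origin
        over several runs (lower = more stable, per the steps above)."""
        scores = []
        for seed in range(tries):
            centroids = KMeans(n_clusters=k, n_init=1, random_state=seed).fit(X).cluster_centers_
            scores.append(np.mean(np.sum(centroids ** 2, axis=1)))
        return np.std(scores)

    def pick_k(X, k_max=15):
        prev = stability(X, 2)
        for k in range(3, k_max + 1):
            curr = stability(X, k)
            if curr > prev:      # k is less stable than k - 1, so return k - 1
                return k - 1
            prev = curr
        return k_max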

answered by jamesh


In a previous answer, I explained how Self-Organizing Maps (SOM) can be used in visual clustering.

Alternatively, there exists a variation of the k-means algorithm called X-Means, which finds the number of clusters by optimizing the Bayesian Information Criterion (BIC) and also addresses scalability by using kd-trees.
Weka includes an implementation of X-Means along with many other clustering algorithms, all in an easy-to-use GUI tool.
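
If you'd rather stay in Python than use Weka, the same BIC-driven selection can be sketched with scikit-learn's Gaussian mixture model; this is not X-Means itself, just the same idea of scoring each candidate k with BIC:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    X = np.random.rand(300, 2)   # placeholder data

    # Fit one mixture model per candidate k and keep the k with the lowest BIC.
    bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
            for k in range(1, 11)}
    best_k = min(bics, key=bics.get)
    print(best_k, bics[best_k])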

Finally, you might refer to this page, which discusses the Elbow Method among other techniques for determining the number of clusters in a dataset.

answered by Amro


You might look at papers on cluster validation. Here's one that is cited in papers on microarray analysis, which involves clustering genes with related expression levels.

One such technique is the Silhouette measure that evaluates how closely a labeled point is to its centroid. The general idea is that, if a point is assigned to one centroid but is still close to others, perhaps it was assigned to the wrong centroid. By counting these events across training sets and looking across various k-means clusterings, one looks for the k such that the labeled points overall fall into the "best" or minimally ambiguous arrangement.
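
A minimal sketch of that selection loop in Python with scikit-learn (the range of candidate k values is arbitrary):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    X = np.random.rand(300, 2)   # placeholder data

    # Average silhouette per k: values closer to 1 mean points sit well inside
    # their own cluster; pick the k that maximises it.
    scores = {}
    for k in range(2, 11):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = silhouette_score(X, labels)
    best_k = max(scores, key=scores.get)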

It should be said that clustering is more of a data visualization and exploration technique. It can be difficult to elucidate with certainty that one clustering explains the data correctly, above all others. It's best to merge your clusterings with other relevant information. Is there something functional or otherwise informative about your data, such that you know some clusterings are impossible? This can reduce your solution space considerably.

answered by Alex Reynolds


From your wikipedia link:

Regarding computational complexity, the k-means clustering problem is:

  • NP-hard in a general Euclidean space of dimension d, even for 2 clusters
  • NP-hard for a general number of clusters k, even in the plane
  • If k and d are fixed, the problem can be solved exactly in time O(n^(dk+1) log n), where n is the number of entities to be clustered

Thus, a variety of heuristic algorithms are generally used.

That said, finding a good value of k is usually a heuristic process (i.e. you try a few and select the best).

I think k-means is a good starting point; it is simple and easy to implement (or copy). Only look further if you have serious performance problems.

If the set of points you want to cluster is exceptionally large, a first-order optimisation would be to randomly select a small subset and use that subset to find your k-means centroids.
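
A sketch of that subsampling shortcut in Python with scikit-learn (the subset size and cluster count are placeholders); scikit-learn's MiniBatchKMeans takes the same idea further by streaming mini-batches:

    import numpy as np
    from sklearn.cluster import KMeans

    X = np.random.rand(1_000_000, 2)      # the full, very large point set (placeholder)

    # Fit the centroids on a random subset, then label the full set cheaply.
    rng = np.random.default_rng(0)
    subset = X[rng.choice(len(X), size=10_000, replace=False)]
    km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(subset)
    labels = km.predict(X)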

answered by jilles de wit


Choosing the best K can be seen as a model-selection problem. One possible approach is Minimum Description Length, which in this context means: at one extreme, you store a table with all the points (K = N); at the other extreme, you have K = 1, and all the points are stored as their distances from a single centroid. This section from Introduction to Information Retrieval by Manning and Schutze suggests minimising the Akaike Information Criterion as a heuristic for an optimal K.
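
A rough sketch of that AIC heuristic in Python with scikit-learn, using the k-means residual sum of squares (inertia) as the distortion term in a criterion of the form RSS(K) + 2*M*K, where M is the dimensionality (the data and range of K are placeholders):

    import numpy as np
    from sklearn.cluster import KMeans

    X = np.random.rand(300, 2)   # placeholder data
    M = X.shape[1]               # dimensionality of the data

    # AIC-style criterion: distortion (RSS) plus a 2*M*K penalty per cluster;
    # pick the K that minimises it.
    def aic(k):
        rss = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
        return rss + 2 * M * k

    best_k = min(range(1, 16), key=aic)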

answered by Yuval F


This problem belongs to the "internal evaluation" class of clustering optimisation problems, for which the current state-of-the-art solution seems to be the Silhouette coefficient, as stated here

https://en.wikipedia.org/wiki/Cluster_analysis#Applications

and here:

https://en.wikipedia.org/wiki/Silhouette_(clustering) :

"silhouette plots and averages may be used to determine the natural number of clusters within a dataset"

scikit-learn provides a sample implementation of this methodology here: http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html

answered by user1767316