 

Clustering tree structured data

Suppose we are given data in a semi-structured format, as a tree. As an example, the tree can be formed as a valid XML document or as a valid JSON document. You could imagine it being a Lisp-like S-expression or a (generalized) algebraic data type in Haskell or OCaml.

We are given a large number of "documents" in this tree structure. Our goal is to cluster documents which are similar. By clustering, we mean a way to divide the documents into j groups, such that the elements in each group look similar to one another.

I am sure there are papers out there which describe approaches, but since I am not well versed in the area of AI/clustering/machine learning, I want to ask somebody who is: what should I look for, and where should I dig?

My current approach is something like this:

  • I want to convert each document into an N-dimensional vector, set up for k-means clustering.
  • To do this, I recursively walk the document tree and compute a vector for each level. At a tree vertex, I recur on all subvertices and then sum their vectors. Whenever I recur, a power factor is applied, so vertices matter less and less the farther down the tree I go. The document's final vector is the one computed at the root of the tree.
  • At a tree leaf, I apply a function, chosen according to the leaf's data type, which maps the data to a vector.
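The recursion described above can be sketched as follows. This is a minimal Python sketch, not the author's actual implementation; the dimensionality, the decay factor, and the hash-based leaf embedding are all illustrative assumptions:

```python
import numpy as np

DIM = 8        # dimensionality of the embedding (arbitrary choice)
DECAY = 0.5    # power factor applied at each level of recursion (assumption)

def leaf_vector(data):
    """Map a leaf's data to a fixed-size vector.

    Here we hash the data into a seed and draw a deterministic random
    vector; a real implementation would use a type-dependent embedding.
    """
    rng = np.random.default_rng(abs(hash(data)) % (2**32))
    return rng.standard_normal(DIM)

def tree_vector(node):
    """Recursively embed a tree: internal nodes sum their children's
    vectors, damped by DECAY so deeper levels matter less; leaves go
    through leaf_vector."""
    if isinstance(node, (list, tuple)):          # internal node
        return DECAY * sum(tree_vector(child) for child in node)
    return leaf_vector(node)                     # leaf

# Example: two documents as nested tuples, differing only in one leaf
doc_a = ("root", ("f", "g"), "h")
doc_b = ("root", ("f", "g"), "i")
va, vb = tree_vector(doc_a), tree_vector(doc_b)
```

The resulting vectors can be fed directly to k-means; note that because of the damping, the difference between `doc_a` and `doc_b` contributes only a small component to the distance.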

But surely, there are better approaches. One weakness of my approach is that it will only similarity-cluster trees whose top structures resemble each other. If the similarity is present, but occurs farther down the tree, then my approach probably won't work very well.

I imagine there are solutions from full-text search as well, but I do want to take advantage of the semi-structure present in the data.

Distance function

As suggested, one needs to define a distance function between documents. Without such a function, we can't apply a clustering algorithm.

In fact, it may be that the question is about that very distance function and examples thereof. I want documents where elements near the root are the same to cluster close to each other. The farther down the tree we go, the less it matters.

The take-one-step-back viewpoint:

I want to cluster stack traces from programs. These are well-formed tree structures, where the functions closest to the root are the innermost functions, the ones that fail. I need a decent distance function between stack traces that probably occurred because the same event happened in the code.
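A depth-weighted distance along those lines might look like the sketch below. The decay constant and the representation of a trace as a root-first (innermost-frame-first) list of function names are assumptions for illustration:

```python
def trace_distance(trace_a, trace_b, decay=0.5):
    """Distance between two stack traces, given as lists of frames with
    the innermost (failing) frame first. A mismatch near the root costs
    the most; each step down the trace is damped by `decay`, so deeper
    frames matter less. The result is normalized to [0, 1]."""
    n = max(len(trace_a), len(trace_b))
    total = sum(decay**i for i in range(n))       # maximum possible cost
    cost = 0.0
    for i in range(n):
        fa = trace_a[i] if i < len(trace_a) else None
        fb = trace_b[i] if i < len(trace_b) else None
        if fa != fb:                              # missing frame counts
            cost += decay**i                      # as a mismatch too
    return cost / total if total else 0.0

# Two traces that agree on the innermost frames and diverge deeper
t1 = ["divide", "compute", "main"]
t2 = ["divide", "compute", "run", "main"]
```

With a pairwise distance like this, one can use algorithms that only need a distance matrix (hierarchical clustering, k-medoids) rather than k-means, which requires an explicit vector space.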

asked Dec 12 '10 14:12 by I GIVE CRAP ANSWERS



2 Answers

Given the nature of your problem (stack traces), I would reduce it to a string matching problem. Representing a stack trace as a tree is a bit of overhead: each element in the stack trace has exactly one parent, so it is really a linear sequence.

If string matching is indeed more appropriate for your problem, you can run through your data, map each node onto a hash, and create the n-grams for each 'document'.

Example:

Mapping:

  • Exception A -> 0
  • Exception B -> 1
  • Exception C -> 2
  • Exception D -> 3

Doc A: 0-1-2
Doc B: 1-2-3

2-grams for doc A: X0, 01, 12, 2X

2-grams for doc B: X1, 12, 23, 3X

Using the n-grams, you will be able to cluster similar sequences of events regardless of the root node (in this example, the shared 2-gram 12).
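The example above could be implemented roughly as follows. The padding symbol `X` and the 2-gram construction follow the answer's notation; the choice of Jaccard similarity as the set-overlap measure is an assumption:

```python
def frame_ngrams(frames, n=2):
    """Build the set of n-grams over a sequence of hashed frames, padding
    both ends with 'X' so the first and last frames also take part in
    boundary n-grams (as in the X0 / 2X grams above)."""
    padded = ["X"] * (n - 1) + list(frames) + ["X"] * (n - 1)
    return {tuple(padded[i:i + n]) for i in range(len(padded) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two n-gram sets; 1 - jaccard(a, b)
    works as a distance for clustering."""
    return len(a & b) / len(a | b) if a | b else 1.0

doc_a = frame_ngrams(["0", "1", "2"])   # {X0, 01, 12, 2X}
doc_b = frame_ngrams(["1", "2", "3"])   # {X1, 12, 23, 3X}
sim = jaccard(doc_a, doc_b)             # one shared 2-gram (12) out of 7
```

The n-gram sets can also be one-hot encoded into vectors if a k-means-style algorithm is preferred over a distance-matrix method.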

However, if you are still convinced that you need trees, instead of strings, you must consider the following: finding similarities for trees is a lot more complex. You will want to find similar subtrees, with subtrees that are similar over a greater depth resulting in a better similarity score. For this purpose, you will need to discover closed subtrees (subtrees that are the base subtrees for trees that extend it). What you don't want is a data collection containing subtrees that are very rare, or that are present in each document you are processing (which you will get if you do not look for frequent patterns).

Here are some pointers:

  • http://portal.acm.org/citation.cfm?id=1227182
  • http://www.springerlink.com/content/yu0bajqnn4cvh9w9/

Once you have your frequent subtrees, you can use them in the same way as you would use the n-grams for clustering.
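Assuming a frequent-subtree miner is available (for instance, from the algorithms in the linked papers), its output can be turned into clustering features in the same spirit as the n-grams. In this sketch, `frequent_subtrees` and `contains` are hypothetical placeholders for the miner's output and a tree-containment test; the toy example at the bottom uses strings and substring containment purely for illustration:

```python
import numpy as np

def feature_vector(doc, frequent_subtrees, contains):
    """One binary feature per frequent subtree: 1.0 if the document
    contains the subtree, 0.0 otherwise. The resulting vectors can be
    fed to k-means exactly as n-gram features would be."""
    return np.array([1.0 if contains(doc, t) else 0.0
                     for t in frequent_subtrees])

# Toy illustration: 'documents' are strings, 'subtrees' are substrings,
# and containment is plain substring membership.
docs = ["ab", "abc", "xyz"]
patterns = ["ab", "bc", "xy"]
X = np.stack([feature_vector(d, patterns, lambda d, t: t in d)
              for d in docs])
```

Each row of `X` is one document's feature vector; rows that share many frequent subtrees end up close together under Euclidean (or cosine) distance.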

answered Nov 03 '22 01:11 by jvdbogae


Here you may find a paper that seems closely related to your problem.

From the abstract:

This thesis presents Ixor, a system which collects, stores, and analyzes stack traces in distributed Java systems. When combined with third-party clustering software and adaptive cluster filtering, unusual executions can be identified.

answered Nov 03 '22 01:11 by Dr. belisarius