
Efficient set intersection - decide whether the intersection is larger than k

I am faced with a problem where I have to calculate intersections between all pairs in a collection of sets. None of the sets are smaller than a small constant k, and I'm only interested in whether two sets have an intersection larger than k-1 elements or not. I do not need the actual intersections nor the exact size, only whether it's larger than k-1 or not. Is there some clever pre-processing trick or a neat set intersection algorithm that I could use to speed things up?

More info that can be useful to answer the question:

  • The sets represent maximal cliques in a large, undirected, sparse graph. The number of sets can be on the order of tens of thousands or more, but most of the sets are likely to be small.
  • The sets are already sorted: the members of each set are in increasing order. Effectively they are sorted lists - I receive them this way from an underlying library for maximal clique search.
  • Nothing is known about the distribution of elements in the sets (i.e. whether they are in tight clumps or not).
  • Most of the set intersections are likely to be empty, so the ideal solution would be a clever data structure that helps me cut down the number of set intersections I have to make.
Tamás asked Apr 27 '11


3 Answers

Consider a mapping with all sets of size k as the keys and corresponding values of lists of all sets from your collection that contain the key as a subset. Given this mapping, you don't need to perform any intersection tests: for each key, all pairs of sets from the list will have an intersection of size at least k. This approach can produce the same pair of sets more than once, so that will need to be checked.

The mapping is easy enough to calculate. For each set in the collection, calculate all the size-k subsets and append the original set to the list for that key set. But is this actually faster? In general, no. The performance of this approach will depend on the distribution of the sizes of the sets in the collection and the value of k. With d distinct elements in the sets, you could have as many as d choose k keys, which can be very large.

However, the basic idea is usable to reduce the number of intersections. Instead of using sets of size k, use smaller ones of fixed size q as the keys. The values are again lists of all sets that have the key as a subset. Now, test each pair of sets from the list for intersection. Thus, with q=1 you only test those pairs of sets that have at least one element in common, with q=2 you only test those pairs of sets that have at least two elements in common, and so on. The optimal value for q will depend on the distribution of sizes of the sets, I think.

For the sets in question, a good choice might be q=2. The keys are then just the edges of the graph, giving a predictable size to the mapping. Since most sets are expected to be disjoint, q=2 should eliminate a lot of comparisons without much additional overhead.
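
A rough sketch of this bucketing step in Python, assuming the sets arrive as sorted lists of vertex ids; the helper name candidate_pairs and the use of defaultdict are my own choices, not something prescribed by the answer:

from collections import defaultdict
from itertools import combinations

def candidate_pairs(sets, q=2):
    # Bucket each set under every one of its size-q subsets. With q=2 the
    # keys are exactly the edges of the graph. Only pairs of sets that land
    # in the same bucket can share q or more elements.
    buckets = defaultdict(list)
    for pos, s in enumerate(sets):
        for key in combinations(s, q):  # s is sorted, so each key is canonical
            buckets[key].append(pos)
    # Collect the candidate pairs; the set() collapses pairs produced by
    # several different buckets.
    pairs = set()
    for members in buckets.values():
        pairs.update(combinations(members, 2))
    return pairs

Each candidate pair still has to be checked with an ordinary intersection test to confirm it shares at least k elements; the bucketing only prunes the pairs that provably cannot.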

Michael J. Barber answered Oct 03 '22


One possible optimization, which is more effective the smaller the range of values contained in each set:

  • Create a list of all the sets, sorted by their kth-greatest element (this is easy to find, since you already have each set with its elements in order). Call this list L.
  • For any two sets A and B, their intersection cannot have as many as k elements in it if the kth-greatest element in A is less than the least element in B.
  • So, for each set in turn, calculate its intersection only with the sets in the relevant part of L.

You can use the same fact to exit early from computing the intersection of any two sets - if there are only n-1 elements left to compare in one of the sets, and the intersection so far contains at most k-n elements, then stop. The above procedure is simply this rule applied to all the sets in L at once, with n=k, at the point where we're looking at the least element of set B and the kth-greatest element of A.
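
A sketch of both ideas in Python, assuming the sets are sorted lists; share_at_least_k is a hypothetical helper name, and the pre-check on the kth-greatest elements is the same rule as in the list above:

def share_at_least_k(a, b, k):
    # a and b are sorted lists; return True iff they share at least k elements.
    if len(a) < k or len(b) < k:
        return False
    # If the k-th greatest element of one list is below the least element of
    # the other, at most k-1 elements can coincide, so give up immediately.
    if a[-k] < b[0] or b[-k] < a[0]:
        return False
    i = j = count = 0
    while i < len(a) and j < len(b):
        # Early exit: not enough elements remain to still reach k matches.
        if count + min(len(a) - i, len(b) - j) < k:
            return False
        if a[i] == b[j]:
            count += 1
            if count == k:
                return True
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return False

Sorting the collection by each set's kth-greatest element (the list L) lets you binary-search for the first set whose kth-greatest element is not below B's least element, so whole stretches of L can be skipped without running the test at all.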

Steve Jessop answered Sep 30 '22


The following strategy should be quite efficient. I've used variations of this for intersecting ascending sequences on a number of occasions.

First I assume that you have some sort of priority queue available (if not, rolling your own heap is pretty easy). And a fast key/value lookup (btree, hash, whatever).

With that said, here is pseudocode for an algorithm that should do what you want quite efficiently.

import heapq
from collections import defaultdict
from itertools import combinations

def large_intersections(sets, k):
    # Initial setup
    # sets: list of sorted lists. intersection_count maps a pair
    # (set_pos, set_pos) to the number of shared elements seen so far.
    intersection_count = defaultdict(int)
    # Priority queue of (element, index, set_pos) entries, ordered by element;
    # each entry points at the next unprocessed element of one set.
    p_queue = [(s[0], 0, set_pos) for set_pos, s in enumerate(sets) if s]
    heapq.heapify(p_queue)

    # helper function
    def process_intersections(current_sets):
        # Every pair of sets in current_sets contains the element just swept
        # past; sorting keeps the pair keys in canonical (small, large) order.
        for pair in combinations(sorted(current_sets), 2):
            intersection_count[pair] += 1

    # Find all intersections
    current_sets = []
    last_element = p_queue[0][0] if p_queue else None
    while p_queue:
        element, ind, set_pos = heapq.heappop(p_queue)
        if element != last_element:
            process_intersections(current_sets)
            last_element = element
            current_sets = []
        current_sets.append(set_pos)
        ind += 1
        if ind < len(sets[set_pos]):
            heapq.heappush(p_queue, (sets[set_pos][ind], ind, set_pos))
    # Don't forget the last one!
    process_intersections(current_sets)

    final_answer = []
    for pair, count in intersection_count.items():
        if k - 1 < count:
            final_answer.append(pair)
    return final_answer

The running time will be O(sum(sizes of sets) * log(number of sets) + count(times an element is in a pair of sets)). In particular, note that if two sets have no intersection, you never try to intersect them.
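
For a quick sanity check, calling the function above on a tiny hand-made input gives something like:

sets = [[1, 2, 3, 5], [2, 3, 5, 7], [4, 6, 8], [3, 5, 9]]
print(large_intersections(sets, k=2))
# [(0, 1), (0, 3), (1, 3)] -- the pairs of positions sharing at least 2 elements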

btilly answered Oct 04 '22