
Efficiently Computing Significant Terms in SQL

I was introduced to the ElasticSearch significant terms aggregation a while ago and was positively surprised by how good and relevant this metric turns out to be. For those not familiar with it, it's quite a simple concept: for a given query (the foreground set), each value of a given property is scored by how statistically significant its frequency is compared to the background set.

For example, if we were querying for the most significant crime types in the British Transport Police:

C = 5,064,554 -- total number of crimes
T =    66,799 -- total number of bicycle thefts
S =    47,347 -- total number of crimes in British Transport Police
I =     3,640 -- total number of bicycle thefts in British Transport Police

Ordinarily, bicycle thefts represent only 1% of crimes (66,799/5,064,554), but for the British Transport Police, who handle crime on railways and stations, 7% of crimes (3,640/47,347) are bicycle thefts. This is a significant seven-fold increase in frequency.

The significance score for "bicycle theft" would be [(I/S) - (T/C)] * [(I/S) / (T/C)] = (0.0769 - 0.0132) * (0.0769 / 0.0132) ≈ 0.371

Where:

  • C is the number of all documents in the collection
  • S is the number of documents matching the query
  • T is the number of documents with the specific term
  • I is the number of documents that intersect both S and T
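
To make the arithmetic concrete, here is a minimal Python sketch of that score, fed with the crime numbers above (the function name is mine, not ElasticSearch's):

# Significance score from the formula above, applied to "bicycle theft".
def significance(C, T, S, I):
    fg = I / S  # frequency of the term in the foreground set (~7.7%)
    bg = T / C  # frequency of the term in the background set (~1.3%)
    return (fg - bg) * (fg / bg)

print(significance(C=5_064_554, T=66_799, S=47_347, I=3_640))  # ~0.371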

For practical reasons (the sheer amount of data I have and huge ElasticSearch memory requirements), I'm looking to implement the significant terms aggregation in SQL or directly in code.
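
For context, the naive version is easy to express in plain SQL when the raw events sit in a single table. Below is a minimal sketch using Python and SQLite, with a hypothetical crimes(force, crime_type) table (all table and column names are made up for illustration):

# Naive significant-terms query in plain SQL (SQLite via Python).
# The table crimes(force TEXT, crime_type TEXT) is hypothetical.
import sqlite3

QUERY = """
WITH
  total AS (SELECT COUNT(*) AS c FROM crimes),                       -- C
  fg    AS (SELECT COUNT(*) AS s FROM crimes WHERE force = :force),  -- S
  t     AS (SELECT crime_type, COUNT(*) AS t                         -- T per term
            FROM crimes GROUP BY crime_type),
  i     AS (SELECT crime_type, COUNT(*) AS i                         -- I per term
            FROM crimes WHERE force = :force GROUP BY crime_type)
SELECT i.crime_type,
       (1.0 * i.i / fg.s - 1.0 * t.t / total.c)
     * ((1.0 * i.i / fg.s) / (1.0 * t.t / total.c)) AS score
FROM i
JOIN t USING (crime_type)
CROSS JOIN fg
CROSS JOIN total
ORDER BY score DESC
"""

conn = sqlite3.connect("crimes.db")
for crime_type, score in conn.execute(QUERY, {"force": "British Transport Police"}):
    print(f"{crime_type}: {score:.3f}")

That works, but it is exactly these full-table counts that become the bottleneck at scale, which motivates the ideas below.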

I've been looking at ways to optimize this kind of query, specifically by decreasing the memory requirements and increasing the query speed at the expense of some error margin - but so far I haven't cracked it. It seems to me that:

  • The variables C and S are easily cacheable or queryable.
  • The variable T could be derived from a Count-Min Sketch instead of querying the database (see the toy sketch after this list).
  • The variable I, however, seems impossible to derive from the Count-Min Sketch built for T.
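
To illustrate the Count-Min Sketch idea from the list above, here is a toy Python implementation (my own, purely illustrative); it can answer T lookups from a fixed-size table, but, as the last bullet notes, it says nothing about I:

# Toy Count-Min Sketch: estimates term counts (T) in fixed memory.
# Overestimates are possible, underestimates are not.
import hashlib

class CountMinSketch:
    def __init__(self, width=2048, depth=5):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        for row in range(self.depth):
            # One independent hash per row, derived via a per-row salt.
            digest = hashlib.blake2b(item.encode(), salt=row.to_bytes(8, "little")).digest()
            yield row, int.from_bytes(digest[:8], "little") % self.width

    def add(self, item, count=1):
        for row, col in self._buckets(item):
            self.table[row][col] += count

    def estimate(self, item):
        # The minimum over rows bounds the collision error.
        return min(self.table[row][col] for row, col in self._buckets(item))

cms = CountMinSketch()
cms.add("bicycle theft", 66_799)
print(cms.estimate("bicycle theft"))  # >= 66,799; close to exact with enough width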

I was also looking at MinHash, but from the description it seems that it can't be applied here.

Does anyone know of a clever algorithm or data structure that helps tackle this problem?

asked Jun 02 '16 by Alix Axel



1 Answer

I doubt a SQL implementation will be faster. The values for C and T are maintained ahead of time by Lucene. S is a simple count derived from the query results, and I is looked up using O(1) data structures. The main cost is the many T lookups, one for each term observed in the chosen field. Using min_doc_count typically helps drastically reduce the number of these lookups.
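
For instance, a request body along these lines would cap the number of T lookups via min_doc_count (index and field names are invented for the crime example; shown as a Python dict for consistency):

# Illustrative significant_terms request with min_doc_count.
# The "force" and "crime_type" fields are made-up names for this example.
body = {
    "size": 0,
    "query": {"term": {"force": "British Transport Police"}},
    "aggs": {
        "significant_crime_types": {
            "significant_terms": {
                "field": "crime_type",
                "min_doc_count": 10,  # ignore terms with fewer foreground hits
            }
        }
    },
}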

"For practical reasons (the sheer amount of data I have and huge ElasticSearch memory requirements)"

Have you looked into using doc values to manage elasticsearch memory better? See https://www.elastic.co/blog/support-in-the-wild-my-biggest-elasticsearch-problem-at-scale

answered Sep 21 '22 by MarkH