
Computing a moving maximum in BigQuery

Given a BigQuery table with some ordering and some numbers, I'd like to compute a "moving maximum" of the numbers -- similar to a moving average, but for a maximum instead. From "Trying to calculate EMA (exponential moving average) using BigQuery" it seems like the best way to do this is by using LEAD() and then doing the aggregation myself. ("Bigquery moving average" suggests essentially a CROSS JOIN, but that seems like it would be quite slow, given the size of the data.)

Ideally, I might be able to just return a single repeated field, rather than 20 individual fields, from the inner query, and then use normal aggregation over the repeated field, but I haven't figured out a way to do that, so I'm stuck with rolling my own aggregation. While this is easy enough for a sum or average, computing the max inline is pretty tricky, and I haven't figured out a good way to do it.
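
To illustrate the idea, something like the following is roughly what I'm imagining, sketched against the standard SQL dialect and assuming ARRAY_AGG can be used as an analytic function with a window frame there; I haven't found a way to express the equivalent in the dialect I'm using:

#standardSQL
SELECT word,
  -- aggregate normally over the repeated field
  (SELECT MAX(c) FROM UNNEST(counts) AS c) AS max_count
FROM (
  SELECT word,
    -- collect the current word_count plus the next two into one repeated field
    ARRAY_AGG(word_count) OVER (
      ORDER BY word ROWS BETWEEN CURRENT ROW AND 2 FOLLOWING) AS counts
  FROM `bigquery-public-data.samples.shakespeare`
  WHERE corpus = 'macbeth'
)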

(The examples below are of course somewhat contrived in order to use public datasets. They also do the rolling max over 3 elements, whereas I'd like to do it for around 20. I'm already generating the query programmatically, so making it short isn't a big issue.)

One approach is to do the following:

SELECT word,
  (CASE
    WHEN word_count >= word_count_1 AND word_count >= word_count_2 THEN word_count
    WHEN word_count_1 >= word_count AND word_count_1 >= word_count_2 THEN word_count_1
    ELSE word_count_2 END
    ) AS max_count
FROM (
  SELECT word, word_count,
    LEAD(word_count, 1) OVER (ORDER BY word) AS word_count_1,
    LEAD(word_count, 2) OVER (ORDER BY word) AS word_count_2
  FROM [publicdata:samples.shakespeare]
  WHERE corpus = 'macbeth'
)

This is O(n^2) in query size, since each WHEN branch compares one value against all the others, but it at least works. I could also do a nested chain of IFs, like this:

SELECT word,
  IF(word_count >= word_count_1,
    IF(word_count >= word_count_2, word_count, word_count_2),
    IF(word_count_1 >= word_count_2, word_count_1, word_count_2)) AS max_count
FROM ...

This is O(n) to evaluate, but the query size is exponential in n, so I don't think it's a good option; certainly it would surpass the BigQuery query size limit for n=20. I could also do n nested queries:

SELECT word,
  IF(word_count_2 >= max_count, word_count_2, max_count) AS max_count
FROM (
  SELECT word,
    IF(word_count_1 >= word_count, word_count_1, word_count) AS max_count
  FROM ...
)

It seems like doing 20 nested queries might not be a great idea performance-wise, though.

Is there a good way to do this kind of query? If not, am I correct that for n around 20, the first is the least bad?

asked Jul 24 '14 by Ben Kraft


1 Answer

A trick I'm using for rolling windows: CROSS JOIN with a table of numbers. In this case, to have a moving window of 3 years, I cross join with the numbers 0,1,2. Then you can create an id for each group (ending_at_year==year-i) and group by that.

SELECT ending_at_year, MAX(mean_temp) max_temp, COUNT(DISTINCT year) c
FROM 
(
 SELECT mean_temp, year-i ending_at_year, year
 FROM [publicdata:samples.gsod] a
 CROSS JOIN 
  (SELECT i FROM [fh-bigquery:public_dump.numbers_255] WHERE i<3) b
 WHERE station_number=722860
)
GROUP BY ending_at_year
HAVING c=3
ORDER BY ending_at_year;
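
As a side note, if the standard SQL dialect and its window frames are an option, a moving maximum can also be sketched directly with MAX() OVER, without the helper numbers table -- roughly like this, with the caveat that the frame counts rows rather than enforcing the "exactly 3 distinct years" condition that HAVING c=3 gives above:

#standardSQL
SELECT year,
  -- max of the yearly maxima over the current year and the two preceding rows
  MAX(yearly_max) OVER (
    ORDER BY year ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS max_temp
FROM (
  SELECT year, MAX(mean_temp) AS yearly_max
  FROM `bigquery-public-data.samples.gsod`
  WHERE station_number = 722860
  GROUP BY year
)
ORDER BY year;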
answered Sep 23 '22 by Felipe Hoffa