I have a table called 'ts' in postgres that looks like this:
dev -- numeric device id
ts -- Unix epoch timestamp
key -- string (only interested in records where key is 'u')
val -- int representing uptime in ms
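For reference, the schema might look like the following (a sketch only; the question does not state column types, so these are assumptions):

```sql
create table ts (
    dev  bigint  not null,  -- numeric device id
    ts   bigint  not null,  -- Unix epoch timestamp
    key  text    not null,  -- record type; 'u' for uptime
    val  integer            -- uptime in ms
);
```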
I log process uptime to this table roughly every 30 seconds. The result is an ever-increasing val that periodically resets to 0 when the process restarts, creating a sawtooth-like pattern. Note that the actual value logged may not be exactly 0, as logging may not happen immediately upon restart.
What I'd like to do is clean up the table a bit by periodically culling the values that do not represent the pre-reset max uptime. There are multiple devices mixed into the same table, so that max uptime would have to be grouped by dev.
Example:
dev  ts      key  val
1    100000  'u'  50      -- boring
1    130100  'u'  30050   -- delete this
1    160100  'u'  60050   -- >> keep this one
1    190200  'u'  100     -- this record dies
1    220200  'u'  30100   -- >> keep this one too
1    250200  'u'  300
I want a query to pick out all the records except those I've marked above, which are uninteresting and can be deleted.
The culling will be run periodically in a batch process.
If you just want to keep local maxima, you can use lead() and lag():

select t.*
from (select t.*,
             lead(val) over (partition by dev order by ts) as next_val,
             lag(val) over (partition by dev order by ts) as prev_val
      from ts t
      where key = 'u'
     ) t
where val > coalesce(prev_val, -1)
  and val > coalesce(next_val, -1);

The coalesce() handles the first and last row in each dev partition, where lag() and lead() return NULL; without it, the NULL comparison would silently exclude those rows.
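For the periodic batch cull, one way to turn this into a DELETE (a sketch for Postgres, using the system column ctid to identify rows since the table has no stated primary key; it assumes no concurrent rewrites of the table while the batch runs) is to invert the condition:

```sql
delete from ts
where ctid in (
    select ctid
    from (select ctid, val,
                 lead(val) over (partition by dev order by ts) as next_val,
                 lag(val) over (partition by dev order by ts) as prev_val
          from ts
          where key = 'u'
         ) t
    -- cull everything that is not a local maximum
    where not (val > coalesce(prev_val, -1)
           and val > coalesce(next_val, -1))
);
```

Since the newest sample per device is part of a run still in progress, you may also want to restrict the subquery to rows older than some cutoff (e.g. ts below a chosen threshold) so the current sawtooth is left alone until it completes.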