Related, preceding question:
Select a random entry from a group after grouping by a value (not column)?
My current query looks like this:
WITH
points AS (
    SELECT unnest(array_of_points) AS p
),

gtps AS (
    SELECT DISTINCT ON (points.p)
           points.p, m.groundtruth
    FROM   measurement m, points
    WHERE  st_distance(m.groundtruth, points.p) < distance
    ORDER  BY points.p, RANDOM()
)

SELECT DISTINCT ON (gtps.p, gtps.groundtruth, m.anchor_id)
       m.id, m.anchor_id, gtps.groundtruth, gtps.p
FROM   measurement m, gtps
ORDER  BY gtps.p, gtps.groundtruth, m.anchor_id, RANDOM()
Semantics:

There are two input values:
- array_of_points
- distance

First paragraph (lines 1-6, the points CTE): turn the array_of_points input into the points table, one row per point (see the short sketch below).

Second paragraph (lines 8-14, the gtps CTE): for each point in the points table, get a random(!) groundtruth point from the measurement table that has a distance < distance.

Third paragraph (lines 16-19, the final SELECT): for each groundtruth value inside the gtps table, get all anchor_id values; if an anchor_id value is not unique, choose a random one.

Output: id, anchor_id, groundtruth, p (the input value from array_of_points).
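To make the first step concrete, here is a minimal sketch of what the points CTE produces; the two WKT literals are made up for illustration and stand in for array_of_points (PostGIS is required for the geometry cast):

SELECT unnest(ARRAY['POINT(1 0)','POINT(4 8)']::geometry[]) AS p;
-- one row per array element: this is the temporary "points" table
-- that the rest of the query joins against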
Example table:
id | anchor_id | groundtruth | data
-----------------------------------
1 | 1 | POINT(1 4) | ...
2 | 3 | POINT(1 4) | ...
3 | 8 | POINT(1 4) | ...
4 | 6 | POINT(1 4) | ...
-----------------------------------
5 | 2 | POINT(3 2) | ...
6 | 4 | POINT(3 2) | ...
-----------------------------------
7 | 1 | POINT(4 3) | ...
8 | 1 | POINT(4 3) | ...
9 | 6 | POINT(4 3) | ...
10 | 7 | POINT(4 3) | ...
11 | 3 | POINT(4 3) | ...
-----------------------------------
12 | 1 | POINT(6 2) | ...
13 | 5 | POINT(6 2) | ...
Example result:
id | anchor_id | groundtruth | p
-----------------------------------------
1 | 1 | POINT(1 4) | POINT(1 0)
2 | 3 | POINT(1 4) | POINT(1 0)
4 | 6 | POINT(1 4) | POINT(1 0)
3 | 8 | POINT(1 4) | POINT(1 0)
5 | 2 | POINT(3 2) | POINT(2 2)
6 | 4 | POINT(3 2) | POINT(2 2)
1 | 1 | POINT(1 4) | POINT(4 8)
2 | 3 | POINT(1 4) | POINT(4 8)
4 | 6 | POINT(1 4) | POINT(4 8)
3 | 8 | POINT(1 4) | POINT(4 8)
12 | 1 | POINT(6 2) | POINT(7 3)
13 | 5 | POINT(6 2) | POINT(7 3)
7 | 1 | POINT(4 3) | POINT(9 1)
11 | 3 | POINT(4 3) | POINT(9 1)
9 | 6 | POINT(4 3) | POINT(9 1)
10 | 7 | POINT(4 3) | POINT(9 1)
As you can see:
- Every input point can be close to several groundtruth values.
- If several result rows belong to one input point, their groundtruth values must all be equal.
- The result contains every anchor_id for that groundtruth.
- Different input points can end up with the same groundtruth value.
- If an anchor_id occurs more than once for a groundtruth (as in the example table), only one randomly chosen row is kept, so the result contains no duplicate anchor_ids.

Benchmarks (for two input values):
EXPLAIN VERBOSE:
Unique (cost=11119.32..11348.33 rows=18 width=72)
Output: m.id, m.anchor_id, gtps.groundtruth, gtps.p, (random())
CTE points
-> Result (cost=0.00..0.01 rows=1 width=0)
Output: unnest('{0101000000EE7C3F355EF24F4019390B7BDA011940:01010000003480B74082FA44402CD49AE61D173C40}'::geometry[])
CTE gtps
-> Unique (cost=7659.95..7698.12 rows=1 width=160)
Output: points.p, m.groundtruth, (random())
-> Sort (cost=7659.95..7679.04 rows=7634 width=160)
Output: points.p, m.groundtruth, (random())
Sort Key: points.p, (random())
-> Nested Loop (cost=0.00..6565.63 rows=7634 width=160)
Output: points.p, m.groundtruth, random()
Join Filter: (st_distance(m.groundtruth, points.p) < m.distance)
-> CTE Scan on points (cost=0.00..0.02 rows=1 width=32)
Output: points.p
-> Seq Scan on public.measurement m (cost=0.00..535.01 rows=22901 width=132)
Output: m.id, m.anchor_id, m.tag_node_id, m.experiment_id, m.run_id, m.anchor_node_id, m.groundtruth, m.distance, m.distance_error, m.distance_truth, m."timestamp"
-> Sort (cost=3421.18..3478.43 rows=22901 width=72)
Output: m.id, m.anchor_id, gtps.groundtruth, gtps.p, (random())
Sort Key: gtps.p, gtps.groundtruth, m.anchor_id, (random())
-> Nested Loop (cost=0.00..821.29 rows=22901 width=72)
Output: m.id, m.anchor_id, gtps.groundtruth, gtps.p, random()
-> CTE Scan on gtps (cost=0.00..0.02 rows=1 width=64)
Output: gtps.p, gtps.groundtruth
-> Seq Scan on public.measurement m (cost=0.00..535.01 rows=22901 width=8)
Output: m.id, m.anchor_id, m.tag_node_id, m.experiment_id, m.run_id, m.anchor_node_id, m.groundtruth, m.distance, m.distance_error, m.distance_truth, m."timestamp"
EXPLAIN ANALYZE:
Unique (cost=11119.32..11348.33 rows=18 width=72) (actual time=548.991..657.992 rows=36 loops=1)
CTE points
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.004..0.011 rows=2 loops=1)
CTE gtps
-> Unique (cost=7659.95..7698.12 rows=1 width=160) (actual time=133.416..146.745 rows=2 loops=1)
-> Sort (cost=7659.95..7679.04 rows=7634 width=160) (actual time=133.415..142.255 rows=15683 loops=1)
Sort Key: points.p, (random())
Sort Method: external merge Disk: 1248kB
-> Nested Loop (cost=0.00..6565.63 rows=7634 width=160) (actual time=0.045..46.670 rows=15683 loops=1)
Join Filter: (st_distance(m.groundtruth, points.p) < m.distance)
-> CTE Scan on points (cost=0.00..0.02 rows=1 width=32) (actual time=0.007..0.020 rows=2 loops=1)
-> Seq Scan on measurement m (cost=0.00..535.01 rows=22901 width=132) (actual time=0.013..3.902 rows=22901 loops=2)
-> Sort (cost=3421.18..3478.43 rows=22901 width=72) (actual time=548.989..631.323 rows=45802 loops=1)
Sort Key: gtps.p, gtps.groundtruth, m.anchor_id, (random())
Sort Method: external merge Disk: 4008kB
-> Nested Loop (cost=0.00..821.29 rows=22901 width=72) (actual time=133.449..166.294 rows=45802 loops=1)
-> CTE Scan on gtps (cost=0.00..0.02 rows=1 width=64) (actual time=133.420..146.753 rows=2 loops=1)
-> Seq Scan on measurement m (cost=0.00..535.01 rows=22901 width=8) (actual time=0.014..4.409 rows=22901 loops=2)
Total runtime: 834.626 ms
When running live, this will be executed with about 100-1000 input values, so right now it would take 35 to 350 seconds, which is far too much.
I already tried removing the RANDOM() calls. This decreases the runtime (for 2 input values) from about 670 ms to about 530 ms, so RANDOM() isn't the main cost at the moment.
It's also possible to run 2 or 3 separate queries and do some parts in software (it's running on a Ruby on Rails server) if that's easier/faster, for example the random selection.
SELECT m.groundtruth, ps.p, ARRAY_AGG(m.anchor_id), ARRAY_AGG(m.id)
FROM   measurement m
JOIN  (SELECT unnest(point_array) AS p) AS ps
       ON ST_DWithin(ps.p, m.groundtruth, distance)
GROUP BY m.groundtruth, ps.p
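To see the shape of this output, here is a hedged, self-contained sketch built only from the example rows for groundtruth POINT(4 3); the VALUES list stands in for the measurement table, and the input point POINT(9 1) is taken from the example result above:

SELECT groundtruth,
       'POINT(9 1)'         AS p,          -- matching input point from the example
       ARRAY_AGG(anchor_id) AS anchor_ids, -- e.g. {1,1,6,7,3}: anchor_id 1 is duplicated
       ARRAY_AGG(id)        AS ids         -- e.g. {7,8,9,10,11}: positions line up with anchor_ids
FROM  (VALUES (7,1,'POINT(4 3)'), (8,1,'POINT(4 3)'), (9,6,'POINT(4 3)'),
              (10,7,'POINT(4 3)'), (11,3,'POINT(4 3)')) AS m(id, anchor_id, groundtruth)
GROUP BY groundtruth;
-- array element order is not guaranteed without an ORDER BY inside the aggregates

The duplicated anchor_id and the position-matched id array are exactly what the filters described next have to deal with.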
With this query it is extremely fast (15 ms), but a lot is still missing:
- Only one random groundtruth should be kept per ps.p.
- For each anchor_id in the array that appears more than once: keep a random one and delete all others. This also means removing the corresponding id from the id array for every deleted anchor_id.
- It would also be nice if anchor_id and id could be stored inside an array of tuples, for example: {[4,1],[6,3],[4,2],[8,5],[4,4]} (constraints: every tuple is unique, every id (== 2nd value in the example) is unique, anchor_ids are not unique). This example shows the result without the filters that still must be applied; with the filters applied it would look like this: {[6,3],[4,4],[8,5]}.
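As a side note on representation: one way to hold such tuples in PostgreSQL is a plain two-dimensional integer array (which is what the solution further down builds); the literal below is just the filtered example rewritten in Postgres array syntax:

SELECT '{{6,3},{4,4},{8,5}}'::int[] AS ids;  -- each inner array is one (anchor_id, id) pair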
SELECT DISTINCT ON (ps.p)
       m.groundtruth, ps.p, ARRAY_AGG(m.anchor_id), ARRAY_AGG(m.id)
FROM   measurement m
JOIN  (SELECT unnest(point_array) AS p) AS ps
       ON ST_DWithin(ps.p, m.groundtruth, distance)
GROUP BY ps.p, m.groundtruth
ORDER BY ps.p, RANDOM()
This now gives pretty nice results and is still very fast: 16 ms.
There's just one thing left to do: ARRAY_AGG(m.anchor_id) is already randomized, but it can still contain duplicate anchor_id values, and whatever filtering removes them also has to be applied to ARRAY_AGG(m.id). This means: if the filter keeps, for example, indices 1, 4 and 7 of the anchor_id array, then it also has to keep indices 1, 4 and 7 of the id array (and of course delete all the others).

It would also be nice if anchor_id and id could be stored inside an array of tuples.
I suggest you create a two-dimensional array for that. That's easier to handle than an ARRAY of record. Standard array_agg() cannot aggregate multi-dimensional arrays, but you can write your own aggregate function rather easily for that:
CREATE AGGREGATE array_agg_mult (anyarray) (
    SFUNC    = array_cat
  , STYPE    = anyarray
  , INITCOND = '{}'
);
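A quick sanity check of the custom aggregate with made-up input arrays; each input row contributes one sub-array, so the result is a 2-dimensional array:

SELECT array_agg_mult(ARRAY[a]) AS arr2d
FROM  (VALUES ('{1,7}'::int[]), ('{3,11}'), ('{6,9}')) t(a);
-- arr2d = {{1,7},{3,11},{6,9}}  (row order is not guaranteed without an ORDER BY)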
Read the explanation in this related answer:
Selecting data into a Postgres array
For each anchor_id in the array that appears more than once: keep a random one and delete all others. This also means removing the corresponding id from the id array for every deleted anchor_id.
SELECT DISTINCT ON (p)
       p, groundtruth, array_agg_mult(ARRAY[ARRAY[anchor_id, id]]) AS ids
FROM (
   SELECT DISTINCT ON (ps.p, m.groundtruth, m.anchor_id)
          ps.p, m.groundtruth, m.anchor_id, m.id
   FROM   (SELECT unnest(point_array) AS p) AS ps
   JOIN   measurement m ON ST_DWithin(ps.p, m.groundtruth, distance)
   ORDER  BY ps.p, m.groundtruth, m.anchor_id, random()
   ) x
GROUP BY p, groundtruth
ORDER BY p, random();
Subquery x gets distinct anchor_id per (p, groundtruth) and picks a random row if there are multiple peers. This way the connection anchor_id - id stays intact.
The outer query aggregates a 2-dimensional array like you wished for, ordered by anchor_id. If you want anchor_id ordered randomly, use random() once more:

array_agg_mult(ARRAY[ARRAY[anchor_id, id]] ORDER BY random())
And finally, the DISTINCT ON picks only 1 groundtruth per p, randomly again.
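For reference, a sketch of the complete query with the random aggregate order folded in; this is the query from above, unchanged except for the ORDER BY inside array_agg_mult():

SELECT DISTINCT ON (p)
       p, groundtruth, array_agg_mult(ARRAY[ARRAY[anchor_id, id]] ORDER BY random()) AS ids
FROM (
   SELECT DISTINCT ON (ps.p, m.groundtruth, m.anchor_id)
          ps.p, m.groundtruth, m.anchor_id, m.id
   FROM   (SELECT unnest(point_array) AS p) AS ps
   JOIN   measurement m ON ST_DWithin(ps.p, m.groundtruth, distance)
   ORDER  BY ps.p, m.groundtruth, m.anchor_id, random()
   ) x
GROUP BY p, groundtruth
ORDER BY p, random();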