 

Correlating users with one another in SQL query

I am trying to correlate users with one another and assign a common ID for web site visitors.

I have a table (call it a) with the columns a.uuid, a.seen_time, a.ip_address, a.user_id, a.subdomain, and I am trying to derive an a.matched_id such that rows for the same IP address whose seen_time is within +/- 4hrs of the previous row (i.e. a continuous chain) are all assigned a single matched_id.

Note that for my purposes, the same IP on 2 different subdomains is NOT necessarily the same match, unless the rows share the same user ID.

Here is the basic process I would follow in a regular programming language (however I need to construct SQL):

  • Get the necessary rows of table a
  • For each row, if any row ever has a matching user_id (subdomain doesn't matter), assign them the same matched_id (all else being equal, let's use MIN(uuid))
  • Partition into subdomain sets.

    For each of those subdomain partitions:

    • Now partition into buckets of IP addresses where each row is < 4hrs from the seen_time before(/after) it (ie on a row-by-row basis)

      For each of those IP address partitions:

      • If any 1 item has a matched_id already, assign that to all. Otherwise, assign a new matched_id to all (using MIN(uuid)). Continue.
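The partition-into-time-buckets steps above are essentially the "gaps and islands" pattern, which can be sketched without recursion using window functions (LAG plus a running SUM), both of which Redshift supports. This is a rough, untested sketch against the table described above, covering only the bucketing steps (not the user_id step):

```sql
-- Flag each row that starts a new bucket (gap > 4 hrs to the previous
-- row for the same ip/subdomain), number the buckets with a running
-- sum of those flags, then take MIN(uuid) per bucket as a candidate
-- matched_id.
WITH flagged AS (
    SELECT uuid, ip_address, subdomain, seen_time,
           CASE WHEN seen_time - LAG(seen_time) OVER (
                    PARTITION BY ip_address, subdomain
                    ORDER BY seen_time) > INTERVAL '4 hours'
                THEN 1 ELSE 0 END AS starts_bucket
    FROM a
), buckets AS (
    SELECT *,
           SUM(starts_bucket) OVER (
               PARTITION BY ip_address, subdomain
               ORDER BY seen_time
               ROWS UNBOUNDED PRECEDING) AS bucket_no
    FROM flagged
)
SELECT uuid,
       MIN(uuid) OVER (PARTITION BY ip_address, subdomain, bucket_no) AS matched_id
FROM buckets;
```

The user_id rule would then be layered on top, e.g. by preferring a user-wide id over the per-bucket MIN(uuid) where one exists.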

I am using Amazon Redshift which is more or less queried the same as Postgres but with a few more limitations (if interested, see unsupported features and unsupported functions): Postgres/ANSI SQL answers accepted.

How can I construct this query in an efficient fashion?

What is the basic SQL process I must follow?

Thanks


-- UPDATE --

I have made some progress, shown below:

  • I don't know how efficient it is
  • I used discovery_time instead of seen_time referenced above, and the table name mydata instead of a, though it is sometimes aliased as a or b
  • It uses an MD5 hash instead of MIN(uuid), since getting that value would require another query - it doesn't matter much anyway
  • Key problem: it does not measure the +/- 4 hrs from the previous row; instead it treats the window as an absolute cut-off from the max discovery_time

Code:

--UPDATE mydata m SET matched_id = NULL; --for testing

WITH cte1 AS (
    --start with the max discovery time and go down from there
    --select the matched id if one already exists
    SELECT m.ip, m.subdomain, MAX(m.discovery_time) AS max_discovery_time, 
        CASE WHEN MIN(m.user_id) IS NOT NULL THEN MD5(MIN(m.user_id)) 
        ELSE MIN(m.matched_id) END AS known_matched_id
    FROM mydata m
    GROUP BY m.ip, m.subdomain

    ), cte2 AS (

    SELECT m.uuid, CASE WHEN c.known_matched_id IS NOT NULL THEN c.known_matched_id 
        ELSE MD5(CONCAT(c.ip, c.subdomain, c.max_discovery_time)) END AS matched_id
    FROM mydata m 
    --IP on different subdomains are not necessarily the same match
    RIGHT OUTER JOIN cte1 c ON CONCAT(c.ip, c.subdomain) = CONCAT(m.ip, m.subdomain) 
    WHERE m.discovery_time >= (c.max_discovery_time - INTERVAL '4 hours')
    --Does not work 'row by row' instead in terms of absolutes - need to make this recursive somehow,
    --but Redshift does not support recursive CTEs or user-defined functions
)

UPDATE mydata m
SET matched_id = c.matched_id
FROM cte2 c
WHERE c.uuid = m.uuid;

--view result for an example IP
SELECT m.discovery_time, m.ip, m.matched_id, m.uuid 
FROM mydata m
WHERE m.ip = '12.34.56.78'
ORDER BY m.ip, m.discovery_time;

And in case you are wanting to test, the following create script should do you:

CREATE TABLE mydata
(
  ip character varying(255),
  subdomain character varying(255),
  matched_id character varying(255),
  user_id character varying(255),
  uuid character varying(255) NOT NULL,
  discovery_time timestamp without time zone,
  CONSTRAINT pk_mydata PRIMARY KEY (uuid)
);

-- should all get the same matched_id in result, except the 1st
INSERT INTO mydata (ip, subdomain, matched_id, user_id, uuid, discovery_time) VALUES ('12.34.56.78', 'sub1', NULL, NULL, '222b5991-9780-11e3-9304-127b2ab15ea7', '2014-02-14 00:03:26');
INSERT INTO mydata (ip, subdomain, matched_id, user_id, uuid, discovery_time) VALUES ('12.34.56.78', 'sub1', NULL, NULL, '333b5991-9780-11e3-9304-127b2ab15ea7', '2014-02-16 22:22:26');
INSERT INTO mydata (ip, subdomain, matched_id, user_id, uuid, discovery_time) VALUES ('12.34.56.78', 'sub1', NULL, NULL, '379b641b-9782-11e3-9304-127b2ab15ea7', '2014-02-17 03:18:48');
INSERT INTO mydata (ip, subdomain, matched_id, user_id, uuid, discovery_time) VALUES ('12.34.56.78', 'sub1', NULL, NULL, 'ac0f6416-977e-11e3-9304-127b2ab15ea7', '2014-02-17 02:53:25');
INSERT INTO mydata (ip, subdomain, matched_id, user_id, uuid, discovery_time) VALUES ('12.34.56.78', 'sub1', NULL, NULL, '11fb5991-9780-11e3-9304-127b2ab15ea7', '2014-02-17 03:03:26');
INSERT INTO mydata (ip, subdomain, matched_id, user_id, uuid, discovery_time) VALUES ('12.34.56.78', 'sub1', NULL, NULL, '849d8d61-9781-11e3-9304-127b2ab15ea7', '2014-02-17 03:13:48');


The expected output is for all of those rows to be assigned the same matched_id except the first one (in INSERT order), since its time is well over 4hrs away from the next most recently seen time, and it has no user_id to match to any others.
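To check that reading of the sample data, the gap from each row to its predecessor can be inspected with LAG (supported in Redshift), using the mydata columns from the create script above:

```sql
SELECT uuid, discovery_time,
       discovery_time - LAG(discovery_time) OVER (
           PARTITION BY ip, subdomain
           ORDER BY discovery_time) AS gap_to_previous
FROM mydata
WHERE ip = '12.34.56.78'
ORDER BY discovery_time;
-- The 2014-02-14 row's successor arrives roughly 70 hours later, so it
-- clearly cannot chain to the rest; comparing each remaining gap with
-- the 4-hour threshold shows where any further bucket breaks fall.
```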


-- UPDATE 2 --

  • Still not much luck with continuous row-by-row results, although this version seems to behave that way if run repeatedly
  • I am interested in making it efficient
  • New columns min_time and max_time denote the min and max times in a 4hr set

Code:

-- Set user IDs that are the same 
UPDATE mydata AS m SET matched_id = matching.new_matched_id
FROM (
    SELECT a.user_id, MIN(a.uuid) AS new_matched_id FROM mydata a
    WHERE a.user_id IS NOT NULL
    GROUP BY a.user_id
) AS matching
WHERE m.matched_id IS NULL
AND m.user_id IS NOT NULL
AND matching.user_id = m.user_id;


-- Find rows +/- 4hrs of each other 
-- 1. Set min and max times for a 4hr set --
UPDATE mydata my SET min_time = matching.min_dist, max_time = matching.max_dist, matched_id = new_matched_id
FROM (
    -- mintime is approx
    SELECT a.uuid, MIN(b.matched_id) AS new_matched_id, max(COALESCE(b.min_time, b.discovery_time)) - interval '4 hour' AS min_dist, max(COALESCE(b.max_time, b.discovery_time)) + interval '4 hour' AS max_dist
    FROM mydata a
    JOIN mydata b
    ON (a.ip = b.ip AND a.subdomain = b.subdomain)
    GROUP BY a.uuid
    HAVING ABS(EXTRACT(EPOCH FROM max(COALESCE(a.min_time, b.discovery_time)) - a.discovery_time)/3600) <= 4
) matching
WHERE matching.uuid = my.uuid
AND min_time IS NULL;

-- 2. Set the matched id of all the +/- 4hr records --
UPDATE mydata m SET matched_id = new_matched_id, min_time = matching.min_time, max_time = matching.max_time
FROM (
    SELECT a.uuid, MAX(b.min_time) AS min_time, MAX(b.max_time) AS max_time, COALESCE(a.matched_id, MIN(b.uuid)) AS new_matched_id FROM mydata a
    INNER JOIN mydata b
    ON a.ip = b.ip AND a.subdomain = b.subdomain
    WHERE a.discovery_time >= b.min_time
    AND a.discovery_time <= b.max_time
    GROUP BY a.uuid
) matching
WHERE matching.uuid = m.uuid;
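As an alternative to repeated UPDATE passes, the row-by-row requirement can be expressed in a single SELECT using the gaps-and-islands pattern (LAG plus a running SUM, both supported in Redshift). This is a hypothetical sketch over mydata, not tested against the expected output above:

```sql
WITH gaps AS (
    -- A row starts a new bucket when it is more than 4 hours after the
    -- previous row for the same ip/subdomain.
    SELECT uuid, ip, subdomain, user_id, discovery_time,
           CASE WHEN EXTRACT(EPOCH FROM discovery_time
                    - LAG(discovery_time) OVER (
                          PARTITION BY ip, subdomain
                          ORDER BY discovery_time)) > 4 * 3600
                THEN 1 ELSE 0 END AS starts_bucket
    FROM mydata
), buckets AS (
    -- A running sum of the flags numbers the buckets per ip/subdomain.
    SELECT *,
           SUM(starts_bucket) OVER (
               PARTITION BY ip, subdomain
               ORDER BY discovery_time
               ROWS UNBOUNDED PRECEDING) AS bucket_no
    FROM gaps
)
SELECT b.uuid,
       -- If any row in the bucket carries a user-wide id, give it to the
       -- whole bucket; otherwise fall back to MIN(uuid) per bucket.
       COALESCE(MIN(u.user_matched_id) OVER (
                    PARTITION BY b.ip, b.subdomain, b.bucket_no),
                MIN(b.uuid) OVER (
                    PARTITION BY b.ip, b.subdomain, b.bucket_no)) AS matched_id
FROM buckets b
LEFT JOIN (SELECT user_id, MIN(uuid) AS user_matched_id
           FROM mydata
           WHERE user_id IS NOT NULL
           GROUP BY user_id) u
  ON b.user_id = u.user_id;
```

One caveat: if two different user_ids land in the same bucket, MIN picks one arbitrarily; the rules above do not say which should win.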
asked by Chris Riddell

1 Answer

I'm not sure I understand the question, but it seems from "whereby if the row IP address is +/- 4hrs of the last" that you need the "last" (most recent) time for each IP address (or IP + UUID, I'm not sure which). That you get from:

select ip_address, max(seen_time) from a group by ip_address

You could make a virtual table out of that or use a correlated subquery, see next.

I'm not a Postgres user, but there's surely a function that measures hours. As a rough sketch,

select * from a as A 
where exists (
    select 1 from a 
    where ip_address = A.ip_address
    and   UUID = A.UUID
    group by ip_address, UUID
    having extract(epoch from max(seen_time) - A.seen_time) / 3600 < 4
)

HTH.

answered by James K. Lowden


