R: Group Similar Addresses Together

I have a 400,000-row file of manually entered addresses that need to be geocoded. There are a lot of different variations of the same addresses in the file, so it seems wasteful to spend API calls on the same address multiple times.

To cut down on this, I'd like to reduce these five rows:

    Address
    1 Main Street, Country A, World
    1 Main St, Country A, World
    1 Maine St, Country A, World
    2 Side Street, Country A, World
    2 Side St. Country A, World

down to two:

    Address
    1 Main Street, Country A, World
    2 Side Street, Country A, World

Using the stringdist package you can group the 'word' part of the strings together, but the string-matching algorithms don't differentiate between the numbers. This means they categorise two different house numbers on the same street as the same address.

To work around this, I came up with two approaches: first, manually separating the numbers and the addresses into separate columns using regular expressions and rejoining them afterwards. The problem is that with so many manually entered addresses there seem to be hundreds of different edge cases, and it gets unwieldy.

Using this answer on grouping and this one on converting words to numbers, I have a second approach that handles the edge cases but is incredibly expensive computationally. Is there a better third way of doing this?

library(gsubfn)
library(english)
library(qdap)
library(stringdist)
library(tidyverse)


similarGroups <- function(x, thresh = 0.8, method = "lv"){
  grp <- integer(length(x))
  Address <- x
  x <- tolower(x)
  for(i in seq_along(Address)){
    if(!is.na(Address[i])){
      # Similarity of address i to every other address
      sim <- stringdist::stringsim(x[i], x, method = method)
      # Assign all sufficiently similar, not-yet-grouped addresses to group i
      k <- which(sim > thresh & !is.na(Address))
      grp[k] <- i
      # Mark grouped addresses as NA so later iterations skip them
      is.na(Address) <- k
    }
  }
  grp
}

df <- data.frame(Address = c("1 Main Street, Country A, World", 
                             "1 Main St, Country A, World", 
                             "1 Maine St, Country A, World", 
                             "2 Side Street, Country A, World", 
                             "2 Side St. Country A, World"))

df1 <- df %>%
  # Converts Numbers into Letters
  mutate(Address = replace_number(Address),
         # Groups Similar Addresses Together
         Address = Address[similarGroups(Address, thresh = 0.8, method = "lv")],
         # Converts Letters back into Numbers
         Address = gsubfn("\\w+", setNames(as.list(1:1000), as.english(1:1000)), Address)
  ) %>%
  # Removes the Duplicates
  unique()
asked Sep 10 '20 by rsylatian
2 Answers

stringdist::stringsimmatrix lets you compare the similarity between strings:

library(dplyr)
library(stringr)
df <- data.frame(Address = c("1 Main Street, Country A, World", 
                             "1 Main St, Country A, World", 
                             "3 Main St, Country A, World", 
                             "2 Side Street, Country A, World", 
                             "2 Side St. PO 5678 Country A,  World"))
                             
stringdist::stringsimmatrix(df$Address)
          1         2         3         4         5
1 1.0000000 0.8709677 0.8387097 0.8387097 0.5161290
2 0.8518519 1.0000000 0.9629630 0.6666667 0.4444444
3 0.8148148 0.9629630 1.0000000 0.6666667 0.4444444
4 0.8387097 0.7096774 0.7096774 1.0000000 0.6774194
5 0.5833333 0.5833333 0.5833333 0.7222222 1.0000000

As you pointed out, in the example above, rows 2 and 3 are very similar by this criterion (96%), even though the house numbers differ.

You could add another criterion by extracting the numbers from the strings and comparing their similarity:

# Extract numbers
nums <- df %>% rowwise %>% mutate(numlist = str_extract_all(Address,"\\(?[0-9]+\\)?"))  

# Create numbers vectors pairs
numpairs <- expand.grid(nums$numlist, nums$numlist)

# Calculate Jaccard similarity between the number sets
numsim <- numpairs %>% rowwise %>% mutate(dist = length(intersect(Var1,Var2)) / length(union(Var1,Var2)))

# Return similarity matrix
matrix(numsim$dist,nrow(df))

     [,1] [,2] [,3] [,4] [,5]
[1,]    1    1    0  0.0  0.0
[2,]    1    1    0  0.0  0.0
[3,]    0    0    1  0.0  0.0
[4,]    0    0    0  1.0  0.5
[5,]    0    0    0  0.5  1.0

According to this new criterion, rows 2 and 3 are clearly different.

You could combine these two criteria to decide whether addresses are similar enough, for example:

matrix(numsim$dist,nrow(df)) * stringdist::stringsimmatrix(df$Address)

          1         2 3         4         5
1 1.0000000 0.8709677 0 0.0000000 0.0000000
2 0.8518519 1.0000000 0 0.0000000 0.0000000
3 0.0000000 0.0000000 1 0.0000000 0.0000000
4 0.0000000 0.0000000 0 1.0000000 0.3387097
5 0.0000000 0.0000000 0 0.3611111 1.0000000
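To actually collapse the rows, one option (a sketch, not part of the answer above) is to threshold this combined matrix and take, for each row, the first address passing the threshold as the group representative; 0.5 is an assumed cutoff:

```r
# Combined similarity matrix from above; 0.5 is an assumed cutoff
combined <- matrix(numsim$dist, nrow(df)) * stringdist::stringsimmatrix(df$Address)

# For each row, the first column above the cutoff is the group representative
grp <- apply(combined > 0.5, 1, which.max)
unique(df$Address[grp])
```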

To deal with many hundreds of thousands of addresses, expand.grid won't work on the whole dataset, but you could split / parallelize this by country or area in order to avoid an infeasible full cartesian product.
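A minimal sketch of that splitting idea, assuming the country can be extracted from the address text to serve as a blocking key:

```r
library(dplyr)
library(stringr)

# Block on a coarse key so each pairwise comparison stays small
blocks <- df %>%
  mutate(Country = str_extract(Address, "Country [A-Z]")) %>%
  group_split(Country)

# Similarity matrices are now computed per block, not over all rows at once
sims <- lapply(blocks, function(b) stringdist::stringsimmatrix(b$Address))
```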

answered Sep 24 '22 by Waldi

You might want to look into OpenRefine, or the refinr package for R, which is much less visual but still good. It has two functions, key_collision_merge and n_gram_merge, which take several parameters. If you have a dictionary of good addresses, you can pass it to key_collision_merge.
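A minimal sketch of the dictionary option, with made-up addresses (key_collision_merge snaps values whose fingerprint matches a dictionary entry to that entry):

```r
library(refinr)

messy <- c("1 Main Street, Country A, World",
           "1 main street, Country A, World")
# Assumed dictionary of clean, canonical addresses
good  <- c("1 Main Street, Country A, World")

key_collision_merge(messy, dict = good)
```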

It's probably good to make note of the abbreviations you see often (St., Blvd., Rd., etc.) and replace all of those. There are good tables of these abbreviations available, like https://www.pb.com/docs/US/pdf/SIS/Mail-Services/USPS-Suffix-Abbreviations.pdf.

Then:

library(refinr)
library(tidyverse)

df <- tibble(Address = c("1 Main Street, Country A, World", 
                         "1 Main St, Country A, World", 
                         "1 Maine St, Country A, World", 
                         "2 Side Street, Country A, World", 
                         "2 Side St. Country A, World",
                         "3 Side Rd. Country A, World",
                         "3 Side Road Country B World"))
df2 <- df %>%
  mutate(address_fix = str_replace_all(Address, "St\\.|St\\,|St\\s", "Street"),
         address_fix = str_replace_all(address_fix, "Rd\\.|Rd\\,|Rd\\s", "Road")) %>%
  mutate(address_merge = n_gram_merge(address_fix, numgram = 1))

df2$address_merge
[1] "1 Main Street Country A, World"
[2] "1 Main Street Country A, World"
[3] "1 Main Street Country A, World"
[4] "2 Side Street Country A, World"
[5] "2 Side Street Country A, World"
[6] "3 Side Road Country A, World"  
[7] "3 Side Road Country B World"   
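The two str_replace_all calls above could also be driven by a single named vector, which makes a longer abbreviation table easier to maintain (the entries here are just a sample):

```r
library(stringr)

# Sample abbreviation table; extend it from a USPS suffix list
abbrev <- c("\\bSt\\b\\.?"   = "Street",
            "\\bRd\\b\\.?"   = "Road",
            "\\bBlvd\\b\\.?" = "Boulevard",
            "\\bAve\\b\\.?"  = "Avenue")

str_replace_all("2 Side St. Country A, World", abbrev)
```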
answered Sep 24 '22 by ciakovx