So I've got a data file (semicolon-separated) that has a lot of detail and some incomplete rows (which make Access and SQL choke). It's a county-level data set broken into segments, sub-segments, and sub-sub-segments (for a total of ~200 factors) covering 40 years. In short, it's huge, and it's not going to fit into memory if I simply try to read it.
So my question is this: given that I want all the counties but only a single year (and just the highest level of segment, leading to about 100,000 rows in the end), what would be the best way to go about getting this rollup into R?
Currently I'm trying to chop out irrelevant years with Python, getting around the filesize limit by reading and operating on one line at a time, but I'd prefer an R-only solution (CRAN packages OK). Is there a similar way to read in files a piece at a time in R?
Any ideas would be greatly appreciated.
Update:
Data example:
County; State; Year; Quarter; Segment; Sub-Segment; Sub-Sub-Segment; GDP; ...
Ada County;NC;2009;4;FIRE;Financial;Banks;80.1; ...
Ada County;NC;2010;1;FIRE;Financial;Banks;82.5; ...
NC  [Malformed row]
[8.5 Mill rows]
I want to chop out some columns and pick two out of 40 available years (2009-2010 from 1980-2020), so that the data can fit into R:
County; State; Year; Quarter; Segment; GDP; ...
Ada County;NC;2009;4;FIRE;80.1; ...
Ada County;NC;2010;1;FIRE;82.5; ...
[~200,000 rows]
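One base-R way to do this kind of chop is to read the file through an open connection in pieces rather than all at once. The sketch below is illustrative only (file name, chunk size, and column positions are assumptions taken from the example above); the approaches actually used are in the Results and answers that follow:

# Illustrative sketch only: read the semicolon file through a connection in
# chunks, keeping just the wanted years and columns from each chunk.
con <- file("bad_data.txt", "rt")
header <- readLines(con, n = 1)                 # pull the header off so it isn't parsed as data
pieces <- list()
repeat {
  chunk <- tryCatch(
    read.table(con, sep = ";", nrows = 100000, header = FALSE,
               colClasses = "character", fill = TRUE,
               quote = ""),                     # don't treat apostrophes as quotes
    error = function(e) NULL)                   # NULL once the file is exhausted
  if (is.null(chunk) || nrow(chunk) == 0) break
  if (ncol(chunk) >= 8) {                       # guard against a badly parsed chunk
    wanted <- chunk$V3 %in% c("2009", "2010")   # V3 = Year in the layout above
    pieces[[length(pieces) + 1]] <- chunk[wanted, c(1:5, 8)]
  }
}
close(con)
gdp <- do.call(rbind, pieces)                   # ~200,000 rows fits comfortably in memory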
Results:
After tinkering with all the suggestions made, I decided that readLines, suggested by JD and Marek, would work best. I gave Marek the check because he gave a sample implementation.
I've reproduced a slightly adapted version of Marek's implementation for my final answer here, using strsplit and cat to keep only the columns I want.
It should also be noted this is MUCH less efficient than Python: Python chomps through the 3.5 GB file in 5 minutes while R takes about 60. But if all you have is R, then this is the ticket.
## Open a connection separately to hold the cursor position
file.in  <- file('bad_data.txt', 'rt')
file.out <- file('chopped_data.txt', 'wt')

line <- readLines(file.in, n = 1)
line.split <- strsplit(line, ';')

# Stitching together only the columns we want
cat(line.split[[1]][1:5], line.split[[1]][8], sep = ';', file = file.out, fill = TRUE)

## Use a loop to read in the rest of the lines
line <- readLines(file.in, n = 1)
while (length(line)) {
  line.split <- strsplit(line, ';')
  if (length(line.split[[1]]) > 1) {
    if (line.split[[1]][3] == '2009') {
      cat(line.split[[1]][1:5], line.split[[1]][8], sep = ';', file = file.out, fill = TRUE)
    }
  }
  line <- readLines(file.in, n = 1)
}
close(file.in)
close(file.out)
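Most of that 60 minutes is the overhead of calling readLines()/strsplit()/cat() once per line. A possible middle ground, not from the original post and untested, is to combine the same column selection with block reads like the answer below uses; the block size and field guards here are arbitrary choices:

# Untested sketch: same column-chopping as above, but on blocks of lines so
# the per-call overhead is paid once per block instead of once per line.
file.in  <- file('bad_data.txt', 'rt')
file.out <- file('chopped_data.txt', 'wt')

header <- strsplit(readLines(file.in, n = 1), ';')[[1]]
writeLines(paste(header[c(1:5, 8)], collapse = ';'), file.out)

repeat {
  block <- readLines(file.in, n = 100000)
  if (length(block) == 0) break
  parts <- strsplit(block, ';')                 # split the whole block in one call
  keep  <- vapply(parts, function(p) length(p) >= 8 && p[3] == '2009',
                  logical(1))
  out   <- vapply(parts[keep], function(p) paste(p[c(1:5, 8)], collapse = ';'),
                  character(1))
  if (length(out)) writeLines(out, file.out)
}
close(file.in)
close(file.out)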
My try with readLines. This piece of code creates a csv with only the selected years.
file_in <- file("in.csv","r") file_out <- file("out.csv","a") x <- readLines(file_in, n=1) writeLines(x, file_out) # copy headers B <- 300000 # depends how large is one pack while(length(x)) { ind <- grep("^[^;]*;[^;]*; 20(09|10)", x) if (length(ind)) writeLines(x[ind], file_out) x <- readLines(file_in, n=B) } close(file_in) close(file_out)
I'm not an expert at this, but you might consider trying MapReduce, which basically means taking a "divide and conquer" approach: process the file in independent pieces and then combine the results. R has several packages that support this style of processing.
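For what that divide-and-conquer idea can look like without any Hadoop machinery, here is a rough sketch in plain R (not from the original answer). It assumes the big file has already been split into smaller files, e.g. with the Unix split command; the chunk directory and core count are made-up values:

# Rough sketch of the map/reduce idea in plain R.  Assumes the 3.5 GB file
# was pre-split into chunks/chunk_aa, chunks/chunk_ab, ... (e.g. `split -l`).
library(parallel)

chunk_files <- list.files("chunks", full.names = TRUE)

# "map": filter one chunk down to the rows and columns of interest
map_one <- function(f) {
  d <- read.table(f, sep = ";", header = FALSE,
                  colClasses = "character", fill = TRUE, quote = "")
  d[d$V3 %in% c("2009", "2010"), c(1:5, 8)]
}

pieces <- mclapply(chunk_files, map_one, mc.cores = 4)  # map step, in parallel (Unix)
gdp <- do.call(rbind, pieces)                           # "reduce": stack the results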
Alternatively, R provides several packages for dealing with data that is too large for memory (it is kept on disk). You could probably load the whole dataset into a bigmemory object and do the reduction completely within R. See http://www.bigmemory.org/ for a set of tools to handle this.
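A caveat on that suggestion, plus a rough sketch (not from the original answer): a big.matrix holds a single numeric type, so the character columns here (County, State, Segment, ...) would first need to be recoded as integer codes in a preprocessing pass. File names and column names below are assumptions:

# Rough sketch, assuming a preprocessed, all-numeric version of the file.
library(bigmemory)

bm <- read.big.matrix("recoded_data.txt", sep = ";", header = TRUE,
                      type = "double",
                      backingfile = "gdp.bin",       # file-backed, so the matrix
                      descriptorfile = "gdp.desc")   # never has to fit in RAM

rows <- mwhich(bm, cols = "Year", vals = 2009, comps = "eq")
slice <- bm[rows, ]   # the subset comes back as an ordinary in-memory matrix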