
Read.CSV not working as expected in R

Tags:

r

csv

read.csv

I am stumped. Normally read.csv works as expected, but I have come across a file where the behavior is not what I expect. It is most likely user error on my part, but any help will be appreciated.

Here is the URL for the file

http://nces.ed.gov/ipeds/datacenter/data/SFA0910.zip

Here is my code to get the file, unzip, and read it in:

 URL <- "http://nces.ed.gov/ipeds/datacenter/data/SFA0910.zip"
 download.file(URL, destfile="temp.zip")
 unzip("temp.zip")
 tmp <- read.table("sfa0910.csv", 
                   header=T, stringsAsFactors=F, sep=",", row.names=NULL)

Here is my problem. When I open the csv data in Excel, the data look as expected. When I read the data into R, the first column is actually named row.names. R is reading in one extra column of data, but I can't figure out where the "error" occurs that is causing row.names to be a column. Simply, it looks like the data shifted over.

However, what is strange is that the last column in R does appear to contain the proper data.

Here are a few rows from the first few columns:

tmp[1:5,1:7]
  row.names UNITID XSCUGRAD SCUGRAD XSCUGFFN SCUGFFN XSCUGFFP
1    100654      R     4496       R     1044       R       23
2    100663      R    10646       R     1496       R       14
3    100690      R      380       R        5       R        1
4    100706      R     6119       R      774       R       13
5    100724      R     4638       R     1209       R       26

Any thoughts on what I could be doing wrong?
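For context, the symptom can be reproduced with a tiny made-up file: when every data line has one more field than the header, read.table() assumes the first column holds row names, and with row.names=NULL that would-be row-name column comes back literally named row.names (demo.csv below is hypothetical, not the real data):

```r
# Hypothetical two-column file whose data lines end in a stray comma,
# giving them one more field than the header line.
writeLines(c("a,b", "1,2,", "3,4,"), "demo.csv")

# The shifted first column shows up as a column named "row.names".
tmp <- read.table("demo.csv", header = TRUE, sep = ",", row.names = NULL)
names(tmp)
# [1] "row.names" "a"         "b"
```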

Btibert3 asked Aug 15 '12


2 Answers

My tip: use count.fields() as a quick diagnostic when delimited files do not behave as expected.

First, count the fields per line with count.fields() and tabulate the result with table():

table(count.fields("sfa0910.csv", sep = ","))
# 451  452 
#   1 6852

That tells you that all but one of the lines contain 452 fields. So which is the aberrant line?

which(count.fields("sfa0910.csv", sep = ",") != 452)
# [1] 1

The first line is the problem. On inspection, all lines except the first are terminated by 2 commas.

The question now is: what does that mean? Is there supposed to be an extra field in the header row which was omitted? Or were the 2 commas appended to the other lines in error? It may be best to contact whoever generated the data, if possible, to clarify the ambiguity.
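One quick way to see such trailing separators without leaving R is to look at the last few characters of each line; the file below is a made-up stand-in mimicking the pattern found in sfa0910.csv:

```r
# Made-up file: header with no trailing commas, data lines with two.
writeLines(c("UNITID,XSCUGRAD", "100654,R,,", "100663,R,,"), "peek.csv")
lines <- readLines("peek.csv")
substring(lines, nchar(lines) - 1)  # last two characters of each line
# [1] "AD" ",," ",,"
```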

neilfws answered Oct 03 '22

I may have a fix, based on mnel's comments.

dat <- readLines(paste("sfa", '0910', ".csv", sep = ""))
# Count the commas on each line: gregexpr() finds every comma, and each
# match has length 1, so summing match.length gives the comma count.
ncommas <- sapply(seq_along(dat), function(x) {
  sum(attributes(gregexpr(',', dat[x])[[1]])$match.length)
})
head(ncommas)
# [1] 450 451 451 451 451 451

All lines after the first have an extra separator, which Excel ignores.

for (i in seq_along(dat)[-1]) {
  # strip the trailing comma from each data line
  dat[i] <- gsub('(.*),', '\\1', dat[i])
}
write(dat, 'temp.csv')

tmp <- read.table('temp.csv', header = TRUE, stringsAsFactors = FALSE, sep = ",")

tmp[1:5,1:7]
  UNITID XSCUGRAD SCUGRAD XSCUGFFN SCUGFFN XSCUGFFP SCUGFFP
1 100654        R    4496        R    1044        R      23
2 100663        R   10646        R    1496        R      14
3 100690        R     380        R       5        R       1
4 100706        R    6119        R     774        R      13
5 100724        R    4638        R    1209        R      26

the moral of the story .... listen to Joshua Ulrich ;)
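As an aside on why the gsub('(.*),', '\\1', ...) call above removes only the final comma: the .* is greedy, so it swallows everything up to the last comma it can match:

```r
# Greedy .* matches "1,2,3", so the captured group keeps everything
# except the trailing comma.
gsub('(.*),', '\\1', "1,2,3,")
# [1] "1,2,3"
```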

Quick fix: open the file in Excel and save it. This will also delete the extra separators.

Alternatively

dat <- readLines(paste("sfa", '0910', ".csv", sep = ""), n = 1)
dum.names <- unlist(strsplit(dat, ','))
tmp <- read.table(paste("sfa", '0910', ".csv", sep = ""),
                  header = FALSE, stringsAsFactors = FALSE,
                  col.names = c(dum.names, 'XXXX'), sep = ",", skip = 1)
tmp1 <- tmp[, -dim(tmp)[2]]  # drop the dummy last column
shhhhimhuntingrabbits answered Oct 03 '22