Today I have finally decided to start climbing R's steep learning curve. I have spent a few hours and managed to import my dataset and do a few other basic things, but I am having trouble with the data types: a column that contains decimals is imported as integer, and converting it to double changes the values.
In trying to get a small csv file to post here as an example, I discovered that the problem only happens when the data file is large enough: my original file is a 1048418 by 12 matrix, and even with "only" 5000 rows I have the same problem, but with just 100, 1000 or even 2000 rows the column is imported correctly as double.
Here is a smaller dataset (still 500 KB, but again, with a small enough dataset the problem does not appear). The code is:
> ex <- read.csv("exampleshort.csv",header=TRUE)
> typeof(ex$RET)
[1] "integer"
Why is the column of returns imported as integer when the file is large, when it is clearly of type double?
The worst thing is that if I try to convert it to double, the values are changed:
> exdouble <- as.double(ex$RET)
> typeof(exdouble)
[1] "double"
> ex$RET[1:5]
[1] 0.005587 -0.005556 -0.005587 0.005618 -0.001862
2077 Levels: -0.000413 -0.000532 -0.001082 -0.001199 -0.0012 -0.001285 -0.001337 -0.001351 -0.001357 -0.001481 -0.001486 -0.001488 ... 0.309524
> exdouble[1:5]
[1] 1305 321 322 1307 41
This is not the only column that is imported incorrectly, but I figured that if I find a solution for one column, I should be able to sort out the others. Here is some more information:
> sapply(ex,class)
PERMNO DATE COMNAM SICCD PRC RET RETX SHROUT VWRETD VWRETX EWRETD EWRETX
"integer" "integer" "factor" "integer" "factor" "factor" "factor" "integer" "numeric" "numeric" "numeric" "numeric"
They should be, in this order: integer, date, string, integer, double, double, double, integer, double, double, double, double (the type names are probably not exactly right, but hopefully you get what I mean).
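For reference, the same symptom can be reproduced without the original file. Here is a minimal sketch with made-up data, assuming the issue is a non-numeric placeholder such as "." somewhere in the column of the larger file:
# build a tiny csv whose RET column contains a "." entry
tmp <- tempfile(fileext = ".csv")
writeLines(c("RET", "0.005587", "-0.005556", ".", "0.005618"), tmp)
ex2 <- read.csv(tmp, header = TRUE, stringsAsFactors = TRUE)
class(ex2$RET)      # "factor", because "." is not a number
as.double(ex2$RET)  # the underlying level codes, not the printed returns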
See the help for read.csv (?read.csv). Here is the relevant section:
colClasses: character. A vector of classes to be assumed for the
columns. Recycled as necessary, or if the character vector
is named, unspecified values are taken to be ‘NA’.
Possible values are ‘NA’ (the default, when ‘type.convert’ is
used), ‘"NULL"’ (when the column is skipped), one of the
atomic vector classes (logical, integer, numeric, complex,
character, raw), or ‘"factor"’, ‘"Date"’ or ‘"POSIXct"’.
Otherwise there needs to be an ‘as’ method (from package
‘methods’) for conversion from ‘"character"’ to the specified
formal class.
Note that ‘colClasses’ is specified per column (not per
variable) and so includes the column of row names (if any).
Good luck with your quest to learn R. It's difficult, but so much fun after you get past the first few stages (which I admit do take some time).
Try this and fix the other columns accordingly:
ex <- read.csv("exampleshort.csv", header = TRUE,
  colClasses = c("integer", "integer", "factor", "integer", "numeric", "factor",
                 "factor", "integer", "numeric", "numeric", "numeric", "numeric"),
  na.strings = c("."))
As BenBolker points out, the colClasses argument is probably not needed, although specifying colClasses can make the operation faster, especially with a large dataset. The na.strings argument, however, must be specified. See the following section in ?read.csv:
na.strings: a character vector of strings which are to be interpreted
as ‘NA’ values. Blank fields are also considered to be
missing values in logical, integer, numeric and complex
fields.
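In other words, if you let read.csv (via type.convert) work out the column classes itself, the one thing it still needs to be told is how missing values are coded. A minimal sketch, using the same file name as above and assuming "." is the only non-numeric content in those columns:
ex <- read.csv("exampleshort.csv", header = TRUE, na.strings = ".")
sapply(ex, class)  # RET and the other decimal columns should now come back as "numeric"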
For reference purposes (this should not be used as the solution, because the best solution is to import the data correctly in one step): RET was not imported as an integer, it was imported as a factor. For future reference, if you want to convert a factor to a numeric, use
new_RET <- as.numeric(as.character(ex$RET))
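To see why the intermediate as.character() is needed, here is a small sketch with a made-up factor (not the original data):
f <- factor(c("0.005587", "-0.005556", "0.005618"))
as.numeric(f)                # the underlying level codes (small integers), not the values
as.numeric(as.character(f))  # 0.005587 -0.005556 0.005618, the actual values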