I am trying to merge two data.frames, which look like this:
GVKEY YEAR coperol delta vega firm_related_wealth
1 001045 1992 1 38.88885 17.86943 2998.816
2 001045 1993 1 33.57905 19.19287 2286.418
3 001045 1994 1 48.54719 16.85830 3924.053
4 001045 1995 1 111.46762 38.71565 8550.903
5 001045 1996 1 218.89279 45.59413 17834.921
6 001045 1997 1 415.61461 51.45863 34279.515
and
GVKEY YEAR fracdirafter fracdirafterindep twfracdirafter
1 001004 1996 1.00 0.70 1.000000000
2 001004 1997 0.00 0.00 0.000000000
3 001004 1998 0.00 0.00 0.000000000
4 001004 1999 0.00 0.00 0.000000000
5 001004 2000 0.00 0.00 0.000000000
6 001004 2001 0.25 0.25 0.009645437
They both have 1,048,575 rows. My code is merge(a, b, by = c("GVKEY", "YEAR")), but I keep receiving the error message "negative length vectors are not allowed". I also tried the data.table way, but got an error message saying that my result would exceed 2^31 rows. The merged data should not be anywhere near that big, so I am not sure how to solve this issue.
You are getting this error because the data.frame/data.table created by the join has more than 2^31 - 1 rows (2,147,483,647).
Due to the way vectors are constructed internally by R, the maximum length of any vector is 2^31 - 1 elements (see: https://stackoverflow.com/a/5234293/2341679). Since a data.frame/data.table is really a list() of vectors, this limit also applies to the number of rows.
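As a quick illustration (not from the original post), the bound is visible directly in an R session; it is the largest value of R's integer type:

.Machine$integer.max
# [1] 2147483647  (i.e. 2^31 - 1)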
As other people have commented and answered, unfortunately you won't be able to construct this data.table, and it's likely there are that many rows because of duplicate matches between your two data.tables (which may or may not be intentional on your part).
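To see how duplicate matches inflate a join, here is a minimal sketch with invented toy data: three left rows and four right rows share a single (GVKEY, YEAR) key, so the join produces 3 * 4 = 12 rows.

library(data.table)
a <- data.table(GVKEY = "001045", YEAR = 1992, x = 1:3)  # 3 rows, one key
b <- data.table(GVKEY = "001045", YEAR = 1992, y = 1:4)  # 4 rows, same key
# every left match pairs with every right match: 3 * 4 = 12 rows
nrow(a[b, on = .(GVKEY, YEAR), allow.cartesian = TRUE])
# [1] 12

At the scale of the question, this multiplication is what pushes the result past 2^31 - 1 rows.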
The good news is that if the duplicate matches are not errors, and you still want to perform the join, there is a way around it: do whatever computation you wanted to do on the resulting data.table in the same call as the join, using the data.table[] operator, e.g.:
dt_left[dt_right, on = .(GVKEY, YEAR),
        j = .(sum(firm_related_wealth), mean(fracdirafterindep)),
        by = .EACHI]
If you're not familiar with the data.table syntax: you can perform calculations on columns within a data.table, as shown above, using the j argument. When performing a join with this syntax, the computation in j is performed on the data.table created by the join.
The key here is the by = .EACHI argument. This breaks the join (and the subsequent computation in j) down into smaller components: one data.table for each row in dt_right and its matches in dt_left, avoiding the problem of creating a data.table with more than 2^31 - 1 rows.
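Here is a self-contained sketch of the pattern on invented toy data (the column values are made up for illustration). Each row of dt_right forms its own group, so the full join result is never materialised:

library(data.table)
dt_left  <- data.table(GVKEY = c("001045", "001045"), YEAR = 1992,
                       firm_related_wealth = c(2998.816, 2286.418))
dt_right <- data.table(GVKEY = "001045", YEAR = 1992,
                       fracdirafterindep = 0.70)
# aggregate per matched row of dt_right instead of building the whole join
dt_left[dt_right, on = .(GVKEY, YEAR),
        .(wealth = sum(firm_related_wealth),
          indep  = mean(fracdirafterindep)),
        by = .EACHI]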
I'm not sure how merge is implemented, but there seems to be a big difference between merging by one column and by two, as you can see in the following simulation:
> df1<-data.frame(a=1:200000,b=2*(1:200000),c=3*(1:200000))
> df2<-data.frame(a=-df1$a,b=-df1$b,d=4*(1:200000))
> ss<-sample(200000,10000)
> df2[ss,1:2]<-df1[ss,1:2]
> system.time(df3<-merge(x=df1,y=df2,by=c('a','b')))
user system elapsed
1.25 0.00 1.25
> system.time(df4<-merge(x=df1,y=df2,by='a'))
user system elapsed
0.06 0.00 0.06
Watching system memory, the two-column merge used a lot more memory as well. There's probably a Cartesian product in there somewhere, and I suspect this is what's causing your error.
What you could do is create a new column concatenating GVKEY and YEAR in each data.frame and merge by that column:
a$newKey <- paste(a$GVKEY, a$YEAR, sep = '_')
b$newKey <- paste(b$GVKEY, b$YEAR, sep = '_')
merged <- merge(a, b, by = 'newKey')  # avoid naming the result 'c', which masks base::c
You would need to clean up the columns in the result, since GVKEY and YEAR would both appear twice (as GVKEY.x/GVKEY.y and YEAR.x/YEAR.y), but at least the merge should work.
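A possible cleanup step (just a sketch; it assumes merge() used its default .x/.y suffixes for the clashing columns and that the two copies of each key agree):

merged$GVKEY <- merged$GVKEY.x
merged$YEAR  <- merged$YEAR.x
# drop the duplicated key columns and the helper key
merged <- merged[, !(names(merged) %in%
                     c("GVKEY.x", "GVKEY.y", "YEAR.x", "YEAR.y", "newKey"))]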