I'm an R user, and I frequently need to write functions that subset large datasets (tens of millions of rows). When I apply such functions over a large number of observations, things can get very time-consuming if I'm not careful about how I implement them.
To speed this up, I have sometimes used the data.table package, which gives much faster subsetting than data frames. Recently I've started experimenting with packages like RMySQL, pushing some tables into MySQL and using the package to run SQL queries and return the results.
I have found mixed performance improvements. For smaller datasets (millions of rows), loading the data into a data.table and setting the right keys seems to give faster subsetting. For larger datasets (tens to hundreds of millions of rows), sending a query out to MySQL appears to be faster.
I was wondering whether anyone has insight into which technique should return simple subsetting or aggregation queries faster, and whether the answer should depend on the size of the data. I understand that setting keys in data.table is somewhat analogous to creating an index, but I don't have much intuition beyond that.
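For concreteness, here is a minimal sketch of the two approaches I'm comparing (the table name, column names, and credentials below are just placeholders):

    library(data.table)
    library(RMySQL)

    # Toy data: about 10 million rows, grouped by an integer id
    dt <- data.table(grp = sample(1e5L, 1e7, replace = TRUE), val = rnorm(1e7))

    # data.table approach: set a key, then subset by binary search
    setkey(dt, grp)                 # roughly analogous to building an index
    res_dt <- dt[.(42L)]            # keyed subset, i.e. rows where grp == 42

    # RMySQL approach: push the table once, then let the server do the subsetting
    con <- dbConnect(MySQL(), dbname = "test", user = "user", password = "pass")
    dbWriteTable(con, "big_table", as.data.frame(dt), overwrite = TRUE)
    res_db <- dbGetQuery(con, "SELECT * FROM big_table WHERE grp = 42")
    dbDisconnect(con)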
If the data fits in RAM, data.table is faster. If you provide an example it will probably become evident, quickly, that you're using data.table badly. Have you read the "do's and don'ts" on the data.table wiki?
SQL has a lower bound because it is a row store. If the data fits in RAM (and 64bit is quite a bit) then data.table is faster not just because it is in RAM but because columns are contiguous in memory (minimising page fetches from RAM to L2 for column operations). Use data.table correctly and it should be faster than SQL's lower bound. This is explained in FAQ 3.1. If you're seeing slower with data.table, then chances are very high that you're using data.table incorrectly (or there's a performance bug that we need to fix). So, please post some tests, after reading the data.table wiki.
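As a rough illustration of what "correctly" means here (column name and sizes made up for the example), compare a plain vector scan with a keyed subset:

    library(data.table)

    dt <- data.table(id = sample(1e6L, 1e7, replace = TRUE), x = rnorm(1e7))

    # Vector scan: evaluates id == 12345L across the whole column
    system.time(dt[id == 12345L])

    # Keyed subset: binary search on the sorted key
    setkey(dt, id)
    system.time(dt[.(12345L)])

The keyed form is the kind of usage the wiki's do's and don'ts encourage.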
I am not an R user, but I know a little about databases. I believe that MySQL (or any other reputable RDBMS) will generally perform your subsetting operations faster (often by an order of magnitude), barring any additional computation involved in the subsetting process.
I suspect your performance lag on small datasets is related to the expense of the connection and the initial push of the data to MySQL. There is likely a point at which the connection overhead and data transfer time add more to the cost of your operation than MySQL saves you.
However, for datasets larger than a certain minimum, it seems likely that this cost is compensated for by the sheer speed of the database.
My understanding is that SQL can achieve most fetching and sorting operations much, much more quickly than iterative operations in code. But one must factor in the cost of the connection and (in this case) the initial transfer of the data over the network wire.
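As a rough illustration (in R, which I don't use myself, so treat this as a sketch with placeholder credentials and table name), one way to see how much of the elapsed time is connection and transfer overhead rather than the query itself:

    library(RMySQL)

    # Total round trip: connect + query + disconnect
    system.time({
      con <- dbConnect(MySQL(), dbname = "test", user = "user", password = "pass")
      res <- dbGetQuery(con, "SELECT grp, AVG(val) FROM big_table GROUP BY grp")
      dbDisconnect(con)
    })

    # Query cost alone, on a connection that is already open
    con <- dbConnect(MySQL(), dbname = "test", user = "user", password = "pass")
    system.time(res <- dbGetQuery(con, "SELECT grp, AVG(val) FROM big_table GROUP BY grp"))
    dbDisconnect(con)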
I will be interested to hear what others have to say . . .