I have an app where I want to pull values out of a lookup table based on user inputs. The reference table holds the results of a statistical test whose calculation would be too slow to run on the fly for every combination of user inputs. Hence, a lookup table covering all the possibilities.
But... right now the table is about 60 MB (as .Rdata) or 214 MB (as .csv), and it'll get much larger if I expand the possible user inputs. I've already reduced the number of significant figures in the data (to 3) and removed the row/column names.
Obviously, I can preload the lookup table outside the reactive server function, but it'll still take a decent chunk of time to load in that data. Does anyone have any tips on dealing with large amounts of data in Shiny? Thanks!
You can put the load call inside the shinyServer function, wrapped in a reactive, so that you can reference the dataset dynamically and each session can have different data loaded simultaneously.
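A minimal sketch of that approach (the input ID and file names here are hypothetical):
shinyServer(function(input, output, session) {
  # Load the lookup table lazily, per session; the result is cached
  # until input$dataset changes
  lookup <- reactive({
    readRDS(file.path("data", paste0(input$dataset, ".rds")))
  })
  output$preview <- renderTable({
    head(lookup())
  })
})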
flaneuse, we are still working with a smaller data set than you, but here is what we have been experimenting with:
Use rds for our data
As @jazzurro mentioned .rds above, and you seem to know how to do this already, but here is the syntax for others. The .rds format brings in a single R object, so you can rename it on load if need be.
In your prep data code, for example:
mystorefile <- file.path("/my/path", "data.rds")
# ... do data stuff
# Save down (assuming mydata holds your data frame or table)
saveRDS(mydata, file = mystorefile)
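If file size is the bigger concern, note that saveRDS compresses with gzip by default; switching to a stronger method is a one-argument change (usually a smaller file, at the cost of slower reads and writes):
saveRDS(mydata, file = mystorefile, compress = "xz")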
In your shiny code:
# Load in my data
x <- readRDS(mystorefile)
Remember to copy your data .rds file into your app directory when you deploy. We use a data directory /myapp/data, and the store-file path in our Shiny code is changed to "./data" accordingly.
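So in the deployed app, the path line from the prep code becomes something like:
mystorefile <- file.path("./data", "data.rds")
x <- readRDS(mystorefile)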
global.R
We have placed our readRDS calls to load our data in this global file (instead of in server.R before the shinyServer() call), so that it runs once and the data is available for all sessions, with the added bonus that it can be seen by ui.R.
See this scoping explanation for R Shiny.
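A minimal global.R along these lines (the file name is carried over from the example above):
# global.R -- sourced once at app startup; objects created here are
# visible to server.R and ui.R for all sessions
mystorefile <- file.path("./data", "data.rds")
full.dt <- readRDS(mystorefile)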
Slice and dice upfront
The standard daily reports use the most recent data, so in my global.R I make a small latest.dt from a subset of my data. The landing page with the latest charts works with this smaller data set, which makes those charts faster.
The custom data tab, which uses the full.dt, lives on a separate tab. It is slower, but at that stage the user is more patient and is thinking about what dates and other parameters to choose.
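For example, the subsetting in global.R could look like this (the obs_date column is an assumption; use whatever key fits your data):
# Keep only a recent slice for the fast landing-page charts
latest.dt <- full.dt[full.dt$obs_date >= Sys.Date() - 30, ]  # hypothetical obs_date column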
This subset idea may help you.
Would be interested in what others (with more demanding data sets) have tried!