Much has been written about deploying data crunching applications on EC2/S3, but what is the typical workflow for developing such applications?
Let's say I have 1 TB of time series data to begin with, and I have managed to store it on S3. How would I write applications and do interactive data analysis to build machine learning models, and then write large programs to test them? In other words, how does one go about setting up a dev environment in such a situation? Do I boot up an EC2 instance, develop software on it, save my changes, and shut down every time I want to do some work?
Typically, I fire up R or Pylab, read data from my local drives, and do my analysis. Then I create applications based on that analysis and let them loose on that data.
On EC2, I am not sure if I can do that. Do people keep data locally for analysis and only use EC2 when they have large simulation jobs to run?
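For context, what I imagine doing interactively is something like the sketch below, reading straight from S3 instead of a local drive (assuming boto3 and pandas; the bucket and key names are made up):

```python
import io

import boto3
import pandas as pd

# Hypothetical bucket/key: pull one shard of the time series for interactive work
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-timeseries-bucket", Key="ticks/2023/01/part-0001.csv.gz")

# Load it straight into a DataFrame rather than reading from a local drive
df = pd.read_csv(io.BytesIO(obj["Body"].read()), compression="gzip")
print(df.describe())
```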
I am very curious to know what other people are doing, especially startups that have their entire infrastructure based on EC2/S3.
We create a baseline custom AMI with all the programs we know we'll always need already installed.
The software we develop (and update constantly) is stored on external storage (we use a Maven repository, but you could use anything that works well with your environment).
We then fire up our custom AMI with everything we need on it, deploy the latest version of our software from Maven, and we're good to go.
So the workflow is:
Setup

- Create a custom AMI with the stuff we'll always need

Ongoing

- Develop software locally
- Deploy binaries to external storage (a Maven repository in our case)
- Fire up multiple instances of the custom AMI as needed (see the sketch after this list)
- Copy binaries from external storage to each instance
- Run on each instance
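For what it's worth, the "fire up instances / copy binaries / run" steps can be scripted. Here is a minimal sketch with boto3; the AMI ID, key pair, instance type, and artifact URL are placeholders, and you could just as well do the copy step over SSH instead of user data:

```python
import boto3

# Placeholder values: substitute your own AMI, key pair, and artifact URL
CUSTOM_AMI_ID = "ami-0123456789abcdef0"
ARTIFACT_URL = "https://maven.example.com/releases/com/example/cruncher/1.0/cruncher-1.0.jar"

# User data runs on boot: pull the latest binary from external storage and start it
user_data = f"""#!/bin/bash
curl -o /opt/cruncher.jar {ARTIFACT_URL}
java -jar /opt/cruncher.jar
"""

ec2 = boto3.client("ec2")
response = ec2.run_instances(
    ImageId=CUSTOM_AMI_ID,
    InstanceType="m5.xlarge",
    MinCount=4,          # fire up as many instances as the job needs
    MaxCount=4,
    KeyName="my-keypair",
    UserData=user_data,
)
instance_ids = [i["InstanceId"] for i in response["Instances"]]
print("Launched:", instance_ids)
```

The point of doing it this way is that the AMI stays generic (only the baseline tools are baked in), while the binaries that change constantly always come from external storage at launch time.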