I am running some machine learning algorithms on an EMR Spark cluster, and I am wondering which kind of instance to use to get the best cost/performance trade-off.
At roughly the same price, I can choose among:
             vCPU   ECU   Memory (GiB)
m3.xlarge      4    13    15
c4.xlarge      4    16     7.5
r3.xlarge      4    13    30.5
Which kind of instance should I use for an EMR Spark cluster?
Generally speaking, it depends on your use case, needs, etc., but I can suggest a minimum configuration based on the information you have shared.
You seem to be trying to train an ALS factorization or an SVD on matrices of about 2-4 GB of data, so that is actually not much data.
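For reference, here is a minimal sketch of that kind of job using Spark MLlib's ALS; the S3 paths and column names are placeholders, not details from the question:

```python
# Hypothetical ALS training job; paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("als-training").getOrCreate()

# Expected schema: integer user ids, integer item ids, numeric ratings.
ratings = spark.read.parquet("s3://my-bucket/ratings.parquet")

als = ALS(
    userCol="userId",
    itemCol="itemId",
    ratingCol="rating",
    rank=10,       # size of the latent factor vectors
    maxIter=10,
    regParam=0.1,
)
model = als.fit(ratings)
model.write().overwrite().save("s3://my-bucket/als-model")
```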
You'll need at least 1 master and 2 worker nodes to set up and configure a small distributed cluster. The master won't be doing any of the computation itself, so it won't need many resources, but it will of course handle task scheduling, etc.
You can add more worker instances according to your needs.
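As an illustration, here is a sketch of launching such a 1-master / 2-worker cluster with boto3. It assumes the default EMR roles already exist in your account; the region, key pair, and instance types are placeholders:

```python
# Hypothetical cluster launch; region, roles, and key pair are assumptions.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="spark-ml-cluster",
    ReleaseLabel="emr-5.30.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",  # scheduling only, modest resources
        "SlaveInstanceType": "c5.xlarge",   # the nodes doing the actual work
        "InstanceCount": 3,                 # 1 master + 2 workers
        "KeepJobFlowAliveWhenNoSteps": True,
        "Ec2KeyName": "my-key-pair",
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```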
EDIT: As mentioned in the comments, 5th-generation instances are now available for each of the instance families mentioned in this thread: R5, M5, and C5. In general, the latest-generation instance types are cheaper and more performant than their older counterparts.
C3, C4, and C5 are compute-optimized instances featuring high-performance processors, with the lowest price per unit of compute performance in EC2. The R3/R4/R5 family is memory-optimized, and its recommended use cases are distributed in-memory caches and in-memory analytics; still, a C5 will do the job for you at a lower price.
Performance optimizations:
Amazon EMR charged in hourly increments at the time of writing (billing has since moved to per-second increments). This means that once you ran a cluster, you were paying for the entire hour. That's important to remember because, if you are paying for a full hour of an Amazon EMR cluster anyway, improving your data processing time by a matter of minutes may not be worth your time and effort.
Don't forget that adding more nodes to increase performance is cheaper than spending time optimizing your cluster.
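For example, resizing a running cluster is a single API call. A sketch with boto3, where the cluster id is a placeholder:

```python
# Hypothetical resize of a running cluster; the cluster id is a placeholder.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Find the CORE instance group of the cluster.
groups = emr.list_instance_groups(ClusterId="j-XXXXXXXXXXXXX")
core = next(g for g in groups["InstanceGroups"] if g["InstanceGroupType"] == "CORE")

# Grow it from 2 to 4 worker nodes.
emr.modify_instance_groups(
    ClusterId="j-XXXXXXXXXXXXX",
    InstanceGroups=[{"InstanceGroupId": core["Id"], "InstanceCount": 4}],
)
```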
Reference: Amazon EMR Best Practices, Parviz Deyhim.
EDIT: You might also consider enabling Ganglia to monitor your cluster resources (CPU, RAM, network I/O); this will also help you tune your EMR cluster. There is practically no configuration to do: just follow the documentation to add it to your EMR cluster at creation time.
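If you create the cluster programmatically, enabling Ganglia is just one more entry in the Applications list; a sketch reusing the same hypothetical boto3 call as above:

```python
# Same hypothetical cluster launch as above, now with Ganglia enabled.
import boto3

emr = boto3.client("emr", region_name="us-east-1")
emr.run_job_flow(
    Name="spark-ml-cluster",
    ReleaseLabel="emr-5.30.0",
    Applications=[{"Name": "Spark"}, {"Name": "Ganglia"}],  # Ganglia added here
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "c5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```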
Generally speaking, the preferred instance type depends on the job you are running (is it memory-intensive? is it CPU-intensive? etc.). However, Spark is very memory-intensive, and I wouldn't use machines with less than 30 GB of RAM for most jobs.
In your particular case (a ~4 GB dataset), I am not sure why you'd want to use distributed computing to begin with: it will just make your job run slower. If you are sure you want Spark, run it in local mode with as many threads as you have cores.
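A minimal sketch of that local-mode setup with PySpark; the thread count and memory below are assumptions, size them to your machine:

```python
# Hypothetical local-mode session; thread count and memory are assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[8]")                    # local[*] would use every core
    .appName("local-ml")
    .config("spark.driver.memory", "24g")  # local mode runs in one JVM, the driver
    .getOrCreate()
)
```

Note that if you launch the script with spark-submit instead of plain python, pass --driver-memory on the command line, since the driver JVM is already running by the time the builder config is read.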