AWS Glue takes a long time to finish

I just ran a very simple job, as follows:

from pyspark.context import SparkContext
from awsglue.context import GlueContext

glueContext = GlueContext(SparkContext.getOrCreate())

# Load the table from the Glue Data Catalog as a DynamicFrame.
l_table = glueContext.create_dynamic_frame.from_catalog(
             database="gluecatalog",
             table_name="fctable")

# Drop the bookkeeping/partition columns and rename tbl_code.
l_table = l_table.drop_fields(
    ['seq', 'partition_0', 'partition_1', 'partition_2', 'partition_3']
).rename_field('tbl_code', 'table_code')

print("Count:", l_table.count())
l_table.printSchema()
l_table.select_fields(['trans_time']).toDF().distinct().show()

# Flatten the nested structure, spilling intermediate data to S3.
dfc = l_table.relationalize("table_root", "s3://my-bucket/temp/")
print("Before keys() call")
dfc.keys()
print("After keys() call")

l_table.select_fields(['table']).printSchema()
dfc.select('table_root_table').toDF().where("id = 1 or id = 2").orderBy(['id', 'index']).show()
dfc.select('table_root').toDF().where("table = 1 or table = 2").show()

The data structure is simple, too:

root
|-- table: array
|    |-- element: struct
|    |    |-- trans_time: string
|    |    |-- seq: null
|    |    |-- operation: string
|    |    |-- order_date: string
|    |    |-- order_code: string
|    |    |-- tbl_code: string
|    |    |-- ship_plant_code: string
|-- partition_0
|-- partition_1
|-- partition_2
|-- partition_3

When I ran the test job, it took anywhere from 12 to 16 minutes to finish, but the CloudWatch log showed that the job took only 2 seconds to display all my data.

So my question is: where does an AWS Glue job spend its time beyond what the logging shows, and what is it doing outside the logged period?

Shawn asked Aug 29 '17


People also ask

Why is the Glue crawler so slow?

It takes more time to crawl a large number of small files than a small number of large files. That's because the crawler must list each file and must read the first megabyte of each new file.

Is AWS Glue fast?

Performance of AWS Glue 3.0: AWS Glue 3.0 speeds up your Spark applications in addition to offering reduced startup latencies. The following benchmark shows the performance improvements between AWS Glue 3.0 and AWS Glue 2.0 for a popular customer workload to convert large datasets from CSV to Apache Parquet format.
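
To make the conversion concrete, a minimal Glue job of the CSV-to-Parquet kind benchmarked above might look like the sketch below. The catalog database ("my_database"), table ("my_csv_table"), and output path ("s3://my-bucket/parquet/") are placeholders, not names from this page.

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
glueContext = GlueContext(SparkContext.getOrCreate())
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the CSV table registered in the Data Catalog (placeholder names).
csv_dyf = glueContext.create_dynamic_frame.from_catalog(
    database="my_database",
    table_name="my_csv_table")

# Write the same data back to S3 as Parquet (placeholder path).
glueContext.write_dynamic_frame.from_options(
    frame=csv_dyf,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/parquet/"},
    format="parquet")

job.commit()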

Is AWS Glue worthwhile?

AWS Glue is recommended when your use cases are primarily ETL and when you want to run jobs on a serverless Apache Spark-based platform. Amazon Kinesis Data Analytics is recommended when your use cases are primarily analytics and when you want to run jobs on a serverless Apache Flink-based platform.

What is maximum capacity in AWS Glue job?

Maximum capacity: choose an integer from 2 to 100; the default is 10. This job type cannot have a fractional DPU allocation. For AWS Glue version 2.0 or later jobs, you cannot specify a Maximum capacity; instead, you specify a worker type and the number of workers.
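
For illustration, here is a sketch of how that capacity is set when creating jobs through boto3; the job names, role, and script locations are placeholders. Glue 1.0 and earlier take MaxCapacity, while Glue 2.0 or later takes a worker type and count instead.

import boto3

glue = boto3.client("glue")

# Glue 1.0 and earlier: whole-number DPUs via MaxCapacity (2-100).
glue.create_job(
    Name="my-etl-job",            # placeholder
    Role="my-glue-role",          # placeholder IAM role
    GlueVersion="1.0",
    MaxCapacity=10.0,             # the default mentioned above
    Command={"Name": "glueetl",
             "ScriptLocation": "s3://my-bucket/scripts/job.py"})  # placeholder

# Glue 2.0 or later: specify workers instead of MaxCapacity.
glue.create_job(
    Name="my-etl-job-v2",         # placeholder
    Role="my-glue-role",
    GlueVersion="2.0",
    WorkerType="G.1X",
    NumberOfWorkers=10,
    Command={"Name": "glueetl",
             "ScriptLocation": "s3://my-bucket/scripts/job.py"})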


2 Answers

It's taking that time to set up the environment that allows your code to run. I had the same issue and contacted the AWS Glue team, who were helpful. The reason the first run takes so long is that Glue builds an environment when you run the first job, and that environment stays alive for one hour. If you run the same script, or any other script, within that hour, the next job takes significantly less time. They call the first run a cold start: my first job took 17 minutes, and when I ran the same job again right after the first one finished, it took only 3 minutes.
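
If you want to observe the cold/warm difference yourself, one way is to time two back-to-back runs of the same job with boto3, as in the sketch below; "my-etl-job" is a placeholder name.

import time
import boto3

glue = boto3.client("glue")

def run_and_time(job_name):
    """Start a job run and return its wall-clock duration in seconds."""
    start = time.time()
    run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return time.time() - start
        time.sleep(30)

# The first run pays the cold-start cost; the second should hit the warm pool.
print("first run: ", run_and_time("my-etl-job"), "seconds")
print("second run:", run_and_time("my-etl-job"), "seconds")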

Rick Coleman answered Sep 17 '22


Update as of May 2019:

  • Cold start times: 7-8 minutes

  • Warm pool maintained for: 10-15 minutes

human answered Sep 21 '22