What is spark.driver.maxResultSize?

The ref says:

Limit of total size of serialized results of all partitions for each Spark action (e.g. collect). Should be at least 1M, or 0 for unlimited. Jobs will be aborted if the total size is above this limit. Having a high limit may cause out-of-memory errors in driver (depends on spark.driver.memory and memory overhead of objects in JVM). Setting a proper limit can protect the driver from out-of-memory errors.
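For concreteness, here is a minimal sketch of setting this property when building a SparkSession in Scala. The "2g" value and the application name are illustrative assumptions, not recommendations:

    import org.apache.spark.sql.SparkSession

    // Cap the total size of serialized results that may be pulled back
    // to the driver; "2g" is an illustrative value, not a recommendation.
    val spark = SparkSession.builder()
      .appName("maxResultSize-demo")
      .config("spark.driver.maxResultSize", "2g")
      .getOrCreate()

    // The effective value can be read back from the runtime config.
    println(spark.conf.get("spark.driver.maxResultSize"))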

What does this attribute do exactly? I mean, at first (since I am not battling a job that fails due to out-of-memory errors) I thought I should increase it.

On second thought, it seems that this attribute defines the maximum size of the result a worker can send to the driver, so leaving it at the default (1G) would be the best approach to protect the driver.

But what will happen in that case? Will the worker just have to send more messages, so the only overhead is that the job will be slower?


If I understand correctly, assuming that a worker wants to send 4G of data to the driver, then having spark.driver.maxResultSize=1G will cause the worker to send 4 messages (instead of 1 with an unlimited spark.driver.maxResultSize). If so, then increasing that attribute to protect my driver from being killed by YARN would be wrong.

But still the question above remains: if I set it to 1M (the minimum), will that be the most protective approach?

asked Aug 22 '16 by gsamaras

People also ask

What is maxResultSize in Spark?

Sets a limit on the total size of serialized results of all partitions for each Spark action (such as collect). Jobs will fail if the size of the results exceeds this limit; however, a high limit can cause out-of-memory errors in the driver. The default is 1 GB.

What is the role of spark driver?

The Spark driver is the program that declares the transformations and actions on RDDs of data and submits such requests to the master. Its location is independent of the master/slaves: it can be co-located with the master or run on another node.

What should be the driver memory in Spark?

Common rules of thumb are: 1 GB of RAM per node, 1 executor per cluster reserved for the application manager, and 10 percent memory overhead per executor.
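Putting those rules of thumb into configuration, a hedged sketch in Scala might look like the following (all values are illustrative assumptions, not tuned recommendations). Note that in client mode spark.driver.memory must be set before the driver JVM starts (for example via spark-submit --driver-memory), not in application code:

    import org.apache.spark.SparkConf

    // Illustrative driver-side settings, loosely following the rules
    // of thumb above; the values here are assumptions, not advice.
    val conf = new SparkConf()
      .setAppName("driver-memory-demo")
      .set("spark.driver.memory", "4g")         // heap for the driver JVM
      .set("spark.driver.maxResultSize", "2g")  // keep results well under driver memory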


1 Answer

assuming that a worker wants to send 4G of data to the driver, then having spark.driver.maxResultSize=1G will cause the worker to send 4 messages (instead of 1 with an unlimited spark.driver.maxResultSize).

No. If the estimated size of the data is larger than maxResultSize, the given job will be aborted. The goal here is to protect your application from driver loss, nothing more.
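To make the abort behaviour concrete, here is a hedged sketch (my own illustration, not from the answer): with the limit forced down to the 1M minimum, collecting a result larger than that aborts the job with a SparkException instead of splitting the transfer into smaller messages:

    import org.apache.spark.sql.SparkSession

    // Deliberately trip the limit (local master and sizes are illustrative).
    val spark = SparkSession.builder()
      .appName("maxResultSize-abort-demo")
      .master("local[2]")
      .config("spark.driver.maxResultSize", "1m")  // the documented minimum
      .getOrCreate()

    // A few MB of task results, well above the 1 MB cap.
    val rdd = spark.sparkContext.parallelize(1 to 1000000, numSlices = 8)

    try {
      rdd.collect()  // total serialized results exceed 1 MB, so the job is aborted
    } catch {
      // The message typically reads "Total size of serialized results ...
      // is bigger than spark.driver.maxResultSize".
      case e: org.apache.spark.SparkException => println(e.getMessage)
    }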

if I set it to 1M (the minimum), will that be the most protective approach?

In a sense, yes, but it is obviously not useful in practice. A good value should allow the application to proceed normally while protecting it from unexpected conditions.

answered Oct 03 '22 by zero323