I'm fairly new to using Spark on Azure Databricks, and I was running a Python job from Databricks Workflows. After the job had been processing for ~4 hours, I got an error like this:
23/02/01 15:56:34 WARN TransportChannelHandler: Exception in connection from /10.0.2.10:44824
java.io.IOException: Connection reset by peer
I've been trying to increase the Spark driver memory, since I thought the unexpected error was caused by memory spilling or lost data.
I'm using the 9.1 LTS runtime with the following cluster configuration:
[Cluster configuration screenshot]
I also set the following Spark configuration:
spark.databricks.delta.preview.enabled true
spark.sql.sources.partitionOverwriteMode dynamic
spark.driver.maxResultSize 0
spark.network.timeout 17280000s
spark.sql.legacy.timeParserPolicy LEGACY
spark.scheduler.mode: FAIR
spark.hadoop.fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem
spark.driver.memory 210g
spark.executor.heartbeatInterval 100s
spark.sql.execution.arrow.pyspark.enabled true
And I'm getting the following error log:
23/02/01 14:59:37 WARN FairSchedulableBuilder: A job was submitted with scheduler pool 1320242899058103831, which has not been configured. This can happen when the file that pools are read from isn't set, or when that file doesn't contain 1320242899058103831. Created 1320242899058103831 with default configuration (schedulingMode: FIFO, minShare: 0, weight: 1)
23/02/01 15:04:38 ERROR TaskSchedulerImpl: Lost executor 8 on 10.0.2.16: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:04:38 ERROR TaskSchedulerImpl: Lost executor 10 on 10.0.2.15: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:04:38 ERROR TaskSchedulerImpl: Lost executor 16 on 10.0.2.14: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:04:38 ERROR TaskSchedulerImpl: Lost executor 17 on 10.0.2.21: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:04:38 ERROR TaskSchedulerImpl: Lost executor 12 on 10.0.2.19: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:04:38 ERROR TaskSchedulerImpl: Lost executor 14 on 10.0.2.7: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:05:04 ERROR TaskSchedulerImpl: Lost executor 15 on 10.0.2.13: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:05:04 ERROR TaskSchedulerImpl: Lost executor 5 on 10.0.2.17: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:05:04 ERROR TaskSchedulerImpl: Lost executor 7 on 10.0.2.22: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:05:04 ERROR TaskSchedulerImpl: Lost executor 9 on 10.0.2.9: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:06:10 ERROR TaskSchedulerImpl: Lost executor 4 on 10.0.2.20: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:06:10 ERROR TaskSchedulerImpl: Lost executor 13 on 10.0.2.12: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:06:10 WARN TransportChannelHandler: Exception in connection from /10.0.2.12:35968
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:750)
23/02/01 15:06:34 ERROR TaskSchedulerImpl: Lost executor 6 on 10.0.2.8: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:07:14 ERROR TaskSchedulerImpl: Lost executor 11 on 10.0.2.18: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:47:43 ERROR TaskSchedulerImpl: Lost executor 23 on 10.0.2.23: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:47:43 ERROR TaskSchedulerImpl: Lost executor 27 on 10.0.2.17: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:47:43 ERROR TaskSchedulerImpl: Lost executor 26 on 10.0.2.30: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:47:43 ERROR TaskSchedulerImpl: Lost executor 21 on 10.0.2.14: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:47:43 ERROR TaskSchedulerImpl: Lost executor 19 on 10.0.2.28: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:47:43 ERROR TaskSchedulerImpl: Lost executor 29 on 10.0.2.24: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:48:23 ERROR TaskSchedulerImpl: Lost executor 28 on 10.0.2.25: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:48:23 ERROR TaskSchedulerImpl: Lost executor 20 on 10.0.2.22: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:48:23 ERROR TaskSchedulerImpl: Lost executor 22 on 10.0.2.36: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:48:23 ERROR TaskSchedulerImpl: Lost executor 25 on 10.0.2.31: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:49:03 ERROR TaskSchedulerImpl: Lost executor 24 on 10.0.2.12: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:49:03 ERROR TaskSchedulerImpl: Lost executor 30 on 10.0.2.19: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:49:43 ERROR TaskSchedulerImpl: Lost executor 31 on 10.0.2.7: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:55:14 ERROR TaskSchedulerImpl: Lost executor 36 on 10.0.2.12: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:55:14 ERROR TaskSchedulerImpl: Lost executor 38 on 10.0.2.23: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:55:14 ERROR TaskSchedulerImpl: Lost executor 44 on 10.0.2.25: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:55:14 ERROR TaskSchedulerImpl: Lost executor 33 on 10.0.2.22: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:55:14 ERROR TaskSchedulerImpl: Lost executor 32 on 10.0.2.37: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:55:14 ERROR TaskSchedulerImpl: Lost executor 18 on 10.0.2.18: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:55:14 WARN TransportChannelHandler: Exception in connection from /10.0.2.18:43074
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:750)
23/02/01 15:55:54 ERROR TaskSchedulerImpl: Lost executor 42 on 10.0.2.19: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:55:54 ERROR TaskSchedulerImpl: Lost executor 41 on 10.0.2.17: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:55:54 ERROR TaskSchedulerImpl: Lost executor 39 on 10.0.2.30: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:55:54 ERROR TaskSchedulerImpl: Lost executor 45 on 10.0.2.7: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:56:33 ERROR TaskSchedulerImpl: Lost executor 0 on 10.0.2.10: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:56:33 ERROR TaskSchedulerImpl: Lost executor 34 on 10.0.2.14: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:56:34 WARN TransportChannelHandler: Exception in connection from /10.0.2.10:44824
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:750)
23/02/01 15:57:13 ERROR TaskSchedulerImpl: Lost executor 37 on 10.0.2.36: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:57:13 ERROR TaskSchedulerImpl: Lost executor 2 on 10.0.2.6: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 15:57:14 WARN TransportChannelHandler: Exception in connection from /10.0.2.6:59692
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:750)
23/02/01 15:57:53 ERROR TaskSchedulerImpl: Lost executor 40 on 10.0.2.31: Executor decommission: worker decommissioned because of kill request from HTTP endpoint (data migration disabled)
23/02/01 19:13:16 ERROR PythonDriverLocal: PythonDriver[ReplId-12527-1ebf7-97161-7](DriverRunning) Python Exception
py4j.Py4JException: Error while sending a command.
at py4j.CallbackClient.sendCommand(CallbackClient.java:397)
at py4j.CallbackClient.sendCommand(CallbackClient.java:356)
at py4j.reflection.PythonProxyHandler.invoke(PythonProxyHandler.java:106)
at com.sun.proxy.$Proxy60.is_finished(Unknown Source)
at com.databricks.backend.daemon.driver.PythonDriverLocal.$anonfun$repl$5(PythonDriverLocal.scala:205)
at com.databricks.backend.daemon.driver.PythonDriverLocal.$anonfun$repl$5$adapted(PythonDriverLocal.scala:205)
at com.databricks.backend.daemon.driver.PythonDriverLocal.withInterpLock(PythonDriverLocal.scala:611)
at com.databricks.backend.daemon.driver.PythonDriverLocal.repl(PythonDriverLocal.scala:205)
at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$execute$11(DriverLocal.scala:547)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:266)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:261)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:258)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:49)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:305)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:297)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:49)
at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:524)
at com.databricks.backend.daemon.driver.DriverWrapper.$anonfun$tryExecutingCommand$1(DriverWrapper.scala:611)
at scala.util.Try$.apply(Try.scala:213)
at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:603)
at com.databricks.backend.daemon.driver.DriverWrapper.executeCommandAndGetError(DriverWrapper.scala:522)
at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:557)
at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:427)
at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:370)
at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:221)
at java.lang.Thread.run(Thread.java:750)
Caused by: py4j.Py4JNetworkException: Error while sending a command: c
p0
is_finished
e
at py4j.CallbackConnection.sendCommand(CallbackConnection.java:153)
at py4j.CallbackClient.sendCommand(CallbackClient.java:384)
... 26 more
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:161)
at java.io.BufferedReader.readLine(BufferedReader.java:324)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at py4j.CallbackConnection.readBlockingResponse(CallbackConnection.java:169)
at py4j.CallbackConnection.sendCommand(CallbackConnection.java:148)
... 27 more
23/02/01 19:13:18 ERROR WSFSDriverManager$: Failed to get associated pid.
23/02/01 19:13:24 ERROR TaskSchedulerImpl: Lost executor 35 on 10.0.2.28: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
23/02/01 19:13:24 ERROR TaskSchedulerImpl: Lost executor 43 on 10.0.2.24: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
If someone can point me to how to solve this, or where the issue comes from, I would appreciate it.
Thanks in advance.
Let me know if more information is required.
With so little information it is hard to know what is going on. Generally speaking, the "Executor decommission: worker decommissioned because of kill request from HTTP endpoint" message is expected and is not necessarily a problem. Unless you know exactly why you added each Spark parameter, I suggest you run without any of them. Especially remove:
spark.driver.maxResultSize 0
spark.network.timeout 17280000s
spark.driver.memory 210g
spark.executor.heartbeatInterval 100s
The only one I would leave, if you know you are using Pandas/Python UDFs, is spark.sql.execution.arrow.pyspark.enabled true.
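As a minimal sketch of the case where that Arrow flag actually matters, here is a vectorized pandas UDF running on the executors (the function and column names are made up for illustration, not taken from your job):

import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf

spark = SparkSession.builder.getOrCreate()
# Arrow speeds up pandas<->Spark conversions and pandas UDF batches.
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")

@pandas_udf("double")
def plus_one(v: pd.Series) -> pd.Series:
    # Runs on the executors in vectorized Arrow batches.
    return v + 1.0

df = spark.range(1_000_000).withColumn("x", plus_one("id"))
df.show(5)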
From your Spark config, I can tell that you have a driver and multiple workers. You need to share that cluster configuration so we can check whether the sizing is appropriate. From the 210 GB you are giving to the driver, it is clear that you have a relatively large cluster. I think you are either hitting an OOM (out of memory) error somewhere in one of your tasks and the cluster is failing, or your code is just running on the driver with no worker participation. To know for sure, go to the Spark UI. You will see "Jobs", "Stages", etc.; click on "Executors" and check how many "Active Tasks" are running compared to the allocated cores. Refresh the "Executors" page a few times and watch whether the "Complete Tasks" value in the first row is changing. If it is not, look into the code for a command that runs only on the driver and find out why it is taking so long. Give us more information and someone will be able to give you a better answer.
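To illustrate what "running only on the driver" typically looks like, here is a hedged sketch with synthetic data (none of these names come from your job):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.range(1_000_000).withColumn("x", F.rand())

# Driver-only anti-pattern: toPandas() pulls every row to the driver, so the
# Executors tab shows no active tasks while the driver does all the work
# (and can run out of memory).
pdf = df.toPandas()
pdf["y"] = pdf["x"] * 2

# Distributed version: the same transformation expressed on the DataFrame
# runs as tasks on the executors, which is what you want to see in the UI.
out = df.withColumn("y", F.col("x") * 2)
out.count()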
Check out this very comprehensive article on node decommissioning in Spark 3.1+:
https://www.waitingforcode.com/apache-spark/what-new-apache-spark-3.1-nodes-decommissioning/read
High points from the article (which I believe you should read through):
With data migration disabled, you want to figure out how much temporary data (shuffle and cached data) is recomputed, to understand how this impacts the performance of your job.
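If that recomputation turns out to be the bottleneck, open-source Spark 3.1+ has settings that let a decommissioning worker migrate its shuffle and cached RDD blocks instead of recomputing them. This is only a sketch of the relevant keys to add to the cluster's Spark config; I have not verified how Databricks 9.1 LTS exposes or overrides them, so test before relying on it:
spark.decommission.enabled true
spark.storage.decommission.enabled true
spark.storage.decommission.shuffleBlocks.enabled true
spark.storage.decommission.rddBlocks.enabled true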