When I set PyArrow to true on our Spark session and then run toPandas(), it throws this error:
"toPandas attempted Arrow optimization because 'spark.sql.execution.arrow.enabled' is set to true. Please set it to false to disable this"
May I know why this happens?
By default PyArrow is disabled, but in your case it appears to be enabled. You have to disable this configuration manually, either for the current Spark session or permanently in the Spark configuration file.
If you want to disable it for all of your Spark sessions, add the line below to your Spark configuration at SPARK_HOME/conf/spark-defaults.conf:
spark.sql.execution.arrow.enabled=false
That said, I would suggest using PyArrow if you are using pandas in your Spark application: it speeds up the data conversion between Spark and pandas.
For more on PyArrow please visit my blog.
I was facing the same problem with PyArrow.
My environment:
When I try to enable PyArrow optimization like this:
spark.conf.set('spark.sql.execution.arrow.enabled', 'true')
I get the following warning:
createDataFrame attempted Arrow optimization because 'spark.sql.execution.arrow.enabled' is set to true; however failed by the reason below: TypeError: 'JavaPackage' object is not callable
I solved this problem as follows:
import os
from pyspark import SparkConf

spark_config = SparkConf().getAll()
for conf in spark_config:
    print(conf)
This will print the key-value pairs of spark configurations.
('spark.yarn.jars', 'path\to\jar\files')
jar_names = os.listdir('path\to\jar\files')
for jar_name in jar_names:
    if 'arrow' in jar_name:
        print(jar_name)
Found the following jars:
arrow-format-0.10.0.jar
arrow-memory-0.10.0.jar
arrow-vector-0.10.0.jar
spark.conf.set('spark.driver.extraClassPath', 'path\to\jar\files\arrow-format-0.10.0.jar:path\to\jar\files\arrow-memory-0.10.0.jar:path\to\jar\files\arrow-vector-0.10.0.jar')
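One caveat: spark.driver.extraClassPath is a JVM option, so it generally only takes effect if it is supplied before the driver JVM starts (via spark-defaults.conf, spark-submit, or SparkSession.builder.config), not via spark.conf.set on a running session. A sketch of building the classpath string, assuming the jar directory and names from the listing above; os.pathsep picks the separator the JVM expects on the current platform (':' on Linux/macOS, ';' on Windows):

```python
import os

# Placeholder directory and the Arrow jar names found above
jar_dir = os.path.join('path', 'to', 'jar', 'files')
jar_names = [
    'arrow-format-0.10.0.jar',
    'arrow-memory-0.10.0.jar',
    'arrow-vector-0.10.0.jar',
]

# Join full jar paths with the platform's classpath separator
classpath = os.pathsep.join(os.path.join(jar_dir, name) for name in jar_names)

# Pass it when the session is created, before the driver JVM starts, e.g.:
# SparkSession.builder.config('spark.driver.extraClassPath', classpath).getOrCreate()
print(classpath)
```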