I have a Spark DataFrame like the one below:
topics.show(2)
+-----+--------------------+--------------------+--------------------+
|topic| termIndices| termWeights| topics_words|
+-----+--------------------+--------------------+--------------------+
| 0|[0, 39, 68, 43, 5...|[0.06362107696025...|[, management, sa...|
| 1|[3, 1, 8, 6, 4, 1...|[0.03164821806301...|[objectives, lear...|
+-----+--------------------+--------------------+--------------------+
only showing top 2 rows
However, when I try to convert it to a pandas DataFrame using the method below, which works in Spark 1.6, I get an error.
topics.toPandas()
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-165-4c1231b68769> in <module>()
----> 1 topics.toPandas()
/Users/i854319/spark2/python/pyspark/sql/dataframe.pyc in toPandas(self)
1440 """
1441 import pandas as pd
-> 1442 return pd.DataFrame.from_records(self.collect(), columns=self.columns)
1443
1444 ##########################################################################################
/Users/i854319/spark2/python/pyspark/sql/dataframe.pyc in collect(self)
307 [Row(age=2, name=u'Alice'), Row(age=5, name=u'Bob')]
308 """
--> 309 with SCCallSiteSync(self._sc) as css:
310 port = self._jdf.collectToPython()
311 return list(_load_from_socket(port, BatchedSerializer(PickleSerializer())))
/Users/i854319/spark2/python/pyspark/traceback_utils.pyc in __enter__(self)
70 def __enter__(self):
71 if SCCallSiteSync._spark_stack_depth == 0:
---> 72 self._context._jsc.setCallSite(self._call_site)
73 SCCallSiteSync._spark_stack_depth += 1
74
AttributeError: 'NoneType' object has no attribute 'setCallSite'
So I am not sure whether there is a bug in this method in Spark 2.0.2 or something else is going wrong.
Copying my answer from a related question:
There's an open issue around this:
https://issues.apache.org/jira/browse/SPARK-27335?jql=text%20~%20%22setcallsite%22
The poster suggests forcing your DataFrame's backend back into sync with your active Spark session and context:
# Re-point the stale DataFrame's internals at the live session and context
df.sql_ctx.sparkSession._jsparkSession = spark._jsparkSession
df._sc = spark._sc
This worked for us; hopefully it can work in other cases as well.
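For concreteness, here is a minimal sketch of that workaround applied to the asker's topics DataFrame. It assumes spark is the live SparkSession (e.g. the one a notebook creates at startup); the underscored attributes are private internals, so treat this as a stopgap hack, not a supported API:

from pyspark.sql import SparkSession

# Get (or reuse) the currently active session.
spark = SparkSession.builder.getOrCreate()

# Re-sync the stale DataFrame's references to the live JVM session/context.
topics.sql_ctx.sparkSession._jsparkSession = spark._jsparkSession
topics._sc = spark._sc

# The conversion should now run without the NoneType setCallSite error.
topics_pd = topics.toPandas()

Since this mutates private attributes, it may break across Spark versions; if the underlying SparkContext was actually stopped, recreating the session and re-reading the data is the more robust fix.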