 

PySpark: casting string to float when reading a CSV file

I'm reading a CSV file into a DataFrame:

dataframe = spark.read.csv(fileName, header=True)

but the data types in the DataFrame are all string. I want to change them to float. Is there an efficient way to do this?

Asked Oct 07 '16 by Alex

People also ask

How do you cast to float in PySpark?

In PySpark SQL, you can use the cast() function to convert a DataFrame column from StringType to DoubleType or FloatType. The function takes either a string representing the target type or any type that is a subclass of DataType.
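For example, a minimal sketch (assuming a DataFrame df with a string column named value; both names are made up for illustration):

from pyspark.sql.functions import col
from pyspark.sql.types import FloatType

# cast() accepts either a DataType object or a type name string
df = df.withColumn("value", col("value").cast(FloatType()))
df = df.withColumn("value", col("value").cast("double"))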

How do you cast data in PySpark?

In PySpark, you can cast or change a DataFrame column's data type using the cast() function of the Column class. It can be applied through withColumn(), selectExpr(), or a SQL expression, for example to cast from String to Int (IntegerType), String to Boolean, and so on.
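A brief sketch of those three approaches (the age and name columns and the people view are hypothetical):

from pyspark.sql.functions import col

# 1. withColumn() with cast()
df1 = df.withColumn("age", col("age").cast("int"))

# 2. selectExpr() with a SQL CAST expression
df2 = df.selectExpr("CAST(age AS int) AS age", "name")

# 3. A SQL expression on a temporary view
df.createOrReplaceTempView("people")
df3 = spark.sql("SELECT CAST(age AS int) AS age, name FROM people")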

What is DoubleType in PySpark?

DoubleType – A floating-point double value. IntegerType – An integer value. LongType – A long integer value. NullType – A null value. ShortType – A short integer value.
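These types live in pyspark.sql.types and are typically used to build an explicit schema; a small sketch with made-up field names:

from pyspark.sql.types import StructType, StructField, DoubleType, IntegerType, LongType, ShortType

# An explicit schema mixing the numeric types listed above
schema = StructType([
    StructField("price", DoubleType(), True),
    StructField("quantity", IntegerType(), True),
    StructField("id", LongType(), True),
    StructField("code", ShortType(), True),
])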


2 Answers

The most straightforward way to achieve this is by casting.

from pyspark.sql.functions import col
dataframe = dataframe.withColumn("float", col("column").cast("double"))
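If several string columns need to become doubles, the same cast can be applied in a loop; a sketch assuming the column names are already known:

from pyspark.sql.functions import col

# Cast each listed column from string to double, keeping the original column names
for c in ["a", "b", "c", "d"]:
    dataframe = dataframe.withColumn(c, col(c).cast("double"))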
Answered Sep 28 '22 by Alberto Bonsanto


If you want to do the casting when reading the CSV, you can use the inferSchema argument when reading the data. Let's try it with a small test CSV file:

$ cat ../data/test.csv
a,b,c,d
5.0, 1.0, 1.0, 3.0
2.0, 0.0, 3.0, 4.0
4.0, 0.0, 0.0, 6.0

Now, if we read it as you did, we will have string values:

>>> df_csv = spark.read.csv("../data/test.csv", header=True)
>>> print(df_csv.dtypes)
[('a', 'string'), ('b', 'string'), ('c', 'string'), ('d', 'string')]

However, if we set inferSchema to True, it will correctly identify them as doubles:

>>> df_csv2 = spark.read.csv("../data/test.csv", header=True, inferSchema=True)
>>> print(df_csv2.dtypes)
[('a', 'double'), ('b', 'double'), ('c', 'double'), ('d', 'double')]

Note that this approach requires an extra pass over the data. You can find more information in the DataFrameReader CSV documentation.
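If the extra pass is a concern, one alternative (a sketch, not from the original answer) is to pass an explicit schema to the reader so no inference is needed:

from pyspark.sql.types import StructType, StructField, DoubleType

# Declare every column as double up front; the reader skips schema inference
schema = StructType([StructField(name, DoubleType(), True) for name in ["a", "b", "c", "d"]])
df_csv3 = spark.read.csv("../data/test.csv", header=True, schema=schema)
print(df_csv3.dtypes)  # [('a', 'double'), ('b', 'double'), ('c', 'double'), ('d', 'double')]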

Answered Sep 28 '22 by Mikel