Partitioning by multiple columns in PySpark with columns in a list

My question is similar to this thread: Partitioning by multiple columns in Spark SQL

but I'm working in Pyspark rather than Scala and I want to pass in my list of columns as a list. I want to do something like this:

column_list = ["col1","col2"]
win_spec = Window.partitionBy(column_list)

I can get the following to work:

win_spec = Window.partitionBy(col("col1"))

This also works:

col_name = "col1"
win_spec = Window.partitionBy(col(col_name))

And this also works:

win_spec = Window.partitionBy([col("col1"), col("col2")])
asked Mar 12 '18 by prk

People also ask

Can you partition by multiple fields in PySpark?

You can create partitions on multiple columns using PySpark's partitionBy(). Just pass the columns you want to partition by as arguments to the method.

How do I partition a PySpark DataFrame based on column values?

partitionBy() is used to partition by column values when writing a DataFrame to disk or a file system. When you write a DataFrame to disk with partitionBy(), PySpark splits the records on the partition columns and stores each partition's data in its own sub-directory.


2 Answers

Convert column names to column expressions with a list comprehension [col(x) for x in column_list]:

from pyspark.sql import Window
from pyspark.sql.functions import col

column_list = ["col1", "col2"]
win_spec = Window.partitionBy([col(x) for x in column_list])
answered Sep 18 '22 by Psidom


Your first attempt should work.

Consider the following example:

import pyspark.sql.functions as f
from pyspark.sql import SparkSession, Window

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [
        ("a", "apple", 1),
        ("a", "orange", 2),
        ("a", "orange", 3),
        ("b", "orange", 3),
        ("b", "orange", 5)
    ],
    ["name", "fruit", "value"]
)
df.show()
#+----+------+-----+
#|name| fruit|value|
#+----+------+-----+
#|   a| apple|    1|
#|   a|orange|    2|
#|   a|orange|    3|
#|   b|orange|    3|
#|   b|orange|    5|
#+----+------+-----+

Suppose you want to compute each row's fraction of the sum over its partition, grouping by the first two columns:

cols = ["name", "fruit"]
w = Window.partitionBy(cols)
df.select(cols + [(f.col('value') / f.sum('value').over(w)).alias('fraction')]).show()

#+----+------+--------+
#|name| fruit|fraction|
#+----+------+--------+
#|   a| apple|     1.0|
#|   b|orange|   0.375|
#|   b|orange|   0.625|
#|   a|orange|     0.6|
#|   a|orange|     0.4|
#+----+------+--------+
answered Sep 18 '22 by pault