 

How to use window functions in PySpark?


I'm trying to use some window functions (ntile and percentRank) for a data frame but I don't know how to use them.

Can anyone help me with this, please? There are no examples about it in the Python API documentation.

Specifically, I'm trying to get quantiles of a numeric field in my data frame.

I'm using Spark 1.4.0.

jegordon asked Aug 06 '15 14:08

People also ask

How does window function work in PySpark?

PySpark window functions perform statistical operations such as rank, row number, etc. on a group, frame, or collection of rows and return a result for each row individually. They are also increasingly popular for data transformations.
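For instance, a minimal sketch of per-row results over a grouped window (assuming a DataFrame df with columns k and v as in the answer below; rowNumber is the Spark 1.4-era name, renamed to row_number in later releases):

from pyspark.sql.window import Window
from pyspark.sql.functions import rank, rowNumber

# each input row gets its own rank and row number within its group
w = Window.partitionBy("k").orderBy("v")

df.select("k", "v",
          rank().over(w).alias("rank"),
          rowNumber().over(w).alias("row_number"))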

What is the use of window function in Spark?

Window functions allow users of Spark SQL to calculate results such as the rank of a given row or a moving average over a range of input rows. They significantly improve the expressiveness of Spark's SQL and DataFrame APIs.
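For example, a moving average over a range of input rows could be sketched like this (again assuming a DataFrame df with columns k and v; rowsBetween offsets are relative to the current row):

from pyspark.sql.window import Window
from pyspark.sql.functions import avg

# mean of the current row and the two rows before it, within each group
w_ma = Window.partitionBy("k").orderBy("v").rowsBetween(-2, 0)

df.select("k", "v", avg("v").over(w_ma).alias("moving_avg"))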


1 Answer

To be able to use a window function you have to create a window first. The definition is pretty much the same as in normal SQL: you can define either ordering, partitioning, or both. First let's create some dummy data:

import numpy as np

np.random.seed(1)

keys = ["foo"] * 10 + ["bar"] * 10
# 10 values per key so that keys and values line up one-to-one
values = np.hstack([np.random.normal(0, 1, 10), np.random.normal(10, 1, 10)])

df = sqlContext.createDataFrame([
    {"k": k, "v": round(float(v), 3)} for k, v in zip(keys, values)])

Make sure you're using HiveContext (Spark < 2.0 only):

from pyspark.sql import HiveContext

assert isinstance(sqlContext, HiveContext)

Create a window:

from pyspark.sql.window import Window

w = Window.partitionBy(df.k).orderBy(df.v)

which is equivalent to

(PARTITION BY k ORDER BY v)  

in SQL.
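The same window can also be used from raw SQL against a registered table; a sketch (registerTempTable is the Spark 1.x API, and "df" is an arbitrary table name):

df.registerTempTable("df")

sqlContext.sql("""
    SELECT k, v,
           PERCENT_RANK() OVER (PARTITION BY k ORDER BY v) AS percent_rank
    FROM df
""")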

As a rule of thumb, window definitions should always contain a PARTITION BY clause; otherwise Spark will move all the data to a single partition. ORDER BY is required for some functions, while in other cases (typically aggregates) it may be optional.
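For illustration (not part of the original answer), an unpartitioned window is legal but collapses everything into one partition:

from pyspark.sql.window import Window
from pyspark.sql.functions import percentRank

# no partitionBy: Spark computes this over a single partition,
# which does not scale to large data
w_global = Window.orderBy(df.v)

df.select("k", "v", percentRank().over(w_global).alias("global_percent_rank"))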

There are also two optional clauses which can be used to define the window span - ROWS BETWEEN and RANGE BETWEEN. They won't be useful for us in this particular scenario.
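For reference, though, a quick sketch of ROWS BETWEEN - a running sum per group (in Spark 1.x an unbounded start is usually written as -sys.maxsize; Spark 2.1+ has Window.unboundedPreceding):

import sys

from pyspark.sql.window import Window
from pyspark.sql.functions import sum as sum_

# running total of v within each group, from the first row up to the current one
w_cum = Window.partitionBy("k").orderBy("v").rowsBetween(-sys.maxsize, 0)

df.select("k", "v", sum_("v").over(w_cum).alias("running_sum"))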

Finally we can use it for a query:

from pyspark.sql.functions import percentRank, ntile

df.select(
    "k", "v",
    percentRank().over(w).alias("percent_rank"),
    ntile(3).over(w).alias("ntile3")
)

Note that ntile is not related in any way to quantiles.
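Since the question asks for quantiles: one way to get them from the window above (a sketch, not part of the original answer) is to filter on percent_rank and take the smallest qualifying value per group; in Spark 2.0+ DataFrame.approxQuantile is a simpler alternative:

from pyspark.sql.functions import percentRank, min as min_

with_pr = df.select("k", "v", percentRank().over(w).alias("pr"))

# smallest v whose percent_rank reaches 0.5: a per-group median estimate
medians = (with_pr
           .where(with_pr.pr >= 0.5)
           .groupBy("k")
           .agg(min_("v").alias("median")))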

zero323 answered Sep 28 '22 02:09