The sample DataFrame from Florian:
+-----------+-----------+-----------+
|ball_column|   keep_the|hall_column|
+-----------+-----------+-----------+
|          0|          7|         14|
|          1|          8|         15|
|          2|          9|         16|
|          3|         10|         17|
|          4|         11|         18|
|          5|         12|         19|
|          6|         13|         20|
+-----------+-----------+-----------+
The first part of the code drops every column whose name contains a word from the banned list:
# first part of the code
banned_list = ["ball","fall","hall"]
condition = lambda col: any(word in col for word in banned_list)
new_df = df.drop(*filter(condition, df.columns))
So the above piece of code should drop the ball_column and hall_column.
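As a quick sanity check (a sketch, assuming the sample frame above), only keep_the should survive the drop:

print(new_df.columns)  # expected: ['keep_the']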
The second part of the code buckets specific columns from a list. For this example, we will bucket the only remaining column, keep_the.
bagging = Bucketizer(
    splits=[-float("inf"), 10, 100, float("inf")],
    inputCol='keep_the',
    outputCol='keep_the')
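For reference, these three splits define the buckets value < 10, 10 <= value < 100, and value >= 100. A minimal sketch of what they produce, assuming a running SparkSession named spark (the demo column name v is made up):

demo = spark.createDataFrame([(5.0,), (50.0,), (500.0,)], ['v'])
Bucketizer(splits=[-float("inf"), 10, 100, float("inf")],
           inputCol='v', outputCol='v_bucket').transform(demo).show()
# 5.0 -> 0.0, 50.0 -> 1.0, 500.0 -> 2.0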
Bucketing the column through a pipeline then looks as follows (note that stages expects a list):
model = Pipeline(stages=[bagging]).fit(df)
bucketedData = model.transform(df)
How can I add the first block of the code (banned_list, condition, new_df) to the ML pipeline as a stage?
I believe this does what you want. You can create a custom Transformer and add it to the stages in the Pipeline. Note that I slightly changed your functions because we do not have access to all the variables you mentioned, but the concept remains the same. Hope this helps!
from pyspark.ml import Pipeline, Transformer
from pyspark.ml.feature import Bucketizer
from pyspark.sql import DataFrame, SparkSession
from typing import Iterable
import pandas as pd

spark = SparkSession.builder.getOrCreate()
# CUSTOM TRANSFORMER ----------------------------------------------------------------
class ColumnDropper(Transformer):
    """
    A custom Transformer which drops all columns that have at least one of the
    words from the banned_list in the name.
    """
    def __init__(self, banned_list: Iterable[str]):
        super(ColumnDropper, self).__init__()
        self.banned_list = banned_list

    def _transform(self, df: DataFrame) -> DataFrame:
        # Drop every column whose name contains any of the banned words.
        df = df.drop(*[x for x in df.columns if any(y in x for y in self.banned_list)])
        return df
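# NOTE: Transformer.transform() is the public entry point; it handles any
# passed params and then delegates to _transform(), so overriding
# _transform() is all this class needs.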
# SAMPLE DATA -----------------------------------------------------------------------
df = pd.DataFrame({'ball_column': [0, 1, 2, 3, 4, 5, 6],
                   'keep_the': [6, 5, 4, 3, 2, 1, 0],
                   'hall_column': [2, 2, 2, 2, 2, 2, 2]})
df = spark.createDataFrame(df)
# EXAMPLE 1: USE THE TRANSFORMER WITHOUT PIPELINE -----------------------------------
column_dropper = ColumnDropper(banned_list=["ball", "fall", "hall"])
df_example = column_dropper.transform(df)  # df_example now only has 'keep_the'

# EXAMPLE 2: USE THE TRANSFORMER WITH PIPELINE --------------------------------------
column_dropper = ColumnDropper(banned_list=["ball", "fall", "hall"])
bagging = Bucketizer(
    splits=[-float("inf"), 3, float("inf")],
    inputCol='keep_the',
    outputCol='keep_the_bucket')
model = Pipeline(stages=[column_dropper, bagging]).fit(df)
bucketedData = model.transform(df)
bucketedData.show()
Output:
+--------+---------------+
|keep_the|keep_the_bucket|
+--------+---------------+
|       6|            1.0|
|       5|            1.0|
|       4|            1.0|
|       3|            1.0|
|       2|            0.0|
|       1|            0.0|
|       0|            0.0|
+--------+---------------+
As expected, every keep_the value of 3 or above lands in bucket 1.0 and the rest in bucket 0.0. Also note that if your custom method requires fitting (e.g. a custom StringIndexer), you should also create a custom Estimator:
class CustomTransformer(Transformer):
    def _transform(self, df) -> DataFrame:
        ...

class CustomEstimator(Estimator):
    def _fit(self, df) -> CustomTransformer:
        ...
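For illustration, here is a minimal sketch of such a pair; the MeanFiller name and the mean-imputation logic are made up for this example, not part of the question:

from pyspark.ml import Estimator, Transformer
from pyspark.sql import DataFrame
import pyspark.sql.functions as F

class MeanFiller(Transformer):
    """Hypothetical transformer that fills nulls with a precomputed mean."""
    def __init__(self, col_name, mean_value):
        super(MeanFiller, self).__init__()
        self.col_name = col_name
        self.mean_value = mean_value

    def _transform(self, df) -> DataFrame:
        # Replace nulls with the learned mean (coalesce keeps non-null values).
        return df.withColumn(self.col_name,
                             F.coalesce(F.col(self.col_name), F.lit(self.mean_value)))

class MeanFillerEstimator(Estimator):
    """Hypothetical estimator that learns the column mean during fit()."""
    def __init__(self, col_name):
        super(MeanFillerEstimator, self).__init__()
        self.col_name = col_name

    def _fit(self, df) -> MeanFiller:
        mean_value = df.agg(F.avg(self.col_name)).first()[0]
        return MeanFiller(self.col_name, mean_value)

An instance of MeanFillerEstimator can then go into Pipeline(stages=[...]) like any built-in stage: Pipeline.fit() calls its _fit() and uses the returned transformer in the fitted PipelineModel.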