StandardScaler in Spark not working as expected

Any idea why spark would be doing this for StandardScaler? As per the definition of StandardScaler:

The StandardScaler standardizes a set of features to have zero mean and a standard deviation of 1. The flag withStd will scale the data to unit standard deviation while the flag withMean (false by default) will center the data prior to scaling it.

>>> tmpdf.show(4)
+----+----+----+------------+
|int1|int2|int3|temp_feature|
+----+----+----+------------+
|   1|   2|   3|       [2.0]|
|   7|   8|   9|       [8.0]|
|   4|   5|   6|       [5.0]|
+----+----+----+------------+

>>> sScaler = StandardScaler(withMean=True, withStd=True).setInputCol("temp_feature")
>>> sScaler.fit(tmpdf).transform(tmpdf).show()
+----+----+----+------------+-------------------------------------------+
|int1|int2|int3|temp_feature|StandardScaler_4fe08ca180ab163e4120__output|
+----+----+----+------------+-------------------------------------------+
|   1|   2|   3|       [2.0]|                                     [-1.0]|
|   7|   8|   9|       [8.0]|                                      [1.0]|
|   4|   5|   6|       [5.0]|                                      [0.0]|
+----+----+----+------------+-------------------------------------------+

In numpy world

>>> x
array([2., 8., 5.])
>>> (x - x.mean())/x.std()
array([-1.22474487,  1.22474487,  0.        ])

In sklearn world

>>> scaler = StandardScaler(with_mean=True, with_std=True)
>>> data
[[2.0], [8.0], [5.0]]
>>> print(scaler.fit(data).transform(data))
[[-1.22474487]
 [ 1.22474487]
 [ 0.        ]]
Shrikar asked Aug 08 '18
1 Answer

The reason that your results are not as expected is because pyspark.ml.feature.StandardScaler uses the unbiased sample standard deviation instead of the population standard deviation.

From the docs:

The “unit std” is computed using the corrected sample standard deviation, which is computed as the square root of the unbiased sample variance.

If you rerun your numpy code with the corrected (unbiased) sample standard deviation, you'll see the same results:

import numpy as np

x = np.array([2., 8., 5.])
print((x - x.mean())/x.std(ddof=1))
#array([-1.,  1.,  0.])

From a modeling perspective, this almost surely isn't a problem (unless your data is the entire population, which is pretty much never the case). Also keep in mind that for large sample sizes, the sample standard deviation approaches the population standard deviation. So if you have many rows in your DataFrame, the difference here will be negligible.
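As a quick numpy check (my addition, not part of the original answer), the gap between the two standard deviations shrinks as the sample grows: their ratio is exactly sqrt(n / (n - 1)) regardless of the data, which tends to 1.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (3, 30, 3000):
    x = rng.normal(size=n)
    # ratio of sample std (ddof=1) to population std (ddof=0);
    # it equals sqrt(n / (n - 1)) for any data, so it approaches 1 as n grows
    print(n, x.std(ddof=1) / x.std(ddof=0))
```

At n=3 the sample std is already about 22% larger than the population std, but by n=3000 the two agree to four decimal places.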


However, if you insisted on having your scaler use the population standard deviation, one "hacky" way is to add a row to your DataFrame that is the mean of the columns.

Recall that the population standard deviation is defined as the square root of the mean of the squared differences from the mean. Or as a function:

# using the same x as above
def popstd(x):
    # population std: sqrt of the mean squared deviation from the mean
    return np.sqrt(sum((xi - x.mean())**2 for xi in x) / len(x))

print(popstd(x))
#2.449489742783178

print(x.std())
#2.449489742783178

The unbiased standard deviation differs only in that it divides by len(x)-1 instead of len(x). So if you add a sample equal to the mean, the denominator grows to (len(x)+1)-1 = len(x) while the sum of squared deviations and the mean are unchanged: the sample standard deviation of the augmented data equals the population standard deviation of the original.
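A quick numpy sanity check of that argument (my addition, not from the original answer): the appended value contributes zero squared deviation, so the sample std of the augmented array matches the population std of the original:

```python
import numpy as np

x = np.array([2., 8., 5.])
augmented = np.append(x, x.mean())  # append a value equal to the mean

# the appended value contributes zero squared deviation, while the ddof=1
# divisor becomes (n + 1) - 1 = n, i.e. the population divisor for x
print(x.std(ddof=0))          # population std of the original data
print(augmented.std(ddof=1))  # sample std of the augmented data
# both print 2.449489742783178
```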

Suppose you had the following DataFrame:

df = spark.createDataFrame(
    np.array(range(1,10,1)).reshape(3,3).tolist(),
    ["int1", "int2", "int3"]
)
df.show()
#+----+----+----+
#|int1|int2|int3|
#+----+----+----+
#|   1|   2|   3|
#|   4|   5|   6|
#|   7|   8|   9|
#+----+----+----+

Union this DataFrame with the mean value for each column:

import pyspark.sql.functions as f
# This is equivalent to UNION ALL in SQL
df2 = df.union(df.select(*[f.avg(c).alias(c) for c in df.columns]))

Now assemble the feature vector and scale it:

from pyspark.ml.feature import VectorAssembler, StandardScaler
va = VectorAssembler(inputCols=["int2"], outputCol="temp_feature")

tmpdf = va.transform(df2)
sScaler = StandardScaler(
    withMean=True, withStd=True, inputCol="temp_feature", outputCol="scaled"
)
sScaler.fit(tmpdf).transform(tmpdf).show()
#+----+----+----+------------+---------------------+
#|int1|int2|int3|temp_feature|scaled               |
#+----+----+----+------------+---------------------+
#|1.0 |2.0 |3.0 |[2.0]       |[-1.2247448713915892]|
#|4.0 |5.0 |6.0 |[5.0]       |[0.0]                |
#|7.0 |8.0 |9.0 |[8.0]       |[1.2247448713915892] |
#|4.0 |5.0 |6.0 |[5.0]       |[0.0]                |
#+----+----+----+------------+---------------------+
pault answered Oct 16 '22