
How to map features from the output of a VectorAssembler back to the column names in Spark ML?

I'm trying to run a linear regression in PySpark and I want to create a table containing summary statistics such as coefficients, p-values and t-values for each column in my dataset. However, in order to train a linear regression model I had to create a feature vector using Spark's VectorAssembler, so now each row has a single feature vector and the target column. When I try to access Spark's built-in regression summary statistics, they give me a raw list of numbers for each statistic, with no way to tell which attribute corresponds to which value; that is very hard to figure out manually with a large number of columns. How do I map these values back to the column names?

For example, I have my current output as something like this:

Coefficients: [-187.807832407,-187.058926726,85.1716641376,10595.3352802,-127.258892837,-39.2827730493,-1206.47228704,33.7078197705,99.9956812528]

P-Value: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.18589731365614548, 0.275173571416679, 0.0]

t-statistic: [-23.348593508995318, -44.72813283953004, 19.836508234714472, 144.49248881747755, -16.547272230754242, -9.560681351483941, -19.563547400189073, 1.3228378389036228, 1.0912415361190977, 20.383256127350474]

Coefficient Standard Errors: [8.043646497811427, 4.182131353367049, 4.293682291754585, 73.32793120907755, 7.690626652102948, 4.108783841348964, 61.669402913526625, 25.481445101737247, 91.63478289909655, 609.7007361468519]

These numbers mean nothing unless I know which attribute they correspond to. But in my DataFrame I only have one column called "features" which contains rows of sparse Vectors.

This is an even bigger problem when I have one-hot encoded features, because a variable encoded with length n will produce n corresponding coefficients/p-values/t-values, etc.
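To make the goal concrete: once an ordered list of feature names matching the assembled vector is available, mapping the raw statistic arrays back to names is just a zip. The names and numbers below are made up for illustration (they are not the actual output above):

```python
# Hypothetical feature names, in the same order the VectorAssembler
# placed them in the "features" vector (illustrative values only).
feature_names = ["x1_enc_a", "x1_enc_b", "x2_enc_foo", "x3", "x4"]
coefficients = [-187.8, -187.0, 85.2, 10595.3, -127.3]
p_values = [0.0, 0.0, 0.0, 0.186, 0.275]

# One row per feature: (name, coefficient, p-value)
summary = [
    (name, coef, p)
    for name, coef, p in zip(feature_names, coefficients, p_values)
]

for name, coef, p in summary:
    print(f"{name:12s} coef={coef:10.1f} p={p:.3f}")
```

The hard part, addressed by the answers below, is recovering `feature_names` in the right order from the vector column's metadata.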

asked Mar 21 '17 by charmander


3 Answers

As of today Spark doesn't provide any method that can do it for you, so you have to create your own. Let's say your data looks like this:

import random

random.seed(1)

df = sc.parallelize([(
    random.choice([0.0, 1.0]),
    random.choice(["a", "b", "c"]),
    random.choice(["foo", "bar"]),
    random.randint(0, 100),
    random.random(),
) for _ in range(100)]).toDF(["label", "x1", "x2", "x3", "x4"])

and is processed using the following pipeline:

from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.ml import Pipeline
from pyspark.ml.regression import LinearRegression

indexers = [
    StringIndexer(inputCol=c, outputCol="{}_idx".format(c)) for c in ["x1", "x2"]]
encoders = [
    OneHotEncoder(
        inputCol=idx.getOutputCol(),
        outputCol="{0}_enc".format(idx.getOutputCol())) for idx in indexers]
assembler = VectorAssembler(
    inputCols=[enc.getOutputCol() for enc in encoders] + ["x3", "x4"],
    outputCol="features")

pipeline = Pipeline(
    stages=indexers + encoders + [assembler, LinearRegression()])
model = pipeline.fit(df)

Get the LinearRegressionModel:

lrm = model.stages[-1] 

Transform the data:

transformed = model.transform(df)

Extract and flatten ML attributes:

from itertools import chain

attrs = sorted(
    (attr["idx"], attr["name"]) for attr in (chain(*transformed
        .schema[lrm.summary.featuresCol]
        .metadata["ml_attr"]["attrs"].values())))

and map to the output:

[(name, lrm.summary.pValues[idx]) for idx, name in attrs]

[('x1_idx_enc_a', 0.26400012641279824),
 ('x1_idx_enc_c', 0.06320192217171572),
 ('x2_idx_enc_foo', 0.40447778902400433),
 ('x3', 0.1081883594783335),
 ('x4', 0.4545851609776568)]

[(name, lrm.coefficients[idx]) for idx, name in attrs]

[('x1_idx_enc_a', 0.13874401585637453),
 ('x1_idx_enc_c', 0.23498565469334595),
 ('x2_idx_enc_foo', -0.083558932128022873),
 ('x3', 0.0030186112903237442),
 ('x4', -0.12951394186593695)]
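The extract-and-flatten step can be wrapped in a small reusable helper. This is a sketch that runs on a mocked metadata dict mirroring the structure Spark writes into `df.schema["features"].metadata["ml_attr"]` (the names and indices below are illustrative, not real Spark output):

```python
from itertools import chain

def feature_names_from_metadata(ml_attr):
    """Flatten the per-type attribute groups under ml_attr["attrs"]
    into a list of names ordered by vector index."""
    attrs = sorted(
        (attr["idx"], attr["name"])
        for attr in chain(*ml_attr["attrs"].values())
    )
    return [name for _, name in attrs]

# Mocked metadata with the same shape Spark produces for an assembled
# vector column (illustrative values only).
ml_attr = {
    "attrs": {
        "binary": [
            {"idx": 0, "name": "x1_idx_enc_a"},
            {"idx": 1, "name": "x1_idx_enc_c"},
            {"idx": 2, "name": "x2_idx_enc_foo"},
        ],
        "numeric": [
            {"idx": 3, "name": "x3"},
            {"idx": 4, "name": "x4"},
        ],
    }
}

print(feature_names_from_metadata(ml_attr))
# → ['x1_idx_enc_a', 'x1_idx_enc_c', 'x2_idx_enc_foo', 'x3', 'x4']
```

With a real DataFrame you would pass `transformed.schema[lrm.summary.featuresCol].metadata["ml_attr"]` instead of the mocked dict.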
answered Sep 17 '22 by zero323

You can see the actual order of the columns here:

df.schema["features"].metadata["ml_attr"]["attrs"]

There will usually be two groups, "binary" and "numeric":

import pandas as pd

pd.DataFrame(
    df.schema["features"].metadata["ml_attr"]["attrs"]["binary"] +
    df.schema["features"].metadata["ml_attr"]["attrs"]["numeric"]
).sort_values("idx")

This should give the exact order of all the columns.
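The same idea works on a mocked metadata dict, which makes the sorting step easy to see. Here the "binary" and "numeric" groups are deliberately out of order, and `sort_values("idx")` recovers the vector's column order (names and indices are illustrative, not real Spark output):

```python
import pandas as pd

# Mocked ml_attr["attrs"] groups; in practice these come from
# df.schema["features"].metadata["ml_attr"]["attrs"].
binary = [
    {"idx": 1, "name": "x1_idx_enc_c"},
    {"idx": 0, "name": "x1_idx_enc_a"},
]
numeric = [
    {"idx": 3, "name": "x4"},
    {"idx": 2, "name": "x3"},
]

# Concatenate both groups and sort by vector index to recover the
# exact column order of the assembled "features" vector.
order = pd.DataFrame(binary + numeric).sort_values("idx").reset_index(drop=True)
print(order["name"].tolist())
# → ['x1_idx_enc_a', 'x1_idx_enc_c', 'x3', 'x4']
```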

answered Sep 17 '22 by pratiklodha


Here's the one-line answer:

[x["name"] for x in sorted(
    train_downsampled.schema["all_features"].metadata["ml_attr"]["attrs"]["binary"] +
    train_downsampled.schema["all_features"].metadata["ml_attr"]["attrs"]["numeric"],
    key=lambda x: x["idx"])]

Thanks to @pratiklodha for the core of this.

answered Sep 17 '22 by Andy Reagan