 

Building SVM with tensorflow's LinearClassifier and Panda's Dataframes

I'm aware of this question, but it is for an outdated function.

Let's say I'm trying to predict whether a person will visit country 'X' given the countries they have already visited and their income.

I have a training data set in a pandas DataFrame that's in the following format.

  1. Each row represents a different person, unrelated to the others in the matrix.
  2. The first 10 columns are all names of countries and the values in the column are binary (1 if they have visited that country or 0 if they haven't).
  3. Column 11 is their income. It's a continuous decimal variable.
  4. Lastly, column 12 is another binary column that indicates whether or not they have visited 'X'.

So essentially, if I have 100,000 people in my dataset, I have a dataframe of dimensions 100,000 x 12. I want to be able to properly pass this into a linear classifier using tensorflow, but I'm not sure how to even approach this.
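For concreteness, a toy version of that structure (the country names and value ranges here are made up) might look like this:

import numpy as np
import pandas as pd

n = 5  # tiny sample instead of 100,000 people
countries = ['usa', 'uk', 'france', 'germany', 'japan',
             'brazil', 'india', 'china', 'canada', 'mexico']

df = pd.DataFrame(np.random.randint(0, 2, size=(n, 10)), columns=countries)  # columns 1-10: visited flags
df['income'] = np.random.uniform(20000, 150000, size=n)                      # column 11: continuous income
df['visited_x'] = np.random.randint(0, 2, size=n)                            # column 12: binary label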

I am trying to pass the data into this function

estimator = LinearClassifier(
    n_classes=n_classes,
    feature_columns=[sparse_column_a, sparse_feature_a_x_sparse_feature_b],
    label_keys=label_keys)

(If there's a better suggestion on which estimator to use, I'd be open to trying that.)

And I'm passing data as:

df = pd.DataFrame(np.random.randint(0,2,size=(100, 12)), columns=list('ABCDEFGHIJKL'))
tf_val = tf.estimator.inputs.pandas_input_fn(X.iloc[:, 0:9], X.iloc[:, 11], shuffle=True)

However, I'm not sure how to take this output and properly pass into a classifier. Am I setting up the problem properly? I'm not coming from a data science background, so any guidance would be very helpful!

Concerns

  1. Column 11 is a covariate. Hence, I don't think it can just be passed in as a feature, can it?
  2. How can I incorporate column 11 into the classifier as well, since it is a completely different type of feature than columns 1 through 10?
  3. At the very least, even if I ignore column 11, how do I fit columns 1 through 10, with label = column 12, and pass this into a classifier?

(working code needed for bounty)

asked Mar 29 '19 by Jonathan


2 Answers

Linear SVM

SVM is a max-margin classifier, i.e. it maximizes the width of the margin separating the positive class from the negative class. The loss function of a linear SVM for binary classification is given below.

$L(w) \;=\; \lambda\,\lVert w\rVert^{2} \;+\; \frac{1}{N}\sum_{i=1}^{N}\max\!\big(0,\; 1 - y_i\,(w^{\top}x_i)\big)$

It can be derived from the more generalized multi class linear SVM loss (also called hinge loss) shown below (with Δ = 1).

$L_i \;=\; \sum_{j \neq y_i}\max\!\big(0,\; w_j^{\top}x_i - w_{y_i}^{\top}x_i + \Delta\big), \qquad L \;=\; \frac{1}{N}\sum_{i} L_i \;+\; \lambda\sum_{k}\sum_{l} W_{k,l}^{2}$

Note: in all the above equations, the weight vector w includes the bias b.

How on earth did someone come up with this loss? Let's dig in.

[figure: data points from the two classes, a separating hyperplane (solid line), and the maximum-margin hyperplane (dotted lines)]

The image above shows data points belonging to the positive class separated from data points belonging to the negative class by a separating hyperplane (shown as a solid line). However, there can be many such separating hyperplanes. SVM finds the separating hyperplane whose distance to the nearest positive data point and to the nearest negative data point is maximal (shown as dotted lines).

Mathematically, SVM finds the weight vector w (bias included) such that

$w^{\top}x_i \ge +1 \;\text{ for every } x_i \text{ in the positive class}, \qquad w^{\top}x_i \le -1 \;\text{ for every } x_i \text{ in the negative class}$

If the labels (y) of the +ve class and the -ve class are +1 and -1 respectively, then SVM finds w such that

$y_i\,(w^{\top}x_i) \ge 1 \quad \text{for all } i$

• If a data point is on the correct side of the hyperplane (correctly classified) then

$y_i\,(w^{\top}x_i) \ge 1$

• If a data point is on the wrong side (misclassified) then

$y_i\,(w^{\top}x_i) < 1$

So the loss for a data point, which is a measure of misclassification, can be written as

$L_i \;=\; \max\!\big(0,\; 1 - y_i\,(w^{\top}x_i)\big)$
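For example, plugging in one correctly classified point and one misclassified point (numbers made up for illustration):

import numpy as np

w = np.array([1.0, -1.0, 0.5])          # weight vector (last entry acts as the bias)

x_correct = np.array([2.0, 0.0, 1.0])   # y = +1, w.x =  2.5 -> loss = max(0, 1 - 2.5) = 0.0
x_wrong   = np.array([0.0, 1.0, 1.0])   # y = +1, w.x = -0.5 -> loss = max(0, 1 + 0.5) = 1.5

for x, y in [(x_correct, +1), (x_wrong, +1)]:
    print(max(0.0, 1 - y * np.dot(w, x)))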

Regularization

If a weight vector w correctly classifies the data (X), then any multiple λw of that weight vector, where λ > 1, will also correctly classify the data (zero loss). This is because the transformation λw stretches all score magnitudes and hence also their absolute differences. L2 regularization penalizes large weights by adding a regularization loss to the hinge loss.

$L \;=\; \frac{1}{N}\sum_{i}\max\!\big(0,\; 1 - y_i\,(w^{\top}x_i)\big) \;+\; \lambda\,\lVert w\rVert_{2}^{2}$

For example, take x=[1,1,1,1] and two weight vectors w1=[1,0,0,0] and w2=[0.25,0.25,0.25,0.25]. Then dot(w1, x) = dot(w2, x) = 1, i.e. both weight vectors lead to the same dot product and hence the same hinge loss. But the L2 penalty of w1 is 1.0 while the L2 penalty of w2 is only 0.25, so L2 regularization prefers w2 over w1. The classifier is encouraged to take all input dimensions into account in small amounts rather than a few input dimensions very strongly. This improves the generalization of the model and leads to less overfitting.
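A quick check of those numbers:

import numpy as np

x = np.array([1.0, 1.0, 1.0, 1.0])
w1 = np.array([1.0, 0.0, 0.0, 0.0])
w2 = np.array([0.25, 0.25, 0.25, 0.25])

print(np.dot(w1, x), np.dot(w2, x))   # 1.0 1.0  -> same score, hence same hinge loss
print(np.sum(w1**2), np.sum(w2**2))   # 1.0 0.25 -> the L2 penalty prefers w2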

The L2 penalty leads to the max-margin property of SVMs. If the SVM is expressed as an optimization problem, then the generalized Lagrangian form of the constrained quadratic optimization problem is as below.

$\mathcal{L}(w, \alpha) \;=\; \frac{1}{2}\lVert w\rVert^{2} \;-\; \sum_{i}\alpha_i\,\big[\,y_i\,(w^{\top}x_i) - 1\,\big], \qquad \alpha_i \ge 0$

Now that we know the loss function of the linear SVM, we can use gradient descent (or other optimizers) to find the weight vector that minimizes the loss.
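As an illustration, a minimal NumPy sketch of subgradient descent on the regularized hinge loss (toy data and made-up hyperparameters) could look like this:

import numpy as np

# Toy data: 2 features plus a constant column for the bias, labels in {-1, +1}
rng = np.random.RandomState(0)
X_toy = np.hstack([rng.randn(200, 2), np.ones((200, 1))])
y_toy = np.where(X_toy[:, 0] + X_toy[:, 1] > 0, 1.0, -1.0)

w = np.zeros(3)
lr, lam = 0.1, 0.01                       # learning rate and L2 strength (made up)
for _ in range(1000):
    margins = y_toy * (X_toy @ w)
    mask = margins < 1                    # only points violating the margin contribute
    grad = -(y_toy[mask][:, None] * X_toy[mask]).sum(axis=0) / len(y_toy) + 2 * lam * w
    w -= lr * grad

print(w)                                  # learned weights; last entry is the bias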

Code

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets

# Load Data
iris = datasets.load_iris()
X = iris.data[:, :2][iris.target != 2]
y = iris.target[iris.target != 2]

# Change labels to +1 and -1 
y = np.where(y==1, y, -1)

# Linear Model with L2 regularization
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, activation='linear', kernel_regularizer=tf.keras.regularizers.l2()))

# Hinge loss
def hinge_loss(y_true, y_pred):    
    return tf.maximum(0., 1- y_true*y_pred)

# Train the model
model.compile(optimizer='adam', loss=hinge_loss)
model.fit(X, y,  epochs=50000, verbose=False)

# Plot the learned decision boundary 
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01),
                         np.arange(y_min, y_max, 0.01))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Set1)
plt.show()

[figure: decision boundary learned by the Keras linear SVM on the two iris classes]

SVM can also be expressed as a constrained quadratic optimization problem. The advantage of this formulation is that we can use the kernel trick to classify non-linearly separable data (using different kernels). LIBSVM implements the Sequential Minimal Optimization (SMO) algorithm for kernelized support vector machines (SVMs).

Code

from sklearn.svm import SVC
# SVM with linear kernel
clf = SVC(kernel='linear')
clf.fit(X, y) 

# Plot the learned decision boundary 
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01),
                         np.arange(y_min, y_max, 0.01))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Set1)
plt.show() 

[figure: decision boundary learned by the linear-kernel SVC]
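To actually use the kernel trick mentioned above, you can swap the linear kernel for a non-linear one; a minimal sketch with the RBF kernel (hyperparameters left at the scikit-learn defaults) would be:

from sklearn.svm import SVC

# An RBF kernel lets the SVM learn a non-linear decision boundary via the kernel trick
clf_rbf = SVC(kernel='rbf', gamma='scale', C=1.0)
clf_rbf.fit(X, y)
print(clf_rbf.score(X, y))   # mean accuracy on the training data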

Finally

A linear SVM model using tf that you can use for your problem statement is:

import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.preprocessing import StandardScaler

# Prepare Data
# 10 binary features
df = pd.DataFrame(np.random.randint(0, 2, size=(1000, 10)))
# 1 floating-point feature (income)
df[11] = np.random.uniform(0, 100000, size=(1000))
# True label
df[12] = pd.DataFrame(np.random.randint(0, 2, size=(1000)))

# Scale features to zero mean and unit variance
scaler = StandardScaler().fit(df[df.columns.drop(12)])
X = scaler.transform(df[df.columns.drop(12)])
y = np.array(df[12])

# convert label to +1 and -1. Needed for hinge loss
y = np.where(y==1, +1, -1)

# Model 
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, activation='linear', 
                                kernel_regularizer=tf.keras.regularizers.l2()))
# Hinge Loss
def my_loss(y_true, y_pred):    
    return tf.maximum(0., 1- y_true*y_pred)

# Train model 
model.compile(optimizer='adam', loss=my_loss)
model.fit(X, y,  epochs=100, verbose=True)
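As a usage sketch (assuming the scaler and model fitted above; the new row below is hypothetical), scoring a new person would look like:

import numpy as np

# Hypothetical new person: 10 binary visit flags followed by income
new_person = np.array([[1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 55000.0]])

new_scaled = scaler.transform(new_person)              # same scaling as the training data
raw_score = model.predict(new_scaled).reshape(-1)      # linear output of the SVM
prob_visits_x = 1 / (1 + np.exp(-raw_score))           # sigmoid to get a probability
print(prob_visits_x > 0.5)                             # predicted class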

K-Fold cross validation and making predictions

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import KFold
from sklearn.metrics import roc_curve, auc, roc_auc_score

# Load Data
iris = datasets.load_iris()
X = iris.data[:, :2][iris.target != 2]
y_ = iris.target[iris.target != 2]

# Change labels to +1 and -1 
y = np.where(y_==1, +1, -1)


# Hinge loss
def hinge_loss(y_true, y_pred):    
    return tf.maximum(0., 1- y_true*y_pred)

def get_model():
    # Linear Model with L2 regularization
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(1, activation='linear', kernel_regularizer=tf.keras.regularizers.l2()))
    model.compile(optimizer='adam', loss=hinge_loss)
    return model

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

predict = lambda model, x : sigmoid(model.predict(x).reshape(-1))
predict_class = lambda model, x : np.where(predict(model, x)>0.5, 1, 0)


kf = KFold(n_splits=2, shuffle=True)

# K Fold cross validation
best = (None, -1)

for i, (train_index, test_index) in enumerate(kf.split(X)):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]

    model = get_model()
    model.fit(X_train, y_train, epochs=5000, verbose=False, batch_size=128)
    y_pred = predict_class(model, X_test)
    val = roc_auc_score(y_test, y_pred)    
    print ("CV Fold {0}: AUC: {1}".format(i+1, auc))
    if best[1] < val:
        best = (model, val)

# ROC Curve using the best model
y_score = predict(best[0], X)
fpr, tpr, _ = roc_curve(y_, y_score)
roc_auc = auc(fpr, tpr)
print (roc_auc)

# Plot ROC
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
         lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc="lower right")
plt.show()

# Make predictions
y_score = predict_class(best[0], X)

Making predictions

Since the output of the model is linear, we have to map it to a probability to make predictions. For binary classification we can use a sigmoid; for multiclass classification we can use a softmax. The code below is for binary classification.

predict = lambda model, x : sigmoid(model.predict(x).reshape(-1))
predict_class = lambda model, x : np.where(predict(model, x)>0.5, 1, 0)
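For the multiclass case mentioned above, a sketch (assuming a model with one linear output per class rather than the single-output binary model used here) would replace the sigmoid with a softmax:

import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

predict_multi = lambda model, x: softmax(model.predict(x))
predict_class_multi = lambda model, x: np.argmax(predict_multi(model, x), axis=1)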

References

  1. CS231n

Update 1:

To make the code compatible with tf 2.0, the datatype of y should be the same as that of X. To do this, add the line y = y.astype(np.float64) after the line y = np.where(...).
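In context, the change looks like this (using the variable names from the K-Fold example above):

# Change labels to +1 and -1
y = np.where(y_ == 1, +1, -1)
# tf 2.0: make the label dtype match X so the hinge loss can multiply them
y = y.astype(np.float64)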

answered Oct 13 '22 by mujjiga


Since all of your features are already numerical you can use them as they are.

df = pd.DataFrame(np.random.randint(0,2,size=(100, 12)), columns=list('ABCDEFGHIJKL'))
df['K'] = np.random.random(100)
numeric_features = [tf.feature_column.numeric_column(column) for column in df.columns[:11]]
model = tf.estimator.LinearClassifier(feature_columns=numeric_features)
tf_val = tf.estimator.inputs.pandas_input_fn(df.iloc[:,:11], df.iloc[:,11], shuffle=True)
model.train(input_fn=tf_val, steps=1000)

print(list(model.predict(input_fn=tf_val))[0])
{'logits': array([-1.7512109], dtype=float32), 'logistic': array([0.14789453], dtype=float32), 'probabilities': array([0.8521055 , 0.14789453], dtype=float32), 'class_ids': array([0]), 'classes': array([b'0'], dtype=object)}

The probabilities in the prediction output are most likely what you are interested in. You get two probabilities, one for the target being False and one for it being True.
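For example, to pull out just the probability of the positive class for every row, using the output dictionary shown above:

# Each prediction is a dict; index 1 of 'probabilities' is the probability of label 1 (True)
predictions = list(model.predict(input_fn=tf_val))
probs_true = [p['probabilities'][1] for p in predictions]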

If you want more details, have a look at this nice blog post about binary classification with TensorFlow.

answered Oct 13 '22 by P.Tillmann