Good morning everyone. I'm trying to implement an LSTM algorithm using Keras, with pandas to read in the CSV file. The backend I'm using is TensorFlow. I'm having a problem when it comes to inverting the scaling on my results after predicting on the training set. Below is my code:
import numpy
import matplotlib.pyplot as plt
import pandas
import math
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
#plt.plot(dataset)
#plt.show()
#fix random seed for reproducibility
numpy.random.seed(7)
#Load dataset
col_names = ['UserID', 'SysTouchTime', 'EventTime', 'ActivityTouchID', 'Pointer_count', 'PointerID',
             'ActionID', 'Touch_X', 'Touch_Y', 'Touch_Pressure', 'Contact_Size', 'Phone_Orientation']
dataframe = pandas.read_csv('touchEventsFor5Users.csv', engine='python', header=None, names = col_names, skiprows=1)
#print(dataset.head())
#print(dataset.shape)
# keep dataset as a DataFrame (cast to float32) so the column selections below still work
dataset = dataframe.astype('float32')
print(dataset.isnull().any())
dataset = dataset.ffill()  # forward-fill any missing values
feature_cols = ['SysTouchTime', 'EventTime', 'ActivityTouchID', 'Pointer_count', 'PointerID', 'ActionID', 'Touch_X', 'Touch_Y', 'Touch_Pressure', 'Contact_Size', 'Phone_Orientation']
X = dataset[feature_cols]
y = dataset['UserID']
print(y.head())
#normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
# split into train and test sets
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size, :], dataset[train_size:len(dataset),:]
print(len(train), len(test))
# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        a = dataset[i:(i + look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return numpy.array(dataX), numpy.array(dataY)
# reshape into X=t and Y=t+1
look_back = 1
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
#reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = numpy.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
#create and fit the LSTM network
model = Sequential()
model.add(LSTM(4, input_shape=(1, look_back)))  # (time steps, features); input_dim is the legacy Keras 1 spelling
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
model.fit(trainX, trainY, epochs=1, batch_size=32, verbose=2)
# make predictions
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
# invert predictions
import gc
gc.collect()
#####problem occurs with the following line of code#############
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
# calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
#shift train predictions for plotting
trainPredictPlot = numpy.empty_like(dataset)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(dataset)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict
# plot baseline and predictions
plt.plot(scaler.inverse_transform(dataset))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()
The error that I get is
ValueError: non-broadcastable output operand with shape (67704,1) doesn't match the broadcast shape (67704,12)
Do you think you could help me solve this problem? I'm very new to this but really want to learn it, and this error is making me suffer! Thanks for any help that can be provided.
When you scale your data, the scaler treats the 12 fields separately: it takes the min and max of each field and maps that field into the 0-to-1 range.
When you then call inverse_transform with only one field, the function can't make sense of it: it doesn't know which column it was, or what its min and max were. You need to feed it a 12-field array, with the predicted values in the right column.
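To see this concretely, here is a minimal toy example (made-up numbers, not your data) showing that MinMaxScaler stores a separate min and max per column and can't invert a single-column input:
import numpy
from sklearn.preprocessing import MinMaxScaler

# two columns with very different ranges
data = numpy.array([[0.0, 100.0],
                    [5.0, 300.0],
                    [10.0, 500.0]])
toy_scaler = MinMaxScaler(feature_range=(0, 1))
scaled = toy_scaler.fit_transform(data)
print(toy_scaler.data_min_)  # [  0. 100.] -- one min per column
print(toy_scaler.data_max_)  # [ 10. 500.] -- one max per column
# toy_scaler.inverse_transform(scaled[:, [0]]) raises a ValueError,
# because the scaler was fitted on 2 columns; on older scikit-learn
# versions the message is the same broadcast error shown above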
Try adding this before the problematic line:
# create an empty array with 12 fields
trainPredict_dataset_like = numpy.zeros(shape=(len(trainPredict), 12))
# put the predicted values in the right field (UserID is column 0)
trainPredict_dataset_like[:, 0] = trainPredict[:, 0]
# inverse transform and then select the right field
trainPredict = scaler.inverse_transform(trainPredict_dataset_like)[:, 0]
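The same padding trick applies to trainY, testPredict, and testY. If you don't want to repeat it four times, you could wrap it in a small helper (invert_column is just a name for this sketch, not something from your code or any library):
def invert_column(scaler, values, n_fields=12, col=0):
    # pad the 1-D values into an n_fields-wide array, invert the
    # scaling, then pull the single column of interest back out
    padded = numpy.zeros(shape=(len(values), n_fields))
    padded[:, col] = values
    return scaler.inverse_transform(padded)[:, col]

trainPredict = invert_column(scaler, trainPredict[:, 0])
trainY = invert_column(scaler, trainY)
testPredict = invert_column(scaler, testPredict[:, 0])
testY = invert_column(scaler, testY)
These all come back as 1-D arrays, so the RMSE lines would then read mean_squared_error(trainY, trainPredict) and mean_squared_error(testY, testPredict) without the extra indexing.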
Does this help? :)