I have a few thousand audio files and I want to classify them using Keras and Theano. So far, I have generated 28x28 spectrograms (bigger is probably better, but I am just trying to get the algorithm to work at this point) for each audio file and read the images into a matrix. In the end I get one big image matrix to feed into the network for image classification.
In a tutorial I found this MNIST classification code:
    import numpy as np
    from keras.datasets import mnist
    from keras.models import Sequential
    from keras.layers.core import Dense
    from keras.utils import np_utils

    batch_size = 128
    nb_classes = 10
    nb_epochs = 2

    (X_train, y_train), (X_test, y_test) = mnist.load_data()

    X_train = X_train.reshape(60000, 784)
    X_test = X_test.reshape(10000, 784)
    X_train = X_train.astype("float32")
    X_test = X_test.astype("float32")
    X_train /= 255
    X_test /= 255

    print(X_train.shape[0], "train samples")
    print(X_test.shape[0], "test samples")

    y_train = np_utils.to_categorical(y_train, nb_classes)
    y_test = np_utils.to_categorical(y_test, nb_classes)

    model = Sequential()
    model.add(Dense(output_dim = 100, input_dim = 784, activation = "relu"))
    model.add(Dense(output_dim = 200, activation = "relu"))
    model.add(Dense(output_dim = 200, activation = "relu"))
    model.add(Dense(output_dim = nb_classes, activation = "softmax"))

    model.compile(optimizer = "adam", loss = "categorical_crossentropy")
    model.fit(X_train, y_train, batch_size = batch_size, nb_epoch = nb_epochs,
              show_accuracy = True, verbose = 2, validation_data = (X_test, y_test))
    score = model.evaluate(X_test, y_test, show_accuracy = True, verbose = 0)
    print("Test score: ", score[0])
    print("Test accuracy: ", score[1])
This code runs, and I get the result as expected:
    (60000L, 'train samples')
    (10000L, 'test samples')
    Train on 60000 samples, validate on 10000 samples
    Epoch 1/2
    2s - loss: 0.2988 - acc: 0.9131 - val_loss: 0.1314 - val_acc: 0.9607
    Epoch 2/2
    2s - loss: 0.1144 - acc: 0.9651 - val_loss: 0.0995 - val_acc: 0.9673
    ('Test score: ', 0.099454972004890438)
    ('Test accuracy: ', 0.96730000000000005)
Up to this point everything runs perfectly. However, when I apply the same approach to my dataset, the accuracy gets stuck.
My code is as follows:
    import os
    import pandas as pd
    from sklearn.cross_validation import train_test_split
    from keras.models import Sequential
    from keras.layers.convolutional import Convolution2D, MaxPooling2D
    from keras.layers.core import Dense, Activation, Dropout, Flatten
    from keras.utils import np_utils
    import AudioProcessing as ap
    import ImageTools as it

    batch_size = 128
    nb_classes = 2
    nb_epoch = 10

    for i in range(20):
        print "\n"

    # Generate spectrograms if necessary
    if(len(os.listdir("./AudioNormalPathalogicClassification/Image")) > 0):
        print "Audio files are already processed. Skipping..."
    else:
        print "Generating spectrograms for the audio files..."
        ap.audio_2_image("./AudioNormalPathalogicClassification/Audio/",
                         "./AudioNormalPathalogicClassification/Image/",
                         ".wav", ".png", (28, 28))

    # Read the result csv
    df = pd.read_csv('./AudioNormalPathalogicClassification/Result/result.csv', header = None)
    df.columns = ["RegionName", "IsNormal"]

    bool_mapping = {True : 1, False : 0}
    nb_classes = 2

    for col in df:
        if(col == "RegionName"):
            a = 3
        else:
            df[col] = df[col].map(bool_mapping)

    y = df.iloc[:,1:].values
    y = np_utils.to_categorical(y, nb_classes)

    # Load images into memory
    print "Loading images into memory..."
    X = it.load_images("./AudioNormalPathalogicClassification/Image/", ".png")

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0)

    X_train = X_train.reshape(X_train.shape[0], 784)
    X_test = X_test.reshape(X_test.shape[0], 784)
    X_train = X_train.astype("float32")
    X_test = X_test.astype("float32")
    X_train /= 255
    X_test /= 255

    print("X_train shape: " + str(X_train.shape))
    print(str(X_train.shape[0]) + " train samples")
    print(str(X_test.shape[0]) + " test samples")

    model = Sequential()
    model.add(Dense(output_dim = 100, input_dim = 784, activation = "relu"))
    model.add(Dense(output_dim = 200, activation = "relu"))
    model.add(Dense(output_dim = 200, activation = "relu"))
    model.add(Dense(output_dim = nb_classes, activation = "softmax"))

    model.compile(loss = "categorical_crossentropy", optimizer = "adam")
    print model.summary()
    model.fit(X_train, y_train, batch_size = batch_size, nb_epoch = nb_epoch,
              show_accuracy = True, verbose = 1, validation_data = (X_test, y_test))
    score = model.evaluate(X_test, y_test, show_accuracy = True, verbose = 1)
    print("Test score: ", score[0])
    print("Test accuracy: ", score[1])
AudioProcessing.py
    import os
    import scipy as sp
    import scipy.io.wavfile as wav
    import matplotlib.pylab as pylab
    import Image

    def save_spectrogram_scipy(source_filename, destination_filename, size):
        dt = 0.0005
        NFFT = 1024
        Fs = int(1.0 / dt)
        fs, audio = wav.read(source_filename)
        # Mix stereo down to mono.
        if(len(audio.shape) >= 2):
            audio = sp.mean(audio, axis = 1)
        # Render the spectrogram without axes or margins.
        fig = pylab.figure()
        ax = pylab.Axes(fig, [0, 0, 1, 1])
        ax.set_axis_off()
        fig.add_axes(ax)
        pylab.specgram(audio, NFFT = NFFT, Fs = Fs, noverlap = 900, cmap = "gray")
        pylab.savefig(destination_filename)
        # Convert to grayscale and resize to the target dimensions.
        img = Image.open(destination_filename).convert("L")
        img = img.resize(size)
        img.save(destination_filename)
        pylab.clf()
        del img

    def audio_2_image(source_directory, destination_directory, audio_extension, image_extension, size):
        nb_files = len(os.listdir(source_directory))
        count = 0
        for file in os.listdir(source_directory):
            if file.endswith(audio_extension):
                destinationName = file[:-4]
                save_spectrogram_scipy(source_directory + file,
                                       destination_directory + destinationName + image_extension,
                                       size)
                count += 1
                print("Generating spectrogram for files " + str(count) + " / " + str(nb_files) + ".")
ImageTools.py
    import os
    import numpy as np
    import matplotlib.image as mpimg

    def load_images(source_directory, image_extension):
        image_matrix = []
        nb_files = len(os.listdir(source_directory))
        count = 0
        for file in os.listdir(source_directory):
            if file.endswith(image_extension):
                with open(source_directory + file, "r+b") as f:
                    img = mpimg.imread(f)
                    img = img.flatten()
                    image_matrix.append(img)
                    del img
                count += 1
                #print("File " + str(count) + " / " + str(nb_files) + " loaded.")
        return np.asarray(image_matrix)
So I run the above code and receive:
    Audio files are already processed. Skipping...
    Loading images into memory...
    X_train shape: (2394L, 784L)
    2394 train samples
    1027 test samples
    --------------------------------------------------------------------------------
    Initial input shape: (None, 784)
    --------------------------------------------------------------------------------
    Layer (name)                  Output Shape                  Param #
    --------------------------------------------------------------------------------
    Dense (dense)                 (None, 100)                   78500
    Dense (dense)                 (None, 200)                   20200
    Dense (dense)                 (None, 200)                   40200
    Dense (dense)                 (None, 2)                     402
    --------------------------------------------------------------------------------
    Total params: 139302
    --------------------------------------------------------------------------------
    None
    Train on 2394 samples, validate on 1027 samples
    Epoch 1/10
    2394/2394 [==============================] - 0s - loss: 0.6898 - acc: 0.5455 - val_loss: 0.6835 - val_acc: 0.5716
    Epoch 2/10
    2394/2394 [==============================] - 0s - loss: 0.6879 - acc: 0.5522 - val_loss: 0.6901 - val_acc: 0.5716
    Epoch 3/10
    2394/2394 [==============================] - 0s - loss: 0.6880 - acc: 0.5522 - val_loss: 0.6842 - val_acc: 0.5716
    Epoch 4/10
    2394/2394 [==============================] - 0s - loss: 0.6883 - acc: 0.5522 - val_loss: 0.6829 - val_acc: 0.5716
    Epoch 5/10
    2394/2394 [==============================] - 0s - loss: 0.6885 - acc: 0.5522 - val_loss: 0.6836 - val_acc: 0.5716
    Epoch 6/10
    2394/2394 [==============================] - 0s - loss: 0.6887 - acc: 0.5522 - val_loss: 0.6832 - val_acc: 0.5716
    Epoch 7/10
    2394/2394 [==============================] - 0s - loss: 0.6882 - acc: 0.5522 - val_loss: 0.6859 - val_acc: 0.5716
    Epoch 8/10
    2394/2394 [==============================] - 0s - loss: 0.6882 - acc: 0.5522 - val_loss: 0.6849 - val_acc: 0.5716
    Epoch 9/10
    2394/2394 [==============================] - 0s - loss: 0.6885 - acc: 0.5522 - val_loss: 0.6836 - val_acc: 0.5716
    Epoch 10/10
    2394/2394 [==============================] - 0s - loss: 0.6877 - acc: 0.5522 - val_loss: 0.6849 - val_acc: 0.5716
    1027/1027 [==============================] - 0s
    ('Test score: ', 0.68490593621422047)
    ('Test accuracy: ', 0.57156767283349563)
I tried changing the network and adding more epochs, but I always get the same result no matter what. I don't understand why.
Any help would be appreciated. Thank you.
Edit: I found a mistake where the pixel values were not being read correctly. I fixed ImageTools.py as below:
    import os
    import numpy as np
    from scipy.misc import imread

    def load_images(source_directory, image_extension):
        image_matrix = []
        nb_files = len(os.listdir(source_directory))
        count = 0
        for file in os.listdir(source_directory):
            if file.endswith(image_extension):
                with open(source_directory + file, "r+b") as f:
                    # scipy's imread returns the raw grayscale values (0-255),
                    # unlike mpimg.imread, which rescales PNGs to [0, 1].
                    img = imread(f)
                    img = img.flatten()
                    image_matrix.append(img)
                    del img
                count += 1
                #print("File " + str(count) + " / " + str(nb_files) + " loaded.")
        return np.asarray(image_matrix)
Now I actually get grayscale pixel values from 0 to 255 (a quick sanity check is shown below), so dividing by 255 makes sense. However, I still get the same result.
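The sanity check, assuming X is the matrix returned by it.load_images:

    # Raw grayscale images should come back as integer values in [0, 255].
    print(X.dtype, X.min(), X.max())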
If the accuracy is not changing, it means the optimizer has found a local minimum for the loss. This may be an undesirable minimum. One common local minimum is to always predict the class with the most data points. You should use weighting on the classes to avoid this minimum, as in the sketch below.
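In Keras you can do this with the class_weight argument to model.fit. A minimal sketch, where the class counts are placeholders for the real totals from result.csv:

    # Hypothetical counts; substitute the real per-class totals.
    n_normal = 1900        # class 0
    n_pathologic = 1500    # class 1

    # Inverse-frequency weighting: the rarer class gets a larger weight,
    # so always predicting the majority class is no longer a cheap minimum.
    class_weight = {0: 1.0, 1: float(n_normal) / n_pathologic}

    model.fit(X_train, y_train, batch_size = batch_size, nb_epoch = nb_epoch,
              class_weight = class_weight, validation_data = (X_test, y_test))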
Accuracy(name="accuracy", dtype=None) calculates how often predictions equal labels. This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true.
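In plain NumPy terms, that bookkeeping looks like the following (an illustrative sketch, not the metric's actual implementation):

    import numpy as np

    # `total` accumulates matching predictions, `count` accumulates
    # predictions seen; the reported accuracy is total / count.
    y_true = np.array([0, 1, 1, 0, 1])
    y_pred = np.array([0, 1, 0, 0, 1])
    total = np.sum(y_pred == y_true)   # 4 matches
    count = y_true.size                # 5 predictions
    print(total / float(count))        # 0.8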
A constant learning rate is the default schedule in all Keras optimizers. For example, in the SGD optimizer, the learning rate defaults to 0.01. To use a custom learning rate, instantiate an SGD optimizer and pass the argument learning_rate=0.01 (named lr in older Keras versions, as in the code below).
The most likely reason is that the optimizer is not suited to your dataset. The Keras documentation lists the available optimizers.
I recommend you first try SGD with default parameter values. If it still doesn't work, divide the learning rate by 10. Do that a few times if necessary. If your learning rate reaches 1e-6 and it still doesn't work, then you have another problem.
In summary, replace this line:
    model.compile(loss = "categorical_crossentropy", optimizer = "adam")
with this:
    from keras.optimizers import SGD

    opt = SGD(lr = 0.01)
    model.compile(loss = "categorical_crossentropy", optimizer = opt)
and change the learning rate a few times if it doesn't work.
If it was the problem, you should see the loss getting lower after just a few epochs.
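If your Keras version ships keras.callbacks.ReduceLROnPlateau (added after the 0.x API used in the question), the divide-by-10 search can be automated. A sketch under that assumption:

    from keras.callbacks import ReduceLROnPlateau

    # Divide the learning rate by 10 whenever the validation loss stops
    # improving for a few epochs, down to a floor of 1e-6.
    reduce_lr = ReduceLROnPlateau(monitor = "val_loss", factor = 0.1,
                                  patience = 3, min_lr = 1e-6)

    model.fit(X_train, y_train, batch_size = batch_size, nb_epoch = nb_epoch,
              validation_data = (X_test, y_test), callbacks = [reduce_lr])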