
Model Validation accuracy stuck at 0.65671 Keras

Data Science Asked by Talha Anwar on December 4, 2020

I am using Conv1D to classify EEG signals, but my val_accuracy is stuck at 0.65671. No matter what changes I make, it never goes beyond that value.
Here is the architecture:

model=Sequential()
model.add(Conv1D(filters=4,kernel_size=5,strides=1,padding='valid',kernel_initializer='RandomUniform',input_shape=X_train.shape[1::]))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv1D(filters=6,kernel_size=3,strides=1,padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Conv1D(filters=8,kernel_size=3,strides=1,padding='valid',activation='relu'))
#model.add(Conv1D(filters=24,kernel_size=7,strides=1,padding='same',activation='relu'))

model.add(Flatten())
model.add(Dense(12,activation='relu'))
model.add(Dense(1,activation='sigmoid'))

Shape of training data is (5073,3072,7) and for test data it is (1908,3072,7).

I have tried reducing the number of neurons in each layer, changing the activation function, and adding more layers, but this upper limit has mostly not changed.

I have tried one-hot encoding the binary class with keras.utils.to_categorical(y_train, num_classes=2), but this does not resolve the issue.

I have tried a learning rate of 0.0001, but it does not work. I have also tried different kernel_initializer settings and optimizers, but nothing helps.
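One thing worth double-checking when combining these experiments: the label encoding, output layer, and loss must be changed together. A minimal sketch (assuming the tensorflow.keras API; layer sizes here are arbitrary placeholders) of the two consistent configurations, with the learning rate set explicitly:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical

# Option 1: integer labels (0/1) -> one sigmoid unit + binary cross-entropy
m1 = Sequential([Dense(1, activation='sigmoid', input_shape=(8,))])
m1.compile(optimizer=Adam(learning_rate=1e-4),
           loss='binary_crossentropy', metrics=['accuracy'])

# Option 2: one-hot labels from to_categorical(..., num_classes=2)
#           -> two softmax units + categorical cross-entropy
y = to_categorical(np.array([0, 1, 1]), num_classes=2)   # shape (3, 2)
m2 = Sequential([Dense(2, activation='softmax', input_shape=(8,))])
m2.compile(optimizer=Adam(learning_rate=1e-4),
           loss='categorical_crossentropy', metrics=['accuracy'])
```

Mixing the two (e.g., one-hot labels with a single sigmoid unit, or keeping binary_crossentropy after switching to to_categorical) is a common source of stalled metrics.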

Results

   Train on 5073 samples, validate on 1908 samples
Epoch 1/8
 - 23s - loss: 0.6865 - acc: 0.5757 - val_loss: 0.6709 - val_acc: 0.6564

Epoch 00001: val_acc improved from -inf to 0.65645, saving model to weights.hdf5
Epoch 2/8
 - 22s - loss: 0.6760 - acc: 0.5837 - val_loss: 0.6569 - val_acc: 0.6567

Epoch 00002: val_acc improved from 0.65645 to 0.65671, saving model to weights.hdf5
Epoch 3/8
 - 21s - loss: 0.6661 - acc: 0.5843 - val_loss: 0.6669 - val_acc: 0.6111

Epoch 00003: val_acc did not improve from 0.65671
Epoch 4/8
 - 21s - loss: 0.6622 - acc: 0.5915 - val_loss: 0.6579 - val_acc: 0.6253

Epoch 00004: val_acc did not improve from 0.65671
Epoch 5/8
 - 22s - loss: 0.6575 - acc: 0.5939 - val_loss: 0.6540 - val_acc: 0.6255

Epoch 00005: val_acc did not improve from 0.65671
Epoch 6/8
 - 21s - loss: 0.6554 - acc: 0.5940 - val_loss: 0.6448 - val_acc: 0.6399

Epoch 00006: val_acc did not improve from 0.65671
Epoch 7/8
 - 21s - loss: 0.6511 - acc: 0.6042 - val_loss: 0.6584 - val_acc: 0.6195

Epoch 00007: val_acc did not improve from 0.65671
Epoch 8/8
 - 21s - loss: 0.6487 - acc: 0.6059 - val_loss: 0.6647 - val_acc: 0.6030

Epoch 00008: val_acc did not improve from 0.65671

4 Answers

I am using 1D CNNs for EEG/EMG classification as well. One thing that seems to help for me is playing around with the number of filters, and yours seem quite low. I have used up to 80 filters on a layer, at times with good results. Also, you may want to reverse the ordering: use more filters at the beginning and reduce the count with each successive layer.
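As an illustration of the decreasing-filter idea (the filter counts 64/32/16 are my own guesses, not tuned for this data set), the question's model could be reshaped like this:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Flatten, Dense

# Wider first layer, narrowing with depth; the input shape matches the
# question's data: 3072 time steps x 7 channels.
model = Sequential([
    Conv1D(filters=64, kernel_size=5, activation='relu',
           input_shape=(3072, 7)),
    Conv1D(filters=32, kernel_size=3, activation='relu'),
    Conv1D(filters=16, kernel_size=3, activation='relu'),
    Flatten(),
    Dense(12, activation='relu'),
    Dense(1, activation='sigmoid'),
])
```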

Answered by stefanLopez on December 4, 2020

I hit the same issue, with a different network/task.

I'm using a fully-connected network to regress a vector from an image. Pretty quickly, after 1-2 epochs, both training and validation seem to get stuck at certain values. Curiously, they also only varied around the second decimal place, despite being an order of magnitude larger than in your case (mine: loss ~7.2, error ~7.9).

The reason was a bug in the batch generator function, which could get into a state where it always returned the same batch for validation. I found the bug by creating a debug data set with only 10 samples (images).
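A quick sanity check for this class of bug (a generic sketch, not tied to any particular generator) is to draw a few batches from the validation generator and assert that they are not all identical:

```python
import numpy as np

def batch_generator(X, y, batch_size):
    """Toy generator; a buggy version might always yield the same slice."""
    i = 0
    while True:
        idx = np.arange(i, i + batch_size) % len(X)
        yield X[idx], y[idx]
        i += batch_size

X = np.random.rand(100, 8)
y = np.random.randint(0, 2, size=100)
gen = batch_generator(X, y, batch_size=10)

# Draw a few batches and check that they are not all the same.
batches = [next(gen)[0] for _ in range(5)]
all_identical = all(np.array_equal(batches[0], b) for b in batches[1:])
print("generator stuck on one batch:", all_identical)
# prints: generator stuck on one batch: False
```

If that flag ever comes back True on real data, the generator is the problem, not the model.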

Answered by Mićo Banović on December 4, 2020

I would like to see your data set :) I am also doing some signal classification.

Unless there is some simple bug in the data preprocessing stage (check what you didn't show here first!):

  • As @stefanLopez correctly pointed out, your number of filters is way too low.
  • Next, the filter length is too short to capture anything serious.
  • Remove batchnorm while testing.
  • Reduce dropout while testing.
  • Test with ELU (Exponential Linear Unit) activation.
  • Last, use more FC layers with more neurons.
  • Try using the glorot (commonly known as Xavier) initializer.

Example model:

model=Sequential()

model.add(Conv1D(filters=24,kernel_size=16,strides=1,padding='valid',activation='elu',kernel_initializer='glorot_normal',input_shape=X_train.shape[1::]))

model.add(Conv1D(filters=16,kernel_size=9,strides=1,padding='same',activation='elu',kernel_initializer='glorot_normal'))
model.add(Dropout(0.1))

model.add(Conv1D(filters=12,kernel_size=9,strides=1,padding='valid',activation='elu',kernel_initializer='glorot_normal'))
model.add(Dropout(0.1))

model.add(Flatten())
model.add(Dense(128,activation='elu'))
model.add(Dropout(0.1))
model.add(Dense(16,activation='elu'))
model.add(Dropout(0.1))
model.add(Dense(1,activation='sigmoid'))

Let me know if it helps.

Answered by Emil on December 4, 2020

You might consider changing your code from this:

model.add(Dense(12,activation='relu'))

to this:

model.add(Dense(12))
model.add(Activation('relu'))

I was having trouble with an image-based task where accuracy and validation accuracy were stuck, and this change completely fixed it. I learned about it from this link: Training Accuracy stuck in Keras

Answered by fac120 on December 4, 2020
