
LSTM Sequential Model question re: ValueError: non-broadcastable output operand with shape doesn't match broadcast shape

Asked by brohjoe on Data Science, September 4, 2021

This is probably a very simple question, but I have not been able to find resources that directly address it. I know I must be misunderstanding something; I'm just not sure what.

I’ve noticed that if the number of units in the last Dense output layer of my LSTM sequential model does not equal the number of features (columns), I get an error.

If you wanted to output 1 feature in the output (Dense) layer, and you had several input features, how would you:

  1. do that without errors, and
  2. identify which feature is being output? Or does Keras produce an output for each feature, so you have to pick the one you want?

I want to train the model with multiple features, but I’m only interested in one feature’s prediction.

Example: I have data with ‘open’, ‘low’, ‘high’, ‘close’, and ‘volume’ columns (5 features). If I set the number of units in the last Dense layer to anything other than 5, I get a broadcast error telling me the shapes in the model are inconsistent. If I put 5 units in the last Dense output layer, I get no errors.

Example:

# Imports assumed for this snippet (Keras used via TensorFlow)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense
from tensorflow.keras import optimizers

def create_model(self, epochs, batch_size):

    model = Sequential()

    # Adding the first LSTM layer
    model.add(LSTM(units=128, return_sequences=True,
                   batch_size=batch_size, input_shape=(TIME_STEP, self.X_train.shape[2])))

    # Adding a second LSTM layer and some Dropout regularisation
    model.add(LSTM(units=128, return_sequences=True))
    model.add(Dropout(DROPOUT))

    # Adding a third LSTM layer and some Dropout regularisation
    model.add(LSTM(units=128, return_sequences=True))
    model.add(Dropout(DROPOUT))

    # Adding a fourth LSTM layer and some Dropout regularisation
    model.add(LSTM(units=128, return_sequences=False))
    model.add(Dropout(DROPOUT))

    # Adding the output layer
    model.add(Dense(units=5))
    model.summary()

    # compile model
    # Compile and fit the model
    adam = optimizers.Adam(learning_rate=LR)
    model.compile(optimizer=adam, loss='mae')
    model.fit(self.X_train, self.y_train, epochs=epochs, batch_size=batch_size)

    return model

If I enter ‘units=1’ in the Dense layer, I get the following error:

ValueError: non-broadcastable output operand with shape (11784,1) doesn’t match the broadcast shape (11784,5)

One Answer

The number of units in the final Dense layer should equal the number of features (columns) in your y_train. If y_train has shape (11784, 5), the final Dense layer needs 5 units; if it has shape (11784, 1), it needs 1 unit. The model expects the final Dense layer's output to have the same number of features as the targets.
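As a quick illustration of that rule, here is a minimal sketch using random dummy arrays (not the asker's data), where the final Dense layer takes its unit count from the target's column count and the model fits without a shape error:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Dummy data: 64 samples, 10 time steps, 5 input features, 1 target column
X = np.random.rand(64, 10, 5)
y = np.random.rand(64, 1)

model = Sequential([
    LSTM(16, input_shape=(10, 5)),
    Dense(y.shape[1]),            # units taken from y's column count (here 1)
])
model.compile(optimizer='adam', loss='mae')
model.fit(X, y, epochs=1, batch_size=16, verbose=0)  # shapes agree, so no error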

You have to decide which features belong in the input and which in the output. 'open', 'low', 'high', 'close', and 'volume' are your features. What do you want to predict? Is this a classification or a regression problem? What is your problem statement? Based on that, identify your input and output features: put the input features in X_train and the output feature(s) in y_train. Then, in the final Dense layer, use a number of units equal to the number of features in y_train.
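For the asker's case (predicting only 'close' while training on all five columns), here is a minimal sketch of the data split. It assumes the scaled data sits in a (rows, 5) NumPy array with columns ordered open, low, high, close, volume; the random stand-in array and the column index are assumptions made for illustration:

import numpy as np

data = np.random.rand(500, 5)   # stand-in for the scaled open/low/high/close/volume data
TIME_STEP = 60                  # window length, as in the question's model
CLOSE_IDX = 3                   # assumed position of the 'close' column

X_windows, y_targets = [], []
for i in range(TIME_STEP, len(data)):
    X_windows.append(data[i - TIME_STEP:i, :])   # all 5 features go into each input window
    y_targets.append(data[i, CLOSE_IDX])         # only 'close' becomes the target

X_train = np.array(X_windows)                    # shape: (samples, TIME_STEP, 5)
y_train = np.array(y_targets).reshape(-1, 1)     # shape: (samples, 1)  ->  Dense(units=1)

With y_train built this way, the model in the question runs unchanged apart from Dense(units=1), and its single output column is the 'close' prediction, which also answers the second part of the question.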

Correct answer by vyshnavi vanjari on September 4, 2021
