Possible reasons that validation recall fluctuates across epochs while precision stays stable?

Cross Validated Asked by khemedi on January 3, 2022

I know this is not a coding question, but I wasn't sure where else to ask for help on this. I'm training a deep learning model. After each epoch I measure the performance of the model on a validation set. Here is how the performance looks while training:

[Figure: validation performance during training]

It's a binary classification task with a cross-entropy loss function. I take the argmax of the last layer's outputs to get the prediction and then measure precision and recall. Note that the number of positive and negative samples within each mini-batch is almost the same (mini-batches are balanced). Any ideas about possible reasons the model is behaving like this? And how can I improve the recall and make it as stable as the precision?
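For reference, here is a minimal sketch of how the per-epoch validation metrics described above might be computed, assuming PyTorch and scikit-learn; `model` and `val_loader` are hypothetical placeholders for the asker's network and validation DataLoader, not code from the question:

```python
# Sketch of per-epoch validation evaluation: argmax over two class logits,
# then precision/recall on the positive class (assumed setup, not the
# asker's actual code).
import torch
from sklearn.metrics import precision_score, recall_score

@torch.no_grad()
def validate(model, val_loader, device="cpu"):
    model.eval()
    all_preds, all_labels = [], []
    for inputs, labels in val_loader:
        logits = model(inputs.to(device))   # shape: (batch_size, 2)
        preds = logits.argmax(dim=1)        # argmax over the two class scores
        all_preds.extend(preds.cpu().tolist())
        all_labels.extend(labels.tolist())
    # precision = TP / (TP + FP), recall = TP / (TP + FN) for the positive class
    return (precision_score(all_labels, all_preds),
            recall_score(all_labels, all_preds))

# Called once per epoch, e.g.:
#   prec, rec = validate(model, val_loader)
#   print(f"epoch {epoch}: precision={prec:.3f} recall={rec:.3f}")
```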
