Possible reasons that validation recall fluctuates across epochs while precision stays stable?

Cross Validated Asked by khemedi on January 3, 2022

I know this is not a coding question, but I didn't have any idea where else to ask for help on this. I'm training a deep learning model. After each epoch I measure the performance of the model on the validation set. Here is how the performance looks during training:

[Figure: validation performance during training]

It's a binary classification task with a cross-entropy loss function. I use argmax over the last layer's outputs to make predictions, and from those I measure precision and recall. Note that the numbers of positive and negative samples within each mini-batch are almost the same (mini-batches are balanced). Any idea about possible reasons the model is behaving like this? And how can I improve the recall and make it as stable as the precision?
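For reference, here is a minimal sketch of how precision and recall would be computed from argmax predictions as described in the question. All names and the toy data are illustrative, assuming the model emits a (n, 2) array of raw class scores:

```python
import numpy as np

def precision_recall(logits, labels):
    """Compute precision and recall for the positive class.

    logits: (n, 2) array of class scores; labels: (n,) array of 0/1.
    Prediction is the argmax over the two class scores, as in the question.
    """
    preds = logits.argmax(axis=1)
    tp = np.sum((preds == 1) & (labels == 1))  # true positives
    fp = np.sum((preds == 1) & (labels == 0))  # false positives
    fn = np.sum((preds == 0) & (labels == 1))  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy illustration: the model is confident on negatives but borderline on
# positives, so small weight updates between epochs can flip positive
# predictions back and forth -- recall fluctuates while precision holds.
logits = np.array([[2.0, -2.0], [3.0, -3.0], [0.1, -0.1], [-0.1, 0.1]])
labels = np.array([0, 0, 1, 1])
p, r = precision_recall(logits, labels)  # 1.0 precision, 0.5 recall here
```

One thing worth noting: argmax over two logits is equivalent to thresholding the softmax probability at 0.5. Tuning that threshold on the validation set (lowering it to catch more positives) is a standard way to trade some precision for higher, and often steadier, recall.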

