How do we know a neural network test accuracy is good enough when results vary with different runs?

Data Science Asked on September 4, 2021

In every paper I read about prediction models, the training accuracy and the test accuracy (sometimes also the validation accuracy) are stated as a single number. However, in my experience, depending on how the weights are initialized, different training runs produce different test results.

How does a data science researcher pin down the accuracy metric to report in a paper, and how do they gain confidence in its validity?

One Answer

So the question is how to report test accuracy (and similar metrics) when results vary across runs.

As @Nikos M. has alluded to, you typically train and test the model at least 3 times (varying the random seed, and therefore the weight initialization) and then report the average test evaluation metrics, along with the standard deviation to show the level of variation. This is why you see single values reported in papers: they are averages over x train/test runs.
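A minimal sketch of this protocol, using a hypothetical numpy-only logistic-regression toy task (the dataset, model, and seed count are illustrative assumptions, not from the answer): hold the data fixed, vary only the initialization seed, and report mean ± standard deviation of test accuracy.

```python
import numpy as np

def make_data(rng, n=400):
    # Hypothetical synthetic binary classification task (illustrative only).
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    return X, y

def train_and_eval(seed):
    data_rng = np.random.default_rng(0)  # same data and split for every run
    X, y = make_data(data_rng)
    X_train, y_train = X[:300], y[:300]
    X_test, y_test = X[300:], y[300:]

    init_rng = np.random.default_rng(seed)  # only weight init varies per run
    w = init_rng.normal(scale=0.1, size=2)
    b = 0.0
    lr = 0.1
    for _ in range(200):  # plain gradient descent on the logistic loss
        p = 1 / (1 + np.exp(-(X_train @ w + b)))
        grad_w = X_train.T @ (p - y_train) / len(y_train)
        grad_b = np.mean(p - y_train)
        w -= lr * grad_w
        b -= lr * grad_b

    preds = (X_test @ w + b > 0).astype(float)
    return np.mean(preds == y_test)

# Repeat the train/test cycle with different seeds, then aggregate.
accuracies = [train_and_eval(seed) for seed in range(5)]
mean, std = np.mean(accuracies), np.std(accuracies)
print(f"test accuracy: {mean:.3f} ± {std:.3f}")
```

The single headline number in a paper corresponds to `mean`; quoting `std` alongside it tells the reader how sensitive the result is to initialization.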

Correct answer by shepan6 on September 4, 2021

