
Valid approach? LogSoftmax during training, Softmax during inference

I am training a classifier that assigns one of four possible classes to each frame of a preprocessed audio stream, using PyTorch. I am using cross-entropy loss as the training loss; in PyTorch it is implemented as a combination of LogSoftmax and negative log-likelihood loss. During inference, however, I apply a softmax layer before the output, omitting the logarithm. Now I am wondering whether this causes any problems I might be overlooking. I suppose it should be fine: in the end I am choosing a single class, and since both softmax and log-softmax are monotonic, the highest score belongs to the same class either way. The reason I am not simply switching to log-softmax during inference is that I would lose the desirable property of the output being an actual probability distribution, if I am not mistaken.

Are my assumptions correct so I may carry on, or did I miss something important?

Data Science Asked by Scipio on January 1, 2021


