In the context of deep learning, what are training warmup steps?

Data Science Asked on June 27, 2021

I found the term "training warmup steps" in some papers. What exactly does it mean? Does it have anything to do with the learning rate? If so, how does it affect it?

5 Answers

This usually means that you use a very low learning rate for a set number of training steps (warmup steps). After your warmup steps you use your "regular" learning rate or learning rate scheduler. You can also gradually increase your learning rate over the number of warmup steps.

As far as I know, this has the benefit of slowly starting to tune things like attention mechanisms in your network.
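As a rough illustration of that second variant (gradually increasing the learning rate over the warmup steps), here is a minimal sketch assuming PyTorch; the model, warmup_steps and base_lr values are placeholders for the example, not a prescription:

    import torch

    model = torch.nn.Linear(10, 2)   # toy model, stands in for your network
    base_lr = 1e-3
    warmup_steps = 1000

    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)

    # Multiplicative factor on base_lr: ramps from ~0 to 1 during warmup,
    # then stays at 1 so the "regular" rate (or another scheduler) takes over.
    def warmup_factor(step):
        if step < warmup_steps:
            return (step + 1) / warmup_steps
        return 1.0

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_factor)

    # Training loop sketch: call scheduler.step() once per optimizer update.
    # for batch in data_loader:
    #     loss = compute_loss(model, batch)
    #     loss.backward()
    #     optimizer.step()
    #     scheduler.step()
    #     optimizer.zero_grad()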

Correct answer by Ron Schwessinger on June 27, 2021

Warmup steps are just a parameter in most learning algorithms that is used to lower the learning rate at the start of training, so that the model is not pushed off course by its sudden exposure to a new data set.

For example, if you set warmup steps to 500 in a run of 10,000 iterations, then for the first 500 iterations the model will learn the corpus with a smaller learning rate than the one you specified. From the 501st iteration onwards, the model uses the learning rate exactly as given.
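A tiny sketch of that behaviour in plain Python (the numbers are made up for illustration; in practice they would come from your training configuration):

    warmup_steps = 500
    target_lr = 1e-3
    warmup_lr = 1e-5   # "minimal" learning rate used during warmup (illustrative value)

    def learning_rate(step):
        # First 500 iterations: train with the small warmup rate.
        # From iteration 501 onwards: use the learning rate you actually specified.
        return warmup_lr if step <= warmup_steps else target_lr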

Answered by Arvinthsamy M on June 27, 2021

As the other answers already state: Warmup steps are just a few updates with low learning rate before / at the beginning of training. After this warmup, you use the regular learning rate (schedule) to train your model to convergence.

The idea that this helps your network to slowly adapt to the data intuitively makes sense. However, theoretically, the main reason for warmup steps is to allow adaptive optimisers (e.g. Adam, RMSProp, ...) to compute correct statistics of the gradients. Therefore, a warmup period makes little sense when training with plain SGD.

E.g. RMSProp computes a moving average of the squared gradients to get an estimate of the variance in the gradients for each parameter. For the first update, this estimate is based on the squared gradients of the first batch alone. Since, in general, this will not be a good estimate, your first update could push your network in the wrong direction. To avoid this problem, you give the optimiser a few steps to estimate the variance while changing the parameters as little as possible (low learning rate), and only once the estimate is reasonable do you use the actual (higher) learning rate.
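A small numerical sketch of why the first estimate is unreliable (plain Python; rho and the gradient value are made up, and the update rule follows the usual RMSProp form):

    rho = 0.99          # decay rate of the moving average of squared gradients
    eps = 1e-8
    v = 0.0             # running estimate, initialised to zero

    g = 0.5                               # gradient of some parameter on the first batch
    v = rho * v + (1 - rho) * g ** 2      # after one step: v = 0.01 * g^2
    step_size = g / (v ** 0.5 + eps)      # ~ g / (0.1 * |g|) = 10, regardless of |g|
    print(step_size)                      # a large, essentially arbitrary first update

With a few warmup steps at a tiny learning rate, this poorly scaled first update does little damage while v accumulates more batches and becomes a usable variance estimate.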

Answered by Mr Tsjolder on June 27, 2021

If your data set is highly differentiated, you can suffer from a sort of "early over-fitting". If your shuffled data happens to include a cluster of related, strongly-featured observations, your model's initial training can skew badly toward those features -- or worse, toward incidental features that aren't truly related to the topic at all.

Warm-up is a way to reduce the primacy effect of the early training examples. Without it, you may need to run a few extra epochs to get the convergence desired, as the model un-trains those early superstitions.

Many models afford this as a command-line option. The learning rate is increased linearly over the warm-up period. If the target learning rate is p and the warm-up period is n, then the first batch iteration uses 1p/n for its learning rate; the second uses 2p/n, and so on: iteration i uses i*p/n, until we hit the nominal rate at iteration n.
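Written out as code, that schedule looks like this (p, n and the iteration counter are the symbols from the paragraph above; the concrete values are only for illustration):

    p = 1e-3    # target (nominal) learning rate
    n = 1000    # warmup period in iterations

    def warmup_lr(i):
        # iteration i uses i*p/n during warmup, then the nominal rate p
        return i * p / n if i < n else p

    # warmup_lr(1) == p/n, warmup_lr(2) == 2*p/n, ..., warmup_lr(n) == p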

This means that the first iteration gets only 1/n of the primacy effect. This does a reasonable job of balancing that influence.

Note that the ramp-up is commonly on the order of one epoch -- but is occasionally longer for particularly skewed data, or shorter for more homogeneous distributions. You may want to adjust, depending on how functionally extreme your batches can become when the shuffling algorithm is applied to the training set.

source

Answered by Mohamed SADAK on June 27, 2021

Warmup can also take on meanings other than the learning-rate schedule. For example, in YOLOv3, during the warmup epochs the ground-truth bounding boxes are forced to be the same size as the anchors.

At the end of the day, the warmup procedure aims to soften the impact of the first epochs of learning, which can otherwise mislead the entire training process. This is not mathematically proven (as far as I know); it is just a solid intuition that happens to result in better performance.

Answered by Raffaele on June 27, 2021
