
Why does BERT have a limitation of only allowing a maximum input length of 512 tokens?

Asked on Cross Validated, January 3, 2022

I have seen that BERT was one of the state-of-the-art word embedding methods in 2018, and that XLNet was proposed in 2019 to address the limitations of BERT. One limitation of BERT I have seen mentioned is the maximum length of the input tokens (which is 512, see this link). Does anyone know the reason?

One Answer

It's an arbitrary value. It is the longest input length the authors assumed would be needed; presumably they did not have longer sequences in the training set. Moreover, you can always truncate a sequence and ignore the more distant history, in which case the sequence length is simply the farthest-back history you consider useful. Once the model is trained, the learned positional embeddings cover only 512 positions, so the limit is fixed for the pretrained model. 512 is a power of two, which also suggests the value was chosen somewhat arbitrarily by a computer-science-minded person.
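As an illustration of the truncation point, the snippet below trims an over-long input to 512 tokens. It is a minimal sketch that assumes the Hugging Face transformers library and the bert-base-uncased checkpoint, neither of which is mentioned in the question; any tokenizer with a truncation option would work the same way.

    # Minimal sketch: truncating input to BERT's 512-token limit.
    # Assumes the Hugging Face `transformers` library and the
    # `bert-base-uncased` checkpoint (assumptions, not from the question).
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    long_text = "word " * 10_000   # deliberately much longer than 512 tokens

    # truncation=True with max_length=512 keeps only the first 512 tokens
    # (including [CLS] and [SEP]); everything beyond that is discarded,
    # i.e. the more distant history is ignored.
    encoded = tokenizer(long_text, truncation=True, max_length=512)

    print(len(encoded["input_ids"]))   # 512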

Answered by Tim on January 3, 2022
