Is there any advantage of limiting the value of a feature in neural networks

Data Science Asked by Kadir Erdem Demir on December 20, 2020

In a machine learning algorithm, I have a feature whose value lies in the range 0-20. Very rarely the value goes over 20, and when it does I clamp it to 20.

Does it help the neural network model somehow to reduce the infinite set of floating-point values to integers between 0 and 20? Or, going even further, if I categorize the floating-point values into bins, e.g. 0-5 becomes 0, 5-10 becomes 1, 10-15 becomes 2, and 15-20 becomes 3, does that help my model converge better and be more accurate? Does it reduce the effect of the "curse of dimensionality", because the possible inputs are reduced from an infinite set to a few possibilities?
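To make that concrete, here is a minimal sketch of the two preprocessing variants I am asking about, using NumPy; the sample values and bin edges are only illustrative:

    import numpy as np

    # Hypothetical raw feature values; in practice this is one column of the dataset.
    x = np.array([3.7, 12.1, 19.5, 24.8, 0.9])

    # Variant 1: clamp to the 0-20 range, optionally rounding to integers.
    x_clamped = np.clip(x, 0, 20)
    x_integer = np.rint(x_clamped).astype(int)

    # Variant 2: bin into four categories: 0-5 -> 0, 5-10 -> 1, 10-15 -> 2, 15-20 -> 3.
    x_binned = np.digitize(x_clamped, bins=[5, 10, 15])

    print(x_clamped)  # [ 3.7 12.1 19.5 20.   0.9]
    print(x_integer)  # [ 4 12 20 20  1]
    print(x_binned)   # [0 2 3 3 0]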

One Answer

This would not reduce the effect of the curse of dimensionality, because you are not reducing the number of dimensions, only the range of values in one dimension. A valid reason to do this would be if there are so few training examples above 20 that your neural network struggles to learn much from them. But as Erwan suggested, you should simply try it both ways, with and without clamping, and compare validation accuracies. I would suspect that a well-designed neural network architecture can make better use of the information in all values from 0-20 and would not benefit from throwing information away by binning it.
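As an illustration of that comparison, a rough sketch with scikit-learn's MLPClassifier is below; the synthetic data, feature column, and network size are placeholders for your actual setup:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    # Placeholder synthetic data: column 0 plays the role of the 0-20 feature.
    rng = np.random.default_rng(0)
    X = rng.exponential(scale=8.0, size=(1000, 5))
    y = (X[:, 0] + rng.normal(scale=2.0, size=1000) > 10).astype(int)

    def validation_accuracy(features, target):
        # Train the same small network and report accuracy on a held-out split.
        X_tr, X_va, y_tr, y_va = train_test_split(features, target, random_state=0)
        model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        model.fit(X_tr, y_tr)
        return accuracy_score(y_va, model.predict(X_va))

    # Same data with the feature clamped to 20, nothing else changed.
    X_clamped = X.copy()
    X_clamped[:, 0] = np.clip(X_clamped[:, 0], 0, 20)

    print("raw feature:    ", validation_accuracy(X, y))
    print("clamped feature:", validation_accuracy(X_clamped, y))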

Answered by Cameron Chandler on December 20, 2020
