
Parameter initialization in a genetic algorithm

Data Science Asked by yoshi8585 on November 28, 2020

I’m using a neural network in a genetic algorithm. The network has 4 inputs (values between 0 and 1) and 4 outputs, corresponding to the probabilities of different actions, and 58 parameters in total.

At first, I create a random population: each individual has 58 random parameters. The parameters are chosen randomly with Keras's default method in Python (values between -1 and 1). Is this a good method? Maybe the best solution needs parameters with values higher than 1, for example, but with my method only values between -1 and 1 exist in the “gene pool”, so a parameter equal to 3.4 can never appear.
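(For concreteness, a purely random initialization like the one described could look like the short sketch below; the population size and the seed are placeholder assumptions.)

```python
import numpy as np

NUM_PARAMS = 58          # parameters per individual, as above
POP_SIZE = 100           # hypothetical population size
LOW, HIGH = -1.0, 1.0    # current sampling range

rng = np.random.default_rng(0)
# Each row is one individual's flat parameter vector ("genome").
population = rng.uniform(LOW, HIGH, size=(POP_SIZE, NUM_PARAMS))
```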

I tried training the same neural network with labeled data and gradient descent, in order to get an idea of the parameter range. After training the model, I obtained some parameters with values > 1 or < -1. I thought I could use those parameters as an initialization for my genetic algorithm. But how would I get different individuals? If the first parameter of my trained model equals 2.5, should I set the first parameter of each individual to 2.5 ± 20%, for example?
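(One way to get different individuals from a single trained parameter vector, as hinted at above, is to perturb each value by a random factor. This is only a sketch under that assumption; the ±20% spread and the population size are placeholders.)

```python
import numpy as np

rng = np.random.default_rng(0)

def seed_population(trained_params, pop_size=100, spread=0.2):
    """Noisy copies of a trained parameter vector: each value is scaled by a
    random factor drawn from [1 - spread, 1 + spread]."""
    trained_params = np.asarray(trained_params, dtype=float)
    factors = rng.uniform(1.0 - spread, 1.0 + spread,
                          size=(pop_size, trained_params.size))
    return trained_params * factors
```

Note that multiplicative noise leaves zero-valued parameters unchanged; adding a small absolute perturbation instead (or as well) avoids that.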

One Answer

In my opinion you should make the range as large as possible (to a reasonable extent) for the first random initialization.

The genetic algorithm will eventually converge to the appropriate range, but giving it a narrow initial range could result in a sub-optimal solution, because the algorithm has no way to reach parameter values outside it. The only downside of a large range is that it might take a bit longer (more generations) to converge.

So I would suggest you keep a completely random initialization of the values, for instance in the range [-10,10].
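As an illustration of that wide range, here is a sketch of sampling one individual in [-10, 10] and loading its flat genome into a Keras model before evaluating its fitness. The hidden-layer size of 6 is an assumption; it happens to give exactly 58 parameters for 4 inputs and 4 outputs.

```python
import numpy as np
from tensorflow import keras

def build_model():
    # 4 inputs -> 6 hidden units -> 4 softmax outputs: (4*6+6) + (6*4+4) = 58
    return keras.Sequential([
        keras.Input(shape=(4,)),
        keras.layers.Dense(6, activation="tanh"),
        keras.layers.Dense(4, activation="softmax"),
    ])

def set_genome(model, genome):
    """Reshape a flat genome into the model's weight tensors."""
    shaped, start = [], 0
    for w in model.get_weights():
        shaped.append(np.asarray(genome[start:start + w.size]).reshape(w.shape))
        start += w.size
    model.set_weights(shaped)

rng = np.random.default_rng(0)
model = build_model()
genome = rng.uniform(-10.0, 10.0, size=58)   # wide initial range, as suggested
set_genome(model, genome)
```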

Answered by Erwan on November 28, 2020
