How to handle an equality constraint in metaheuristic algorithms (like GA, PSO)?

Operations Research Asked by stevGates on January 11, 2021

Metaheuristic algorithms are generally used to search for good solutions to unconstrained optimization problems. I would like to maximize $x_1+3x_2-x_3 \ge 0$ while respecting the equality constraint $x_1+x_2=5$. How can I do this using a metaheuristic method?

3 Answers

The random key variant [1] of genetic algorithms was developed, I believe, to solve sequencing (permutation) problems, but it can be adapted to at least some other types of constrained problems. Let $C$ be the set of possible chromosomes and $X$ the set of feasible solutions to the original problem. You provide a surjection $d:C \rightarrow X$ that decodes chromosomes to feasible solutions. So if $f()$ is the original objective function (expressed in terms of the original variables in the model) and $c \in C$ is some chromosome, the fitness of $c$ is given by $f(d(c))$. Function $d$ does not need to be injective -- it's OK if multiple chromosomes map to the same feasible solution -- but must be surjective (every feasible solution is represented by at least one chromosome). Importantly, every possible chromosome must map to a feasible solution.

So, for instance, you could define a chromosome to be a vector of two reals, decoded by $d(c) = (c_1, 5-c_1, c_2)$ with fitness $f(d(c))=c_1 + 3(5-c_1) - c_2=15 -2c_1 - c_2$. Since you indicated the original objective function is nonnegative, I presume you have some bounds on the components of $x$. You would need to find the equivalent bounds for the genes $c_1$ and $c_2$ and then impose them as the domains of the genes.

[1] Bean, J. C. Genetic Algorithms and Random Keys for Sequencing and Optimization. ORSA Journal on Computing, 1994, 6, 154–160
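To make the decoder idea concrete, here is a minimal pure-Python sketch (not part of the original answer). It assumes hypothetical bounds $0 \le x_1, x_3 \le 5$, so both genes live in $[0, 5]$; the decoder enforces $x_1 + x_2 = 5$ by construction, so every chromosome maps to a feasible solution.

# Minimal decoder-based GA sketch (pure Python, no GA library).
# Assumption: hypothetical bounds 0 <= x1, x3 <= 5 on the original variables,
# so both genes c1 and c2 live in [0, 5].
import random

def decode(c):
    """Map a chromosome (c1, c2) to a feasible solution (x1, x2, x3).
    Setting x2 = 5 - c1 enforces x1 + x2 = 5 by construction."""
    c1, c2 = c
    return (c1, 5.0 - c1, c2)

def fitness(c):
    """Original objective f(x) = x1 + 3*x2 - x3 evaluated on the decoded solution."""
    x1, x2, x3 = decode(c)
    return x1 + 3.0 * x2 - x3

def random_chromosome():
    return (random.uniform(0.0, 5.0), random.uniform(0.0, 5.0))

def crossover(a, b):
    """Blend crossover: every child gene is a convex combination, so it stays in [0, 5]."""
    w = random.random()
    return tuple(w * ga + (1.0 - w) * gb for ga, gb in zip(a, b))

def mutate(c, sigma=0.25):
    """Gaussian mutation, clipped back into the gene bounds [0, 5]."""
    return tuple(min(5.0, max(0.0, g + random.gauss(0.0, sigma))) for g in c)

def run_ga(pop_size=40, generations=200):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # maximization
        parents = pop[: pop_size // 2]           # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    best = max(pop, key=fitness)
    return decode(best), fitness(best)

if __name__ == "__main__":
    x, f = run_ga()
    print("best feasible x:", x, "objective:", f)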

Correct answer by prubin on January 11, 2021

You can create moves (neighborhoods) that don't impact the equality constraint.

For example, for an investment portfolio problem I added a custom move in OptaPlanner (Java, open source), using a Local Search configuration (probably Late Acceptance).

Elaboration:

In that example we need to decide which percentage of our budget to invest in which assets (such as stocks). In the end, we need to invest 100% of our budget (total = 100%). Now presume that at some point in the Local Search we have 0% in asset A, 50% in asset B, 30% in asset C and 20% in asset D. A typical change move will add/remove a percentage for one asset without balancing it out somewhere else, so it breaks the "total = 100%" hard constraint. Instead, we replaced all the out-of-the-box move selectors (= neighborhoods) with a set that never breaks the "total = 100%" constraint. That InvestmentQuantityTransferMove takes part of the percentage assigned to one asset and assigns it to another. For example, take 5% from B and assign it to A, which results in 5% in asset A, 45% in asset B, 30% in asset C and 20% in asset D.
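The OptaPlanner move itself is Java; the sketch below is plain Python rather than OptaPlanner code, and the scoring function and expected returns are made-up assumptions. It only illustrates the idea: a transfer move shifts percentage points from one asset to another, so the total stays at 100% no matter how many moves are applied, and a simple hill-climbing loop stands in for the Local Search.

# Plain-Python sketch of a "quantity transfer" move (not the actual OptaPlanner
# Java implementation): it moves percentage points from one asset to another,
# so the "total = 100%" hard constraint can never be broken by the move itself.
import random

def transfer_move(allocation, max_step=10):
    """Return a neighbour of `allocation` (dict asset -> integer percentage)
    obtained by moving up to `max_step` points from one asset to another."""
    donor, receiver = random.sample(list(allocation), 2)
    if allocation[donor] == 0:
        return dict(allocation)                  # nothing to transfer; identical neighbour
    step = random.randint(1, min(max_step, allocation[donor]))
    neighbour = dict(allocation)
    neighbour[donor] -= step
    neighbour[receiver] += step
    return neighbour                             # sum of percentages unchanged by construction

def local_search(score, allocation, iterations=10_000):
    """Simple hill climbing over transfer moves; `score` is a user-supplied soft
    score (e.g. expected return minus risk penalties) to maximize."""
    best = dict(allocation)
    for _ in range(iterations):
        candidate = transfer_move(best)
        if score(candidate) >= score(best):
            best = candidate
    return best

# Usage with a made-up scoring function (assumed expected returns per asset):
expected_return = {"A": 0.02, "B": 0.05, "C": 0.03, "D": 0.04}
start = {"A": 0, "B": 50, "C": 30, "D": 20}      # the allocation from the example above
score = lambda alloc: sum(alloc[a] * expected_return[a] for a in alloc)
print(local_search(score, start))                # sums to 100 at every step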

Answered by Geoffrey De Smet on January 11, 2021

One way in which you can handle constraints in evolutionary optimization is by adding penalty functions to your original objective function (assuming a minimization problem at hand). This penalty function can be defined in several ways, and its goal is to quantify the amount of constraint violation.

However, implementing it is sometimes not as easy as it seems. The key problem is deciding the weight of the penalty function you augment your original objective function with. This is in some sense a hyperparameter that you have to tune. An untuned penalty weight can result in a skewed objective-function landscape with sharp constraint boundaries. Essentially, you have to take care that the original objective function and the penalty functions for the constraints are on approximately similar scales.
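A minimal sketch of the penalty idea for this question's problem (written for maximization, so the penalty is subtracted; the weight mu = 100 is an arbitrary assumption that would need tuning as described above):

# Penalty-function sketch for: maximize x1 + 3*x2 - x3 subject to x1 + x2 = 5.
# Any metaheuristic that only sees an unconstrained fitness can use this directly.

def penalized_fitness(x, mu=100.0):
    x1, x2, x3 = x
    objective = x1 + 3.0 * x2 - x3       # original (maximization) objective
    violation = abs(x1 + x2 - 5.0)       # amount of equality-constraint violation
    return objective - mu * violation    # push infeasible points down

# A feasible point keeps its true objective value...
print(penalized_fitness((2.0, 3.0, 0.0)))   # 11.0, no penalty
# ...while an infeasible one is penalized in proportion to mu.
print(penalized_fitness((2.0, 4.0, 0.0)))   # 14.0 - 100*1 = -86.0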

Answered by your_boy_gorja on January 11, 2021
