Operations Research Asked on January 5, 2022
I have an optimization problem with an unusual objective: find a $\beta$ that maximizes
$$
r^{T} \cdot H(X \cdot \beta)
$$
where $H(\cdot)$ is the Heaviside step function (as defined on Wikipedia),
$r$ is a vector of size 1000,
$X$ is a 1000×50 "tall" matrix,
$\beta$ is a vector of size 50.
I am familiar with gradient descent, which is how I usually solve optimization problems, but it does not work here: the Heaviside function is piecewise constant, so the objective's gradient is zero almost everywhere and undefined at the jumps. I am wondering if anyone could shed some light on how to solve this kind of optimization problem.
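To make the setup concrete, here is a minimal NumPy sketch of the objective (the data below are random placeholders, not my actual $r$ and $X$):

```python
import numpy as np

# Placeholder data with the stated shapes (stand-ins for the real r and X).
rng = np.random.default_rng(0)
r = rng.random(1000)             # vector of size 1000
X = rng.normal(size=(1000, 50))  # "tall" 1000 x 50 matrix

def objective(beta):
    # r^T H(X beta); np.heaviside(z, 1.0) is 1 for z > 0, 0 for z < 0,
    # and takes the second argument (here 1.0) at z == 0.
    return r @ np.heaviside(X @ beta, 1.0)

beta = rng.normal(size=50)
print(objective(beta))
# The objective is piecewise constant in beta, so its gradient is zero
# almost everywhere -- which is why plain gradient descent makes no progress.
```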
You can solve the problem via integer linear programming as follows, assuming $r_i \ge 0$ for all $i$. Let $M_i$ be a (small) upper bound on $-(X \cdot \beta)_i$. Let binary decision variable $y_i$ indicate whether $(X \cdot \beta)_i \ge 0$. The problem is to maximize $$\sum_{i=1}^{1000} r_i y_i$$ subject to $$-(X \cdot \beta)_i \le M_i(1 - y_i)$$ for all $i$. This "big-M" constraint enforces $y_i = 1 \implies (X \cdot \beta)_i \ge 0$.
Answered by RobPratt on January 5, 2022
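A minimal sketch of this big-M formulation in Python with PuLP and the bundled CBC solver; the box bounds on $\beta$, the $M_i$ values derived from them, the toy data, and the solver choice are illustrative assumptions, not part of the answer above:

```python
import numpy as np
import pulp

# Toy instance; the real problem is 1000 x 50, but CBC on 1000 binaries can be
# slow, so this sketch uses a smaller random stand-in.
rng = np.random.default_rng(0)
n, p = 200, 10
r = rng.random(n)                # assumes r_i >= 0, as stated in the answer
X = rng.normal(size=(n, p))

# Assumed box bounds on beta: H(X beta) is invariant to positive scaling of
# beta, and a bound keeps the big-M values finite.
B = 1.0
# M_i = upper bound on -(X beta)_i over the box: sum_j |X_ij| * B
M = np.abs(X).sum(axis=1) * B

prob = pulp.LpProblem("heaviside_maximization", pulp.LpMaximize)
beta = [pulp.LpVariable(f"beta_{j}", lowBound=-B, upBound=B) for j in range(p)]
y = [pulp.LpVariable(f"y_{i}", cat="Binary") for i in range(n)]

# Objective: maximize sum_i r_i * y_i
prob += pulp.lpSum(float(r[i]) * y[i] for i in range(n))

# Big-M constraints: -(X beta)_i <= M_i * (1 - y_i), so y_i = 1 forces (X beta)_i >= 0
for i in range(n):
    prob += (
        -pulp.lpSum(float(X[i, j]) * beta[j] for j in range(p))
        <= float(M[i]) * (1 - y[i])
    )

prob.solve(pulp.PULP_CBC_CMD(msg=False))
beta_opt = np.array([b.value() for b in beta])
print("objective value:", pulp.value(prob.objective))
```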