Convexity of the variance of a mixture distribution

Operations Research - Asked by independentvariable on September 25, 2020

$X$ is a random variable that is sampled from the mixture of uniform distributions. In other words:
$$X \sim \sum_{i=1}^N w_i \cdot \mathbb{U}(x_i, x_{i+1}),$$
where $\mathbb{U}(x_i, x_{i+1})$ denotes a random variable that follows a uniform distribution on $[x_i, x_{i+1}]$.
For feasibility, we need $w_i \geq 0$ and $\sum_{i=1}^N w_i = 1$.
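
To make the setup concrete, here is a minimal sampling sketch, assuming numpy; the breakpoints $x_1,\ldots,x_{N+1}$ and weights $w_i$ below are purely illustrative and not values from the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical breakpoints x_1, ..., x_{N+1} and weights w_i (illustration only).
x = np.array([0.0, 1.0, 3.0, 4.0, 10.0])
w = np.array([0.1, 0.4, 0.2, 0.3])       # w_i >= 0, sum to 1

def sample_mixture(n_samples):
    # Pick a component i with probability w_i, then draw from U(x_i, x_{i+1}).
    comp = rng.choice(len(w), size=n_samples, p=w)
    return rng.uniform(x[comp], x[comp + 1])

samples = sample_mixture(100_000)
```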

In an optimization problem, my variables are $w_i$ for $i=1,\ldots,N$, and I would like to upper bound the variance of $X$. According to Wikipedia, the variance of $X$ is:
$$\mathrm{Var}(X) = \sum_{i=1}^N w_i(\sigma_i^2 + \mu_i^2 - \mu^2),$$
where $\sigma_i^2$ and $\mu_i$ are the variance and mean of $\mathbb{U}(x_i, x_{i+1})$, respectively (which are parameters), and $\mu$ is the mean of the mixture, which is
$$\mu = \sum_{i=1}^N w_i \frac{x_i + x_{i+1}}{2}.$$
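
(For a uniform component on $[x_i, x_{i+1}]$, these parameters are the standard uniform moments
$$\mu_i = \frac{x_i + x_{i+1}}{2}, \qquad \sigma_i^2 = \frac{(x_{i+1} - x_i)^2}{12},$$
so both are fixed once the breakpoints $x_i$ are fixed.)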

Thus, if my derivation is not wrong:
$$\mathrm{Var}(X) = \sum_{i=1}^N w_i\left(\sigma_i^2 + \mu_i^2 - \left(\sum_{j=1}^N w_j \frac{x_j + x_{j+1}}{2}\right)^2\right),$$
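
As a sanity check on this expression, here is a short sketch (assuming numpy and the same illustrative breakpoints and weights as above) that compares the closed-form variance with a Monte Carlo estimate:

```python
import numpy as np

# Illustrative breakpoints and weights (assumptions, not from the question).
x = np.array([0.0, 1.0, 3.0, 4.0, 10.0])
w = np.array([0.1, 0.4, 0.2, 0.3])

mu_i = (x[:-1] + x[1:]) / 2.0             # component means
sigma2_i = (x[1:] - x[:-1]) ** 2 / 12.0   # component variances
mu = w @ mu_i                             # mixture mean

var_formula = w @ (sigma2_i + mu_i**2) - mu**2

# Empirical check by sampling the mixture.
rng = np.random.default_rng(0)
comp = rng.choice(len(w), size=500_000, p=w)
samples = rng.uniform(x[comp], x[comp + 1])
print(var_formula, samples.var())   # the two values should agree closely
```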

which is quite unwieldy, and upper bounding this function appears to be a non-convex constraint (edit: I want to constrain $\mathrm{Var}(X) \leq \mathrm{constant}$).

My question is: is there any trick, or any convex approximation of this variance, that would let me include an upper bound on the variance as a constraint?

One Answer

In order to find the best upper bound on the variance, for given input values of $\mu_i$ and $\sigma_i^2$, you should globally maximize the variance with respect to the $w_i$, subject to the constraints $w_i \ge 0$, $\sum_i w_i = 1$.

This can be formulated as a convex QP (Quadratic Programming) problem, i.e., maximizing a concave quadratic subject to linear constraints. Hence it is easy to solve unless $N$ is gigantic, which hardly seems likely for any reasonable mixture distribution. I leave it to the OP as an exercise whether the KKT conditions yield a closed-form solution.

The convex QP takes the form:

maximize $\left(\sum_{i=1}^N w_i (\sigma_i^2 + \mu_i^2)\right) - \mu^2$ with respect to $\mu, w_i$

subject to $\sum_{i=1}^N w_i \mu_i = \mu$, $\; w_i \ge 0 \;\; \forall i$, $\; \sum_{i=1}^N w_i = 1$.
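
As an illustration (not part of the original answer), here is a minimal sketch of this QP in cvxpy, assuming the same hypothetical breakpoints as above; any convex QP solver would work equally well:

```python
import cvxpy as cp
import numpy as np

# Hypothetical breakpoints defining the uniform components (illustration only).
x = np.array([0.0, 1.0, 3.0, 4.0, 10.0])
mu_i = (x[:-1] + x[1:]) / 2.0             # component means
sigma2_i = (x[1:] - x[:-1]) ** 2 / 12.0   # component variances
N = len(mu_i)

w = cp.Variable(N, nonneg=True)
mu = cp.Variable()

# Concave objective: a linear term in w minus a convex square in mu.
objective = cp.Maximize(w @ (sigma2_i + mu_i**2) - cp.square(mu))
constraints = [cp.sum(w) == 1, w @ mu_i == mu]

prob = cp.Problem(objective, constraints)
prob.solve()

print("max Var(X) =", prob.value)
print("optimal w  =", w.value)
```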

If all the $\mu_i$ were equal to each other, this would be a Linear Programming problem with compact constraints. The optimum would therefore be at a vertex of the constraint set, and in this case that vertex would be $w_i = 1$ for the $i$ corresponding to the largest $\sigma_i^2$, and all other $w_i = 0$.

Edit: In response to the edit to the question ("I want to constrain $\mathrm{Var}(X) \le \mathrm{constant}$"):

If the naive approach of adding the constraint $\mathrm{Var}(X) \le \mathrm{constant}$ to my convex QP formulation above were taken, that would add a non-convex quadratic constraint, making the problem a non-convex Quadratically-Constrained Quadratic Program (QCQP), which requires a global optimizer, such as Gurobi 9.x or BARON, to solve to global optimality.

However, there is an easier, faster method: solve the (pre-edit) convex QP formulation. Then the maximum variance, accounting for the constraint $\mathrm{Var}(X) \le \mathrm{constant}$, equals

min(optimal objective value of the convex QP formulation, constant).
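
Continuing the illustrative sketch above (so `prob.value` is the optimal objective of the convex QP), this shortcut amounts to a single line; the constant `C` below is a placeholder, not a value from the question:

```python
C = 5.0   # placeholder for the user's constant
max_var_under_constraint = min(prob.value, C)
```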

Correct answer by Mark L. Stone on September 25, 2020
