TransWikia.com

Parametrized complexity of sparse optimization

Theoretical Computer Science Asked by Aryeh on October 30, 2021

Optimization problems of the form: minimize $c^T x$ subject to [possibly some linear constraints and] $\|x\|_0 \le k$ are known to be NP-hard. [Actually, I just realized that I don't have a reference, so if anyone has one handy, please say so in the comments!] The $\ell_0$ "norm" is the number of non-zero entries of the vector $x$. A common way to attack such problems is via a convex relaxation: relax $\|x\|_0 \le k$ to $\|x\|_1 \le k$ and solve the resulting linear program. With luck, one can quantify how close the relaxed solution is to the optimal one.
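For concreteness, the $\ell_1$ relaxation really is a linear program: split $x = u - v$ with $u, v \ge 0$, so that $\|x\|_1 \le \mathbf{1}^T u + \mathbf{1}^T v$. A minimal sketch using SciPy (the function name and example data are mine, not from any reference):

```python
import numpy as np
from scipy.optimize import linprog

def l1_relaxation(c, k):
    """Solve  min c^T x  s.t.  ||x||_1 <= k  as an LP.

    Split x = u - v with u, v >= 0; then ||x||_1 is bounded by
    sum(u) + sum(v), and the budget becomes a single linear
    inequality over the stacked variable z = [u, v].
    """
    n = len(c)
    obj = np.concatenate([c, -c])      # c^T u - c^T v = c^T x
    A_ub = np.ones((1, 2 * n))         # 1^T u + 1^T v <= k
    b_ub = np.array([k])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

# Example: c = [1, -2, 0.5], budget k = 1; the LP puts all the
# budget on the coordinate with the largest |c_i|, giving x = [0, 1, 0].
x = l1_relaxation(np.array([1.0, -2.0, 0.5]), 1.0)
```

Note that with a linear objective the relaxed optimum lands on a vertex of the $\ell_1$ ball, which is itself sparse; this is one intuition for why the relaxation often recovers the sparse solution.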

My question is: what if we try a more gradual relaxation, via the $\ell_p$ norm for $0 < p < 1$? The resulting programs are non-convex, but the degree of non-convexity can be tweaked. Surely the $p = 0.999$ case, for reasonably set other parameters, cannot be much harder than the $p = 1$ case? Has anybody tried this approach? The idea is to seek a better efficiency/optimality tradeoff via values of $p$ other than $1$.
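To make the proposal concrete, here is one way such an $\ell_p$ relaxation could be set up numerically: replace $\|x\|_p^p \le k$ by the smoothed constraint $\sum_i (x_i^2 + varepsilon)^{p/2} le k$ and hand it to a local NLP solver. This is only a sketch of the idea under my own assumptions (smoothing constant, solver choice, starting point); for $p < 1$ the feasible set is non-convex, so a local method gives no global guarantee:

```python
import numpy as np
from scipy.optimize import minimize

def lp_relaxation(c, k, p=0.999, eps=1e-8):
    """Sketch:  min c^T x  s.t.  sum_i (x_i^2 + eps)^(p/2) <= k.

    For p < 1 the constraint set is non-convex, so SLSQP only
    returns a local solution that depends on the starting point.
    The eps term smooths |x_i|^p near zero so gradients exist.
    """
    n = len(c)
    cons = {
        "type": "ineq",
        # SciPy expects g(x) >= 0 for inequality constraints
        "fun": lambda x: k - np.sum((x ** 2 + eps) ** (p / 2)),
    }
    res = minimize(lambda x: c @ x, x0=np.zeros(n),
                   method="SLSQP", constraints=[cons])
    return res.x

# Same data as above, with p close to 1 (the "almost convex" regime
# suggested in the question).
x = lp_relaxation(np.array([1.0, -2.0, 0.5]), 1.0, p=0.999)
```

As $p \to 1$ the constraint approaches the $\ell_1$ ball and the local solution should approach the LP solution; as $p \to 0$ the ball grows spikes along the axes, which is exactly what promotes sparsity but also what makes the landscape hard.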

One Answer

This may be related to what you have in mind: arxiv.org/abs/0804.4666

Answered by Mahdi Cheraghchi on October 30, 2021
