TransWikia.com

Under what condition on $A$ is the following true: $\lambda_{\min}(A)\,\|x\|_2^2 \leq x^T Ax \leq \lambda_{\max}(A)\,\|x\|_2^2$?

Mathematics Asked on November 18, 2021

I remember that in a numerical analysis class, the professor showed us

$$\lambda_{\min}(A)\,\|x\|_2^2 \leq x^T Ax \leq \lambda_{\max}(A)\,\|x\|_2^2$$

where I am assuming that $x \in \mathbb{R}^n$, $\lambda_{\min}(A)$ is the minimum eigenvalue of $A$ (and $\lambda_{\max}(A)$ the maximum), and $A$ has some nice property.

I am trying to find out for which $A$ the above inequality holds (and hopefully why).
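For what it's worth, here is a quick numerical sanity check of the inequality in the symmetric case (a sketch only; the entries of $A$ are made-up values, and the eigenvalues of a symmetric $2\times2$ matrix are computed by the quadratic formula):

```python
import math

# Symmetric 2x2 matrix A = [[a, b], [b, d]] (made-up example values)
a, b, d = 2.0, 1.0, -1.0

# Eigenvalues of a symmetric 2x2 matrix via the quadratic formula:
# lambda = (a+d)/2 +- sqrt(((a-d)/2)^2 + b^2)
mean = (a + d) / 2.0
disc = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
lam_min, lam_max = mean - disc, mean + disc

# Sample unit vectors x = (cos t, sin t), so ||x||_2 = 1, and check
# lam_min <= x^T A x <= lam_max on each sample.
for k in range(1000):
    t = 2 * math.pi * k / 1000
    x1, x2 = math.cos(t), math.sin(t)
    q = a * x1 * x1 + 2 * b * x1 * x2 + d * x2 * x2  # x^T A x
    assert lam_min - 1e-12 <= q <= lam_max + 1e-12
print("inequality holds for all sampled unit vectors")
```

For a general $x$ one divides by $\|x\|_2^2$, which is why sampling unit vectors suffices.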

Through some posts on MathSE I was directed to the Courant–Fischer min-max theorem, and I looked up a version in Horn and Johnson's Matrix Analysis textbook.

[Image: statement of the Courant–Fischer min-max theorem from Horn and Johnson's Matrix Analysis.]

I can see that if you shift the $x^Tx$ from the denominator to the other side, then you get something like that inequality. But this looks like serious overkill, and I cannot immediately see a direct reduction from Courant–Fischer to the original pair of inequalities.
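Spelling out the shift I mean, in the symmetric case: only the two extreme cases of Courant–Fischer (the Rayleigh–Ritz characterization) are needed, namely
$$\lambda_{\min}(A)=\min_{x\ne0}\frac{x^TAx}{x^Tx},\qquad \lambda_{\max}(A)=\max_{x\ne0}\frac{x^TAx}{x^Tx},$$
and multiplying $\lambda_{\min}(A)\le\frac{x^TAx}{x^Tx}\le\lambda_{\max}(A)$ through by $x^Tx=\|x\|_2^2>0$ gives the displayed inequality for $x\ne0$ (it is trivial for $x=0$).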

Any reference helps!

One Answer

Claim. Suppose $n\ge2$ and $A\in M_n(\mathbb R)$ has a real spectrum. Then $\lambda_{\min}(A)\le x^TAx\le\lambda_{\max}(A)$ for every real unit vector $x$ if and only if $$ Q^TAQ=\left(\begin{array}{cc|c}\lambda_1&0&0\\ 0&\lambda_n&0\\ \hline0&0&B\end{array}\right)\tag{1} $$ for some orthogonal matrix $Q$, some real numbers $\lambda_1,\lambda_n$ and some matrix $B\in M_{n-2}(\mathbb R)$ such that $$ \lambda_1\le\lambda_{\min}\left(\frac12(B+B^T)\right) \le\lambda_{\max}\left(\frac12(B+B^T)\right)\le\lambda_n.\tag{2} $$
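(As a numerical sanity check on the claim, not part of the proof: here is a small sketch for $n=4$, with made-up block sizes and entries, where $A$ is already in the form of $(1)$ with $Q=I$.)

```python
import math, random

# 4x4 example in the block form of the claim (made-up values):
# A = diag(lam1, lamn, B) with B = [[1, 1], [0, 1.5]] upper triangular,
# so A has real spectrum {0, 2, 1, 1.5}.
lam1, lamn = 0.0, 2.0
A = [[lam1, 0, 0, 0],
     [0, lamn, 0, 0],
     [0, 0, 1.0, 1.0],
     [0, 0, 0.0, 1.5]]

# Condition (2): the eigenvalues of (B + B^T)/2 = [[1, 0.5], [0.5, 1.5]]
# must lie in [lam1, lamn].
mean = (1.0 + 1.5) / 2
disc = math.sqrt(((1.0 - 1.5) / 2) ** 2 + 0.5 ** 2)
assert lam1 <= mean - disc and mean + disc <= lamn

# Then x^T A x should stay in [lam1, lamn] for all unit vectors x.
random.seed(0)
for _ in range(2000):
    x = [random.gauss(0, 1) for _ in range(4)]
    nrm = math.sqrt(sum(t * t for t in x))
    x = [t / nrm for t in x]  # unit vector
    q = sum(x[i] * A[i][j] * x[j] for i in range(4) for j in range(4))
    assert lam1 - 1e-12 <= q <= lamn + 1e-12
print("x^T A x stays in [lam1, lamn] for all sampled unit vectors")
```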

Proof. Let $S=\frac12(A+A^T)$ denote the symmetric part of $A$, so that $x^TSx=x^TAx$ for every real vector $x$. Suppose $\lambda_{\min}(A)\le x^TAx\le\lambda_{\max}(A)$ for every real unit vector $x$. Then $$ \lambda_{\min}(A)\le\min_{\|x\|_2=1}x^TAx.\tag{3} $$ However, when $u$ is a unit eigenvector of $A$ corresponding to the eigenvalue $\lambda_{\min}(A)$, we also have $$ \min_{\|x\|_2=1}x^TAx\le u^TAu=\lambda_{\min}(A).\tag{4} $$ Therefore equalities hold in $(3)$ and $(4)$. As $x^TSx=x^TAx$ for every real vector $x$, we obtain $$ \lambda_{\min}(A)=\min_{\|x\|_2=1}x^TSx=u^TSu. $$ As $S$ is symmetric, the minimum of $x^TSx$ on the unit sphere is attained only at eigenvectors corresponding to the minimum eigenvalue of $S$. Therefore $u$ must also be an eigenvector of $S$ corresponding to the eigenvalue $\lambda_{\min}(S)$. It follows that $$ u^TSu=\lambda_{\min}(A)=\lambda_{\min}(S)=\min_{\|x\|_2=1}x^TSx= \min_{\|x\|_2=1}x^TAx.\tag{5} $$ Similarly, let $v$ be a unit eigenvector of $A$ corresponding to the eigenvalue $\lambda_{\max}(A)$. Then $v$ is also an eigenvector of $S$, and equalities analogous to $(5)$ hold.
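(The identity $x^TSx=x^TAx$ used above holds because the skew-symmetric part $\frac12(A-A^T)$ contributes nothing to the quadratic form; a throwaway numerical check with random made-up entries:)

```python
import random

# x^T S x = x^T A x for S = (A + A^T)/2, since the skew part
# K = (A - A^T)/2 satisfies x^T K x = 0 for every real x.
random.seed(1)
n = 3
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
S = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]

def quad(M, x):
    """Quadratic form x^T M x."""
    return sum(x[i] * M[i][j] * x[j] for i in range(n) for j in range(n))

x = [random.uniform(-1, 1) for _ in range(n)]
assert abs(quad(A, x) - quad(S, x)) < 1e-12
```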

Suppose $u$ and $v$ are linearly independent. For any vector $w\perp u$, we have $$ \begin{cases} \langle A^Tw,u\rangle=\langle w,Au\rangle=\langle w,\lambda_{\min}(S)u\rangle=0,\\ \langle Sw,u\rangle=\langle w,Su\rangle=\langle w,\lambda_{\min}(S)u\rangle=0,\\ \langle Aw,u\rangle=\langle(2S-A^T)w,u\rangle=0. \end{cases}\tag{6} $$ In particular, if we write $v=au+bv'$ for some unit vector $v'\perp u$, then $Av'\perp u$. However, since $V=\operatorname{span}\{u,v'\}=\operatorname{span}\{u,v\}$ is an invariant subspace of $A$, we must have $Av'\parallel v'$, i.e. $v'$ is also an eigenvector of $A$. But the eigenvalues of $A|_V$ are $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$. Hence $v'$ must be an eigenvector corresponding to the eigenvalue $\lambda_{\max}(A)$. That is, we may assume that $v=v'\perp u$.

The equalities analogous to $(6)$ also hold if $u$ is replaced by $v$ and $w\perp v$. Therefore, if $Q$ is an orthogonal matrix whose first two columns are $u$ and $v$ respectively, then $Q^TAQ$ is in the form of $(1)$. Moreover, we have $$ \lambda_{\min}\left(\frac12(B+B^T)\right)\ge\lambda_{\min}\left(\frac12(A+A^T)\right)=\lambda_{\min}(S)=\lambda_{\min}(A) $$ and similarly $\lambda_{\max}\left(\frac12(B+B^T)\right)\le\lambda_{\max}(A)$. Therefore $(2)$ also holds with $\lambda_1=\lambda_{\min}(A)$ and $\lambda_n=\lambda_{\max}(A)$.

If $u$ and $v$ are linearly dependent, then $\lambda_{\min}(A)=\lambda_{\max}(A)$. Therefore, all eigenvalues of $A$ are equal to some $\lambda\in\mathbb R$. By a similar argument to the above, we get $$ Q^TAQ=\begin{pmatrix}\lambda&0\\ 0&B\end{pmatrix} $$ for some orthogonal matrix $Q$ and some $B\in M_{n-1}(\mathbb R)$. As $A$ has a real spectrum, we may choose $Q$ such that $B$ is upper triangular. Hence the diagonal elements of $B$ are equal to $\lambda$. However, as $$ \begin{aligned} \lambda&=\max_{\|x\|_2=1}x^TSx=\max_{\|x\|_2=1}x^TAx\ge\max_{\|x\|_2=1}x^TBx =\max_{\|x\|_2=1}\frac12x^T(B+B^T)x\\ &\ge\min_{\|x\|_2=1}\frac12x^T(B+B^T)x=\min_{\|x\|_2=1}x^TBx\ge\min_{\|x\|_2=1}x^TAx=\min_{\|x\|_2=1}x^TSx=\lambda, \end{aligned} $$ we have $\frac12x^T(B+B^T)x=\lambda$ for every unit vector $x$. Hence $\frac12(B+B^T)=\lambda I_{n-1}$, i.e. $B=\lambda I_{n-1}+K$ for some skew-symmetric matrix $K$. But $B$ has a real spectrum. Hence $K$ must be zero and $A=\lambda I_n$. But then $(1)$ and $(2)$ hold for any orthogonal matrix $Q$. The necessity of $(1)$ and $(2)$ is thus proved.

Conversely, suppose $(1)$ and $(2)$ are true. By replacing $A$ with $B$ in $(4)$, we have $\min_{\|y\|_2=1}y^TBy\le\lambda_{\min}(B)$. Hence $\lambda_{\min}\left(\frac12(B+B^T)\right) =\min_{\|y\|_2=1}\frac12y^T(B+B^T)y =\min_{\|y\|_2=1}y^TBy \le\lambda_{\min}(B)$. Therefore, $(2)$ implies that $\lambda_1\le\lambda_{\min}(B)$. Similarly, we also have $\lambda_{\max}(B)\le\lambda_n$. Hence we must have $\lambda_{\min}(A)=\lambda_1$ and $\lambda_{\max}(A)=\lambda_n$. It follows from $(1)$ and $(2)$ that $\lambda_{\min}(A)\le x^TSx\le\lambda_{\max}(A)$ for every real unit vector $x$. Since $x^TSx=x^TAx$, the conclusion follows.
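(To see concretely why a real spectrum alone is not enough, i.e. why some condition like the block form above is needed: a minimal counterexample with made-up entries, checked numerically.)

```python
import math

# A = [[1, 2], [0, 1]] has real spectrum {1, 1}, so
# lam_min(A) = lam_max(A) = 1, yet the quadratic form
# x^T A x = x1^2 + 2*x1*x2 + x2^2 = (x1 + x2)^2
# reaches 2 on the unit circle, breaking the upper bound.
best = 0.0
for k in range(1000):
    t = 2 * math.pi * k / 1000
    x1, x2 = math.cos(t), math.sin(t)
    q = x1 * x1 + 2 * x1 * x2 + x2 * x2  # x^T A x
    best = max(best, q)
assert best > 1.5  # exceeds lam_max(A) = 1
print("max of x^T A x over sampled unit vectors:", round(best, 3))
```

Here the symmetric part $\frac12(A+A^T)=\begin{pmatrix}1&1\\1&1\end{pmatrix}$ has eigenvalues $0$ and $2$, which is exactly what the quadratic form sees.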

Answered by user1551 on November 18, 2021
