Decomposition of a linear operator to a partially orthogonal operator and a semi-definite self-adjoint operator

Mathematics Asked by Zhanxiong on January 7, 2022

$\DeclareMathOperator{\A}{\mathscr{A}}$
$\DeclareMathOperator{\B}{\mathscr{B}}$
$\DeclareMathOperator{\C}{\mathscr{C}}$
$\DeclareMathOperator{\kernel}{\mathrm{Ker}}$
$\DeclareMathOperator{\diag}{\mathrm{diag}}$
$\DeclareMathOperator{\span}{\mathrm{span}}$
$\DeclareMathOperator{\real}{\mathbb{R}^2}$
$\DeclareMathOperator{\rank}{\text{rank}}$

The question is:

Let $\A$ be a linear operator on the $n$-dimensional Euclidean space $V$. Prove that there exist a partially orthogonal operator $\B$ and a semi-definite self-adjoint operator $\C$ with $\kernel(\B) = \kernel(\C)$ such that $\A = \B\C$, and that the operators $\B$ and $\C$ are unique. Prove that the linear operator $\A$ is normal if and only if $\B$ commutes with $\C$.

An operator $\B$ is called partially orthogonal if there exists a $\B$-invariant subspace $U$ such that $\|\B(\alpha)\| = \|\alpha\|$ for any $\alpha \in U$, and $\B(\alpha) = 0$ for any $\alpha \in U^\bot$. It is easy to show that an operator is partially orthogonal if and only if there exists an orthonormal basis $\{\xi_1, \ldots, \xi_n\}$ such that $\B(\xi_1, \ldots, \xi_n) = (\xi_1, \ldots, \xi_n)\diag(O, 0)$, where $O$ is an orthogonal matrix of order $r = \dim(U)$.
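For concreteness, here is a small numerical sketch of this characterization (my own illustration, not part of the original question; the dimensions $n = 5$, $r = 3$ and the random data are arbitrary): it builds $B = P\,\diag(O, 0)\,P'$ from a random orthonormal basis $P$ and a random order-$r$ orthogonal block $O$, then checks the defining properties.

```python
# Sketch: construct a partially orthogonal matrix and verify its properties.
import numpy as np

rng = np.random.default_rng(0)
n, r = 5, 3

# Random orthonormal basis (columns of P) via QR of a random matrix.
P, _ = np.linalg.qr(rng.standard_normal((n, n)))
# Random order-r orthogonal block O.
O, _ = np.linalg.qr(rng.standard_normal((r, r)))

D = np.zeros((n, n))
D[:r, :r] = O
B = P @ D @ P.T

U = P[:, :r]        # orthonormal basis of the invariant subspace U
U_perp = P[:, r:]   # orthonormal basis of U^bot

# ||B(alpha)|| = ||alpha|| for alpha in U:
alpha = U @ rng.standard_normal(r)
print(np.isclose(np.linalg.norm(B @ alpha), np.linalg.norm(alpha)))  # True

# B vanishes on U^bot:
print(np.allclose(B @ U_perp, 0))  # True

# U is B-invariant: B(U) has no component along U^bot.
print(np.allclose(U_perp.T @ (B @ U), 0))  # True
```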

Oddly enough, so far I am able to prove the uniqueness and commuting statements, but not the construction itself. Inspired by the polar decomposition, I tried using the singular value decomposition of $A$ (the matrix of $\A$ under a fixed orthonormal basis) as follows:
\begin{align*}
A = O_1\diag(M, 0)O_2 = O_1\diag(I_{(r)}, 0)O_2 \times O_2'\diag(M, 0)O_2 =: BC,
\end{align*}

where $M = \diag(\mu_1, \ldots, \mu_r)$ is a diagonal matrix whose diagonal entries are the nonzero singular values of $A$. However, the $B$ defined in this way is not partially orthogonal. If we force $B$ to be partially orthogonal, then $C$ cannot be made symmetric.
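To see numerically where this attempt breaks down, note that any partially orthogonal $B = P\diag(O, 0)P'$ satisfies $BB' = P\diag(OO', 0)P' = P\diag(I_{(r)}, 0)P' = B'B$, so normality is a necessary condition. The sketch below (my own illustration, with random test data) confirms that the SVD-based $B$ reproduces $A$ and is a partial isometry, but generically fails normality:

```python
# Sketch: BC = A holds, but the SVD-based B is generically not normal,
# hence not partially orthogonal.
import numpy as np

rng = np.random.default_rng(1)
n, r = 4, 2

# Build a rank-r matrix A and take its SVD: A = O1 @ diag(M, 0) @ O2.
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
O1, s, O2 = np.linalg.svd(A)

Ir = np.diag((s > 1e-10).astype(float))   # diag(I_(r), 0)
M = np.diag(s)                            # diag(mu_1, ..., mu_r, 0, ..., 0)

B = O1 @ Ir @ O2
C = O2.T @ M @ O2

print(np.allclose(B @ C, A))              # True: the product recovers A
print(np.allclose(B @ B.T @ B, B))        # True: B is a partial isometry
print(np.allclose(B @ B.T, B.T @ B))      # False (generically): B is not
                                          # normal, so not partially orthogonal
```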

So probably a new perspective is needed here. I appreciate any insights.


As pointed out by @Ben Grossmann, such a decomposition does not always exist. For example, consider the matrix $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. Below we show that this $A$ does not admit the required decomposition.

Suppose $A = BC$, where $B$ is partially orthogonal and $C \geq 0$. Since $\kernel(A) = \span\{(1, 0)'\} \supseteq \kernel(C)$, either $\kernel(C) = \span\{(1, 0)'\}$ or $\kernel(C) = \{0\}$. If $\kernel(C) = \{0\}$, then $\kernel(B) = \{0\}$, which implies $\rank A = 2$, a contradiction. Hence $\kernel(B) = \kernel(C) = \span\{(1, 0)'\}$.

Since $C \geq 0$ and $\rank C = 1$, we may write the spectral decomposition of $C$ as $C = O\diag(\lambda, 0)O'$, where $\lambda > 0$ and $O$ is an orthogonal matrix of order $2$. By solving $O\diag(\lambda, 0)O'(1, 0)' = (0, 0)'$ and noting that $O$ is orthogonal, it follows that $O = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ (up to signs of its columns, which do not affect $C$), and thereby $C = \begin{pmatrix} 0 & 0 \\ 0 & \lambda \end{pmatrix}$.

Since $B$ is partially orthogonal and $\rank B = 1$, there exists an orthogonal matrix $P$ of order $2$ such that $B = P\diag(\pm 1, 0)P'$ (the order-$1$ orthogonal block is $\pm 1$). By the same argument as above, $P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, resulting in $B = \pm\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$.

As a result, $A = BC = \pm\begin{pmatrix} 0 & 0 \\ 0 & \lambda \end{pmatrix}$ is diagonal, contradicting $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$.
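A quick numerical sanity check of this contradiction (illustration only; the sampled values of $\lambda$ are arbitrary):

```python
# Sanity check: with the forced B = diag(0, 1) and C = diag(0, lam),
# the product BC is diagonal and can never equal A = [[0, 1], [0, 0]].
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
for lam in (0.5, 1.0, 2.0):   # any positive lam gives the same picture
    B = np.diag([0.0, 1.0])
    C = np.diag([0.0, lam])
    print(lam, np.allclose(B @ C, A))   # always False: BC = diag(0, lam) != A
```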

One Answer

Either your understanding of the question is wrong, or the author has suggested something that isn't true. In either case, there exist operators $\mathscr A$ for which such a decomposition does not exist.

For example, take $$ A = \pmatrix{0&1\\0&0}. $$ We note that if $\ker \mathscr B = \ker \mathscr C$, then both operators must have kernel equal to that of $\mathscr A$, i.e. the span of $(1,0)$. This means that the image of $\mathscr B$ must be the orthogonal complement of $(1,0)$, which would mean (assuming that the subspace on which $\|\mathscr B\alpha\| = \|\alpha\|$ is invariant) that the image of $\mathscr A$ is a subspace of the span of $(0,1)$.

However, this is not the case.


Here is an answer if we drop the requirement that $\ker \mathscr B = \ker \mathscr C$, but assume that your "partially orthogonal" is accurate.

Using the singular value decomposition, under a suitable choice of orthonormal basis $\mathscr A$ has the form
$$ A = \pmatrix{\Sigma & 0\\ 0 & 0} U, $$
where $U$ is an orthogonal matrix and $\Sigma$ is diagonal with positive entries. Partition the matrix $U$ conformally to get
$$ A = \pmatrix{\Sigma & 0\\ 0 & 0}\pmatrix{U_{11} & U_{12}\\ U_{21} & U_{22}} = \pmatrix{\Sigma U_{11} & \Sigma U_{12}\\ 0 & 0}. $$
Note that the image of $\mathscr B$, which contains the image of $\mathscr A$, must be such that the restriction of $\mathscr B$ to its image is an orthogonal operator. In other words, the matrix $B$ of $\mathscr B$ must be block diagonal. With that in mind, define
$$ B = \pmatrix{U_{11} & 0\\ 0 & 0}, \\ C = \pmatrix{U_{11}^T \Sigma U_{11} & U_{11}^T \Sigma U_{12}\\ U_{12}^T \Sigma U_{11} & U_{12}^T \Sigma U_{12}} = \pmatrix{U_{11} & 0\\ 0 & U_{12}}^T \pmatrix{\Sigma & \Sigma\\ \Sigma & \Sigma} \pmatrix{U_{11} & 0\\ 0 & U_{12}}. $$
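Below is a partial numerical check of pieces of this construction (my own sketch, not part of the original answer; $n = 4$, $r = 2$ and the random $U$, $\Sigma$ are arbitrary test data): it verifies the displayed block factorization of $C$ and that $C$ is symmetric positive semi-definite.

```python
# Sketch: verify the block factorization of C and that C >= 0.
import numpy as np

rng = np.random.default_rng(2)
n, r = 4, 2

U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthogonal U
Sigma = np.diag(rng.uniform(0.5, 2.0, size=r))     # diagonal, positive entries

U11, U12 = U[:r, :r], U[:r, r:]

C = np.block([[U11.T @ Sigma @ U11, U11.T @ Sigma @ U12],
              [U12.T @ Sigma @ U11, U12.T @ Sigma @ U12]])

# Factored form: C = G.T @ S @ G with G = diag(U11, U12) and
# S = [[Sigma, Sigma], [Sigma, Sigma]], which is PSD since S = [I; I] Sigma [I, I].
G = np.block([[U11, np.zeros((r, n - r))],
              [np.zeros((r, r)), U12]])
S = np.block([[Sigma, Sigma], [Sigma, Sigma]])
print(np.allclose(C, G.T @ S @ G))                  # True: factorization holds

print(np.allclose(C, C.T))                          # True: C is symmetric
print(np.min(np.linalg.eigvalsh(C)) >= -1e-12)      # True: C is PSD
```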

Answered by Ben Grossmann on January 7, 2022
