
Show that this iterative Richardson iteration may diverge

Mathematics Asked on December 5, 2020

I need a check on the following problem:

Let $A$ be a nonsingular matrix with real eigenvalues, and consider the iterative scheme $$x_{k+1} = x_k + \alpha (b - Ax_k)$$ for $\alpha \ne 0$.

i) Assume that $A$ has both negative and positive eigenvalues. Show that for every $\alpha \ne 0$ there exists $x_0$ such that $\{ x_k \}_k$ does not converge.

ii) Assume that $A$ has only positive eigenvalues. Find conditions on $\alpha$ such that the method converges for every $x_0$. Find also the value of $\alpha$ that minimizes the spectral radius.


I have big problems with the first point.

i) I notice that the iteration matrix is $R = I - \alpha A$, so its eigenvalues are $\lambda(R) = 1 - \alpha \lambda$. The requirement for convergence is $\rho(R) < 1$, so for every eigenvalue $\lambda_i$ of $A$ it must hold that $$|1 - \alpha \lambda_i| < 1,$$ which, since $\lambda_i \in \mathbb{R}$, is equivalent to $0 < \alpha \lambda_i < 2$, i.e. $$\frac{2}{\alpha \lambda_i} > 1$$ (the quotient is well defined, as $\det(A) = \prod_i \lambda_i \ne 0$ and so each $\lambda_i \ne 0$).

The fact is that we don't know anything more about that quotient. So, if the sign of the eigenvalues is not constant (as it could be from the assumptions), then for every $\alpha \ne 0$ some product $\alpha \lambda_i$ is negative, the condition $\frac{2}{\alpha \lambda_i} > 1$ fails, and the method will diverge.
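As a quick numerical sanity check (my own addition, using a made-up $2 \times 2$ matrix, not anything from the exercise), here is a Python sketch showing that with one positive and one negative eigenvalue the spectral radius of $R = I - \alpha A$ never drops below $1$:

```python
# My own sanity check (not part of the exercise): a hypothetical matrix with
# eigenvalues of both signs. For every alpha != 0 one of the products
# alpha*lambda_i is negative, so |1 - alpha*lambda_i| > 1 and rho(R) >= 1.
import numpy as np

A = np.diag([2.0, -1.0])                      # eigenvalues 2 and -1

def spectral_radius(alpha):
    R = np.eye(2) - alpha * A                 # iteration matrix R = I - alpha*A
    return max(abs(np.linalg.eigvals(R)))

for alpha in [-1.0, -0.1, 0.1, 0.5, 1.0]:
    print(f"alpha = {alpha:5.2f}   rho(R) = {spectral_radius(alpha):.3f}")
# Every printed value is at least 1, so the convergence condition always fails.
```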

ii) Here I just imposed that for every $i$: $$|1 - \lambda_i \alpha| < 1,$$ i.e. $$\alpha \in \left(0, \frac{2}{\lambda_i}\right).$$ Assume that $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n > 0$;
so the last condition becomes $$\alpha \in \left(0, \frac{2}{\lambda_1}\right).$$

Then, in order to minimize the spectral radius, I impose $$1 - \alpha \lambda_n = -(1 - \alpha \lambda_1),$$ from which it follows that $$\alpha = \frac{2}{\lambda_1 + \lambda_n}$$ minimizes the spectral radius.
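To back this up numerically (my own sketch, with illustrative eigenvalues rather than anything from the exercise), one can tabulate $\rho(I - \alpha A)$ over a few values of $\alpha$ and check that $\alpha = \frac{2}{\lambda_1 + \lambda_n}$ gives the smallest value:

```python
# My own sketch for part ii), with illustrative eigenvalues (not from the
# exercise): rho(R) < 1 exactly for alpha in (0, 2/lambda_1), and the minimum
# is attained at alpha = 2/(lambda_1 + lambda_n).
import numpy as np

lams = np.array([4.0, 2.5, 1.0])              # lambda_1 >= ... >= lambda_n > 0
lam1, lamn = lams[0], lams[-1]

def rho(alpha):
    return max(abs(1.0 - alpha * lams))       # spectral radius of R = I - alpha*A

alpha_opt = 2.0 / (lam1 + lamn)               # = 0.4 for these eigenvalues
for alpha in [0.1, 0.3, alpha_opt, 0.45, 2.0 / lam1]:
    print(f"alpha = {alpha:.3f}   rho(R) = {rho(alpha):.3f}")
# alpha_opt gives rho(R) = 0.6, the smallest value; at alpha = 2/lambda_1 the
# radius reaches 1 and convergence for every x_0 is lost.
```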

Is everything okay?

One Answer

I think it may be useful to take a step back to see exactly where the spectral radius criterion comes from.

Suppose $x$ is the exact solution satisfying $Ax = b$. If we define the error at the $k$-th iteration as $e_k = x_k - x$, then $$e_{k+1} = x_{k+1} - x = x_k + \alpha(b - Ax_k) - x = (I - \alpha A)e_k = Re_k,$$ using $b = Ax$.

So by setting $e_0 = x_0 - x$, the error at a given iteration $k \in \mathbb{N}$ simplifies to $e_k = R^k e_0$.

It can be shown that $R^k \rightarrow 0$ as $k \rightarrow \infty$ if and only if all eigenvalues of $R$ have absolute value strictly less than $1$, so the spectral radius criterion is necessary and sufficient for convergence for every $e_0$.
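If it helps, here is a small sketch (my own, with arbitrarily chosen diagonal matrices) illustrating the criterion: $\|R^k e_0\|$ decays when every eigenvalue of $R$ has modulus less than $1$, and grows otherwise:

```python
# My own illustration with arbitrarily chosen diagonal matrices: ||R^k e0||
# shrinks when every eigenvalue of R has modulus < 1 and grows otherwise.
import numpy as np

rng = np.random.default_rng(0)
e0 = rng.standard_normal(2)                   # a generic initial error

for eigs in ([0.5, -0.8], [0.5, 1.2]):        # rho(R) < 1  vs.  rho(R) > 1
    R = np.diag(eigs)
    e = e0.copy()
    for _ in range(50):
        e = R @ e                             # e_{k+1} = R e_k
    print(f"eigenvalues {eigs}:  ||e_50|| = {np.linalg.norm(e):.2e}")
# The first case decays toward 0; the second blows up, so R^k e0 does not
# converge to 0 for a generic e0.
```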

Perhaps the confusion is here: even if $R^k \nrightarrow 0$, the method still converges for some choices of $x_0$. As an example, for any $R$, $e_0 \in \ker(R)$ implies $e_1 = Re_0 = 0$, and hence $e_k \rightarrow 0$ as $k \rightarrow \infty$. So to find an $x_0$ that makes the method diverge, the initial choice of $x_0$ has to be more specific.

To get an explicit initial condition that makes the iteration diverge, start by taking an eigenpair $(\lambda_*, v_*)$ of $A$ and notice that, since $$Rv_* = (I - \alpha A)v_* = v_* - \alpha(Av_*) = (1 - \alpha \lambda_*)v_*,$$ $v_*$ is also an eigenvector of $R$, with associated eigenvalue $(1 - \alpha \lambda_*)$. But, as you already found out, $A$ having eigenvalues of both signs implies that $|1 - \alpha \lambda_*| > 1$ for some eigenpair $(\lambda_*, v_*)$, whatever $\alpha \ne 0$ is. So by making $e_0 = v_*$, i.e. $x_0 = v_* + x$, $$e_k = R^k v_* = \overbrace{(1 - \alpha \lambda_*)^k}^{|1 - \alpha \lambda_*| > 1} v_* \nrightarrow 0 \quad \text{as } k \rightarrow \infty,$$ and therefore divergence is guaranteed.
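For concreteness, here is a short sketch of this construction (with a made-up $A$, $b$ and $\alpha$, purely for illustration): start the iteration from $x_0 = v_* + x$ and watch $\|x_k - x\|$ grow by the factor $|1 - \alpha \lambda_*|$ at every step.

```python
# My own illustration of the construction above; A, b and alpha are made up.
# Take an eigenvector v_* of A whose eigenvalue lambda_* satisfies
# alpha*lambda_* < 0, start from x_0 = v_* + x, and the error grows by the
# factor |1 - alpha*lambda_*| at every step.
import numpy as np

A = np.diag([2.0, -1.0])
b = np.array([1.0, 1.0])
x_exact = np.linalg.solve(A, b)

alpha = 0.5                                   # alpha * (-1) < 0
v_star = np.array([0.0, 1.0])                 # eigenvector of A for lambda_* = -1
x = v_star + x_exact                          # the "bad" initial guess x_0

for k in range(1, 6):
    x = x + alpha * (b - A @ x)               # Richardson step
    print(f"k = {k}   ||x_k - x|| = {np.linalg.norm(x - x_exact):.2f}")
# Prints 1.50, 2.25, 3.38, 5.06, 7.59: the error is |1 - alpha*lambda_*|^k = 1.5^k.
```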

Your solution to Part ii) looks good to me!

Correct answer by gabrimev on December 5, 2020
