# Show that this Richardson iteration may diverge

Mathematics Asked on December 5, 2020

I need a check on the following problem:

Let $$A$$ be a nonsingular matrix with real eigenvalues, and consider the iterative scheme $$x_{k+1} = x_k + \alpha (b - Ax_k)$$ for $$\alpha \ne 0$$.

i) Assume that $$A$$ has both positive and negative eigenvalues. Show that for every $$\alpha \ne 0$$ there exists $$x_0$$ s.t. $$\{x_k\}_k$$ does not converge.

ii) Assume that $$A$$ has only positive eigenvalues. Find conditions on $$\alpha$$ s.t. the method converges for every $$x_0$$. Find also the value of $$\alpha$$ that minimizes the spectral radius.
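For concreteness, the scheme itself is easy to run numerically. Below is a minimal sketch; the matrix $$A$$, right-hand side $$b$$, and step $$\alpha$$ are illustrative choices, not part of the problem.

```python
# Minimal sketch of the Richardson iteration x_{k+1} = x_k + alpha*(b - A x_k).
# A, b, and alpha below are illustrative choices (A has positive eigenvalues 2 and 1).
import numpy as np

def richardson(A, b, alpha, x0, iters=200):
    """Run the fixed-point iteration and return the final iterate."""
    x = x0.astype(float)
    for _ in range(iters):
        x = x + alpha * (b - A @ x)
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
x = richardson(A, b, alpha=0.5, x0=np.zeros(2))
# for this A, any alpha in (0, 1) makes the iterates approach A^{-1} b = [1, 3]
```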

I have big problems with the first point.

i) I notice that the iteration matrix is $$R = I - \alpha A$$, so its eigenvalues are $$\lambda(R) = 1 - \alpha\lambda$$. The requirement for convergence is $$\rho(R) < 1$$, so it must be $$|1 - \alpha\lambda_i| < 1$$ for every eigenvalue $$\lambda_i$$ of $$A$$. Since $$\lambda_i \in \mathbb{R}$$, this is equivalent to $$0 < \alpha\lambda_i < 2$$, i.e. $$\frac{2}{\alpha\lambda_i} > 1$$ (well defined, as $$\det(A) = \prod_i \lambda_i \ne 0$$ and so each $$\lambda_i \ne 0$$).

The fact is that the sign of that quotient depends on the signs of $$\alpha$$ and $$\lambda_i$$. If the eigenvalues do not all have the same sign (as the assumptions allow), then for any fixed $$\alpha \ne 0$$ some product $$\alpha\lambda_i$$ is negative, the condition $$0 < \alpha\lambda_i < 2$$ fails for that $$i$$, and the method does not converge for every $$x_0$$.

ii) Here I just imposed that for every $$i$$: $$|1 - \lambda_i\alpha| < 1$$, i.e. $$\alpha \in \left(0, \frac{2}{\lambda_i}\right)$$. Assuming $$\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n > 0$$,
the last condition becomes $$\alpha \in \left(0, \frac{2}{\lambda_1}\right)$$.

Then, in order to minimize the spectral radius, I impose $$1 - \alpha\lambda_n = -(1 - \alpha\lambda_1)$$, from which it follows that $$\alpha = \frac{2}{\lambda_1 + \lambda_n}$$ minimizes the spectral radius.
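This optimal $$\alpha$$ can be checked numerically by a brute-force grid search over the admissible interval; the eigenvalues below are illustrative choices.

```python
# Numerical check that alpha = 2/(lambda_1 + lambda_n) minimizes the spectral
# radius of R = I - alpha*A. The eigenvalues here are illustrative choices.
import numpy as np

lams = np.array([4.0, 2.5, 1.0])               # lambda_1 >= ... >= lambda_n > 0

def rho(alpha):
    """Spectral radius of I - alpha*A when A has eigenvalues lams."""
    return np.max(np.abs(1.0 - alpha * lams))

alphas = np.linspace(1e-3, 2.0 / lams[0], 10001)  # admissible interval (0, 2/lambda_1)
best = alphas[np.argmin([rho(a) for a in alphas])]
opt = 2.0 / (lams[0] + lams[-1])               # claimed optimum, here 0.4
# the grid minimum lands next to opt, and the minimal radius equals
# (lambda_1 - lambda_n)/(lambda_1 + lambda_n), here 0.6
```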

Is everything okay?

I think it may be useful to take a step back to see exactly where the spectral radius criterion comes from.

Suppose $$x$$ is the exact solution satisfying $$Ax = b$$. If we define the error at the $$k$$-th iteration as $$e_k = x_k - x$$, then $$e_{k+1} = (I - \alpha A)e_k = Re_k.$$

So by setting $$e_0 = x_0 - x$$, the error at a given iteration $$k \in \mathbb{N}$$ simplifies to $$e_k = R^k e_0$$.

It can be shown that $$R^k \rightarrow 0$$ as $$k \rightarrow \infty$$ if and only if all eigenvalues of $$R$$ have absolute value strictly less than $$1$$, so the spectral radius criterion is necessary and sufficient for convergence for every $$e_0$$.

Perhaps the confusion is here: even if $$R^k \nrightarrow 0$$, the method can still converge for some choices of $$x_0$$. For example, for any $$R$$, $$e_0 \in \ker(R)$$ implies $$e_1 = Re_0 = 0$$, hence $$e_k \rightarrow 0$$ as $$k \rightarrow \infty$$. So to find an $$x_0$$ that makes the method diverge, the initial choice of $$x_0$$ has to be more specific.
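The kernel remark is easy to see in a numerical sketch: even with $$\rho(R) > 1$$, an initial error in $$\ker(R)$$ dies in a single step. The matrix, right-hand side, and $$\alpha$$ below are illustrative choices.

```python
# Sketch of the remark above: even when rho(R) > 1, starting with e_0 in ker(R)
# kills the error in one Richardson step. A, b, and alpha are illustrative choices.
import numpy as np

A = np.diag([2.0, -1.0])          # eigenvalues of both signs
b = np.array([2.0, -1.0])
x_exact = np.linalg.solve(A, b)
alpha = 0.5
R = np.eye(2) - alpha * A         # = diag(0, 1.5), so rho(R) = 1.5 > 1

e0 = np.array([1.0, 0.0])         # eigenvector of R for eigenvalue 0, i.e. e0 in ker(R)
x = x_exact + e0
x = x + alpha * (b - A @ x)       # one step: e_1 = R e_0 = 0
# x now coincides with the exact solution despite rho(R) > 1
```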

To get an explicit initial condition that makes the iteration diverge, start by taking an eigenpair $$(\lambda_*, v_*)$$ of $$A$$ and notice that since $$Rv_* = (I - \alpha A)v_* = v_* - \alpha(Av_*) = (1 - \alpha\lambda_*)v_*,$$ $$v_*$$ is also an eigenvector of $$R$$, with associated eigenvalue $$1 - \alpha\lambda_*$$. But, as you already found out, $$A$$ having eigenvalues of both signs implies that $$\alpha\lambda_* < 0$$, and hence $$|1 - \alpha\lambda_*| > 1$$, for some eigenpair $$(\lambda_*, v_*)$$. So by making $$e_0 = v_*$$, i.e. $$x_0 = v_* + x$$, $$\lim_{k \rightarrow \infty} e_k = \lim_{k \rightarrow \infty} R^k v_* = \lim_{k \rightarrow \infty} (1 - \alpha\lambda_*)^k v_* \neq 0,$$ and therefore divergence is guaranteed.
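The construction above can be sketched numerically: pick a matrix with mixed-sign eigenvalues, take the eigenvector whose eigenvalue has sign opposite to $$\alpha$$, and start at $$x_0 = v_* + x$$. The matrix, $$b$$, and $$\alpha$$ here are illustrative choices, not part of the original problem.

```python
# Sketch of the divergence construction: e_0 = v_*, an eigenvector of A whose
# eigenvalue lambda_* satisfies alpha*lambda_* < 0, so |1 - alpha*lambda_*| > 1.
# A, b, and alpha are illustrative choices.
import numpy as np

A = np.diag([2.0, -1.0])          # eigenvalues of both signs
b = np.array([2.0, -1.0])
x_exact = np.linalg.solve(A, b)
alpha = 0.5                       # 1 - alpha*(-1) = 1.5, magnitude > 1
v = np.array([0.0, 1.0])          # eigenvector for lambda_* = -1

x = x_exact + v                   # e_0 = v
errs = []
for _ in range(20):
    x = x + alpha * (b - A @ x)
    errs.append(np.linalg.norm(x - x_exact))
# the error grows geometrically by the factor |1 - alpha*lambda_*| = 1.5 each step
```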

Your solution to Part ii) looks good to me!

Correct answer by gabrimev on December 5, 2020
