
Prove that iterated $x_{i + 1} = x_i - \log x_i$ is sublinear.

Asked by thienle on November 12, 2020

Define a recurrence as: $$x_0 = n, \quad x_{i + 1} = x_i - \log x_i, \quad \forall i \geq 0.$$ Let $N_n$ be such that $x_{N_n} < 1$. Is it possible to show that $N_n \in o(n)$?

Some thoughts: if $x_{i + 1} = x_i - f(x_i)$ and $f$ is constant then $N_n = \frac{n}{f(1)}$ and is thus not sublinear. If $f$ is $\Omega(n)$ then $N_n$ is constant and thus trivially sublinear. Is $f(x) = \log x$ sufficient to make $N_n$ sublinear?


Edit: As pointed out in the comments, the criterion $x_{N_n} < 1$ can never be met for this question, since $x - \log x \geq 1$ for all $x > 0$. The correct question should be

Let $N_n$ be the smallest integer such that $x_{N_n} < 10$. Is it possible to show that $N_n \in o(n)$?
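For intuition only (not a proof), here is a minimal numerical sketch, assuming the natural logarithm and the corrected cutoff of $10$: it iterates the recurrence from $x_0 = n$, counts the steps $N_n$, and shows the ratio $N_n / n$ shrinking as $n$ grows.

```python
# Minimal sketch: iterate x_{i+1} = x_i - log(x_i) from x_0 = n and count
# how many steps N_n are needed to drop below 10 (the corrected cutoff).
import math

def steps_to_below_10(n: float) -> int:
    x, steps = float(n), 0
    while x >= 10:
        x -= math.log(x)
        steps += 1
    return steps

for n in (10**3, 10**4, 10**5, 10**6):
    N = steps_to_below_10(n)
    print(f"n = {n:>7}, N_n = {N:>6}, N_n / n = {N / n:.4f}")
```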

3 Answers

The other answers address the speed of convergence to $1$, which I think is not your question.

I want to reverse the problem. Write $f(x) = x-\log(x)$. Let $M>1$ be the threshold from the edited question (so $M = 10$) and let $(u_n)$ be the sequence defined by:

$$u_0 = M \text{ and } u_{n+1} = g(u_n),$$

where $g : [1, +\infty) \to [1, +\infty)$ is the inverse bijection to $f$. Then $x_i = g(x_{i+1})$, so that $n = x_0 = g^{(N_n)} (x_{N_n}) < g^{(N_n)} (M) = u_{N_n}$ and $n = x_0 = g^{(N_n-1)} (x_{N_n-1}) \geq g^{(N_n-1)} (M) = u_{N_n-1}$. Note also that

$$\lim_{n \to + \infty} \frac{f(n)}{n} = 1, \text{ so } \lim_{n \to + \infty} \frac{g(n)}{n} = \lim_{n \to + \infty} \frac{g(n)}{f(g(n))} = 1.$$

Since $u$ is increasing, we get that $N_n = \inf \{k \geq 0 : u_k > n\}$. This is convenient, since we have translated the initial problem to the study of the growth rate of a single sequence.

For $x \geq M$, we have $f(x) = x-\log(x) \leq x-\log(M)$, so that $x \leq g(x-\log(M))$, whence $g(x) \geq x+\log(M)$ for all $x \geq M - \log(M)$, and in particular for all $x \geq M$. By induction, $u_n \geq M + n \log(M)$, so that $\lim_{n \to + \infty} u_n = +\infty$.

Now, let me prove that $u_n$ grows super-linearly. Let $L>0$. Let $n_0$ be such that $u_n \geq 10^L$ for all $n \geq n_0$. By the same argument as above, $g(x) \geq x+L$ for all $x \geq 10^L$. Hence, for all $n \geq n_0$,

$$u_n = u_{n_0} + (u_n-u_{n_0}) \geq u_{n_0} + (n-n_0)L.$$

It follows that $\liminf_{n \to +\infty} u_n/n \geq L$. Since this holds for all $L> 0$, we finally get

$$\lim_{n \to + \infty} \frac{u_n}{n} = +\infty.$$

Specializing to the subsequence $(N_n)$, which also converges to $+\infty$,

$$\lim_{n \to + \infty} \frac{u_{N_n}}{N_n} = +\infty.$$

But $n < u_{N_n} = g(u_{N_n-1}) \leq g(n)$, since $u_{N_n-1} \leq n$ and $g$ is increasing, so

$$\lim_{n \to + \infty} \frac{g(n)}{N_n} = +\infty.$$

Finally, since $g(n) \sim n$, we get

$$\lim_{n \to + \infty} \frac{N_n}{n} = 0,$$

that is, $N_n = o(n)$.

The reasoning above can be adapted to many functions instead of $\log$, although many details have to be checked. I think that the result should hold for the recursion $x_{i+1} = x_i-h(x_i)$, where $\inf h>0$ and $\lim_{x \to +\infty} h(x) = +\infty$ as well as $\lim_{x \to +\infty} h(x)/x = 0$, but that is not completely trivial.
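As a purely numerical illustration of the key step (the ratio $u_k/k$ growing without bound), here is a sketch that inverts $f$ with Newton's method and iterates $u_{k+1} = g(u_k)$ from $u_0 = M = 10$; the ratio increases slowly, roughly like $\log u_k$.

```python
# Sketch: compute g = f^{-1} on [1, ∞) by Newton's method and iterate
# u_{k+1} = g(u_k) from u_0 = M = 10; the ratio u_k / k keeps growing.
import math

def g(y: float) -> float:
    """Solve x - log(x) = y for x >= 1 (f is increasing there)."""
    x = y + math.log(y)                  # initial guess, since g(y) ≈ y + log(y)
    for _ in range(100):                 # Newton on f(x) - y, with f'(x) = 1 - 1/x
        step = (x - math.log(x) - y) / (1.0 - 1.0 / x)
        x -= step
        if abs(step) < 1e-12 * x:
            break
    return x

u, checkpoints = 10.0, {10, 100, 1_000, 10_000, 100_000}
for k in range(1, 100_001):
    u = g(u)
    if k in checkpoints:
        print(f"k = {k:>6}, u_k = {u:14.1f}, u_k / k = {u / k:6.2f}")
```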

Correct answer by D. Thomine on November 12, 2020

It is interesting to notice that this looks like a Newton iterative scheme for solving the equation $f(x)=a$.

Since we do not know what $f(x)$ is, writing $$x_{n+1}=x_n-\frac{f(x_n)-a}{f'(x_n)}$$ we have $$\frac{f(x)-a}{f'(x)}=\log(x)\implies \frac{f'(x)}{f(x)-a}=\frac 1{\log(x)}$$ $$\implies\log(f(x)-a)=\text{li}(x)+c\implies \color{blue}{f(x)=a+c\,e^{\text{li}(x)}}$$ where $\text{li}(x)$ is the logarithmic integral function.
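As a quick check of the boxed formula (a sketch using SymPy's `li`, not part of the derivation): the Newton increment $(f(x)-a)/f'(x)$ for $f(x)=a+c\,e^{\text{li}(x)}$ does simplify to $\log(x)$.

```python
# Sketch: verify symbolically that (f(x) - a) / f'(x) = log(x)
# for f(x) = a + c*exp(li(x)), so Newton's step for f(x) = a is x - log(x).
import sympy as sp

x, a, c = sp.symbols('x a c', positive=True)
f = a + c * sp.exp(sp.li(x))                  # li is SymPy's logarithmic integral
increment = sp.simplify((f - a) / sp.diff(f, x))
print(increment)                              # prints: log(x)
```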

The function $e^{\text{li}(x)}$ is, for sure, always nonnegative; it vanishes at $x=1$ but, at this point, its derivative is discontinuous (the one-sided expansions below have opposite leading signs).

Around $x=1^-$, we have $$e^{\text{li}(x)}=-e^{\gamma } (x-1)-\frac{1}{2} e^{\gamma } (x-1)^2+O\left((x-1)^3\right)$$ while around $x=1^+$, $$e^{\text{li}(x)}=e^{\gamma } (x-1)+\frac{1}{2} e^{\gamma } (x-1)^2+O\left((x-1)^3\right)$$

Since it is Newton, the process is quadratic.

For illustration purposes, trying for $a=123456$, $c=\pi$ and using $x_0=30$, we have the following iterates $$\left( \begin{array}{cc} n & x_n \\ 0 & 30.000000000000000000000000000000000000000000000000 \\ 1 & \color{red}{2}6.894152524358982809170572349163498313534833470925 \\ 2 & \color{red}{2}4.325223169265492626304969903408926811220213329449 \\ 3 & \color{red}{22.}681722534270713557158529795786301254378897174602 \\ 4 & \color{red}{22.}108470739953112533158686496921157728291987229732 \\ 5 & \color{red}{22.051}702027826858474857702973272665751388334724225 \\ 6 & \color{red}{22.0511991}43990547095115344666363810856986277372042 \\ 7 & \color{red}{22.051199104963862}951102891406188800145947352942244 \\ 8 & \color{red}{22.0511991049638627160819846994044}13283687608959958 \\ 9 & \color{red}{22.051199104963862716081984699404404760615124872753} \end{array} \right)$$ which is typical of a second order method.
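The table can be reproduced with mpmath (a sketch of one reading of the illustration, assuming the iterates come from Newton's method applied to $c\,e^{\text{li}(x)} = a$, whose step is $x - \log(x)\left(1 - \frac{a}{c\,e^{\text{li}(x)}}\right)$):

```python
# Sketch (one reading of the illustration): Newton's method for c*exp(li(x)) = a
# with a = 123456, c = pi and x_0 = 30, at 50-digit precision.
from mpmath import mp, mpf, pi, li, exp, log

mp.dps = 50
a, c = mpf(123456), +pi
x = mpf(30)
for n in range(10):
    print(n, x)
    # F(x) = c*exp(li(x)) - a has F'(x) = c*exp(li(x)) / log(x)
    x -= log(x) * (1 - a / (c * exp(li(x))))
```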

Answered by Claude Leibovici on November 12, 2020

This is one approach using power series. Define sequence $$ x_i := 1+y_i, \tag{1}$$ and function $$ g(x) := x - \log(1+x). \tag{2}$$ Using the recursion we have $$ y_{i+1} = g(y_i). \tag{3}$$ From previous experience with these iterations, such as Newton's method, we expect that $$ f(x^2) = g(f(x)) \tag{4}$$ for some function expressed as a power series $$ f(x) := a_1\,x + a_2\,x^2 + a_3\,x^3 + \dots \tag{5}$$ where we solve for the "undetermined coefficients" $\,a_i\,$ one at a time using equation $(4).$ The result of the computations is that we must have $$ f(x) := 2x + \tfrac{4}{3}x^2 + \tfrac{8}{9}x^3 + \tfrac{112}{135}x^4 + \tfrac{76}{135}x^5 + \dots. \tag{6}$$ Thus, $$ y_0 = n-1 = f(t_0), \tag{7}$$ for some $\,t_0\,$, and $$ y_n = f(t_0^{\,2^n}) \approx 2t_0^{\,2^n} \tag{8}$$ which implies $$ x_n \approx 1+2t_0^{\,2^n}. \tag{9}$$ The convergence of $\,x_n\,$ to $1$ is called "quadratic" or of "second order".
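The undetermined-coefficients computation behind $(6)$ can be sketched with SymPy (the truncation order and variable names here are mine): expand both sides of $(4)$ with $g(y)=y-\log(1+y)$ as power series and match coefficients one power of $x$ at a time.

```python
# Sketch: solve for the first coefficients a_1..a_5 of f(x) = a_1 x + a_2 x^2 + ...
# satisfying f(x^2) = g(f(x)) with g(y) = y - log(1 + y), order by order.
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a1:6')                         # a1, ..., a5
f = sum(a[k] * x**(k + 1) for k in range(5))

lhs = f.subs(x, x**2)                          # f(x^2)
rhs = f - sp.log(1 + f)                        # g(f(x))
poly = sp.expand(sp.series(lhs - rhs, x, 0, 7).removeO())

sol = {}
for p in range(2, 7):                          # the x^p coefficient determines a_{p-1}
    eq = poly.coeff(x, p).subs(sol)
    roots = sp.solve(eq, a[p - 2])
    sol[a[p - 2]] = next(r for r in roots if r != 0)

print(sol)  # expected: {a1: 2, a2: 4/3, a3: 8/9, a4: 112/135, a5: 76/135}
```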

Answered by Somos on November 12, 2020
