
On the functional square root of $x^2+1$

Mathematics Asked by user1551 on January 25, 2021

There are some math quizzes like:

find a function $\phi:\mathbb{R}\rightarrow\mathbb{R}$
such that $\phi(\phi(x)) = f(x) \equiv x^2 + 1.$

If such a $\phi$ exists (it does in this example), $\phi$ can be viewed as a “square root” of $f$ in the sense of function composition, because $\phi\circ\phi = f$. Is there a general theory of the mathematical properties of this kind of square root? (For instance, for which $f$ does a real-analytic $\phi$ exist?)

9 Answers

Look also at this answer:

https://mathoverflow.net/questions/17605/how-to-solve-ffx-cosx/44727#44727

In short, the analytic solution is

$$f^{[1/2]}(x)=\phi(x)=\sum_{m=0}^{\infty} \binom{1/2}{m} \sum_{k=0}^m\binom{m}{k}(-1)^{m-k}f^{[k]}(x)$$

$$f^{[1/2]}(x)=\lim_{n\to\infty}\binom{1/2}{n}\sum_{k=0}^n\frac{1/2-n}{1/2-k}\binom{n}{k}(-1)^{n-k}f^{[k]}(x)$$

$$f^{[1/2]}(x)=\lim_{n\to\infty}\frac{\sum_{k=0}^{n} \frac{(-1)^k f^{[k]}(x)}{(1/2-k)\,k!\,(n-k)!}}{\sum_{k=0}^{n} \frac{(-1)^k}{(1/2-k)\,k!\,(n-k)!}}$$

In the same way you can find not only the iterative square root but an iterative root of any order.
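As a numerical illustration (my own Python sketch, not part of the answer), the $n$-th approximant in the last formula is just a ratio of two finite sums. Note that, as Gottfried Helms's data answer further down observes, these raw partial sums need not converge for $f(x)=x^2+1$:

```python
from math import factorial

def f_iter(x, h):
    # h-fold iteration of f(x) = x^2 + 1
    for _ in range(h):
        x = x * x + 1
    return x

def half_iterate_approx(x, n):
    # n-th approximant of f^[1/2](x): the ratio of the two sums in the last formula
    num = sum((-1) ** k * f_iter(x, k) / (factorial(k) * factorial(n - k) * (0.5 - k))
              for k in range(n + 1))
    den = sum((-1) ** k / (factorial(k) * factorial(n - k) * (0.5 - k))
              for k in range(n + 1))
    return num / den
```

For $x=0.5$ the $n=4$ approximant is about $-0.5177$ and the values blow up for larger $n$, matching the table in Gottfried Helms's answer below.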

Correct answer by Anixx on January 25, 2021

Using the same approach as in this answer, it is possible to produce the functional square root by using:

$$\phi_0(x)=|x|^{\sqrt2}$$

$$\phi_{n+1}(x)=f^{-1}(\phi_n(f(x)))=\sqrt{\phi_n(x^2+1)-1}$$

which converges uniformly to a function $\phi$ satisfying $\phi(\phi(x))=f(x)$, which can be seen by noting that

$$\phi_n(\phi_n(x))=f^{-n}(|f^n(f^{-n}(|f^n(x)|^{\sqrt2}))|^{\sqrt2})=f^{-n}(f^n(x)^2)$$

is nearly equal to

$$\phi_n(\phi_n(x))\simeq f^{-n}(f^n(x)^2+1)=f(x)$$

with the error decreasing with each application of $f^{-n}$, since $|f^{-1}(a+\epsilon)-f^{-1}(a)|\simeq\epsilon/\sqrt a\ll\epsilon/2$ implies that

$$|\phi_n(\phi_n(x))-f(x)|\ll1/2^n$$

and in fact the error tends to zero much faster, since initially $a=f^n(x)^2\gg f^2(x)^{2^{n-1}}$ is insanely large.


There is the additional caveat that technically this requires us to compute iterations of $f$ since we have

$$\phi_n(x)=f^{-n}(|f^n(x)|^{\sqrt2})$$

where $f^n(x)$ is far too large to compute. We note two things at this point:

  • The convergence is at least quadratic, so one should not need to use very large $n$ anyway.

  • If one does need to use larger $n$, it is better to use an improved $\phi_0$. Expanding on what is shown in the linked question, we can use the improved$$\phi_0(x)=|x|^{\sqrt2}+\frac1{\sqrt2}|x|^{\sqrt2-2}-\frac12|x|^{-\sqrt2}$$to get faster convergence. Note that we only care about the behavior as $x\to\infty$, since $f^n(x)$ is large.
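The formula $\phi_n(x)=f^{-n}(|f^n(x)|^{\sqrt2})$ can be transcribed directly into a short Python sketch (my own illustration; the names are mine, and note that $f^n(x)$ overflows double precision for $n$ much beyond 8):

```python
import math

def phi(x, n=6):
    # phi_n(x) = f^{-n}(|f^n(x)|^sqrt(2)) with f(x) = x^2 + 1 and f^{-1}(y) = sqrt(y - 1)
    y = x
    for _ in range(n):            # forward iteration: y = f^n(x)
        y = y * y + 1.0
    y = abs(y) ** math.sqrt(2.0)  # apply the seed phi_0(x) = |x|^sqrt(2)
    for _ in range(n):            # backward iteration: apply f^{-1} n times
        y = math.sqrt(y - 1.0)
    return y
```

Already for $n=6$, $\phi(\phi(x))$ agrees with $x^2+1$ to several digits for moderate real $x$.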

Answered by Simply Beautiful Art on January 25, 2021

[This is a second comment on Sheldon's answer (comment 2)]

There is a (relatively) simple quick-and-dirty approximation for finding the power series of the function $h(x)$ with $h(h(x))=f(x)$. It exploits the method of Carleman matrices (in this case the Carleman matrix $F$ of $f(x)$) and uses diagonalization to compute a square root $H$ of that Carleman matrix; $H$ contains a polynomial approximation of $h(x)$'s (truncated) power series. Here I show the approximation $h_n(x)$, by which I mean a polynomial of order $n$ intended to approximate $h(x) = \lim_{n \to \infty} h_n(x)$, computed from the matrix $H_n$ of size n x n. The computation in Pari/GP is straightforward; we only need a user-defined procedure to construct the Carleman matrix of size n:

n=8
Fn=make_carleman_matrix(1+x^2,n)     \\ the function "make_carleman_matrix..."
                                     \\ must be defined by the user
tmpM=mateigen(Fn);
   tmpW=tmpM^-1;
   tmpD=diag(tmpW * Fn * tmpM);      \\ diagonalization

Hn=tmpM*matdiagonal(sqrt(tmpD))*tmpW \\ compute the Carleman matrix for hn(x)

hn(x) = sum(k=0,n-1,x^k*Hn[1+k,2])   \\ used to evaluate hn(x)
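For readers without Pari/GP, here is a rough NumPy analogue of this procedure (my own sketch, not from the answer; `carleman` is a stand-in for the user-defined `make_carleman_matrix`, using the convention $M_{i,j}=[x^j]\,f(x)^i$, the transpose of the one the Pari code indexes):

```python
import numpy as np

def carleman(n):
    # M[i, j] = coefficient of x^j in f(x)^i for f(x) = 1 + x^2, truncated to n x n
    M = np.zeros((n, n))
    row = np.zeros(n); row[0] = 1.0          # f^0 = 1
    fc = np.zeros(n); fc[0] = 1.0
    if n > 2:
        fc[2] = 1.0                          # f(x) = 1 + x^2
    for i in range(n):
        M[i] = row
        row = np.convolve(row, fc)[:n]       # next power of f, truncated
    return M

n = 8
M = carleman(n)
w, V = np.linalg.eig(M)                      # diagonalization
H = (V @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(V)).real
h_coeffs = H[1]                              # row 1 holds the coefficients of h_n(x)
```

With $n=8$ the constant coefficient of $h_n$ comes out near $0.6517$, consistent with the $n=8$ column of the deviation table below (Sheldon's $0.642095$ plus the listed deviation $0.0096$).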

This simple procedure gives a nice approximation of a power series for $h(x)$; surprisingly, it is close to Sheldon's solution and seems to approximate it with increasing size n. Here is a comparison of Sheldon's coefficients (left column, $h(x)$) with the deviations of the coefficients of $h_n(x)$ computed for various n:

 sheldon's | deviations of coefficients for hn(x) from h(x)
   h(x)    | n=8          n=12        n=16         n=24        n=32          n=64           n=128
 ----------|------------------------------------------------------------------------------------------------
   0.642095  0.00960631  0.00140309  0.000215841  0.000330444  0.000108862  0.0000300372  0.000000891427
          0           0           0            0            0            0             0               0
    1.00690   0.0943527   0.0316415   0.00835945   0.00257755   0.00246343  0.0000920079    0.0000483671
          0           0           0            0            0            0             0               0
  -0.514130    0.327877    0.183061    0.0898295   0.00821450    0.0101783    0.00181751     0.000430235
          0           0           0            0            0            0             0               0
   0.733303    0.697906    0.557529     0.387648     0.126456   0.00578646     0.0176690      0.00171061
          0           0           0            0            0            0             0               0
   -1.32058           .     1.26037      1.09312     0.606949     0.208562     0.0864567      0.00217006
          0           .           0            0            0            0             0               0
    2.63884           .     2.62965      2.53703      1.94255      1.08457      0.285742       0.0165412
          0           .           0            0            0            0             0               0
   -5.60443           .           .      5.57735      5.07301      3.74784      0.644615        0.149314
          0           .           .            0            0            0             0               0
    12.4064           .           .      12.4032      12.1002      10.5658      0.635868        0.784870
          0           .           .            0            0            0             0               0
   -28.3152           .           .            .      28.1871      26.8259       2.60645         3.30605
          0           .           .            .            0            0             0               0
    66.1664           .           .            .      66.1297      65.1969       19.2922         12.1253
          0           .           .            .            0            0             0               0
   -157.551           .           .            .      157.544      157.052       79.4798         40.0001

Remark: For me the question arises whether this procedure indeed approximates the given Abel-/Böttcher-method solution, and whether/how this can be proven (I made similar observations with other functions $f(x)$). I think this would give more insight into the workings of the Abel-/Böttcher method and would be a big step toward making its application more intuitive.

[Update]
On Sheldon's request to check which of his solutions (Kneser-type or Böttcher/Abel-type) this best approximates:
It is difficult to predict whether the diagonalization approach converges to the Abel/Böttcher or to the Kneser solution. I calculated the truncated power series to 256 terms (via the 256 x 256 Carleman matrix), with these results (the coefficient at $x^0$):

  0.64212 4541629 Diagonalization 64 x 64
  0.64209 5395818 Diagonalization 128 x 128
  -----------------------------------------
  0.64209 4752506    Kneser        // by Sheldon's answer
  0.64209 4504390    Böttcher/Abel // by Sheldon's answer
  -----------------------------------------
  0.64209 4269150 Diagonalization 256 x 256
  0.64198 5642772 Diagonalization 32 x 32
  0.64187 8663777 Diagonalization 16 x 16

Surprisingly, the Kneser-like solution shown in Sheldon's answer has nonzero entries at the coefficients of odd index, so maybe that solution could itself have an even better numerical approximation. From my exercises with the square root of the $\exp()$ function I would predict that the diagonalization comes out to the Kneser solution (if the Kneser and Böttcher solutions really are different), but who knows... Unfortunately, the diagonalization of a 256 x 256 Carleman matrix needs high float precision (400 decimal digits seemed insufficient, so I used 800), and the computation took an hour and a half... So this procedure is surely only of theoretical interest, namely for deciding whether the Kneser or the Böttcher/Abel solution is asymptotically equivalent to the diagonalization of an infinite Carleman matrix.


Here I have added more data on the approximation to the Kneser solution, up to matrix size 160x160. It suggests that increasing the size makes the coefficients approach the Kneser coefficients, undershooting and overshooting with vanishing amplitude. See the data for the first coefficient:

      0.64209 4752506    Kneser's coeff_0        // by Sheldon's answer

 size  coeff_0         coeff_0- (Kneser)coeff_0  
 ------------------------------------------------
   2  1.000000000      0.3579052475
   4  0.7071067812     0.06501202868
   6  0.6655053688     0.02341061625
   8  0.6517008106     0.009606058052
  10  0.6460214964     0.003926743899
  12  0.6434975953     0.001402842791
  14  0.6423636910     0.0002689385351
----------------------------------------     Kneser
  16  0.6418786638    -0.0002160887285
  18  0.6417024683    -0.0003922841726
  20  0.6416715448    -0.0004232076833   (min value)
  22  0.6417053724    -0.0003893800905
  24  0.6417640609    -0.0003306916510
  26  0.6418281104    -0.0002666421154
  28  0.6418884320    -0.0002063205003
  30  0.6419412855    -0.0001534670012
  32  0.6419856428    -0.0001091097341
  34  0.6420217903   -0.00007296222387
  36  0.6420505888   -0.00004416370000
  38  0.6420730891   -0.00002166336753
  40  0.6420903405  -0.000004411995475
----------------------------------------     Kneser
  42  0.6421033021   0.000008549576691
  44  0.6421128091    0.00001805654470
  46  0.6421195672    0.00002481465667
  48  0.6421241611    0.00002940859907
  50  0.6421270686    0.00003231609938
  52  0.6421286759    0.00003392343127
  54  0.6421292928    0.00003454029388   (max value)
  56  0.6421291657    0.00003441322779
  58  0.6421284898    0.00003373726814
  60  0.6421274183    0.00003266579638
  62  0.6421260712    0.00003131871087
  64  0.6421245416    0.00002978912317
  66  0.6421229013    0.00002814880519
  68  0.6421212051    0.00002645259374
  70  0.6421194944    0.00002474193326
  72  0.6421178002    0.00002304771411
  74  0.6421161450    0.00002139254073
  76  0.6421145450    0.00001979253975
  78  0.6421130113    0.00001825879776
  80  0.6421115510    0.00001679850187
  82  0.6421101683    0.00001541584308
  84  0.6421088652    0.00001411273041
  86  0.6421076419    0.00001288935398
  88  0.6421064971    0.00001174462783
  90  0.6421054290    0.00001067653686
  92  0.6421044349    0.00000968240789
  94  0.6421035116    0.00000875912052
  96  0.6421026558    0.00000790327002
  98  0.6421018638    0.00000711129235
 100  0.6421011321    0.00000637955937
 102  0.6421004570    0.00000570445034
 104  0.6420998349    0.00000508240494
 106  0.6420992625    0.00000450996172
 108  0.6420987363    0.00000398378508
 110  0.6420982532    0.00000350068351
 112  0.6420978101    0.00000305762094
 114  0.6420974042    0.00000265172282
 116  0.6420970328    0.00000228027821
 118  0.6420966932    0.00000194073889
 120  0.6420963832    0.00000163071623
 122  0.6420961005    0.00000134797648
 124  0.6420958429    0.00000109043489
 126  0.6420956087    0.00000085614916
 128  0.6420953958    0.00000064331232
 130  0.6420952028    0.0000004502455258
 132  0.6420950279    0.0000002753906278
 134  0.6420948698    0.0000001173029359
----------------------------------------     Kneser
 136  0.6420947272   -0.00000002535594350
 138  0.6420945987   -0.0000001538250132
 140  0.6420944833   -0.0000002692505285
 142  0.6420943798   -0.0000003726923395
 144  0.6420942874   -0.0000004651299134
 146  0.6420942050   -0.0000005474680276
 148  0.6420941320   -0.0000006205421322
 150  0.6420940674   -0.0000006851233873
 152  0.6420940106   -0.0000007419233790
 154  0.6420939609   -0.0000007915985276
 156  0.6420939178   -0.0000008347541983
 158  0.6420938806   -0.0000008719485302
 160  0.6420938488   -0.0000009036959966

It would be difficult to extend the table; the computation for sizes 128 to 160 alone needed a couple of hours...

Answered by Gottfried Helms on January 25, 2021

A solution can be generated based on the Abel function $\alpha(z)$ of $f(z)=z^2+1$. If we have the Abel function, then the half iterate of $z$ can be generated as $h(z)=\alpha^{-1}(\alpha(z)+0.5)$. Though it is not the only solution (nor, in my opinion, the best), the most accessible Abel function solution is based on a Boettcher function for the fixed point at infinity; I wrote a program last year that does just that, from which the results I posted earlier were quickly generated. It is easiest to work with the inverse Boettcher function, from which the inverse Abel function can easily be generated. I will use the symbol $\beta$ for the Boettcher function. The problem is that $f(x)$ actually has a super-attracting fixed point at infinity, not a fixed point at zero. So we work with the reciprocal of the $\beta^{-1}$ function. We define the formal $\beta^{-1}$ function via the following relationship.

$$\beta^{-1}(z^2)=\frac{1}{f(1/\beta^{-1}(z))}$$

First generate the formal power series for the reciprocal function, 1/f(1/z), which allows one to generate the formal $beta^{-1}$ series.

$$fi(z)=\frac{1}{f(1/z)} = z^2 - z^4 + z^6 - z^8 + z^{10} - z^{12} \cdots$$

$$\beta^{-1}(z^2)=fi(\beta^{-1}(z))$$

Now all you need is the formal power series for $\beta^{-1}(z)$, along with the equation for the inverse Abel function in terms of the Boettcher function, and the equation generating the half iterate in terms of the Abel function, $h(z)=\alpha^{-1}(\alpha(z)+0.5)$.

$$\alpha^{-1}(z)=\frac{1}{\beta^{-1}(\exp(-2^{z}))}$$

$\beta^{-1}(z)=$

z +
z^ 3*  1/2 +
z^ 5*  5/8 +
z^ 7*  11/16 +
z^ 9*  131/128 +
z^11*  335/256 +
z^13*  1921/1024 +
z^15*  5203/2048 +
z^17*  122531/32768 +
z^19*  342491/65536 +
z^21*  1992139/262144 +
z^23*  5666653/524288 +
z^25*  66211495/4194304 +
z^27*  190532251/8388608 +
z^29*  1112640185/33554432 +
z^31*  3225372323/67108864 +
z^33*  151170463523/2147483648 +
z^35*  440562661907/4294967296 +
z^37*  2583809849479/17179869184 +
z^39*  7558966177753/34359738368 +
z^41*  88836407031661/274877906944...
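The relationship $\beta^{-1}(z^2)=fi(\beta^{-1}(z))$ determines these coefficients one by one, since each new odd coefficient enters the matching order linearly with factor 2. The following Python sketch (my own illustration, not Sheldon's program) reproduces the first few listed coefficients in exact rational arithmetic:

```python
from fractions import Fraction

D = 10                                   # truncate all series at degree D

def mul(p, q):
    # product of two coefficient lists, truncated at degree D
    r = [Fraction(0)] * (D + 1)
    for a, pa in enumerate(p):
        if pa:
            for b, qb in enumerate(q):
                if qb and a + b <= D:
                    r[a + b] += pa * qb
    return r

def fi_of(B):
    # fi(B) = B^2 - B^4 + B^6 - ... , since fi(z) = z^2/(1 + z^2)
    B2 = mul(B, B)
    out = [Fraction(0)] * (D + 1)
    term, sign = B2, 1
    while any(term):
        for i, t in enumerate(term):
            out[i] += sign * t
        term, sign = mul(term, B2), -sign
    return out

B = [Fraction(0)] * (D + 1)
B[1] = Fraction(1)                       # beta^{-1}(z) = z + ...
for m in range(1, 5):                    # solve for b3, b5, b7, b9 in turn
    comp = fi_of(B)
    lhs = B[m + 1]                       # coefficient of z^(2m+2) in B(z^2)
    # b_{2m+1} enters comp[2m+2] linearly with factor 2 (via 2*b1*b_{2m+1} in B^2)
    B[2 * m + 1] = (lhs - comp[2 * m + 2]) / 2
```

This yields $1/2,\ 5/8,\ 11/16,\ 131/128$ for the coefficients of $z^3, z^5, z^7, z^9$, matching the series above.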

So this yields an approximate solution for the superfunction $\alpha^{-1}(z)$, namely $\exp(2^z)$, which is the superfunction for $x^2$. This approximation is modified by the Boettcher function to become exactly $\frac{1}{\beta^{-1}(\exp(-2^z))}$. Notice that as $z$ increases, $\exp(-2^z)$ rapidly goes to zero as long as $|\Im(z)|<\frac{\pi}{2\log(2)}$, and the approximation for the superfunction becomes more and more accurate. This is the Taylor series centered so that $\alpha^{-1}(0)=2$. $\alpha^{-1}(z)=$

        2.00000000000000000000000
+x^ 1*  1.47042970479728200070736
+x^ 2*  0.762480577752927164660093
+x^ 3*  0.424267970164226197579471
+x^ 4*  0.195424007045383357908720
+x^ 5*  0.0885363745236815506982063
+x^ 6*  0.0359598551892287716903761
+x^ 7*  0.0144792452984198575961554
+x^ 8*  0.00535551113121023140421654
+x^ 9*  0.00201219850895305456107215
+x^10*  0.000694227259952985754369526
+x^11*  0.000244367434796641079018478
+x^12*  0.0000769214480826208320220663
+x^13*  0.0000269925934667063689310974
+x^14*  0.00000813609797954979262652707
+x^15*  0.00000283560192079757251765790
+x^16*  0.000000705532363923839906429084
+x^17*  0.000000277796704124709172266365
+x^18*  0.0000000569382653822375560531824
+x^19*  0.0000000291321329124127631158831
+x^20*  0.00000000199960494407016834679507
+x^21*  0.00000000353966190200798175752179
+x^22* -1.84359576880995872838519 E-10
+x^23*  0.000000000489582426965793452585949
+x^24* -1.69340677715894785103962 E-10
+x^25*  9.49659586691303353973779 E-11
+x^26* -2.92631386240628006146382 E-11
+x^27*  1.88357410017244782298422 E-11
+x^28* -9.69806059398720144574851 E-12
+x^29*  4.16913890865704504495135 E-12
+x^30* -1.73667913416272696484187 E-12
+x^31*  9.23380420463300741831335 E-13
+x^32* -4.74750042625944938044382 E-13
+x^33*  2.15998350305014866568442 E-13
+x^34* -9.49184477375019128289258 E-14
+x^35*  4.68362987742936825161002 E-14
+x^36* -2.44077226697947882519346 E-14
+x^37*  1.15019105307459064415620 E-14
+x^38* -5.06531638741476544065356 E-15
+x^39*  2.48236503924756119664440 E-15

The Abel function and its inverse, the superfunction $\alpha^{-1}(z)$, combine to yield a valid solution for the half iterate, using numerical methods to get a Taylor series for $h(z)=\alpha^{-1}(\alpha(z)+0.5)$. I prefer the Cauchy integral for generating each coefficient of the Taylor series of the half iterate. So below this paragraph is the half iterate, generated by putting iterations of $x^2+1$ into correspondence with iterations of $x^2$ via the Boettcher super-attracting fixed point at infinity/zero. My main reason for preferring the Kneser-type solution is that the superfunction generated from the Kneser-type solution has no singularities in the upper half of the complex plane, whereas the Boettcher function solution is not nearly so well behaved, with an infinite number of singularities as $|\Im(z)|$ approaches $\frac{\pi}{2\log(2)}$. But the Kneser solution requires a Riemann mapping, so it is not as accessible as this Boettcher function solution. At the real axis, both functions are very close in value to each other. I haven't studied the half iterates of either in much detail, although the nearest singularity defines the radius of convergence, $\sqrt{1-a_0}\approx 0.598252i$, as noted in my comments above. Here is the half iterate $h(z)$ for $f(z)=z^2+1$. Notice that the radius of convergence is a little bit too small, so that $h(h(z))$ doesn't converge to $z^2+1$.

         0.642094504390828381495363 +
 x^ 2 *  1.00690224867415593906994 +
 x^ 4 * -0.514130215253435435599237 +
 x^ 6 *  0.733302763753911249332061 +
 x^ 8 * -1.32058357980755641903265 +
 x^10 *  2.63883845336820960564369 +
 x^12 * -5.60443025341316030005301 +
 x^14 * 12.4064479200198191890023 +
 x^16 *-28.3152137792182421744708 +
 x^18 * 66.1663983446023842196175 +
 x^20 *-157.550867142011717456622 

update for Gottfried, June 18 2016: A long time ago I generated a solution (actually two of them) for this problem. One solution, which I called Kneser, has $a_0=0.64209475250660937$; the other, which I called Boettcher, has $a_0=0.64209450439082838$. Which one is your Carleman matrix solution approaching? Also, I wrote a Pari/GP program called "fatou.gp", which is posted on the tetration forum. fatou.gp will also solve the Kneser-type Abel function for $f(x)=x^2+x+k$, using "x2mode". For the problem at hand, we solve $f2(y)=y^2+y+\frac{3}{4}$ where $y=x+0.5$ and $f2(y)=f(x)+0.5$. There is even a half(z) function included in fatou.gp!

This is how to generate the Kneser style half iterate from the Kneser Abel function, using the fatou.gp program from http://math.eretrandre.org/tetrationforum/showthread.php?tid=1017

\r c:\pari\fatou.gp
\p 38
x2mode=1;  /* instead of exp(z) mode; we want Abel func for x^2+x+k */ 
/* we generate the abel function for f(x)=x^2+x+0.75 using loop(0.75) */
/* this is congruent to y^2+1; with y=x+0.5 */
loop(0.75);
gfunc(z) = half(z-0.5)+0.5; /* gfunc(z) is the  half iterate of x^2+1 */
gt=gtaylor(0,0.49);  /* numerical taylor series; gfunc with radius 0.49 */
/* as expected; gt is real valued, with odd coefficients approx zero */
gt=real(gt);
default(format,"g0.32");
prtpoly(gt,66,"h_kneser");

Then here is the Kneser half iterate Taylor series, to ~32 digits of precision:
{h_kneser=
        0.64209475250660937169807343264554
+x^ 1*  9.4959152494317554657300465038473 E-40
+x^ 2*  1.0069049372250147459418357475677
+x^ 3* -2.7893041139830150795277397050872 E-38
+x^ 4* -0.51414093145968493887998476978863
+x^ 5* -9.4065127148997900797558897405760 E-38
+x^ 6*  0.73328323429757195778275584056187
+x^ 7* -2.9789327383245773701147783939386 E-37
+x^ 8* -1.3205217768894860355669043639891
+x^ 9*  1.1504719218616280508104457256978 E-36
+x^10*  2.6388870612747665153119383382425
+x^11* -8.6750543071648979714305239390723 E-36
+x^12* -5.6046428545060162663689874766179
+x^13* -3.9783236573567800199609938142484 E-35
+x^14*  12.406496231371326359887467025684
+x^15*  2.7923588662903976634222370215717 E-34
+x^16* -28.314917697286719802821732804709
+x^17*  9.7972423062524314712844013881368 E-34
+x^18*  66.166408465201591741118650450628
+x^19* -2.8365489886933446260749287479578 E-33
+x^20* -157.55217664648253808905923394497
+x^21* -2.1430103842784989026334184508531 E-32
+x^22*  380.93523396262337058551633660482
+x^23* -8.7293263408985395750496714203857 E-32
+x^24* -932.76767443968756177137609589762
+x^25*  1.6816835112645037007423868050918 E-31
+x^26*  2308.4157371188211297925761645335
+x^27*  8.2234328517238662468309796247768 E-31
+x^28* -5764.8421981430433965949103632578
+x^29*  2.4093126450201398761059322051011 E-30
+x^30*  14509.393268197651356881586610790
+x^31* -1.7413019827079961810867692642134 E-29
+x^32* -36767.178669356660723605766040398
+x^33*  3.4623054302244676324137837165281 E-29
+x^34*  93726.265227237965064084189474846
+x^35*  3.2168281644000888751657332322273 E-28
+x^36* -240189.89054329721375053127430176
+x^37* -6.1480933845922075349749806089683 E-28
+x^38*  618431.19445394531359458177697608
+x^39* -2.9602758883184915670425661849608 E-27
+x^40* -1599044.7134053445807953778221247
+x^41* -2.0836594137219496738362037107707 E-26
+x^42*  4150330.0675468053399140636848122
+x^43* -2.5393008834476604824315870201306 E-25
+x^44* -10809432.776664940956645136077695
+x^45* -2.2135842572458930045915219334950 E-25
+x^46*  28241485.455764549219877509263920
+x^47*  1.4074100073819928427870717162596 E-24
+x^48* -73997971.439466935775997058854652
+x^49*  1.3207523303925904236617906201879 E-23
+x^50*  194400498.65150879653132029747726
+x^51*  8.0812941782070004671547698612478 E-23
+x^52* -511952826.91052910409240963653117
+x^53* -1.1165715209657657638798259431330 E-22
+x^54*  1351255631.5192760755702232265392
+x^55* -4.0473601817616568856394255235873 E-22
+x^56* -3573953054.3808198579300447542568
+x^57*  1.5349896293581766423555016353417 E-21
+x^58*  9471095369.6650817560501604193530
+x^59*  3.4569377580944032720520618019952 E-20
+x^60* -25144002192.142016377297723729324
+x^61*  1.8203138811156251197110938358308 E-20
+x^62*  66865157442.793595594781808850147
+x^63* -1.0331785877434983025133994529051 E-18
+x^64* -178094282120.25171903288242546023
+x^65* -4.8708203213790421376786960480469 E-20
}

Answered by Sheldon L on January 25, 2021

[This is a comment on Sheldon's answer (comment 1)]

This is an additional example for my last comment on @Sheldon's answer, made as an "answer" because of the number of bytes required. The range of convergence of the (first) Kneser-type "ht(x)" function in Sheldon's comment can be extended by Euler summation; this can be implemented by simply introducing suitable cofactors into the power series. Here is Pari code:

{fshel(x,eo=0)=local(evec);
          evec=ESumVec(eo,11);   \\ generates the vector of coefficients for Euler summation
          0.642094752506609371698073      *evec[1]
          +x^2* 1.00690493722501474594184 *evec[2]
          +x^4* -0.514140931459684938879986  *evec[3]
...
          +x^18* 66.1664084652015917411194  *evec[10]
          +x^20* -157.552176646482538089039  *evec[11] }

and here is the comparison based on the 11 terms only:

fshel(fshel(0,0.0),0.0)      \\ Euler-order 0 = no Euler summation
 %2156 = 0.98887772           \\ should be 1
fshel(fshel(0.0,0.0),0.45)    \\ first call Euler-order 0, second call 0.45
 %2157 = 0.99999781           \\ much better

fshel(fshel(0.1,0),0.0)       \\ Euler-order 0 = no Euler summation
 %2152 = 0.99460603           \\ should be 1.01
fshel(fshel(0.1,0.1),0.45)    \\ first call Euler-order 0.1, second call 0.45
 %2153 = 1.0099971

The best order of the Euler summation depends on the value of the x-argument; for instance, to sum the alternating series $1-2+4-8+\cdots$ we need order 2, and to sum $1-3+9-27+\cdots$ we need order 3. The procedure for computing the vector is relatively simple; I can post it here if wanted.

Answered by Gottfried Helms on January 25, 2021

(This is not an answer but a table of data, produced after I tried Anixx's first/accepted solution.)
Using Pari/GP I get for the first few approximations:

\\ this is the function to be iterated to height "h"
f(x,h) = for(k=1,h,x=x^2+1);x

 x = 0.5;
  n=4;                                              \\ assume some n
  su1=sum(k=0,n,(-1)^k * f(x,k)/k!/(n-k)!/(1/2-k)); \\ compute numerator
  su2=sum(k=0,n,(-1)^k         /k!/(n-k)!/(1/2-k)); \\ compute denominator
  print([n,su1,su2,  su1/su2]);                     \\ the last entry (ratio) is the approximation


\\ answers for n=4,5,6,7,8
 %315 = [4, -0.157781292143, 0.304761904762,                   -0.517719864845]
 %319 = [5, 5.81430361558, 0.0677248677249,                    85.8518268237]
 %323 = [6, -2903.09654248, 0.0123136123136,              -235763.191868]
 %327 = [7, 4051027927.89, 0.00189440189440,        2.13842054311 E12]
 %331 = [8, -5.82421095518 E22, 0.000252586919254, -2.30582445535 E26]

I don't see how this could be made to converge to any value...

Answered by Gottfried Helms on January 25, 2021

To get a closed-form solution (where possible) you can find a flow of $f(x)$.

Let $w(t)$ be a flow of $f(x)$.

Then to find it we have to solve a difference equation (called Abel equation):

$$w(t+1)=f(w(t))$$

In our case it's

$$w(t+1)=1+w(t)^2$$

or in difference form,

$$\Delta w + w - w^2 - 1=0$$

Unfortunately, this first-order difference equation is nonlinear in our case.

Supposing you somehow solve it, you will get $w_C(t)$, a function depending on a constant parameter $C$. Then take $C=x$ and $t=1/2$ (for the iterative square root); this will be the answer.

Answered by Anixx on January 25, 2021

Introduce a new coordinate system with a fixed point of $f$ as origin, e.g. the point $\omega:=e^{\pi i/3}$. Writing $x=\omega+\xi$ with a new independent variable $\xi$, one has $\phi(\omega+\xi)=\omega+\psi(\xi)$ for a new unknown function $\psi$ with $\psi(0)=0$. This function $\psi$ satisfies, in a neighbourhood of $\xi=0$, the functional equation $\psi(\psi(\xi))=2\omega\xi+\xi^2$. Now you can recursively determine the Taylor coefficients of $\psi$. If you are lucky, the resulting formal series actually defines an analytic function in a neighbourhood of $\xi=0$.
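A small Python sketch of this recursion (my own illustration, not part of the answer): the coefficient $c_k$ of $\psi$ enters the order-$k$ coefficient of $\psi(\psi(\xi))$ linearly with factor $c_1+c_1^k$, so each $c_k$ can be solved for in turn once $c_1=\sqrt{2\omega}$ is fixed.

```python
import cmath

N = 8                                    # compute Taylor coefficients c_1 .. c_7
omega = cmath.exp(1j * cmath.pi / 3)     # fixed point: omega^2 + 1 = omega

def compose(outer, inner):
    # Taylor coefficients of outer(inner(xi)) mod xi^N; assumes inner[0] == 0
    res = [0j] * N
    power = [0j] * N
    power[0] = 1 + 0j                    # inner^0
    for j in range(N):
        for m in range(N):
            res[m] += outer[j] * power[m]
        nxt = [0j] * N                   # power <- power * inner (truncated)
        for a in range(N):
            for b in range(1, N - a):
                nxt[a + b] += power[a] * inner[b]
        power = nxt
    return res

# psi(psi(xi)) must equal 2*omega*xi + xi^2
target = [0j] * N
target[1] = 2 * omega
target[2] = 1 + 0j

c = [0j] * N
c[1] = cmath.sqrt(2 * omega)             # leading coefficient: c1^2 = 2*omega
for k in range(2, N):
    res = compose(c, c)                  # computed with c[k] still zero
    c[k] = (target[k] - res[k]) / (c[1] + c[1] ** k)
```

The resulting $c_1, c_2, c_3, \dots$ agree with the complex coefficients Gottfried Helms reports for $g^{[1/2]}$ in his answer further down ($c_1 \approx 1.2247449 + 0.7071068i$, $c_2 \approx 0.2041242 - 0.2237969i$, ...).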

Answered by Christian Blatter on January 25, 2021

(Update: Oh sorry, I'm new here and didn't notice the much more substantial link to MathOverflow. But perhaps this is interesting for some newbie anyway...)

If the function is a polynomial (or power series) with constant term $\ne0$, this is difficult, but sometimes a meaningful solution can be given.

If the function is as above but the constant term is zero, then it is just a matter of relatively simple manipulation of the formal power series/polynomial to construct a "compositional" (or "iterative") root.

There is a very good treatment in L. Comtet, "Advanced Combinatorics", around pages 130-150 (I don't have it at hand).

Also the keywords "Bell matrix" and "Carleman matrix" are helpful: these matrices transform the problem of function composition/iteration into matrix products/matrix powers. (The matrix notation just implements the required formal power series manipulations.) This is well established for functions $f(x)= ax + bx^2 + cx^3 +\cdots$.

If the constant term does not vanish, $f(x) = K + ax + bx^2 + cx^3 +\cdots$, then things are usually much more difficult. But there is one (I think again well-established) workaround:

Rewrite $f(x)$ as some function $g(x+x_0) - x_0$ such that now $g(x)$ has no constant term and then apply the above mentioned procedures (solving for iterative square root and the like) to $g(x)$.

Example: denote the iterative root of some function $f(x)$ by $f^{[1/2]}(x)$; then solve

$$f^{[1/2]}(x) = g^{[1/2]}(x-x_0)+x_0$$

where you first must find $x_0$.
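Finding $x_0$ here means solving the fixpoint equation $x_0 = f(x_0)$, i.e. $x_0^2 - x_0 + 1 = 0$, which gives $x_0 = e^{\pm i\pi/3}$; a quick Python check (my own):

```python
import cmath

x0 = cmath.exp(1j * cmath.pi / 3)   # candidate fixpoint of f(x) = x^2 + 1
residual = abs((x0 * x0 + 1) - x0)  # should vanish up to rounding error
```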

[update 2] I have tried to generate that power series for $g^{[1/2]}(x)$, which then has complex coefficients. I got $$ g^{[1/2]}(x) \approx \sqrt{2 x_0}\, x + (0.20412415 - 0.22379688 i) x^2 + (0.050024298 + 0.048042380 i) x^3 + (-0.022112213 + 0.028383580 i) x^4 + (-0.023623808 - 0.010393981 i) x^5 + (0.00074679924 - 0.021136421 i) x^6 + O(x^7)$$ where $f^{[1/2]}(x) = g^{[1/2]}(x-x_0) + x_0 $ and the fixpoint is $x_0 = \exp(i \pi /3 ) \approx 1/2 + 0.86602540 i $ and $\sqrt{2 x_0}=(1.2247449 + 0.70710678 i) $. The radius of convergence is small; however, using Euler summation (of complex orders) I could manage to reproduce $f(0)$ to $f(1)$ by applying the half iterate twice, to about 6 to 8 correct digits. [end update 2]

For a very basic introduction you may try my small essay at

go.helms-net.de/math/tetdocs

and look for the article "continuous functional iteration".

Answered by Gottfried Helms on January 25, 2021
