
Where does the floating point error come from? (Finite difference using matrix multiplication versus shifts and adding.)

Computational Science. Asked by Willie Wong on July 18, 2021

In Julia, it appears that one picks up small error terms when computing a finite difference via matrix multiplication rather than via shifts and addition.
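
A note on reproducing this: eye was removed from Julia's Base in the 1.0 release, so on a recent Julia the session below needs a stand-in definition first (a minimal sketch using LinearAlgebra):

julia> using LinearAlgebra

julia> eye(n) = Matrix{Float64}(I, n, n)
eye (generic function with 1 method)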

julia> n = 1000
1000

julia> hessM = circshift(eye(n),-1) + circshift(eye(n),1) - 2* eye(n);

julia> function hess(x::Vector)
         return circshift(x,-1) + circshift(x,1) - 2*x
       end
hess (generic function with 1 method)

julia> x = rand(n);

julia> maximum(hessM * x - hess(x))
0.0

julia> x = rand(n)/300;

julia> maximum(hessM * x - hess(x))
8.673617379884035e-19

I understand that this is floating point error, since if I use x = rand(n) / 256 (or any power of 2, for that matter) the error goes away; but if I divide by something as simple as 3, the error crops up.

But where exactly is the floating point error coming in when running the above code? And why is it sensitive to the division?
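
(One way to see why the division matters, using an arbitrarily chosen value: dividing by a power of two only changes the exponent field of the float, while dividing by 3 rewrites the significand and forces a rounding step.)

julia> x = 0.6;

julia> significand(x) == significand(x / 256)   # exponent shift only
true

julia> significand(x) == significand(x / 3)     # significand gets rounded
false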


Edit: in fact, I have localized the difference to

julia> x = rand(n)/3;

julia> maximum(hessM * x + 2 * eye(n) * x - circshift(eye(n),-1) * x - circshift(eye(n),1)*x)
1.1102230246251565e-16

julia> x = rand(n);

julia> maximum(hessM * x + 2 * eye(n) * x - circshift(eye(n),-1) * x - circshift(eye(n),1)*x)
0.0

julia> x = rand(n) * 3;

julia> maximum(hessM * x + 2 * eye(n) * x - circshift(eye(n),-1) * x - circshift(eye(n),1)*x)
8.881784197001252e-16
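
For comparison, the combined product and the term-by-term products associate the same contributions in different orders, and combining the same three terms in two different orders can already disagree in the last bit (the values here are arbitrary):

julia> a, b, c = 0.1, 0.7, 0.4;

julia> (a + b) - 2c == a + (b - 2c)   # same terms, different association
false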

What gives?

One Answer

Edit (July 2021): it appears that the behavior will be changed as a side effect of the move of the default PRNG from Mersenne Twister to Xoshiro in the 1.7 release of Julia. See comments below.


It seems that this is tied to how Julia generates random numbers; I've opened a discussion on the Julia Language site. The current implementation of Julia's random number generator for the default range [0,1) for floats (in other words, a plain call to rand()) always produces a 0 in the least significant bit of the significand (unlike MATLAB's, for example). A side effect of this is that floating point errors are suppressed, since such errors usually come from rounding or cancellation in the lowest bit. Multiplying or dividing by a power of 2 does not change the significand in the floating point representation, but multiplying or dividing by a non-power of 2 does. So, generically, after multiplying or dividing by a non-power of 2, the least significant bit can be either 0 or 1, and floating point error occurs as one would usually expect.
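
(A quick way to check this claim; the seed is arbitrary, and the outputs shown are what one should expect from the pre-1.7 MersenneTwister default:)

julia> using Random

julia> lastbit(x::Float64) = isodd(reinterpret(UInt64, x));  # is the lowest significand bit set?

julia> xs = rand(MersenneTwister(1234), 10^6);

julia> any(lastbit, xs)        # Mersenne Twister draws are all multiples of 2^-52
false

julia> any(lastbit, xs ./ 3)   # dividing by 3 rounds into the lowest bit
true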

Correct answer by Willie Wong on July 18, 2021
