
How important is floating-point precision in molecular dynamics?

Asked on January 22, 2021

I’m wondering how important floating-point precision is in numerical simulations of molecular dynamics in biology. From what I understand, molecular dynamics programs like NAMD use 32-bit floats to represent the various numbers involved in a simulation (or at least, this mailing-list entry suggests that this is the case for the GPU code; it doesn’t mention anything about the CPU).

I’m curious about this because, from what I understand, thermal noise plays a significant role in biophysics. If that is the case, could the limited precision of reduced-precision 16-bit formats such as bfloat16 effectively function as additional "noise"? How much precision does one need in such calculations?
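For reference, here is a quick numpy check I put together (nothing NAMD-specific) of how coarse these formats actually are; bfloat16 is not a native numpy dtype, so its epsilon is only noted in a comment:

    import numpy as np

    # Machine epsilon: the relative spacing between adjacent representable
    # numbers near 1. bfloat16 (not a native numpy dtype) keeps only 7
    # stored fraction bits, so its epsilon is 2**-7 ~ 0.0078, coarser still
    # than the IEEE half precision (float16) shown here.
    for dtype in (np.float16, np.float32, np.float64):
        info = np.finfo(dtype)
        print(f"{np.dtype(dtype).name}: eps = {info.eps:.3g} (~{info.precision} decimal digits)")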

Thanks!

2 Answers

As usual, the short answer is: it depends...

The slightly longer answer is: practically all systems you'd simulate with MD are chaotic, meaning that your results depend sensitively on the initial conditions. So, no matter how precise your hardware is, you will ALWAYS end up with noticeably different trajectories after a very short time span, but your averages should not be affected. For example, if you simulate a small sample of liquid water in the NVT ensemble, you will end up with totally different coordinates for individual water molecules depending on the random seed of the algorithm that generates your initial velocities, but the total energy and pressure you get should be the same (within small error bars, if you allow for sufficient total simulation time).
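To make the chaos point concrete, here is a minimal numpy stand-in (a toy chaotic map, not actual MD): two trajectories whose initial conditions differ by 1e-12 decorrelate within a few dozen steps, yet their long-time averages agree:

    import numpy as np

    # The logistic map at r = 4 is fully chaotic, so it serves as a cheap
    # stand-in for the sensitivity to initial conditions seen in MD.
    def trajectory(x0, n_steps):
        xs = np.empty(n_steps)
        x = x0
        for i in range(n_steps):
            x = 4.0 * x * (1.0 - x)   # one step of the map
            xs[i] = x
        return xs

    a = trajectory(0.3, 100_000)
    b = trajectory(0.3 + 1e-12, 100_000)   # perturbed initial condition

    # The trajectories separate after roughly 40 steps ...
    print("first step where |a - b| > 0.1:", np.argmax(np.abs(a - b) > 0.1))
    # ... but the time averages (the "observables") agree to about 3 digits.
    print("mean of a:", a.mean(), " mean of b:", b.mean())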

In practice we use either single or double precision. Single precision is typically faster (very much so on GPUs). As for the results, the problem is, as Vadim notes in his answer, round-off error. If you do MD of a protein in solution and you are merely interested in the protein structure (the coordinates, which result from a rather simple set of equations), you are certainly fine with single precision. But the more "higher-order" the quantities you are interested in, the more pronounced the round-off errors become. For example, if you consider things like free energies or thermal conductivity, you calculate numbers based on the coordinates, so you add another round of calculations on top of the first one (the calculation of the coordinates), and with each such manipulation you lose at least one digit of precision.

However, for most practical purposes it will never get so bad that you end up with random numbers when using single precision. A good way to combine speed and precision is to work with mixed precision (single for coordinates and velocities, double for higher-order terms), as is done for example in Gromacs; see: http://manual.gromacs.org/current/reference-manual/definitions.html#mixed-or-double-precision
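As a rough numpy sketch of the accumulator idea behind mixed precision (an illustration only, not Gromacs code): the per-term data stay in float32, but a float32 running sum loses several digits that a float64 accumulator keeps:

    import numpy as np

    rng = np.random.default_rng(0)
    terms = rng.random(1_000_000).astype(np.float32)   # e.g. per-particle energies

    # Naive running sum with a float32 accumulator: every addition rounds
    # to float32, and the round-off accumulates over a million steps.
    acc32 = np.float32(0.0)
    for t in terms:
        acc32 = np.float32(acc32 + t)

    # Same float32 data, but accumulated in float64 (the mixed-precision idea).
    acc64 = terms.sum(dtype=np.float64)

    exact = terms.astype(np.float64).sum()
    print("float32 accumulator, relative error:", abs(acc32 - exact) / exact)
    print("float64 accumulator, relative error:", abs(acc64 - exact) / exact)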

mic

Correct answer by Michael Brunsteiner on January 22, 2021

The problem with floating-point precision is the so-called round-off error: all the digits that do not fit into the designated number of bits are discarded. Such issues are rarely discussed anymore in physics computation courses, since on modern computers this is often not an issue... except in algorithms where such errors may accumulate over many steps, regardless of the pre-specified floating-point precision, molecular dynamics being a prime example. I can recommend the discussion of molecular dynamics in this review as a first introduction to the problem.
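A two-line numpy illustration of such truncation (the numbers are arbitrary, chosen only to expose the effect):

    import numpy as np

    # Near 1000, float32 resolves steps of roughly 1000 * eps ~ 1e-4,
    # so a smaller increment is rounded away entirely.
    print(np.float32(1000.0) + np.float32(1e-5) == np.float32(1000.0))  # True
    print(np.float64(1000.0) + 1e-5 == np.float64(1000.0))             # False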

Answered by Vadim on January 22, 2021
