
Constructing explicit Runge-Kutta methods of order 9 and higher

Computational Science Asked on August 22, 2021

Some older books I’ve seen say that the minimum number of stages of an explicit Runge-Kutta method of a specified order is unknown for orders $\geq 9$. Is this still true?

What libraries are there for automatically working with high order Runge-Kutta methods?

2 Answers

Bounds

That is still true. Butcher's book (page 196) summarizes the situation: in a 1985 paper, Butcher showed that 11 stages are needed for order 8, and this bound is sharp. For order 10, Hairer derived a family of 17-stage methods, but it is not known whether one can do better. The same information is given in Section II.5 of Hairer, Nørsett, & Wanner vol. I. The latter reference also goes through some of the techniques for developing high-order pairs (up to order 8).

There is an upper bound on the minimum number of stages required for any order, since you can construct them by extrapolation. This has been known for a very long time; see this recent paper of mine for an explanation. However, this bound is quadratic in the order and is certainly quite pessimistic. The nodepy software discussed below can generate exact coefficients for these methods, and also for deferred correction methods (which are Runge-Kutta methods) of any order.
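To make the extrapolation construction concrete, here is a minimal sketch (plain Python; the function name is mine, not from any library) of one extrapolated explicit-Euler step. With the step-number sequence $n_j = 1, \dots, p$ it attains order $p$ at the cost of $1 + 2 + \dots + p = p(p+1)/2$ Euler stages, which is exactly the quadratic stage count mentioned above:

```python
import math

def extrapolated_euler_step(f, t0, y0, H, p):
    """One step of order p from t0 to t0+H, built by Richardson
    (Aitken-Neville) extrapolation of explicit Euler substeps."""
    # T[j] = Euler approximation using n = j+1 substeps of size H/(j+1)
    T = []
    for n in range(1, p + 1):
        h, y = H / n, y0
        for i in range(n):
            y = y + h * f(t0 + i * h, y)
        T.append(y)
    # Neville extrapolation toward h -> 0; Euler's error expansion has
    # terms h, h^2, ..., so we extrapolate in h with nodes h_j = H/(j+1)
    for k in range(1, p):
        for j in range(p - 1, k - 1, -1):
            T[j] = T[j] + (T[j] - T[j - 1]) * (j + 1 - k) / k
    return T[-1]

# y' = y, y(0) = 1: one order-4 step of size 0.1 costs 10 Euler stages
approx = extrapolated_euler_step(lambda t, y: y, 0.0, 1.0, 0.1, 4)
```

The nested loop makes the pessimism visible: each extra order of accuracy costs a whole additional Euler sweep, whereas hand-derived methods reuse stages far more efficiently.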

I believe @Etienne is correct in saying that the highest order methods that have been constructed by hand are due to Terry Feagin. Regarding his other comment, this paper contains some 9(8) pairs:

J.H. Verner, High-order explicit Runge-Kutta pairs with low stage order, Applied Numerical Mathematics, Volume 22, Issues 1–3, November 1996, Pages 345-357

The number of new conditions at order $n$ is equal to the number of unlabeled rooted trees with $n$ nodes. Here is a table of the cumulative number of order conditions $N$ required for each order $p$; this table goes further than those provided in the literature and was produced using nodepy:

p | N
-----
1 | 1
2 | 2
3 | 4
4 | 8
5 | 17
6 | 37
7 | 85
8 | 200
9 | 486
10| 1205
11| 3047
12| 7813
13| 20299
14| 53272

The number of conditions for higher orders $p$ can easily be computed by taking cumulative sums of OEIS sequence A000081.
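For reference, here is a short self-contained sketch (plain Python, independent of nodepy) that reproduces the table above by generating A000081 with its standard divisor-sum recurrence and taking cumulative sums:

```python
from itertools import accumulate

def rooted_trees(pmax):
    """a[n] = number of unlabeled rooted trees with n nodes (OEIS A000081),
    via (n-1)*a(n) = sum_{k=1}^{n-1} (sum_{d|k} d*a(d)) * a(n-k)."""
    a = [0] * (pmax + 1)
    a[1] = 1
    for n in range(2, pmax + 1):
        s = sum(
            sum(d * a[d] for d in range(1, k + 1) if k % d == 0) * a[n - k]
            for k in range(1, n)
        )
        a[n] = s // (n - 1)  # the division is always exact
    return a[1:]

# cumulative number of order conditions N for each order p = 1..14
N = list(accumulate(rooted_trees(14)))
# N = [1, 2, 4, 8, 17, 37, 85, 200, 486, 1205, 3047, 7813, 20299, 53272]
```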

Software

For very high order methods, the number and complexity of the order conditions becomes impossible to deal with by hand. Some symbolic packages (Mathematica, at least) have capabilities for generating Runge-Kutta order conditions. There are probably some other packages out there, but I am aware of the following (both of which I wrote):

  • nodepy: a Python package that can generate symbolic expressions and code for the order conditions to arbitrary order. It also includes Python code for checking those conditions, computing error coefficients, etc.
  • RK-Opt: a MATLAB package that, among many other things, can find high-order Runge-Kutta methods with coefficients optimized for a few different purposes. It currently can't handle 9th-order explicit RK methods (it goes up to 8th order for stage-order-one methods and 10th order for methods with higher stage order). If that's something you're interested in, I could add the 9th-order (and higher) conditions fairly easily.
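As a small self-contained illustration of what these packages automate (this is not nodepy's or RK-Opt's actual API, just plain NumPy), here is a check of the eight order conditions through order 4 against the classical RK4 tableau:

```python
import numpy as np

# Classical RK4 Butcher tableau
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = A.sum(axis=1)  # row-sum condition c_i = sum_j a_ij

# The 8 order conditions through order 4, one per rooted tree
conditions = [
    (b.sum(),           1.0),     # order 1
    (b @ c,             1/2),     # order 2
    (b @ c**2,          1/3),     # order 3
    (b @ (A @ c),       1/6),
    (b @ c**3,          1/4),     # order 4
    (b @ (c * (A @ c)), 1/8),
    (b @ (A @ c**2),    1/12),
    (b @ (A @ (A @ c)), 1/24),
]
residuals = [abs(lhs - rhs) for lhs, rhs in conditions]
```

At order 4 this is easy to write by hand; the table above shows why nobody wants to do the same for the 1205 conditions of order 10.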

Another interesting note about order conditions, which becomes significant at such high orders, is that there are two ways to derive them, and they give you different (but collectively equivalent) conditions: one is due to Butcher, the other to Albrecht.

Correct answer by David Ketcheson on August 22, 2021

@DavidKetcheson's answer hits the big points: you can always construct methods of high enough order using extrapolation, but that's a very pessimistic bound and you can always do a whole lot better; all of the good methods are derived by hand (with the help of some computer algebra tools); no tight lower bound on the stage count is known; and the highest-order methods are due to Feagin. Given some of the comments, I wanted to round out the answer with a discussion of the current state-of-the-art tableaus in the field.

If you want a compendium of RK tableaus, you can find one in this Julia code. Citations for the papers they came from are in the docstrings of the tableau constructors. The developer documentation for DifferentialEquations.jl lists all of these tableaus as available for use, and you can see here that they are all tested using the Travis and AppVeyor continuous integration suites to make sure not only that the order conditions are satisfied, but that they actually achieve the requested convergence order (verification testing). From these, you can see that there are:

  • 5 order 9 methods
  • 6 order 10 methods
  • 2 order 12 methods
  • 1 order 14 method

(that I could find that were published). Again, all derived by hand.

The convergence tests show that some of the derivations were not carried out to high enough precision to work for more than 64-bit numbers (they are commented as such in the code). That's an interesting quirk to be aware of: at these high orders, published coefficients typically satisfy the order conditions only up to some error, and with arbitrary-precision arithmetic you can actually detect that floor. So the precision to which the coefficients are carried out does matter, and you should choose it to cover the precision you wish to test (and use, of course).
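To see concretely why truncated coefficients matter, here is a toy sketch: a weight whose exact value is 1/3, printed to 16 digits as in many published tableaus, misses its order condition by about 3e-17, which is invisible in float64 but becomes a hard floor for a convergence study run in higher precision:

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 50  # 50-digit working precision

exact = Fraction(1, 3)                    # exact coefficient value
printed = Decimal("0.3333333333333333")   # 16-digit value as published
residual = abs(Decimal(exact.numerator) / Decimal(exact.denominator) - printed)
# residual is about 3.3e-17: below float64's ~1.1e-16 unit roundoff,
# but an arbitrary-precision convergence test will stall at this level
```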

If you want a bunch of stability plots, you can just plot(tableau) using the Plots.jl recipe. A good set of notes with a lot of this written down can be found on Peter Stone's website (scroll down and click on, say, the order 10 schemes and you'll get a bunch of PDFs). When developing DifferentialEquations.jl, I created that set of tableaus to systematically go through them on test problems and look at the analytical indicators to see which ones should be included in the main library. I made some quick notes here. As you can see from the algorithms included in the main library, the ones I found worthwhile were the Verner and Feagin methods. The 9th-order Verner method is the highest-order method with an interpolant of matching order. That's something to recognize: the Feagin methods do not have a matching interpolant (you can bootstrap a Hermite interpolant, but that's really inefficient).

Since they all have very efficient implementations, you can play around with them yourself and see how much the different features actually matter. Here's a Jupyter notebook which shows the Feagin methods in use. Notice that the convergence plot really does go down to 1e-48 error. High-order methods are only more efficient than lower-order methods when you really need a very, very low tolerance. You can find benchmarks that use some of them at DiffEqBenchmarks.jl, though when they are used it's usually the 9th-order Verner method, and usually they show that the benchmark is not in the regime where such high order is efficient.

So if you want to play around with some high-order methods, RK-Opt is a good tool for deriving them (as @DavidKetcheson mentioned), and DifferentialEquations.jl has all of the published methods (I think?) implemented so that you can easily test and benchmark against them. However, unless you find an assumption that can be dropped, from my tests I haven't been able to find anything that beats the Verner (orders 6-9) and Feagin (orders 10+) methods. YMMV though, and I'd love to see more research in this area.

Answered by Chris Rackauckas on August 22, 2021
