
Numerically finding constants of motion

Computational Science Asked on August 22, 2021

Given a set of ODEs $\dot{z} = f(z)$ (or a discrete-time map $z_{t+1} = f(z_t)$), is there a way to numerically find constants of motion?

For $f(z_t) \approx M z_t$, diagonalizing the matrix $M$ and taking the eigenvectors with eigenvalue 1 works, but this only finds linear combinations of $z$ that are constant. I would like to find things like radius, energy, and momentum.
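For that linear case, a minimal sketch (the example matrix is just an illustration): a combination $c^\top z$ is preserved by $z_{t+1} = M z_t$ exactly when $c$ is a left eigenvector of $M$ with eigenvalue 1, i.e. an eigenvector of $M^\top$.

```python
import numpy as np

# Illustrative map z_{t+1} = M z_t; this particular M happens to preserve z1 + z2.
M = np.array([[0.6, 0.4],
              [0.4, 0.6]])

# c^T z is conserved iff c^T M = c^T, i.e. c is an eigenvector of M^T with eigenvalue 1.
w, V = np.linalg.eig(M.T)
conserved_directions = V[:, np.isclose(w, 1.0)]
print(conserved_directions)  # here ~ [1, 1]/sqrt(2), so z1 + z2 is constant
```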

I am specifically interested in constants of motion as I integrate on Lie groups.

EDIT:

Just to clarify, a constant of motion, at least for the sake of this question, is defined as

\begin{align}
f : \mathbb{R}^n \rightarrow \mathbb{R}
\quad \text{s.t.} \quad \frac{d}{dt} f(z(t)) = 0
\end{align}

So, for a given trajectory $z(t)$, we might expect multiple constants of motion $f_i(z(t))$, but I would like to find constants of motion $f_i$ that are constant along all possible trajectories:

\begin{align}
\frac{d}{dt} f_i\big(z(t; z_0^{(k)})\big) = 0 \quad \forall\, z_0^{(k)} \in \mathbb{R}^n
\end{align}
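For example (an illustration added for concreteness), the harmonic oscillator $\dot{q} = p$, $\dot{p} = -q$ has the energy as such a constant of motion:

\begin{align}
f(q, p) = \tfrac{1}{2}\big(q^2 + p^2\big), \qquad
\frac{d}{dt} f(q(t), p(t)) = q\dot{q} + p\dot{p} = qp - pq = 0,
\end{align}

which holds along every trajectory even though $f$ is not a linear function of the state.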

3 Answers

If you can write down the problem using generic constants, then this becomes a parameter estimation problem. The DifferentialEquations.jl docs explain a bunch of methods in more detail, but I'll focus on a simple and standard method, exemplified here. That example is about finding biological constants (birth and death rates in Lotka-Volterra), but the idea is the same as for finding constants of motion: convert the problem into an ODE with parameters and optimize the parameter values.

The inner loop is essentially:

  1. Solve the problem with parameters $p$.
  2. Compute a cost function.
  3. Have the optimization routine update the parameters $p$ to hopefully better parameters.
  4. Go back to 1 if your cost is too high.

If you have some data for what the path of the differential equation should be, then you can define a cost function that is the l2 difference between the data and the numerical solution at each point. Minimizing this cost is the same as maximizing the Normal log-likelihood, and results in finding the parameters of the ODE which best fit the data.
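As a rough sketch of this loop (steps 1-4 above with the l2 cost), here is a minimal Python/SciPy version; the toy ODE, its parameters, and the synthetic data are assumptions made only for the example, and the original discussion uses DifferentialEquations.jl rather than SciPy.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def rhs(t, z, p):
    # Illustrative ODE with unknown parameters p = (a, b): a damped oscillator.
    a, b = p
    return [z[1], -a * z[0] - b * z[1]]

t_data = np.linspace(0.0, 10.0, 50)
z0 = [1.0, 0.0]
# Synthetic "observed" trajectory generated with the true parameters (1.0, 0.1).
data = solve_ivp(rhs, (0.0, 10.0), z0, t_eval=t_data, args=([1.0, 0.1],)).y

def cost(p):
    # Steps 1-2: solve with the current parameters, then take the l2 difference
    # between the numerical solution and the data.
    sol = solve_ivp(rhs, (0.0, 10.0), z0, t_eval=t_data, args=(p,))
    return np.sum((sol.y - data) ** 2)

# Steps 3-4: let the optimizer keep updating the parameters until the cost is small.
result = minimize(cost, x0=[0.5, 0.5], method="Nelder-Mead")
print(result.x)  # should end up close to the true (1.0, 0.1)
```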

But this technique doesn't require data. You can define any loss function on the solution which makes sense, and have an optimizer fit it. For example, you might only know that you want the maximum value of $u$ at times $t = 1,2,3,4$ to be $1$. So you can make the cost function be

$$C(u) = \sum_{t=1,2,3,4} \Vert u(t) - 1 \Vert$$

and have the optimizer search for parameters which best satisfy this constraint. Of course, you get better results the better you define your cost function to match what you really want, though more complicated cost functions will put more of a strain on the optimizer.
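For that particular constraint, the cost function could be sketched as follows, assuming `solution` is a dense-output ODE solution that can be evaluated at arbitrary times (e.g. from `solve_ivp(..., dense_output=True)`):

```python
import numpy as np

def constraint_cost(solution):
    # C(u) = sum over t = 1, 2, 3, 4 of ||u(t) - 1||, as in the formula above.
    return sum(np.linalg.norm(solution.sol(t) - 1.0) for t in (1, 2, 3, 4))
```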

Basic optimization methods here include Levenberg-Marquardt, and you'll see some tutorials using that, but you really shouldn't, because it's prone to getting stuck in local minima and these parameter estimation problems are very tough for local optimizers. Instead, you should use a good global optimization library like NLopt to get decent results (it'll still be difficult!).

Answered by Chris Rackauckas on August 22, 2021

Still hoping for a better answer, but here is something I have thought of (inspired by what Chris Rackauckas said) that I don't particularly like.

Define a parameterized "constant of motion" $f_i(z(t); \theta)$ and then optimize over $\theta$ with something like

\begin{align}
\mathcal{L}(\theta_i, \vec{\lambda}) &= \frac{1}{2}\big(\|\theta\|^2 - 1\big) + \sum_{t, k} \frac{1}{2} \big( f_i(z^{(k)}(t); \theta_i) - \lambda_{ki} \big)^2
\end{align}

We leave $\lambda_{ki}$ "free" so that the constant can take a different value on each trajectory, while $f_i$ is still constrained to be a function of the state at each time step.
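A minimal sketch of minimizing this loss, with choices that are assumptions made only for the example: $f_i(z; \theta)$ is taken to be linear in a fixed polynomial feature map, the normalization term is squared (the loss above writes it without the square) so that the trivial $\theta = 0$ is ruled out, and the free constants $\lambda_{k}$ are optimized jointly with $\theta$.

```python
import numpy as np
from scipy.optimize import minimize

N_FEATURES = 5

def phi(z):
    # Assumed feature map: monomials of a 2D state z = (z1, z2) up to degree 2.
    z1, z2 = z
    return np.array([z1, z2, z1 * z1, z1 * z2, z2 * z2])

def loss(params, trajectories):
    # trajectories: list of arrays, each of shape (n_steps, 2).
    theta = params[:N_FEATURES]
    lams = params[N_FEATURES:]              # one free constant per trajectory
    # Normalization term (squared here, an assumption) to rule out theta = 0.
    value = 0.5 * (theta @ theta - 1.0) ** 2
    # Data term: f(z(t); theta) should equal lambda_k at every step of trajectory k.
    for lam_k, traj in zip(lams, trajectories):
        f_vals = np.array([phi(z) @ theta for z in traj])
        value += 0.5 * np.sum((f_vals - lam_k) ** 2)
    return value

# Example usage with a list `trajs` of (n_steps, 2) trajectory arrays:
# x0 = np.concatenate([np.random.randn(N_FEATURES), np.zeros(len(trajs))])
# result = minimize(loss, x0, args=(trajs,), method="BFGS")
```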

Training for the next constant of motion could be done by adding a loss term to account for the previously found constants of motion:
\begin{align}
\sum_{t, k} \sum_{j=1}^{i-1} \big( f_i(z^{(k)}(t); \theta_i) - f_j(z^{(k)}(t); \theta_j) \big)^2
\end{align}

I don't like this for the obvious reason that it's a horrible optimization problem to solve.

Answered by ignorance on August 22, 2021

This seems like a difficult problem to solve in general, but here's one idea.

Suppose you look for all constants of motion of the form $$ \sum_{i=1}^m \lambda_i \varphi_i(t), $$ where $\lambda_i$ are some unknown coefficients, and $\{\varphi_i\}_i$ is some set of basis functions that can be computed by numerical integration of the ODE. For example, the $\varphi_i$'s could be all monomials up to a certain degree, composed of $z, \dot z, \ddot z, \ldots$.

Then the condition that $\sum_i \lambda_i \varphi_i(t) = \mathrm{const}$ could be approximated as the matrix equation $$ \Phi \lambda = 0, \qquad \text{where } \Phi_{ij} = \varphi_j(t_i) - \varphi_j(0), $$ where $\lambda$ is a vector of unknowns, $\Phi$ is an $n \times m$ matrix, and $t_1, \ldots, t_n$ are some arbitrary convenient points in time ($n > m$).

This way you could find all possible coefficients $\lambda$ that generate constants of motion by asking for the numerical kernel of $\Phi$. If the constants of motion are polynomials in $z$ and its derivatives, this would work, but the matrix could be very large because of how many different monomials there can be. This does require you to guess in advance what the constants of motion might look like; I don't know if that's realistic for you.
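A minimal numerical sketch of this construction; the oscillator ODE, the monomial basis, and the kernel tolerance below are assumptions chosen just to make the example concrete.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Trajectory of z = (q, p) for the harmonic oscillator dq/dt = p, dp/dt = -q.
t_pts = np.linspace(0.0, 10.0, 200)
sol = solve_ivp(lambda t, z: [z[1], -z[0]], (0.0, 10.0), [1.0, 0.3],
                t_eval=t_pts, rtol=1e-10, atol=1e-12)

def basis(z):
    # Candidate phi_j's: monomials in the state up to degree 2.
    q, p = z
    return np.array([1.0, q, p, q * q, q * p, p * p])

values = np.array([basis(z) for z in sol.y.T])   # phi_j(t_i), an n-by-m array
Phi = values - values[0]                          # Phi_ij = phi_j(t_i) - phi_j(0)

# Numerical kernel of Phi: right singular vectors with (near-)zero singular values.
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
kernel = Vt[s < 1e-6 * s.max()]
print(kernel)  # each row is a coefficient vector lambda of a conserved combination
```

In this toy case the kernel contains the trivial constant basis function and, up to scaling, the combination $q^2 + p^2$, i.e. the oscillator's energy.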

Answered by Kirill on August 22, 2021
