Integrators for time stepping

This page describes how to solve ordinary differential equations numerically with examples from our library.

Introduction and notation

We are solving an initial value problem, given as

$ \begin{align*} \dot{y}(t) &= f(t, y) \\ y(t_0) &= y_0 \end{align*} $

where $y$ is the unknown (possibly vector) function, $t_0$ is the start time, $f$ is the derivative (the function we wish to integrate) and $y_0$ is the initial value of $y$. Numerically, we usually choose a time step $\Delta t$ and integrate the function up to a certain time $t_{\max}$. Times of subsequent time steps are denoted with $t_i$ and function values with $y_i$.

The simplest method is the explicit Euler method: $y_{n+1} = y_{n} + \Delta t f(t_n, y_n)$
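
For illustration, here is a minimal C++ sketch of the explicit Euler method. This is a generic sketch, not the library's own interface; the right-hand side f and the test problem $\dot{y} = -y$ are our arbitrary choices.

#include <cstdio>

// Right-hand side of the ODE y' = f(t, y); here y' = -y as a stand-in test problem.
double f(double t, double y) { return -y; }

int main() {
    double t = 0.0, y = 1.0;        // initial condition y(0) = 1
    double dt = 0.01, t_max = 1.0;  // time step and end time
    while (t < t_max) {
        y += dt * f(t, y);  // Euler update: y_{n+1} = y_n + dt * f(t_n, y_n)
        t += dt;
    }
    std::printf("y(%g) = %g (exact value is e^{-1} = 0.367879)\n", t, y);
}

The method is first order: halving $\Delta t$ approximately halves the error.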

Explicit (single step) methods

A family of single step methods are the explicit Runge–Kutta methods, which use intermediate derivative values to give a better estimate of the value $y_{n+1}$ at the next step.

Writing $h = \Delta t$ for the time step, a method of this family is given by

$y_{n+1} = y_n + h \displaystyle \sum_{i=1}^s \beta_i k_i$

where

$ \begin{align*} k_1 & = f(t_n, y_n), \\ k_2 & = f(t_n+\gamma_2h, y_n+h(\alpha_{21}k_1)), \\ k_3 & = f(t_n+\gamma_3h, y_n+h(\alpha_{31}k_1+\alpha_{32}k_2)), \\ & \ \ \vdots \\ k_s & = f(t_n+\gamma_sh, y_n+h(\alpha_{s1}k_1+\alpha_{s2}k_2+\cdots+\alpha_{s,s-1}k_{s-1})). \end{align*} $

To specify a particular method, one needs to provide the integer $s$ (the number of stages) and the coefficients $\alpha_{ij}$, $\beta_i$ and $\gamma_i$. This structure is known as the Butcher tableau of the method: $ \begin{array}{c|c} \gamma & \alpha \\ \hline & \beta^{\mathsf{T}} \end{array} $

The first order method of this family is the Euler method above, while methods of very high order are also available. The most famous is RK4, the fourth order Runge–Kutta method. A more complete list can be found in Wikipedia's [https://en.wikipedia.org/wiki/List_of_Runge%E2%80%93Kutta_methods list of Runge–Kutta methods].
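
To make the tableau structure concrete, the following C++ sketch performs one step of a general explicit Runge–Kutta method given its Butcher tableau, and runs it with the classical RK4 coefficients. This is an illustrative sketch under the notation above, not the library's implementation; all names (rk_step, alpha, beta, gamma) are our own.

#include <cstdio>
#include <functional>
#include <vector>

// One step of an explicit Runge-Kutta method for y' = f(t, y), given its Butcher
// tableau: alpha is the strictly lower triangular matrix of the tableau, beta are
// the weights and gamma the nodes.
double rk_step(const std::function<double(double, double)>& f, double t, double y,
               double h, const std::vector<std::vector<double>>& alpha,
               const std::vector<double>& beta, const std::vector<double>& gamma) {
    int s = (int) beta.size();
    std::vector<double> k(s);
    for (int i = 0; i < s; ++i) {
        double yi = y;
        for (int j = 0; j < i; ++j) yi += h * alpha[i][j] * k[j];
        k[i] = f(t + gamma[i] * h, yi);  // stage k_i, built from the stages before it
    }
    double sum = 0.0;
    for (int i = 0; i < s; ++i) sum += beta[i] * k[i];
    return y + h * sum;
}

int main() {
    // Butcher tableau of the classical RK4 method.
    std::vector<std::vector<double>> alpha = {{}, {0.5}, {0.0, 0.5}, {0.0, 0.0, 1.0}};
    std::vector<double> beta = {1.0/6, 1.0/3, 1.0/3, 1.0/6};
    std::vector<double> gamma = {0.0, 0.5, 0.5, 1.0};

    auto f = [](double t, double y) { return -y; };  // test problem y' = -y
    double t = 0.0, y = 1.0, h = 0.1;
    for (int n = 0; n < 10; ++n) { y = rk_step(f, t, y, h, alpha, beta, gamma); t += h; }
    std::printf("y(%g) = %.6f (exact value is 0.367879)\n", t, y);
}

Any explicit method of this family can be used by swapping in a different tableau.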

Explicit multistep methods

These methods are called Adams–Bashforth methods, and need the previous $s$ values to compute the next one. An $s$-step Adams–Bashforth method has the form

$y_{n+s} = y_{n+s-1} + h \displaystyle \sum_{j=0}^{s-1} b_j f(t_{n+j}, y_{n+j}),$

where the coefficients $b_j$ are chosen so that the method has order $s$. For $s = 1$ this is again the Euler method, while for $s = 2$ one gets $y_{n+2} = y_{n+1} + h \left(\frac{3}{2} f(t_{n+1}, y_{n+1}) - \frac{1}{2} f(t_n, y_n)\right)$.
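
A C++ sketch of the two-step Adams–Bashforth method follows (again illustrative, not the library's interface; the test problem is our choice). Note that a multistep method cannot start by itself: the first $s - 1$ values must be supplied by a single step method, here one Euler step.

#include <cstdio>

// Test problem y' = -y with y(0) = 1.
double f(double t, double y) { return -y; }

int main() {
    double h = 0.01, t = 0.0;
    double y_prev = 1.0;                   // y_0
    double y = y_prev + h * f(t, y_prev);  // bootstrap y_1 with one Euler step
    t += h;
    for (int n = 1; n < 100; ++n) {
        // AB2 update: y_{n+1} = y_n + h*(3/2 f(t_n, y_n) - 1/2 f(t_{n-1}, y_{n-1}))
        double y_next = y + h * (1.5 * f(t, y) - 0.5 * f(t - h, y_prev));
        y_prev = y;
        y = y_next;
        t += h;
    }
    std::printf("y(%g) = %.6f (exact value is 0.367879)\n", t, y);
}

Each step needs only one new evaluation of $f$, which makes these methods cheaper per step than Runge–Kutta methods of the same order.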

Adaptive methods

Adaptive methods estimate the error at each time step and decrease or increase the step as necessary. Usually they have some additional parameters, such as $\Delta t_{\max}$ and $\Delta t_{\min}$, representing the maximal and minimal allowed time steps.

A family of adaptive methods can be derived from the Runge–Kutta methods above. Using two methods, of orders $k$ and $k+1$, that differ only in the weights, called $\beta$ and $\beta^\ast$, the error can be estimated using [https://en.wikipedia.org/wiki/Richardson_extrapolation Richardson extrapolation]:

$e_{n+1} = y_{n+1} - y^\ast_{n+1} = h \displaystyle \sum_{i=1}^s (\beta_i - \beta^\ast_i) k_i.$
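
As an illustration, the following C++ sketch implements this idea with the simplest embedded pair, Euler (order 1) inside Heun's method (order 2), together with $\Delta t_{\min}$, $\Delta t_{\max}$ and a tolerance. The choice of pair, the safety factor 0.9 and the growth limits are common conventions assumed by us, not prescribed by the text above.

#include <algorithm>
#include <cmath>
#include <cstdio>

// Test problem y' = -y with y(0) = 1.
double f(double t, double y) { return -y; }

int main() {
    double t = 0.0, y = 1.0, t_max = 1.0;
    double dt = 0.1, dt_min = 1e-6, dt_max = 0.5, tol = 1e-6;
    while (t < t_max) {
        dt = std::min(dt, t_max - t);  // do not step past t_max
        double k1 = f(t, y);
        double k2 = f(t + dt, y + dt * k1);
        double y_low = y + dt * k1;                // Euler, order 1
        double y_high = y + dt * 0.5 * (k1 + k2);  // Heun, order 2
        double err = std::abs(y_high - y_low);     // error estimate e_{n+1}
        if (err <= tol || dt <= dt_min) {  // accept the step (always accept at dt_min)
            y = y_high;  // advance with the higher order value
            t += dt;
        }
        // Step size update with safety factor 0.9, limited growth/shrinkage,
        // clamped to [dt_min, dt_max].
        double factor = 0.9 * std::sqrt(tol / std::max(err, 1e-16));
        dt = std::clamp(dt * std::min(std::max(factor, 0.2), 5.0), dt_min, dt_max);
    }
    std::printf("y(%g) = %.6f (exact value is 0.367879)\n", t, y);
}

The square root in the update reflects that the local error of the lower order method behaves like $\mathcal{O}(h^2)$, so the new step is proportional to $h\sqrt{\mathrm{tol}/e_{n+1}}$.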