Integrators for time stepping

This page describes how to solve ordinary differential equations numerically with examples from our library.

== Introduction and notation ==

We are solving an initial value problem, given as

$ \begin{align*} \dot{y}(t) &= f(t, y) \\ y(t_0) &= y_0 \end{align*} $

where $y$ is the unknown (possibly vector) function, $t_0$ is the start time, $f$ is the derivative (the function we wish to integrate) and $y_0$ is the initial value of $y$. Numerically, we usually choose a time step $\Delta t$ and integrate the function up to a certain time $t_{\max}$. Times of subsequent time steps are denoted with $t_i$ and function values with $y_i$.

The simplest method is the explicit Euler method: $y_{n+1} = y_n + \Delta t \, f(t_n, y_n)$
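As an illustration, here is a minimal self-contained C++ sketch of this scheme (the name euler_step and the test problem are ours for illustration; this is not the interface of our library):

<syntaxhighlight lang="cpp">
#include <cstdio>

// One explicit Euler step: y_{n+1} = y_n + dt * f(t_n, y_n).
template <typename F>
double euler_step(F f, double t, double y, double dt) {
    return y + dt * f(t, y);
}

int main() {
    // Test problem: y' = -y, y(0) = 1, with exact solution y(t) = exp(-t).
    auto f = [](double /*t*/, double y) { return -y; };
    double t = 0.0, y = 1.0, dt = 0.001;
    for (int i = 0; i < 1000; ++i) {  // integrate up to t_max = 1
        y = euler_step(f, t, y, dt);
        t += dt;
    }
    std::printf("y(1) ~ %f, exact %f\n", y, 0.367879);
    return 0;
}
</syntaxhighlight>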

== Explicit (single step) methods ==

A family of single step methods are the explicit Runge-Kutta methods, which use intermediate derivative values to give a better estimate of the value $y_{n+1}$ at the next step.

It is given by

$y_{n+1} = y_n + h \displaystyle \sum_{i=1}^s \beta_i k_i$

where (writing $h = \Delta t$ for the time step)

$ \begin{align*} k_1 & = f(t_n, y_n), \\ k_2 & = f(t_n+\gamma_2h, y_n+h(\alpha_{21}k_1)), \\ k_3 & = f(t_n+\gamma_3h, y_n+h(\alpha_{31}k_1+\alpha_{32}k_2)), \\ & \ \ \vdots \\ k_s & = f(t_n+\gamma_sh, y_n+h(\alpha_{s1}k_1+\alpha_{s2}k_2+\cdots+\alpha_{s,s-1}k_{s-1})). \end{align*} $

To specify a particular method, one needs to provide the integer $s$ (the number of stages), and the coefficients $\alpha_{ij}$, $\beta_i$ and $\gamma_i$. This structure is known as the Butcher tableau of the method: $ \begin{array}{c|c} \gamma & \alpha \\ \hline & \beta^T \end{array} $
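For example, the classical fourth order Runge-Kutta method (RK4, mentioned below) has the tableau

$ \begin{array}{c|cccc} 0 & & & & \\ 1/2 & 1/2 & & & \\ 1/2 & 0 & 1/2 & & \\ 1 & 0 & 0 & 1 & \\ \hline & 1/6 & 1/3 & 1/3 & 1/6 \end{array} $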

The first order method is Euler's method above, while methods of very high order are available. The most famous is RK4, the Runge-Kutta method of fourth order. A more complete list can be found [https://en.wikipedia.org/wiki/List_of_Runge%E2%80%93Kutta_methods here].
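A minimal self-contained C++ sketch of one RK4 step, implementing the tableau above (the name rk4_step and the test problem are illustrative, not our library's interface):

<syntaxhighlight lang="cpp">
#include <cstdio>

// One step of classical RK4 for y' = f(t, y), following the tableau above:
// k1..k4 are the intermediate derivative evaluations, combined with the
// weights beta = (1/6, 1/3, 1/3, 1/6).
template <typename F>
double rk4_step(F f, double t, double y, double h) {
    double k1 = f(t, y);
    double k2 = f(t + h / 2, y + h / 2 * k1);
    double k3 = f(t + h / 2, y + h / 2 * k2);
    double k4 = f(t + h, y + h * k3);
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4);
}

int main() {
    // Test problem: y' = -y, y(0) = 1.
    auto f = [](double /*t*/, double y) { return -y; };
    double t = 0.0, y = 1.0, h = 0.1;
    for (int i = 0; i < 10; ++i) {  // integrate up to t_max = 1
        y = rk4_step(f, t, y, h);
        t += h;
    }
    std::printf("y(1) ~ %.10f, exact %.10f\n", y, 0.3678794412);
    return 0;
}
</syntaxhighlight>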

== Explicit multistep methods ==

These methods are called Adams–Bashforth methods, and need the derivative values from the previous $s$ steps to compute $y_{n+1}$.
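As a sketch, here is the two-step method (AB2) in self-contained C++, bootstrapped with one Euler step (the name integrate_ab2 and the test problem are illustrative only):

<syntaxhighlight lang="cpp">
#include <cstdio>

// Two-step Adams-Bashforth: y_{n+1} = y_n + h*(3/2 f_n - 1/2 f_{n-1}).
// Multistep methods are not self-starting, so the first step here is
// bootstrapped with a single Euler step.
template <typename F>
double integrate_ab2(F f, double t0, double y0, double h, int steps) {
    double t = t0, y = y0;
    double f_prev = f(t, y);  // stored derivative from the previous step
    y += h * f_prev;          // bootstrap: one Euler step
    t += h;
    for (int i = 1; i < steps; ++i) {
        double f_curr = f(t, y);
        y += h * (1.5 * f_curr - 0.5 * f_prev);
        f_prev = f_curr;
        t += h;
    }
    return y;
}

int main() {
    // Test problem: y' = -y, y(0) = 1.
    auto f = [](double /*t*/, double y) { return -y; };
    std::printf("y(1) ~ %f, exact %f\n",
                integrate_ab2(f, 0.0, 1.0, 0.01, 100), 0.367879);
    return 0;
}
</syntaxhighlight>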

== Adaptive methods ==

Adaptive methods estimate the error at each time step and decrease or increase the step as necessary. Usually they have some additional parameters, such as $\Delta t_{\max}$ and $\Delta t_{\min}$, representing the maximal and minimal allowed time steps.

A family of adaptive methods can be derived from the Runge-Kutta methods above. Using two methods, of orders $k$ and $k+1$, that differ only in the weights, called $\beta$ and $\beta^\ast$, the error can be estimated using [https://en.wikipedia.org/wiki/Richardson_extrapolation Richardson extrapolation]: $e_{n+1} = y_{n+1} - y^\ast_{n+1} = h \sum_{i=1}^s (\beta_i - \beta^\ast_i) k_i.$
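A minimal self-contained C++ sketch of this idea, using the simplest embedded pair (Heun's method of order 2 with Euler's method of order 1 as the lower-order estimate); the function name, step-size update constants and the test problem are our illustrative choices, not the library's adaptive integrators:

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <cmath>
#include <cstdio>

// Adaptive stepping with the embedded Heun-Euler pair (orders 2 and 1):
// both solutions share the stages k1, k2 and differ only in the weights,
// so their difference estimates the local error, as in the formula above.
template <typename F>
double integrate_adaptive(F f, double t, double y, double t_end,
                          double tol, double dt_min, double dt_max) {
    double dt = dt_max;
    while (t < t_end) {
        dt = std::min(dt, t_end - t);  // do not step past t_end
        double k1 = f(t, y);
        double k2 = f(t + dt, y + dt * k1);
        double y_high = y + dt / 2 * (k1 + k2);  // Heun (order 2)
        double y_low = y + dt * k1;              // Euler (order 1)
        double err = std::abs(y_high - y_low);   // local error estimate
        if (err <= tol || dt <= dt_min) {        // accept the step
            t += dt;
            y = y_high;
        }
        // Step size update for an order-1 error estimate (err ~ dt^2),
        // with a safety factor and clamping to [dt_min, dt_max].
        dt = std::clamp(0.9 * dt * std::sqrt(tol / std::max(err, 1e-300)),
                        dt_min, dt_max);
    }
    return y;
}

int main() {
    // Test problem: y' = -y, y(0) = 1.
    auto f = [](double /*t*/, double y) { return -y; };
    double y1 = integrate_adaptive(f, 0.0, 1.0, 1.0, 1e-6, 1e-8, 0.1);
    std::printf("y(1) ~ %f, exact %f\n", y1, 0.367879);
    return 0;
}
</syntaxhighlight>

Rejected steps are simply retried with the smaller step, while steps that already hit $\Delta t_{\min}$ are accepted to guarantee progress.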