Integrators for time stepping
This page describes how to solve ordinary differential equations numerically with examples from our library.
Introduction and notation
We are solving an initial value problem, given as
$ \begin{align*} \dot{y}(t) &= f(t, y) \\ y(t_0) &= y_0 \end{align*} $
where $y$ is the unknown (possibly vector valued) function, $t_0$ is the start time, $f$ is the derivative (the function we wish to integrate) and $y_0$ is the initial value of $y$. Numerically, we usually choose a time step $\Delta t$ and integrate the function up to a certain time $t_{\max}$. Times of subsequent time steps are denoted by $t_i$ and function values by $y_i$.
The simplest method is the explicit Euler's method: $y_{n+1} = y_{n} + \Delta t f(t_n, y_n)$
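As an illustration, here is a minimal standalone sketch of the explicit Euler method in C++ (the name euler_step and the test problem $\dot{y} = -y$ are chosen for illustration and are not part of the library):

#include <cmath>
#include <cstdio>

// One step of the explicit Euler method: y_{n+1} = y_n + dt * f(t_n, y_n).
template <typename F>
double euler_step(F f, double t, double y, double dt) {
    return y + dt * f(t, y);
}

int main() {
    // Test problem: dy/dt = -y, y(0) = 1, with exact solution y(t) = exp(-t).
    auto f = [](double, double y) { return -y; };
    double t = 0.0, y = 1.0, dt = 0.01;
    while (t < 1.0) {
        y = euler_step(f, t, y, dt);
        t += dt;
    }
    std::printf("Euler: y(%.2f) = %.6f, exact = %.6f\n", t, y, std::exp(-t));
    return 0;
}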
Explicit (single step) methods
A family of single step methods are the explicit Runge–Kutta methods, which use intermediate derivative values to give a better estimate of the value $y_{n+1}$ at the next step.
A general explicit Runge–Kutta method is given by
$y_{n+1} = y_n + h \displaystyle \sum_{i=1}^s \beta_i k_i$
where
$ \begin{align*} k_1 & = f(t_n, y_n), \\ k_2 & = f(t_n+\gamma_2h, y_n+h(\alpha_{21}k_1)), \\ k_3 & = f(t_n+\gamma_3h, y_n+h(\alpha_{31}k_1+\alpha_{32}k_2)), \\ & \ \ \vdots \\ k_s & = f(t_n+\gamma_sh, y_n+h(\alpha_{s1}k_1+\alpha_{s2}k_2+\cdots+\alpha_{s,s-1}k_{s-1})). \end{align*} $
To specify a particular method, one needs to provide the integer $s$ (the number of stages), and the coefficients $\alpha_{ij}$, $\beta_i$ and $\gamma_i$. This structure is known as the Butcher tableau of the method: $ \begin{array}{l|l} \gamma & \alpha \\ \hline & \beta^T \end{array} $
The first order method is Euler's method above, while methods of very high order are available. The most famous is RK4, the Runge–Kutta method of fourth order. A more complete list can be found here.
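For illustration, a minimal standalone C++ sketch of one step of RK4 with its classical coefficients follows (the name rk4_step and the test problem are illustrative, not the library's interface):

#include <cmath>
#include <cstdio>

// One step of the classical fourth order Runge-Kutta method (RK4).
template <typename F>
double rk4_step(F f, double t, double y, double h) {
    double k1 = f(t, y);
    double k2 = f(t + h / 2, y + h / 2 * k1);
    double k3 = f(t + h / 2, y + h / 2 * k2);
    double k4 = f(t + h, y + h * k3);
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4);
}

int main() {
    // Test problem: dy/dt = -y, y(0) = 1, with exact solution exp(-t).
    auto f = [](double, double y) { return -y; };
    double t = 0.0, y = 1.0, h = 0.1;
    for (int i = 0; i < 10; ++i) {
        y = rk4_step(f, t, y, h);
        t += h;
    }
    std::printf("RK4: y(%.1f) = %.8f, exact = %.8f\n", t, y, std::exp(-t));
    return 0;
}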
Explicit multistep methods
These methods are called Adams–Bashforth methods, and use the derivative values from the previous $s$ steps to compute $y_{n+1}$.
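For reference, an $s$-step Adams–Bashforth method has the general form

$y_{n+s} = y_{n+s-1} + h \displaystyle \sum_{j=0}^{s-1} b_j f(t_{n+j}, y_{n+j}),$

where the coefficients $b_j$ are obtained by integrating the polynomial interpolating the previous derivative values. For example, the one-step method is exactly the explicit Euler method above, while the two-step method reads $y_{n+2} = y_{n+1} + h\left(\tfrac{3}{2} f(t_{n+1}, y_{n+1}) - \tfrac{1}{2} f(t_n, y_n)\right)$.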
Adaptive methods
Adaptive methods estimate the error on each time step and decrease or increase the step as necessary. Usually they have some additional parameters, such as $\Delta t_{\max}$ and $\Delta t_{\min}$, representing the maximal and minimal allowed time steps, and $\varepsilon$, representing the tolerance.
A family of adaptive methods can be derived from the Runge–Kutta methods above. Using two methods of orders $k$ and $k+1$ that differ only in the weights, called $\beta$ and $\beta^\ast$, the error can be estimated using Richardson extrapolation (https://en.wikipedia.org/wiki/Richardson_extrapolation):
$e_{n+1} = y_{n+1} - y^*_{n+1} = h\sum_{i=1}^s (\beta_i - \beta^*_i) k_i$.
If $\|e_{n+1}\| < \varepsilon$, the time step can be increased, otherwise it is decreased. This can be achieved by scaling $\Delta t$ in accordance with $\varepsilon / \|e_{n+1}\|$.
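Below is a minimal standalone C++ sketch of such a step-size controller (the name adapt_step, the safety factor $0.9$ and the limits on the growth factor are conventional illustrative choices, not taken from the library); $k$ is the lower order of the embedded pair, so the scaling uses the exponent $1/(k+1)$:

#include <algorithm>
#include <cmath>
#include <cstdio>

// Compute the next time step from the local error estimate, as described above.
// 'err' is ||e_{n+1}|| and 'eps' is the tolerance. The safety factor 0.9 and the
// exponent 1/(k+1) are conventional choices; the step is kept in [dt_min, dt_max].
double adapt_step(double dt, double err, double eps, int k,
                  double dt_min, double dt_max) {
    if (err == 0.0) return dt_max;  // error negligible, take the largest allowed step
    double factor = 0.9 * std::pow(eps / err, 1.0 / (k + 1));
    factor = std::clamp(factor, 0.2, 5.0);  // avoid overly aggressive changes
    return std::clamp(dt * factor, dt_min, dt_max);
}

int main() {
    // A step with error above the tolerance is shortened, one below it is lengthened.
    std::printf("%f\n", adapt_step(0.1, 1e-3, 1e-6, 4, 1e-6, 1.0));  // decreased
    std::printf("%f\n", adapt_step(0.1, 1e-9, 1e-6, 4, 1e-6, 1.0));  // increased
    return 0;
}

If the error estimate exceeds the tolerance, the step is usually also rejected and recomputed with the reduced $\Delta t$.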
These methods are implemented in Matlab as ode45 and similar. A list of embedded methods can be found at https://en.wikipedia.org/wiki/List_of_Runge%E2%80%93Kutta_methods#Embedded_methods, with a sample Matlab implementation of the Cash–Karp method at https://github.com/jureslak/numerika-fmf/blob/master/ninde/hw2/CashKarp.m.