<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://e6.ijs.si/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=AndrejKP</id>
		<title>Medusa: Coordinate Free Meshless Method implementation - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://e6.ijs.si/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=AndrejKP"/>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=Special:Contributions/AndrejKP"/>
		<updated>2026-05-13T21:32:52Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.1</generator>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=Scattering_from_an_infinite_cylinder&amp;diff=3686</id>
		<title>Scattering from an infinite cylinder</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=Scattering_from_an_infinite_cylinder&amp;diff=3686"/>
				<updated>2026-01-22T16:24:37Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: Fix the boundary condition arising from H fields (no epsilon there) and the incident wave form. Add solution reference.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Computational electromagnetics]].&lt;br /&gt;
&lt;br /&gt;
==Case==&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\newcommand{\eps}{\varepsilon}&lt;br /&gt;
\newcommand{\dpar}[2]{\frac{\partial #1}{\partial #2}}&lt;br /&gt;
\newcommand{\ddpar}[2]{\frac{\partial^2 #1}{\partial #2^2}}&lt;br /&gt;
\def\doubleunderline#1{\underline{\underline{#1}}}&amp;lt;/math&amp;gt;&lt;br /&gt;
As a simple two-dimensional case, let us consider a plane wave scattering on a dielectric cylinder. This case is important for two reasons. Firstly, it shows how to ''encode'' the incident wave into the boundary condition between the cylinder and free space, so that only the scattered field needs to be computed outside; this technique of representing sources with known analytical solutions is especially useful when dealing with point sources. Secondly, the problem has a relatively simple analytical solution in terms of Hankel functions and can therefore be used as a benchmark case. The problem is defined in two domains, inside and outside the cylinder, where we solve the following system of two PDEs, coupled on the interior boundary&lt;br /&gt;
\begin{align}&lt;br /&gt;
\nabla^2 E_z^{int} +\eps_r \frac{\omega^2}{c_0^2}\thinspace E_z^{int} = 0 \qquad &amp;amp;\text{in} \quad D \label{eq:inner} \\&lt;br /&gt;
\nabla^2 E_z^s + \frac{\omega^2}{c_0^2}E_z^s  = 0 \qquad &amp;amp; \text{outside} \quad D&lt;br /&gt;
\label{eq:outer}&lt;br /&gt;
\end{align}&lt;br /&gt;
The field $E_z^{int}$ is the total field inside the cylinder, while $E_z^s$ is only the scattered part of $E_z$ outside it. The boundary conditions are&lt;br /&gt;
\begin{equation}&lt;br /&gt;
E_z^{int} - E_z^{s} =E_z^{inc}  \qquad \text{on} \quad \partial D \label{eq:BC1}, \qquad \text{and} \qquad&lt;br /&gt;
\dpar{E_z^{int}}{n} - \dpar{E_z^{s}}{n} = \dpar{E_z^{inc}}{n}  \qquad \text{on} \quad  \partial D.&lt;br /&gt;
\end{equation}&lt;br /&gt;
The incident field $E_z^{inc}$ has an analytical form, in our case a plane wave $e^{-ikx}$. Since it has a closed form, its derivatives are known exactly and the boundary conditions can be evaluated directly. Around the computational domain we use a PML (perfectly matched layer) to simulate an infinite domain.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; line&amp;gt;&lt;br /&gt;
	// Construct system&lt;br /&gt;
double l_in = epsr*std::pow(omega/c0, 2.0);&lt;br /&gt;
double l_out = std::pow(omega/c0, 2.0);&lt;br /&gt;
for (int i : cyl_dom.interior()){&lt;br /&gt;
	op_inner.lap(i) + l_in*op_inner.value(i) = 0.0;&lt;br /&gt;
}&lt;br /&gt;
for (int i : outer_inter){&lt;br /&gt;
	op_outer.lap(i) + l_out*op_outer.value(i) = 0.0;&lt;br /&gt;
}&lt;br /&gt;
for (int c = 0; c &amp;lt; interface_c_idx.size(); ++c){&lt;br /&gt;
	int i = interface_c_idx[c]; // cyl index&lt;br /&gt;
	int j = interface_o_idx[c]; // outer index&lt;br /&gt;
	Vec2d pos = cyl_dom.pos(i);&lt;br /&gt;
	double x = pos[0];&lt;br /&gt;
	double y = pos[1];&lt;br /&gt;
&lt;br /&gt;
	// calculate normal derivative of the source function at the i-th node&lt;br /&gt;
	// get normal, must point out of the cylinder&lt;br /&gt;
	Vec2d normal = cyl_dom.normal(i);&lt;br /&gt;
	std::complex&amp;lt;double&amp;gt; incident = std::exp(1.0_i*k0*(x*std::cos(t0)+y*std::sin(t0)));&lt;br /&gt;
	std::complex&amp;lt;double&amp;gt; din_dx = &lt;br /&gt;
            1.0_i*k0*std::cos(t0)*std::exp(1.0_i*k0*(x*std::cos(t0)+y*std::sin(t0)));&lt;br /&gt;
	std::complex&amp;lt;double&amp;gt; din_dy = &lt;br /&gt;
            1.0_i*k0*std::sin(t0)*std::exp(1.0_i*k0*(x*std::cos(t0)+y*std::sin(t0)));&lt;br /&gt;
	std::complex&amp;lt;double&amp;gt; dui_dn = normal[0]*din_dx + normal[1]*din_dy;&lt;br /&gt;
&lt;br /&gt;
	// continuity of the fields, and the incident field&lt;br /&gt;
	op_inner.value(i) + (-1)*op_outer.value(j, Nouter + i) = incident;&lt;br /&gt;
&lt;br /&gt;
	// continuity of derivatives&lt;br /&gt;
	op_outer.neumann(j, outer_dom.normal(j)) &lt;br /&gt;
              + (1/epsr)*op_inner.neumann(i, cyl_dom.normal(i), j - Nouter)&lt;br /&gt;
			= dui_dn; // todo, change this from Neumanns to derivatives!&lt;br /&gt;
}&lt;br /&gt;
// SC-PML - where either x or y is constant&lt;br /&gt;
for (int i : PML){&lt;br /&gt;
	1.0/(sw[i]*sw[i])*op_outer.lap(i) + l_out*op_outer.value(i) +&lt;br /&gt;
	((-1.0)/(sw[i]*sw[i]*sw[i])*exop.d1(sw, 0, i)*op_outer.der1(i, 0)&lt;br /&gt;
		+(-1.0)/(sw[i]*sw[i]*sw[i])*exop.d1(sw, 1, i)*op_outer.der1(i, 1)) = 0.0;&lt;br /&gt;
}&lt;br /&gt;
for (int i: outer_bnd){&lt;br /&gt;
	op_outer.value(i) = 0.0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Analytical solution==&lt;br /&gt;
The solution to this problem is sought in terms of sums of cylindrical harmonics&amp;lt;ref name=&amp;quot;analytical solution&amp;quot;&amp;gt;M. Kerker, The Scattering of Light and Other Electromagnetic Radiation, 1969.&amp;lt;/ref&amp;gt;. The incident wave is a plane wave $\mathbf{E}_i = \hat e_z E_0 e^{-ik_0 x}$ and can be decomposed into Bessel functions of the first kind as&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\mathbf{E}_{i}=\widehat{z} E_{0} e^{-i k_{0} x}= \widehat{z} E_{0} \sum_{n=-\infty}^{\infty} i^{-n} J_{n}\left(k_{0} \rho\right) e^{i n \phi}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
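As a quick numerical check of this decomposition (a standalone sketch, not part of the page's Medusa code; it relies on the C++17 special math functions shipped with GCC's libstdc++), the truncated sum can be compared against $e^{-ik_0 x}$ directly:

```cpp
#include <cmath>
#include <complex>
#include <cassert>

// Truncated expansion  sum_{n=-N}^{N} i^{-n} J_n(k0*rho) e^{i n phi},
// which should reproduce exp(-i k0 x) with x = rho*cos(phi).
std::complex<double> incident_series(double k0, double rho, double phi, int N) {
    using namespace std::complex_literals;
    const std::complex<double> ipow[4] = {1.0, 1.0i, -1.0, -1.0i};  // powers of i
    std::complex<double> sum = 0.0;
    for (int n = -N; n <= N; ++n) {
        // extend std::cyl_bessel_j to negative orders via J_{-n} = (-1)^n J_n
        double Jn = std::cyl_bessel_j(std::abs(n), k0 * rho);
        if (n < 0 && n % 2 != 0) Jn = -Jn;
        int m = ((-n) % 4 + 4) % 4;  // i^{-n} = i^((-n) mod 4)
        sum += ipow[m] * Jn * std::exp(1.0i * double(n) * phi);
    }
    return sum;
}
```

With $k_0\rho$ of order one, a truncation of a few dozen terms already reproduces the plane wave to machine precision.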
The scattered field is a sum of Hankel functions of the second kind, so that it satisfies the Sommerfeld radiation condition&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\mathbf{E}^{s}=\widehat{z} E_{0} \sum_{n=-\infty}^{\infty} i^{-n} a_{n} H_{n}^{(2)}\left(k_{0} \rho\right) e^{i n \phi}, \quad \rho \geq a&lt;br /&gt;
\end{equation}&lt;br /&gt;
and finally the total field is again a sum of Bessel functions of the first kind&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\mathbf{E}^{t}=\widehat{z} E_{0} \sum_{n=-\infty}^{\infty} i^{-n} b_{n} J_{n}\left(k_{1} \rho\right) e^{i n \phi}, \quad \rho \leq a.&lt;br /&gt;
\end{equation}&lt;br /&gt;
The boundary conditions for the field and its derivative on the interface between the cylinder and the surrounding free space give two expressions&lt;br /&gt;
\begin{equation}&lt;br /&gt;
J_{n}\left(k_{0} a\right)+a_{n} H_{n}^{(2)}\left(k_{0} a\right)=b_{n} J_{n}\left(k_{1} a\right)&lt;br /&gt;
\end{equation}&lt;br /&gt;
and&lt;br /&gt;
\begin{equation}&lt;br /&gt;
k_0 J^{\prime}_{n}\left(k_{0} a\right)+ k_0 a_{n} H_{n}^{\prime(2)}\left(k_{0} a\right)=k_1 b_{n} J^{\prime}_{n}\left(k_{1} a\right).&lt;br /&gt;
\end{equation}&lt;br /&gt;
These give the coefficients $a_n$ and $b_n$ in the above expansions as&lt;br /&gt;
\begin{equation}&lt;br /&gt;
a_{n}=-\frac{J_{n}\left(k_{0} a\right)-\Gamma_{h} J_{n}^{\prime}\left(k_{0} a\right)}{H_{n}^{(2)}\left(k_{0} a\right)-\Gamma_{h} H_{n}^{(2)^{\prime}}\left(k_{0} a\right)},&lt;br /&gt;
\end{equation}&lt;br /&gt;
where&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\Gamma_{h}=\frac{k_{0}}{k_{1}} \frac{J_{n}\left(k_{1} a\right)}{J_{n}^{\prime}\left(k_{1} a\right)}&lt;br /&gt;
\end{equation}&lt;br /&gt;
and &lt;br /&gt;
\begin{equation}&lt;br /&gt;
b_n = \frac{J_n(k_0a)}{J_n(k_1a)} + a_n \frac{H_n^{(2)}(k_0a)}{J_n(k_1a)}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
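The coefficient formulas above translate directly into code. The following is a self-contained sketch (helper names are ours, not from the page; it assumes a lossless dielectric so that $k_0$ and $k_1$ are real, and uses the C++17 cylindrical Bessel functions available in GCC's libstdc++):

```cpp
#include <cmath>
#include <complex>
#include <cassert>

using cplx = std::complex<double>;

double Jn(int n, double x) {                  // J_n for any integer order
    double v = std::cyl_bessel_j(std::abs(n), x);
    return (n < 0 && n % 2 != 0) ? -v : v;    // J_{-n} = (-1)^n J_n
}
double Yn(int n, double x) {                  // Y_n for any integer order
    double v = std::cyl_neumann(std::abs(n), x);
    return (n < 0 && n % 2 != 0) ? -v : v;
}
cplx H2(int n, double x) { return {Jn(n, x), -Yn(n, x)}; }  // H_n^(2) = J_n - i Y_n
double dJn(int n, double x) { return 0.5 * (Jn(n - 1, x) - Jn(n + 1, x)); }
cplx dH2(int n, double x) { return 0.5 * (H2(n - 1, x) - H2(n + 1, x)); }

// Expansion coefficients a_n, b_n for a dielectric cylinder of radius a.
void mie_coeffs(int n, double k0, double k1, double a, cplx& an, cplx& bn) {
    double gamma = (k0 / k1) * Jn(n, k1 * a) / dJn(n, k1 * a);  // Gamma_h
    an = -(Jn(n, k0 * a) - gamma * dJn(n, k0 * a)) /
          (H2(n, k0 * a) - gamma * dH2(n, k0 * a));
    bn = Jn(n, k0 * a) / Jn(n, k1 * a) + an * H2(n, k0 * a) / Jn(n, k1 * a);
}
```

Summing the series above with these coefficients gives the reference fields used to benchmark the numerical solution.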
&lt;br /&gt;
==Results==&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_1.png|400px]][[File:FDFD_2D_dielcyl_ImEn_1.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_2.png|400px]][[File:FDFD_2D_dielcyl_ImEn_2.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_3.png|400px]][[File:FDFD_2D_dielcyl_ImEn_3.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_4.png|400px]][[File:FDFD_2D_dielcyl_ImEn_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_5.png|400px]][[File:FDFD_2D_dielcyl_ImEn_5.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_benchmark_times_basic.png|800px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dx_conv2.png|800px]]&lt;br /&gt;
==References==&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=Scattering_from_an_infinite_cylinder&amp;diff=3685</id>
		<title>Scattering from an infinite cylinder</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=Scattering_from_an_infinite_cylinder&amp;diff=3685"/>
				<updated>2026-01-22T16:24:10Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: Fix the boundary condition arising from H fields (no epsilon there) and the incident wave form. Add solution reference.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Computational electromagnetics]].&lt;br /&gt;
&lt;br /&gt;
==Case==&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\newcommand{\eps}{\varepsilon}&lt;br /&gt;
\newcommand{\dpar}[2]{\frac{\partial #1}{\partial #2}}&lt;br /&gt;
\newcommand{\ddpar}[2]{\frac{\partial^2 #1}{\partial #2^2}}&lt;br /&gt;
\def\doubleunderline#1{\underline{\underline{#1}}}&amp;lt;/math&amp;gt;&lt;br /&gt;
As a simple two-dimensional case, let us consider a plane wave scattering on a dielectric cylinder. This case is important for two reasons. Firstly, it shows how to ''encode'' the incident wave into the boundary condition between the cylinder and free space, so that only the scattered field needs to be computed outside; this technique of representing sources with known analytical solutions is especially useful when dealing with point sources. Secondly, the problem has a relatively simple analytical solution in terms of Hankel functions and can therefore be used as a benchmark case. The problem is defined in two domains, inside and outside the cylinder, where we solve the following system of two PDEs, coupled on the interior boundary&lt;br /&gt;
\begin{align}&lt;br /&gt;
\nabla^2 E_z^{int} +\eps_r \frac{\omega^2}{c_0^2}\thinspace E_z^{int} = 0 \qquad &amp;amp;\text{in} \quad D \label{eq:inner} \\&lt;br /&gt;
\nabla^2 E_z^s + \frac{\omega^2}{c_0^2}E_z^s  = 0 \qquad &amp;amp; \text{outside} \quad D&lt;br /&gt;
\label{eq:outer}&lt;br /&gt;
\end{align}&lt;br /&gt;
The field $E_z^{int}$ is the total field inside the cylinder, while $E_z^s$ is only the scattered part of $E_z$ outside it. The boundary conditions are&lt;br /&gt;
\begin{equation}&lt;br /&gt;
E_z^{int} - E_z^{s} =E_z^{inc}  \qquad \text{on} \quad \partial D \label{eq:BC1}, \qquad \text{and} \qquad&lt;br /&gt;
\dpar{E_z^{int}}{n} - \frac{1}{\eps_r}\dpar{E_z^{s}}{n} = \dpar{E_z^{inc}}{n}  \qquad \text{on} \quad  \partial D.&lt;br /&gt;
\end{equation}&lt;br /&gt;
The incident field $E_z^{inc}$ has an analytical form, in our case a plane wave $e^{ikx}$. Since it has a closed form, its derivatives are known exactly and the boundary conditions can be evaluated directly. Around the computational domain we use a PML (perfectly matched layer) to simulate an infinite domain.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; line&amp;gt;&lt;br /&gt;
	// Construct system&lt;br /&gt;
double l_in = epsr*std::pow(omega/c0, 2.0);&lt;br /&gt;
double l_out = std::pow(omega/c0, 2.0);&lt;br /&gt;
for (int i : cyl_dom.interior()){&lt;br /&gt;
	op_inner.lap(i) + l_in*op_inner.value(i) = 0.0;&lt;br /&gt;
}&lt;br /&gt;
for (int i : outer_inter){&lt;br /&gt;
	op_outer.lap(i) + l_out*op_outer.value(i) = 0.0;&lt;br /&gt;
}&lt;br /&gt;
for (int c = 0; c &amp;lt; interface_c_idx.size(); ++c){&lt;br /&gt;
	int i = interface_c_idx[c]; // cyl index&lt;br /&gt;
	int j = interface_o_idx[c]; // outer index&lt;br /&gt;
	Vec2d pos = cyl_dom.pos(i);&lt;br /&gt;
	double x = pos[0];&lt;br /&gt;
	double y = pos[1];&lt;br /&gt;
&lt;br /&gt;
	// calculate normal derivative of the source function at the i-th node&lt;br /&gt;
	// get normal, must point out of the cylinder&lt;br /&gt;
	Vec2d normal = cyl_dom.normal(i);&lt;br /&gt;
	std::complex&amp;lt;double&amp;gt; incident = std::exp(1.0_i*k0*(x*std::cos(t0)+y*std::sin(t0)));&lt;br /&gt;
	std::complex&amp;lt;double&amp;gt; din_dx = &lt;br /&gt;
            1.0_i*k0*std::cos(t0)*std::exp(1.0_i*k0*(x*std::cos(t0)+y*std::sin(t0)));&lt;br /&gt;
	std::complex&amp;lt;double&amp;gt; din_dy = &lt;br /&gt;
            1.0_i*k0*std::sin(t0)*std::exp(1.0_i*k0*(x*std::cos(t0)+y*std::sin(t0)));&lt;br /&gt;
	std::complex&amp;lt;double&amp;gt; dui_dn = normal[0]*din_dx + normal[1]*din_dy;&lt;br /&gt;
&lt;br /&gt;
	// continuity of the fields, and the incident field&lt;br /&gt;
	op_inner.value(i) + (-1)*op_outer.value(j, Nouter + i) = incident;&lt;br /&gt;
&lt;br /&gt;
	// continuity of derivatives&lt;br /&gt;
	op_outer.neumann(j, outer_dom.normal(j)) &lt;br /&gt;
              + (1/epsr)*op_inner.neumann(i, cyl_dom.normal(i), j - Nouter)&lt;br /&gt;
			= dui_dn; // todo, change this from Neumanns to derivatives!&lt;br /&gt;
}&lt;br /&gt;
// SC-PML - where either x or y is constant&lt;br /&gt;
for (int i : PML){&lt;br /&gt;
	1.0/(sw[i]*sw[i])*op_outer.lap(i) + l_out*op_outer.value(i) +&lt;br /&gt;
	((-1.0)/(sw[i]*sw[i]*sw[i])*exop.d1(sw, 0, i)*op_outer.der1(i, 0)&lt;br /&gt;
		+(-1.0)/(sw[i]*sw[i]*sw[i])*exop.d1(sw, 1, i)*op_outer.der1(i, 1)) = 0.0;&lt;br /&gt;
}&lt;br /&gt;
for (int i: outer_bnd){&lt;br /&gt;
	op_outer.value(i) = 0.0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Analytical solution==&lt;br /&gt;
The solution to this problem is sought in terms of sums of cylindrical harmonics. The incident wave is a plane wave $\mathbf{E}_i = \hat e_z E_0 e^{-ik_0 x}$ and can be decomposed into Bessel functions of the first kind as&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\mathbf{E}_{i}=\widehat{z} E_{0} e^{-i k_{0} x}= \widehat{z} E_{0} \sum_{n=-\infty}^{\infty} i^{-n} J_{n}\left(k_{0} \rho\right) e^{i n \phi}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
The scattered field is a sum of Hankel functions of the second kind, so that it satisfies the Sommerfeld radiation condition&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\mathbf{E}^{s}=\widehat{z} E_{0} \sum_{n=-\infty}^{\infty} i^{-n} a_{n} H_{n}^{(2)}\left(k_{0} \rho\right) e^{i n \phi}, \quad \rho \geq a&lt;br /&gt;
\end{equation}&lt;br /&gt;
and finally the total field is again a sum of Bessel functions of the first kind&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\mathbf{E}^{t}=\widehat{z} E_{0} \sum_{n=-\infty}^{\infty} i^{-n} b_{n} J_{n}\left(k_{1} \rho\right) e^{i n \phi}, \quad \rho \leq a.&lt;br /&gt;
\end{equation}&lt;br /&gt;
The boundary conditions for the field and its derivative on the interface between the cylinder and the surrounding free space give two expressions&lt;br /&gt;
\begin{equation}&lt;br /&gt;
J_{n}\left(k_{0} a\right)+a_{n} H_{n}^{(2)}\left(k_{0} a\right)=b_{n} J_{n}\left(k_{1} a\right)&lt;br /&gt;
\end{equation}&lt;br /&gt;
and&lt;br /&gt;
\begin{equation}&lt;br /&gt;
k_0 J^{\prime}_{n}\left(k_{0} a\right)+ k_0 a_{n} H_{n}^{\prime(2)}\left(k_{0} a\right)=k_1 b_{n} J^{\prime}_{n}\left(k_{1} a\right).&lt;br /&gt;
\end{equation}&lt;br /&gt;
These give the coefficients $a_n$ and $b_n$ in the above expansions as&lt;br /&gt;
\begin{equation}&lt;br /&gt;
a_{n}=-\frac{J_{n}\left(k_{0} a\right)-\Gamma_{h} J_{n}^{\prime}\left(k_{0} a\right)}{H_{n}^{(2)}\left(k_{0} a\right)-\Gamma_{h} H_{n}^{(2)^{\prime}}\left(k_{0} a\right)},&lt;br /&gt;
\end{equation}&lt;br /&gt;
where&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\Gamma_{h}=\frac{k_{0}}{k_{1}} \frac{J_{n}\left(k_{1} a\right)}{J_{n}^{\prime}\left(k_{1} a\right)}&lt;br /&gt;
\end{equation}&lt;br /&gt;
and &lt;br /&gt;
\begin{equation}&lt;br /&gt;
b_n = \frac{J_n(k_0a)}{J_n(k_1a)} + a_n \frac{H_n^{(2)}(k_0a)}{J_n(k_1a)}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_1.png|400px]][[File:FDFD_2D_dielcyl_ImEn_1.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_2.png|400px]][[File:FDFD_2D_dielcyl_ImEn_2.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_3.png|400px]][[File:FDFD_2D_dielcyl_ImEn_3.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_4.png|400px]][[File:FDFD_2D_dielcyl_ImEn_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_5.png|400px]][[File:FDFD_2D_dielcyl_ImEn_5.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_benchmark_times_basic.png|800px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dx_conv2.png|800px]]&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=Scattering_from_an_infinite_cylinder&amp;diff=3684</id>
		<title>Scattering from an infinite cylinder</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=Scattering_from_an_infinite_cylinder&amp;diff=3684"/>
				<updated>2026-01-22T16:22:54Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Back to [[Computational electromagnetics]].&lt;br /&gt;
&lt;br /&gt;
==Case==&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\newcommand{\eps}{\varepsilon}&lt;br /&gt;
\newcommand{\dpar}[2]{\frac{\partial #1}{\partial #2}}&lt;br /&gt;
\newcommand{\ddpar}[2]{\frac{\partial^2 #1}{\partial #2^2}}&lt;br /&gt;
\def\doubleunderline#1{\underline{\underline{#1}}}&amp;lt;/math&amp;gt;&lt;br /&gt;
As a simple two-dimensional case, let us consider a plane wave scattering on a dielectric cylinder. This case is important for two reasons. Firstly, it shows how to ''encode'' the incident wave into the boundary condition between the cylinder and free space, so that only the scattered field needs to be computed outside; this technique of representing sources with known analytical solutions is especially useful when dealing with point sources. Secondly, the problem has a relatively simple analytical solution in terms of Hankel functions and can therefore be used as a benchmark case. The problem is defined in two domains, inside and outside the cylinder, where we solve the following system of two PDEs, coupled on the interior boundary&lt;br /&gt;
\begin{align}&lt;br /&gt;
\nabla^2 E_z^{int} +\eps_r \frac{\omega^2}{c_0^2}\thinspace E_z^{int} = 0 \qquad &amp;amp;\text{in} \quad D \label{eq:inner} \\&lt;br /&gt;
\nabla^2 E_z^s + \frac{\omega^2}{c_0^2}E_z^s  = 0 \qquad &amp;amp; \text{outside} \quad D&lt;br /&gt;
\label{eq:outer}&lt;br /&gt;
\end{align}&lt;br /&gt;
The field $E_z^{int}$ is the total field inside the cylinder, while $E_z^s$ is only the scattered part of $E_z$ outside it. The boundary conditions are&lt;br /&gt;
\begin{equation}&lt;br /&gt;
E_z^{int} - E_z^{s} =E_z^{inc}  \qquad \text{on} \quad \partial D \label{eq:BC1}, \qquad \text{and} \qquad&lt;br /&gt;
\dpar{E_z^{int}}{n} - \dpar{E_z^{s}}{n} = \dpar{E_z^{inc}}{n}  \qquad \text{on} \quad  \partial D.&lt;br /&gt;
\end{equation}&lt;br /&gt;
The incident field $E_z^{inc}$ has an analytical form, in our case a plane wave $e^{-ikx}$. Since it has a closed form, its derivatives are known exactly and the boundary conditions can be evaluated directly. Around the computational domain we use a PML (perfectly matched layer) to simulate an infinite domain.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; line&amp;gt;&lt;br /&gt;
	// Construct system&lt;br /&gt;
double l_in = epsr*std::pow(omega/c0, 2.0);&lt;br /&gt;
double l_out = std::pow(omega/c0, 2.0);&lt;br /&gt;
for (int i : cyl_dom.interior()){&lt;br /&gt;
	op_inner.lap(i) + l_in*op_inner.value(i) = 0.0;&lt;br /&gt;
}&lt;br /&gt;
for (int i : outer_inter){&lt;br /&gt;
	op_outer.lap(i) + l_out*op_outer.value(i) = 0.0;&lt;br /&gt;
}&lt;br /&gt;
for (int c = 0; c &amp;lt; interface_c_idx.size(); ++c){&lt;br /&gt;
	int i = interface_c_idx[c]; // cyl index&lt;br /&gt;
	int j = interface_o_idx[c]; // outer index&lt;br /&gt;
	Vec2d pos = cyl_dom.pos(i);&lt;br /&gt;
	double x = pos[0];&lt;br /&gt;
	double y = pos[1];&lt;br /&gt;
&lt;br /&gt;
	// calculate normal derivative of the source function at the i-th node&lt;br /&gt;
	// get normal, must point out of the cylinder&lt;br /&gt;
	Vec2d normal = cyl_dom.normal(i);&lt;br /&gt;
	std::complex&amp;lt;double&amp;gt; incident = std::exp(1.0_i*k0*(x*std::cos(t0)+y*std::sin(t0)));&lt;br /&gt;
	std::complex&amp;lt;double&amp;gt; din_dx = &lt;br /&gt;
            1.0_i*k0*std::cos(t0)*std::exp(1.0_i*k0*(x*std::cos(t0)+y*std::sin(t0)));&lt;br /&gt;
	std::complex&amp;lt;double&amp;gt; din_dy = &lt;br /&gt;
            1.0_i*k0*std::sin(t0)*std::exp(1.0_i*k0*(x*std::cos(t0)+y*std::sin(t0)));&lt;br /&gt;
	std::complex&amp;lt;double&amp;gt; dui_dn = normal[0]*din_dx + normal[1]*din_dy;&lt;br /&gt;
&lt;br /&gt;
	// continuity of the fields, and the incident field&lt;br /&gt;
	op_inner.value(i) + (-1)*op_outer.value(j, Nouter + i) = incident;&lt;br /&gt;
&lt;br /&gt;
	// continuity of derivatives&lt;br /&gt;
	op_outer.neumann(j, outer_dom.normal(j)) &lt;br /&gt;
              + (1/epsr)*op_inner.neumann(i, cyl_dom.normal(i), j - Nouter)&lt;br /&gt;
			= dui_dn; // todo, change this from Neumanns to derivatives!&lt;br /&gt;
}&lt;br /&gt;
// SC-PML - where either x or y is constant&lt;br /&gt;
for (int i : PML){&lt;br /&gt;
	1.0/(sw[i]*sw[i])*op_outer.lap(i) + l_out*op_outer.value(i) +&lt;br /&gt;
	((-1.0)/(sw[i]*sw[i]*sw[i])*exop.d1(sw, 0, i)*op_outer.der1(i, 0)&lt;br /&gt;
		+(-1.0)/(sw[i]*sw[i]*sw[i])*exop.d1(sw, 1, i)*op_outer.der1(i, 1)) = 0.0;&lt;br /&gt;
}&lt;br /&gt;
for (int i: outer_bnd){&lt;br /&gt;
	op_outer.value(i) = 0.0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Analytical solution==&lt;br /&gt;
The solution to this problem is sought in terms of sums of cylindrical harmonics&amp;lt;ref name=&amp;quot;analytical solution&amp;quot;&amp;gt;M. Kerker, The Scattering of Light and Other Electromagnetic Radiation, 1969.&amp;lt;/ref&amp;gt;. The incident wave is a plane wave $\mathbf{E}_i = \hat e_z E_0 e^{-ik_0 x}$ and can be decomposed into Bessel functions of the first kind as&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\mathbf{E}_{i}=\widehat{z} E_{0} e^{-i k_{0} x}= \widehat{z} E_{0} \sum_{n=-\infty}^{\infty} i^{-n} J_{n}\left(k_{0} \rho\right) e^{i n \phi}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
The scattered field is a sum of Hankel functions of the second kind, so that it satisfies the Sommerfeld radiation condition&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\mathbf{E}^{s}=\widehat{z} E_{0} \sum_{n=-\infty}^{\infty} i^{-n} a_{n} H_{n}^{(2)}\left(k_{0} \rho\right) e^{i n \phi}, \quad \rho \geq a&lt;br /&gt;
\end{equation}&lt;br /&gt;
and finally the total field is again a sum of Bessel functions of the first kind&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\mathbf{E}^{t}=\widehat{z} E_{0} \sum_{n=-\infty}^{\infty} i^{-n} b_{n} J_{n}\left(k_{1} \rho\right) e^{i n \phi}, \quad \rho \leq a.&lt;br /&gt;
\end{equation}&lt;br /&gt;
The boundary conditions for the field and its derivative on the interface between the cylinder and the surrounding free space give two expressions&lt;br /&gt;
\begin{equation}&lt;br /&gt;
J_{n}\left(k_{0} a\right)+a_{n} H_{n}^{(2)}\left(k_{0} a\right)=b_{n} J_{n}\left(k_{1} a\right)&lt;br /&gt;
\end{equation}&lt;br /&gt;
and&lt;br /&gt;
\begin{equation}&lt;br /&gt;
k_0 J^{\prime}_{n}\left(k_{0} a\right)+ k_0 a_{n} H_{n}^{\prime(2)}\left(k_{0} a\right)=k_1 b_{n} J^{\prime}_{n}\left(k_{1} a\right).&lt;br /&gt;
\end{equation}&lt;br /&gt;
These give the coefficients $a_n$ and $b_n$ in the above expansions as&lt;br /&gt;
\begin{equation}&lt;br /&gt;
a_{n}=-\frac{J_{n}\left(k_{0} a\right)-\Gamma_{h} J_{n}^{\prime}\left(k_{0} a\right)}{H_{n}^{(2)}\left(k_{0} a\right)-\Gamma_{h} H_{n}^{(2)^{\prime}}\left(k_{0} a\right)},&lt;br /&gt;
\end{equation}&lt;br /&gt;
where&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\Gamma_{h}=\frac{k_{0}}{k_{1}} \frac{J_{n}\left(k_{1} a\right)}{J_{n}^{\prime}\left(k_{1} a\right)}&lt;br /&gt;
\end{equation}&lt;br /&gt;
and &lt;br /&gt;
\begin{equation}&lt;br /&gt;
b_n = \frac{J_n(k_0a)}{J_n(k_1a)} + a_n \frac{H_n^{(2)}(k_0a)}{J_n(k_1a)}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_1.png|400px]][[File:FDFD_2D_dielcyl_ImEn_1.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_2.png|400px]][[File:FDFD_2D_dielcyl_ImEn_2.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_3.png|400px]][[File:FDFD_2D_dielcyl_ImEn_3.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_4.png|400px]][[File:FDFD_2D_dielcyl_ImEn_4.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dielcyl_ReEn_5.png|400px]][[File:FDFD_2D_dielcyl_ImEn_5.png|400px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_benchmark_times_basic.png|800px]]&lt;br /&gt;
&lt;br /&gt;
[[File:FDFD_2D_dx_conv2.png|800px]]&lt;br /&gt;
==References==&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=How_to_build&amp;diff=3682</id>
		<title>How to build</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=How_to_build&amp;diff=3682"/>
				<updated>2025-07-19T16:59:20Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: Installation instructions now direct the user to master branch&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Installation=&lt;br /&gt;
To make this work from a plain Ubuntu installation, run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install git g++ python3 cmake libhdf5-serial-dev doxygen graphviz&lt;br /&gt;
git clone https://gitlab.com/e62Lab/medusa.git --branch master --single-branch&lt;br /&gt;
cd medusa&lt;br /&gt;
./run_tests.py -t&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
which installs the dependencies, clones the repository, moves into its root folder,&lt;br /&gt;
and builds and runs all tests. If this works, you are ready to go! Otherwise&lt;br /&gt;
install any missing packages, and if it still fails, raise an issue!&lt;br /&gt;
Note: if you are told that the packages cannot be located, try running &amp;lt;code&amp;gt;sudo apt-get update&amp;lt;/code&amp;gt; first.&lt;br /&gt;
&lt;br /&gt;
For instructions on how to use this library in your project, see&lt;br /&gt;
[[Including this library in your project]].&lt;br /&gt;
&lt;br /&gt;
=Building=&lt;br /&gt;
&lt;br /&gt;
List of dependencies:&lt;br /&gt;
&lt;br /&gt;
* Build tools, like &amp;lt;code&amp;gt; cmake &amp;gt;= 2.8.12&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt; g++ &amp;gt;= 4.8&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;make&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;python3&amp;lt;/code&amp;gt;&lt;br /&gt;
* [https://www.hdfgroup.org/ HDF5 library] for IO &lt;br /&gt;
* &amp;lt;code&amp;gt;doxygen &amp;gt;=  1.8.8 &amp;lt;/code&amp;gt; and Graphviz for generating the documentation&lt;br /&gt;
&lt;br /&gt;
Out-of-source builds are preferred. Run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir -p build&lt;br /&gt;
cd build&lt;br /&gt;
cmake ..&lt;br /&gt;
make&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that you only have to run &amp;lt;code&amp;gt;cmake&amp;lt;/code&amp;gt; once; after that, running &amp;lt;code&amp;gt;make&amp;lt;/code&amp;gt; alone is sufficient.&lt;br /&gt;
&lt;br /&gt;
Binaries are placed into the &amp;lt;code&amp;gt;bin/&amp;lt;/code&amp;gt; folder. Tests can be run all at once via &amp;lt;code&amp;gt;make medusa_run_tests&amp;lt;/code&amp;gt;&lt;br /&gt;
or individually via e.g. &amp;lt;code&amp;gt;make operators_run_tests&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Linker errors ==&lt;br /&gt;
&lt;br /&gt;
When trying out different classes, you might come across linker errors such as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
Scanning dependencies of target cantilever_beam&lt;br /&gt;
[100%] Building CXX object examples/linear_elasticity/CMakeFiles/cantilever_beam.dir/cantilever_beam.cpp.o&lt;br /&gt;
[100%] Linking CXX executable ../../../examples/linear_elasticity/cantilever_beam&lt;br /&gt;
/usr/bin/ld: CMakeFiles/cantilever_beam.dir/cantilever_beam.cpp.o: in function `main':&lt;br /&gt;
cantilever_beam.cpp:(.text.startup+0x162): undefined reference to `void mm::FindBalancedSupport::operator()&amp;lt;mm::DomainDiscretization&amp;lt;Eigen::Matrix&amp;lt;double, 2, 1, 0, 2, 1&amp;gt; &amp;gt; &amp;gt;(mm::DomainDiscretization&amp;lt;Eigen::Matrix&amp;lt;double, 2, 1, 0, 2, 1&amp;gt; &amp;gt;&amp;amp;) const'&lt;br /&gt;
collect2: error: ld returned 1 exit status&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is expected and is the result of some optimizations of compilation time. The medusa library can actually be included in two ways: as&lt;br /&gt;
&amp;lt;code&amp;gt;#include &amp;lt;medusa/Medusa_fwd.hpp&amp;gt;&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;#include &amp;lt;medusa/Medusa.hpp&amp;gt;&amp;lt;/code&amp;gt;. The first version contains the declarations of all symbols, but not all of the definitions. Some of the more commonly used template instantiations are included, but by far not all. Using a template instantiation that is not precompiled will cause your program to compile fine, but fail to link due to the missing definitions. In this case you have a few options: include the &amp;lt;i&amp;gt;full&amp;lt;/i&amp;gt; Medusa library (the header without the &amp;lt;code&amp;gt;_fwd&amp;lt;/code&amp;gt; suffix), which should just work at the cost of a longer compile time; include only the missing header (in the case above &amp;lt;code&amp;gt;medusa/bits/domains/FindBalancedSupport.hpp&amp;lt;/code&amp;gt;) and pay only for what you use; or add your instantiation to the already precompiled ones (located in &amp;lt;code&amp;gt;.cpp&amp;lt;/code&amp;gt; files, such as e.g. [https://gitlab.com/e62Lab/medusa/-/blob/dev/src/domains/DomainDiscretization.cpp this one]).&lt;br /&gt;
&lt;br /&gt;
== Building on macOS ==&lt;br /&gt;
This method was tested on macOS Mojave 10.14.2.&lt;br /&gt;
&lt;br /&gt;
First install Xcode via App Store and then Xcode Command Line Tools with&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
xcode-select --install&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, install all dependencies from Homebrew&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install cmake hdf5 boost python doxygen graphviz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can clone and build the project using the following commands&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://gitlab.com/e62Lab/medusa.git&lt;br /&gt;
cd medusa&lt;br /&gt;
mkdir build&lt;br /&gt;
cd build&lt;br /&gt;
cmake ..&lt;br /&gt;
cd ..&lt;br /&gt;
python3 run_tests.py -t&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will also run all tests. If it works, you are ready to go! Otherwise install any missing packages and if it still fails, raise an issue!&lt;br /&gt;
&lt;br /&gt;
==HDF5==&lt;br /&gt;
In order to use HDF5 IO you need the [https://www.hdfgroup.org/ HDF5 library].&lt;br /&gt;
You can install it easily using the command &amp;lt;code&amp;gt;sudo apt-get install libhdf5-dev&amp;lt;/code&amp;gt;&lt;br /&gt;
or &amp;lt;code&amp;gt;sudo pacman -S hdf5&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Ubuntu places (at least on older versions) hdf5 headers and libraries in a weird folder &lt;br /&gt;
&amp;lt;code&amp;gt;/usr/{lib, include}/x86_64-linux-gnu/hdf5/serial/&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
If you get an error like &amp;lt;code&amp;gt;HDF5 sample failed to compile.  See errors above.&amp;lt;/code&amp;gt; during &amp;lt;code&amp;gt;cmake&amp;lt;/code&amp;gt; execution&lt;br /&gt;
then the sample HDF5 test file located in &amp;lt;code&amp;gt;test/test_hdf_compile.cpp&amp;lt;/code&amp;gt; failed to compile. It may help to get this file compiling on its own&lt;br /&gt;
before tackling the whole project.&lt;br /&gt;
&lt;br /&gt;
If you get an error similar to &amp;lt;code&amp;gt;fatal error: hdf5.h: No such file or directory&amp;lt;/code&amp;gt;,&lt;br /&gt;
then your compiler cannot see the HDF5 header files. Some distributions, notably (older) Ubuntu, place them into nonstandard folders&lt;br /&gt;
&amp;lt;code&amp;gt;/usr/include/hdf5/serial/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/usr/include/x86_64-linux-gnu/hdf5/serial/&amp;lt;/code&amp;gt;.&lt;br /&gt;
Check these two folders or check your distribution's hdf5 package for the locations of these files.&lt;br /&gt;
After you have determined the location, add that directory to the include directories,&lt;br /&gt;
using the &amp;lt;code&amp;gt;-I&amp;lt;/code&amp;gt; flag or in &amp;lt;code&amp;gt;CMakeLists.txt&amp;lt;/code&amp;gt; with&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
include_directories(/usr/include/hdf5/serial/)  # or your appropriate directory&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you wish to fix this problem permanently, you can create soft links to the headers in your &amp;lt;code&amp;gt;/usr/include&amp;lt;/code&amp;gt; directory, &lt;br /&gt;
by typing&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo ln -s /usr/include/hdf5/serial/* /usr/include&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
After this, there should be no compile time errors. If there are, please raise an issue.&lt;br /&gt;
&lt;br /&gt;
If you get an error similar to &amp;lt;code&amp;gt;-lhdf5 not found&amp;lt;/code&amp;gt; and you have hdf5 installed, &lt;br /&gt;
you might have to link the libraries into a discoverable place, like &amp;lt;code&amp;gt;/usr/lib/&amp;lt;/code&amp;gt; &lt;br /&gt;
or add the above directory to the linker path. Similarly to above, check the &amp;lt;code&amp;gt;/usr/lib/x86_64-linux-gnu/hdf5/serial/&amp;lt;/code&amp;gt;&lt;br /&gt;
directory and look for file &amp;lt;code&amp;gt;libhdf5.a&amp;lt;/code&amp;gt;. When found,&lt;br /&gt;
specify the location using -L flag or &amp;lt;code&amp;gt;CMakeLists.txt&amp;lt;/code&amp;gt; by using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
link_directories(/usr/lib/x86_64-linux-gnu/hdf5/serial/)  # or your appropriate directory&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
or fix the problem permanently by soft-linking&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo ln -s /usr/lib/x86_64-linux-gnu/hdf5/serial/* /usr/lib&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== OpenMP ==&lt;br /&gt;
Sometimes, OpenMP-related CMake errors might occur, mainly due to limited multi-thread support. One can fix such issues by adding the following code to your CMakeLists.txt&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;cmake&amp;quot;&amp;gt;&lt;br /&gt;
find_package(OpenMP)&lt;br /&gt;
if (OPENMP_FOUND)&lt;br /&gt;
    set (CMAKE_C_FLAGS &amp;quot;${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}&amp;quot;)&lt;br /&gt;
    set (CMAKE_CXX_FLAGS &amp;quot;${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}&amp;quot;)&lt;br /&gt;
    set (CMAKE_EXE_LINKER_FLAGS &amp;quot;${CMAKE_EXE_LINKER_FLAGS} ${OpenMP_EXE_LINKER_FLAGS}&amp;quot;)&lt;br /&gt;
endif()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Linear Algebra==&lt;br /&gt;
We use [http://eigen.tuxfamily.org/ Eigen] as our matrix library. See&lt;br /&gt;
[http://eigen.tuxfamily.org/dox-devel/group__QuickRefPage.html here] for a usage&lt;br /&gt;
reference and documentation. For a quick transition from Matlab see&lt;br /&gt;
[http://eigen.tuxfamily.org/dox/AsciiQuickReference.txt here].&lt;br /&gt;
&lt;br /&gt;
== Using Intel Math Kernel Library (MKL) ==&lt;br /&gt;
Install [https://software.intel.com/en-us/mkl Intel MKL] and take note of the installation directories.&lt;br /&gt;
&lt;br /&gt;
Proper include and link directories need to be set to use the Intel MKL. &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;cmake&amp;quot;&amp;gt;&lt;br /&gt;
include_directories(SYSTEM /opt/intel/compilers_and_libraries/linux/mkl/include)    # change these to your installation path&lt;br /&gt;
link_directories(SYSTEM /opt/intel/compilers_and_libraries/linux/mkl/lib/intel64)&lt;br /&gt;
link_directories(SYSTEM /opt/intel/compilers_and_libraries/linux/lib/intel64)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Eigen has great support for MKL. You can see more detailed instructions [https://eigen.tuxfamily.org/dox/TopicUsingIntelMKL.html on their website].&lt;br /&gt;
To use MKL for math operations, define &amp;lt;code&amp;gt;EIGEN_USE_MKL_VML&amp;lt;/code&amp;gt; when compiling. You must also link&lt;br /&gt;
the appropriate libraries and define &amp;lt;code&amp;gt;MKL_LP64&amp;lt;/code&amp;gt; for use on a 64-bit system.&lt;br /&gt;
With parallelism enabled, the configuration looks as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;cmake&amp;quot;&amp;gt;&lt;br /&gt;
target_compile_options(my_target PRIVATE &amp;quot;-fopenmp&amp;quot;)&lt;br /&gt;
target_compile_definitions(my_target PUBLIC EIGEN_USE_MKL_VML MKL_LP64)&lt;br /&gt;
target_link_libraries(my_target medusa mkl_intel_lp64 mkl_intel_thread mkl_core pthread iomp5)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have Intel Parallel Studio installed, this also enables you to use the Pardiso parallel direct sparse solver through its [https://eigen.tuxfamily.org/dox/group__PardisoSupport__Module.html Eigen interface].&lt;br /&gt;
&lt;br /&gt;
== Using Intel C/C++ Compiler ==&lt;br /&gt;
&lt;br /&gt;
In order to use Intel's compiler when compiling Medusa, you have several standard options for &amp;lt;code&amp;gt;cmake&amp;lt;/code&amp;gt;.&lt;br /&gt;
Make sure the compilers are installed and in your &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt; by running &amp;lt;code&amp;gt;which icc&amp;lt;/code&amp;gt;, which &lt;br /&gt;
should return something like &amp;lt;code&amp;gt;/opt/intel/bin/icc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can define the compiler when ''first'' calling cmake like so&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cmake .. -DCMAKE_C_COMPILER=$(which icc) -DCMAKE_CXX_COMPILER=$(which icpc) &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If this is not your first call, remove the &amp;lt;code&amp;gt;build&amp;lt;/code&amp;gt; directory and start anew.&lt;br /&gt;
&lt;br /&gt;
You can also set the &amp;lt;code&amp;gt;CXX&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;CC&amp;lt;/code&amp;gt; environment variables. Before calling&lt;br /&gt;
&amp;lt;code&amp;gt; cmake &amp;lt;/code&amp;gt; for the first time you have to export the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export CXX=&amp;quot;icpc&amp;quot;&lt;br /&gt;
export CC=&amp;quot;icc&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
You can also compile it directly for Intel® Xeon Phi™ Coprocessor. You do this by adding &amp;lt;code&amp;gt;-Dmmic=ON&amp;lt;/code&amp;gt;&lt;br /&gt;
flag to the &amp;lt;code&amp;gt;cmake&amp;lt;/code&amp;gt; command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cmake .. -Dmmic=ON -DCMAKE_C_COMPILER=$(which icc) -DCMAKE_CXX_COMPILER=$(which icpc) &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; All features that depend on system third-party libraries are not available on MIC (Many Integrated Core).&lt;br /&gt;
This includes:&lt;br /&gt;
&lt;br /&gt;
* HDF class in &amp;lt;code&amp;gt;io.hpp&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=How_to_build&amp;diff=3552</id>
		<title>How to build</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=How_to_build&amp;diff=3552"/>
				<updated>2023-06-13T11:16:47Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: Temporarily changed the branch in installation instructions to &amp;quot;dev&amp;quot;, until we sort out the master branch again.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Installation=&lt;br /&gt;
To make this work from plain Ubuntu installation, run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install git g++ python3 cmake libhdf5-serial-dev doxygen graphviz&lt;br /&gt;
git clone https://gitlab.com/e62Lab/medusa.git --branch dev --single-branch&lt;br /&gt;
cd medusa&lt;br /&gt;
./run_tests.py -t&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
which installs dependencies, clones the repository, goes into the root folder of&lt;br /&gt;
the repository and runs tests.  This will build and run all tests. If this&lt;br /&gt;
works, you are ready to go! Otherwise install any missing packages and if it&lt;br /&gt;
still fails, raise an issue!&lt;br /&gt;
Note: If you are told the packages cannot be located, try running &amp;lt;code&amp;gt;sudo apt-get update&amp;lt;/code&amp;gt; first.&lt;br /&gt;
&lt;br /&gt;
For instructions on how to use this library in your project, see&lt;br /&gt;
[[Including this library in your project]].&lt;br /&gt;
&lt;br /&gt;
=Building=&lt;br /&gt;
&lt;br /&gt;
List of dependencies:&lt;br /&gt;
&lt;br /&gt;
* Build tools, like &amp;lt;code&amp;gt; cmake &amp;gt;= 2.8.12&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt; g++ &amp;gt;= 4.8&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;make&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;python3&amp;lt;/code&amp;gt;&lt;br /&gt;
* [https://www.hdfgroup.org/ HDF5 library] for IO &lt;br /&gt;
* &amp;lt;code&amp;gt;doxygen &amp;gt;=  1.8.8 &amp;lt;/code&amp;gt; and Graphviz for generating the documentation&lt;br /&gt;
&lt;br /&gt;
Out of source builds are preferred. Run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir -p build&lt;br /&gt;
cd build&lt;br /&gt;
cmake ..&lt;br /&gt;
make&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that you only have to run &amp;lt;code&amp;gt;cmake&amp;lt;/code&amp;gt; once, after that only &amp;lt;code&amp;gt;make&amp;lt;/code&amp;gt; is sufficient.&lt;br /&gt;
&lt;br /&gt;
Binaries are placed into &amp;lt;code&amp;gt;bin/&amp;lt;/code&amp;gt; folder. Tests can be run all at once via &amp;lt;code&amp;gt;make medusa_run_tests&amp;lt;/code&amp;gt; &lt;br /&gt;
or individually via e.g. &amp;lt;code&amp;gt;make operators_run_tests&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Linker errors ==&lt;br /&gt;
&lt;br /&gt;
When trying out different classes, you might come along linker errors such as &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
Scanning dependencies of target cantilever_beam&lt;br /&gt;
[100%] Building CXX object examples/linear_elasticity/CMakeFiles/cantilever_beam.dir/cantilever_beam.cpp.o&lt;br /&gt;
[100%] Linking CXX executable ../../../examples/linear_elasticity/cantilever_beam&lt;br /&gt;
/usr/bin/ld: CMakeFiles/cantilever_beam.dir/cantilever_beam.cpp.o: in function `main':&lt;br /&gt;
cantilever_beam.cpp:(.text.startup+0x162): undefined reference to `void mm::FindBalancedSupport::operator()&amp;lt;mm::DomainDiscretization&amp;lt;Eigen::Matrix&amp;lt;double, 2, 1, 0, 2, 1&amp;gt; &amp;gt; &amp;gt;(mm::DomainDiscretization&amp;lt;Eigen::Matrix&amp;lt;double, 2, 1, 0, 2, 1&amp;gt; &amp;gt;&amp;amp;) const'&lt;br /&gt;
collect2: error: ld returned 1 exit status&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is expected and is a result of optimizations that reduce compilation time. The Medusa library can be included in two ways: as&lt;br /&gt;
&amp;lt;code&amp;gt;#include &amp;lt;medusa/Medusa_fwd.hpp&amp;gt;&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;#include &amp;lt;medusa/Medusa.hpp&amp;gt;&amp;lt;/code&amp;gt;. The first version contains the declarations of all symbols, but not all of the definitions. Some of the more commonly used template instantiations are precompiled, but by far not all. Using a template instantiation that is not precompiled will cause your program to compile fine, but fail to link due to the missing definitions. In this case you have a few options: include the &amp;lt;i&amp;gt;full&amp;lt;/i&amp;gt; Medusa library (the header without the &amp;lt;code&amp;gt;_fwd&amp;lt;/code&amp;gt; suffix), which should just work at the cost of longer compile times; include only the missing header (in the case above &amp;lt;code&amp;gt;medusa/bits/domains/FindBalancedSupport.hpp&amp;lt;/code&amp;gt;) and pay for what you use; or add your instantiation among the already precompiled instantiations (located in &amp;lt;code&amp;gt;.cpp&amp;lt;/code&amp;gt; files, such as e.g. [https://gitlab.com/e62Lab/medusa/-/blob/dev/src/domains/DomainDiscretization.cpp this one]).&lt;br /&gt;
&lt;br /&gt;
== Building on macOS ==&lt;br /&gt;
This method was tested on macOS Mojave 10.14.2.&lt;br /&gt;
&lt;br /&gt;
First install Xcode via App Store and then Xcode Command Line Tools with&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
xcode-select --install&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, install all dependencies from homebrew&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install cmake hdf5 boost python doxygen graphviz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can clone and build the project using the following commands&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://gitlab.com/e62Lab/medusa.git&lt;br /&gt;
cd medusa&lt;br /&gt;
mkdir build&lt;br /&gt;
cd build&lt;br /&gt;
cmake ..&lt;br /&gt;
cd ..&lt;br /&gt;
python3 run_tests.py -t&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will also run all tests. If it works, you are ready to go! Otherwise install any missing packages and if it still fails, raise an issue!&lt;br /&gt;
&lt;br /&gt;
==HDF5==&lt;br /&gt;
In order to use HDF5 IO you need the [https://www.hdfgroup.org/ HDF5 library].&lt;br /&gt;
You can install it easily using the command &amp;lt;code&amp;gt;sudo apt-get install libhdf5-dev&amp;lt;/code&amp;gt;&lt;br /&gt;
or &amp;lt;code&amp;gt;sudo pacman -S hdf5&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Ubuntu places (at least on older versions) hdf5 headers and libraries in a weird folder &lt;br /&gt;
&amp;lt;code&amp;gt;/usr/{lib, include}/x86_64-linux-gnu/hdf5/serial/&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
If you get an error like &amp;lt;code&amp;gt;HDF5 sample failed to compile.  See errors above.&amp;lt;/code&amp;gt; during &amp;lt;code&amp;gt;cmake&amp;lt;/code&amp;gt; execution&lt;br /&gt;
then the sample HDF5 test file located in &amp;lt;code&amp;gt;test/test_hdf_compile.cpp&amp;lt;/code&amp;gt; failed to compile. It may help to get this file compiling on its own&lt;br /&gt;
before tackling the whole project.&lt;br /&gt;
&lt;br /&gt;
If you get an error similar to &amp;lt;code&amp;gt;fatal error: hdf5.h: No such file or directory&amp;lt;/code&amp;gt;,&lt;br /&gt;
then your compiler cannot see the HDF5 header files. Some distributions, notably (older) Ubuntu, place them into nonstandard folders&lt;br /&gt;
&amp;lt;code&amp;gt;/usr/include/hdf5/serial/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/usr/include/x86_64-linux-gnu/hdf5/serial/&amp;lt;/code&amp;gt;.&lt;br /&gt;
Check these two folders or check your distribution's hdf5 package for the locations of these files.&lt;br /&gt;
After you have determined the location, add that directory to the include directories,&lt;br /&gt;
using the &amp;lt;code&amp;gt;-I&amp;lt;/code&amp;gt; flag or in &amp;lt;code&amp;gt;CMakeLists.txt&amp;lt;/code&amp;gt; with&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
include_directories(/usr/include/hdf5/serial/)  # or your appropriate directory&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you wish to fix this problem permanently, you can create soft links to the headers in your &amp;lt;code&amp;gt;/usr/include&amp;lt;/code&amp;gt; directory, &lt;br /&gt;
by typing&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo ln -s /usr/include/hdf5/serial/* /usr/include&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
After this, there should be no compile time errors. If there are, please raise an issue.&lt;br /&gt;
&lt;br /&gt;
If you get an error similar to &amp;lt;code&amp;gt;-lhdf5 not found&amp;lt;/code&amp;gt; and you have hdf5 installed, &lt;br /&gt;
you might have to link the libraries into a discoverable place, like &amp;lt;code&amp;gt;/usr/lib/&amp;lt;/code&amp;gt; &lt;br /&gt;
or add the above directory to the linker path. Similarly to above, check the &amp;lt;code&amp;gt;/usr/lib/x86_64-linux-gnu/hdf5/serial/&amp;lt;/code&amp;gt;&lt;br /&gt;
directory and look for file &amp;lt;code&amp;gt;libhdf5.a&amp;lt;/code&amp;gt;. When found,&lt;br /&gt;
specify the location using -L flag or &amp;lt;code&amp;gt;CMakeLists.txt&amp;lt;/code&amp;gt; by using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
link_directories(/usr/lib/x86_64-linux-gnu/hdf5/serial/)  # or your appropriate directory&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
or fix the problem permanently by soft-linking&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo ln -s /usr/lib/x86_64-linux-gnu/hdf5/serial/* /usr/lib&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Linear Algebra==&lt;br /&gt;
We use [http://eigen.tuxfamily.org/ Eigen] as our matrix library. See&lt;br /&gt;
[http://eigen.tuxfamily.org/dox-devel/group__QuickRefPage.html here] for a usage&lt;br /&gt;
reference and documentation. For a quick transition from Matlab see&lt;br /&gt;
[http://eigen.tuxfamily.org/dox/AsciiQuickReference.txt here].&lt;br /&gt;
&lt;br /&gt;
== Using Intel Math Kernel Library (MKL) ==&lt;br /&gt;
Install [https://software.intel.com/en-us/mkl Intel MKL] and take note of the installation directories.&lt;br /&gt;
&lt;br /&gt;
Proper include and link directories need to be set to use the Intel MKL. &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;cmake&amp;quot;&amp;gt;&lt;br /&gt;
include_directories(SYSTEM /opt/intel/compilers_and_libraries/linux/mkl/include)    # change these to your installation path&lt;br /&gt;
link_directories(SYSTEM /opt/intel/compilers_and_libraries/linux/mkl/lib/intel64)&lt;br /&gt;
link_directories(SYSTEM /opt/intel/compilers_and_libraries/linux/lib/intel64)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Eigen has great support for MKL. You can see more detailed instructions [https://eigen.tuxfamily.org/dox/TopicUsingIntelMKL.html on their website].&lt;br /&gt;
To use MKL for math operations, define &amp;lt;code&amp;gt;EIGEN_USE_MKL_VML&amp;lt;/code&amp;gt; when compiling. You must also link&lt;br /&gt;
the appropriate libraries and define &amp;lt;code&amp;gt;MKL_LP64&amp;lt;/code&amp;gt; for use on a 64-bit system.&lt;br /&gt;
With parallelism enabled, the configuration looks as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;cmake&amp;quot;&amp;gt;&lt;br /&gt;
target_compile_options(my_target PRIVATE &amp;quot;-fopenmp&amp;quot;)&lt;br /&gt;
target_compile_definitions(my_target PUBLIC EIGEN_USE_MKL_VML MKL_LP64)&lt;br /&gt;
target_link_libraries(my_target medusa mkl_intel_lp64 mkl_intel_thread mkl_core pthread iomp5)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have Intel Parallel Studio installed, this also enables you to use the Pardiso parallel direct sparse solver through its [https://eigen.tuxfamily.org/dox/group__PardisoSupport__Module.html Eigen interface].&lt;br /&gt;
&lt;br /&gt;
== Using Intel C/C++ Compiler ==&lt;br /&gt;
&lt;br /&gt;
In order to use Intel's compiler when compiling Medusa, you have several standard options for &amp;lt;code&amp;gt;cmake&amp;lt;/code&amp;gt;.&lt;br /&gt;
Make sure the compilers are installed and in your &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt; by running &amp;lt;code&amp;gt;which icc&amp;lt;/code&amp;gt;, which &lt;br /&gt;
should return something like &amp;lt;code&amp;gt;/opt/intel/bin/icc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can define the compiler when ''first'' calling cmake like so&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cmake .. -DCMAKE_C_COMPILER=$(which icc) -DCMAKE_CXX_COMPILER=$(which icpc) &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If this is not your first call, remove the &amp;lt;code&amp;gt;build&amp;lt;/code&amp;gt; directory and start anew.&lt;br /&gt;
&lt;br /&gt;
You can also set the &amp;lt;code&amp;gt;CXX&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;CC&amp;lt;/code&amp;gt; environment variables. Before calling&lt;br /&gt;
&amp;lt;code&amp;gt; cmake &amp;lt;/code&amp;gt; for the first time you have to export the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export CXX=&amp;quot;icpc&amp;quot;&lt;br /&gt;
export CC=&amp;quot;icc&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
You can also compile it directly for Intel® Xeon Phi™ Coprocessor. You do this by adding &amp;lt;code&amp;gt;-Dmmic=ON&amp;lt;/code&amp;gt;&lt;br /&gt;
flag to the &amp;lt;code&amp;gt;cmake&amp;lt;/code&amp;gt; command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cmake .. -Dmmic=ON -DCMAKE_C_COMPILER=$(which icc) -DCMAKE_CXX_COMPILER=$(which icpc) &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; All features that depend on system third-party libraries are not available on MIC (Many Integrated Core).&lt;br /&gt;
This includes:&lt;br /&gt;
&lt;br /&gt;
* HDF class in &amp;lt;code&amp;gt;io.hpp&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=Computation_of_shape_functions&amp;diff=3201</id>
		<title>Computation of shape functions</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=Computation_of_shape_functions&amp;diff=3201"/>
				<updated>2022-06-16T12:58:39Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: Typos&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Shape functions, often called stencil weights, are functionals that approximate linear partial differential operators at a chosen point. &lt;br /&gt;
A shape function $\b{\chi}_{\mathcal{L}} (\vec{p})$ for a linear partial differential operator $\mathcal{L}$&lt;br /&gt;
at a point $\vec{p}$ is a vector of stencil weights, such that&lt;br /&gt;
\[&lt;br /&gt;
(\mathcal{L} u)(\vec{p}) \approx \b{\chi}_{\mathcal{L}}(\vec{p})^\T \b{u},&lt;br /&gt;
\]&lt;br /&gt;
where $\b{u}$ is a vector of function values $u(x_i)$ evaluated in the support of $\vec{p}$.&lt;br /&gt;
For usage examples see the section on [[Meshless Local Strong Form Method (MLSM)]] or some [[ Poisson's equation | examples ]].&lt;br /&gt;
&lt;br /&gt;
For computation of shape functions augmented with monomials, see the [[ Radial basis function-generated finite differences (RBF-FD) ]].&lt;br /&gt;
&lt;br /&gt;
== Derivation ==&lt;br /&gt;
Let $\mathcal{L}$ be a linear partial differential operator.&lt;br /&gt;
We wish to approximate $(\mathcal{L}u)(\vec{p})$ using only values of $u$ in the support of $\vec{p}$. To do so, we construct a &lt;br /&gt;
WLS approximation (see [[Weighted Least Squares (WLS)]]) and compute the coefficients $\b{\alpha}$, such that $\hat{u}(\vec{p}) = \b{b}(\vec{p})^\T \b{\alpha}$.&lt;br /&gt;
The coefficients are computed by solving the least-squares system $WB \b{\alpha} = W\b{u}$ (see [[Solving overdetermined systems]]),&lt;br /&gt;
writing&lt;br /&gt;
\[ \b{\alpha} = (B^\T W^2 B)^{-1} B^\T W^2 \b{u}. \]&lt;br /&gt;
Writing down the approximation function $\hat{u}$ we get&lt;br /&gt;
\[&lt;br /&gt;
\hat u (\vec{p}) = \b{b}(\vec{p})^\T \b{\alpha} = \b{b}(\vec{p})^\T (WB)^+W\b{u} &lt;br /&gt;
\]&lt;br /&gt;
and applying operator $\mathcal{L}$ we get &lt;br /&gt;
\[&lt;br /&gt;
    \mathcal{L} \hat u (\vec{p}) = (\mathcal{L}\b{b})(\vec{p})^\T \b{\alpha} = &lt;br /&gt;
(\mathcal{L}\b{b})(\vec{p})^\T (B^\T W^2 B)^{-1} B^\T W^2 \b{u} &lt;br /&gt;
= \b{\chi}_{\mathcal{L}}(\vec{p})^\T \b{u}&lt;br /&gt;
\]&lt;br /&gt;
&lt;br /&gt;
Depending on what data we have available, we can either compute $\b \alpha$ and obtain a continuous approximation $\hat{u}$,&lt;br /&gt;
to which $\mathcal{L}$ can be applied at any point, or, if function values $\b u$ are unknown, we can choose to compute&lt;br /&gt;
$\b \chi_{\mathcal{L}}$ at a point $\vec p$ and approximate the operator for ''any'' set of function values.&lt;br /&gt;
The operator $\mathcal{L}$ at $\vec p$ is then approximated as&lt;br /&gt;
\[&lt;br /&gt;
  (\mathcal{L}u)(\vec{p}) \approx (\mathcal{L} \hat{u})(p) = \b{\chi}_{\mathcal{L}}(\vec{p})^\T \b{u}.&lt;br /&gt;
\]&lt;br /&gt;
Vector $\b{\chi}$ is a vector of stencil weights, also called a ''shape function''. The name comes historically from FEM, and it incorporates the idea of being able to take all the information&lt;br /&gt;
about shape of the local domain and choice of approximation and store it in a single vector, being able to approximate&lt;br /&gt;
a function value from given support values $\b{u}$ with a single dot product. For any values $\b{u}$, the value $\b{\chi}(\vec{p})^\T \b{u}$&lt;br /&gt;
gives us the approximation of $(\mathcal{L}u)(\vec{p})$.&lt;br /&gt;
Mathematically speaking, $\b{\chi}(\vec{p})$ is the Riesz representation of a functional $\R^n \to \R$, mapping $n$-tuples of known function values to&lt;br /&gt;
approximations of $(\mathcal{L}u)(\vec{p})$.&lt;br /&gt;
&lt;br /&gt;
For example, take a $1$-dimensional case for approximation of derivatives with weight equal to $1$ and $n=m=3$, with equally spaced support values at distances $h$.&lt;br /&gt;
We wish to approximate $u''$ in the middle support point, just by making a weighted sum of the values, something like the finite difference&lt;br /&gt;
\[ u'' \approx \frac{u_1 - 2u_2 + u_3}{h^2}. \]&lt;br /&gt;
This is exactly the same formula as we would have arrived at by computing $\b{\chi}$, except that our approach is far more general. One should think about&lt;br /&gt;
$\b{\chi}$ as one would about a finite difference scheme: it is a rule telling us how to compute the derivative.&lt;br /&gt;
\[ u''(s_2) \approx \underbrace{\begin{bmatrix} \frac{1}{h^2} &amp;amp; \frac{-2}{h^2} &amp;amp; \frac{1}{h^2} \end{bmatrix}}_{\b{\chi}^\T} \begin{bmatrix}u_1 \\ u_2 \\ u_3 \end{bmatrix}  \]&lt;br /&gt;
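This stencil can be reproduced numerically by imposing exactness on the monomial basis $1, x, x^2$ and solving for the weights. A minimal sketch in Python with NumPy (the spacing $h$ below is an arbitrary illustrative choice):

```python
import numpy as np

h = 0.1
s = np.array([-h, 0.0, h])        # support points s_1, s_2, s_3

# Monomial basis b_j(x) = x^(j-1); rows of B^T are b_j evaluated at s_i
B_T = np.vander(s, 3, increasing=True).T

# Right-hand side: (L b_j)(p) for L = d^2/dx^2 at p = 0, i.e. (0, 0, 2)
Lb = np.array([0.0, 0.0, 2.0])

chi = np.linalg.solve(B_T, Lb)    # stencil weights
print(chi * h**2)                 # -> [ 1. -2.  1.]
```

Scaled by $h^2$, the computed weights are exactly the classical central difference stencil $(1, -2, 1)$.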
&lt;br /&gt;
The fact that $\b{\chi}$ is independent of the function values $\b{u}$ but depends only on domain geometry means that&lt;br /&gt;
'''we can just compute the shape functions $\b{\chi}$ for points of interest and then approximate any linear operator&lt;br /&gt;
of any function, given its values, very fast, using only a single dot product.'''&lt;br /&gt;
&lt;br /&gt;
== Special case when $B$ is square and invertible ==&lt;br /&gt;
The expression&lt;br /&gt;
\[ \b{\chi}_{\mathcal{L}}(\vec{p})^\T = (\mathcal{L}\b{b})(\vec{p})^\T (B^\T W^2 B)^{-1} B^\T W^2 \]&lt;br /&gt;
can be simplified to &lt;br /&gt;
\[&lt;br /&gt;
  \b \chi_{\mathcal{L}}(\vec{p})^\T = (\mathcal{L}\b{b})(\vec{p})^\T B^{-1}.&lt;br /&gt;
\]&lt;br /&gt;
&lt;br /&gt;
This gives the approximation as $(\mathcal{L} \hat{u})(p) = (\mathcal{L}\b{b})(\vec{p})^\T B^{-1} \b u$. &lt;br /&gt;
&lt;br /&gt;
Coefficients $\b \alpha$ are the solutions of $B\b \alpha = \b u$. Shape function $\b{\chi}_{\mathcal{L}}(\vec{p})$ is defined as&lt;br /&gt;
$$ &lt;br /&gt;
\b{\chi}_{\mathcal{L}}(\vec{p})^\T = (\mathcal{L}\b{b})(\vec{p})^\T B^{-1},&lt;br /&gt;
$$&lt;br /&gt;
which can be rewritten as a solution of &lt;br /&gt;
$$ B^\T \b{\chi}_{\mathcal{L}}(\vec{p}) =  (\mathcal{L}\b{b})(\vec{p}). $$&lt;br /&gt;
&lt;br /&gt;
This formulation is more suitable for numerical solving and also has a deeper interpretation, as presented in the next section.&lt;br /&gt;
&lt;br /&gt;
== Another view on derivation of shape functions ==&lt;br /&gt;
Solving the system $B^\T \b{\chi}_{\mathcal{L}} (\vec{p}) = (\mathcal{L}\b{b})(\vec p)$ can be interpreted as using the method of undetermined coefficients.&lt;br /&gt;
&lt;br /&gt;
We wish to find such a combination of function values evaluated in support points $\vec s_i$ that approximates the value of an operator $(\mathcal{L}u)(\vec p)$, i.e., we are looking for coefficients $\chi_{\mathcal{L}, i}$, such that&lt;br /&gt;
\[ \sum_{i=1}^n \chi_{\mathcal{L}, i} u_i \approx (\mathcal{L}u)(p). \]&lt;br /&gt;
&lt;br /&gt;
We require that $\chi_{\mathcal{L}, i}$ be such that the above approximation holds for a certain space of functions, spanned by the basis $\{b_j\}_{j=1}^m$.&lt;br /&gt;
By choosing the basis functions $b_j$ we decide for which functions the approximation $\b{\chi}_{\mathcal{L}}(\vec{p})^\T \b{u} \approx (\mathcal{L}u)(\vec p)$ should be exact and we write the equations &lt;br /&gt;
$\sum_{i=1}^n \chi_{\mathcal{L}, i} b_j(\vec s_i) = (\mathcal{L}b_j)(\vec p)$ in a system:&lt;br /&gt;
\[&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
  b_1(\vec s_1) &amp;amp; \cdots &amp;amp; b_1(\vec s_n) \\&lt;br /&gt;
  \vdots &amp;amp; \ddots &amp;amp; \vdots \\&lt;br /&gt;
  b_m(\vec s_1) &amp;amp; \cdots &amp;amp; b_m(\vec s_n) \\&lt;br /&gt;
\end{bmatrix}&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
 \chi_{\mathcal{L}, 1} \\ \vdots \\ \chi_{\mathcal{L}, n}&lt;br /&gt;
\end{bmatrix}&lt;br /&gt;
=&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
  (\mathcal{L} b_1)(\vec{p}) \\ \vdots \\ (\mathcal{L} b_m)(\vec{p})&lt;br /&gt;
\end{bmatrix}&lt;br /&gt;
\]&lt;br /&gt;
Note that the matrix above is precisely the transpose of the matrix $B$ from the [[Weighted Least Squares (WLS) ]] approximation. In this sense, the computation of &lt;br /&gt;
shape functions is dual to the computation of basis coefficients. The system is written in matrix form as $B^\T \b{\chi}_{\mathcal{L}} (\vec{p}) = (\mathcal{L}\b{b})(\vec p)$. &lt;br /&gt;
&lt;br /&gt;
For $n=m$ case as in the previous section, the solution is the sought shape function $\b{\chi}_{\mathcal L}$.&lt;br /&gt;
&lt;br /&gt;
If we do not specify enough functions $b_j$, the system above is underdetermined.&lt;br /&gt;
A common solution is to minimize the weighted norm of $\b \chi_{\mathcal L}$.&lt;br /&gt;
This gives us a minimization problem&lt;br /&gt;
$$&lt;br /&gt;
 \min_{\b \chi_{\mathcal L}} \sum_{i=1}^n (\chi_{\mathcal{L}, i}/w_i)^2, \text{ such that } B^\T \b{\chi}_{\mathcal{L}} (\vec{p})^\T = (\mathcal{L}\b{b})(\vec p).&lt;br /&gt;
$$&lt;br /&gt;
If the central weights are larger, the corresponding coefficients are penalized less, which allows the central coefficients to be larger.&lt;br /&gt;
&lt;br /&gt;
Using the solution derived in [[Solving underdetermined systems]], with the inverse weights $W^{-1}$ substituted for $W$ and the matrix $A = B^\T$, we get the solution&lt;br /&gt;
$$&lt;br /&gt;
\b{\chi}_{\mathcal{L}} (\vec{p}) = (W^{-1})^{-2} (B^\T)^\T (B^\T (W^{-1})^{-2}(B^\T)^\T)^{-1} (\mathcal{L}\b{b})(\vec p) = W^2 B (B^\T W^2 B)^{-1} (\mathcal{L}\b{b})(\vec p).&lt;br /&gt;
$$&lt;br /&gt;
After transposing, we get&lt;br /&gt;
$$&lt;br /&gt;
\b{\chi}_{\mathcal{L}} (\vec{p})^\T = (\mathcal{L}\b{b})(\vec p)^\T (B^\T W^2 B)^{-1} B^\T W^2,&lt;br /&gt;
$$&lt;br /&gt;
which is exactly the same formula as derived using [[ Weighted Least Squares (WLS) ]] approximation.&lt;br /&gt;
&lt;br /&gt;
Just as we get the same stencil weights using interpolation and the method of undetermined coefficients, we get the same result using &lt;br /&gt;
WLS approximation or weighted norm minimization of shape functions.&lt;br /&gt;
&lt;br /&gt;
== Numerical calculation of the shape functions ==&lt;br /&gt;
A numerically cheaper and more stable approach is to translate the problem of inverting the matrix into solving a linear system of equations.&lt;br /&gt;
The alternative view above is useful in this respect.&lt;br /&gt;
&lt;br /&gt;
=== Invertible $B$ ===&lt;br /&gt;
&lt;br /&gt;
If $B$ is invertible, then $\b{\chi}_{\mathcal{L}}(\vec{p})$ can be thought of as the solution of the system $B^\T\b{\chi}_{\mathcal{L}}(\vec{p}) = (\mathcal{L}\b{b})(\vec p)$, which can be solved using, for example, LU or Cholesky decomposition.&lt;br /&gt;
For more on this, see [[Solving linear systems]].&lt;br /&gt;
&lt;br /&gt;
=== General case ===&lt;br /&gt;
In general, the system $B^\T\b{\chi}_{\mathcal{L}}(\vec{p}) = (\mathcal{L}\b{b})(\vec p)$ is underdetermined and must be solved in the least-weighted-norm sense.&lt;br /&gt;
This means solving the underdetermined system $B^\T W y =  (\mathcal{L}\b{b})(\vec p)$ and computing $\b{\chi}_{\mathcal{L}}(\vec{p}) = Wy$.&lt;br /&gt;
For more details on this, see [[Solving underdetermined systems]]. &lt;br /&gt;
&lt;br /&gt;
Depending on our knowledge of the &lt;br /&gt;
problem, we can use Cholesky decomposition if $B^\T W$ is symmetric positive definite, $LDL^\T$ if it is symmetric, $LU$ for a general square matrix, $QR$ for a full-rank overdetermined system, and SVD for a general system.&lt;br /&gt;
&lt;br /&gt;
If more shapes need to be calculated with the same matrix $B^\T W$ and only different right-hand sides (as is often the case when shapes for more than one operator are computed), &lt;br /&gt;
this can be done efficiently by storing the decomposition of $B^\T W$.&lt;br /&gt;
&lt;br /&gt;
== Linearity and reduction to only computing the shapes of the operator space basis ==&lt;br /&gt;
Observe that the expression for the shape functions&lt;br /&gt;
\[ \b{\chi}_{\mathcal{L}}(\vec{p})^\T = (\mathcal{L}\b{b})(\vec{p})^\T (WB)^+W \]&lt;br /&gt;
is linear in $\mathcal{L}$. This follows from the fact that if $\mathcal{L} = \mathcal{L}_1 + \mathcal{L}_2$, the vector of&lt;br /&gt;
derivative values $(\mathcal{L}\b{b})(\vec{p}) = ((\mathcal{L}_1 + \mathcal{L}_2)\b{b})(\vec{p}) = &lt;br /&gt;
 (\mathcal{L}_1\b{b} + \mathcal{L}_2\b{b})(\vec{p}) =  (\mathcal{L}_1\b{b})(\vec p) +  (\mathcal{L}_2\b{b})(\vec p)$ &lt;br /&gt;
and consequently&lt;br /&gt;
$$\b{\chi}_{\mathcal{L}}(\vec{p}) = \b{\chi}_{\mathcal{L}_1}(\vec{p}) + \b{\chi}_{\mathcal{L}_2}(\vec{p}).$$&lt;br /&gt;
&lt;br /&gt;
Say that your equation consists of operators up to a certain $k$-th order.&lt;br /&gt;
Then it is sufficient to compute only shape functions for the basis of that operator space, namely&lt;br /&gt;
\[&lt;br /&gt;
\{\b{\chi}_{\frac{\partial}{\partial x^\alpha}, p}, 1 \leq |\alpha| \leq k \}, \text{ where } \frac{\partial}{\partial x^\alpha} = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots \partial x_d^{\alpha_d}} \text{ and } |\alpha| = \sum_{i=1}^d \alpha_i.&lt;br /&gt;
\]&lt;br /&gt;
Having the shape functions $\b{\chi}_{\frac{\partial}{\partial x^\alpha}, p}$ available, we write $\mathcal{L}$ in the above basis at the point $p$, $\mathcal{L} = \displaystyle \sum_{1 \leq |\alpha| \leq k} a_\alpha(p) \frac{\partial}{\partial x^\alpha}$,&lt;br /&gt;
and evaluate its shape function as &lt;br /&gt;
\[&lt;br /&gt;
  \b{\chi}_{\mathcal{L}, \vec{p}} = \sum_{1 \leq |\alpha| \leq k}  a_\alpha(p) \b{\chi}_{\frac{\partial}{\partial x^\alpha}, p}.&lt;br /&gt;
\]&lt;br /&gt;
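A minimal sketch of this reduction (plain Python, with a hypothetical 1D operator and stencil, not Medusa code) combines the shapes of $\partial/\partial x$ and $\partial^2/\partial x^2$ into the shape of $\mathcal{L} = 2\,\partial/\partial x + 3\,\partial^2/\partial x^2$:&lt;br /&gt;

```python
# Assemble the shape of L = 2 d/dx + 3 d^2/dx^2 at p = 0 from the shapes
# of the basis operators on the equispaced stencil (-h, 0, h). The
# individual shapes are the standard central-difference weights.

h = 0.1
chi_dx = [-1 / (2 * h), 0.0, 1 / (2 * h)]   # shape of d/dx at p = 0
chi_dxx = [1 / h**2, -2 / h**2, 1 / h**2]   # shape of d^2/dx^2 at p = 0

# chi_L = sum over alpha of a_alpha(p) * chi_alpha, with a = (2, 3) here.
chi_L = [2.0 * c1 + 3.0 * c2 for c1, c2 in zip(chi_dx, chi_dxx)]

# Sanity check on u(x) = x^2 + x: (L u)(0) = 2 * 1 + 3 * 2 = 8.
u = [x**2 + x for x in (-h, 0.0, h)]
approx = sum(c * ui for c, ui in zip(chi_L, u))
```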
This saves space and lends a certain elegance to the implementation.&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=Weighted_Least_Squares_(WLS)&amp;diff=3200</id>
		<title>Weighted Least Squares (WLS)</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=Weighted_Least_Squares_(WLS)&amp;diff=3200"/>
				<updated>2022-06-16T12:57:59Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: Typos&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;One of the most important building blocks of the meshless methods is the Weighted/Moving Least Squares (WLS/MLS) approximation , which is implemented in the &amp;lt;code&amp;gt;WLS&amp;lt;/code&amp;gt; class.&lt;br /&gt;
It is also often called GFDM (generalized finite differences method).&lt;br /&gt;
&lt;br /&gt;
= Notation cheat sheet =&lt;br /&gt;
\begin{align*}&lt;br /&gt;
  m \in \N                  &amp;amp; \dots \text{number of basis functions} \\&lt;br /&gt;
  n \geq m \in \N           &amp;amp; \dots \text{number of points in support domain} \\&lt;br /&gt;
  k \in \mathbb{N}          &amp;amp; \dots \text{dimensionality of vector space} \\&lt;br /&gt;
  \vec s_j \in \R^k         &amp;amp; \dots \text{point in support domain } \quad j=1,\dots,n \\&lt;br /&gt;
  u_j \in \R                &amp;amp; \dots \text{value of function to approximate in }\vec{s}_j \quad j=1,\dots,n \\&lt;br /&gt;
  \vec p \in \R^k           &amp;amp; \dots \text{center point of approximation} \\&lt;br /&gt;
  b_i\colon \R^k \to \R     &amp;amp; \dots \text{basis functions } \quad i=1,\dots,m \\&lt;br /&gt;
  B_{j, i} \in \R           &amp;amp; \dots \text{value of basis functions in support points } b_i(\vec{s}_j) \quad j=1,\dots,n, \quad i=1,\dots,m\\&lt;br /&gt;
  \omega \colon \R^k \to \R &amp;amp; \dots \text{weight function} \\&lt;br /&gt;
  w_j \in \R                &amp;amp; \dots \text{weights } \omega(\vec{s}_j-\vec{p})  \quad j=1,\dots,n \\&lt;br /&gt;
  \alpha_i \in \R           &amp;amp; \dots \text{expansion coefficients around point } \vec{p} \quad i=1,\dots,m \\&lt;br /&gt;
  \hat u\colon \R^k \to \R  &amp;amp; \dots \text{approximation function (best fit)} \\&lt;br /&gt;
  \chi_j \in \R          &amp;amp; \dots \text{shape coefficient for point }\vec{p} \quad j=1,\dots,n \\&lt;br /&gt;
\end{align*}&lt;br /&gt;
&lt;br /&gt;
We will also use \(\b{s}, \b{u}, \b{b}, \b{\alpha}, \b{\chi} \) to denote a column of corresponding values,&lt;br /&gt;
$W$ as a $n\times n$ diagonal matrix filled with $w_j$ on the diagonal and $B$ as a $n\times m$ matrix filled with $B_{j, i} = b_i(\vec s_j)$.&lt;br /&gt;
&lt;br /&gt;
= Definition of local approximation =&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:1DWLS&amp;quot;&amp;gt;&lt;br /&gt;
[[File:image_1avhdsfej1b9cao01029m1e13o69.png|600px|thumb|upright=2|alt=1D MLS example|&amp;lt;caption&amp;gt;Example of 1D WLS approximation &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
Our wish is to approximate an unknown function $u\colon \R^k \to \R$ while knowing $n$ values $u(\vec{s}_j) := u_j$.&lt;br /&gt;
The vector of known values will be denoted by $\b{u}$ and the vector of coordinates where those values were achieved by $\b{s}$.&lt;br /&gt;
Note that $\b{s}$ is not a vector in the usual sense since its components $\vec{s}_j$ are elements of $\R^k$, but we will call it a vector anyway.&lt;br /&gt;
The values of $\b{s}$ are called ''nodes'' or ''support nodes'' or ''support''. The known values $\b{u}$ are also called ''support values''.&lt;br /&gt;
&lt;br /&gt;
In general, an approximation function around point $\vec{p}\in\R^k$ can be&lt;br /&gt;
written as \[\hat{u} (\vec{x}) = \sum_{i=1}^m \alpha_i b_i(\vec{x}) = \b{b}(\vec{x})^\T \b{\alpha} \]&lt;br /&gt;
where $\b{b} = (b_i)_{i=1}^m$ is a set of ''basis functions'', $b_i\colon \R^k \to\R$, and $\b{\alpha} = (\alpha_i)_{i=1}^m$ are the unknown coefficients.&lt;br /&gt;
&lt;br /&gt;
In MLS the goal is to minimize the error of the approximation in the given values, $\b{e} = \hat u(\b{s}) - \b{u}$,&lt;br /&gt;
between the approximation function and the target function in the known points $\b{s}$. The error can also be written as $B\b{\alpha} - \b{u}$,&lt;br /&gt;
where $B$ is a rectangular matrix of dimensions $n \times m$ whose rows contain the basis functions evaluated in the points $\vec{s}_j$.&lt;br /&gt;
\[ B =&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
b_1(\vec{s}_1) &amp;amp; \ldots &amp;amp; b_m(\vec{s}_1) \\&lt;br /&gt;
\vdots &amp;amp; \ddots &amp;amp; \vdots \\&lt;br /&gt;
b_1(\vec{s}_n) &amp;amp; \ldots &amp;amp; b_m(\vec{s}_n)&lt;br /&gt;
\end{bmatrix} =&lt;br /&gt;
 [b_i(\vec{s}_j)]_{j=1,i=1}^{n,m} = [\b{b}(\vec{s}_j)^\T]_{j=1}^n. \]&lt;br /&gt;
&lt;br /&gt;
We can choose to minimize any norm of the error vector $\b{e}$&lt;br /&gt;
and usually choose to minimize the $2$-norm or square norm \[ \|\b{e}\| = \|\b{e}\|_2 = \sqrt{\sum_{j=1}^n e_j^2}. \]&lt;br /&gt;
Commonly, we also choose to minimize a weighted norm&lt;br /&gt;
&amp;lt;ref&amp;gt;Note that our definition is a bit unusual, usually weights are not&lt;br /&gt;
 squared with the values. However, we do this to avoid computing square&lt;br /&gt;
 roots when doing MLS. If you are used to the usual definition,&lt;br /&gt;
consider the weight to be $\omega^2$.&amp;lt;/ref&amp;gt;&lt;br /&gt;
instead \[ \|\b{e}\|_{2,w} = \|\b{e}\|_w = \sqrt{\sum_{j=1}^n (w_j e_j)^2}. \]&lt;br /&gt;
The ''weights'' $w_j$ are assumed to be non-negative and are assembled in a vector $\b{w}$ or a matrix $W = \operatorname{diag}(\b{w})$, usually obtained from a weight function.&lt;br /&gt;
A ''weight function'' is a function $\omega\colon \R^k \to[0,\infty)$. We calculate the weights as $w_j := \omega(\vec{s}_j-\vec{p})$, so&lt;br /&gt;
good choices for $\omega$ are functions which take higher values close to $0$ (making closer nodes more important), like the normal distribution.&lt;br /&gt;
If we choose $\omega \equiv 1$, we get the unweighted version.&lt;br /&gt;
&lt;br /&gt;
Minimizing the square norm gave this method its name: Least Squares approximation. If we use the weighted version, we get Weighted Least Squares or WLS.&lt;br /&gt;
In the most general case we wish to minimize&lt;br /&gt;
\[ \|\b{e}\|_{2,w}^2 = \b{e}^\T W^2 \b{e} = (B\b{\alpha} - \b{u})^\T W^2(B\b{\alpha} - \b{u}) =  \sum_{j=1}^n w_j^2 (\hat{u}(\vec{s}_j) - u_j)^2.  \]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The problem of finding the coefficients $\b{\alpha}$ that minimize the error $\b{e}$ numerically is described in [[Solving overdetermined systems]].&lt;br /&gt;
&lt;br /&gt;
Here, we simply write the solution $\b \alpha = (WB)^+ W\b{u} = (B^\T W^2 B)^{-1} B^\T W^2 \b{u}$.  &lt;br /&gt;
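As a small illustration (plain Python, independent of the &amp;lt;code&amp;gt;WLS&amp;lt;/code&amp;gt; class; the support, weights and data are arbitrary choices), the snippet below evaluates this closed-form solution for a tiny 1D problem with basis $\{1, x\}$. Since the data lies exactly on a line, the fit recovers it regardless of the weights.&lt;br /&gt;

```python
# Minimal 1D WLS example: basis {1, x}, three support points, ad-hoc weights.
# Computes alpha = (B^T W^2 B)^{-1} B^T W^2 u via the 2x2 normal system.

s = [-1.0, 0.0, 1.0]              # support points, centered at p = 0
u = [2.0 * x + 1.0 for x in s]    # samples of u(x) = 2x + 1
w = [0.5, 1.0, 0.5]               # weights, larger near the center

B = [[1.0, x] for x in s]         # n x m matrix of basis values
# Normal system A alpha = rhs with A = B^T W^2 B and rhs = B^T W^2 u.
A = [[sum(w[j]**2 * B[j][i] * B[j][k] for j in range(3)) for k in range(2)]
     for i in range(2)]
rhs = [sum(w[j]**2 * B[j][i] * u[j] for j in range(3)) for i in range(2)]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
alpha = [(A[1][1] * rhs[0] - A[0][1] * rhs[1]) / det,
         (A[0][0] * rhs[1] - A[1][0] * rhs[0]) / det]
# The data lies exactly on a line, so alpha recovers [1.0, 2.0].
```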
&lt;br /&gt;
In our WLS engine we use SVD with regularization by default.&lt;br /&gt;
&lt;br /&gt;
= Weighted Least Squares =&lt;br /&gt;
Weighted least squares approximation is the simplest version of the procedure described above. Given support $\b{s}$, values $\b{u}$&lt;br /&gt;
and an anchor point $\vec{p}$, we calculate the coefficients $\b{\alpha}$ using one of the above methods.&lt;br /&gt;
Then, to approximate a function in the neighbourhood of $\vec p$ we use the formula&lt;br /&gt;
\[&lt;br /&gt;
\hat{u}(\vec x) = \b{b}(\vec x)^\T \b{\alpha} = \sum_{i=1}^m \alpha_i b_i(\vec x).&lt;br /&gt;
\]&lt;br /&gt;
&lt;br /&gt;
To approximate the derivative $\frac{\partial u}{\partial x_i}$, or any linear partial differential operator $\mathcal L$ on $u$, we&lt;br /&gt;
simply take the same linear combination of transformed basis functions $\mathcal L b_i$. We have considered coefficients $\alpha_i$ to be&lt;br /&gt;
constant and applied the linearity.&lt;br /&gt;
\[&lt;br /&gt;
 \widehat{\mathcal L u}(\vec x) = \sum_{i=1}^m \alpha_i (\mathcal L b_i)(\vec x) = (\mathcal L \b b(\vec x))^\T \b \alpha.&lt;br /&gt;
\]&lt;br /&gt;
&lt;br /&gt;
= Moving Least Squares =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:comparisonMLSandWLS&amp;quot;&amp;gt;&lt;br /&gt;
[[File:mlswls.svg|thumb|upright=2|&amp;lt;caption&amp;gt;Comparison of WLS and MLS approximation&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using WLS the approximation gets worse as we move away from the central point $\vec{p}$.&lt;br /&gt;
This is partially due to not being in the center of the support any more and partially due to weight&lt;br /&gt;
being distributed in such a way to assign more importance to nodes closer to $\vec{p}$.&lt;br /&gt;
&lt;br /&gt;
We can combat this problem in two ways: when we wish to approximate in a new point that is sufficiently far&lt;br /&gt;
away from $\vec{p}$, we can compute a new support, recompute the coefficients $\b{\alpha}$ and approximate again.&lt;br /&gt;
This is very costly and we would like to avoid it. A partial fix is to keep the support the same and only&lt;br /&gt;
recompute the weight vector $\b{w}$, which now assigns higher weights to nodes close to the new point.&lt;br /&gt;
We still need to recompute the coefficients $\b{\alpha}$, however we avoid the cost of setting up new support&lt;br /&gt;
and function values and recomputing $B$. This approach is called Moving Least Squares due to recomputing&lt;br /&gt;
the weighted least squares problem whenever we move the point of approximation.&lt;br /&gt;
&lt;br /&gt;
Note that if our weight is constant, or if $n = m$, in which case the approximation reduces to interpolation, the weights do not play&lt;br /&gt;
any role and this method is redundant. In fact, its benefits arise when supports are rather large.&lt;br /&gt;
&lt;br /&gt;
See &amp;lt;xr id=&amp;quot;fig:comparisonMLSandWLS&amp;quot;/&amp;gt; for a comparison between the MLS and WLS approximations. The MLS approximation remains close to&lt;br /&gt;
the actual function while still inside the support domain, while the WLS approximation deteriorates once we&lt;br /&gt;
move out of the reach of the weight function.&lt;br /&gt;
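The following sketch (plain Python, not the Medusa implementation; the weight function, sample function and nodes are arbitrary choices) contrasts the two approaches: a WLS fit with weights anchored at $\vec p = 0$ versus an MLS evaluation that recenters the weights at the evaluation point.&lt;br /&gt;

```python
import math

# The same weighted-fit routine is used twice: WLS keeps the weights
# centered at p = 0, while MLS recenters them at the evaluation point x.

def weighted_fit_value(s, u, w, x):
    """Weighted least-squares fit with basis {1, t}, evaluated at x."""
    A00 = sum(wj**2 for wj in w)
    A01 = sum(wj**2 * t for wj, t in zip(w, s))
    A11 = sum(wj**2 * t * t for wj, t in zip(w, s))
    r0 = sum(wj**2 * uj for wj, uj in zip(w, u))
    r1 = sum(wj**2 * t * uj for wj, t, uj in zip(w, s, u))
    det = A00 * A11 - A01 * A01
    a0 = (A11 * r0 - A01 * r1) / det
    a1 = (A00 * r1 - A01 * r0) / det
    return a0 + a1 * x

s = [0.0, 0.5, 1.0, 1.5, 2.0]                # support nodes
u = [math.sin(t) for t in s]                 # sampled function values
omega = lambda r: math.exp(-(r / 0.5)**2)    # Gaussian-like weight function

x = 1.5                                      # evaluation point, far from p = 0
wls_val = weighted_fit_value(s, u, [omega(t - 0.0) for t in s], x)
mls_val = weighted_fit_value(s, u, [omega(t - x) for t in s], x)
# The MLS value stays much closer to sin(1.5) than the WLS value.
```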
&lt;br /&gt;
{{reflist}}&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=Weighted_Least_Squares_(WLS)&amp;diff=3199</id>
		<title>Weighted Least Squares (WLS)</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=Weighted_Least_Squares_(WLS)&amp;diff=3199"/>
				<updated>2022-06-16T12:55:27Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: b_i (s_j - p) -&amp;gt; b_i (s_j) in notation cheat sheet (seems like what is used from then on)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;One of the most important building blocks of the meshless methods is the Weighted/Moving Least Squares (WLS/MLS) approximation , which is implemented in the &amp;lt;code&amp;gt;WLS&amp;lt;/code&amp;gt; class.&lt;br /&gt;
It is also often called GFDM (generalized finite differences method).&lt;br /&gt;
&lt;br /&gt;
= Notation cheat sheet =&lt;br /&gt;
\begin{align*}&lt;br /&gt;
  m \in \N                  &amp;amp; \dots \text{number of basis functions} \\&lt;br /&gt;
  n \geq m \in \N           &amp;amp; \dots \text{number of points in support domain} \\&lt;br /&gt;
  k \in \mathbb{N}          &amp;amp; \dots \text{dimensionality of vector space} \\&lt;br /&gt;
  \vec s_j \in \R^k         &amp;amp; \dots \text{point in support domain } \quad j=1,\dots,n \\&lt;br /&gt;
  u_j \in \R                &amp;amp; \dots \text{value of function to approximate in }\vec{s}_j \quad j=1,\dots,n \\&lt;br /&gt;
  \vec p \in \R^k           &amp;amp; \dots \text{center point of approximation} \\&lt;br /&gt;
  b_i\colon \R^k \to \R     &amp;amp; \dots \text{basis functions } \quad i=1,\dots,m \\&lt;br /&gt;
  B_{j, i} \in \R           &amp;amp; \dots \text{value of basis functions in support points } b_i(\vec{s}_j) \quad j=1,\dots,n, \quad i=1,\dots,m\\&lt;br /&gt;
  \omega \colon \R^k \to \R &amp;amp; \dots \text{weight function} \\&lt;br /&gt;
  w_j \in \R                &amp;amp; \dots \text{weights } \omega(\vec{s}_j-\vec{p})  \quad j=1,\dots,n \\&lt;br /&gt;
  \alpha_i \in \R           &amp;amp; \dots \text{expansion coefficients around point } \vec{p} \quad i=1,\dots,m \\&lt;br /&gt;
  \hat u\colon \R^k \to \R  &amp;amp; \dots \text{approximation function (best fit)} \\&lt;br /&gt;
  \chi_j \in \R          &amp;amp; \dots \text{shape coefficient for point }\vec{p} \quad j=1,\dots,n \\&lt;br /&gt;
\end{align*}&lt;br /&gt;
&lt;br /&gt;
We will also use \(\b{s}, \b{u}, \b{b}, \b{\alpha}, \b{\chi} \) to denote a column of corresponding values,&lt;br /&gt;
$W$ as a $n\times n$ diagonal matrix filled with $w_j$ on the diagonal and $B$ as a $n\times m$ matrix filled with $B_{j, i} = b_i(\vec s_j)$.&lt;br /&gt;
&lt;br /&gt;
= Definition of local approximation =&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:1DWLS&amp;quot;&amp;gt;&lt;br /&gt;
[[File:image_1avhdsfej1b9cao01029m1e13o69.png|600px|thumb|upright=2|alt=1D MLS example|&amp;lt;caption&amp;gt;Example of 1D WLS approximation &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
Our wish is to approximate an unknown function $u\colon \R^k \to \R$ while knowing $n$ values $u(\vec{s}_j) := u_j$.&lt;br /&gt;
The vector of known values will be denoted by $\b{u}$ and the vector of coordinates where those values were achieved by $\b{s}$.&lt;br /&gt;
Note that $\b{s}$ is not a vector in the usual sense since its components $\vec{s}_j$ are elements of $\R^k$, but we will call it a vector anyway.&lt;br /&gt;
The values of $\b{s}$ are called ''nodes'' or ''support nodes'' or ''support''. The known values $\b{u}$ are also called ''support values''.&lt;br /&gt;
&lt;br /&gt;
In general, an approximation function around point $\vec{p}\in\R^k$ can be&lt;br /&gt;
written as \[\hat{u} (\vec{x}) = \sum_{i=1}^m \alpha_i b_i(\vec{x}) = \b{b}(\vec{x})^\T \b{\alpha} \]&lt;br /&gt;
where $\b{b} = (b_i)_{i=1}^m$ is a set of ''basis functions'', $b_i\colon \R^k \to\R$, and $\b{\alpha} = (\alpha_i)_{i=1}^m$ are the unknown coefficients.&lt;br /&gt;
&lt;br /&gt;
In MLS the goal is to minimize the error of the approximation in the given values, $\b{e} = \hat u(\b{s}) - \b{u}$,&lt;br /&gt;
between the approximation function and the target function in the known points $\b{s}$. The error can also be written as $B\b{\alpha} - \b{u}$,&lt;br /&gt;
where $B$ is a rectangular matrix of dimensions $n \times m$ whose rows contain the basis functions evaluated in the points $\vec{s}_j$.&lt;br /&gt;
\[ B =&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
b_1(\vec{s}_1) &amp;amp; \ldots &amp;amp; b_m(\vec{s}_1) \\&lt;br /&gt;
\vdots &amp;amp; \ddots &amp;amp; \vdots \\&lt;br /&gt;
b_1(\vec{s}_n) &amp;amp; \ldots &amp;amp; b_m(\vec{s}_n)&lt;br /&gt;
\end{bmatrix} =&lt;br /&gt;
 [b_i(\vec{s}_j)]_{j=1,i=1}^{n,m} = [\b{b}(\vec{s}_j)^\T]_{j=1}^n. \]&lt;br /&gt;
&lt;br /&gt;
We can choose to minimize any norm of the error vector $\b{e}$&lt;br /&gt;
and usually choose to minimize the $2$-norm or square norm \[ \|\b{e}\| = \|\b{e}\|_2 = \sqrt{\sum_{j=1}^n e_j^2}. \]&lt;br /&gt;
Commonly, we also choose to minimize a weighted norm&lt;br /&gt;
&amp;lt;ref&amp;gt;Note that our definition is a bit unusual, usually weights are not&lt;br /&gt;
 squared with the values. However, we do this to avoid computing square&lt;br /&gt;
 roots when doing MLS. If you are used to the usual definition,&lt;br /&gt;
consider the weight to be $\omega^2$.&amp;lt;/ref&amp;gt;&lt;br /&gt;
instead \[ \|\b{e}\|_{2,w} = \|\b{e}\|_w = \sqrt{\sum_{j=1}^n (w_j e_j)^2}. \]&lt;br /&gt;
The ''weights'' $w_j$ are assumed to be non-negative and are assembled in a vector $\b{w}$ or a matrix $W = \operatorname{diag}(\b{w})$, usually obtained from a weight function.&lt;br /&gt;
A ''weight function'' is a function $\omega\colon \R^k \to[0,\infty)$. We calculate the weights as $w_j := \omega(\vec{s}_j-\vec{p})$, so&lt;br /&gt;
good choices for $\omega$ are functions which take higher values close to $0$ (making closer nodes more important), like the normal distribution.&lt;br /&gt;
If we choose $\omega \equiv 1$, we get the unweighted version.&lt;br /&gt;
&lt;br /&gt;
Minimizing the square norm gave this method its name: Least Squares approximation. If we use the weighted version, we get Weighted Least Squares or WLS.&lt;br /&gt;
In the most general case we wish to minimize&lt;br /&gt;
\[ \|\b{e}\|_{2,w}^2 = \b{e}^\T W^2 \b{e} = (B\b{\alpha} - \b{u})^\T W^2(B\b{\alpha} - \b{u}) =  \sum_{j=1}^n w_j^2 (\hat{u}(\vec{s}_j) - u_j)^2.  \]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The problem of finding the coefficients $\b{\alpha}$ that minimize the error $\b{e}$ numerically is described in [[Solving overdetermined systems]].&lt;br /&gt;
&lt;br /&gt;
Here, we simply write the solution $\b \alpha = (WB)^+ W\b{u} = (B^\T W^2 B)^{-1} B^\T W^2 \b{u}$.  &lt;br /&gt;
&lt;br /&gt;
In our WLS engine we use SVD with regularization by default.&lt;br /&gt;
&lt;br /&gt;
= Weighted Least Squares =&lt;br /&gt;
Weighted least squares approximation is the simplest version of the procedure described above. Given support $\b{s}$, values $\b{u}$&lt;br /&gt;
and an anchor point $\vec{p}$, we calculate the coefficients $\b{\alpha}$ using one of the above methods.&lt;br /&gt;
Then, to approximate a function in the neighbourhood of $\vec p$ we use the formula&lt;br /&gt;
\[&lt;br /&gt;
\hat{u}(\vec x) = \b{b}(\vec x)^\T \b{\alpha} = \sum_{i=1}^m \alpha_i b_i(\vec x).&lt;br /&gt;
\]&lt;br /&gt;
&lt;br /&gt;
To approximate the derivative $\frac{\partial u}{\partial x_i}$, or any linear partial differential operator $\mathcal L$ on $u$, we&lt;br /&gt;
simply take the same linear combination of transformed basis functions $\mathcal L b_i$. We have considered coefficients $\alpha_i$ to be&lt;br /&gt;
constant and applied the linearity.&lt;br /&gt;
\[&lt;br /&gt;
 \widehat{\mathcal L u}(\vec x) = \sum_{i=1}^m \alpha_i (\mathcal L b_i)(\vec x) = (\mathcal L \b b(\vec x))^\T \b \alpha.&lt;br /&gt;
\]&lt;br /&gt;
&lt;br /&gt;
= Moving Least Squares =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:comparisonMLSandWLS&amp;quot;&amp;gt;&lt;br /&gt;
[[File:mlswls.svg|thumb|upright=2|&amp;lt;caption&amp;gt;Comparison of WLS and MLS approximation&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using WLS the approximation gets worse as we move away from the central point $\vec{p}$.&lt;br /&gt;
This is partially due to not being in the center of the support any more and partially due to weight&lt;br /&gt;
being distributed in such a way to assign more importance to nodes closer to $\vec{p}$.&lt;br /&gt;
&lt;br /&gt;
We can combat this problem in two ways: when we wish to approximate in a new point that is sufficiently far&lt;br /&gt;
away from $\vec{p}$, we can compute a new support, recompute the coefficients $\b{\alpha}$ and approximate again.&lt;br /&gt;
This is very costly and we would like to avoid it. A partial fix is to keep the support the same and only&lt;br /&gt;
recompute the weight vector $\b{w}$, which now assigns higher weights to nodes close to the new point.&lt;br /&gt;
We still need to recompute the coefficients $\b{\alpha}$, however we avoid the cost of setting up new support&lt;br /&gt;
and function values and recomputing $B$. This approach is called Moving Least Squares due to recomputing&lt;br /&gt;
the weighted least squares problem whenever we move the point of approximation.&lt;br /&gt;
&lt;br /&gt;
Note that if our weight is constant, or if $n = m$, in which case the approximation reduces to interpolation, the weights do not play&lt;br /&gt;
any role and this method is redundant. In fact, its benefits arise when supports are rather large.&lt;br /&gt;
&lt;br /&gt;
See &amp;lt;xr id=&amp;quot;fig:comparisonMLSandWLS&amp;quot;/&amp;gt; for a comparison between the MLS and WLS approximations. The MLS approximation remains close to&lt;br /&gt;
the actual function while still inside the support domain, while the WLS approximation deteriorates once we&lt;br /&gt;
move out of the reach of the weight function.&lt;br /&gt;
&lt;br /&gt;
{{reflist}}&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=How_to_build&amp;diff=3198</id>
		<title>How to build</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=How_to_build&amp;diff=3198"/>
				<updated>2022-06-16T12:52:28Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: Added the mention of the fix for &amp;quot;packages cannot be located&amp;quot; (at least it worked for me)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Installation=&lt;br /&gt;
To make this work from plain Ubuntu installation, run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install git g++ python3 cmake libhdf5-serial-dev doxygen graphviz&lt;br /&gt;
git clone https://gitlab.com/e62Lab/medusa.git --branch master --single-branch&lt;br /&gt;
cd medusa&lt;br /&gt;
./run_tests.py -t&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
which installs the dependencies, clones the repository, moves into its root folder,&lt;br /&gt;
and builds and runs all the tests. If this&lt;br /&gt;
works, you are ready to go! Otherwise install any missing packages and if it&lt;br /&gt;
still fails, raise an issue!&lt;br /&gt;
Note: if you are told that the packages cannot be located, try running &amp;lt;code&amp;gt;sudo apt-get update&amp;lt;/code&amp;gt; first.&lt;br /&gt;
&lt;br /&gt;
For instructions on how to use this library in your project, see&lt;br /&gt;
[[Including this library in your project]].&lt;br /&gt;
&lt;br /&gt;
=Building=&lt;br /&gt;
&lt;br /&gt;
List of dependencies:&lt;br /&gt;
&lt;br /&gt;
* Build tools, like &amp;lt;code&amp;gt; cmake &amp;gt;= 2.8.12&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt; g++ &amp;gt;= 4.8&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;make&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;python3&amp;lt;/code&amp;gt;&lt;br /&gt;
* [https://www.hdfgroup.org/ HDF5 library] for IO &lt;br /&gt;
* &amp;lt;code&amp;gt;doxygen &amp;gt;=  1.8.8 &amp;lt;/code&amp;gt; and Graphviz for generating the documentation&lt;br /&gt;
&lt;br /&gt;
Out of source builds are preferred. Run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir -p build&lt;br /&gt;
cd build&lt;br /&gt;
cmake ..&lt;br /&gt;
make&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that you only have to run &amp;lt;code&amp;gt;cmake&amp;lt;/code&amp;gt; once, after that only &amp;lt;code&amp;gt;make&amp;lt;/code&amp;gt; is sufficient.&lt;br /&gt;
&lt;br /&gt;
Binaries are placed into &amp;lt;code&amp;gt;bin/&amp;lt;/code&amp;gt; folder. Tests can be run all at once via &amp;lt;code&amp;gt;make medusa_run_tests&amp;lt;/code&amp;gt; &lt;br /&gt;
or individually via e.g. &amp;lt;code&amp;gt;make operators_run_tests&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Linker errors ==&lt;br /&gt;
&lt;br /&gt;
When trying out different classes, you might run into linker errors such as &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
Scanning dependencies of target cantilever_beam&lt;br /&gt;
[100%] Building CXX object examples/linear_elasticity/CMakeFiles/cantilever_beam.dir/cantilever_beam.cpp.o&lt;br /&gt;
[100%] Linking CXX executable ../../../examples/linear_elasticity/cantilever_beam&lt;br /&gt;
/usr/bin/ld: CMakeFiles/cantilever_beam.dir/cantilever_beam.cpp.o: in function `main':&lt;br /&gt;
cantilever_beam.cpp:(.text.startup+0x162): undefined reference to `void mm::FindBalancedSupport::operator()&amp;lt;mm::DomainDiscretization&amp;lt;Eigen::Matrix&amp;lt;double, 2, 1, 0, 2, 1&amp;gt; &amp;gt; &amp;gt;(mm::DomainDiscretization&amp;lt;Eigen::Matrix&amp;lt;double, 2, 1, 0, 2, 1&amp;gt; &amp;gt;&amp;amp;) const'&lt;br /&gt;
collect2: error: ld returned 1 exit status&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is expected and is the result of some compile-time optimizations. The medusa library can actually be included in two ways: as&lt;br /&gt;
&amp;lt;code&amp;gt;#include &amp;lt;medusa/Medusa_fwd.hpp&amp;gt;&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;#include &amp;lt;medusa/Medusa.hpp&amp;gt;&amp;lt;/code&amp;gt;. The first version contains the declarations of all symbols, but not all the definitions. Some of the more commonly used template instantiations are included, but by far not all. Using a template instantiation that is not precompiled will cause your program to compile fine, but fail to link due to the missing definitions. In this case you have a few options: include the &amp;lt;i&amp;gt;full&amp;lt;/i&amp;gt; Medusa library (the header without the &amp;lt;code&amp;gt;_fwd&amp;lt;/code&amp;gt;) and it should just work, at the cost of a longer compile time; include only the missing header (in the case above &amp;lt;code&amp;gt;medusa/bits/domains/FindBalancedSupport.hpp&amp;lt;/code&amp;gt;) and pay only for what you use; or add your instantiation among the already precompiled instantiations (located in &amp;lt;code&amp;gt;.cpp&amp;lt;/code&amp;gt; files, such as e.g. [https://gitlab.com/e62Lab/medusa/-/blob/dev/src/domains/DomainDiscretization.cpp this one]).&lt;br /&gt;
&lt;br /&gt;
== Building on macOS ==&lt;br /&gt;
This method was tested on macOS Mojave 10.14.2.&lt;br /&gt;
&lt;br /&gt;
First install Xcode via App Store and then Xcode Command Line Tools with&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
xcode-select --install&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, install all dependencies from homebrew&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
brew install cmake hdf5 boost python doxygen graphviz&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can clone and build the project using the following commands&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://gitlab.com/e62Lab/medusa.git&lt;br /&gt;
cd medusa&lt;br /&gt;
mkdir build&lt;br /&gt;
cd build&lt;br /&gt;
cmake ..&lt;br /&gt;
cd ..&lt;br /&gt;
python3 run_tests.py -t&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will also run all tests. If it works, you are ready to go! Otherwise install any missing packages and if it still fails, raise an issue!&lt;br /&gt;
&lt;br /&gt;
==HDF5==&lt;br /&gt;
In order to use HDF5 IO you need the [https://www.hdfgroup.org/ HDF5 library].&lt;br /&gt;
You can install it easily using the command &amp;lt;code&amp;gt;sudo apt-get install libhdf5-dev&amp;lt;/code&amp;gt;&lt;br /&gt;
or &amp;lt;code&amp;gt;sudo pacman -S hdf5&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Ubuntu (at least in older versions) places the HDF5 headers and libraries in the nonstandard folder &lt;br /&gt;
&amp;lt;code&amp;gt;/usr/{lib, include}/x86_64-linux-gnu/hdf5/serial/&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
If you get an error like &amp;lt;code&amp;gt;HDF5 sample failed to compile.  See errors above.&amp;lt;/code&amp;gt; during &amp;lt;code&amp;gt;cmake&amp;lt;/code&amp;gt; execution&lt;br /&gt;
then the sample hdf test file located in &amp;lt;code&amp;gt;test/test_hdf_compile.cpp&amp;lt;/code&amp;gt; failed to compile. It is a good idea to make this file compile first, &lt;br /&gt;
before tackling the whole project. &lt;br /&gt;
&lt;br /&gt;
If you get an error similar to &amp;lt;code&amp;gt;fatal error: hdf5.h: No such file or directory&amp;lt;/code&amp;gt;,&lt;br /&gt;
then your compiler cannot see the HDF5 header files. Some distributions, notably (older) Ubuntu, place them into nonstandard folders&lt;br /&gt;
&amp;lt;code&amp;gt;/usr/include/hdf5/serial/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;/usr/include/x86_64-linux-gnu/hdf5/serial/&amp;lt;/code&amp;gt;.&lt;br /&gt;
Check these two folders or check your distribution's hdf5 package for the locations of these files.&lt;br /&gt;
After you have determined the location, add that directory to the include directories,&lt;br /&gt;
using the -I flag or in &amp;lt;code&amp;gt;CMakeLists.txt&amp;lt;/code&amp;gt; by using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;cmake&amp;quot;&amp;gt;&lt;br /&gt;
include_directories(/usr/include/hdf5/serial/)  # or your appropriate directory&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you wish to fix this problem permanently, you can create soft links to the headers in your &amp;lt;code&amp;gt;/usr/include&amp;lt;/code&amp;gt; directory, &lt;br /&gt;
by typing&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo ln -s /usr/include/hdf5/serial/* /usr/include&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
After this, there should be no compile time errors. If there are, please raise an issue.&lt;br /&gt;
&lt;br /&gt;
If you get an error similar to &amp;lt;code&amp;gt;-lhdf5 not found&amp;lt;/code&amp;gt; and you have hdf5 installed, &lt;br /&gt;
you might have to link the libraries into a discoverable place, like &amp;lt;code&amp;gt;/usr/lib/&amp;lt;/code&amp;gt; &lt;br /&gt;
or add the above directory to the linker path. As above, check the &amp;lt;code&amp;gt;/usr/lib/x86_64-linux-gnu/hdf5/serial/&amp;lt;/code&amp;gt;&lt;br /&gt;
directory and look for the file &amp;lt;code&amp;gt;libhdf5.a&amp;lt;/code&amp;gt;. When found,&lt;br /&gt;
specify the location using the -L flag or in &amp;lt;code&amp;gt;CMakeLists.txt&amp;lt;/code&amp;gt; by using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;cmake&amp;quot;&amp;gt;&lt;br /&gt;
link_directories(/usr/lib/x86_64-linux-gnu/hdf5/serial/)  # or your appropriate directory&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
or fix the problem permanently by soft-linking&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo ln -s /usr/lib/x86_64-linux-gnu/hdf5/serial/* /usr/lib&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Linear Algebra==&lt;br /&gt;
We use [http://eigen.tuxfamily.org/ Eigen] as our matrix library. See&lt;br /&gt;
[http://eigen.tuxfamily.org/dox-devel/group__QuickRefPage.html here] for usage&lt;br /&gt;
reference and documentation. For a quick transition from Matlab see&lt;br /&gt;
[http://eigen.tuxfamily.org/dox/AsciiQuickReference.txt here].&lt;br /&gt;
&lt;br /&gt;
== Using Intel Math Kernel Library (MKL) ==&lt;br /&gt;
Install [https://software.intel.com/en-us/mkl Intel MKL] and take note of the installation directories.&lt;br /&gt;
&lt;br /&gt;
Proper include and link directories need to be set to use the Intel MKL. &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;cmake&amp;quot;&amp;gt;&lt;br /&gt;
include_directories(SYSTEM /opt/intel/compilers_and_libraries/linux/mkl/include)    # change these to your installation path&lt;br /&gt;
link_directories(/opt/intel/compilers_and_libraries/linux/mkl/lib/intel64)&lt;br /&gt;
link_directories(/opt/intel/compilers_and_libraries/linux/lib/intel64)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Eigen has great support for MKL. You can see more detailed instructions [https://eigen.tuxfamily.org/dox/TopicUsingIntelMKL.html on their website].&lt;br /&gt;
To use MKL for math operations, define &amp;lt;code&amp;gt;EIGEN_USE_MKL_VML&amp;lt;/code&amp;gt; when compiling. You must also link&lt;br /&gt;
the appropriate libraries and define &amp;lt;code&amp;gt;MKL_LP64&amp;lt;/code&amp;gt; for use on a 64-bit system.&lt;br /&gt;
With parallelism enabled, the configuration looks as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;cmake&amp;quot;&amp;gt;&lt;br /&gt;
target_compile_options(my_target PRIVATE &amp;quot;-fopenmp&amp;quot;)&lt;br /&gt;
target_compile_definitions(my_target PUBLIC EIGEN_USE_MKL_VML MKL_LP64)&lt;br /&gt;
target_link_libraries(my_target medusa mkl_intel_lp64 mkl_intel_thread mkl_core pthread iomp5)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
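&lt;br /&gt;
If you prefer single-threaded execution, the sequential MKL layer can be linked instead. The following is a sketch based on Intel's standard link-line conventions; adjust the library names to your MKL version and installation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;cmake&amp;quot;&amp;gt;&lt;br /&gt;
target_compile_definitions(my_target PUBLIC EIGEN_USE_MKL_VML MKL_LP64)&lt;br /&gt;
target_link_libraries(my_target medusa mkl_intel_lp64 mkl_sequential mkl_core)  # sequential layer, no OpenMP flags needed&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;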
&lt;br /&gt;
If you have Intel Parallel Studio installed, this also enables you to use the Pardiso parallel direct sparse solver through its [https://eigen.tuxfamily.org/dox/group__PardisoSupport__Module.html Eigen interface].&lt;br /&gt;
&lt;br /&gt;
== Using Intel C/C++ Compiler ==&lt;br /&gt;
&lt;br /&gt;
In order to use Intel's compiler when compiling Medusa, you have several standard options for &amp;lt;code&amp;gt;cmake&amp;lt;/code&amp;gt;.&lt;br /&gt;
Make sure the compilers are installed and in your &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt; by running &amp;lt;code&amp;gt;which icc&amp;lt;/code&amp;gt;, which &lt;br /&gt;
should return something like &amp;lt;code&amp;gt;/opt/intel/bin/icc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You can define the compiler when ''first'' calling cmake like so&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cmake .. -DCMAKE_C_COMPILER=$(which icc) -DCMAKE_CXX_COMPILER=$(which icpc) &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If this is not your first call, remove the &amp;lt;code&amp;gt;build&amp;lt;/code&amp;gt; directory and start anew.&lt;br /&gt;
&lt;br /&gt;
You can also set the &amp;lt;code&amp;gt;CXX&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;CC&amp;lt;/code&amp;gt; environment variables. Before calling&lt;br /&gt;
&amp;lt;code&amp;gt; cmake &amp;lt;/code&amp;gt; for the first time you have to export the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export CXX=&amp;quot;icpc&amp;quot;&lt;br /&gt;
export CC=&amp;quot;icc&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
You can also compile it directly for Intel® Xeon Phi™ Coprocessor. You do this by adding &amp;lt;code&amp;gt;-Dmmic=ON&amp;lt;/code&amp;gt;&lt;br /&gt;
flag to the &amp;lt;code&amp;gt;cmake&amp;lt;/code&amp;gt; command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cmake .. -Dmmic=ON -DCMAKE_C_COMPILER=$(which icc) -DCMAKE_CXX_COMPILER=$(which icpc) &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Note:&amp;lt;/b&amp;gt; All features that depend on system third-party libraries are not available on MIC (Many Integrated Core).&lt;br /&gt;
This includes:&lt;br /&gt;
&lt;br /&gt;
* HDF class in &amp;lt;code&amp;gt;io.hpp&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=Relaxation_of_the_nodal_distribution&amp;diff=3197</id>
		<title>Relaxation of the nodal distribution</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=Relaxation_of_the_nodal_distribution&amp;diff=3197"/>
				<updated>2022-06-16T12:50:33Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: Summing variable from n to i so the names don't clash.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To construct stable and reliable shape functions the support domains need to be non-degenerate &amp;lt;ref name=&amp;quot;LeeLocal&amp;quot;&amp;gt;Lee CK, Liu X, Fan SC. Local multiquadric approximation for solving boundary value problems. Comput Mech. 2003;30:395-409.&amp;lt;/ref&amp;gt;, i.e. the distances between support nodes have to be balanced. Naturally, this condition is fulfilled in regular nodal distributions, but when working with complex geometries, the nodes have to be positioned accordingly. There are different algorithms designed to optimally fill the domain with different shapes &amp;lt;ref name=&amp;quot;LohnerGeneral&amp;quot;&amp;gt;Löhner R, Oñate E. A general advancing front technique for filling space with arbitrary objects. Int J Numer Meth Eng. 2004;61:1977-91.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;LiuNode&amp;quot;&amp;gt;Liu Y, Nie Y, Zhang W, Wang L. Node placement method by bubble simulation and its application. CMES-Comp Model Eng. 2010;55:89.&amp;lt;/ref&amp;gt;; see [[Positioning of computational nodes]] for our algorithms. Here, an intrinsic feature of the MLSM is used to take care of that problem. The goal is to minimize the overall support domain degeneration in order to attain a stable numerical solution. In other words, a global optimization problem with the overall deformation of the local support domains acting as the cost function is tackled. We seek the global minimum by a local iterative approach. In each iteration, the computational nodes are translated according to the local derivative of the potential &lt;br /&gt;
\begin{equation}&lt;br /&gt;
	\delta \b{p}\left( \b{p} \right)=-\sigma_{k}\sum\limits_{i=1}^{n}{\nabla }V\left( \b{p}-\b{p}_i \right)&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $V, n, \delta \b{p}, \b{p}_{i}$ and $\sigma_{k}$ stand for the potential, the number of support nodes, the offset of the node, the position of the $i$-th support node and the relaxation parameter, respectively. After the offsets in all nodes are computed, the nodes are repositioned as&lt;br /&gt;
$\b{p}\leftarrow \b{p}+\delta \b{p}\left( \b{p} \right)$. &lt;br /&gt;
The presented iterative procedure begins with the positioning of boundary nodes, which is considered as the definition of the domain, followed by the positioning of internal nodes. &lt;br /&gt;
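&lt;br /&gt;
The choice of potential is a modelling decision; as an illustration (not necessarily the form used in the implementation), consider an inverse-power repulsive potential with characteristic spacing $h$ and exponent $k$,&lt;br /&gt;
\begin{equation}&lt;br /&gt;
	V\left( \b{r} \right) = \left( \frac{h}{\left\| \b{r} \right\|} \right)^{k}, \qquad -\nabla V\left( \b{r} \right) = k \frac{h^{k}}{\left\| \b{r} \right\|^{k+2}} \b{r},&lt;br /&gt;
\end{equation}&lt;br /&gt;
which pushes each node directly away from its nearby support nodes, with the repulsion growing rapidly once the spacing drops below $h$.&lt;br /&gt;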
&lt;br /&gt;
The &amp;lt;code&amp;gt;BasicRelax&amp;lt;/code&amp;gt; engine  supports two call types:&lt;br /&gt;
* with a supplied distribution function &amp;lt;code&amp;gt;relax(domain, func)&amp;lt;/code&amp;gt;, where it tries to satisfy the user-supplied nodal density function &amp;lt;code&amp;gt;func&amp;lt;/code&amp;gt;. This can be achieved only when the total number of domain nodes matches the integral of the density function over the domain. If there are too many nodes, the relaxation might become volatile; if there are too few, it might become lazy. The best use of this mode is in combination with &amp;lt;code&amp;gt;GeneralFill&amp;lt;/code&amp;gt; and/or &amp;lt;code&amp;gt;GeneralSurfaceFill&amp;lt;/code&amp;gt;.&lt;br /&gt;
* without a distribution, where nodes always move towards less populated areas. The relaxation magnitude is determined from the annealing factor and the distance to the closest node. This is a simple and stable approach; note, however, that it always converges towards uniformly distributed nodes.&lt;br /&gt;
&lt;br /&gt;
An example of filling and relaxing a 2D domain can be found in the code snippet and figures below. Note the difference between relaxation without a supplied distribution (&amp;lt;xr id=&amp;quot;fig:relax_uniform&amp;quot;/&amp;gt;) and with a supplied distribution (&amp;lt;xr id=&amp;quot;fig:relax_non-uniform&amp;quot;/&amp;gt;). These examples are extreme; relaxation is usually used to convert an almost desirable node distribution into a desirable one.&lt;br /&gt;
More examples can be found in the main repository under tests.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; line&amp;gt;&lt;br /&gt;
// Define domain shape.&lt;br /&gt;
double r = 0.25;&lt;br /&gt;
BallShape&amp;lt;Vec2d&amp;gt; bs({0.5, 0.5}, r);&lt;br /&gt;
&lt;br /&gt;
// Fill the domain discretization with density h.&lt;br /&gt;
auto h = [](Vec2d p){&lt;br /&gt;
    return (0.005 + (p[0] - 0.5) * (p[0] - 0.5) / 2 + (p[1] - 0.5) * (p[1] - 0.5) / 2);&lt;br /&gt;
};&lt;br /&gt;
DomainDiscretization&amp;lt;Vec2d&amp;gt; domain = bs.discretizeBoundaryWithDensity(h);&lt;br /&gt;
GeneralFill&amp;lt;Vec2d&amp;gt; gf;&lt;br /&gt;
gf(domain, h);&lt;br /&gt;
&lt;br /&gt;
// Relax domain to uniform distribution.&lt;br /&gt;
BasicRelax relax;&lt;br /&gt;
relax.iterations(1000).initialHeat(1).finalHeat(0).boundaryProjectionThreshold(0.75)&lt;br /&gt;
     .projectionType(BasicRelax::PROJECT_IN_DIRECTION).numNeighbours(5);&lt;br /&gt;
relax(domain);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:relax_uniform&amp;quot;&amp;gt;&lt;br /&gt;
[[File:relax_uniform.png|thumb|center|800px|&amp;lt;caption&amp;gt; Relaxation of a non-uniform node distribution to a uniform node distribution. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:relax_non-uniform&amp;quot;&amp;gt;&lt;br /&gt;
[[File:relax_non-uniform.png|thumb|center|800px|&amp;lt;caption&amp;gt; Relaxation of a uniform node distribution to a non-uniform node distribution. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=Radial_basis_function-generated_finite_differences_(RBF-FD)&amp;diff=3196</id>
		<title>Radial basis function-generated finite differences (RBF-FD)</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=Radial_basis_function-generated_finite_differences_(RBF-FD)&amp;diff=3196"/>
				<updated>2022-06-16T12:49:06Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: Added clarification to what PHS is since it wasn't mentioned until now.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the computation of RBF-FD weight augmented with polynomials. See also [[Computation of shape functions]] and [[Meshless Local Strong Form Method (MLSM)]] for a more general discussion.&lt;br /&gt;
&lt;br /&gt;
== RBF-FD ==&lt;br /&gt;
$&lt;br /&gt;
\newcommand{\R}{\mathbb{R}}&lt;br /&gt;
\newcommand{\T}{\mathsf{T}}&lt;br /&gt;
\renewcommand{\L}{\mathcal{L}}&lt;br /&gt;
\renewcommand{\b}{\boldsymbol}&lt;br /&gt;
\newcommand{\n}{\b{n}}&lt;br /&gt;
\newcommand{\x}{\b{x}}&lt;br /&gt;
\newcommand{\w}{\b{w}}&lt;br /&gt;
\newcommand{\eps}{\varepsilon}&lt;br /&gt;
\newcommand{\lap}{\nabla^2}&lt;br /&gt;
\newcommand{\dpar}[2]{\frac{\partial #1}{\partial #2}}&lt;br /&gt;
$&lt;br /&gt;
&lt;br /&gt;
Approximations of partial differential operators are the core&lt;br /&gt;
of strong form meshless procedures. Consider a partial differential operator $\L$&lt;br /&gt;
at a point $\x_c$. Approximation of $\L$ at a point $\x_c$&lt;br /&gt;
is sought using an ansatz&lt;br /&gt;
\begin{equation}&lt;br /&gt;
  \label{eq:approx}&lt;br /&gt;
  (\L u)(\x_c) \approx \sum_{i=1}^{n} w_i u(\x_i).&lt;br /&gt;
\end{equation}&lt;br /&gt;
Here $\x_i$ are the neighboring nodes of $\x_c$ which&lt;br /&gt;
constitute its &amp;lt;i&amp;gt;stencil&amp;lt;/i&amp;gt;, $w_i$ are called &amp;lt;i&amp;gt;stencil weights&amp;lt;/i&amp;gt;,&lt;br /&gt;
$n$ is the &amp;lt;i&amp;gt;stencil size&amp;lt;/i&amp;gt; and $u$ is an arbitrary function.&lt;br /&gt;
&lt;br /&gt;
This form of approximation is desirable, since operator $\L$ at point $\x_c$&lt;br /&gt;
is approximated by a linear functional $\w_\L (\x_c)$, assembled of weights $\w_i$&lt;br /&gt;
\begin{equation}&lt;br /&gt;
  \label{eq:approx-vec}&lt;br /&gt;
  \L|_{\x_c} \approx \w_\L (\x_c)^\T&lt;br /&gt;
\end{equation}&lt;br /&gt;
and the approximation is obtained using just a dot product with the function values&lt;br /&gt;
in neighboring nodes. The dependence of $\w_\L (\x_c)$ on $\L$ and $\x_c$ is often omitted&lt;br /&gt;
and written simply as $\w$.&lt;br /&gt;
&lt;br /&gt;
To determine the unknown weights $\w$, equality of \eqref{eq:approx}&lt;br /&gt;
is enforced for a given set of functions. A natural choice is monomials, which&lt;br /&gt;
are also used in FDM, resulting in the Finite Point Method.&lt;br /&gt;
&lt;br /&gt;
In the RBF-FD discretization the equality is satisfied for radial basis functions $\phi_j$.&lt;br /&gt;
Each $\phi_j$, for $j = 1, \ldots, n$ corresponds to one linear equation&lt;br /&gt;
\begin{equation}&lt;br /&gt;
  \sum_{i=1}^{n} w_i \phi_j (\x_i) = (\L \phi_j)(\x_c)&lt;br /&gt;
\end{equation}&lt;br /&gt;
for unknowns $w_i$. Assembling these $n$ equations into matrix form,&lt;br /&gt;
we obtain the following linear system:&lt;br /&gt;
\begin{equation} \label{eq:rbf-system}&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
\phi(\|\x_1 - \x_1\|) &amp;amp;\cdots &amp;amp; \phi(\|\x_n - \x_1\|)  \\&lt;br /&gt;
\vdots &amp;amp; \ddots &amp;amp; \vdots    \\&lt;br /&gt;
\phi(\|\x_1 - \x_n\|) &amp;amp;\cdots &amp;amp; \phi(\|\x_n - \x_n\|)&lt;br /&gt;
\end{bmatrix}&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
w_1 \\ \vdots \\ w_n&lt;br /&gt;
\end{bmatrix}&lt;br /&gt;
=&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
(\L\phi(\|\x-\x_1\|))|_{\x=\x_c} \\&lt;br /&gt;
\vdots \\&lt;br /&gt;
(\L\phi(\|\x-\x_n\|))|_{\x=\x_c} \\&lt;br /&gt;
\end{bmatrix},&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $\phi_j$ have been expanded for clarity.&lt;br /&gt;
The above system will be written more compactly as&lt;br /&gt;
\begin{equation} \label{eq:rbf-system-c}&lt;br /&gt;
A \w = \b \ell_\phi.&lt;br /&gt;
\end{equation}&lt;br /&gt;
The matrix $A$ is symmetric, and for some $\phi$ even positive&lt;br /&gt;
definite.&amp;lt;ref&amp;gt;Wendland, Holger. Scattered data approximation. Vol. 17. Cambridge university press, 2004.&amp;lt;/ref&amp;gt; The [[Solving linear systems]] page has more details on numerical techniques for solving.&lt;br /&gt;
&lt;br /&gt;
== Monomial augmentation ==&lt;br /&gt;
Using approximations that contain only RBFs can lead to stability issues with&lt;br /&gt;
conditioning under refinement or failure to converge due to stagnation errors.&lt;br /&gt;
To improve accuracy and convergence, the approximation can be augmented with polynomials.&lt;br /&gt;
&lt;br /&gt;
Let $p_1, \ldots, p_s$ be polynomials forming the basis of the space of $d$-dimensional&lt;br /&gt;
multivariate polynomials up to and including total degree $m$, with $s = \binom{m+d}{d}$.&lt;br /&gt;
&lt;br /&gt;
Since enforcing exactness of \eqref{eq:approx}&lt;br /&gt;
for additional functions would result in an overdetermined system, these additional constraints are&lt;br /&gt;
enforced by extending \eqref{eq:rbf-system-c}&lt;br /&gt;
as&lt;br /&gt;
\begin{equation} \label{eq:rbf-system-aug}&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
A &amp;amp; P \\&lt;br /&gt;
P^\T &amp;amp; 0&lt;br /&gt;
\end{bmatrix}&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
\w \\ \b \lambda&lt;br /&gt;
\end{bmatrix}&lt;br /&gt;
=&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
\b \ell_{\phi} \\ \b \ell_{p}&lt;br /&gt;
\end{bmatrix}\!\!,\&lt;br /&gt;
P = \begin{bmatrix}&lt;br /&gt;
p_1(\x_1) &amp;amp; \cdots &amp;amp; p_s(\x_1) \\&lt;br /&gt;
\vdots &amp;amp; \ddots &amp;amp; \vdots        \\&lt;br /&gt;
p_1(\x_n) &amp;amp; \cdots &amp;amp; p_s(\x_n) \\&lt;br /&gt;
\end{bmatrix}\!\!,\&lt;br /&gt;
\b \ell_p = \begin{bmatrix}&lt;br /&gt;
(\L p_1)|_{\x=\x_c} \\&lt;br /&gt;
\vdots \\&lt;br /&gt;
(\L p_s)|_{\x=\x_c} \\&lt;br /&gt;
\end{bmatrix}&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $P$ is an $n \times s$ matrix of the polynomials evaluated at the stencil nodes and&lt;br /&gt;
$\b \ell_p$ is the vector of values assembled by applying the considered operator $\L$ to&lt;br /&gt;
the polynomials at $\x_c$.&lt;br /&gt;
&lt;br /&gt;
Note that the equation $P^\T \w = \b \ell_p$ contains exactly the exactness constraints for&lt;br /&gt;
$p_j$. However, the introduction of parameters $\lambda_j$&lt;br /&gt;
causes \eqref{eq:approx} to not be exact for $\phi_i$ anymore. In fact, it was&lt;br /&gt;
shown to be equivalent to the following constrained&lt;br /&gt;
minimisation problem&lt;br /&gt;
\begin{equation}&lt;br /&gt;
  \min_{\w} \left(\frac{1}{2} \w^\T A \w - \w^\T \b \ell_{\phi}\right), \text{ subject to }&lt;br /&gt;
  P^\T \w = \b \ell_{p}&lt;br /&gt;
\end{equation}&lt;br /&gt;
and parameters $\b \lambda$ can be interpreted as Lagrangian multipliers.&lt;br /&gt;
&lt;br /&gt;
Weights obtained by solving \eqref{eq:rbf-system-aug}&lt;br /&gt;
are taken as approximations of $\L$ at $\x_c$, while values $\b \lambda$ are discarded.&lt;br /&gt;
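&lt;br /&gt;
As a small hand-worked sanity check (an illustrative example, not taken from the library), consider the 1D stencil $\x_1 = -h$, $\x_2 = 0$, $\x_3 = h$ around $\x_c = 0$, the operator $\L = \frac{d^2}{dx^2}$, the polyharmonic RBF $\phi(r) = r^3$ and augmentation with the monomials $1, x, x^2$ ($m = 2$, $s = 3$). Since $s = n$ and $P^\T$ is invertible, the constraints $P^\T \w = \b \ell_p$ alone determine the weights:&lt;br /&gt;
\begin{equation}&lt;br /&gt;
w_1 + w_2 + w_3 = 0, \quad -h w_1 + h w_3 = 0, \quad h^2 (w_1 + w_3) = 2,&lt;br /&gt;
\end{equation}&lt;br /&gt;
giving $\w = (1, -2, 1)/h^2$, i.e. the classical second-order finite difference stencil for the 1D Laplacian.&lt;br /&gt;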
&lt;br /&gt;
== Example ==&lt;br /&gt;
The $\ell_\infty$ error of solving the Poisson equation on a unit ball using RBF-FD with the polyharmonic spline (PHS) $\phi(r) = r^3$ and various orders of monomial augmentation is shown below.&lt;br /&gt;
&amp;lt;ref name=&amp;quot;SlakRef&amp;quot;&amp;gt;Slak, J., &amp;amp; Stojanovič B. &amp;amp; Kosec, G. (2019). High order RBF-FD approximations with application to a scattering&lt;br /&gt;
  problem. SpliTech 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exactness constraints for monomials often produce high-order methods, since the local error of the function approximation is small (imagine that the Taylor expansion is exact up to a high order).&lt;br /&gt;
&lt;br /&gt;
[[File:poisson_augmentation_convergence.png|600px]]&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=Integrators_for_time_stepping&amp;diff=3195</id>
		<title>Integrators for time stepping</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=Integrators_for_time_stepping&amp;diff=3195"/>
				<updated>2022-06-16T12:45:03Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: Typo fixes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes how to solve ordinary differential equations numerically with examples from our library. &lt;br /&gt;
&lt;br /&gt;
== Introduction and notation ==&lt;br /&gt;
&lt;br /&gt;
We are solving an initial value problem, given as &lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
\begin{align*}&lt;br /&gt;
  \dot{y}(t) &amp;amp;= f(t, y) \\&lt;br /&gt;
  y(t_0) &amp;amp;= y_0&lt;br /&gt;
\end{align*}&lt;br /&gt;
$&lt;br /&gt;
&lt;br /&gt;
where $y$ is the unknown (possibly vector) function, $t_0$ is the start time, $f$ is the derivative (the function we wish to integrate) and $y_0$ is the initial value of $y$.&lt;br /&gt;
Numerically, we usually choose a time step $\Delta t$ and integrate the function up to a certain time $t_{\max}$. Times of subsequent time steps are denoted with $t_i$ and function values with $y_i$. &lt;br /&gt;
&lt;br /&gt;
The simplest method is explicit Euler's method:&lt;br /&gt;
$y_{n+1} = y_{n} + \Delta t \, f(t_n, y_n)$&lt;br /&gt;
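&lt;br /&gt;
For example, applying Euler's method to the test equation $\dot{y} = -y$, $y(0) = 1$ gives&lt;br /&gt;
&lt;br /&gt;
$y_{n+1} = (1 - \Delta t) y_n, \quad \text{i.e.} \quad y_n = (1 - \Delta t)^n \approx e^{-t_n},$&lt;br /&gt;
&lt;br /&gt;
which converges to the exact solution as $\Delta t \to 0$, but only with first-order accuracy.&lt;br /&gt;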
&lt;br /&gt;
== Explicit (single step) methods ==&lt;br /&gt;
&lt;br /&gt;
A family of single step methods are explicit Runge-Kutta methods, which use intermediate derivative values to give a better estimate of the value $y_{n+1}$ at the next step.&lt;br /&gt;
&lt;br /&gt;
It is given by&lt;br /&gt;
&lt;br /&gt;
$y_{n+1} = y_n + h \displaystyle \sum_{i=1}^s \beta_i k_i$&lt;br /&gt;
&lt;br /&gt;
where &lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
\begin{align*}&lt;br /&gt;
 k_1 &amp;amp; = f(t_n, y_n), \\&lt;br /&gt;
 k_2 &amp;amp; = f(t_n+\gamma_2h, y_n+h(\alpha_{21}k_1)), \\&lt;br /&gt;
 k_3 &amp;amp; = f(t_n+\gamma_3h, y_n+h(\alpha_{31}k_1+\alpha_{32}k_2)), \\&lt;br /&gt;
     &amp;amp; \ \ \vdots \\&lt;br /&gt;
 k_s &amp;amp; = f(t_n+\gamma_sh, y_n+h(\alpha_{s1}k_1+\alpha_{s2}k_2+\cdots+\alpha_{s,s-1}k_{s-1})).&lt;br /&gt;
\end{align*}&lt;br /&gt;
$&lt;br /&gt;
&lt;br /&gt;
To specify a particular method, one needs to provide the integer $s$ (the number of stages), and the coefficients $\alpha_{ij}$, $\beta_i$ and $\gamma_i$. This structure is known as the Butcher's tableau of the method: &lt;br /&gt;
$&lt;br /&gt;
\begin{array}{l|l}&lt;br /&gt;
  \gamma &amp;amp; \alpha \\ \hline&lt;br /&gt;
  &amp;amp; \beta^\T&lt;br /&gt;
\end{array}&lt;br /&gt;
$&lt;br /&gt;
&lt;br /&gt;
The first order method is Euler's method above, while methods of very high order are also available.&lt;br /&gt;
The most famous is RK4, the Runge-Kutta method of fourth order. &lt;br /&gt;
&lt;br /&gt;
$\begin{align*}&lt;br /&gt;
  &amp;amp; {{k}_{1}}=f({{t}_{i}},{{y}_{i}}) \\ &lt;br /&gt;
 &amp;amp; {{k}_{2}}=f({{t}_{i}}+\Delta t/2,{{y}_{i}}+\Delta t{{k}_{1}}/2) \\ &lt;br /&gt;
 &amp;amp; {{k}_{3}}=f({{t}_{i}}+\Delta t/2,{{y}_{i}}+\Delta t{{k}_{2}}/2) \\ &lt;br /&gt;
 &amp;amp; {{k}_{4}}=f({{t}_{i}}+\Delta t,{{y}_{i}}+\Delta t{{k}_{3}}) \\ &lt;br /&gt;
 &amp;amp; {{y}_{i+1}}={{y}_{i}}+\Delta t/6({{k}_{1}}+2{{k}_{2}}+2{{k}_{3}}+{{k}_{4}}). \\ &lt;br /&gt;
\end{align*}$&lt;br /&gt;
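&lt;br /&gt;
Its Butcher's tableau is&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
\begin{array}{c|cccc}&lt;br /&gt;
0 &amp;amp; &amp;amp; &amp;amp; &amp;amp; \\&lt;br /&gt;
1/2 &amp;amp; 1/2 &amp;amp; &amp;amp; &amp;amp; \\&lt;br /&gt;
1/2 &amp;amp; 0 &amp;amp; 1/2 &amp;amp; &amp;amp; \\&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; \\ \hline&lt;br /&gt;
 &amp;amp; 1/6 &amp;amp; 1/3 &amp;amp; 1/3 &amp;amp; 1/6&lt;br /&gt;
\end{array}&lt;br /&gt;
$&lt;br /&gt;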
&lt;br /&gt;
&lt;br /&gt;
A more complete list can be found [https://en.wikipedia.org/wiki/List_of_Runge%E2%80%93Kutta_methods here].&lt;br /&gt;
&lt;br /&gt;
Most common methods are implemented in the [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/namespacemm_1_1integrators_1_1Explicit.html Explicit] namespace.&lt;br /&gt;
&lt;br /&gt;
== Explicit multistep methods ==&lt;br /&gt;
These methods are called Adams–Bashforth methods; the previous $s$ values are used to estimate the next value with a higher order. These methods gain computational efficiency by reusing already computed values, decreasing the number of function evaluations needed.&lt;br /&gt;
&lt;br /&gt;
The explicit version is stated as $y_{n+s} = y_{n+s-1} + h \sum_{i=1}^s b_i f(t_{n+s-i}, y_{n+s-i})$. The values $b_i$ are called weights.&lt;br /&gt;
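&lt;br /&gt;
For instance, the two-step Adams–Bashforth method reads&lt;br /&gt;
&lt;br /&gt;
$y_{n+2} = y_{n+1} + h \left( \tfrac{3}{2} f(t_{n+1}, y_{n+1}) - \tfrac{1}{2} f(t_n, y_n) \right)$&lt;br /&gt;
&lt;br /&gt;
and is second-order accurate while requiring only one new evaluation of $f$ per step.&lt;br /&gt;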
&lt;br /&gt;
The drawbacks of these methods are that another method of the same order is needed to compute the initial time steps and that they cannot be easily modified to work with adaptive time steps.&lt;br /&gt;
&lt;br /&gt;
Most common methods are implemented in [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/namespacemm_1_1integrators_1_1ExplicitMultistep.html ExplicitMultistep] namespace.&lt;br /&gt;
&lt;br /&gt;
== Adaptive methods ==&lt;br /&gt;
Adaptive methods estimate the error at each time step and decrease or increase the step as necessary. &lt;br /&gt;
Usually they have some additional parameters, such as $\Delta t_{\max}$ and $\Delta t_{\min}$, representing the maximal and minimal allowed time steps, and $\varepsilon$, representing the tolerance. &lt;br /&gt;
&lt;br /&gt;
A family of adaptive methods can be derived from the Runge-Kutta methods above. Using methods of order $k$ and $k+1$ that differ only in the weights, called $\beta$ and $\beta^\ast$, the error can be estimated using [https://en.wikipedia.org/wiki/Richardson_extrapolation Richardson extrapolation]:&lt;br /&gt;
&lt;br /&gt;
$e_{n+1} = y_{n+1} - y^*_{n+1} = h\sum_{i=1}^s (\beta_i - \beta^*_i) k_i$.&lt;br /&gt;
&lt;br /&gt;
If $\|e_{n+1}\| &amp;lt; \varepsilon$, the time step can be increased, otherwise it is decreased. This can be achieved by scaling $\Delta t$ in accordance with $\varepsilon / \|e_{n+1}\|$.&lt;br /&gt;
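&lt;br /&gt;
A common update rule (one of several possibilities) for a method of order $k$ is&lt;br /&gt;
&lt;br /&gt;
$\Delta t \leftarrow 0.9 \, \Delta t \left( \frac{\varepsilon}{\|e_{n+1}\|} \right)^{1/(k+1)},$&lt;br /&gt;
&lt;br /&gt;
where $0.9$ is a safety factor and the result is clamped to $[\Delta t_{\min}, \Delta t_{\max}]$.&lt;br /&gt;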
&lt;br /&gt;
These methods are implemented in Matlab as ode45 and similar. They can be found [https://en.wikipedia.org/wiki/List_of_Runge%E2%80%93Kutta_methods#Embedded_methods here], with a sample Matlab implementation of the Cash-Karp method [https://github.com/jureslak/numerika-fmf/blob/master/ninde/hw2/CashKarp.m here].&lt;br /&gt;
&lt;br /&gt;
These methods are not implemented yet, but will probably be added in the future.&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
A simple ODE:&lt;br /&gt;
Solve $\dot{x} = -y, \dot{y} = x$ with $x(0) = 1, y(0) = 0$. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
// define the function&lt;br /&gt;
std::function&amp;lt;Eigen::VectorXd(double, const Eigen::VectorXd&amp;amp;)&amp;gt; func =&lt;br /&gt;
        [](double, const Eigen::VectorXd&amp;amp; y) {&lt;br /&gt;
            Eigen::VectorXd r(2);&lt;br /&gt;
            r(0) = -y(1);&lt;br /&gt;
            r(1) = y(0);&lt;br /&gt;
            return r;&lt;br /&gt;
        };&lt;br /&gt;
&lt;br /&gt;
double tmax = 2*M_PI;&lt;br /&gt;
double step = 0.1;&lt;br /&gt;
Eigen::VectorXd y0(2); y0 &amp;lt;&amp;lt; 1.0, 0.0;&lt;br /&gt;
auto integrator = integrators::Explicit::RK4().solve(func, 0.0, tmax, step, y0);  // class representing the method and storing the function and initial conditions&lt;br /&gt;
&lt;br /&gt;
auto stepper = integrator.begin();  // iterator over the solver, integrating $f$ step by step&lt;br /&gt;
&lt;br /&gt;
while (stepper) {  // cast to bool tells us if we are done&lt;br /&gt;
    ++stepper;&lt;br /&gt;
    // do something with step&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// multiple steppers can exist at the same time&lt;br /&gt;
auto stepper2 = integrator.begin(); &lt;br /&gt;
&lt;br /&gt;
// we can also use a range-based for loop to integrate in time&lt;br /&gt;
for (auto&amp;amp; step : integrator) {&lt;br /&gt;
   //  do something with step&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Heat equation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
// after domain and operators are created&lt;br /&gt;
auto dv_dt = [&amp;amp;](double, const VecXd&amp;amp; y) {&lt;br /&gt;
    Eigen::VectorXd der(y.size());&lt;br /&gt;
    for (int c : interior) {&lt;br /&gt;
        der[c] = op.lap(y, c);&lt;br /&gt;
    }&lt;br /&gt;
    for (int c : boundary) {&lt;br /&gt;
        der[c] = 0;&lt;br /&gt;
    }&lt;br /&gt;
    return der;&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
auto integrator = integrators::Explicit::RK4().solve(dv_dt, 0.0, time, dt, T1);  // choose your own integrator&lt;br /&gt;
for (auto&amp;amp; step : integrator) {&lt;br /&gt;
   // do something&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://en.wikipedia.org/wiki/Lorenz_system Lorenz system:]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// System parameters&lt;br /&gt;
double sigma = 10.;&lt;br /&gt;
double rho = 28.;&lt;br /&gt;
double beta = 8./3.;&lt;br /&gt;
&lt;br /&gt;
auto dy_dt = [&amp;amp;](double, const Vec3d&amp;amp; y) {&lt;br /&gt;
    Vec3d der(3);&lt;br /&gt;
    der(0) = sigma*(y(1)-y(0));&lt;br /&gt;
    der(1) = y(0)*(rho-y(2))-y(1);&lt;br /&gt;
    der(2) = y(0)*y(1)-beta*y(2);&lt;br /&gt;
    return der;&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
double dt = 0.001; // Timestep&lt;br /&gt;
Vec3d y0 = {1, 1, 1}; // Initial condition&lt;br /&gt;
&lt;br /&gt;
auto integrator = integrators::Explicit::RK4().solve(dy_dt, 0, 100, dt, y0);&lt;br /&gt;
for (auto&amp;amp; step : integrator) {&lt;br /&gt;
    auto t = step.time();&lt;br /&gt;
    auto y = step.value();&lt;br /&gt;
    // output values&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:lorenz_system&amp;quot;&amp;gt;&lt;br /&gt;
[[File:lorenz.png|500px|thumb|upright=2|center|&amp;lt;caption&amp;gt;A sample solution of the Lorenz system for the values $\sigma=10$, $\rho=28$ and $\beta = 8/3$. The two trajectories (one in orange, the other in blue) correspond to two initial points that differ by a small amount in the $z$-coordinate.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=Ghost_nodes_(theory)&amp;diff=3194</id>
		<title>Ghost nodes (theory)</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=Ghost_nodes_(theory)&amp;diff=3194"/>
				<updated>2022-06-16T12:40:37Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: Typo fixes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;figure id=&amp;quot;fig:tests&amp;quot;&amp;gt;&lt;br /&gt;
[[File:ghost.png|417px|thumb|upright=2|alt=Ghost nodes schema.|&amp;lt;caption&amp;gt;Ghost node configuration at the domain boundary.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ghost nodes are a technique for discretizing (mostly) Neumann boundary conditions in PDEs. See the [[Ghost nodes|example.]]&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Ghost nodes are a technique used for discretizing Neumann boundary conditions in FDM. To be able to use the central difference for the first derivative, an additional point, called a ghost point, is introduced outside the domain boundary. &lt;br /&gt;
The unknown function value at the ghost node is added as a variable. At the boundary node, the Neumann condition is enforced, as well as the equation itself (two equations for two unknowns, the ghost and the boundary function value).&lt;br /&gt;
&lt;br /&gt;
== Meshless setting (implicit) ==&lt;br /&gt;
Consider the problem $\mathcal{L}u = f$ with Neumann boundary conditions $\frac{\partial u}{\partial n} = g$ on some portion $\Gamma_1$ of the boundary and &lt;br /&gt;
Dirichlet conditions $u = u_0$ on portion $\Gamma_2$. Denote the number of internal nodes with $N_i$, number of Neumann boundary nodes with $N_n$ and number of Dirichlet boundary nodes with $N_d$. &lt;br /&gt;
The total number of nodes $N$ is equal to $N = N_i + N_n + N_d$ and so is the number of unknowns, representing solution values at these nodes.&lt;br /&gt;
&lt;br /&gt;
For each Neumann node $p$, additional ''ghost'' or ''fictitious'' nodes are placed outside the domain, as seen in the figure on the right. This is usually done &lt;br /&gt;
so that one ghost node is added for each boundary node, spaced $h$ away in the normal direction, where $h$ is the local nodal spacing.&lt;br /&gt;
This increases the number of nodes and unknowns by $N_n$.&lt;br /&gt;
Stencils and stencil weights are computed as before, and ghost nodes are included in the stencils. For each node $p_j$ denote the indices of its stencil nodes by $I_j$.&lt;br /&gt;
We will denote the computed stencil weights for operators with $w_\mathcal{L,j}$ and $w_{\frac{\partial }{\partial n},j}$.&lt;br /&gt;
&lt;br /&gt;
We have the same equations as before:&lt;br /&gt;
&lt;br /&gt;
* $N_i$ for interior nodes: $w_{\mathcal{L},j} \cdot \boldsymbol{u}_{I_j} = f_j$&lt;br /&gt;
* $N_d$ for Dirichlet nodes: $u_j = u_{0,j}$&lt;br /&gt;
* $N_n$ for Neumann nodes: $w_{\frac{\partial }{\partial n},j} \cdot \boldsymbol{u}_{I_j} = g_j$&lt;br /&gt;
&lt;br /&gt;
Additional $N_n$ equations for ghost nodes are required. They are obtained by requiring that the PDE itself also holds at the boundary nodes, i.e., we add &lt;br /&gt;
&lt;br /&gt;
* additional $N_n$ equations for Neumann nodes: $w_{\mathcal{L},j} \cdot \boldsymbol{u}_{I_j} = f_j$.&lt;br /&gt;
&lt;br /&gt;
Note that no requirements about the PDE or BCs are made at the ghost nodes; they are simply involved as stencil nodes in the two equations for Neumann nodes (and possibly others).&lt;br /&gt;
&lt;br /&gt;
All these equations are assembled into a sparse system, which is then solved; the values obtained at the ghost nodes can simply be discarded.&lt;br /&gt;
&lt;br /&gt;
Additionally, more than one layer of ghost nodes can be added, in which case additional equations on the boundary are needed. Sometimes, equations such as $\mathcal{L}^2 u = \mathcal{L}f$ are used.&lt;br /&gt;
&lt;br /&gt;
For an example of ghost nodes generation and use in Medusa see the [[Ghost nodes]] example.&lt;br /&gt;
&lt;br /&gt;
=== Explicit version ===&lt;br /&gt;
&lt;br /&gt;
When explicit time stepping is used, Neumann boundary conditions and ghost nodes are more difficult to implement.&lt;br /&gt;
We assume that the values at the interior nodes in the $n$-th iteration are already known and denoted by $u_j^n$, as well as the &lt;br /&gt;
boundary and ghost values from the previous iteration $u_j^{n-1}$. &lt;br /&gt;
&lt;br /&gt;
Consider a boundary node $p_j$ with an associated ghost node $p_{\gamma_j}$ and stencil indices $I_j = \{j, \gamma_j, i_1, \ldots, i_{n-2}\}$.&lt;br /&gt;
Denote the stencil weights at $p_j$ for $\mathcal{L}$ as $w_{\mathcal{L},j}$ and for $\frac{\partial }{\partial n}$ as $w_{\frac{\partial }{\partial n},j}$.&lt;br /&gt;
&lt;br /&gt;
We have two equations at this node:&lt;br /&gt;
&lt;br /&gt;
$$\sum_{i \in I_j} w_{\mathcal{L},j,i} u_i^{n} = f_j$$&lt;br /&gt;
$$\sum_{i \in I_j} w_{\frac{\partial }{\partial n},j, i} u_i^{n} = g_j$$&lt;br /&gt;
&lt;br /&gt;
A system of these $2N_n$ equations can be solved in each iteration to obtain the next values of the ghost and boundary nodes. &lt;br /&gt;
&lt;br /&gt;
Furthermore, this system can be made explicit by performing a single step of Gauss-Seidel iteration. We use the second equation to compute the &lt;br /&gt;
new value of $u_{\gamma_j}$ and the first to compute the new value of $u_j$.&lt;br /&gt;
&lt;br /&gt;
For each ghost node $p_{\gamma_j}$:&lt;br /&gt;
$$u_{\gamma_j}^n = \frac{1}{w_{\frac{\partial }{\partial n},j,\gamma_j}} \left[ g_j - w_{\frac{\partial }{\partial n},j,j} u_j^{n-1} - \sum_{k=1}^{n-2}w_{\frac{\partial }{\partial n},j,i_k} u_{i_k}^{n|n-1} \right]$$&lt;br /&gt;
&lt;br /&gt;
For each boundary node $p_j$:&lt;br /&gt;
$$u_{j}^n = \frac{1}{w_{\mathcal{L},j,j}} \left[ f_j - w_{\mathcal{L},j,\gamma_j} u_{\gamma_j}^{n} - \sum_{k=1}^{n-2}w_{\mathcal{L},j,i_k} u_{i_k}^{n|n-1} \right]$$&lt;br /&gt;
&lt;br /&gt;
The time index $n|n-1$ refers to $n$ if the value of node $i_k$ has already been computed in this iteration (e.g. it is an internal node or an already computed ghost or boundary node) or to $n-1$ if the value is not yet known.&lt;br /&gt;
&lt;br /&gt;
We are assuming that the stencil weight of the ghost node is nonzero in the discretization of the Neumann BC and that the weight of the boundary node is nonzero in the discretization of $\mathcal{L}$. If this is not the case, the equations need to be rearranged differently.&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=3192</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=3192"/>
				<updated>2022-06-16T12:30:48Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: Grammar&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:''k''-d tree}}&lt;br /&gt;
In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh based approaches, support domains can be determined from the mesh connectivity relations. &lt;br /&gt;
However, in meshless methods, the stencil/support nodes are usually chosen as the nearest $n$ visible nodes. &lt;br /&gt;
&lt;br /&gt;
In order to compute nearest neighbors effectively, fast spatial search structures have been developed, such as [https://en.wikipedia.org/wiki/Octree octree], [https://en.wikipedia.org/wiki/Ball_tree ball tree], [https://en.wikipedia.org/wiki/Cover_tree cover tree] and [https://en.wikipedia.org/wiki/K-d_tree $k$-d tree]. All of these structures first take a list of $N$ points and build a search structure, enabling  subsequent queries for $n$ nearest neighbors to be answered in $O(n\log N)$ time.&lt;br /&gt;
The $k$-d tree structure is described below to give the reader a basic idea of how such spatial structures store data.&lt;br /&gt;
&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
A $k$-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting plane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points. Example: in 3 dimensions we have the $x$, $y$ and $z$ axes. At the root level we decide to divide the points according to their $x$ values (we could also do it according to $y$ or $z$ values). The usual criterion for the division of points is the median. Thus, at the root level, we look for the median point by looking at the $x$ coordinates of all points. We put the median point into the root node. The remaining points are put into the left and right subtrees: points with a smaller $x$ coordinate go into the left subtree and points with a larger $x$ coordinate into the right subtree (or vice versa). We construct the two children of the root from the two subgroups we just created. The procedure is the same, except that we divide the space according to the $y$ or $z$ coordinate. We continue this way until we run out of points. The axes used for the median search repeat periodically.&lt;br /&gt;
&lt;br /&gt;
Often, dynamic search structures are needed as well, which support node insertion and removal. Initially, we used the [http://www.cs.umd.edu/~mount/ANN/ ANN] and [https://www.savarese.com/software/libssrckdtree/ libssrckdtree]&lt;br /&gt;
libraries for static and dynamic spatial search structures, respectively. Later, we tested four different publicly available C++ libraries that support at least static spatial search:&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/~mount/ANN/ ANN]&lt;br /&gt;
* [https://www.savarese.com/software/libssrckdtree/ libssrckdtree]&lt;br /&gt;
* [http://spatial.sourceforge.net/ spatial]&lt;br /&gt;
* [https://github.com/jlblancoc/nanoflann/ nanoflann]&lt;br /&gt;
&lt;br /&gt;
Two typical usage examples for these structures were tested. First, we tested a typical example of finding support domains of $n$ nodes for all nodes, which includes building a tree from a set of points and finding $n$ closest nodes for each point. The results are shown below.&lt;br /&gt;
&lt;br /&gt;
[[File:kdtree_find_support_time.png]]&lt;br /&gt;
&lt;br /&gt;
Second, we tested how the structures performed during the node generation process, as described on the [[Positioning of computational nodes]] page. The results are shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[File:kdtree_pds_time.png]]&lt;br /&gt;
&lt;br /&gt;
Besides speed, which was the main factor, we also considered the number of dependencies a library has and how general it is regarding distances and data types.&lt;br /&gt;
Overall, we decided to use the [https://github.com/jlblancoc/nanoflann/ nanoflann] library in [http://e6.ijs.si/medusa/ medusa]. Our wrappers are available &lt;br /&gt;
as [http://e6.ijs.si/medusa/docs/html/classmm_1_1KDTree.html KDTree] and [http://e6.ijs.si/medusa/docs/html/classmm_1_1KDTreeMutable.html KDTreeMutable].&lt;br /&gt;
&lt;br /&gt;
The results and reasoning are described in more detail in the technical report [[:File:report.pdf]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
{\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) determined with coordinates (2nd column), after sorting by $x$ (3rd column), after sorting sub lists by $y$ (4th column) and after sorting sub sub lists by $x$ again (5th column). The nodes nearest to medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of 2D tree construction, the list of nodes is sorted by their $x$ coordinate, which is shown in third column of Table \ref{tab:kD_nodes}. Then a node with median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, the one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and the one for $x$ above or equal the median, i.e. $x\geq 0.6$. The two sub sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub lists of nodes are sorted by their $y$ coordinate, which is shown in fourth column of Table \ref{tab:kD_nodes}. The median coordinates $y$ are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$ are taken as roots for the second level of 2D tree and are shown in bold and used to further split the tree. The resulting four sub sub sets of nodes are shown in the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub sub lists are sorted again by their $x$ coordinate, with the result shown in fifth column. Four roots are obtained with the coordinate $x$ nearest to medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by list of several nodes, because such a fine grained distribution of leaves as in the presented example is often not beneficial from the computational efficiency point of view.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing: theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	<entry>
		<id>https://e6.ijs.si/medusa/wiki/index.php?title=Refinement_of_the_nodal_distribution&amp;diff=3191</id>
		<title>Refinement of the nodal distribution</title>
		<link rel="alternate" type="text/html" href="https://e6.ijs.si/medusa/wiki/index.php?title=Refinement_of_the_nodal_distribution&amp;diff=3191"/>
				<updated>2022-06-16T12:29:04Z</updated>
		
		<summary type="html">&lt;p&gt;AndrejKP: Typo fix. Their is -&amp;gt; there is&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here we consider possible meshless refinement algorithms (sometimes also called adaptive cloud refinement). The refinement mechanisms we have so far studied include:&lt;br /&gt;
* refinement based on closest node distance&lt;br /&gt;
* refinement based on averaged (inter-)node distance&lt;br /&gt;
* refinement based on half-links&lt;br /&gt;
&lt;br /&gt;
Here we only want to compare the quality of the refined grids and have not tied the refinement algorithm to an error indicator; thus, we only study the node insertion process by refining the whole grid. &lt;br /&gt;
&lt;br /&gt;
The refinement routine takes a range of nodes (e.g. a subregion of the domain) together with the refinement parameters and generates new nodes around the old ones. Special care must be taken with refinement of the boundary nodes. Points have to be selected on the actual boundary either analytically considering the geometry or with a numerical root finder such as bisection. &lt;br /&gt;
&lt;br /&gt;
==Problem description==&lt;br /&gt;
&lt;br /&gt;
To compare the node refinement mechanisms we study the process of reaction-diffusion in an infinite cylindrical catalyst pellet (infinite in the $z$-dimension). Since the pellet is infinite in one dimension this problem simplifies to a 2D problem (in the $xy$-plane). For a catalyst pellet of radius $R$ centered at $(x,y) = (0,0)$ and the reactant undergoing a first order reaction we must solve the equation&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\b{\nabla}^2 C - {M_T}^2 C = 0,&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $C$ is the concentration of the reactant, $M_T = R\sqrt{k/D}$ is known as the Thiele modulus, and $k$ and $D$ represent the reaction rate constant and the diffusivity of the reacting species. The boundary condition for this problem is \[C(R) = C_s.\] The analytical solution can be found easily using cylindrical coordinates and is given by&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\frac{C(r)}{C_s} = \frac{I_0(r M_T)}{I_0(R M_T)}, &lt;br /&gt;
\end{equation}&lt;br /&gt;
where $I_0(r)$ is the modified Bessel function of the first kind (this function is available in the Boost library, as well as in scripting languages such as Python or MATLAB). The conversion from Cartesian to cylindrical coordinates is given by \[r = \sqrt{x^2+y^2}.\]&lt;br /&gt;
&lt;br /&gt;
==Error indicators==&lt;br /&gt;
&lt;br /&gt;
To compare the quality of the refined meshes for the described problem case we look at different error criteria including the max norm $L_\infty$ defined as&lt;br /&gt;
\begin{equation}&lt;br /&gt;
L_\infty = \mathrm{max}_i \left|C_i^\mathrm{numerical} - C_i^\mathrm{analytical}\right|,&lt;br /&gt;
\end{equation}&lt;br /&gt;
the $L_2$ norm per node defined as&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\bar{L_2} = \frac{\sqrt{\sum^N_{i = 1}\left(C_i^\mathrm{numerical} - C_i^\mathrm{analytical}\right)^2}}{N},&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $N$ is the number of nodes (and pertinent equations) in the domain.&lt;br /&gt;
&lt;br /&gt;
We also measure the number of iterations required by the sparse BiCGSTAB solver to reach convergence and the estimated error of solving the system of equations.&lt;br /&gt;
&lt;br /&gt;
== Closest node ==&lt;br /&gt;
&lt;br /&gt;
For a given node $\b{x}_0 = (x_0,y_0)$:&lt;br /&gt;
&lt;br /&gt;
# find the closest node $\b{x}_1 = (x_1,y_1)$   &lt;br /&gt;
# calculate the half distance between the two nodes \[d = |\b{x}_1 - \b{x}_0|/2\]&lt;br /&gt;
# randomly select up to 6 new nodes on the circle with center $\b{x}_0$ and radius $d$, while simultaneously making sure there is a minimal inter-nodal distance $d$ between the new nodes (the case of 6 nodes is the limit, since it produces a regular hexagon; in practice this never occurs due to the Monte Carlo node selection procedure). &lt;br /&gt;
&lt;br /&gt;
For boundary points we first select 2 points lying on the boundary of the domain and only then points lying inside the domain. Due to geometrical constraints, boundary points will usually end up with 3 new nodes (in the case of straight boundaries we could end up with 4, which would be the previously discussed hexagon limit).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_refinement_1&amp;quot;&amp;gt;&lt;br /&gt;
[[File:closest_node.png|1000px|thumb|center|&amp;lt;caption&amp;gt;Refinement based on the closest node approach (the initial unrefined grid is on the left). In the second refinement step an erroneous point has appeared from an internal point that was too close to the boundary. Also noticeable is the clustering of points on the boundary.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Average radius ==&lt;br /&gt;
&lt;br /&gt;
Input parameters: $f$ and $l_s$&lt;br /&gt;
&lt;br /&gt;
For a given node $\b{x}_0$:&lt;br /&gt;
# find the $l_s$ (an integer from 1 to 7) closest nodes&lt;br /&gt;
# calculate the average distance $\bar{d}$ to the $l_s$ closest nodes&lt;br /&gt;
# randomly select up to 6 new nodes on the circle with center $\b{x}_0$ and radius $f\cdot\bar{d}$ where $f$ is the radius fraction that lies between 0.2 (leads to clustering) and 0.8. Only allow nodes that are separated by the distance $f \cdot \bar{d}$. &lt;br /&gt;
&lt;br /&gt;
''(note that in the case $l_s = 1$ and $f = 0.5$ the average radius mechanism becomes equal to the closest node refinement approach described above)''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_refinement_2&amp;quot;&amp;gt;&lt;br /&gt;
[[File:new_average_radius.png|1000px|thumb|center|&amp;lt;caption&amp;gt;Refinement based on average radius approach (initial unrefined grid is on the left). The parameters are $l_s = 5$ closest nodes in average radius calculation and points placed at radius fraction $f = 0.5$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Half-links ==&lt;br /&gt;
&lt;br /&gt;
''Warning: this is still a prototype''&lt;br /&gt;
&lt;br /&gt;
Input parameters: $l_s$, $d_m$&lt;br /&gt;
&lt;br /&gt;
For a given node $\b{x}_0$:&lt;br /&gt;
# find the $l_s$ (an integer from 1 to 7) closest nodes $\b{x}_i$&lt;br /&gt;
# select new nodes in the middle of the segments $\b{x}_i - \b{x}_0$ only allowing points that are separated by the minimal distance $d_m$&lt;br /&gt;
&lt;br /&gt;
''(note also that in the 1D case the half-link and closest node approaches become the same)''&lt;br /&gt;
&lt;br /&gt;
The minimal distance $d_m$ is chosen as a fraction of the distance to the closest link, e.g. $d_m = f d$, where $f$ is the provided fraction and $d$ is the distance to the closest link.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_refinement_3&amp;quot;&amp;gt;&lt;br /&gt;
[[File:half_link.png|1000px|thumb|center|&amp;lt;caption&amp;gt;Refinement based on half links (initial unrefined grid is on the left). The parameters are $l_s = 6$ and $d_m = 0.4 d$, where $d$ is the distance to the closest link.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_refinement_3a&amp;quot;&amp;gt;&lt;br /&gt;
[[File:refineWithRelax.png|1000px|thumb|center|&amp;lt;caption&amp;gt;Refinement based on half links with additional 10 step relax after refinement&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After experimentation we noticed that there are some inconsistencies when trying to refine structured point sets with this approach. The reason for these inconsistencies is that the boundary and internal points have a different number of &amp;quot;natural neighbours&amp;quot;. For example, in 2D on a square grid, internal points have 8 neighbours, while boundary points have 5. If we choose a higher number of links, e.g. 9 for an internal node, the 9th node might be any of the 4 nodes one shell further out, which differ only at machine precision.&lt;br /&gt;
&lt;br /&gt;
The figures below show some preliminary results of refinement based on half-links. For the circle domain relaxation was applied after the refinement.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:square_refinement&amp;quot;&amp;gt;&lt;br /&gt;
[[File:dc_field.png|1000px|thumb|center|&amp;lt;caption&amp;gt; Thermal diffusion in (convective) flow at a stagnation point (bottom left corner).&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After the second refinement of the corner, the solver had difficulty converging to the solution. This was the result of a fixed size shape parameter in the shape functions of the node points. The shape functions have to be tailored to the local characteristic distance in the point set.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:circle_refinement&amp;quot;&amp;gt;&lt;br /&gt;
[[File:ref_circle.png|1000px|thumb|center|&amp;lt;caption&amp;gt;Reaction-diffusion in a cylinder catalyst. Two successive refinements have been applied for $r &amp;gt; 0.5$ and $r &amp;gt; 0.8$, where $r$ is the radial coordinate. The cylinder radius is $R = 1$.&amp;lt;/caption&amp;gt;]] &lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hybrid approach ==&lt;br /&gt;
&lt;br /&gt;
A hybrid might give better distributions of the refined points. The half-link approach performs well at the boundaries while the distance approach gives less regular internal distributions. In any case it is suggested to perform a few more relaxation steps to equilibrate the mesh.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_refinement_4&amp;quot;&amp;gt;&lt;br /&gt;
[[File:hybrid_refine.png|1000px|thumb|center|&amp;lt;caption&amp;gt;Refinement based on half links at the boundaries and closest distances for the internal nodes (the initial unrefined grid is on the left). For the boundary nodes the parameters are $l_s = 7$ and $d_m = 0.5 d$, where $d$ is the distance to the closest link.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;/div&gt;</summary>
		<author><name>AndrejKP</name></author>	</entry>

	</feed>