<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://e6.ijs.si/medusa/wiki/api.php?action=feedcontributions&amp;user=JLapajne&amp;feedformat=atom</id>
		<title>Medusa: Coordinate Free Meshless Method implementation - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://e6.ijs.si/medusa/wiki/api.php?action=feedcontributions&amp;user=JLapajne&amp;feedformat=atom"/>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php/Special:Contributions/JLapajne"/>
		<updated>2026-05-08T16:32:25Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.1</generator>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Analysis_of_MLSM_performance&amp;diff=1139</id>
		<title>Analysis of MLSM performance</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Analysis_of_MLSM_performance&amp;diff=1139"/>
				<updated>2017-05-04T18:56:58Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: /* Convergence with respect to number of nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Solving Diffusion equation=&lt;br /&gt;
For starters, we can solve the simple diffusion equation&lt;br /&gt;
$ \nabla^2 u = \frac{\partial u}{\partial t} $.&lt;br /&gt;
&lt;br /&gt;
We solved the equation on a square $\Omega = [0, a] \times [0, a]$ with&lt;br /&gt;
Dirichlet boundary conditions $ \left. u\right|_{\partial \Omega} = 0 $ and&lt;br /&gt;
initial state $ u(t = 0) = 1$.&lt;br /&gt;
&lt;br /&gt;
An analytical solution for this domain is known, and we use it to evaluate our&lt;br /&gt;
own solution.&lt;br /&gt;
\begin{equation}&lt;br /&gt;
u(\vec{p}, t) = \sum_{\substack{n=1 \\ n \text{ odd}}}^\infty \sum_{\substack{m=1 \\ m \text{ odd}}}^\infty&lt;br /&gt;
\frac{16}{\pi^2 nm} \sin\left(\frac{\pi n}{a}p_x\right) \sin\left(\frac{\pi m}{a}p_y\right)&lt;br /&gt;
e^{-\frac{\pi^2 (n^2+m^2)}{a^2}t}&lt;br /&gt;
\end{equation}&lt;br /&gt;
Because the solution is&lt;br /&gt;
given as a series, we compare against a finite approximation, summing&lt;br /&gt;
up to $N = 100$ instead of infinity. The solution is shown in &amp;lt;xr id=&amp;quot;fig:square_heat&amp;quot;/&amp;gt;.&lt;br /&gt;
See the code for solving diffusion [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion.cpp here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:square_heat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:square_heat.png|thumb|upright=2|center|alt=A square of nodes coloured according to the solution(with smaller and larger node density)|&amp;lt;caption&amp;gt;A picture of our solution (with smaller and larger node density)&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to number of nodes==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:node_convergence.png|thumb|upright=2|center|alt=Graph of errors with respect to number of nodes|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method with a fixed time step of $ \Delta t = 1\cdot 10^{-5}$&lt;br /&gt;
on a unit square ($a = 1$). Results are on &amp;lt;xr id=&amp;quot;fig:node_convergence&amp;quot;/&amp;gt;.  Monomial basis of $6$ monomials was used and $12$&lt;br /&gt;
closest nodes counted as support for each node.  After more than $250$ nodes of&lt;br /&gt;
discretization in each dimension the method diverges, which is expected. The&lt;br /&gt;
stability criterion for diffusion equation in two dimensions is $\Delta t \leq&lt;br /&gt;
\frac{1}{4} \Delta x^2$, where $\Delta x$ is the spatial discretization&lt;br /&gt;
step in one dimension. In our case, at $250$ nodes per side, the right-hand side&lt;br /&gt;
yields $\frac{1}{4}\cdot\frac{1}{250}\cdot\frac{1}{250} = 4\times 10^{-6}$, which is of the same&lt;br /&gt;
order as our time step of $10^{-5}$, so the onset of divergence around this node density is consistent with the criterion.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence_5&amp;quot;&amp;gt;&lt;br /&gt;
[[File:node_convergence_5.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes for Gaussian and Monomial basis&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:node_convergence_5&amp;quot;/&amp;gt; shows another convergence plot, this time using the monomial basis $\{1, x,&lt;br /&gt;
y, x^2, y^2\}$ and a Gaussian basis, with time step $ \Delta t = 5&lt;br /&gt;
\cdot 10^{-6}$ and $5$ support nodes. The total node count was $N = 2500$.&lt;br /&gt;
The error was calculated after $0.01$ time units had elapsed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence_comparison&amp;quot;&amp;gt;&lt;br /&gt;
[[File:explicit_implicit_diffusion_comparison_ver2.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes for explicit and implicit mixed boundary condition (Neumann is enforced in two different ways).&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:node_convergence_comparison&amp;quot;/&amp;gt; shows convergence on one quarter of the full Dirichlet problem from &amp;lt;xr id=&amp;quot;fig:square_heat&amp;quot;/&amp;gt;. The nodes spanned the area $[0, a/2]^2$. As expected, a higher number of nodes yields a lower maximum error up to some maximal node count, which is lower for the explicit method than for the implicit one.&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to number of time steps==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:timestep_convergence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:timestep_convergence.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Convergence with respect to different timesteps&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method on a fixed node count with different time steps on the same&lt;br /&gt;
domain as above. Results are on &amp;lt;xr id=&amp;quot;fig:timestep_convergence&amp;quot;/&amp;gt;. For large time steps the method diverges, but once it starts&lt;br /&gt;
converging the precision increases steadily as the time step decreases, until it&lt;br /&gt;
hits its lower limit. This behaviour is expected.  The error was calculated&lt;br /&gt;
against the analytical solution above after $0.005$ units of time have passed.  A&lt;br /&gt;
monomial basis up to order $2$ inclusive ($m = 6$) was used and the support&lt;br /&gt;
size was $n = 12$.&lt;br /&gt;
&lt;br /&gt;
==Using Gaussian basis==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:gauss_sigma_dependence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:gauss_sigma_dependence.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Graph of error with respect to Gaussian basis $\sigma$ &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method on a fixed node count of $N = 2500$ with spatial step&lt;br /&gt;
discretization of $\Delta x = \frac{1}{50}$. We used the Gaussian basis of&lt;br /&gt;
$m = 5$ functions with support size $n = 13$.  Error was calculated&lt;br /&gt;
against analytical solution above after $0.01$ time units.  A time step of&lt;br /&gt;
$\Delta t = 10^{-5}$ was used. Results are shown in &amp;lt;xr id=&amp;quot;fig:gauss_sigma_dependence&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As the graph shows, there is an interval where the choice of&lt;br /&gt;
$\sigma$ does not matter much, but outside of this interval the method diverges&lt;br /&gt;
rapidly. The user must therefore take care to choose $\sigma$&lt;br /&gt;
appropriately, guided by the plot above.&lt;br /&gt;
&lt;br /&gt;
== Convergence with respect to shape parameters ==&lt;br /&gt;
&lt;br /&gt;
The choice of the basis functions' shape can have a significant effect on the solution of the system.&lt;br /&gt;
We tested the space of feasible shape parameters on the diffusion equation with $\Delta x = \Delta y = 0.01$&lt;br /&gt;
and $m = n = 9$. The weight function does not have much influence here, except on the condition&lt;br /&gt;
number and on any singular values that are cut off as a result.&lt;br /&gt;
&lt;br /&gt;
Results for Gaussian, MQ and IMQ basis functions are presented in &amp;lt;xr id=&amp;quot;fig:shape_space_gau&amp;quot;/&amp;gt;, &amp;lt;xr id=&amp;quot;fig:shape_space_mq&amp;quot;/&amp;gt; and &amp;lt;xr id=&amp;quot;fig:shape_space_imq&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_gau&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_gau.png|thumb|left|800px|&amp;lt;caption&amp;gt;Error as a function of Gaussian basis' $\sigma_B$ and Gaussian weight's $\sigma_W$&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_mq&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_imq.png|thumb|800px|&amp;lt;caption&amp;gt;Error as a function of MQ basis' $\sigma_B$ and Gaussian weight's $\sigma_W$&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_imq&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_mq.png|thumb|upright=1|left|800px|&amp;lt;caption&amp;gt;Error as a function of IMQ basis' $\sigma_B$ and Gaussian weight's $\sigma_W$&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_mon&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_mon.png|thumb|upright=1|800px|&amp;lt;caption&amp;gt;Error as a function of Gaussian weight's $\sigma_W$ for Monomials of order 2.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Solving in 3D==&lt;br /&gt;
&lt;br /&gt;
A 3-dimensional case on domain $[0, 1]^3$ was tested on $N = 20^3$&lt;br /&gt;
nodes, making the discretization step $\Delta x = 0.05$.  Support size of&lt;br /&gt;
$n=10$ with $m=10$ Gaussian basis functions was used. Their&lt;br /&gt;
normalization parameter was $\sigma = 60\Delta x$. A time step of $\Delta&lt;br /&gt;
t = 10^{-5}$ and the explicit Euler method were used to advance the solution&lt;br /&gt;
to $0.01$ time units. The resulting field is shown in &amp;lt;xr id=&amp;quot;fig:diffusion3d&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusion3d&amp;quot;&amp;gt;&lt;br /&gt;
[[File:diffusion3d.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;A $3$-dimensional solution obtained with the explicit Euler method&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Solving Dirichlet with [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classEngineMLS.html EngineMLS]==&lt;br /&gt;
Example code using explicit stepping to reproduce the thermal images from the beginning: [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion.cpp example code]&lt;br /&gt;
&lt;br /&gt;
==Solving Dirichlet with MLSM operators==&lt;br /&gt;
Example code using explicit stepping and MLSM operators to reproduce the thermal images from the beginning: [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion_mlsm_operators.cpp example code]&lt;br /&gt;
&lt;br /&gt;
==Solving mixed boundary conditions with MLSM operators==&lt;br /&gt;
By using MLSM operators and their `[http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classmm_1_1MLSM.html MLSM::neumann]` method, we can also solve&lt;br /&gt;
the diffusion equation with mixed boundary conditions. The solution is shown in &amp;lt;xr id=&amp;quot;fig:quarter_diffusion&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:quarter_diffusion&amp;quot;&amp;gt;&lt;br /&gt;
[[File:quarter_diffusion.png|400px|thumb|upright=2|center|&amp;lt;caption&amp;gt;Example of solving using Neumann boundary conditions &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example code showing the use of Neumann boundary conditions: [https://gitlab.com/e62Lab/e62numcodes/blob/master/test/mlsm_operators_test.cpp example code]&lt;br /&gt;
&lt;br /&gt;
We also support more interesting domains. &amp;lt;xr id=&amp;quot;fig:mikimouse_heat&amp;quot;/&amp;gt; shows a domain in the shape of Miki mouse.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:mikimouse_heat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:mikimouse_heat.png|400px|thumb|upright=2|center|&amp;lt;caption&amp;gt;Miki mouse domain &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
==Convergence analysis for one dimensional diffusion equation==&lt;br /&gt;
We now tried to solve the diffusion equation in one dimension &lt;br /&gt;
$u_t = u_{xx}$ on the interval $[0,1]$&lt;br /&gt;
with Dirichlet boundary conditions&lt;br /&gt;
$u(0, t) = u(1,t) = 0 $&lt;br /&gt;
and an initial state&lt;br /&gt;
$u(x,0) = \sin(\pi x).$&lt;br /&gt;
&lt;br /&gt;
An analytical solution for this domain is given in closed form &lt;br /&gt;
\begin{equation}&lt;br /&gt;
u(x,t) = \sin(\pi x) \exp(-\pi^2 t),&lt;br /&gt;
\end{equation}&lt;br /&gt;
so we can use it to evaluate our approximation.&lt;br /&gt;
&lt;br /&gt;
We tested the method with a fixed time step of $\Delta t = 1\cdot 10^{-8}$ on a unit interval, that was discretized into $ N $ nodes. Monomial basis of $m$ monomials was used and $n$ closest nodes counted as support for each node. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt; Note: &amp;lt;/strong&amp;gt; the results are &amp;lt;strong&amp;gt;not confirmed&amp;lt;/strong&amp;gt; and serve merely as an orientation. Errors may have occurred.&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to number of nodes and support size==&lt;br /&gt;
&lt;br /&gt;
Convergence of the method was tested on different number of nodes $N = 15, 20, 25, \ldots, 200$. First we tested the method, when support size is equal to number of monomials in the basis ($n =m$), which we call local interpolation, for different values of $n = 3, 5, 7$. We then compared that to local approximation, where we take $n = 12$ supporting nodes and we only change number of monomials. We used gaussian weight function with $\sigma = 1\cdot \Delta x$. Error was calculated after $0.001$ time units have elapsed. We can see the results on &amp;lt;xr id=&amp;quot;fig:diffusionConvergence1d&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusionConvergence1d&amp;quot;&amp;gt;&lt;br /&gt;
[[File:monomials_convergnece_dt_e-8_t_0-001_approxSupp_12.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Convergence analysis for the $1$-dimensional diffusion equation using a monomial basis.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The larger the support size, the faster the convergence. This applies to local interpolation as well as to approximation. However, when the number of nodes is too large, the method diverges sooner for larger support sizes. The error for approximation is larger than for interpolation (comparing equal basis sizes), but with interpolation the method starts diverging at a smaller number of nodes.&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to weight function==&lt;br /&gt;
&lt;br /&gt;
We also checked the convergence for different weight functions. We tested it on number of nodes $N = 15, 20, 25, \ldots, 200$, with monomial basis with $m = 5$, support size $n = 12$ and gaussian weight function with $\sigma = w\cdot \Delta x$ for $w = 0.1$. Error was calculated after $0.001$ time units have elapsed with $\Delta t = 10^{-8}$. We can see the results on &amp;lt;xr id=&amp;quot;fig:diffusionConvergence1dWeights&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusionConvergence1dWeights&amp;quot;&amp;gt;&lt;br /&gt;
[[File:monomials_convergnece_dt_e-8_t_0-001_approxSupp_12_weights.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Convergence analysis for $ 1 $-dimensional diffusion equation using monomial basis with respect to weight function. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When $\sigma$ is too small, other nodes in the support domain are not taken into account, so taking $n = 12$ or $n = 3$ gives the same results. When $\sigma$ is too large, all nodes in the support contribute and there is no significant difference between the weight functions.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Solving Electrostatics =&lt;br /&gt;
&lt;br /&gt;
This example is taken from the [http://www.freefem.org/ff++/ftp/freefem++doc.pdf FreeFem++ manual (page 235)].&lt;br /&gt;
&lt;br /&gt;
Assuming there is no current and the charge distribution is time independent, the electric field $\b{E}$ satisfies&lt;br /&gt;
\begin{equation}\label{eq:electrostatics}&lt;br /&gt;
\b{\nabla}\cdot\b{E} = \frac{\rho}{\epsilon}, \quad \b{\nabla} \times \b{E} = 0&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $\rho$ is the charge density and $\epsilon$ is the permittivity. If we introduce an electrostatic potential $\phi$ such that&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\b{E} = -\b{\nabla}\phi,&lt;br /&gt;
\end{equation}&lt;br /&gt;
we can insert it into the first equation in (\ref{eq:electrostatics}) resulting in Poisson's equation&lt;br /&gt;
\begin{equation}\label{eq:poisson}&lt;br /&gt;
\b{\nabla}^2 \phi = -\frac{\rho}{\epsilon}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
In the absence of unpaired electric charge equation (\ref{eq:poisson}) becomes Laplace's equation&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\b{\nabla}^2 \phi = 0.&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
We now solve this equation for a circular enclosure with two rectangular holes. The boundary of the circular enclosure is held at constant potential $0$ V. The two rectangular holes are held at constant potentials $+1$ V and $-1$ V, respectively. A coloured scatter plot is available in figure &amp;lt;xr id=&amp;quot;fig:electro_statics&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt; '''Code for electrostatics problem'''&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; line&amp;gt;&lt;br /&gt;
    // domain parameters, size, support and monomial order&lt;br /&gt;
    int domain_size = 3000;&lt;br /&gt;
    size_t n = 15; &lt;br /&gt;
    size_t m = 3; &lt;br /&gt;
&lt;br /&gt;
    double radius = 5.0;&lt;br /&gt;
    double width = 0.3;&lt;br /&gt;
    double height = 3.0;&lt;br /&gt;
&lt;br /&gt;
    // extra boundary labels&lt;br /&gt;
    int LEFT = -10;&lt;br /&gt;
    int RIGHT = -20;&lt;br /&gt;
&lt;br /&gt;
    // build circular domain&lt;br /&gt;
    CircleDomain&amp;lt;Vec2d&amp;gt; domain({0,0},radius);&lt;br /&gt;
    domain.fillUniformInterior(domain_size);&lt;br /&gt;
    domain.fillUniformBoundaryWithStep(domain.characteristicDistance());&lt;br /&gt;
&lt;br /&gt;
    // build left rectangle&lt;br /&gt;
    RectangleDomain&amp;lt;Vec2d&amp;gt; left({-2-width,-height},{-2+width,height});&lt;br /&gt;
    left.fillUniformBoundaryWithStep(domain.characteristicDistance());&lt;br /&gt;
    left.types[left.types == BOUNDARY] = LEFT;&lt;br /&gt;
    domain.subtract(left);&lt;br /&gt;
&lt;br /&gt;
    // build right rectangle&lt;br /&gt;
    RectangleDomain&amp;lt;Vec2d&amp;gt; right({2-width,-height},{2+width,height});&lt;br /&gt;
    right.fillUniformBoundaryWithStep(domain.characteristicDistance());&lt;br /&gt;
    right.types[right.types == BOUNDARY] = RIGHT;&lt;br /&gt;
    domain.subtract(right);&lt;br /&gt;
&lt;br /&gt;
    // relax domain and find supports&lt;br /&gt;
    domain.relax(50, 1e-2, 1.0, 3, 1000);&lt;br /&gt;
    domain.findSupport(n);&lt;br /&gt;
&lt;br /&gt;
    // get interior and boundary ranges&lt;br /&gt;
    Range&amp;lt;int&amp;gt; interior = domain.types == INTERNAL;&lt;br /&gt;
    Range&amp;lt;int&amp;gt; boundary = domain.types == BOUNDARY;&lt;br /&gt;
    Range&amp;lt;int&amp;gt; leftrect = domain.types == LEFT;&lt;br /&gt;
    Range&amp;lt;int&amp;gt; rightrect = domain.types == RIGHT;&lt;br /&gt;
&lt;br /&gt;
    // initialize unknown potential field and right-hand side&lt;br /&gt;
    VecXd phi(domain.size(),1);&lt;br /&gt;
    VecXd RHS(domain.size(),1);&lt;br /&gt;
&lt;br /&gt;
    // initialize interior values (this is an important step)&lt;br /&gt;
    RHS[interior] = 0.0;&lt;br /&gt;
&lt;br /&gt;
    // set dirichlet boundary conditions&lt;br /&gt;
    RHS[boundary] = 0.0;&lt;br /&gt;
    RHS[leftrect] = -1.0;&lt;br /&gt;
    RHS[rightrect] = 1.0;&lt;br /&gt;
&lt;br /&gt;
    // prepare shape functions and laplacian&lt;br /&gt;
    std::vector&amp;lt;Triplet&amp;lt;double&amp;gt;&amp;gt; l_coeff;&lt;br /&gt;
    for (auto&amp;amp; c : interior) {&lt;br /&gt;
        Range&amp;lt;Vec2d&amp;gt; supp_domain = domain.positions[domain.support[c]];&lt;br /&gt;
        EngineMLS&amp;lt;Vec2d, Monomials, Gaussians&amp;gt; MLS(m,supp_domain,pow(domain.characteristicDistance(),2));&lt;br /&gt;
        VecXd shape = MLS.getShapeAt(supp_domain[0], {{2, 0}}) +&lt;br /&gt;
                      MLS.getShapeAt(supp_domain[0], {{0, 2}});&lt;br /&gt;
        for (size_t i = 0; i &amp;lt; supp_domain.size(); ++i) {&lt;br /&gt;
            l_coeff.emplace_back(c, domain.support[c][i], shape[i]);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // prepare dirichlet boundaries&lt;br /&gt;
    std::vector&amp;lt;Triplet&amp;lt;double&amp;gt;&amp;gt; b_coeff;&lt;br /&gt;
    for (auto&amp;amp; c : domain.types &amp;lt; 0) {&lt;br /&gt;
        b_coeff.emplace_back(c, c, 1.0);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // prepare matrices&lt;br /&gt;
    SparseMatrix&amp;lt;double&amp;gt; laplace_m(domain.size(), domain.size());&lt;br /&gt;
    laplace_m.setFromTriplets(l_coeff.begin(), l_coeff.end());&lt;br /&gt;
    SparseMatrix&amp;lt;double&amp;gt; boundary_m(domain.size(), domain.size());&lt;br /&gt;
    boundary_m.setFromTriplets(b_coeff.begin(), b_coeff.end());&lt;br /&gt;
&lt;br /&gt;
    // draw&lt;br /&gt;
    std::thread th([&amp;amp;] { draw2D(domain.positions, phi); });&lt;br /&gt;
&lt;br /&gt;
    // initialize solver&lt;br /&gt;
    Eigen::BiCGSTAB&amp;lt;SparseMatrix&amp;lt;double&amp;gt;&amp;gt; solver;&lt;br /&gt;
&lt;br /&gt;
    // join matrices together&lt;br /&gt;
    SparseMatrix&amp;lt;double&amp;gt; tmp = boundary_m + laplace_m;&lt;br /&gt;
    solver.compute(tmp);&lt;br /&gt;
&lt;br /&gt;
    // solve system of equations&lt;br /&gt;
    phi = solver.solve(RHS);&lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;#iterations:     &amp;quot; &amp;lt;&amp;lt; solver.iterations() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;estimated error: &amp;quot; &amp;lt;&amp;lt; solver.error()      &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
    // end drawing&lt;br /&gt;
    th.join();  &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:electro_statics&amp;quot;&amp;gt;&lt;br /&gt;
[[File:electrostatics.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Simple electrostatics problem solved by implicit method with 15 node monomial supports and gaussian weight functions. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Convection-diffusion =&lt;br /&gt;
&lt;br /&gt;
The following example problem is adapted from [http://servidor.demec.ufpr.br/CFD/bibliografia/Ferziger%20Peric%20-%20Computational%20Methods%20for%20Fluid%20Dynamics,%203rd%20Ed%20-%202002.pdf Ferziger &amp;amp; Perič (2002) (pages 82-86)].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fvm_problem&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Screenshot_2016-11-28_11-05-03.png|thumb|400px|upright=2|centre|&amp;lt;caption&amp;gt; Geometry and boundary conditions for the scalar transport in a stagnation point flow. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We now consider the problem of a scalar quantity transported in a known velocity field $\b{u}$. The velocity field is given by&lt;br /&gt;
\[\b{u} = (u_x,u_y) = (x, -y),\]&lt;br /&gt;
which represents the flow near a stagnation point $(x,y) = (0,0)$. The streamlines are the hyperbolas $xy=\mathrm{const.}$, which change direction with respect to the Cartesian grid. The convection-diffusion equation (also known as the scalar transport equation) to be solved is&lt;br /&gt;
\[\alpha \b{\nabla}^2 \phi - \b{u}\cdot \b{\nabla}\phi = 0\]&lt;br /&gt;
with the following boundary conditions:&lt;br /&gt;
* $\phi = 0$ along the north (inlet) boundary;&lt;br /&gt;
* linear variation $\phi = (1 - y)$ along the west boundary ($x = 0$);&lt;br /&gt;
* symmetry condition on the south boundary;&lt;br /&gt;
* zero gradient at the east (outlet) boundary.&lt;br /&gt;
The parameter $\alpha$ is the diffusivity for the scalar quantity $\phi$. The problem and boundary conditions are schematically represented in the &amp;lt;xr id=&amp;quot;fig:fvm_problem&amp;quot;/&amp;gt;. The solutions obtained by Ferziger and Perić are shown in &amp;lt;xr id=&amp;quot;fig:fvm_problem_solution&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fvm_problem_solution&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Screenshot_2016-11-28_11-07-44.png|thumb|600px|center|upright=2|&amp;lt;caption&amp;gt; Isolines of $\phi$, from 0.05 to 0.95 with step 0.1 (top to bottom), for $\alpha = 0.01$ (left) and $\alpha = 0.001$ (right). &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The meshless solution is obtained on regular point arrangements of sizes 100 × 100 and 300 × 300, for the two cases of diffusivity $\alpha = 0.01$ and $\alpha = 0.001$, respectively. For comparison, we have plotted the contour lines at the same intervals as the original authors of this example (see &amp;lt;xr id=&amp;quot;fig:fvm_problem_meshless&amp;quot;/&amp;gt;). The basis functions are the five monomials $1$, $x$, $y$, $x^2$, $y^2$.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fvm_problem_meshless&amp;quot;&amp;gt;&lt;br /&gt;
[[File:fvm_convection.png|thumb|600px|center|upright=2|&amp;lt;caption&amp;gt; Isolines of $\phi$, from 0.05 to 0.95 with step 0.1 (top to bottom), for $\alpha = 0.01$ (left) and $\alpha = 0.001$ (right), obtained with the meshless solver. &amp;lt;/caption&amp;gt;]]&lt;/div&gt;</summary>
&amp;lt;/figure&amp;gt;&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Explicit_implicit_diffusion_comparison_ver2.png&amp;diff=1138</id>
		<title>File:Explicit implicit diffusion comparison ver2.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Explicit_implicit_diffusion_comparison_ver2.png&amp;diff=1138"/>
				<updated>2017-05-04T18:56:07Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Analysis_of_MLSM_performance&amp;diff=1127</id>
		<title>Analysis of MLSM performance</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Analysis_of_MLSM_performance&amp;diff=1127"/>
				<updated>2017-04-24T09:02:18Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: /* Convergence with respect to number of nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Solving Diffusion equation=&lt;br /&gt;
For starters, we can solve the simple diffusion equation&lt;br /&gt;
$ \nabla^2 u = \frac{\partial u}{\partial t} $.&lt;br /&gt;
&lt;br /&gt;
We solved the equation on a square $\Omega = [0, a] \times [0, a]$ with&lt;br /&gt;
Dirichlet boundary conditions $ \left. u\right|_{\partial \Omega} = 0 $ and&lt;br /&gt;
initial state $ u(t = 0) = 1$.&lt;br /&gt;
&lt;br /&gt;
An analytical solution for this domain is known, and we use it to evaluate our&lt;br /&gt;
own solution.&lt;br /&gt;
\begin{equation}&lt;br /&gt;
u(\vec{p}, t) = \sum_{\substack{n=1 \\ n \text{ odd}}}^\infty \sum_{\substack{m=1 \\ m \text{ odd}}}^\infty&lt;br /&gt;
\frac{16}{\pi^2 nm} \sin\left(\frac{\pi n}{a}p_x\right) \sin\left(\frac{\pi m}{a}p_y\right)&lt;br /&gt;
e^{-\frac{\pi^2 (n^2+m^2)}{a^2}t}&lt;br /&gt;
\end{equation}&lt;br /&gt;
Because the solution is&lt;br /&gt;
given as a series, we compare against a finite approximation, summing&lt;br /&gt;
up to $N = 100$ instead of infinity. The solution is shown in &amp;lt;xr id=&amp;quot;fig:square_heat&amp;quot;/&amp;gt;.&lt;br /&gt;
See the code for solving diffusion [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion.cpp here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:square_heat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:square_heat.png|thumb|upright=2|center|alt=A square of nodes coloured according to the solution(with smaller and larger node density)|&amp;lt;caption&amp;gt;A picture of our solution (with smaller and larger node density)&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to number of nodes==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:node_convergence.png|thumb|upright=2|center|alt=Graph of errors with respect to number of nodes|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method with a fixed time step of $ \Delta t = 1\cdot 10^{-5}$&lt;br /&gt;
on a unit square ($a = 1$). Results are on &amp;lt;xr id=&amp;quot;fig:node_convergence&amp;quot;/&amp;gt;.  Monomial basis of $6$ monomials was used and $12$&lt;br /&gt;
closest nodes counted as support for each node.  After more than $250$ nodes of&lt;br /&gt;
discretization in each dimension the method diverges, which is expected. The&lt;br /&gt;
stability criterion for diffusion equation in two dimensions is $\Delta t \leq&lt;br /&gt;
\frac{1}{4} \Delta x^2$, where $\Delta x$ is the spatial discretization&lt;br /&gt;
step in one dimension. In our case, at $250$ nodes per side, the right-hand side&lt;br /&gt;
yields $\frac{1}{4}\cdot\frac{1}{250}\cdot\frac{1}{250} = 4\times 10^{-6}$, which is of the same&lt;br /&gt;
order as our time step of $10^{-5}$, so the onset of divergence around this node density is consistent with the criterion.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence_5&amp;quot;&amp;gt;&lt;br /&gt;
[[File:node_convergence_5.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes for Gaussian and Monomial basis&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:node_convergence_5&amp;quot;/&amp;gt; shows another convergence plot, this time using the monomial basis $\{1, x,&lt;br /&gt;
y, x^2, y^2\}$ and a Gaussian basis, with time step $ \Delta t = 5&lt;br /&gt;
\cdot 10^{-6}$ and $5$ support nodes. The total node count was $N = 2500$.&lt;br /&gt;
The error was calculated after $0.01$ time units had elapsed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence_comparison&amp;quot;&amp;gt;&lt;br /&gt;
[[File:explicit_implicit_diffusion_comparison.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes for explicit and implicit mixed boundary condition (Neumann is enforced in two different ways).&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:node_convergence_comparison&amp;quot;/&amp;gt; shows the convergence of one quarter of the whole solution with Dirichlet boundary conditions (&amp;lt;xr id=&amp;quot;fig:square_heat&amp;quot;/&amp;gt;). The nodes spanned the area $[0, 0.5]^2$. As expected, a higher number of nodes yields a lower maximum error, up to some maximal number of nodes; this limit is lower for the explicit method than for the implicit one.&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to number of time steps==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:timestep_convergence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:timestep_convergence.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Convergence with respect to different timesteps&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method on a fixed node count with different time steps on the same&lt;br /&gt;
domain as above. Results are on &amp;lt;xr id=&amp;quot;fig:timestep_convergence&amp;quot;/&amp;gt;. For large time steps the method diverges, but once it starts&lt;br /&gt;
converging the precision increases steadily as the time step decreases, until it&lt;br /&gt;
hits its lower limit. This behaviour is expected.  The error was calculated&lt;br /&gt;
against the analytical solution above after $0.005$ units of time have passed.  A&lt;br /&gt;
monomial basis up to order $2$ inclusive ($m = 6$) was used and the support&lt;br /&gt;
size was $n = 12$.&lt;br /&gt;
&lt;br /&gt;
==Using Gaussian basis==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:gauss_sigma_dependence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:gauss_sigma_dependence.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Graph of error with respect to Gaussian basis $\sigma$ &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method on a fixed node count of $N = 2500$ with a spatial&lt;br /&gt;
discretization step of $\Delta x = \frac{1}{50}$. We used a Gaussian basis of&lt;br /&gt;
$m = 5$ functions with support size $n = 13$. The error was calculated&lt;br /&gt;
against the analytical solution above after $0.01$ time units. A time step of&lt;br /&gt;
$\Delta t = 10^{-5}$ was used. Results are on &amp;lt;xr id=&amp;quot;fig:gauss_sigma_dependence&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As we can see from the graph, there exists an interval where the choice of&lt;br /&gt;
$\sigma$ does not matter much, but outside of this interval the method diverges&lt;br /&gt;
rapidly. The user must therefore take care to choose $\sigma$&lt;br /&gt;
appropriately, guided by the plot above.&lt;br /&gt;
&lt;br /&gt;
== Convergence with respect to shape parameters ==&lt;br /&gt;
&lt;br /&gt;
The choice of basis function shape can have a significant effect on the solution of the system.&lt;br /&gt;
We tested the space of feasible shape parameters on the diffusion equation with $\Delta x = \Delta y = 0.01$&lt;br /&gt;
and $m = n = 9$. The weight function does not have much influence here, except on the condition&lt;br /&gt;
number and on any singular values that get cut off because of it.&lt;br /&gt;
&lt;br /&gt;
Results for Gaussian, MQ and IMQ basis functions are presented in &amp;lt;xr id=&amp;quot;fig:shape_space_gau&amp;quot;/&amp;gt;, &amp;lt;xr id=&amp;quot;fig:shape_space_mq&amp;quot;/&amp;gt; and &amp;lt;xr id=&amp;quot;fig:shape_space_imq&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_gau&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_gau.png|thumb|left|800px|&amp;lt;caption&amp;gt;Error as a function of Gaussian basis' $\sigma_B$ and Gaussian weight's $\sigma_W$&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_mq&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_imq.png|thumb|800px|&amp;lt;caption&amp;gt;Error as a function of MQ basis' $\sigma_B$ and Gaussian weight's $\sigma_W$&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_imq&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_mq.png|thumb|upright=1|left|800px|&amp;lt;caption&amp;gt;Error as a function of IMQ basis' $\sigma_B$ and Gaussian weight's $\sigma_W$&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_mon&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_mon.png|thumb|upright=1|800px|&amp;lt;caption&amp;gt;Error as a function of Gaussian weight's $\sigma_W$ for Monomials of order 2.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Solving in 3D==&lt;br /&gt;
&lt;br /&gt;
A 3-dimensional case on the domain $[0, 1]^3$ was tested with $N = 20^3$&lt;br /&gt;
nodes, making the discretization step $\Delta x = 0.05$. A support size of&lt;br /&gt;
$n=10$ with $m=10$ Gaussian basis functions was used. Their&lt;br /&gt;
normalization parameter was $\sigma = 60\Delta x$. A time step of $\Delta&lt;br /&gt;
t = 10^{-5}$ and an explicit Euler method were used to calculate the solution&lt;br /&gt;
up to $0.01$ time units. The resulting function is on &amp;lt;xr id=&amp;quot;fig:diffusion3d&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusion3d&amp;quot;&amp;gt;&lt;br /&gt;
[[File:diffusion3d.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;A $ 3 $-dimensional solution with an explicit Euler method&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
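Since explicit Euler stepping is used throughout these experiments, it may help to see one step written out. The sketch below advances the field using a precomputed Laplacian stencil per node (support indices plus weights); the `Stencil` type and all names are illustrative stand-ins, not the library's actual data structures:

```cpp
#include <vector>

// Per-node discrete Laplacian: indices of support nodes and stencil weights.
// Illustrative stand-in for the shape functions computed by the MLS engine.
struct Stencil {
    std::vector<int> support;
    std::vector<double> weight;
};

// One explicit Euler step u <- u + dt * (L u). The Laplacian is applied on
// interior nodes only, so Dirichlet boundary values stay fixed.
void euler_step(std::vector<double>& u, const std::vector<Stencil>& lap,
                const std::vector<int>& interior, double dt) {
    std::vector<double> lu(u.size(), 0.0);
    for (int c : interior) {
        const Stencil& s = lap[c];
        for (std::size_t i = 0; i < s.support.size(); ++i)
            lu[c] += s.weight[i] * u[s.support[i]];
    }
    for (int c : interior)
        u[c] += dt * lu[c];
}
```

With the classical 1D stencil $(1, -2, 1)/\Delta x^2$ this reduces to the standard finite-difference update.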
&lt;br /&gt;
==Solving Dirichlet with [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classEngineMLS.html EngineMLS]==&lt;br /&gt;
Example code using explicit stepping to reproduce the thermal images from the beginning: [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion.cpp example code]&lt;br /&gt;
&lt;br /&gt;
==Solving Dirichlet with MLSM operators==&lt;br /&gt;
Example code using explicit stepping and MLSM operators to reproduce the thermal images from the beginning: [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion_mlsm_operators.cpp example code]&lt;br /&gt;
&lt;br /&gt;
==Solving mixed boundary conditions with MLSM operators==&lt;br /&gt;
By using MLSM operators and their `[http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classmm_1_1MLSM.html MLSM::neumann]` method we can also solve&lt;br /&gt;
the diffusion equation with mixed boundary conditions. The solution is on &amp;lt;xr id=&amp;quot;fig:quarter_diffusion&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:quarter_diffusion&amp;quot;&amp;gt;&lt;br /&gt;
[[File:quarter_diffusion.png|400px|thumb|upright=2|center|&amp;lt;caption&amp;gt;Example of solving using Neumann boundary conditions &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example code showing the use of Neumann boundary conditions: [https://gitlab.com/e62Lab/e62numcodes/blob/master/test/mlsm_operators_test.cpp example code]&lt;br /&gt;
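For intuition, a homogeneous Neumann condition can also be sketched with a first-order one-sided difference on a regular grid. This is an illustrative finite-difference sketch only; `MLSM::neumann` builds the condition from MLS shape functions instead:

```cpp
#include <vector>

// Enforce a homogeneous Neumann condition du/dn = 0 on the left edge of an
// nx-by-ny regular grid, stored row by row, via a first-order one-sided
// difference: (u[1] - u[0]) / dx = 0  =>  u[0] = u[1] in every row.
void apply_neumann_left(std::vector<double>& u, int nx, int ny) {
    for (int j = 0; j < ny; ++j)
        u[j * nx + 0] = u[j * nx + 1];
}
```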
&lt;br /&gt;
We also support more interesting domains :) On &amp;lt;xr id=&amp;quot;fig:mikimouse_heat&amp;quot;/&amp;gt; we see a domain in the shape of a Miki mouse.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:mikimouse_heat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:mikimouse_heat.png|400px|thumb|upright=2|center|&amp;lt;caption&amp;gt;Miki mouse domain &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
==Convergence analysis for one dimensional diffusion equation==&lt;br /&gt;
We now tried to solve the diffusion equation in one dimension &lt;br /&gt;
$u_t = u_{xx}$ on the interval $[0,1]$&lt;br /&gt;
with Dirichlet boundary conditions&lt;br /&gt;
$u(0, t) = u(1,t) = 0 $&lt;br /&gt;
and an initial state&lt;br /&gt;
$u(x,0) = \sin(\pi x).$&lt;br /&gt;
&lt;br /&gt;
An analytical solution for this domain is given in closed form &lt;br /&gt;
\begin{equation}&lt;br /&gt;
u(x,t) = \sin(\pi x) \exp(-\pi^2 t),&lt;br /&gt;
\end{equation}&lt;br /&gt;
so we can use it to evaluate our approximation.&lt;br /&gt;
&lt;br /&gt;
We tested the method with a fixed time step of $\Delta t = 1\cdot 10^{-8}$ on a unit interval, that was discretized into $ N $ nodes. Monomial basis of $m$ monomials was used and $n$ closest nodes counted as support for each node. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Note:&amp;lt;/strong&amp;gt; the results are &amp;lt;strong&amp;gt;not confirmed&amp;lt;/strong&amp;gt; and serve merely as an orientation. Errors may have occurred.&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to number of nodes and support size==&lt;br /&gt;
&lt;br /&gt;
Convergence of the method was tested for different numbers of nodes $N = 15, 20, 25, \ldots, 200$. First we tested the method when the support size equals the number of monomials in the basis ($n = m$), which we call local interpolation, for values $n = 3, 5, 7$. We then compared that to local approximation, where we take $n = 12$ supporting nodes and change only the number of monomials. We used a Gaussian weight function with $\sigma = 1\cdot \Delta x$. The error was calculated after $0.001$ time units had elapsed. We can see the results on &amp;lt;xr id=&amp;quot;fig:diffusionConvergence1d&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusionConvergence1d&amp;quot;&amp;gt;&lt;br /&gt;
[[File:monomials_convergnece_dt_e-8_t_0-001_approxSupp_12.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Convergence analysis for $ 1 $-dimensional diffusion equation using monomial basis.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The larger the support size, the faster the convergence. This holds for local interpolation as well as for approximation. However, when the number of nodes is too large, the method diverges sooner for larger support sizes. The error for approximation is larger than for interpolation (comparing equal basis sizes), but with interpolation the method diverges at a smaller number of nodes.&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to weight function==&lt;br /&gt;
&lt;br /&gt;
We also checked the convergence for different weight functions. We tested it on $N = 15, 20, 25, \ldots, 200$ nodes, with a monomial basis of $m = 5$, support size $n = 12$ and a Gaussian weight function with $\sigma = w\cdot \Delta x$ for $w = 0.1$. The error was calculated after $0.001$ time units had elapsed, with $\Delta t = 10^{-8}$. We can see the results on &amp;lt;xr id=&amp;quot;fig:diffusionConvergence1dWeights&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusionConvergence1dWeights&amp;quot;&amp;gt;&lt;br /&gt;
[[File:monomials_convergnece_dt_e-8_t_0-001_approxSupp_12_weights.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Convergence analysis for $ 1 $-dimensional diffusion equation using monomial basis with respect to weight function. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When $\sigma$ is too small, the other nodes in the support domain are not taken into consideration, so taking $n = 12$ or $n = 3$ gives the same results. However, when $\sigma$ is too large, all the nodes in the support domain contribute and there is no significant difference between the weight functions.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Solving Electrostatics =&lt;br /&gt;
&lt;br /&gt;
This example is taken from the [http://www.freefem.org/ff++/ftp/freefem++doc.pdf FreeFem++ manual (page 235)].&lt;br /&gt;
&lt;br /&gt;
Assuming there is no current and the charge distribution is time independent, the electric field $\b{E}$ satisfies&lt;br /&gt;
\begin{equation}\label{eq:electrostatics}&lt;br /&gt;
\b{\nabla}\cdot\b{E} = \frac{\rho}{\epsilon}, \quad \b{\nabla} \times \b{E} = 0&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $\rho$ is the charge density and $\epsilon$ is the permittivity. If we introduce an electrostatic potential $\phi$ such that&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\b{E} = -\b{\nabla}\phi,&lt;br /&gt;
\end{equation}&lt;br /&gt;
we can insert it into the first equation in (\ref{eq:electrostatics}) resulting in Poisson's equation&lt;br /&gt;
\begin{equation}\label{eq:poisson}&lt;br /&gt;
\b{\nabla}^2 \phi = -\frac{\rho}{\epsilon}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
In the absence of unpaired electric charge equation (\ref{eq:poisson}) becomes Laplace's equation&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\b{\nabla}^2 \phi = 0.&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
We now solve this equation for a circular enclosure with two rectangular holes. The boundary of the circular enclosure is held at constant potential $0$ V. The two rectangular holes are held at constant potentials $+1$ V and $-1$ V, respectively. A coloured scatter plot is available in figure &amp;lt;xr id=&amp;quot;fig:electro_statics&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt; '''Code for electrostatics problem'''&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; line&amp;gt;&lt;br /&gt;
    // domain parameters, size, support and monomial order&lt;br /&gt;
    int domain_size = 3000;&lt;br /&gt;
    size_t n = 15; &lt;br /&gt;
    size_t m = 3; &lt;br /&gt;
&lt;br /&gt;
    double radius = 5.0;&lt;br /&gt;
    double width = 0.3;&lt;br /&gt;
    double height = 3.0;&lt;br /&gt;
&lt;br /&gt;
    // extra boundary labels&lt;br /&gt;
    int LEFT = -10;&lt;br /&gt;
    int RIGHT = -20;&lt;br /&gt;
&lt;br /&gt;
    // build circular domain&lt;br /&gt;
    CircleDomain&amp;lt;Vec2d&amp;gt; domain({0,0},radius);&lt;br /&gt;
    domain.fillUniformInterior(domain_size);&lt;br /&gt;
    domain.fillUniformBoundaryWithStep(domain.characteristicDistance());&lt;br /&gt;
&lt;br /&gt;
    // build left rectangle&lt;br /&gt;
    RectangleDomain&amp;lt;Vec2d&amp;gt; left({-2-width,-height},{-2+width,height});&lt;br /&gt;
    left.fillUniformBoundaryWithStep(domain.characteristicDistance());&lt;br /&gt;
    left.types[left.types == BOUNDARY] = LEFT;&lt;br /&gt;
    domain.subtract(left);&lt;br /&gt;
&lt;br /&gt;
    // build right rectangle&lt;br /&gt;
    RectangleDomain&amp;lt;Vec2d&amp;gt; right({2-width,-height},{2+width,height});&lt;br /&gt;
    right.fillUniformBoundaryWithStep(domain.characteristicDistance());&lt;br /&gt;
    right.types[right.types == BOUNDARY] = RIGHT;&lt;br /&gt;
    domain.subtract(right);&lt;br /&gt;
&lt;br /&gt;
    // relax domain and find supports&lt;br /&gt;
    domain.relax(50, 1e-2, 1.0, 3, 1000);&lt;br /&gt;
    domain.findSupport(n);&lt;br /&gt;
&lt;br /&gt;
    // get interior and boundary ranges&lt;br /&gt;
    Range&amp;lt;int&amp;gt; interior = domain.types == INTERNAL;&lt;br /&gt;
    Range&amp;lt;int&amp;gt; boundary = domain.types == BOUNDARY;&lt;br /&gt;
    Range&amp;lt;int&amp;gt; leftrect = domain.types == LEFT;&lt;br /&gt;
    Range&amp;lt;int&amp;gt; rightrect = domain.types == RIGHT;&lt;br /&gt;
&lt;br /&gt;
    // initialize unknown potential field and right-hand side&lt;br /&gt;
    VecXd phi(domain.size(),1);&lt;br /&gt;
    VecXd RHS(domain.size(),1);&lt;br /&gt;
&lt;br /&gt;
    // right-hand side of Laplace's equation is zero in the interior&lt;br /&gt;
    RHS[interior] = 0.0;&lt;br /&gt;
&lt;br /&gt;
    // set dirichlet boundary conditions&lt;br /&gt;
    RHS[boundary] = 0.0;&lt;br /&gt;
    RHS[leftrect] = -1.0;&lt;br /&gt;
    RHS[rightrect] = 1.0;&lt;br /&gt;
&lt;br /&gt;
    // prepare shape functions and laplacian&lt;br /&gt;
    std::vector&amp;lt;Triplet&amp;lt;double&amp;gt;&amp;gt; l_coeff;&lt;br /&gt;
    for (auto&amp;amp; c : interior) {&lt;br /&gt;
        Range&amp;lt;Vec2d&amp;gt; supp_domain = domain.positions[domain.support[c]];&lt;br /&gt;
        EngineMLS&amp;lt;Vec2d, Monomials, Gaussians&amp;gt; MLS(m,supp_domain,pow(domain.characteristicDistance(),2));&lt;br /&gt;
        VecXd shape = MLS.getShapeAt(supp_domain[0], {{2, 0}}) +&lt;br /&gt;
                      MLS.getShapeAt(supp_domain[0], {{0, 2}});&lt;br /&gt;
        for (size_t i = 0; i &amp;lt; supp_domain.size(); ++i) {&lt;br /&gt;
            l_coeff.emplace_back(c, domain.support[c][i], shape[i]);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // prepare dirichlet boundaries&lt;br /&gt;
    std::vector&amp;lt;Triplet&amp;lt;double&amp;gt;&amp;gt; b_coeff;&lt;br /&gt;
    for (auto&amp;amp; c : domain.types &amp;lt; 0) {&lt;br /&gt;
        b_coeff.emplace_back(c, c, 1.0);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // prepare matrices&lt;br /&gt;
    SparseMatrix&amp;lt;double&amp;gt; laplace_m(domain.size(), domain.size());&lt;br /&gt;
    laplace_m.setFromTriplets(l_coeff.begin(), l_coeff.end());&lt;br /&gt;
    SparseMatrix&amp;lt;double&amp;gt; boundary_m(domain.size(), domain.size());&lt;br /&gt;
    boundary_m.setFromTriplets(b_coeff.begin(), b_coeff.end());&lt;br /&gt;
&lt;br /&gt;
    // draw&lt;br /&gt;
    std::thread th([&amp;amp;] { draw2D(domain.positions, phi); });&lt;br /&gt;
&lt;br /&gt;
    // initialize solver&lt;br /&gt;
    Eigen::BiCGSTAB&amp;lt;SparseMatrix&amp;lt;double&amp;gt;&amp;gt; solver;&lt;br /&gt;
&lt;br /&gt;
    // join matrices together&lt;br /&gt;
    SparseMatrix&amp;lt;double&amp;gt; tmp = boundary_m + laplace_m;&lt;br /&gt;
    solver.compute(tmp);&lt;br /&gt;
&lt;br /&gt;
    // solve system of equations&lt;br /&gt;
    phi = solver.solve(RHS);&lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;#iterations:     &amp;quot; &amp;lt;&amp;lt; solver.iterations() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;estimated error: &amp;quot; &amp;lt;&amp;lt; solver.error()      &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
    // end drawing&lt;br /&gt;
    th.join();  &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:electro_statics&amp;quot;&amp;gt;&lt;br /&gt;
[[File:electrostatics.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Simple electrostatics problem solved by implicit method with 15 node monomial supports and gaussian weight functions. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Convection-diffusion =&lt;br /&gt;
&lt;br /&gt;
The following example problem is adapted from [http://servidor.demec.ufpr.br/CFD/bibliografia/Ferziger%20Peric%20-%20Computational%20Methods%20for%20Fluid%20Dynamics,%203rd%20Ed%20-%202002.pdf Ferziger &amp;amp; Perič (2002) (pages 82-86)].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fvm_problem&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Screenshot_2016-11-28_11-05-03.png|thumb|400px|upright=2|centre|&amp;lt;caption&amp;gt; Geometry and boundary conditions for the scalar transport in a stagnation point flow. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We now consider the problem of a scalar quantity transported in a known velocity field $\b{u}$. The velocity field is given by&lt;br /&gt;
\[\b{u} = (u_x,u_y) = (x, -y),\]&lt;br /&gt;
which represents the flow near the stagnation point $(x,y) = (0,0)$. The streamlines are the curves $xy=\mathrm{const.}$, which change direction with respect to the Cartesian grid. The convection-diffusion equation (also known as the scalar transport equation) to be solved is&lt;br /&gt;
\[\alpha \b{\nabla}^2 \phi - \b{u}\cdot \b{\nabla}\phi = 0\]&lt;br /&gt;
with the following boundary conditions:&lt;br /&gt;
* $\phi = 0$ along the north (inlet) boundary;&lt;br /&gt;
* linear variation $\phi = (1 - y)$ along the west boundary ($x = 0$);&lt;br /&gt;
* symmetry condition on the south boundary;&lt;br /&gt;
* zero gradient at the east (outlet) boundary.&lt;br /&gt;
The parameter $\alpha$ is the diffusivity for the scalar quantity $\phi$. The problem and boundary conditions are schematically represented in the &amp;lt;xr id=&amp;quot;fig:fvm_problem&amp;quot;/&amp;gt;. The solutions obtained by Ferziger and Perić are shown in &amp;lt;xr id=&amp;quot;fig:fvm_problem_solution&amp;quot;/&amp;gt;.&lt;br /&gt;
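For intuition about the operator, its value at a single interior node can be written out with second-order central differences on a uniform grid. This is an illustrative finite-difference sketch under that assumption; the solver used in this article discretizes the operator with MLSM shape functions instead:

```cpp
// Convection-diffusion residual alpha * lap(phi) - u . grad(phi) at one
// interior grid point, with second-order central differences on a uniform
// grid of spacing h. pC is the centre value; pE/pW/pN/pS are the east,
// west, north and south neighbours. Illustrative sketch only.
double residual(double alpha, double x, double y, double h,
                double pC, double pE, double pW, double pN, double pS) {
    double lap    = (pE + pW + pN + pS - 4.0 * pC) / (h * h);
    double dphidx = (pE - pW) / (2.0 * h);
    double dphidy = (pN - pS) / (2.0 * h);
    double ux = x, uy = -y;  // stagnation-point flow u = (x, -y)
    return alpha * lap - ux * dphidx - uy * dphidy;
}
```

At the converged solution this residual vanishes at every interior node.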
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fvm_problem_solution&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Screenshot_2016-11-28_11-07-44.png|thumb|600px|center|upright=2|&amp;lt;caption&amp;gt; Isolines of $\phi$, from 0.05 to 0.95 with step 0.1 (top to bottom), for $\alpha = 0.01$ (left) and $\alpha = 0.001$ (right). &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The meshless solution is obtained on regular point arrangements of sizes 100 × 100 and 300 × 300, for the two cases of diffusivity $\alpha = 0.01$ and $\alpha = 0.001$, respectively. For comparison, we plotted the contour lines at the same intervals as the original authors of this example (see &amp;lt;xr id=&amp;quot;fig:fvm_problem_meshless&amp;quot;/&amp;gt;). The basis functions are the 5 monomials $1$, $x$, $y$, $x^2$, $y^2$.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fvm_problem_meshless&amp;quot;&amp;gt;&lt;br /&gt;
[[File:fvm_convection.png|thumb|600px|center|upright=2|&amp;lt;caption&amp;gt; Isolines of $\phi$, from 0.05 to 0.95 with step 0.1 (top to bottom), for $\alpha = 0.01$ (left) and $\alpha = 0.001$ (right), obtained with the meshless solver. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Analysis_of_MLSM_performance&amp;diff=1126</id>
		<title>Analysis of MLSM performance</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Analysis_of_MLSM_performance&amp;diff=1126"/>
				<updated>2017-04-24T09:01:38Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: /* Convergence with respect to number of nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Solving Diffusion equation=&lt;br /&gt;
For starters, we can solve the simple diffusion equation&lt;br /&gt;
$ \nabla^2 u = \frac{\partial u}{\partial t} $.&lt;br /&gt;
&lt;br /&gt;
We solved the equation on a square $\Omega = [0, a] \times [0, a]$ with&lt;br /&gt;
Dirichlet boundary conditions $ \left. u\right|_{\partial \Omega} = 0 $ and&lt;br /&gt;
initial state $ u(t = 0) = 1$.&lt;br /&gt;
&lt;br /&gt;
An analytical solution for this domain is known, and we use it to evaluate our&lt;br /&gt;
own solution.&lt;br /&gt;
\begin{equation} &lt;br /&gt;
u(\vec{p}, t) = \sum_{\substack{n=1 \\ n \text{&lt;br /&gt;
odd}}}^\infty\sum_{\substack{m=1 \\ m \text{ odd}}}^\infty \frac{1}{\pi^2}&lt;br /&gt;
\frac{16 a^2}{nm} \sin\left(\frac{\pi n}{a}p_x\right) \sin\left(\frac{\pi&lt;br /&gt;
m}{a}p_y\right) e^{-\frac{\pi^2 (n^2+m^2)}{a^2}t} &lt;br /&gt;
\end{equation}&lt;br /&gt;
Because the solution is&lt;br /&gt;
given in series form, we compare only to a finite approximation, summing&lt;br /&gt;
to $N = 100$ instead of infinity. The solution is on &amp;lt;xr id=&amp;quot;fig:square_heat&amp;quot;/&amp;gt;.&lt;br /&gt;
See the code for solving diffusion [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion.cpp here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:square_heat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:square_heat.png|thumb|upright=2|center|alt=A square of nodes coloured according to the solution(with smaller and larger node density)|&amp;lt;caption&amp;gt;A picture of our solution (with smaller and larger node density)&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to number of nodes==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:node_convergence.png|thumb|upright=2|center|alt=Graph of errors with respect to number of nodes|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method with a fixed time step of $ \Delta t = 1\cdot 10^{-5}$&lt;br /&gt;
on a unit square ($a = 1$). Results are on &amp;lt;xr id=&amp;quot;fig:node_convergence&amp;quot;/&amp;gt;. A monomial basis of $6$ monomials was used, with the $12$&lt;br /&gt;
closest nodes counted as support for each node. Beyond $250$ nodes of&lt;br /&gt;
discretization in each dimension the method diverges, which is expected. The&lt;br /&gt;
stability criterion for the diffusion equation in two dimensions is $\Delta t \leq&lt;br /&gt;
\frac{1}{4} \Delta x^2$, where $\Delta x$ is the spatial discretization&lt;br /&gt;
step in one dimension. In our case, at $250$ nodes per side, the right-hand side&lt;br /&gt;
yields $\frac{1}{4}\cdot\frac{1}{250}\cdot\frac{1}{250} = 4\times 10^{-6}$,&lt;br /&gt;
which is below the time step we used, so our method is stable within the expected region.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence_5&amp;quot;&amp;gt;&lt;br /&gt;
[[File:node_convergence_5.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes for Gaussian and Monomial basis&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another convergence plot is shown on &amp;lt;xr id=&amp;quot;fig:node_convergence_5&amp;quot;/&amp;gt;, this time using the monomial basis $\{1, x,&lt;br /&gt;
y, x^2, y^2\}$ and a Gaussian basis, with time step $ \Delta t = 5&lt;br /&gt;
\cdot 10^{-6}$ and $5$ support nodes. The total node count was $N = 2500$.&lt;br /&gt;
The error was calculated after $0.01$ time units had elapsed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence_comparison&amp;quot;&amp;gt;&lt;br /&gt;
[[File:explicit_implicit_diffusion_comparison.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes for explicit and implicit mixed boundary condition (Neumann is enforced in two different ways).&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:node_convergence_comparison&amp;quot;/&amp;gt; shows the convergence of one quarter of the whole solution with Dirichlet boundary conditions (as above). The nodes spanned the area $[0, 0.5]^2$. As expected, a higher number of nodes yields a lower maximum error, up to some maximal number of nodes; this limit is lower for the explicit method than for the implicit one.&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to number of time steps==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:timestep_convergence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:timestep_convergence.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Convergence with respect to different timesteps&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method on a fixed node count with different time steps on the same&lt;br /&gt;
domain as above. Results are on &amp;lt;xr id=&amp;quot;fig:timestep_convergence&amp;quot;/&amp;gt;. For large time steps the method diverges, but once it starts&lt;br /&gt;
converging the precision increases steadily as the time step decreases, until it&lt;br /&gt;
hits its lower limit. This behaviour is expected.  The error was calculated&lt;br /&gt;
against the analytical solution above after $0.005$ units of time have passed.  A&lt;br /&gt;
monomial basis up to order $2$ inclusive ($m = 6$) was used and the support&lt;br /&gt;
size was $n = 12$.&lt;br /&gt;
&lt;br /&gt;
==Using Gaussian basis==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:gauss_sigma_dependence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:gauss_sigma_dependence.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Graph of error with respect to Gaussian basis $\sigma$ &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method on a fixed node count of $N = 2500$ with a spatial&lt;br /&gt;
discretization step of $\Delta x = \frac{1}{50}$. We used a Gaussian basis of&lt;br /&gt;
$m = 5$ functions with support size $n = 13$. The error was calculated&lt;br /&gt;
against the analytical solution above after $0.01$ time units. A time step of&lt;br /&gt;
$\Delta t = 10^{-5}$ was used. Results are on &amp;lt;xr id=&amp;quot;fig:gauss_sigma_dependence&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As we can see from the graph, there exists an interval where the choice of&lt;br /&gt;
$\sigma$ does not matter much, but outside of this interval the method diverges&lt;br /&gt;
rapidly. The user must therefore take care to choose $\sigma$&lt;br /&gt;
appropriately, guided by the plot above.&lt;br /&gt;
&lt;br /&gt;
== Convergence with respect to shape parameters ==&lt;br /&gt;
&lt;br /&gt;
The choice of basis function shape can have a significant effect on the solution of the system.&lt;br /&gt;
We tested the space of feasible shape parameters on the diffusion equation with $\Delta x = \Delta y = 0.01$&lt;br /&gt;
and $m = n = 9$. The weight function does not have much influence here, except on the condition&lt;br /&gt;
number and on any singular values that get cut off because of it.&lt;br /&gt;
&lt;br /&gt;
Results for Gaussian, MQ and IMQ basis functions are presented in &amp;lt;xr id=&amp;quot;fig:shape_space_gau&amp;quot;/&amp;gt;, &amp;lt;xr id=&amp;quot;fig:shape_space_mq&amp;quot;/&amp;gt; and &amp;lt;xr id=&amp;quot;fig:shape_space_imq&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_gau&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_gau.png|thumb|left|800px|&amp;lt;caption&amp;gt;Error as a function of Gaussian basis' $\sigma_B$ and Gaussian weight's $\sigma_W$&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_mq&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_imq.png|thumb|800px|&amp;lt;caption&amp;gt;Error as a function of MQ basis' $\sigma_B$ and Gaussian weight's $\sigma_W$&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_imq&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_mq.png|thumb|upright=1|left|800px|&amp;lt;caption&amp;gt;Error as a function of IMQ basis' $\sigma_B$ and Gaussian weight's $\sigma_W$&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_mon&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_mon.png|thumb|upright=1|800px|&amp;lt;caption&amp;gt;Error as a function of Gaussian weight's $\sigma_W$ for Monomials of order 2.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Solving in 3D==&lt;br /&gt;
&lt;br /&gt;
A 3-dimensional case on the domain $[0, 1]^3$ was tested with $N = 20^3$&lt;br /&gt;
nodes, making the discretization step $\Delta x = 0.05$. A support size of&lt;br /&gt;
$n=10$ with $m=10$ Gaussian basis functions was used. Their&lt;br /&gt;
normalization parameter was $\sigma = 60\Delta x$. A time step of $\Delta&lt;br /&gt;
t = 10^{-5}$ and an explicit Euler method were used to calculate the solution&lt;br /&gt;
up to $0.01$ time units. The resulting function is on &amp;lt;xr id=&amp;quot;fig:diffusion3d&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusion3d&amp;quot;&amp;gt;&lt;br /&gt;
[[File:diffusion3d.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;A $ 3 $-dimensional solution with an explicit Euler method&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Solving Dirichlet with [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classEngineMLS.html EngineMLS]==&lt;br /&gt;
Example code using explicit stepping to reproduce the thermal images from the beginning: [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion.cpp example code]&lt;br /&gt;
&lt;br /&gt;
==Solving Dirichlet with MLSM operators==&lt;br /&gt;
Example code using explicit stepping and MLSM operators to reproduce the thermal images from the beginning: [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion_mlsm_operators.cpp example code]&lt;br /&gt;
&lt;br /&gt;
==Solving mixed boundary conditions with MLSM operators==&lt;br /&gt;
By using MLSM operators and the `[http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classmm_1_1MLSM.html MLSM::neumann]` method we can also solve&lt;br /&gt;
diffusion problems with mixed (Dirichlet and Neumann) boundary conditions. The solution is shown in &amp;lt;xr id=&amp;quot;fig:quarter_diffusion&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:quarter_diffusion&amp;quot;&amp;gt;&lt;br /&gt;
[[File:quarter_diffusion.png|400px|thumb|upright=2|center|&amp;lt;caption&amp;gt;Example of solving using Neumann boundary conditions &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example code showing the use of Neumann boundary conditions: [https://gitlab.com/e62Lab/e62numcodes/blob/master/test/mlsm_operators_test.cpp example code]&lt;br /&gt;
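To illustrate how a Neumann condition enters the linear system, here is a minimal 1D sketch: the boundary row enforces the prescribed derivative. A one-sided difference below plays the role of MLSM's derivative shape functions; the `MLSM::neumann` call itself is not used here, so this is only an analogy:&lt;br /&gt;

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// Dense Gaussian elimination with partial pivoting (small systems only).
std::vector<double> solve_dense(std::vector<std::vector<double>> A,
                                std::vector<double> b) {
    const int n = static_cast<int>(b.size());
    for (int k = 0; k < n; ++k) {
        int p = k;
        for (int i = k + 1; i < n; ++i)
            if (std::abs(A[i][k]) > std::abs(A[p][k])) p = i;
        std::swap(A[k], A[p]);
        std::swap(b[k], b[p]);
        for (int i = k + 1; i < n; ++i) {
            const double f = A[i][k] / A[k][k];
            for (int j = k; j < n; ++j) A[i][j] -= f * A[k][j];
            b[i] -= f * b[k];
        }
    }
    std::vector<double> x(n, 0.0);
    for (int i = n - 1; i >= 0; --i) {
        double s = b[i];
        for (int j = i + 1; j < n; ++j) s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }
    return x;
}

// 1D Laplace u'' = 0 with a Neumann condition u'(0) = g written as its
// own matrix row (a one-sided difference, playing the role that MLSM's
// derivative shape functions play in 2D) and Dirichlet u(1) = uR.
std::vector<double> laplace_mixed(int N, double g, double uR) {
    const double dx = 1.0 / (N - 1);
    std::vector<std::vector<double>> A(N, std::vector<double>(N, 0.0));
    std::vector<double> b(N, 0.0);
    A[0][0] = -1.0 / dx; A[0][1] = 1.0 / dx; b[0] = g;  // Neumann row, x = 0
    for (int i = 1; i < N - 1; ++i) {                   // interior Laplacian
        A[i][i - 1] = 1.0; A[i][i] = -2.0; A[i][i + 1] = 1.0;
    }
    A[N - 1][N - 1] = 1.0; b[N - 1] = uR;               // Dirichlet row, x = 1
    return solve_dense(A, b);
}
```

For $g = 1$ and $u(1) = 2$ the exact solution $u(x) = 1 + x$ is linear, so both the one-sided Neumann row and the interior rows reproduce it exactly.&lt;br /&gt;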
&lt;br /&gt;
We also support more interesting domains. On &amp;lt;xr id=&amp;quot;fig:mikimouse_heat&amp;quot;/&amp;gt; we see a domain in the shape of Miki mouse.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:mikimouse_heat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:mikimouse_heat.png|400px|thumb|upright=2|center|&amp;lt;caption&amp;gt;Miki mouse domain &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
==Convergence analysis for one dimensional diffusion equation==&lt;br /&gt;
We now tried to solve the diffusion equation in one dimension &lt;br /&gt;
$u_t = u_{xx}$ on the interval $[0,1]$&lt;br /&gt;
with Dirichlet boundary conditions&lt;br /&gt;
$u(0, t) = u(1,t) = 0 $&lt;br /&gt;
and an initial state&lt;br /&gt;
$u(x,0) = \sin(\pi x).$&lt;br /&gt;
&lt;br /&gt;
An analytical solution for this domain is given in closed form &lt;br /&gt;
\begin{equation}&lt;br /&gt;
u(x,t) = \sin(\pi x) \exp(-\pi^2 t),&lt;br /&gt;
\end{equation}&lt;br /&gt;
so we can use it to evaluate our approximation.&lt;br /&gt;
&lt;br /&gt;
We tested the method with a fixed time step of $\Delta t = 1\cdot 10^{-8}$ on a unit interval, that was discretized into $ N $ nodes. Monomial basis of $m$ monomials was used and $n$ closest nodes counted as support for each node. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt; Note: &amp;lt;/strong&amp;gt; the results are &amp;lt;strong&amp;gt;not confirmed&amp;lt;/strong&amp;gt; and serve merely as an orientation. Errors may have occurred.&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to number of nodes and support size==&lt;br /&gt;
&lt;br /&gt;
Convergence of the method was tested on different number of nodes $N = 15, 20, 25, \ldots, 200$. First we tested the method, when support size is equal to number of monomials in the basis ($n =m$), which we call local interpolation, for different values of $n = 3, 5, 7$. We then compared that to local approximation, where we take $n = 12$ supporting nodes and we only change number of monomials. We used gaussian weight function with $\sigma = 1\cdot \Delta x$. Error was calculated after $0.001$ time units have elapsed. We can see the results on &amp;lt;xr id=&amp;quot;fig:diffusionConvergence1d&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusionConvergence1d&amp;quot;&amp;gt;&lt;br /&gt;
[[File:monomials_convergnece_dt_e-8_t_0-001_approxSupp_12.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Convergence analysis for $ 1 $-dimensional diffusion equation using monomial basis.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The larger the support size, the faster the convergence. This holds for local interpolation as well as for approximation. However, when the number of nodes is too large, the method diverges sooner for larger support sizes. The error for approximation is larger than for interpolation (comparing the same basis size), but with interpolation the method starts to diverge at a smaller number of nodes.&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to weight function==&lt;br /&gt;
&lt;br /&gt;
We also checked the convergence for different weight functions. We tested it on number of nodes $N = 15, 20, 25, \ldots, 200$, with monomial basis with $m = 5$, support size $n = 12$ and gaussian weight function with $\sigma = w\cdot \Delta x$ for $w = 0.1$. Error was calculated after $0.001$ time units have elapsed with $\Delta t = 10^{-8}$. We can see the results on &amp;lt;xr id=&amp;quot;fig:diffusionConvergence1dWeights&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusionConvergence1dWeights&amp;quot;&amp;gt;&lt;br /&gt;
[[File:monomials_convergnece_dt_e-8_t_0-001_approxSupp_12_weights.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Convergence analysis for $ 1 $-dimensional diffusion equation using monomial basis with respect to weight function. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the $\sigma$ is too small, other nodes on the support domain are not taken into consideration, so taking $n = 12$ or $n = 3$ would give the same results. However when $\sigma$ is too large, all the nodes on the support domain are contributing and there is no significant difference between different weight functions.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Solving Electrostatics =&lt;br /&gt;
&lt;br /&gt;
This example is taken from the [http://www.freefem.org/ff++/ftp/freefem++doc.pdf FreeFem++ manual (page 235)].&lt;br /&gt;
&lt;br /&gt;
Assuming there is no current and the charge distribution is time independent, the electric field $\b{E}$ satisfies&lt;br /&gt;
\begin{equation}\label{eq:electrostatics}&lt;br /&gt;
\b{\nabla}\cdot\b{E} = \frac{\rho}{\epsilon}, \quad \b{\nabla} \times \b{E} = 0&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $\rho$ is the charge density and $\epsilon$ is the permittivity. If we introduce an electrostatic potential $\phi$ such that&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\b{E} = -\b{\nabla}\phi,&lt;br /&gt;
\end{equation}&lt;br /&gt;
we can insert it into the first equation in (\ref{eq:electrostatics}) resulting in Poisson's equation&lt;br /&gt;
\begin{equation}\label{eq:poisson}&lt;br /&gt;
\b{\nabla}^2 \phi = -\frac{\rho}{\epsilon}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
In the absence of unpaired electric charge, equation (\ref{eq:poisson}) becomes Laplace's equation&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\b{\nabla}^2 \phi = 0&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
We now solve this equation for a circular enclosure with two rectangular holes. The boundary of the circular enclosure is held at constant potential $0$ V. The two rectangular holes are held at constant potentials $+1$ V and $-1$ V, respectively. A coloured scatter plot is available in figure &amp;lt;xr id=&amp;quot;fig:electro_statics&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt; '''Code for electrostatics problem'''&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; line&amp;gt;&lt;br /&gt;
    // domain parameters, size, support and monomial order&lt;br /&gt;
    int domain_size = 3000;&lt;br /&gt;
    size_t n = 15; &lt;br /&gt;
    size_t m = 3; &lt;br /&gt;
&lt;br /&gt;
    double radius = 5.0;&lt;br /&gt;
    double width = 0.3;&lt;br /&gt;
    double height = 3.0;&lt;br /&gt;
&lt;br /&gt;
    // extra boundary labels&lt;br /&gt;
    int LEFT = -10;&lt;br /&gt;
    int RIGHT = -20;&lt;br /&gt;
&lt;br /&gt;
    // build circular domain&lt;br /&gt;
    CircleDomain&amp;lt;Vec2d&amp;gt; domain({0,0},radius);&lt;br /&gt;
    domain.fillUniformInterior(domain_size);&lt;br /&gt;
    domain.fillUniformBoundaryWithStep(domain.characteristicDistance());&lt;br /&gt;
&lt;br /&gt;
    // build left rectangle&lt;br /&gt;
    RectangleDomain&amp;lt;Vec2d&amp;gt; left({-2-width,-height},{-2+width,height});&lt;br /&gt;
    left.fillUniformBoundaryWithStep(domain.characteristicDistance());&lt;br /&gt;
    left.types[left.types == BOUNDARY] = LEFT;&lt;br /&gt;
    domain.subtract(left);&lt;br /&gt;
&lt;br /&gt;
    // build right rectangle&lt;br /&gt;
    RectangleDomain&amp;lt;Vec2d&amp;gt; right({2-width,-height},{2+width,height});&lt;br /&gt;
    right.fillUniformBoundaryWithStep(domain.characteristicDistance());&lt;br /&gt;
    right.types[right.types == BOUNDARY] = RIGHT;&lt;br /&gt;
    domain.subtract(right);&lt;br /&gt;
&lt;br /&gt;
    // relax domain and find supports&lt;br /&gt;
    domain.relax(50, 1e-2, 1.0, 3, 1000);&lt;br /&gt;
    domain.findSupport(n);&lt;br /&gt;
&lt;br /&gt;
    // get interior and boundary ranges&lt;br /&gt;
    Range&amp;lt;int&amp;gt; interior = domain.types == INTERNAL;&lt;br /&gt;
    Range&amp;lt;int&amp;gt; boundary = domain.types == BOUNDARY;&lt;br /&gt;
    Range&amp;lt;int&amp;gt; leftrect = domain.types == LEFT;&lt;br /&gt;
    Range&amp;lt;int&amp;gt; rightrect = domain.types == RIGHT;&lt;br /&gt;
&lt;br /&gt;
    // initialize unknown potential field and right-hand side&lt;br /&gt;
    VecXd phi(domain.size(),1);&lt;br /&gt;
    VecXd RHS(domain.size(),1);&lt;br /&gt;
&lt;br /&gt;
    // set the right-hand side in the interior (zero for Laplace's equation)&lt;br /&gt;
    RHS[interior] = 0.0;&lt;br /&gt;
&lt;br /&gt;
    // set Dirichlet boundary conditions&lt;br /&gt;
    RHS[boundary] = 0.0;&lt;br /&gt;
    RHS[leftrect] = -1.0;&lt;br /&gt;
    RHS[rightrect] = 1.0;&lt;br /&gt;
&lt;br /&gt;
    // prepare shape functions and laplacian&lt;br /&gt;
    std::vector&amp;lt;Triplet&amp;lt;double&amp;gt;&amp;gt; l_coeff;&lt;br /&gt;
    for (auto&amp;amp; c : interior) {&lt;br /&gt;
        Range&amp;lt;Vec2d&amp;gt; supp_domain = domain.positions[domain.support[c]];&lt;br /&gt;
        EngineMLS&amp;lt;Vec2d, Monomials, Gaussians&amp;gt; MLS(m,supp_domain,pow(domain.characteristicDistance(),2));&lt;br /&gt;
        VecXd shape = MLS.getShapeAt(supp_domain[0], {{2, 0}}) +&lt;br /&gt;
                      MLS.getShapeAt(supp_domain[0], {{0, 2}});&lt;br /&gt;
        for (size_t i = 0; i &amp;lt; supp_domain.size(); ++i) {&lt;br /&gt;
            l_coeff.emplace_back(c, domain.support[c][i], shape[i]);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // prepare dirichlet boundaries&lt;br /&gt;
    std::vector&amp;lt;Triplet&amp;lt;double&amp;gt;&amp;gt; b_coeff;&lt;br /&gt;
    for (auto&amp;amp; c : domain.types &amp;lt; 0) {&lt;br /&gt;
        b_coeff.emplace_back(c, c, 1.0);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // prepare matrices&lt;br /&gt;
    SparseMatrix&amp;lt;double&amp;gt; laplace_m(domain.size(), domain.size());&lt;br /&gt;
    laplace_m.setFromTriplets(l_coeff.begin(), l_coeff.end());&lt;br /&gt;
    SparseMatrix&amp;lt;double&amp;gt; boundary_m(domain.size(), domain.size());&lt;br /&gt;
    boundary_m.setFromTriplets(b_coeff.begin(), b_coeff.end());&lt;br /&gt;
&lt;br /&gt;
    // draw&lt;br /&gt;
    std::thread th([&amp;amp;] { draw2D(domain.positions, phi); });&lt;br /&gt;
&lt;br /&gt;
    // initialize solver&lt;br /&gt;
    Eigen::BiCGSTAB&amp;lt;SparseMatrix&amp;lt;double&amp;gt;&amp;gt; solver;&lt;br /&gt;
&lt;br /&gt;
    // join matrices together&lt;br /&gt;
    SparseMatrix&amp;lt;double&amp;gt; tmp = boundary_m + laplace_m;&lt;br /&gt;
    solver.compute(tmp);&lt;br /&gt;
&lt;br /&gt;
    // solve system of equations&lt;br /&gt;
    phi = solver.solve(RHS);&lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;#iterations:     &amp;quot; &amp;lt;&amp;lt; solver.iterations() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;estimated error: &amp;quot; &amp;lt;&amp;lt; solver.error()      &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
    // end drawing&lt;br /&gt;
    th.join();  &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:electro_statics&amp;quot;&amp;gt;&lt;br /&gt;
[[File:electrostatics.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Simple electrostatics problem solved by an implicit method with 15-node supports, monomial basis and Gaussian weight functions. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Convection-diffusion =&lt;br /&gt;
&lt;br /&gt;
The following example problem is adapted from [http://servidor.demec.ufpr.br/CFD/bibliografia/Ferziger%20Peric%20-%20Computational%20Methods%20for%20Fluid%20Dynamics,%203rd%20Ed%20-%202002.pdf Ferziger &amp;amp; Perić (2002) (pages 82-86)].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fvm_problem&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Screenshot_2016-11-28_11-05-03.png|thumb|400px|upright=2|centre|&amp;lt;caption&amp;gt; Geometry and boundary conditions for the scalar transport in a stagnation point flow. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We now consider the problem of a scalar quantity transported in a known velocity field $\b{u}$. The velocity field is given by&lt;br /&gt;
\[\b{u} = (u_x,u_y) = (x, -y),\]&lt;br /&gt;
which represents the flow near a stagnation point $(x,y) = (0,0)$. The streamlines are the curves $xy=\mathrm{const.}$ and change direction with respect to the Cartesian grid. The convection-diffusion equation (also known as the scalar transport equation) to be solved is&lt;br /&gt;
\[\alpha \b{\nabla}^2 \phi - \b{u}\cdot \b{\nabla}\phi = 0\]&lt;br /&gt;
with the following boundary conditions:&lt;br /&gt;
* $\phi = 0$ along the north (inlet) boundary;&lt;br /&gt;
* linear variation $\phi = (1 - y)$ along the west boundary ($x = 0$);&lt;br /&gt;
* symmetry condition on the south boundary;&lt;br /&gt;
* zero gradient at the east (outlet) boundary.&lt;br /&gt;
The parameter $\alpha$ is the diffusivity for the scalar quantity $\phi$. The problem and boundary conditions are schematically represented in the &amp;lt;xr id=&amp;quot;fig:fvm_problem&amp;quot;/&amp;gt;. The solutions obtained by Ferziger and Perić are shown in &amp;lt;xr id=&amp;quot;fig:fvm_problem_solution&amp;quot;/&amp;gt;.&lt;br /&gt;
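The structure of the discretized system can be seen in a minimal 1D analogue $\alpha u'' - c u' = 0$, $u(0)=0$, $u(1)=1$, which has the closed-form solution $u(x) = (e^{cx/\alpha}-1)/(e^{c/\alpha}-1)$. This stand-alone sketch uses central differences rather than MLSM shape functions:&lt;br /&gt;

```cpp
#include <cmath>
#include <vector>

// Tridiagonal (Thomas) solve; lo/di/up are the sub-, main and
// super-diagonals, rhs the right-hand side.
std::vector<double> thomas(std::vector<double> lo, std::vector<double> di,
                           std::vector<double> up, std::vector<double> rhs) {
    const int n = static_cast<int>(rhs.size());
    for (int i = 1; i < n; ++i) {
        const double w = lo[i] / di[i - 1];
        di[i] -= w * up[i - 1];
        rhs[i] -= w * rhs[i - 1];
    }
    std::vector<double> x(n, 0.0);
    x[n - 1] = rhs[n - 1] / di[n - 1];
    for (int i = n - 2; i >= 0; --i)
        x[i] = (rhs[i] - up[i] * x[i + 1]) / di[i];
    return x;
}

// 1D steady analogue alpha u'' - c u' = 0 on [0, 1] with u(0) = 0 and
// u(1) = 1, discretized with central differences. The 2D MLSM system has
// the same structure: a diffusion shape and a convection shape summed
// row by row into one sparse matrix.
std::vector<double> convection_diffusion_1d(int N, double alpha, double c) {
    const double dx = 1.0 / (N - 1);
    std::vector<double> lo(N, 0.0), di(N, 0.0), up(N, 0.0), rhs(N, 0.0);
    di[0] = 1.0; rhs[0] = 0.0;          // Dirichlet u(0) = 0
    di[N - 1] = 1.0; rhs[N - 1] = 1.0;  // Dirichlet u(1) = 1
    for (int i = 1; i < N - 1; ++i) {
        lo[i] = alpha / (dx * dx) + c / (2.0 * dx);  // coefficient of u_{i-1}
        di[i] = -2.0 * alpha / (dx * dx);
        up[i] = alpha / (dx * dx) - c / (2.0 * dx);  // coefficient of u_{i+1}
    }
    return thomas(lo, di, up, rhs);
}
```

As in the 2D case, the cell Péclet number $c\,\Delta x/\alpha$ must stay small for the central scheme to remain free of oscillations.&lt;br /&gt;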
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fvm_problem_solution&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Screenshot_2016-11-28_11-07-44.png|thumb|600px|center|upright=2|&amp;lt;caption&amp;gt; Isolines of $\phi$, from 0.05 to 0.95 with step 0.1 (top to bottom), for $\alpha = 0.01$ (left) and $\alpha = 0.001$ (right). &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The meshless solution is obtained on regular point arrangements of sizes 100 × 100 and 300 × 300, for the two cases of diffusivity $\alpha = 0.01$ and $\alpha = 0.001$, respectively. For comparison we have plotted the contour lines at the same intervals as the original authors of this example (see &amp;lt;xr id=&amp;quot;fig:fvm_problem_meshless&amp;quot;/&amp;gt;). The basis functions are the five monomials $1$, $x$, $y$, $x^2$, $y^2$.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fvm_problem_meshless&amp;quot;&amp;gt;&lt;br /&gt;
[[File:fvm_convection.png|thumb|600px|center|upright=2|&amp;lt;caption&amp;gt; Isolines of $\phi$, from 0.05 to 0.95 with step 0.1 (top to bottom), for $\alpha = 0.01$ (left) and $\alpha = 0.001$ (right), obtained with the meshless solver. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Analysis_of_MLSM_performance&amp;diff=1125</id>
		<title>Analysis of MLSM performance</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Analysis_of_MLSM_performance&amp;diff=1125"/>
				<updated>2017-04-24T09:00:57Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: /* Convergence with respect to number of nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Solving Diffusion equation=&lt;br /&gt;
For starters, we can solve the simple diffusion equation&lt;br /&gt;
$ \nabla^2 u = \frac{\partial u}{\partial t} $.&lt;br /&gt;
&lt;br /&gt;
We solved the equation on a square $\Omega = [0, a] \times [0, a]$ with&lt;br /&gt;
Dirichlet boundary conditions $ \left. u\right|_{\partial \Omega} = 0 $ and&lt;br /&gt;
initial state $ u(t = 0) = 1$.&lt;br /&gt;
&lt;br /&gt;
An analytical solution for this domain is known, and we use it to evaluate our&lt;br /&gt;
own solution.&lt;br /&gt;
\begin{equation} &lt;br /&gt;
u(\vec{p}, t) = \sum_{\substack{n=1 \\ n \text{&lt;br /&gt;
odd}}}^\infty\sum_{\substack{m=1 \\ m \text{ odd}}}^\infty \frac{1}{\pi^2}&lt;br /&gt;
\frac{16}{nm} \sin\left(\frac{\pi n}{a}p_x\right) \sin\left(\frac{\pi&lt;br /&gt;
m}{a}p_y\right) e^{-\frac{\pi^2 (n^2+m^2)}{a^2}t} &lt;br /&gt;
\end{equation}&lt;br /&gt;
Because the solution is&lt;br /&gt;
given in series form, we compare against a finite approximation, summing&lt;br /&gt;
up to $N = 100$ instead of infinity. The solution is shown in &amp;lt;xr id=&amp;quot;fig:square_heat&amp;quot;/&amp;gt;.&lt;br /&gt;
See the code for solving diffusion [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion.cpp here].&lt;br /&gt;
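The truncated series for the unit square ($a = 1$) can be evaluated directly; this stand-alone sketch sums the odd terms up to $n, m \le 100$, exactly as in the comparison above:&lt;br /&gt;

```cpp
#include <cmath>

// Partial sum of the analytic series on the unit square (a = 1),
// truncated at n, m <= nmax as in the comparison above. Near t = 0 the
// truncated sum undershoots the exact value 1 slightly (Gibbs effect).
double analytic_diffusion(double px, double py, double t, int nmax) {
    const double PI = std::acos(-1.0);
    double sum = 0.0;
    for (int n = 1; n <= nmax; n += 2)      // odd n only
        for (int m = 1; m <= nmax; m += 2)  // odd m only
            sum += 16.0 / (PI * PI * n * m) *
                   std::sin(PI * n * px) * std::sin(PI * m * py) *
                   std::exp(-PI * PI * (n * n + m * m) * t);
    return sum;
}
```

At $t = 0$ the centre value is close to the initial state $1$, the value is exactly $0$ on the boundary, and for larger $t$ only the slowest $n = m = 1$ mode survives.&lt;br /&gt;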
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:square_heat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:square_heat.png|thumb|upright=2|center|alt=A square of nodes coloured according to the solution(with smaller and larger node density)|&amp;lt;caption&amp;gt;A picture of our solution (with smaller and larger node density)&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to number of nodes==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:node_convergence.png|thumb|upright=2|center|alt=Graph of errors with respect to number of nodes|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method with a fixed time step of $ \Delta t = 1\cdot 10^{-5}$&lt;br /&gt;
on a unit square ($a = 1$). Results are shown in &amp;lt;xr id=&amp;quot;fig:node_convergence&amp;quot;/&amp;gt;.  A monomial basis of $6$ monomials was used and the $12$&lt;br /&gt;
closest nodes counted as support for each node.  At more than $250$ nodes of&lt;br /&gt;
discretization in each dimension the method diverges, which is expected. The&lt;br /&gt;
stability criterion for diffusion equation in two dimensions is $\Delta t \leq&lt;br /&gt;
\frac{1}{4} \Delta x^2$, where $\Delta x$ is the spatial discretization&lt;br /&gt;
step in one dimension. In our case, at $250$ nodes per side, the right-hand side&lt;br /&gt;
yields $\frac{1}{4}\cdot\frac{1}{250}\cdot\frac{1}{250} = 4\times 10^{-6}$,&lt;br /&gt;
which is of the same order as our time step, so the method becomes unstable roughly where the criterion predicts.&lt;br /&gt;
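The stability bound itself is a one-liner; this small sketch evaluates $\Delta t \le \frac{1}{4}\Delta x^2$ for a given number of nodes per side (note the bound is derived for finite differences, so for MLSM it is only indicative):&lt;br /&gt;

```cpp
// Largest stable explicit-Euler time step for the 2D diffusion equation
// on the unit square: dt <= dx^2 / 4, with dx = 1 / nodes_per_side.
double max_stable_dt(int nodes_per_side) {
    const double dx = 1.0 / nodes_per_side;
    return dx * dx / 4.0;
}
```

At $250$ nodes per side this gives $4\times 10^{-6}$, slightly below the time step $\Delta t = 10^{-5}$ used above.&lt;br /&gt;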
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence_5&amp;quot;&amp;gt;&lt;br /&gt;
[[File:node_convergence_5.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes for Gaussian and Monomial basis&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:node_convergence_5&amp;quot;/&amp;gt; shows another convergence plot, this time using the monomial basis $\{1, x,&lt;br /&gt;
y, x^2, y^2\}$ and a Gaussian basis, with a time step $ \Delta t = 5&lt;br /&gt;
\cdot 10^{-6}$ and 5 support nodes. The total node count was $N = 2500$.&lt;br /&gt;
The error was calculated after $0.01$ time units had elapsed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:explicit_implicit&amp;quot;&amp;gt;&lt;br /&gt;
[[File:explicit_implicit_diffusion_comparison.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes for explicit and implicit mixed boundary condition (Neumann is enforced in two different ways).&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The figure above shows the convergence of one quarter of the whole solution with Dirichlet boundary conditions (as above). The nodes spanned the area $[0, 0.5]^2$. As expected, a higher number of nodes yields a lower maximum error, up to some maximal number of nodes; this limit is lower for the explicit method than for the implicit one.&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to number of time steps==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:timestep_convergence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:timestep_convergence.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Convergence with respect to different timesteps&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method on a fixed node count with different time steps on the same&lt;br /&gt;
domain as above. Results are shown in &amp;lt;xr id=&amp;quot;fig:timestep_convergence&amp;quot;/&amp;gt;. For large time steps the method diverges, but once it starts&lt;br /&gt;
converging the precision increases steadily as the time step decreases, until it&lt;br /&gt;
hits its lower limit. This behaviour is expected.  The error was calculated&lt;br /&gt;
against the analytical solution above after $0.005$ units of time have passed.  A&lt;br /&gt;
monomial basis up to order $2$ inclusive ($m = 6$) was used and the support&lt;br /&gt;
size was $n = 12$.&lt;br /&gt;
&lt;br /&gt;
==Using Gaussian basis==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:gauss_sigma_dependence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:gauss_sigma_dependence.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Graph of error with respect to Gaussian basis $\sigma$ &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method on a fixed node count of $N = 2500$ with a spatial&lt;br /&gt;
discretization step of $\Delta x = \frac{1}{50}$. We used a Gaussian basis of&lt;br /&gt;
$m = 5$ functions with support size $n = 13$.  The error was calculated&lt;br /&gt;
against the analytical solution above after $0.01$ time units.  A time step of&lt;br /&gt;
$\Delta t = 10^{-5}$ was used. Results are shown in &amp;lt;xr id=&amp;quot;fig:gauss_sigma_dependence&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As we can see from the graph, there exists an interval where the choice of&lt;br /&gt;
$\sigma$ does not matter much, but outside of this interval the method diverges&lt;br /&gt;
rapidly. The user must therefore take care to choose $\sigma$&lt;br /&gt;
appropriately, with respect to the plot above.&lt;br /&gt;
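The divergence for large $\sigma$ is consistent with ill-conditioning of the basis: as $\sigma$ grows, the Gaussians flatten and become nearly linearly dependent. A tiny stand-alone sketch (three Gaussians on a 1D stencil, not the library code) shows the Gram determinant collapsing:&lt;br /&gt;

```cpp
#include <cmath>

// Determinant of the 3x3 Gram matrix of Gaussian basis functions centred
// at x = 0, 0.5, 1 with shape parameter sigma. As sigma grows the basis
// functions flatten, the matrix approaches rank one and its determinant
// collapses towards zero -- the ill-conditioning behind the divergence
// for large sigma.
double gaussian_gram_det(double sigma) {
    const double x[3] = {0.0, 0.5, 1.0};
    double M[3][3];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            const double r = x[i] - x[j];
            M[i][j] = std::exp(-r * r / (sigma * sigma));
        }
    // cofactor expansion of the 3x3 determinant
    return M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
         - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
         + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]);
}
```

For $\sigma = 0.5$ the determinant is of order one, while for $\sigma = 10$ it drops by many orders of magnitude, so the linear systems built from such a basis become numerically singular.&lt;br /&gt;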
&lt;br /&gt;
== Convergence with respect to shape parameters ==&lt;br /&gt;
&lt;br /&gt;
The choice of the basis function's shape parameter can have a significant effect on the solution of the system.&lt;br /&gt;
We tested the space of feasible shape parameters on the diffusion equation with $\Delta x = \Delta y = 0.01$&lt;br /&gt;
and $m = n = 9$. The weight function does not have much influence here, except on the condition&lt;br /&gt;
number and on any singular values that are cut off because of it.&lt;br /&gt;
&lt;br /&gt;
Results for Gaussian, MQ and IMQ basis functions are presented in &amp;lt;xr id=&amp;quot;fig:shape_space_gau&amp;quot;/&amp;gt;, &amp;lt;xr id=&amp;quot;fig:shape_space_mq&amp;quot;/&amp;gt; and &amp;lt;xr id=&amp;quot;fig:shape_space_imq&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_gau&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_gau.png|thumb|left|800px|&amp;lt;caption&amp;gt;Error as a function of Gaussian basis' $\sigma_B$ and Gaussian weight's $\sigma_W$&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_mq&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_imq.png|thumb|800px|&amp;lt;caption&amp;gt;Error as a function of MQ basis' $\sigma_B$ and Gaussian weight's $\sigma_W$&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_imq&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_mq.png|thumb|upright=1|left|800px|&amp;lt;caption&amp;gt;Error as a function of IMQ basis' $\sigma_B$ and Gaussian weight's $\sigma_W$&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:shape_space_mon&amp;quot;&amp;gt;&lt;br /&gt;
[[File:sigma_depedance_error_mon.png|thumb|upright=1|800px|&amp;lt;caption&amp;gt;Error as a function of Gaussian weight's $\sigma_W$ for Monomials of order 2.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Solving in 3D==&lt;br /&gt;
&lt;br /&gt;
A 3-dimensional case on the domain $[0, 1]^3$ was tested on $N = 20^3$&lt;br /&gt;
nodes, making the discretization step $\Delta x = 0.05$.  A support size of&lt;br /&gt;
$n=10$ with $m=10$ Gaussian basis functions was used. Their&lt;br /&gt;
normalization parameter was $\sigma = 60\Delta x$. A time step of $\Delta&lt;br /&gt;
t = 10^{-5}$ and an explicit Euler method were used to calculate the solution&lt;br /&gt;
up to $0.01$ time units. The resulting function is shown in &amp;lt;xr id=&amp;quot;fig:diffusion3d&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusion3d&amp;quot;&amp;gt;&lt;br /&gt;
[[File:diffusion3d.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;A $ 3 $-dimensional solution with an explicit Euler method&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Solving Dirichlet with [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classEngineMLS.html EngineMLS]==&lt;br /&gt;
Example code using explicit stepping to reproduce the thermal images from the beginning: [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion.cpp example code]&lt;br /&gt;
&lt;br /&gt;
==Solving Dirichlet with MLSM operators==&lt;br /&gt;
Example code using explicit stepping and MLSM operators to reproduce the thermal images from the beginning: [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion_mlsm_operators.cpp example code]&lt;br /&gt;
&lt;br /&gt;
==Solving mixed boundary conditions with MLSM operators==&lt;br /&gt;
By using MLSM operators and the `[http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classmm_1_1MLSM.html MLSM::neumann]` method we can also solve&lt;br /&gt;
diffusion problems with mixed (Dirichlet and Neumann) boundary conditions. The solution is shown in &amp;lt;xr id=&amp;quot;fig:quarter_diffusion&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:quarter_diffusion&amp;quot;&amp;gt;&lt;br /&gt;
[[File:quarter_diffusion.png|400px|thumb|upright=2|center|&amp;lt;caption&amp;gt;Example of solving using Neumann boundary conditions &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example code showing the use of Neumann boundary conditions: [https://gitlab.com/e62Lab/e62numcodes/blob/master/test/mlsm_operators_test.cpp example code]&lt;br /&gt;
&lt;br /&gt;
We also support more interesting domains. On &amp;lt;xr id=&amp;quot;fig:mikimouse_heat&amp;quot;/&amp;gt; we see a domain in the shape of Miki mouse.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:mikimouse_heat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:mikimouse_heat.png|400px|thumb|upright=2|center|&amp;lt;caption&amp;gt;Miki mouse domain &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
==Convergence analysis for one dimensional diffusion equation==&lt;br /&gt;
We now tried to solve the diffusion equation in one dimension &lt;br /&gt;
$u_t = u_{xx}$ on the interval $[0,1]$&lt;br /&gt;
with Dirichlet boundary conditions&lt;br /&gt;
$u(0, t) = u(1,t) = 0 $&lt;br /&gt;
and an initial state&lt;br /&gt;
$u(x,0) = \sin(\pi x).$&lt;br /&gt;
&lt;br /&gt;
An analytical solution for this domain is given in closed form &lt;br /&gt;
\begin{equation}&lt;br /&gt;
u(x,t) = \sin(\pi x) \exp(-\pi^2 t),&lt;br /&gt;
\end{equation}&lt;br /&gt;
so we can use it to evaluate our approximation.&lt;br /&gt;
&lt;br /&gt;
We tested the method with a fixed time step of $\Delta t = 1\cdot 10^{-8}$ on a unit interval, that was discretized into $ N $ nodes. Monomial basis of $m$ monomials was used and $n$ closest nodes counted as support for each node. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt; Note: &amp;lt;/strong&amp;gt; the results are &amp;lt;strong&amp;gt;not confirmed&amp;lt;/strong&amp;gt; and serve merely as an orientation. Errors may have occurred.&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to number of nodes and support size==&lt;br /&gt;
&lt;br /&gt;
Convergence of the method was tested on different number of nodes $N = 15, 20, 25, \ldots, 200$. First we tested the method, when support size is equal to number of monomials in the basis ($n =m$), which we call local interpolation, for different values of $n = 3, 5, 7$. We then compared that to local approximation, where we take $n = 12$ supporting nodes and we only change number of monomials. We used gaussian weight function with $\sigma = 1\cdot \Delta x$. Error was calculated after $0.001$ time units have elapsed. We can see the results on &amp;lt;xr id=&amp;quot;fig:diffusionConvergence1d&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusionConvergence1d&amp;quot;&amp;gt;&lt;br /&gt;
[[File:monomials_convergnece_dt_e-8_t_0-001_approxSupp_12.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Convergence analysis for $ 1 $-dimensional diffusion equation using monomial basis.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The larger the support size, the faster the convergence. This holds for local interpolation as well as for approximation. However, when the number of nodes is too large, the method diverges sooner for larger support sizes. The error for approximation is larger than for interpolation (comparing the same basis size), but with interpolation the method starts to diverge at a smaller number of nodes.&lt;br /&gt;
&lt;br /&gt;
==Convergence with respect to weight function==&lt;br /&gt;
&lt;br /&gt;
We also checked the convergence for different weight functions. We tested it on number of nodes $N = 15, 20, 25, \ldots, 200$, with monomial basis with $m = 5$, support size $n = 12$ and gaussian weight function with $\sigma = w\cdot \Delta x$ for $w = 0.1$. Error was calculated after $0.001$ time units have elapsed with $\Delta t = 10^{-8}$. We can see the results on &amp;lt;xr id=&amp;quot;fig:diffusionConvergence1dWeights&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusionConvergence1dWeights&amp;quot;&amp;gt;&lt;br /&gt;
[[File:monomials_convergnece_dt_e-8_t_0-001_approxSupp_12_weights.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Convergence analysis for $ 1 $-dimensional diffusion equation using monomial basis with respect to weight function. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the $\sigma$ is too small, other nodes on the support domain are not taken into consideration, so taking $n = 12$ or $n = 3$ would give the same results. However when $\sigma$ is too large, all the nodes on the support domain are contributing and there is no significant difference between different weight functions.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Solving Electrostatics =&lt;br /&gt;
&lt;br /&gt;
This example is taken from the [http://www.freefem.org/ff++/ftp/freefem++doc.pdf FreeFem++ manual (page 235)].&lt;br /&gt;
&lt;br /&gt;
Assuming there is no current and the charge distribution is time-independent, the electric field $\b{E}$ satisfies&lt;br /&gt;
\begin{equation}\label{eq:electrostatics}&lt;br /&gt;
\b{\nabla}\cdot\b{E} = \frac{\rho}{\epsilon}, \quad \b{\nabla} \times \b{E} = 0&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $\rho$ is the charge density and $\epsilon$ is the permittivity. If we introduce the electrostatic potential $\phi$ such that&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\b{E} = -\b{\nabla}\phi,&lt;br /&gt;
\end{equation}&lt;br /&gt;
we can insert it into the first equation in (\ref{eq:electrostatics}), which yields Poisson's equation&lt;br /&gt;
\begin{equation}\label{eq:poisson}&lt;br /&gt;
\b{\nabla}^2 \phi = -\frac{\rho}{\epsilon}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
In the absence of unpaired electric charge, equation (\ref{eq:poisson}) reduces to Laplace's equation&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\b{\nabla}^2 \phi = 0.&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
We now solve this equation for a circular enclosure with two rectangular holes. The boundary of the circular enclosure is held at a constant potential of $0$ V. The two rectangular holes are held at constant potentials of $+1$ V and $-1$ V. A coloured scatter plot of the solution is shown in &amp;lt;xr id=&amp;quot;fig:electro_statics&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt; '''Code for electrostatics problem'''&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; line&amp;gt;&lt;br /&gt;
    // domain parameters, size, support and monomial order&lt;br /&gt;
    int domain_size = 3000;&lt;br /&gt;
    size_t n = 15; &lt;br /&gt;
    size_t m = 3; &lt;br /&gt;
&lt;br /&gt;
    double radius = 5.0;&lt;br /&gt;
    double width = 0.3;&lt;br /&gt;
    double height = 3.0;&lt;br /&gt;
&lt;br /&gt;
    // extra boundary labels&lt;br /&gt;
    int LEFT = -10;&lt;br /&gt;
    int RIGHT = -20;&lt;br /&gt;
&lt;br /&gt;
    // build circular domain&lt;br /&gt;
    CircleDomain&amp;lt;Vec2d&amp;gt; domain({0,0},radius);&lt;br /&gt;
    domain.fillUniformInterior(domain_size);&lt;br /&gt;
    domain.fillUniformBoundaryWithStep(domain.characteristicDistance());&lt;br /&gt;
&lt;br /&gt;
    // build left rectangle&lt;br /&gt;
    RectangleDomain&amp;lt;Vec2d&amp;gt; left({-2-width,-height},{-2+width,height});&lt;br /&gt;
    left.fillUniformBoundaryWithStep(domain.characteristicDistance());&lt;br /&gt;
    left.types[left.types == BOUNDARY] = LEFT;&lt;br /&gt;
    domain.subtract(left);&lt;br /&gt;
&lt;br /&gt;
    // build right rectangle&lt;br /&gt;
    RectangleDomain&amp;lt;Vec2d&amp;gt; right({2-width,-height},{2+width,height});&lt;br /&gt;
    right.fillUniformBoundaryWithStep(domain.characteristicDistance());&lt;br /&gt;
    right.types[right.types == BOUNDARY] = RIGHT;&lt;br /&gt;
    domain.subtract(right);&lt;br /&gt;
&lt;br /&gt;
    // relax domain and find supports&lt;br /&gt;
    domain.relax(50, 1e-2, 1.0, 3, 1000);&lt;br /&gt;
    domain.findSupport(n);&lt;br /&gt;
&lt;br /&gt;
    // get interior and boundary ranges&lt;br /&gt;
    Range&amp;lt;int&amp;gt; interior = domain.types == INTERNAL;&lt;br /&gt;
    Range&amp;lt;int&amp;gt; boundary = domain.types == BOUNDARY;&lt;br /&gt;
    Range&amp;lt;int&amp;gt; leftrect = domain.types == LEFT;&lt;br /&gt;
    Range&amp;lt;int&amp;gt; rightrect = domain.types == RIGHT;&lt;br /&gt;
&lt;br /&gt;
    // initialize unknown potential field and right-hand side&lt;br /&gt;
    VecXd phi(domain.size(),1);&lt;br /&gt;
    VecXd RHS(domain.size(),1);&lt;br /&gt;
&lt;br /&gt;
    // set the right-hand side in the interior (Laplace equation)&lt;br /&gt;
    RHS[interior] = 0.0;&lt;br /&gt;
&lt;br /&gt;
    // set dirichlet boundary conditions&lt;br /&gt;
    RHS[boundary] = 0.0;&lt;br /&gt;
    RHS[leftrect] = -1.0;&lt;br /&gt;
    RHS[rightrect] = 1.0;&lt;br /&gt;
&lt;br /&gt;
    // prepare shape functions and laplacian&lt;br /&gt;
    std::vector&amp;lt;Triplet&amp;lt;double&amp;gt;&amp;gt; l_coeff;&lt;br /&gt;
    for (auto&amp;amp; c : interior) {&lt;br /&gt;
        Range&amp;lt;Vec2d&amp;gt; supp_domain = domain.positions[domain.support[c]];&lt;br /&gt;
        EngineMLS&amp;lt;Vec2d, Monomials, Gaussians&amp;gt; MLS(m,supp_domain,pow(domain.characteristicDistance(),2));&lt;br /&gt;
        VecXd shape = MLS.getShapeAt(supp_domain[0], {{2, 0}}) +&lt;br /&gt;
                      MLS.getShapeAt(supp_domain[0], {{0, 2}});&lt;br /&gt;
        for (size_t i = 0; i &amp;lt; supp_domain.size(); ++i) {&lt;br /&gt;
            l_coeff.emplace_back(c, domain.support[c][i], shape[i]);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // prepare dirichlet boundaries&lt;br /&gt;
    std::vector&amp;lt;Triplet&amp;lt;double&amp;gt;&amp;gt; b_coeff;&lt;br /&gt;
    for (auto&amp;amp; c : domain.types &amp;lt; 0) {&lt;br /&gt;
        b_coeff.emplace_back(c, c, 1.0);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // prepare matrices&lt;br /&gt;
    SparseMatrix&amp;lt;double&amp;gt; laplace_m(domain.size(), domain.size());&lt;br /&gt;
    laplace_m.setFromTriplets(l_coeff.begin(), l_coeff.end());&lt;br /&gt;
    SparseMatrix&amp;lt;double&amp;gt; boundary_m(domain.size(), domain.size());&lt;br /&gt;
    boundary_m.setFromTriplets(b_coeff.begin(), b_coeff.end());&lt;br /&gt;
&lt;br /&gt;
    // draw&lt;br /&gt;
    std::thread th([&amp;amp;] { draw2D(domain.positions, phi); });&lt;br /&gt;
&lt;br /&gt;
    // initialize solver&lt;br /&gt;
    Eigen::BiCGSTAB&amp;lt;SparseMatrix&amp;lt;double&amp;gt;&amp;gt; solver;&lt;br /&gt;
&lt;br /&gt;
    // join matrices together&lt;br /&gt;
    SparseMatrix&amp;lt;double&amp;gt; tmp = boundary_m + laplace_m;&lt;br /&gt;
    solver.compute(tmp);&lt;br /&gt;
&lt;br /&gt;
    // solve system of equations&lt;br /&gt;
    phi = solver.solve(RHS);&lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;#iterations:     &amp;quot; &amp;lt;&amp;lt; solver.iterations() &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
    std::cout &amp;lt;&amp;lt; &amp;quot;estimated error: &amp;quot; &amp;lt;&amp;lt; solver.error()      &amp;lt;&amp;lt; std::endl;&lt;br /&gt;
&lt;br /&gt;
    // end drawing&lt;br /&gt;
    th.join();  &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:electro_statics&amp;quot;&amp;gt;&lt;br /&gt;
[[File:electrostatics.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Simple electrostatics problem solved by implicit method with 15 node monomial supports and gaussian weight functions. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Convection-diffusion =&lt;br /&gt;
&lt;br /&gt;
The following example problem is adapted from [http://servidor.demec.ufpr.br/CFD/bibliografia/Ferziger%20Peric%20-%20Computational%20Methods%20for%20Fluid%20Dynamics,%203rd%20Ed%20-%202002.pdf Ferziger &amp;amp; Perić (2002) (pages 82-86)].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fvm_problem&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Screenshot_2016-11-28_11-05-03.png|thumb|400px|upright=2|centre|&amp;lt;caption&amp;gt; Geometry and boundary conditions for the scalar transport in a stagnation point flow. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We now consider the problem of a scalar quantity transported in a known velocity field $\b{u}$. The velocity field is given by&lt;br /&gt;
\[\b{u} = (u_x,u_y) = (x, -y),\]&lt;br /&gt;
which represents the flow near a stagnation point at $(x,y) = (0,0)$. The streamlines $xy=\mathrm{const.}$ change direction with respect to the Cartesian grid. The convection-diffusion equation (also known as the scalar transport equation) to be solved is&lt;br /&gt;
\[\alpha \b{\nabla}^2 \phi - \b{u}\cdot \b{\nabla}\phi = 0\]&lt;br /&gt;
with the following boundary conditions:&lt;br /&gt;
* $\phi = 0$ along the north (inlet) boundary;&lt;br /&gt;
* linear variation $\phi = (1 - y)$ along the west boundary ($x = 0$);&lt;br /&gt;
* symmetry condition on the south boundary;&lt;br /&gt;
* zero gradient at the east (outlet) boundary.&lt;br /&gt;
The parameter $\alpha$ is the diffusivity of the scalar quantity $\phi$. The problem and boundary conditions are schematically represented in &amp;lt;xr id=&amp;quot;fig:fvm_problem&amp;quot;/&amp;gt;. The solutions obtained by Ferziger and Perić are shown in &amp;lt;xr id=&amp;quot;fig:fvm_problem_solution&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fvm_problem_solution&amp;quot;&amp;gt;&lt;br /&gt;
[[File:Screenshot_2016-11-28_11-07-44.png|thumb|600px|center|upright=2|&amp;lt;caption&amp;gt; Isolines of $\phi$, from 0.05 to 0.95 with step 0.1 (top to bottom), for $\alpha = 0.01$ (left) and $\alpha = 0.001$ (right). &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The meshless solution is obtained on regular point arrangements of sizes 100 × 100 and 300 × 300, for the two cases of diffusivity $\alpha = 0.01$ and $\alpha = 0.001$, respectively. For comparison we have plotted the contour lines at the same intervals as the original authors of this example (see &amp;lt;xr id=&amp;quot;fig:fvm_problem_meshless&amp;quot;/&amp;gt;). The basis functions are the five monomials $1$, $x$, $y$, $x^2$, $y^2$.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fvm_problem_meshless&amp;quot;&amp;gt;&lt;br /&gt;
[[File:fvm_convection.png|thumb|600px|center|upright=2|&amp;lt;caption&amp;gt; Isolines of $\phi$, from 0.05 to 0.95 with step 0.1 (top to bottom), for $\alpha = 0.01$ (left) and $\alpha = 0.001$ (right), obtained with the meshless solver. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1013</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1013"/>
				<updated>2017-03-17T08:24:40Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ are determined with various algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
The algorithm requires a list of the $(n_S + 1)$ nodes nearest to ${\bf x}$. A naive implementation that sorts all nodal distances becomes expensive: with respect to the number of nodes $N$, it approaches quadratic computational complexity. However, by using efficient data structures, e.g. a quadtree, an R-tree or a k-d tree (a 2-d tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster.&lt;br /&gt;
&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting plane is chosen based on the depth of the node in the tree: at depth $d$, the $(d \bmod k)$-th axis (dimension) is used to divide the remaining points. For example, in 3 dimensions we have the $x$, $y$ and $z$ axes. At the root level we decide to divide the points according to their $x$ values (we could also start with $y$ or $z$). The usual criterion for the division is the median: at the root level, we look for the median point with respect to the $x$ coordinates of all points and put it into the root node. The remaining points are put into the left and right subtrees: points with a smaller $x$ coordinate go into the left subtree and points with a larger $x$ coordinate into the right subtree (or vice versa). We then construct the two children of the root from the two subgroups we just created; the procedure is the same, except that the space is divided according to the $y$ (then $z$) coordinate. We continue this way until we run out of points, with the axes used for the median search repeating periodically.&lt;br /&gt;
&lt;br /&gt;
The procedure described above constructs the tree from a whole collection of given points at once and is used by the ANN library. The tree can also be constructed by inserting points one by one; this procedure is used by the Libssrckdtree library. The time complexity of both procedures is $O(n \log n)$. The most time-consuming part of the former algorithm is the median selection, which has $O(n)$ time complexity; this can be improved by employing schemes smarter than exact median finding (consult the literature). The time complexity of the second algorithm follows from the average time needed to insert a node into the tree, which scales as $O(\log n)$; inserting $n$ nodes thus again results in $O(n \log n)$ time complexity.&lt;br /&gt;
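A minimal sketch of the median-split construction and a nearest-neighbor query in 2D (an illustration of the algorithm above, not the ANN or Libssrckdtree implementation):&lt;br /&gt;

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Minimal 2-d tree built by recursive median splits: the splitting axis
// cycles with depth, and the median point becomes the node at that level.
struct Point { double x[2]; };

struct KdTree {
    struct Node { Point pt; int left; int right; };
    std::vector<Node> nodes;

    // Build from pts[lo, hi); returns the index of the subtree root (-1 if empty).
    int build(std::vector<Point>& pts, int lo, int hi, int depth = 0) {
        if (lo >= hi) return -1;
        int mid = (lo + hi) / 2, axis = depth % 2;
        // Partial sort so the median along `axis` lands at position `mid`.
        std::nth_element(pts.begin() + lo, pts.begin() + mid, pts.begin() + hi,
                         [axis](const Point& a, const Point& b) {
                             return a.x[axis] < b.x[axis];
                         });
        int id = (int)nodes.size();
        nodes.push_back({pts[mid], -1, -1});
        int l = build(pts, lo, mid, depth + 1);
        int r = build(pts, mid + 1, hi, depth + 1);
        nodes[id].left = l;
        nodes[id].right = r;
        return id;
    }

    // Nearest-neighbor search: descend towards the query first, then visit
    // the far subtree only if the splitting plane is closer than the best
    // point found so far.
    void nearest(int node, const Point& q, int depth, int& best, double& bestD) const {
        if (node < 0) return;
        const Point& p = nodes[node].pt;
        double d = (p.x[0] - q.x[0]) * (p.x[0] - q.x[0]) +
                   (p.x[1] - q.x[1]) * (p.x[1] - q.x[1]);
        if (d < bestD) { bestD = d; best = node; }
        int axis = depth % 2;
        double diff = q.x[axis] - p.x[axis];
        nearest(diff < 0 ? nodes[node].left : nodes[node].right, q, depth + 1, best, bestD);
        if (diff * diff < bestD)
            nearest(diff < 0 ? nodes[node].right : nodes[node].left, q, depth + 1, best, bestD);
    }
};

Point nearestPoint(const KdTree& t, const Point& q) {
    int best = -1;
    double bestD = std::numeric_limits<double>::max();
    t.nearest(0, q, 0, best, bestD);  // node 0 is the root
    return t.nodes[best].pt;
}
```

Using `std::nth_element` gives the expected $O(n \log n)$ average build time, since each level partitions around the median in linear time.&lt;br /&gt;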
&lt;br /&gt;
We tested the build times of the ANN and Libssrckdtree libraries. We also included a cover tree library, which can be found [https://github.com/DNCrane/Cover-Tree here].&lt;br /&gt;
&lt;br /&gt;
[[File:Tree_building_time.jpg]]&lt;br /&gt;
&lt;br /&gt;
Nearest neighbor search is an $O(\log n)$ procedure on average, although in practice, especially in higher dimensions, it is closer to $O(n)$. There is a nice description of the algorithm on [https://en.wikipedia.org/wiki/K-d_tree Wikipedia]. We tested the ANN and Libssrckdtree implementations, and in addition the cover tree nearest neighbor search.&lt;br /&gt;
&lt;br /&gt;
[[File:nearest_neighbor_search_time.jpg|800px]]&lt;br /&gt;
&lt;br /&gt;
Note that this cover tree implementation is not the best available, but it is the only one we found that supports insertion and removal and is written in C++.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
{\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) determined with coordinates (2nd column), after sorting by $x$ (3rd column), after sorting sub lists by $y$ (4th column) and after sorting sub sub lists by $x$ again (5th column). The nodes nearest to medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of 2D tree construction, the list of nodes is sorted by their $x$ coordinate, which is shown in third column of Table \ref{tab:kD_nodes}. Then a node with median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, the one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and the one for $x$ above or equal the median, i.e. $x\geq 0.6$. The two sub sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub lists of nodes are sorted by their $y$ coordinate, which is shown in fourth column of Table \ref{tab:kD_nodes}. The median coordinates $y$ are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$ are taken as roots for the second level of 2D tree and are shown in bold and used to further split the tree. The resulting four sub sub sets of nodes are shown in the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub sub lists are sorted again by their $x$ coordinate, with the result shown in fifth column. Four roots are obtained with the coordinate $x$ nearest to medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by list of several nodes, because such a fine grained distribution of leaves as in the presented example is often not beneficial from the computational efficiency point of view.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering Analysis with Boundary Elements. 2012;36:310-321.&lt;br /&gt;
* Trobec R., Kosec G., Parallel Scientific Computing: Theory, Algorithms, and Applications of Mesh Based and Meshless Methods. Springer; 2015.&lt;br /&gt;
*k-d tree. (n.d.). In Wikipedia. Retrieved March 17, 2017, from https://en.wikipedia.org/wiki/K-d_tree&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1011</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1011"/>
				<updated>2017-03-17T08:20:22Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ are determined with various algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
The algorithm requires a list of the $(n_S + 1)$ nodes nearest to ${\bf x}$. A naive implementation that sorts all nodal distances becomes expensive: with respect to the number of nodes $N$, it approaches quadratic computational complexity. However, by using efficient data structures, e.g. a quadtree, an R-tree or a k-d tree (a 2-d tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster.&lt;br /&gt;
&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting plane is chosen based on the depth of the node in the tree: at depth $d$, the $(d \bmod k)$-th axis (dimension) is used to divide the remaining points. For example, in 3 dimensions we have the $x$, $y$ and $z$ axes. At the root level we decide to divide the points according to their $x$ values (we could also start with $y$ or $z$). The usual criterion for the division is the median: at the root level, we look for the median point with respect to the $x$ coordinates of all points and put it into the root node. The remaining points are put into the left and right subtrees: points with a smaller $x$ coordinate go into the left subtree and points with a larger $x$ coordinate into the right subtree (or vice versa). We then construct the two children of the root from the two subgroups we just created; the procedure is the same, except that the space is divided according to the $y$ (then $z$) coordinate. We continue this way until we run out of points, with the axes used for the median search repeating periodically.&lt;br /&gt;
&lt;br /&gt;
The procedure described above constructs the tree from a whole collection of given points at once and is used by the ANN library. The tree can also be constructed by inserting points one by one; this procedure is used by the Libssrckdtree library. The time complexity of both procedures is $O(n \log n)$. The most time-consuming part of the former algorithm is the median selection, which has $O(n)$ time complexity; this can be improved by employing schemes smarter than exact median finding (consult the literature). The time complexity of the second algorithm follows from the average time needed to insert a node into the tree, which scales as $O(\log n)$; inserting $n$ nodes thus again results in $O(n \log n)$ time complexity.&lt;br /&gt;
&lt;br /&gt;
We tested build times for Ann and Libssrckdtree libraries. We also included a cover tree library, which can be found [https://github.com/DNCrane/Cover-Tree here].&lt;br /&gt;
&lt;br /&gt;
[[File:Tree_building_time.jpg]]&lt;br /&gt;
&lt;br /&gt;
Nearest neighbor search is an $O(\log n)$ procedure on average, although in practice, especially in higher dimensions, it is closer to $O(n)$. There is a nice description of the algorithm on [https://en.wikipedia.org/wiki/K-d_tree Wikipedia]. We tested the ANN and Libssrckdtree implementations, and in addition the cover tree nearest neighbor search.&lt;br /&gt;
&lt;br /&gt;
[[File:nearest_neighbor_search_time.jpg|800px]]&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) with their coordinates (2nd column), after sorting by $x$ (3rd column), after sorting the sublists by $y$ (4th column) and after sorting the sub-sublists by $x$ again (5th column). The nodes nearest to the medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of 2D tree construction, the list of nodes is sorted by their $x$ coordinate, as shown in the third column of Table \ref{tab:kD_nodes}. Then a node with the median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and one for $x$ above or equal to the median, i.e. $x\geq 0.6$. The two subsets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sublists of nodes are sorted by their $y$ coordinate, as shown in the fourth column of Table \ref{tab:kD_nodes}. The median $y$ coordinates are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$, shown in bold, are taken as roots for the second level of the 2D tree and are used to split the tree further. The resulting four sub-sublists of nodes are shown on the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub-sublists are sorted again by their $x$ coordinate, with the result shown in the fifth column. Four roots are obtained with the $x$ coordinate nearest to the medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are $(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by lists of several nodes, because such a fine-grained distribution of leaves as in the presented example is often not beneficial from a computational efficiency point of view.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321;&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing : theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1010</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1010"/>
				<updated>2017-03-17T08:19:26Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh-based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ must be determined with dedicated algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
The algorithm requires a list of the $(n_S + 1)$ nodes nearest to ${\bf x}$. A naive implementation that finds these nodes by classically sorting all nodal distances approaches quadratic computational complexity in the number of nodes $N$. However, by using efficient data structures, e.g. a quadtree, an R tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster.&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting plane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points. For example, in 3 dimensions we have the x, y and z axes. At the root level we divide the points according to their $x$ values (we could equally start with the y or z values). The usual splitting criterion is the median: at the root level we find the median of the $x$ coordinates of all points and store the median point in the root node. The remaining points are distributed between the two subtrees: points with a smaller $x$ coordinate go into the left subtree, and points with a larger $x$ coordinate into the right subtree (or vice versa). The two children of the root are then constructed from these two subgroups by the same procedure, except that the space is divided according to the y coordinate, then z, and so on, with the splitting axes repeating periodically. We continue this way until we run out of points.&lt;br /&gt;
&lt;br /&gt;
The procedure described above constructs the tree from the whole collection of given points at once and is the approach used by the ANN library. Alternatively, the tree can be built by inserting points one by one; this approach is used by the Libssrckdtree library. The time complexity of both procedures is $O(n \log n)$. The most time-consuming part of the former algorithm is median selection, whose time complexity is $O(n)$; this can be improved by employing smarter schemes than exact median finding (consult the literature). The time complexity of the second algorithm follows directly from the average time needed to insert a node into the tree, which scales as $O(\log n)$; inserting $n$ nodes thus again results in $O(n \log n)$ time complexity.&lt;br /&gt;
&lt;br /&gt;
We tested build times for the ANN and Libssrckdtree libraries. We also included a cover tree library, which can be found [https://github.com/DNCrane/Cover-Tree here].&lt;br /&gt;
&lt;br /&gt;
[[File:Tree_building_time.jpg]]&lt;br /&gt;
&lt;br /&gt;
Nearest neighbor search is an $O(\log n)$ procedure on average, although in practice, especially in higher dimensions, it is often closer to $O(n)$. There is a nice description of the algorithm on the [https://en.wikipedia.org/wiki/K-d_tree Wikipedia] page. We tested the ANN and Libssrckdtree implementations, as well as the cover tree nearest neighbor search.&lt;br /&gt;
&lt;br /&gt;
[[File:nearest_neighbor_search_time.jpg|800px]]&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) with their coordinates (2nd column), after sorting by $x$ (3rd column), after sorting the sublists by $y$ (4th column) and after sorting the sub-sublists by $x$ again (5th column). The nodes nearest to the medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of 2D tree construction, the list of nodes is sorted by their $x$ coordinate, as shown in the third column of Table \ref{tab:kD_nodes}. Then a node with the median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and one for $x$ above or equal to the median, i.e. $x\geq 0.6$. The two subsets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sublists of nodes are sorted by their $y$ coordinate, as shown in the fourth column of Table \ref{tab:kD_nodes}. The median $y$ coordinates are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$, shown in bold, are taken as roots for the second level of the 2D tree and are used to split the tree further. The resulting four sub-sublists of nodes are shown on the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub-sublists are sorted again by their $x$ coordinate, with the result shown in the fifth column. Four roots are obtained with the $x$ coordinate nearest to the medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are $(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by lists of several nodes, because such a fine-grained distribution of leaves as in the presented example is often not beneficial from a computational efficiency point of view.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321;&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing : theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1009</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1009"/>
				<updated>2017-03-17T08:18:20Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh-based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ must be determined with dedicated algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
The algorithm requires a list of the $(n_S + 1)$ nodes nearest to ${\bf x}$. A naive implementation that finds these nodes by classically sorting all nodal distances approaches quadratic computational complexity in the number of nodes $N$. However, by using efficient data structures, e.g. a quadtree, an R tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster.&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting plane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points. For example, in 3 dimensions we have the x, y and z axes. At the root level we divide the points according to their $x$ values (we could equally start with the y or z values). The usual splitting criterion is the median: at the root level we find the median of the $x$ coordinates of all points and store the median point in the root node. The remaining points are distributed between the two subtrees: points with a smaller $x$ coordinate go into the left subtree, and points with a larger $x$ coordinate into the right subtree (or vice versa). The two children of the root are then constructed from these two subgroups by the same procedure, except that the space is divided according to the y coordinate, then z, and so on, with the splitting axes repeating periodically. We continue this way until we run out of points.&lt;br /&gt;
&lt;br /&gt;
The procedure described above constructs the tree from the whole collection of given points at once and is the approach used by the ANN library. Alternatively, the tree can be built by inserting points one by one; this approach is used by the Libssrckdtree library. The time complexity of both procedures is $O(n \log n)$. The most time-consuming part of the former algorithm is median selection, whose time complexity is $O(n)$; this can be improved by employing smarter schemes than exact median finding (consult the literature). The time complexity of the second algorithm follows directly from the average time needed to insert a node into the tree, which scales as $O(\log n)$; inserting $n$ nodes thus again results in $O(n \log n)$ time complexity.&lt;br /&gt;
&lt;br /&gt;
We tested build times for the ANN and Libssrckdtree libraries. We also included a cover tree library, which can be found [https://github.com/DNCrane/Cover-Tree here].&lt;br /&gt;
&lt;br /&gt;
[[File:Tree_building_time.jpg]]&lt;br /&gt;
&lt;br /&gt;
Nearest neighbor search is an $O(\log n)$ procedure on average, although in practice, especially in higher dimensions, it is often closer to $O(n)$. There is a nice description of the algorithm on the [https://en.wikipedia.org/wiki/K-d_tree Wikipedia] page. We tested the ANN and Libssrckdtree implementations, as well as the cover tree nearest neighbor search.&lt;br /&gt;
&lt;br /&gt;
[[File:nearest_neighbor_search_time.jpg|800px]]&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) with their coordinates (2nd column), after sorting by $x$ (3rd column), after sorting the sublists by $y$ (4th column) and after sorting the sub-sublists by $x$ again (5th column). The nodes nearest to the medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of 2D tree construction, the list of nodes is sorted by their $x$ coordinate, as shown in the third column of Table \ref{tab:kD_nodes}. Then a node with the median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and one for $x$ above or equal to the median, i.e. $x\geq 0.6$. The two subsets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sublists of nodes are sorted by their $y$ coordinate, as shown in the fourth column of Table \ref{tab:kD_nodes}. The median $y$ coordinates are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$, shown in bold, are taken as roots for the second level of the 2D tree and are used to split the tree further. The resulting four sub-sublists of nodes are shown on the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub-sublists are sorted again by their $x$ coordinate, with the result shown in the fifth column. Four roots are obtained with the $x$ coordinate nearest to the medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are $(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by lists of several nodes, because such a fine-grained distribution of leaves as in the presented example is often not beneficial from a computational efficiency point of view.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321;&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing : theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1007</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1007"/>
				<updated>2017-03-17T08:13:46Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh-based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ must be determined with dedicated algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
The algorithm requires a list of the $(n_S + 1)$ nodes nearest to ${\bf x}$. A naive implementation that finds these nodes by classically sorting all nodal distances approaches quadratic computational complexity in the number of nodes $N$. However, by using efficient data structures, e.g. a quadtree, an R tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster.&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting plane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points. For example, in 3 dimensions we have the x, y and z axes. At the root level we divide the points according to their $x$ values (we could equally start with the y or z values). The usual splitting criterion is the median: at the root level we find the median of the $x$ coordinates of all points and store the median point in the root node. The remaining points are distributed between the two subtrees: points with a smaller $x$ coordinate go into the left subtree, and points with a larger $x$ coordinate into the right subtree (or vice versa). The two children of the root are then constructed from these two subgroups by the same procedure, except that the space is divided according to the y coordinate, then z, and so on, with the splitting axes repeating periodically. We continue this way until we run out of points.&lt;br /&gt;
&lt;br /&gt;
The procedure described above constructs the tree from the whole collection of given points at once and is the approach used by the ANN library. Alternatively, the tree can be built by inserting points one by one; this approach is used by the Libssrckdtree library. The time complexity of both procedures is $O(n \log n)$. The most time-consuming part of the former algorithm is median selection, whose time complexity is $O(n)$; this can be improved by employing smarter schemes than exact median finding (consult the literature). The time complexity of the second algorithm follows directly from the average time needed to insert a node into the tree, which scales as $O(\log n)$; inserting $n$ nodes thus again results in $O(n \log n)$ time complexity.&lt;br /&gt;
&lt;br /&gt;
We tested build times for the ANN and Libssrckdtree libraries. We also included a cover tree library, which can be found [https://github.com/DNCrane/Cover-Tree here].&lt;br /&gt;
&lt;br /&gt;
[[File:Tree_building_time.jpg]]&lt;br /&gt;
&lt;br /&gt;
Nearest neighbor search is an $O(\log n)$ procedure. There is a nice description of the algorithm on the [https://en.wikipedia.org/wiki/K-d_tree Wikipedia] page. &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) with their coordinates (2nd column), after sorting by $x$ (3rd column), after sorting the sublists by $y$ (4th column) and after sorting the sub-sublists by $x$ again (5th column). The nodes nearest to the medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of 2D tree construction, the list of nodes is sorted by their $x$ coordinate, as shown in the third column of Table \ref{tab:kD_nodes}. Then a node with the median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and one for $x$ above or equal to the median, i.e. $x\geq 0.6$. The two subsets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sublists of nodes are sorted by their $y$ coordinate, as shown in the fourth column of Table \ref{tab:kD_nodes}. The median $y$ coordinates are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$, shown in bold, are taken as roots for the second level of the 2D tree and are used to split the tree further. The resulting four sub-sublists of nodes are shown on the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub-sublists are sorted again by their $x$ coordinate, with the result shown in the fifth column. Four roots are obtained with the $x$ coordinate nearest to the medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are $(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by lists of several nodes, because such a fine-grained distribution of leaves as in the presented example is often not beneficial from a computational efficiency point of view.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321;&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing : theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1006</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1006"/>
				<updated>2017-03-17T08:01:28Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh-based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ must be determined with dedicated algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting plane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points. For example, in 3 dimensions we have the x, y and z axes. At the root level we divide the points according to their $x$ values (we could equally start with the y or z values). The usual splitting criterion is the median: at the root level we find the median of the $x$ coordinates of all points and store the median point in the root node. The remaining points are distributed between the two subtrees: points with a smaller $x$ coordinate go into the left subtree, and points with a larger $x$ coordinate into the right subtree (or vice versa). The two children of the root are then constructed from these two subgroups by the same procedure, except that the space is divided according to the y coordinate, then z, and so on, with the splitting axes repeating periodically. We continue this way until we run out of points.&lt;br /&gt;
&lt;br /&gt;
The procedure described above builds the tree from the whole collection of points at once and is the one used by the ANN library. Alternatively, the tree can be constructed by inserting points one by one; this approach is used by the Libssrckdtree library. The time complexity of both procedures is $O(n \log n)$. The most time-consuming part of the former algorithm is median selection, which takes $O(n)$ time per level; this can be improved by employing schemes smarter than exact median finding (consult the literature). The complexity of the latter follows from the average time needed to insert a node into the tree, which scales as $O(\log n)$; inserting $n$ nodes therefore again results in $O(n \log n)$ time complexity.&lt;br /&gt;
&lt;br /&gt;
We tested the build times of the ANN and Libssrckdtree libraries. We also included a cover tree library, which can be found [https://github.com/DNCrane/Cover-Tree here].&lt;br /&gt;
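The incremental variant can be sketched similarly (again a toy illustration, not Libssrckdtree's actual API): each new point descends the tree, comparing against the stored point on the axis belonging to that depth, until it reaches an empty slot.

```python
def kd_insert(node, point, depth=0):
    """Insert one point into a k-d tree, one comparison per level."""
    if node is None:
        return {"point": point, "left": None, "right": None}
    axis = depth % len(point)
    side = "left" if point[axis] < node["point"][axis] else "right"
    node[side] = kd_insert(node[side], point, depth + 1)
    return node

def build_incremental(points):
    """Build a tree by n insertions, O(log n) each on average."""
    tree = None
    for p in points:
        tree = kd_insert(tree, p)
    return tree
```

Note that, unlike the median construction, the balance of the incrementally built tree depends on the insertion order; the $O(\log n)$ insertion cost holds on average, not in the worst case.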
&lt;br /&gt;
[[File:Tree_building_time.jpg]]&lt;br /&gt;
&lt;br /&gt;
The input to the algorithm is a list of the $(n_S + 1)$ nearest nodes to ${\bf x}$. A naive implementation that sorts all nodal distances approaches quadratic computational complexity in the number of nodes $N$. By using efficient spatial data structures, e.g. a quadtree, an R tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster. &lt;br /&gt;
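To sketch why the lookup is fast once the tree is built: a nearest-neighbor query descends to the query's side of each splitting plane and only visits the far side when the plane lies closer than the best distance found so far, so most of the tree is never touched. A minimal Python illustration (a sketch assuming nodes stored as dicts with "point", "left" and "right" keys, not any library's implementation):

```python
import math

def kd_nearest(node, target, depth=0, best=None):
    """Exact nearest-neighbor search in a k-d tree.

    best is a (point, distance) pair; the far subtree is visited only
    when the splitting plane is closer than the current best distance.
    """
    if node is None:
        return best
    d = math.dist(node["point"], target)
    if best is None or d < best[1]:
        best = (node["point"], d)
    axis = depth % len(target)
    diff = target[axis] - node["point"][axis]
    near, far = ("left", "right") if diff < 0 else ("right", "left")
    best = kd_nearest(node[near], target, depth + 1, best)
    if abs(diff) < best[1]:   # plane closer than best: must check far side
        best = kd_nearest(node[far], target, depth + 1, best)
    return best
```

Finding the $n_S$ nearest support nodes works the same way, with a bounded priority queue of candidates in place of the single best pair.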
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
{\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) determined with coordinates (2nd column), after sorting by $x$ (3rd column), after sorting sub lists by $y$ (4th column) and after sorting sub sub lists by $x$ again (5th column). The nodes nearest to medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of 2D tree construction, the list of nodes is sorted by their $x$ coordinate, which is shown in third column of Table \ref{tab:kD_nodes}. Then a node with median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, the one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and the one for $x$ above or equal the median, i.e. $x\geq 0.6$. The two sub sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub lists of nodes are sorted by their $y$ coordinate, which is shown in fourth column of Table \ref{tab:kD_nodes}. The median coordinates $y$ are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$ are taken as roots for the second level of 2D tree and are shown in bold and used to further split the tree. The resulting four sub sub sets of nodes are shown in the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub sub lists are sorted again by their $x$ coordinate, with the result shown in fifth column. Four roots are obtained with the coordinate $x$ nearest to medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases the refinement of the tree stops sooner, when its leaves hold lists of several nodes each, because such a fine-grained distribution of leaves as in the presented example is often not beneficial for computational efficiency.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321;&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing : theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1004</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1004"/>
				<updated>2017-03-17T07:58:45Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh-based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ must be found with dedicated algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree that uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting plane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points, with the axes repeating cyclically. For example, in 3 dimensions we have the x, y and z axes. At the root level we divide the points according to their $x$ values (we could equally start with y or z). The usual splitting criterion is the median, so at the root level we find the point with the median x coordinate and store it in the root node. The remaining points are distributed into the two subtrees: points with a smaller x coordinate go into the left subtree and points with a larger x coordinate into the right subtree (or vice versa). The two children of the root are then constructed from these two subgroups by the same procedure, except that the space is divided according to the y (and then z) coordinate. We continue in this way until we run out of points.&lt;br /&gt;
&lt;br /&gt;
The procedure described above builds the tree from the whole collection of points at once and is the one used by the ANN library. Alternatively, the tree can be constructed by inserting points one by one; this approach is used by the Libssrckdtree library. The time complexity of both procedures is $O(n \log n)$. The most time-consuming part of the former algorithm is median selection, which takes $O(n)$ time per level; this can be improved by employing schemes smarter than exact median finding (consult the literature). The complexity of the latter follows from the average time needed to insert a node into the tree, which scales as $O(\log n)$; inserting $n$ nodes therefore again results in $O(n \log n)$ time complexity.&lt;br /&gt;
&lt;br /&gt;
We tested the build times of the ANN and Libssrckdtree libraries. We also included a cover tree library, which can be found [https://github.com/DNCrane/Cover-Tree here].&lt;br /&gt;
&lt;br /&gt;
[[File:cas_gradnje.jpg]]&lt;br /&gt;
&lt;br /&gt;
The input to the algorithm is a list of the $(n_S + 1)$ nearest nodes to ${\bf x}$. A naive implementation that sorts all nodal distances approaches quadratic computational complexity in the number of nodes $N$. By using efficient spatial data structures, e.g. a quadtree, an R tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
{\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) determined with coordinates (2nd column), after sorting by $x$ (3rd column), after sorting sub lists by $y$ (4th column) and after sorting sub sub lists by $x$ again (5th column). The nodes nearest to medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of 2D tree construction, the list of nodes is sorted by their $x$ coordinate, which is shown in third column of Table \ref{tab:kD_nodes}. Then a node with median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, the one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and the one for $x$ above or equal the median, i.e. $x\geq 0.6$. The two sub sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub lists of nodes are sorted by their $y$ coordinate, which is shown in fourth column of Table \ref{tab:kD_nodes}. The median coordinates $y$ are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$ are taken as roots for the second level of 2D tree and are shown in bold and used to further split the tree. The resulting four sub sub sets of nodes are shown in the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub sub lists are sorted again by their $x$ coordinate, with the result shown in fifth column. Four roots are obtained with the coordinate $x$ nearest to medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases the refinement of the tree stops sooner, when its leaves hold lists of several nodes each, because such a fine-grained distribution of leaves as in the presented example is often not beneficial for computational efficiency.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321;&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing : theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1003</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1003"/>
				<updated>2017-03-17T07:57:47Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh-based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ must be found with dedicated algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree that uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting plane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points, with the axes repeating cyclically. For example, in 3 dimensions we have the x, y and z axes. At the root level we divide the points according to their $x$ values (we could equally start with y or z). The usual splitting criterion is the median, so at the root level we find the point with the median x coordinate and store it in the root node. The remaining points are distributed into the two subtrees: points with a smaller x coordinate go into the left subtree and points with a larger x coordinate into the right subtree (or vice versa). The two children of the root are then constructed from these two subgroups by the same procedure, except that the space is divided according to the y (and then z) coordinate. We continue in this way until we run out of points.&lt;br /&gt;
&lt;br /&gt;
The procedure described above builds the tree from the whole collection of points at once and is the one used by the ANN library. Alternatively, the tree can be constructed by inserting points one by one; this approach is used by the Libssrckdtree library. The time complexity of both procedures is $O(n \log n)$. The most time-consuming part of the former algorithm is median selection, which takes $O(n)$ time per level; this can be improved by employing schemes smarter than exact median finding (consult the literature). The complexity of the latter follows from the average time needed to insert a node into the tree, which scales as $O(\log n)$; inserting $n$ nodes therefore again results in $O(n \log n)$ time complexity.&lt;br /&gt;
&lt;br /&gt;
We tested the build times of the ANN and Libssrckdtree libraries. We also included a cover tree library, which can be found [https://github.com/DNCrane/Cover-Tree here].&lt;br /&gt;
&lt;br /&gt;
[[File:cas_gradnje.pdf]]&lt;br /&gt;
&lt;br /&gt;
The input to the algorithm is a list of the $(n_S + 1)$ nearest nodes to ${\bf x}$. A naive implementation that sorts all nodal distances approaches quadratic computational complexity in the number of nodes $N$. By using efficient spatial data structures, e.g. a quadtree, an R tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
{\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) determined with coordinates (2nd column), after sorting by $x$ (3rd column), after sorting sub lists by $y$ (4th column) and after sorting sub sub lists by $x$ again (5th column). The nodes nearest to medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of 2D tree construction, the list of nodes is sorted by their $x$ coordinate, which is shown in third column of Table \ref{tab:kD_nodes}. Then a node with median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, the one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and the one for $x$ above or equal the median, i.e. $x\geq 0.6$. The two sub sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub lists of nodes are sorted by their $y$ coordinate, which is shown in fourth column of Table \ref{tab:kD_nodes}. The median coordinates $y$ are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$ are taken as roots for the second level of 2D tree and are shown in bold and used to further split the tree. The resulting four sub sub sets of nodes are shown in the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub sub lists are sorted again by their $x$ coordinate, with the result shown in fifth column. Four roots are obtained with the coordinate $x$ nearest to medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases the refinement of the tree stops sooner, when its leaves hold lists of several nodes each, because such a fine-grained distribution of leaves as in the presented example is often not beneficial for computational efficiency.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321;&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing : theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1002</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1002"/>
				<updated>2017-03-17T07:56:25Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh-based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ must be found with dedicated algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree that uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting plane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points, with the axes repeating cyclically. For example, in 3 dimensions we have the x, y and z axes. At the root level we divide the points according to their $x$ values (we could equally start with y or z). The usual splitting criterion is the median, so at the root level we find the point with the median x coordinate and store it in the root node. The remaining points are distributed into the two subtrees: points with a smaller x coordinate go into the left subtree and points with a larger x coordinate into the right subtree (or vice versa). The two children of the root are then constructed from these two subgroups by the same procedure, except that the space is divided according to the y (and then z) coordinate. We continue in this way until we run out of points.&lt;br /&gt;
&lt;br /&gt;
The procedure described above builds the tree from the whole collection of points at once and is the one used by the ANN library. Alternatively, the tree can be constructed by inserting points one by one; this approach is used by the Libssrckdtree library. The time complexity of both procedures is $O(n \log n)$. The most time-consuming part of the former algorithm is median selection, which takes $O(n)$ time per level; this can be improved by employing schemes smarter than exact median finding (consult the literature). The complexity of the latter follows from the average time needed to insert a node into the tree, which scales as $O(\log n)$; inserting $n$ nodes therefore again results in $O(n \log n)$ time complexity.&lt;br /&gt;
&lt;br /&gt;
We tested the build times of the ANN and Libssrckdtree libraries. We also included a cover tree library, which can be found [https://github.com/DNCrane/Cover-Tree here].&lt;br /&gt;
&lt;br /&gt;
[[File:/home/jure/delo_ijs_e6/libssrckdtree_test/nearest_neighbors_cas\cas_gradnje.pdf]]&lt;br /&gt;
&lt;br /&gt;
The input to the algorithm is a list of the $(n_S + 1)$ nearest nodes to ${\bf x}$. A naive implementation that sorts all nodal distances approaches quadratic computational complexity in the number of nodes $N$. By using efficient spatial data structures, e.g. a quadtree, an R tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
{\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) determined with coordinates (2nd column), after sorting by $x$ (3rd column), after sorting sub lists by $y$ (4th column) and after sorting sub sub lists by $x$ again (5th column). The nodes nearest to medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of 2D tree construction, the list of nodes is sorted by their $x$ coordinate, which is shown in third column of Table \ref{tab:kD_nodes}. Then a node with median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, the one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and the one for $x$ above or equal the median, i.e. $x\geq 0.6$. The two sub sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub lists of nodes are sorted by their $y$ coordinate, which is shown in fourth column of Table \ref{tab:kD_nodes}. The median coordinates $y$ are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$ are taken as roots for the second level of 2D tree and are shown in bold and used to further split the tree. The resulting four sub sub sets of nodes are shown in the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub sub lists are sorted again by their $x$ coordinate, with the result shown in fifth column. Four roots are obtained with the coordinate $x$ nearest to medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases the refinement of the tree stops sooner, when its leaves hold lists of several nodes each, because such a fine-grained distribution of leaves as in the presented example is often not beneficial for computational efficiency.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321;&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing : theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1001</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1001"/>
				<updated>2017-03-17T07:53:35Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh-based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ must be found with dedicated algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting hyperplane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points. For example, in 3 dimensions we have the $x$, $y$ and $z$ axes. At the root level we divide the points according to their $x$ values (we could equally start with $y$ or $z$). The usual criterion for the division is the median: at the root level, we find the median of the $x$ coordinates of all points and store the median point in the root node. The remaining points are distributed into the left and right subtrees: points with a smaller $x$ coordinate go into the left subtree and points with a larger $x$ coordinate into the right subtree (or vice versa). The two children of the root are then constructed from these two subgroups by the same procedure, except that the space is divided according to the $y$ or $z$ coordinate. We continue this way until we run out of points; the axes used for the median search repeat periodically.&lt;br /&gt;
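The median-split construction just described can be sketched in a few lines of Python (a minimal illustration of the idea, not the actual implementation of any library mentioned here; the dictionary-based node layout is our own choice):

```python
# Minimal k-d tree construction by recursive median splitting.
# At depth d, points are split along axis (d mod k), so the axes
# used for the median search repeat periodically.

def build_kdtree(points, depth=0):
    """Return the root of a k-d tree storing the median point per node."""
    if not points:
        return None
    axis = depth % len(points[0])                   # cycle through the axes
    points = sorted(points, key=lambda p: p[axis])  # sort along current axis
    m = len(points) // 2                            # index of the median point
    return {
        "point": points[m],
        "left":  build_kdtree(points[:m], depth + 1),      # smaller coordinates
        "right": build_kdtree(points[m + 1:], depth + 1),  # larger or equal
    }

# Eleven sample nodes on the unit square.
nodes = [(0, 0), (0.6, 0), (1, 0), (0, 0.4), (0.6, 0.3), (1, 0.5),
         (0.24, 0.6), (0.76, 0.8), (0, 1), (0.47, 1), (1, 1)]
tree = build_kdtree(nodes)
print(tree["point"])  # -> (0.6, 0), a node with the median x = 0.6
```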
&lt;br /&gt;
The procedure described above constructs the tree from a complete collection of given points and is used by the ANN library. Alternatively, the tree can be constructed by inserting points one by one; this procedure is used by the Libssrckdtree library. The time complexity of both procedures is $O(n \log n)$. The most time consuming part of the former algorithm is the median selection, whose time complexity is $O(n)$; this can be improved by employing schemes smarter than exact median finding (consult the literature). The time complexity of the second algorithm follows from the average time needed to insert a node into the tree, which scales as $O(\log n)$; inserting $n$ nodes thus again results in $O(n \log n)$ time complexity.&lt;br /&gt;
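The incremental variant can be sketched similarly (again a hypothetical minimal version for illustration, not the Libssrckdtree interface): each insertion walks down from the root, comparing on the axis that cycles with depth, which takes $O(\log n)$ steps on average in a reasonably balanced tree.

```python
# Sketch of k-d tree construction by inserting points one at a time.

class KDNode:
    def __init__(self, point):
        self.point = point
        self.left = None   # points with smaller coordinate on the split axis
        self.right = None  # points with larger or equal coordinate

def insert(root, point, depth=0):
    """Insert a point into the subtree rooted at `root`; return the root."""
    if root is None:
        return KDNode(point)
    axis = depth % len(point)  # split axis cycles with depth
    if point[axis] < root.point[axis]:
        root.left = insert(root.left, point, depth + 1)
    else:
        root.right = insert(root.right, point, depth + 1)
    return root

root = None
for p in [(0.5, 0.5), (0.2, 0.8), (0.9, 0.1), (0.4, 0.3)]:
    root = insert(root, p)
```

Note that, unlike the median-split construction, the balance of the resulting tree depends on the insertion order.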
&lt;br /&gt;
We tested build times for the ANN and Libssrckdtree libraries. We also included a cover tree library, which can be found [https://github.com/DNCrane/Cover-Tree here].&lt;br /&gt;
&lt;br /&gt;
The input to the algorithm is a list of the $(n_S +1)$ nearest nodes to ${\bf x}$. A naive implementation that finds these nodes by sorting all nodal distances quickly becomes expensive, approaching quadratic computational complexity in the number of nodes $N$. However, by using efficient data structures, e.g. a quadtree, an R-tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster. &lt;br /&gt;
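The speed-up comes from the fact that a query can discard whole subtrees: a subtree is visited only if the splitting hyperplane lies closer to the query point than the best node found so far. A self-contained sketch of a single nearest-node query (our own illustration, with hypothetical function names, not the interface of any of the libraries above):

```python
import math

def build(points, depth=0):
    """Median-split k-d tree construction; a node is (point, left, right)."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    m = len(points) // 2
    return (points[m], build(points[:m], depth + 1), build(points[m + 1:], depth + 1))

def nearest(node, q, depth=0, best=None):
    """Return the stored point nearest to q, pruning far subtrees."""
    if node is None:
        return best
    point, left, right = node
    if best is None or math.dist(q, point) < math.dist(q, best):
        best = point
    axis = depth % len(q)
    near, far = (left, right) if q[axis] < point[axis] else (right, left)
    best = nearest(near, q, depth + 1, best)
    # Visit the far subtree only if the splitting plane is closer to q
    # than the current best candidate.
    if abs(q[axis] - point[axis]) < math.dist(q, best):
        best = nearest(far, q, depth + 1, best)
    return best

pts = [(0, 0), (0.6, 0), (1, 0), (0, 0.4), (0.6, 0.3), (1, 0.5),
       (0.24, 0.6), (0.76, 0.8), (0, 1), (0.47, 1), (1, 1)]
tree = build(pts)
print(nearest(tree, (0.7, 0.4)))  # -> (0.6, 0.3)
```

The tree is built once before the solution procedure; every query afterwards only touches a logarithmic number of nodes on average.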
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) determined with coordinates (2nd column), after sorting by $x$ (3rd column), after sorting sub lists by $y$ (4th column) and after sorting sub sub lists by $x$ again (5th column). The nodes nearest to medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of the 2D tree construction, the list of nodes is sorted by their $x$ coordinate, which is shown in the third column of Table \ref{tab:kD_nodes}. Then a node with the median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and one for $x$ above or equal to the median, i.e. $x\geq 0.6$. The two sub sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub lists of nodes are sorted by their $y$ coordinate, which is shown in the fourth column of Table \ref{tab:kD_nodes}. The median $y$ coordinates are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$ are taken as the roots of the second level of the 2D tree; they are shown in bold and used to further split the tree. The resulting four sub sub sets of nodes are shown on the right side of Figure \ref{fig.2D_treegraph} as the nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub sub lists are sorted again by their $x$ coordinate, with the result shown in the fifth column. Four roots are obtained with the coordinate $x$ nearest to the medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by lists of several nodes, because such a fine grained distribution of leaves as in the presented example is often not beneficial from the computational efficiency point of view.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321;&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing : theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1000</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=1000"/>
				<updated>2017-03-17T07:49:25Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ are determined with various algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting hyperplane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points. For example, in 3 dimensions we have the $x$, $y$ and $z$ axes. At the root level we divide the points according to their $x$ values (we could equally start with $y$ or $z$). The usual criterion for the division is the median: at the root level, we find the median of the $x$ coordinates of all points and store the median point in the root node. The remaining points are distributed into the left and right subtrees: points with a smaller $x$ coordinate go into the left subtree and points with a larger $x$ coordinate into the right subtree (or vice versa). The two children of the root are then constructed from these two subgroups by the same procedure, except that the space is divided according to the $y$ or $z$ coordinate. We continue this way until we run out of points; the axes used for the median search repeat periodically.&lt;br /&gt;
&lt;br /&gt;
The procedure described above constructs the tree from a complete collection of given points and is used by the ANN library. Alternatively, the tree can be constructed by inserting points one by one; this procedure is used by the Libssrckdtree library. The time complexity of both procedures is $O(n \log n)$. The most time consuming part of the former algorithm is the median selection, whose time complexity is $O(n)$; this can be improved by employing schemes smarter than exact median finding. The time complexity of the second algorithm follows from the average time needed to insert a node into the tree, which scales as $O(\log n)$; inserting $n$ nodes thus again results in $O(n \log n)$ time complexity.&lt;br /&gt;
&lt;br /&gt;
We tested build times for the ANN and Libssrckdtree libraries.&lt;br /&gt;
&lt;br /&gt;
The input to the algorithm is a list of the $(n_S +1)$ nearest nodes to ${\bf x}$. A naive implementation that finds these nodes by sorting all nodal distances quickly becomes expensive, approaching quadratic computational complexity in the number of nodes $N$. However, by using efficient data structures, e.g. a quadtree, an R-tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) determined with coordinates (2nd column), after sorting by $x$ (3rd column), after sorting sub lists by $y$ (4th column) and after sorting sub sub lists by $x$ again (5th column). The nodes nearest to medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of the 2D tree construction, the list of nodes is sorted by their $x$ coordinate, which is shown in the third column of Table \ref{tab:kD_nodes}. Then a node with the median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and one for $x$ above or equal to the median, i.e. $x\geq 0.6$. The two sub sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub lists of nodes are sorted by their $y$ coordinate, which is shown in the fourth column of Table \ref{tab:kD_nodes}. The median $y$ coordinates are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$ are taken as the roots of the second level of the 2D tree; they are shown in bold and used to further split the tree. The resulting four sub sub sets of nodes are shown on the right side of Figure \ref{fig.2D_treegraph} as the nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub sub lists are sorted again by their $x$ coordinate, with the result shown in the fifth column. Four roots are obtained with the coordinate $x$ nearest to the medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by lists of several nodes, because such a fine grained distribution of leaves as in the presented example is often not beneficial from the computational efficiency point of view.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321;&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing : theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=999</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=999"/>
				<updated>2017-03-17T07:46:46Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ are determined with various algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting hyperplane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points. For example, in 3 dimensions we have the $x$, $y$ and $z$ axes. At the root level we divide the points according to their $x$ values (we could equally start with $y$ or $z$). The usual criterion for the division is the median: at the root level, we find the median of the $x$ coordinates of all points and store the median point in the root node. The remaining points are distributed into the left and right subtrees: points with a smaller $x$ coordinate go into the left subtree and points with a larger $x$ coordinate into the right subtree (or vice versa). The two children of the root are then constructed from these two subgroups by the same procedure, except that the space is divided according to the $y$ or $z$ coordinate. We continue this way until we run out of points; the axes used for the median search repeat periodically.&lt;br /&gt;
&lt;br /&gt;
The procedure described above constructs the tree from a complete collection of given points and is used by the ANN library. Alternatively, the tree can be constructed by inserting points one by one; this procedure is used by the Libssrckdtree library. The time complexity of both procedures is $O(n \log n)$. The most time consuming part of the former algorithm is the median selection, whose time complexity is $O(n)$; this can be improved by employing schemes smarter than exact median finding. The time complexity of the second algorithm follows from the average time needed to insert a node into the tree, which scales as $O(\log n)$; inserting $n$ nodes thus again results in $O(n \log n)$ time complexity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The input to the algorithm is a list of the $(n_S +1)$ nearest nodes to ${\bf x}$. A naive implementation that finds these nodes by sorting all nodal distances quickly becomes expensive, approaching quadratic computational complexity in the number of nodes $N$. However, by using efficient data structures, e.g. a quadtree, an R-tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) determined with coordinates (2nd column), after sorting by $x$ (3rd column), after sorting sub lists by $y$ (4th column) and after sorting sub sub lists by $x$ again (5th column). The nodes nearest to medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of the 2D tree construction, the list of nodes is sorted by their $x$ coordinate, which is shown in the third column of Table \ref{tab:kD_nodes}. Then a node with the median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and one for $x$ above or equal to the median, i.e. $x\geq 0.6$. The two sub sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub lists of nodes are sorted by their $y$ coordinate, which is shown in the fourth column of Table \ref{tab:kD_nodes}. The median $y$ coordinates are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$ are taken as the roots of the second level of the 2D tree; they are shown in bold and used to further split the tree. The resulting four sub sub sets of nodes are shown on the right side of Figure \ref{fig.2D_treegraph} as the nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub sub lists are sorted again by their $x$ coordinate, with the result shown in the fifth column. Four roots are obtained with the coordinate $x$ nearest to the medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by lists of several nodes, because such a fine grained distribution of leaves as in the presented example is often not beneficial from the computational efficiency point of view.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321;&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing : theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=998</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=998"/>
				<updated>2017-03-17T07:45:45Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ are determined with various algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting hyperplane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points. For example, in 3 dimensions we have the $x$, $y$ and $z$ axes. At the root level we divide the points according to their $x$ values (we could equally start with $y$ or $z$). The usual criterion for the division is the median: at the root level, we find the median of the $x$ coordinates of all points and store the median point in the root node. The remaining points are distributed into the left and right subtrees: points with a smaller $x$ coordinate go into the left subtree and points with a larger $x$ coordinate into the right subtree (or vice versa). The two children of the root are then constructed from these two subgroups by the same procedure, except that the space is divided according to the $y$ or $z$ coordinate. We continue this way until we run out of points; the axes used for the median search repeat periodically.&lt;br /&gt;
&lt;br /&gt;
The procedure described above constructs the tree from a complete collection of given points and is used by the ANN library. Alternatively, the tree can be constructed by inserting points one by one; this procedure is used by the Libssrckdtree library. The time complexity of both procedures is $O(n \log n)$. The most time consuming part of the former algorithm is the median selection, whose time complexity is $O(n)$; this can be improved by employing schemes smarter than exact median finding. The time complexity of the second algorithm follows from the average time needed to insert a node into the tree, which scales as $O(\log n)$; inserting $n$ nodes thus again results in $O(n \log n)$ time complexity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The input to the algorithm is a list of the $(n_S +1)$ nearest nodes to ${\bf x}$. A naive implementation that finds these nodes by sorting all nodal distances quickly becomes expensive, approaching quadratic computational complexity in the number of nodes $N$. However, by using efficient data structures, e.g. a quadtree, an R-tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) determined with coordinates (2nd column), after sorting by $x$ (3rd column), after sorting sub lists by $y$ (4th column) and after sorting sub sub lists by $x$ again (5th column). The nodes nearest to medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of the 2D tree construction, the list of nodes is sorted by their $x$ coordinate, which is shown in the third column of Table \ref{tab:kD_nodes}. Then a node with the median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and one for $x$ above or equal to the median, i.e. $x\geq 0.6$. The two sub sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub lists of nodes are sorted by their $y$ coordinate, which is shown in the fourth column of Table \ref{tab:kD_nodes}. The median $y$ coordinates are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$ are taken as the roots of the second level of the 2D tree; they are shown in bold and used to further split the tree. The resulting four sub sub sets of nodes are shown on the right side of Figure \ref{fig.2D_treegraph} as the nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub sub lists are sorted again by their $x$ coordinate, with the result shown in the fifth column. Four roots are obtained with the coordinate $x$ nearest to the medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by lists of several nodes, because such a fine grained distribution of leaves as in the presented example is often not beneficial from the computational efficiency point of view.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321;&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing : theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=997</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=997"/>
				<updated>2017-03-17T07:18:03Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ must be determined with dedicated algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting plane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points. For example, in 3 dimensions we have the $x$, $y$ and $z$ axes. At the root level we decide to divide the points according to their $x$ values (we could equally use the $y$ or $z$ values). The usual criterion for dividing the points is the median, so at the root level we find the median point by looking at the $x$ coordinates of all points. The median point is stored in the root, and the remaining points are put into the left and right subtrees: points with a smaller $x$ coordinate go into the left subtree and points with a larger $x$ coordinate into the right subtree (or vice versa). The two children of the root are then constructed from these two subgroups by the same procedure, except that the space is divided according to the $y$ or $z$ coordinate. We continue in this way until we run out of points; the axes used for the median search repeat periodically.&lt;br /&gt;
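A minimal sketch of this construction in Python (hypothetical illustration code, not Medusa's implementation; the names build_kdtree, nodes and tree are our own):

```python
# Minimal sketch of the construction described above (hypothetical,
# not Medusa's implementation): sort the points along one axis, store
# the median point in the node, and build the two halves recursively
# with the splitting axis cycling periodically with depth.

def build_kdtree(points, depth=0):
    """Build a k-d tree from a list of point tuples; returns nested dicts."""
    if not points:
        return None
    k = len(points[0])            # dimensionality of the points
    axis = depth % k              # cycle through the axes: x, y, z, x, ...
    points = sorted(points, key=lambda p: p[axis])
    median = len(points) // 2     # the median point becomes this node
    return {
        "point": points[median],
        "axis": axis,
        "left": build_kdtree(points[:median], depth + 1),       # smaller coordinate
        "right": build_kdtree(points[median + 1:], depth + 1),  # larger or equal
    }

# Eleven sample nodes on the unit square.
nodes = [(0, 0), (0.6, 0), (1, 0), (0, 0.4), (0.6, 0.3), (1, 0.5),
         (0.24, 0.6), (0.76, 0.8), (0, 1), (0.47, 1), (1, 1)]
tree = build_kdtree(nodes)
print(tree["point"])  # prints (0.6, 0): the root, with the median x coordinate
```

Running the sketch on these eleven nodes places the point with the median $x$ coordinate, $(0.6, 0)$, at the root; the second level then splits each half on $y$.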
&lt;br /&gt;
The above procedure is the most common way of constructing a k-d tree.&lt;br /&gt;
&lt;br /&gt;
The input to the algorithm is a list of the $(n_S +1)$ nearest nodes to ${\bf x}$. A naive implementation that finds these nodes by sorting all nodal distances quickly becomes expensive: with respect to the number of nodes $N$, its computational complexity approaches quadratic. By using efficient data structures, e.g. a quadtree, an R-tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster. &lt;br /&gt;
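Once the tree is built before the solution procedure, each query descends toward the target point and backtracks only through half-spaces that could still contain a closer node. A self-contained single-nearest-neighbour sketch under the same assumptions (hypothetical names, not Medusa's API; production codes return the $n_S$ nearest nodes and typically stop subdividing at small buckets):

```python
# Hypothetical sketch of a single nearest-neighbour query on a k-d tree
# (not Medusa's actual API). Descend toward the target first; on the way
# back, visit the far side of a splitting plane only if that half-space
# could still contain a point closer than the best candidate so far.
import math

def kd_build(pts, depth=0):
    """Build a tuple-based k-d tree: (point, axis, left, right)."""
    if not pts:
        return None
    axis = depth % len(pts[0])    # splitting axes cycle with depth
    pts = sorted(pts, key=lambda p: p[axis])
    m = len(pts) // 2             # median point becomes this node
    return (pts[m], axis,
            kd_build(pts[:m], depth + 1), kd_build(pts[m + 1:], depth + 1))

def kd_nearest(node, target, best=None):
    """Return the tree point nearest to target (Euclidean distance)."""
    if node is None:
        return best
    point, axis, left, right = node
    if best is None or math.dist(best, target) > math.dist(point, target):
        best = point
    # Search the half-space containing the target first.
    if target[axis] >= point[axis]:
        near, far = right, left
    else:
        near, far = left, right
    best = kd_nearest(near, target, best)
    # The far half-space can only help if the splitting plane is closer
    # to the target than the best candidate found so far.
    if math.dist(best, target) > abs(target[axis] - point[axis]):
        best = kd_nearest(far, target, best)
    return best

nodes = [(0, 0), (0.6, 0), (1, 0), (0, 0.4), (0.6, 0.3), (1, 0.5),
         (0.24, 0.6), (0.76, 0.8), (0, 1), (0.47, 1), (1, 1)]
kd_tree = kd_build(nodes)
print(kd_nearest(kd_tree, (0.7, 0.7)))  # prints (0.76, 0.8)
```

The pruning test is what makes the average query cost logarithmic in the number of nodes rather than linear: whole subtrees are skipped whenever their bounding half-space cannot beat the current best distance.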
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
{\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) determined with coordinates (2nd column), after sorting by $x$ (3rd column), after sorting sub lists by $y$ (4th column) and after sorting sub sub lists by $x$ again (5th column). The nodes nearest to medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of the 2D tree construction, the list of nodes is sorted by their $x$ coordinate, as shown in the third column of Table \ref{tab:kD_nodes}. Then a node with the median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and one for $x$ above or equal to the median, i.e. $x\geq 0.6$. The two sub sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub lists of nodes are sorted by their $y$ coordinate, as shown in the fourth column of Table \ref{tab:kD_nodes}. The median $y$ coordinates are $0.6$ and $0.5$, respectively. The corresponding nodes, $(0.24,0.6)$ and $(1,0.5)$, are taken as the roots of the second level of the 2D tree; they are shown in bold and are used to further split the tree. The resulting four sub sub sets of nodes are shown on the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub sub lists are sorted again by their $x$ coordinate, with the result shown in the fifth column. Four roots are obtained, with $x$ coordinates nearest to the medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes on the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by lists of several nodes, because such a fine-grained distribution of leaves as in the presented example is often not beneficial from the point of view of computational efficiency.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|2D tree example]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering Analysis with Boundary Elements. 2012;36:310-321.&lt;br /&gt;
* Trobec R., Kosec G., Parallel Scientific Computing: Theory, Algorithms, and Applications of Mesh Based and Meshless Methods. Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=950</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=950"/>
				<updated>2017-03-15T11:52:21Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ must be determined with dedicated algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting plane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points. For example, in 3 dimensions we have the $x$, $y$ and $z$ axes. At the root level we decide to divide the points according to their $x$ values (we could equally use the $y$ or $z$ values). The usual criterion for dividing the points is the median, so at the root level we find the median point by looking at the $x$ coordinates of the points. The median point is stored in the root, and the remaining points are put into the left and right subtrees: points with a smaller $x$ coordinate go into the left subtree and points with a larger $x$ coordinate into the right subtree (or vice versa). The two children of the root are then constructed from these two subgroups by the same procedure, except that the space is divided according to the $y$ or $z$ coordinate.&lt;br /&gt;
&lt;br /&gt;
The input to the algorithm is a list of the $(n_S +1)$ nearest nodes to ${\bf x}$. A naive implementation that finds these nodes by sorting all nodal distances quickly becomes expensive: with respect to the number of nodes $N$, its computational complexity approaches quadratic. By using efficient data structures, e.g. a quadtree, an R-tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
{\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) determined with coordinates (2nd column), after sorting by $x$ (3rd column), after sorting sub lists by $y$ (4th column) and after sorting sub sub lists by $x$ again (5th column). The nodes nearest to medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of the 2D tree construction, the list of nodes is sorted by their $x$ coordinate, as shown in the third column of Table \ref{tab:kD_nodes}. Then a node with the median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and one for $x$ above or equal to the median, i.e. $x\geq 0.6$. The two sub sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub lists of nodes are sorted by their $y$ coordinate, as shown in the fourth column of Table \ref{tab:kD_nodes}. The median $y$ coordinates are $0.6$ and $0.5$, respectively. The corresponding nodes, $(0.24,0.6)$ and $(1,0.5)$, are taken as the roots of the second level of the 2D tree; they are shown in bold and are used to further split the tree. The resulting four sub sub sets of nodes are shown on the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub sub lists are sorted again by their $x$ coordinate, with the result shown in the fifth column. Four roots are obtained, with $x$ coordinates nearest to the medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes on the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by lists of several nodes, because such a fine-grained distribution of leaves as in the presented example is often not beneficial from the point of view of computational efficiency.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|2D tree example]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering Analysis with Boundary Elements. 2012;36:310-321.&lt;br /&gt;
* Trobec R., Kosec G., Parallel Scientific Computing: Theory, Algorithms, and Applications of Mesh Based and Meshless Methods. Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=949</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=949"/>
				<updated>2017-03-15T11:47:56Z</updated>
		
		<summary type="html">&lt;p&gt;JLapajne: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighboring nodes in $\Omega_S$ must be determined with dedicated algorithms and specialized data structures; we use a k-d tree.&lt;br /&gt;
&lt;br /&gt;
A k-d tree is a binary tree which uses hyperplane space partitioning to divide points into subgroups (subtrees). The splitting plane is chosen based on the depth of the node in the tree: at depth $k$, the $k$-th axis (dimension) is used to divide the remaining points. For example, in 3 dimensions we have the $x$, $y$ and $z$ axes. At the root level we decide to divide the points according to their $x$ values (we could equally use the $y$ or $z$ values). The usual criterion for dividing the points is the median, so at the root level we find the median point by looking at the $x$ coordinates of the points. The median point is stored in the root, and the remaining points are put into the left and right subtrees: points with a smaller $x$ coordinate go into the left subtree and points with a larger $x$ coordinate into the right subtree (or vice versa).&lt;br /&gt;
&lt;br /&gt;
The input to the algorithm is a list of the $(n_S +1)$ nearest nodes to ${\bf x}$. A naive implementation that finds these nodes by sorting all nodal distances quickly becomes expensive: with respect to the number of nodes $N$, its computational complexity approaches quadratic. By using efficient data structures, e.g. a quadtree, an R-tree or a k-d tree (a 2D tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
{\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) determined with coordinates (2nd column), after sorting by $x$ (3rd column), after sorting sub lists by $y$ (4th column) and after sorting sub sub lists by $x$ again (5th column). The nodes nearest to medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of the 2D tree construction, the list of nodes is sorted by their $x$ coordinate, as shown in the third column of Table \ref{tab:kD_nodes}. Then a node with the median coordinate, $x = 0.6$ in our case (shown in bold), is selected as the root of the first level of the 2D tree. If there is more than one such node, any one can be selected. The sorted set in column 3 is split into two parts, one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and one for $x$ above or equal to the median, i.e. $x\geq 0.6$. The two sub sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub lists of nodes are sorted by their $y$ coordinate, as shown in the fourth column of Table \ref{tab:kD_nodes}. The median $y$ coordinates are $0.6$ and $0.5$, respectively. The corresponding nodes, $(0.24,0.6)$ and $(1,0.5)$, are taken as the roots of the second level of the 2D tree; they are shown in bold and are used to further split the tree. The resulting four sub sub sets of nodes are shown on the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub sub lists are sorted again by their $x$ coordinate, with the result shown in the fifth column. Four roots are obtained, with $x$ coordinates nearest to the medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes on the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by lists of several nodes, because such a fine-grained distribution of leaves as in the presented example is often not beneficial from the point of view of computational efficiency.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|2D tree example]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering Analysis with Boundary Elements. 2012;36:310-321.&lt;br /&gt;
* Trobec R., Kosec G., Parallel Scientific Computing: Theory, Algorithms, and Applications of Mesh Based and Meshless Methods. Springer; 2015.&lt;/div&gt;</summary>
		<author><name>JLapajne</name></author>	</entry>

	</feed>