<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://e6.ijs.si/medusa/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Anja</id>
		<title>Medusa: Coordinate Free Meshless Method implementation - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://e6.ijs.si/medusa/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Anja"/>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php/Special:Contributions/Anja"/>
		<updated>2026-04-16T21:52:21Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.1</generator>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=748</id>
		<title>K-d tree</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=K-d_tree&amp;diff=748"/>
				<updated>2016-11-24T15:58:59Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In structured meshes, neighborhood relations are implicitly determined by the mapping from the physical to the logical space. &lt;br /&gt;
In unstructured mesh-based approaches, support domains can be determined from the mesh topology. In meshless methods, however, the nearest neighbouring nodes in $\Omega_S$ are determined with various algorithms and specialized data structures; we use a k-d tree. &lt;br /&gt;
&lt;br /&gt;
The input to the algorithm is a list of the $(n_S +1)$ nearest nodes to ${\bf x}$. A naive implementation that classically sorts all nodal distances becomes expensive: with respect to the number of nodes $N$, it approaches quadratic computational complexity. By using efficient data structures, e.g. a quadtree, an R-tree or a k-d tree (a 2-d tree for 2D domains), the problem becomes tractable. &lt;br /&gt;
The strategy is to build the data structure only once, before the solution procedure. During the solution, the support nodes of the desired points will then be found much faster. &lt;br /&gt;
&lt;br /&gt;
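The build-once, query-many strategy can be sketched in a few lines. This is an illustrative Python sketch with ad hoc names, not Medusa code (Medusa itself is implemented in C++):

```python
# Minimal 2-d tree with nearest-neighbour search; the splitting axis
# alternates per level and each subtree root is the median along that axis.
import math

def build(points, depth=0):
    """Recursively build a 2-d tree node as a dict with point, axis and subtrees."""
    if not points:
        return None
    axis = depth % 2  # alternate between x (axis 0) and y (axis 1)
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2  # median element becomes the root of this subtree
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    """Return the stored point closest to target, pruning far subtrees."""
    if node is None:
        return best
    if best is None or math.dist(best, target) > math.dist(node["point"], target):
        best = node["point"]
    axis = node["axis"]
    if target[axis] >= node["point"][axis]:
        near, far = node["right"], node["left"]
    else:
        near, far = node["left"], node["right"]
    best = nearest(near, target, best)
    # Visit the far side only if the splitting plane is closer than the best hit.
    if math.dist(best, target) > abs(target[axis] - node["point"][axis]):
        best = nearest(far, target, best)
    return best

# Eleven sample nodes on the unit square.
nodes = [(0, 0), (0.6, 0), (1, 0), (0, 0.4), (0.6, 0.3), (1, 0.5),
         (0.24, 0.6), (0.76, 0.8), (0, 1), (0.47, 1), (1, 1)]
tree = build(nodes)
```

The tree is built once before the solution procedure; each subsequent query then touches only a logarithmic fraction of the nodes.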
&amp;lt;!--&lt;br /&gt;
Let us illustrate the whole procedure with a simple example of a 2D tree for eleven nodes, with node numbers and coordinates listed in the first and the second column of Table \ref{tab:kD_nodes} and attached to the corresponding dots on the unit square in the left part of Figure \ref{fig.2D_treegraph}. &lt;br /&gt;
 &lt;br /&gt;
{\begin{table}[h]&lt;br /&gt;
	\centering&lt;br /&gt;
\begin{tabular}{|c|c|c|c|c|c|}&lt;br /&gt;
\hline&lt;br /&gt;
Node number&amp;amp;Unsorted &amp;amp; Sorted by $x$ &amp;amp; Sorted by $y$ &amp;amp; Sorted by $x$ &amp;amp; Bucket\\ \hline&lt;br /&gt;
1&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp;(0,0)&amp;amp; \bf (0,0) \\ \hline&lt;br /&gt;
2&amp;amp;(0.6,0)&amp;amp;(0,0.4)&amp;amp;(0,0.4)&amp;amp;\bf (0,0.4)&amp;amp;  \\ \hline&lt;br /&gt;
3&amp;amp;(1,0)&amp;amp;(0,1)&amp;amp;\bf (0.24,0.6)&amp;amp;   &amp;amp;  \\ \hline&lt;br /&gt;
4&amp;amp;(0,0.4)&amp;amp;(0.24,0.6)&amp;amp;(0,1)&amp;amp;(0,1)&amp;amp; \bf (0,1) \\ \hline&lt;br /&gt;
5&amp;amp;(0.6,0.3)&amp;amp;(0.47,1)&amp;amp;(0.47,1)&amp;amp;\bf (0.47,1)&amp;amp;\\ \hline&lt;br /&gt;
6&amp;amp;(1,0.5)&amp;amp;\bf (0.6,0)&amp;amp;    &amp;amp;   &amp;amp;\\ \hline&lt;br /&gt;
7&amp;amp;(0.24,0.6)&amp;amp;(0.6,0.3)&amp;amp;(1,0)&amp;amp;(0.6,0.3)&amp;amp;\bf (0.6,0.3)\\ \hline&lt;br /&gt;
8&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;(0.6,0.3)&amp;amp;\bf (1,0)&amp;amp;\\ \hline&lt;br /&gt;
9&amp;amp;(0,1)&amp;amp;(1,0)&amp;amp;\bf (1,0.5)&amp;amp; &amp;amp;\\ \hline&lt;br /&gt;
10&amp;amp;(0.47,1)&amp;amp;(1,0.5)&amp;amp;(0.76,0.8)&amp;amp;(0.76,0.8)&amp;amp;\bf (0.76,0.8)\\ \hline&lt;br /&gt;
11&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;(1,1)&amp;amp;\bf (1,1)&amp;amp;\\ \hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\caption{The list of eleven nodes (1st column) with their coordinates (2nd column), after sorting by $x$ (3rd column), after sorting the sub-lists by $y$ (4th column), and after sorting the sub-sub-lists by $x$ again (5th column). The nodes nearest to the medians are shown in bold.}&lt;br /&gt;
\label{tab:kD_nodes}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
In the first step of the 2D-tree construction, the list of nodes is sorted by the $x$ coordinate, as shown in the third column of Table \ref{tab:kD_nodes}. A node with the median coordinate, $x = 0.6$ in our case (shown in bold), is then selected as the root of the first level of the 2D tree. If there is more than one such node, any of them can be selected. The sorted set in column 3 is split into two parts: one for $x$ below the median, i.e. $x &amp;lt; 0.6$, and one for $x$ above or equal to the median, i.e. $x\geq 0.6$. The two sub-sets of nodes are shown in the left part of Figure \ref{fig.2D_treegraph} within two distinct rectangles, and on the right side of Figure \ref{fig.2D_treegraph} as the left and the right part of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
In the second step, the two sub-lists of nodes are sorted by their $y$ coordinate, as shown in the fourth column of Table \ref{tab:kD_nodes}. The median $y$ coordinates are $0.6$ and $0.5$, respectively. The corresponding nodes $(0.24,0.6)$ and $(1,0.5)$, shown in bold, are taken as roots of the second level of the 2D tree and are used to split it further. The resulting four sub-sub-sets of nodes are shown on the right side of Figure \ref{fig.2D_treegraph} as nodes on the lower two levels of the 2D tree. &lt;br /&gt;
&lt;br /&gt;
Finally, the sub-sub-lists are sorted again by their $x$ coordinate, with the result shown in the fifth column. Four roots are obtained, with $x$ coordinates nearest to the medians, namely the nodes $(0,0.4)$, $(0.47,1)$, $(1,0)$, and $(1,1)$. The remaining nodes of the last level of the 2D tree, also termed the bucket, are&lt;br /&gt;
$(0,0)$, $(0,1)$, $(0.6,0.3)$, and $(0.76,0.8)$.&lt;br /&gt;
In practical cases, the refinement of the tree stops sooner, when its leaves are represented by lists of several nodes, because such a fine-grained distribution of leaves as in the presented example is often not beneficial from the computational-efficiency point of view.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
[[File:image_1avj3kcbe157a1ad610k81brmgsj9.png|800px|thumb|&amp;lt;caption&amp;gt; 2D tree example&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321;&lt;br /&gt;
* Trobec R., Kosec G., Parallel scientific computing : theory, algorithms, and applications of mesh based and meshless methods: Springer; 2015.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Meshless_Local_Strong_Form_Method_(MLSM)&amp;diff=747</id>
		<title>Meshless Local Strong Form Method (MLSM)</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Meshless_Local_Strong_Form_Method_(MLSM)&amp;diff=747"/>
				<updated>2016-11-24T15:56:40Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Meshless Local Strong Form Method (MLSM) is a generalization of methods known in the literature as the Diffuse Approximate Method (DAM), Local Radial Basis Function Collocation Method (LRBFCM), generalized FDM, Collocated Discrete Least Squares (CDLS) meshless method, etc. Although each of the named methods possesses some unique properties, the basic concept of all local strong form methods is similar, namely to approximate the treated fields with &lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:meshless1&amp;quot;&amp;gt;&lt;br /&gt;
[[File:image_1avhoje8u10tg1cip1atj1qo51j8b9.png|500px|thumb|upright=2|alt=The scheme of local meshless principle.|&amp;lt;caption&amp;gt;The scheme of local meshless principle. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
nodal trial functions over local support domains. The nodal trial function is then used to evaluate various operators, e.g. differentiation, integration, and ultimately the approximation of a considered field at an arbitrary position. The MLSM can be understood as a meshless generalization of the FDM, albeit a much more powerful one. MLSM aims to avoid pre-defined relations between nodes and shifts this task into the solution procedure. The final goal of such an approach is higher flexibility in complex domains.&lt;br /&gt;
&lt;br /&gt;
The elegance of MLSM lies in its simplicity and generality. The presented methodology can also be easily upgraded or altered, e.g. with nodal adaptation, basis augmentation, or conditioning of the approximation, to treat anomalies such as sharp discontinuities or other difficult situations which might occur in complex simulations. In the MLSM, the type of approximation, the size of the support domain, and the type and number of basis functions are general. For example, the minimal support size for a 2D transport problem (a system of second-order PDEs) is five; however, larger support domains can be used to stabilize computations on scattered nodes, at the cost of computational complexity. Various types of basis functions may appear in the calculation of the trial function; the most commonly used are multiquadrics, Gaussians and monomials. Some authors also augment the radial basis with monomials to improve the performance of the method. All these features can be controlled on the fly during the simulation. From the computational point of view, the localization of the method reduces inter-processor communication, which is often a bottleneck of parallel algorithms.&lt;br /&gt;
&lt;br /&gt;
The core of the spatial discretization is a local [[Moving Least Squares (MLS)]] approximation of a considered field over overlapping local support domains, i.e. in each node we use an approximation over a small local sub-set of $n$ neighbouring nodes. The trial function is thus introduced as&lt;br /&gt;
	\[\hat{u}(\mathbf{p})=\sum\limits_{i}^{m}{{{\alpha }_{i}}{{b}_{i}}(\mathbf{p})}=\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}\mathbf{\alpha }\] &lt;br /&gt;
with $m,\,\,\mathbf{\alpha }\text{,}\,\,\mathbf{b},\,\,\mathbf{p}\left( {{p}_{x}},{{p}_{y}} \right)$ standing for the number of basis functions, approximation coefficients, basis functions and the position vector, respectively.  &lt;br /&gt;
&lt;br /&gt;
The problem can be written in matrix form (refer to [[Moving Least Squares (MLS)]] for more details) as &lt;br /&gt;
	\[~\mathbf{\alpha }={{\left( {{\mathbf{W}}^{0.5}}\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\mathbf{u}\]	&lt;br /&gt;
where $(\mathbf{W}^{0.5}\mathbf{B})^{+}$ stands for the [https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_pseudoinverse Moore–Penrose pseudoinverse]. &lt;br /&gt;
&lt;br /&gt;
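The coefficient formula above can be sketched numerically. This is an illustrative NumPy sketch with ad hoc names, not the C++ EngineMLS implementation:

```python
# Illustrative NumPy sketch of alpha = (W^{0.5} B)^+ W^{0.5} u.
import numpy as np

def wls_coefficients(support, values, basis, weight):
    """Weighted least-squares fit of nodal values in the given basis.

    basis  -- list of m callables b_i(p)
    weight -- callable w(p), non-negative
    """
    B = np.array([[b(p) for b in basis] for p in support])   # n-by-m collocation matrix
    W_sqrt = np.diag([weight(p) ** 0.5 for p in support])    # W^{0.5}, diagonal
    # The Moore-Penrose pseudoinverse handles the overdetermined case (more
    # support nodes than basis functions).
    return np.linalg.pinv(W_sqrt @ B) @ W_sqrt @ np.asarray(values, dtype=float)

# A linear field is reproduced exactly by the monomial basis {1, x, y}.
basis = [lambda p: 1.0, lambda p: p[0], lambda p: p[1]]
support = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]
values = [1 + 2 * x + 3 * y for x, y in support]
alpha = wls_coefficients(support, values, basis, lambda p: 1.0)
```

Since $u = 1 + 2x + 3y$ lies in the span of the basis, the recovered coefficients are exactly $(1, 2, 3)$.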
== Shape functions ==&lt;br /&gt;
By explicitly expressing the coefficients $\alpha$ in the trial function&lt;br /&gt;
	\[~\hat{u}\left( \mathbf{p} \right)=\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{u}=\mathbf{\chi }\left( \mathbf{p} \right)\mathbf{u}\]	&lt;br /&gt;
is obtained, where $\mathbf{\chi}$ stands for the shape functions. Now we can apply a partial differential operator, which is our goal, to the trial function &lt;br /&gt;
	\[L~\hat{u}\left( \mathbf{p} \right)=L\mathbf{\chi }\left( \mathbf{p} \right)\mathbf{u}\]&lt;br /&gt;
where $L$ stands for a general differential operator. &lt;br /&gt;
For example:&lt;br /&gt;
	\[{{\mathbf{\chi }}^{\partial x}}\left( \mathbf{p} \right)=\frac{\partial }{\partial x}~\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\]&lt;br /&gt;
	\[{{\mathbf{\chi }}^{\partial y}}\left( \mathbf{p} \right)=\frac{\partial }{\partial y}~\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\]&lt;br /&gt;
	\[{{\mathbf{\chi }}^{{{\nabla }^{2}}}}\left( \mathbf{p} \right)={{\nabla }^{2}}\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\]&lt;br /&gt;
&lt;br /&gt;
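These operator shape functions can be checked on a small example. The sketch below is illustrative NumPy with unit weights and a monomial basis (ad hoc names, not the production C++ code):

```python
# Illustrative NumPy check of the shape functions chi(p): the operator is
# applied to the basis, then combined with (W^{0.5} B)^+ W^{0.5}.
import numpy as np

support = np.array([(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)])
B = np.array([[1.0, x, y] for x, y in support])  # basis {1, x, y} at support nodes
M = np.linalg.pinv(B)                            # (W^{0.5} B)^+ W^{0.5} with W = I

p = (0.5, 0.5)
b = np.array([1.0, p[0], p[1]])       # b(p)
db_dx = np.array([0.0, 1.0, 0.0])     # derivative of each basis function w.r.t. x

chi = b @ M          # value shape functions chi(p)
chi_dx = db_dx @ M   # derivative shape functions chi^{dx}(p)

# Nodal values of u(x, y) = 1 + 2x + 3y; since u is linear, chi reproduces
# both u(p) and its x-derivative exactly.
u = np.array([1 + 2 * x + 3 * y for x, y in support])
```

Dotting `chi` with the nodal values gives $u(p) = 3.5$, and dotting `chi_dx` with them gives $\partial u/\partial x = 2$.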
The presented formulation is convenient for implementation, since the most complex operations, i.e. finding the support nodes and building the shape functions, are performed only when the nodal topology changes. In the main simulation, the pre-computed shape functions are convolved with the vector of field values in the support to evaluate the desired operator. The presented approach is even easier to handle than the FDM; despite its simplicity, it offers many possibilities for treating challenging cases, e.g. nodal adaptivity to address regions with sharp discontinuities, or $p$-adaptivity to treat difficult anomalies in the physical field. Furthermore, the stability versus &lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:implementation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:image_1avhh0f97le11d68kgk18gon9h9.png|900px|thumb|alt=The implementation diagram.|&amp;lt;caption&amp;gt;The implementation diagram. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
computational complexity and accuracy trade-off can be regulated simply by changing the number of support nodes, etc. All these features can be controlled on the fly during the simulation by re-computing the shape functions with a different setup. However, such a re-setup is expensive, since \[\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\] has to be re-evaluated, with an asymptotic complexity of $O\left( {{N}_{D}}n{{m}^{2}} \right)$, where ${{N}_{D}}$ stands for the total number of discretization nodes. In addition, the determination of the support domain nodes also consumes some time; for example, if a [[Kd Tree|kD-tree]] data structure is used, the tree is first built in $O\left( {{N}_{D}}\log {{N}_{D}} \right)$ operations, and an additional $O\left( {{N}_{D}}\left( \log {{N}_{D}}+n \right) \right)$ operations are needed for collecting the $n$ supporting nodes.&lt;br /&gt;
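The precompute-once, apply-many pattern described above can be sketched for a 1D diffusion problem. This is illustrative Python/NumPy with ad hoc names; a quadratic monomial basis over three-node supports happens to reproduce the classical FDM Laplacian stencil on uniform nodes:

```python
# Precompute-once / apply-many sketch in 1D: Laplacian shape functions are
# built once per node, then reused on every time step.
import numpy as np

N, h = 11, 0.1
x = np.linspace(0.0, 1.0, N)

# Precompute: fit the basis {1, x, x^2} over each interior node's 3-node
# support and differentiate the trial function twice to get Laplacian weights.
stencils = {}
for i in range(1, N - 1):
    sup = [i - 1, i, i + 1]
    B = np.array([[1.0, x[j], x[j] ** 2] for j in sup])
    d2b = np.array([0.0, 0.0, 2.0])               # second derivative of the basis
    stencils[i] = (sup, d2b @ np.linalg.pinv(B))  # chi^{lap} for node i

# Apply many times: explicit Euler steps of u_t = u_xx with fixed boundary values.
u = np.sin(np.pi * x)
dt = 0.4 * h ** 2  # below the explicit stability limit dt = 0.5 h^2
for _ in range(100):
    lap = np.zeros(N)
    for i, (sup, w) in stencils.items():
        lap[i] = w @ u[sup]
    u = u + dt * lap
```

Each time step only dots the precomputed weights with the field values in the support, which is exactly why changing the setup (and hence re-building the stencils) is the expensive part.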
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Kosec G., A local numerical solution of a fluid-flow problem on an irregular domain. Advances in engineering software. 2016;7 ; [29512743] :: [http://comms.ijs.si/~gkosec/data/papers/29512743.pdf manuscript]&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321; [http://comms.ijs.si/~gkosec/data/papers/EABE2499.pdf manuscript]&lt;br /&gt;
* Kosec G, Sarler B. Solution of thermo-fluid problems by collocation with local pressure correction. International Journal of Numerical Methods for Heat &amp;amp; Fluid Flow. 2008;18:868-82 [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerNS2008.pdf manuscript]&lt;br /&gt;
*  Trobec R., Kosec G., Parallel Scientific Computing, ISBN: 978-3-319-17072-5 (Print) 978-3-319-17073-2.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>



	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Weighted_Least_Squares_(WLS)&amp;diff=744</id>
		<title>Weighted Least Squares (WLS)</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Weighted_Least_Squares_(WLS)&amp;diff=744"/>
				<updated>2016-11-24T15:40:27Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;One of the most important building blocks of the meshless methods is the Moving Least Squares approximation, which is implemented in the [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classEngineMLS.html EngineMLS class]. Check [https://gitlab.com/e62Lab/e62numcodes/blob/master/test/mls_test.cpp EngineMLS unit tests] for examples.&lt;br /&gt;
&lt;br /&gt;
= Notation Cheat sheet =&lt;br /&gt;
\begin{align*}&lt;br /&gt;
  m \in \N                  &amp;amp; \dots \text{number of basis functions} \\&lt;br /&gt;
  n \geq m \in \N           &amp;amp; \dots \text{number of points in support domain} \\&lt;br /&gt;
  k \in \N                  &amp;amp; \dots \text{dimensionality of vector space} \\&lt;br /&gt;
  \vec s_j \in \R^k         &amp;amp; \dots \text{point in support domain } \quad j=1,\dots,n \\&lt;br /&gt;
  u_j \in \R                &amp;amp; \dots \text{value of function to approximate in }\vec{s}_j \quad j=1,\dots,n \\&lt;br /&gt;
  \vec p \in \R^k           &amp;amp; \dots \text{center point of approximation} \\&lt;br /&gt;
  b_i\colon \R^k \to \R     &amp;amp; \dots \text{basis functions } \quad i=1,\dots,m \\&lt;br /&gt;
  B_{j, i} \in \R           &amp;amp; \dots \text{value of basis functions in support points } b_i(\vec{s}_j-\vec{p}) \quad j=1,\dots,n, \quad i=1,\dots,m\\&lt;br /&gt;
  \omega \colon \R^k \to \R &amp;amp; \dots \text{weight function} \\&lt;br /&gt;
  w_j \in \R                &amp;amp; \dots \text{weights } \omega(\vec{s}_j-\vec{p})  \quad j=1,\dots,n \\&lt;br /&gt;
  \alpha_i \in \R           &amp;amp; \dots \text{expansion coefficients around point } \vec{p} \quad i=1,\dots,m \\&lt;br /&gt;
  \hat u\colon \R^k \to \R  &amp;amp; \dots \text{approximation function (best fit)} \\&lt;br /&gt;
  \chi_j \in \R          &amp;amp; \dots \text{shape coefficient for point }\vec{p} \quad j=1,\dots,n \\&lt;br /&gt;
\end{align*}&lt;br /&gt;
&lt;br /&gt;
We will also use \(\b{s}, \b{u}, \b{b}, \b{\alpha}, \b{\chi} \) to annotate a column of corresponding values,&lt;br /&gt;
$W$ as a $n\times n$ diagonal matrix filled with $w_j$ on the diagonal and $B$ as a $n\times m$ matrix filled with $B_{j, i}$.&lt;br /&gt;
&lt;br /&gt;
= Definition of local approximation =&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:1DWLS&amp;quot;&amp;gt;&lt;br /&gt;
[[File:image_1avhdsfej1b9cao01029m1e13o69.png|600px|thumb|upright=2|alt=1D MLS example|&amp;lt;caption&amp;gt;Example of 1D WLS approximation &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
Our wish is to approximate an unknown function $u\colon \R^k \to \R$ while knowing $n$ values $u(\vec{s}_j) := u_j$.&lt;br /&gt;
The vector of known values will be denoted by $\b{u}$ and the vector of coordinates where those values were achieved by $\b{s}$.&lt;br /&gt;
Note that $\b{s}$ is not a vector in the usual sense since its components $\vec{s}_j$ are elements of $\R^k$, but we will call it vector anyway.&lt;br /&gt;
The values of $\b{s}$ are called ''nodes'' or ''support nodes'' or ''support''. The known values $\b{u}$ are also called ''support values''.&lt;br /&gt;
&lt;br /&gt;
In general, an approximation function around point $\vec{p}\in\R^k$ can be&lt;br /&gt;
written as \[\hat{u} (\vec{x}) = \sum_{i=1}^m \alpha_i b_i(\vec{x}) = \b{b}(\vec{x})^\T \b{\alpha} \]&lt;br /&gt;
where $\b{b} = (b_i)_{i=1}^m$ is a set of ''basis functions'', $b_i\colon \R^k \to\R$, and $\b{\alpha} = (\alpha_i)_{i=1}^m$ are the unknown coefficients.&lt;br /&gt;
&lt;br /&gt;
In MLS the goal is to minimize the error of the approximation in the given values, $\b{e} = \hat u(\b{s}) - \b{u}$,&lt;br /&gt;
between the approximation function and the target function in the known points $\b{s}$. The error can also be written as $B\b{\alpha} - \b{u}$,&lt;br /&gt;
where $B$ is a rectangular matrix of dimensions $n \times m$ with rows containing the basis functions evaluated in the points $\vec{s}_j$.&lt;br /&gt;
\[ B =&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
b_1(\vec{s}_1) &amp;amp; \ldots &amp;amp; b_m(\vec{s}_1) \\&lt;br /&gt;
\vdots &amp;amp; \ddots &amp;amp; \vdots \\&lt;br /&gt;
b_1(\vec{s}_n) &amp;amp; \ldots &amp;amp; b_m(\vec{s}_n)&lt;br /&gt;
\end{bmatrix} =&lt;br /&gt;
 [b_i(\vec{s}_j)]_{j=1,i=1}^{n,m} = [\b{b}(\vec{s}_j)^\T]_{j=1}^n. \]&lt;br /&gt;
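For example, in one dimension with the monomial basis $b_1(x) = 1$, $b_2(x) = x$, $b_3(x) = x^2$ ($m = 3$), the matrix $B$ is of Vandermonde type:&lt;br /&gt;

```latex
\[ B = \begin{bmatrix}
1 & s_1 & s_1^2 \\
\vdots & \vdots & \vdots \\
1 & s_n & s_n^2
\end{bmatrix}. \]
```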
&lt;br /&gt;
We can choose to minimize any norm of the error vector $\b{e}$&lt;br /&gt;
and usually choose to minimize the $2$-norm or square norm \[ \|\b{e}\| = \|\b{e}\|_2 = \sqrt{\sum_{j=1}^n e_j^2}. \]&lt;br /&gt;
Commonly, we also choose to minimize a weighted norm&lt;br /&gt;
&amp;lt;ref&amp;gt;Note that our definition is a bit unusual, usually weights are not&lt;br /&gt;
 squared with the values. However, we do this to avoid computing square&lt;br /&gt;
 roots when doing MLS. If you are used to the usual definition,&lt;br /&gt;
consider the weight to be $\omega^2$.&amp;lt;/ref&amp;gt;&lt;br /&gt;
instead \[ \|\b{e}\|_{2,w} = \|\b{e}\|_w = \sqrt{\sum_{j=1}^n (w_j e_j)^2}. \]&lt;br /&gt;
The ''weights'' $w_j$ are assumed to be non-negative and are assembled in a vector $\b{w}$ or a matrix $W = \operatorname{diag}(\b{w})$, and are usually obtained from a weight function.&lt;br /&gt;
A ''weight function'' is a function $\omega\colon \R^k \to[0,\infty)$. We calculate $w_j$ as $w_j := \omega(\vec{p}-\vec{s}_j)$, so&lt;br /&gt;
good choices for $\omega$ are functions which have higher values close to $0$ (making closer nodes more important), like the normal distribution.&lt;br /&gt;
If we choose $\omega \equiv 1$, we get the unweighted version.&lt;br /&gt;
&lt;br /&gt;
The choice of minimizing the square norm gave this method its name: Least Squares approximation. If we use the weighted version, we get Weighted Least Squares or WLS.&lt;br /&gt;
In the most general case we wish to minimize&lt;br /&gt;
\[ \|\b{e}\|_{2,w}^2 = \b{e}^\T W^2 \b{e} = (B\b{\alpha} - \b{u})^\T W^2(B\b{\alpha} - \b{u}) =  \sum_{j=1}^n w_j^2 (\hat{u}(\vec{s}_j) - u_j)^2  \]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The problem of finding the coefficients $\b{\alpha}$ that minimize the error $\b{e}$ can be solved with at least three approaches:&lt;br /&gt;
* Normal equations (fastest, less accurate) - using the Cholesky decomposition of $B^\T B$ (requires full rank and $m \leq n$)&lt;br /&gt;
* QR decomposition of $B$ (requires full rank and $m \leq n$, more precise)&lt;br /&gt;
* SVD decomposition of $B$ (more expensive, even more reliable, no rank demand)&lt;br /&gt;
&lt;br /&gt;
In MM we use SVD with regularization described below.&lt;br /&gt;
&lt;br /&gt;
= Computing approximation coefficients =&lt;br /&gt;
&lt;br /&gt;
== Normal equations ==&lt;br /&gt;
We seek the minimum of&lt;br /&gt;
\[ \|\b{e}\|_2^2 = (B\b{\alpha} - \b{u})^\T(B\b{\alpha} - \b{u}) \]&lt;br /&gt;
by setting the gradient with respect to the coefficients $\alpha_i$ to zero,&lt;br /&gt;
\[\frac{\partial}{\partial \alpha_i} (B\b{\alpha} - \b{u})^\T(B\b{\alpha} - \b{u})  = 0,\]&lt;br /&gt;
which results in the normal equations&lt;br /&gt;
\[ B^\T B\b{\alpha} = B^\T \b{u}. \]&lt;br /&gt;
The coefficient matrix $B^\T B$ is symmetric and positive definite. However, solving the above problem directly is&lt;br /&gt;
poorly behaved with respect to round-off errors since the condition number $\kappa(B^\T B)$ is the square&lt;br /&gt;
of $\kappa(B)$.&lt;br /&gt;
&lt;br /&gt;
In the case of WLS the equations become&lt;br /&gt;
\[ (WB)^\T WB \b{\alpha} = (WB)^\T W \b{u}. \]&lt;br /&gt;
&lt;br /&gt;
The complexity of the Cholesky decomposition of the $m \times m$ matrix $B^\T B$ is $\frac{m^3}{3}$, and assembling $B^\T B$ by matrix multiplication costs $nm^2$. To perform the Cholesky decomposition, $WB$ must have full rank.&lt;br /&gt;
&lt;br /&gt;
'''Pros:'''&lt;br /&gt;
* simple to implement&lt;br /&gt;
* low computational complexity&lt;br /&gt;
&lt;br /&gt;
'''Cons:'''&lt;br /&gt;
* numerically unstable&lt;br /&gt;
* full rank requirement&lt;br /&gt;
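A minimal numerical sketch of the weighted normal equations (using NumPy for illustration; the nodes and the quadratic basis are chosen arbitrarily):&lt;br /&gt;

```python
import numpy as np

# Sketch of the (weighted) normal equations for a 1D quadratic basis.
s = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # support nodes (n = 5)
u = 3.0 + 2.0 * s + s**2                   # support values
B = np.vander(s, N=3, increasing=True)     # columns: 1, s, s^2 (m = 3)
W = np.diag(np.ones_like(s))               # omega == 1: unweighted case

WB = W @ B
alpha = np.linalg.solve(WB.T @ WB, WB.T @ (W @ u))
print(alpha)  # recovers the exact coefficients [3, 2, 1]
```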
&lt;br /&gt;
== [https://en.wikipedia.org/wiki/QR_decomposition $QR$ Decomposition] ==&lt;br /&gt;
\[{\bf{B}} = {\bf{QR}} = \left[ {{{\bf{Q}}_1},{{\bf{Q}}_2}} \right]\left[ {\begin{array}{*{20}{c}}&lt;br /&gt;
{{{\bf{R}}_1}}\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}} \right]\]&lt;br /&gt;
\[{\bf{B}} = {{\bf{Q}}_1}{{\bf{R}}_1}\]&lt;br /&gt;
$\bf{Q}$ is a unitary matrix ($\bf{Q}^{-1}=\bf{Q}^\T$). A useful property of unitary matrices is that multiplying with them does not alter the (Euclidean) norm of a vector, i.e.,&lt;br /&gt;
\[\left\| {{\bf{Qx}}} \right\|{\bf{ = }}\left\| {\bf{x}} \right\|\]&lt;br /&gt;
and $\bf{R}$ consists of an upper triangular block $\bf{R}_1$,&lt;br /&gt;
\[\mathbf{R} = \begin{bmatrix} \mathbf{R}_1 \\ 0 \end{bmatrix},\]&lt;br /&gt;
therefore we can say&lt;br /&gt;
\[\begin{array}{l}&lt;br /&gt;
\left\| {{\bf{B\alpha }} - {\bf{u}}} \right\|^2 = \left\| {{{\bf{Q}}^{\rm{T}}}\left( {{\bf{B\alpha }} - {\bf{u}}} \right)} \right\|^2 = \left\| {{{\bf{Q}}^{\rm{T}}}{\bf{B\alpha }} - {{\bf{Q}}^{\rm{T}}}{\bf{u}}} \right\|^2\\&lt;br /&gt;
 = \left\| \begin{bmatrix} {{\bf{R}}_1} \\ 0 \end{bmatrix}{\bf{\alpha }} - \begin{bmatrix} {\bf{Q}}_1^{\rm{T}} \\ {\bf{Q}}_2^{\rm{T}} \end{bmatrix}{\bf{u}} \right\|^2 = \left\| {{{\bf{R}}_{\bf{1}}}{\bf{\alpha }} - {\bf{Q}}_1^{\rm{T}}{\bf{u}}} \right\|^2 + \left\| {{\bf{Q}}_2^{\rm{T}}{\bf{u}}} \right\|^2&lt;br /&gt;
\end{array}\]&lt;br /&gt;
Of the two terms on the right we have no control over the second, and we can render the first one&lt;br /&gt;
zero by solving&lt;br /&gt;
\[{{\bf{R}}_{\bf{1}}}{\bf{\alpha }} = {\bf{Q}}_{_{\bf{1}}}^{\rm{T}}{\bf{u}}\]&lt;br /&gt;
which attains the minimum. We could also compute it with the pseudoinverse,&lt;br /&gt;
	\[\mathbf{\alpha }={{\mathbf{B}}^{+}}\mathbf{u}\]&lt;br /&gt;
where the pseudoinverse is simply \[{{\mathbf{B}}^{+}}=\mathbf{R}_{1}^{-1}\mathbf{Q}_{1}^{\text{T}}\] (once again, $R_1$ is upper triangular, and $Q_1$ has orthonormal columns).&lt;br /&gt;
and for the weighted case&lt;br /&gt;
	\[\mathbf{\alpha }={{\left( \mathbf{W}\mathbf{B} \right)}^{+}}\left( \mathbf{W}\mathbf{u} \right)\]&lt;br /&gt;
&lt;br /&gt;
The complexity of the $QR$ decomposition is \[2nm^2-\tfrac{2}{3}m^3=O(nm^2)\]&lt;br /&gt;
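The same toy problem solved via the reduced QR factorization (a NumPy sketch; assumes $B$ has full column rank):&lt;br /&gt;

```python
import numpy as np

# Least squares via the reduced factorization B = Q1 R1.
s = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
u = 3.0 + 2.0 * s + s**2
B = np.vander(s, N=3, increasing=True)   # columns: 1, s, s^2

Q1, R1 = np.linalg.qr(B)                 # reduced mode: Q1 is n x m, R1 is m x m
alpha = np.linalg.solve(R1, Q1.T @ u)    # solve R1 alpha = Q1^T u
print(alpha)  # again [3, 2, 1]
```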
&lt;br /&gt;
&amp;lt;strong&amp;gt;Pros:&amp;lt;/strong&amp;gt; better stability in comparison with normal equations. &amp;lt;strong&amp;gt;Cons:&amp;lt;/strong&amp;gt; higher complexity.&lt;br /&gt;
&lt;br /&gt;
== [https://en.wikipedia.org/wiki/Singular_value_decomposition SVD decomposition] ==&lt;br /&gt;
In linear algebra, the [https://en.wikipedia.org/wiki/Singular_value_decomposition singular value decomposition (SVD)]&lt;br /&gt;
is a factorization of a real or complex matrix. It has many useful&lt;br /&gt;
applications in signal processing and statistics.&lt;br /&gt;
&lt;br /&gt;
Formally, the singular value decomposition of an $n \times m$ real or complex&lt;br /&gt;
matrix $\bf{B}$ is a factorization of the form $\bf{B}= \bf{U\Sigma V^\T}$, where&lt;br /&gt;
$\bf{U}$ is an $n \times n$ real or complex unitary matrix, $\bf{\Sigma}$ is an $n \times m$&lt;br /&gt;
rectangular diagonal matrix with non-negative real numbers on the diagonal, and&lt;br /&gt;
$\bf{V}$ is an $m \times m$ real or complex unitary matrix. The diagonal entries&lt;br /&gt;
$\Sigma_{ii}$ are known as the singular values of $\bf{B}$. The $n$ columns of&lt;br /&gt;
$\bf{U}$ and the $m$ columns of $\bf{V}$ are called the left-singular vectors and&lt;br /&gt;
right-singular vectors of $\bf{B}$, respectively.&lt;br /&gt;
&lt;br /&gt;
The singular value decomposition and the eigen decomposition are closely&lt;br /&gt;
related. Namely:&lt;br /&gt;
&lt;br /&gt;
* The left-singular vectors of $\bf{B}$ are eigenvectors of $\bf{B}\bf{B}^\T$.&lt;br /&gt;
* The right-singular vectors of $\bf{B}$ are eigenvectors of $\bf{B}^\T\bf{B}$.&lt;br /&gt;
* The non-zero singular values of $\bf{B}$ (found on the diagonal entries of $\bf{\Sigma}$) are the square roots of the non-zero eigenvalues of both $\bf{B}^\T\bf{B}$ and $\bf{B}\bf{B}^\T$.&lt;br /&gt;
&lt;br /&gt;
With SVD we can write $\bf{B}$ as \[\bf{B}=\bf{U\Sigma{{V}^{\T}}}\] where $\bf{U}$ and $\bf{V}$ are again unitary matrices and $\bf{\Sigma}$&lt;br /&gt;
is a diagonal matrix of singular values.&lt;br /&gt;
&lt;br /&gt;
Again, we can either solve the system or compute the pseudoinverse as&lt;br /&gt;
&lt;br /&gt;
\[ \bf{B}^{+} = \left( \bf{U\Sigma V}^\T\right)^{+} = \bf{V}\bf{\Sigma}^{+}\bf{U}^\T \]&lt;br /&gt;
where $\bf{\Sigma}^{+}$ is trivial to compute: just replace every non-zero diagonal entry by&lt;br /&gt;
its reciprocal and transpose the resulting matrix. The stability gain lies&lt;br /&gt;
exactly here: one can set a threshold below which a singular value is&lt;br /&gt;
considered to be $0$, i.e. truncate all singular values below some value, and&lt;br /&gt;
thus stabilize the inverse.&lt;br /&gt;
&lt;br /&gt;
SVD decomposition complexity \[ 2mn^2+2n^3 = O(n^3) \]&lt;br /&gt;
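The truncation idea can be sketched as follows (NumPy; the tolerance value is an arbitrary choice for illustration):&lt;br /&gt;

```python
import numpy as np

# Pseudoinverse via truncated SVD: singular values below a threshold are
# treated as zero, which stabilizes the inverse.
def pinv_truncated(B, tol=1e-10):
    U, sigma, Vt = np.linalg.svd(B, full_matrices=False)
    keep = sigma > tol
    sigma_inv = np.where(keep, 1.0 / np.maximum(sigma, tol), 0.0)
    return Vt.T @ np.diag(sigma_inv) @ U.T

B = np.array([[1.0, -0.1, 0.01],
              [1.0,  0.0, 0.00],
              [1.0,  0.1, 0.01]])
print(np.allclose(pinv_truncated(B), np.linalg.pinv(B)))
```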
&lt;br /&gt;
&amp;lt;strong&amp;gt;Pros:&amp;lt;/strong&amp;gt; stable. &amp;lt;strong&amp;gt;Cons:&amp;lt;/strong&amp;gt; high complexity.&lt;br /&gt;
&lt;br /&gt;
== Method used in MM (SVD with regularization) ==&lt;br /&gt;
&lt;br /&gt;
= Weighted Least Squares =&lt;br /&gt;
Weighted least squares approximation is the simplest version of the procedure described above. Given support $\b{s}$, values $\b{u}$&lt;br /&gt;
and an anchor point $\vec{p}$, we calculate the coefficients $\b{\alpha}$ using one of the above methods.&lt;br /&gt;
Then, to approximate a function in the neighbourhood of $\vec p$ we use the formula&lt;br /&gt;
\[&lt;br /&gt;
\hat{u}(\vec x) = \b{b}(\vec x)^\T \b{\alpha} = \sum_{i=1}^m \alpha_i b_i(\vec x).&lt;br /&gt;
\]&lt;br /&gt;
&lt;br /&gt;
To approximate the derivative $\frac{\partial u}{\partial x_i}$, or any linear partial differential operator $\mathcal L$ on $u$, we&lt;br /&gt;
simply take the same linear combination of transformed basis functions $\mathcal L b_i$. We have considered coefficients $\alpha_i$ to be&lt;br /&gt;
constant and applied the linearity.&lt;br /&gt;
\[&lt;br /&gt;
 \widehat{\mathcal L u}(\vec x) = \sum_{i=1}^m \alpha_i (\mathcal L b_i)(\vec x).&lt;br /&gt;
\]&lt;br /&gt;
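For example, for $\mathcal L = \frac{\partial^2}{\partial x^2}$ in one dimension with the monomial basis $(1, x, x^2)$, the transformed basis is $(0, 0, 2)$, so the approximation of the second derivative is simply&lt;br /&gt;

```latex
\[ \widehat{u''}(\vec x) = \alpha_1 \cdot 0 + \alpha_2 \cdot 0 + \alpha_3 \cdot 2 = 2 \alpha_3. \]
```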
&lt;br /&gt;
= WLS at fixed point with fixed support and unknown function values =&lt;br /&gt;
Suppose now we are given a support $\b{s}$ and a point $\vec{p}$, and want to construct the function approximation from the values $\b{u}$.&lt;br /&gt;
We proceed as usual, solving the overdetermined system $WB \b{\alpha} = W\b{u}$ for the coefficients $\b{\alpha}$ using the pseudoinverse&lt;br /&gt;
\[ \b{\alpha} = (WB)^+W\b{u}, \]&lt;br /&gt;
where $A^+$ denotes the Moore-Penrose pseudoinverse that can be calculated using SVD.&lt;br /&gt;
&lt;br /&gt;
Writing down the approximation function $\hat{u}$ we get&lt;br /&gt;
\[&lt;br /&gt;
\hat u (\vec{p}) = \b{b}(\vec{p})^\T \b{\alpha} = \b{b}(\vec{p})^\T (WB)^+W\b{u} = \b{\chi}(\vec{p}) \b{u}.&lt;br /&gt;
\]&lt;br /&gt;
&lt;br /&gt;
We have defined $\b{\chi}$ to be&lt;br /&gt;
\[ \b{\chi}(\vec{p}) = \b{b}(\vec{p})^\T (WB)^+W. \]&lt;br /&gt;
Vector $\b{\chi}$ is a row vector, also called a ''shape function''. The name comes from the fact that all the information&lt;br /&gt;
about the shape of the domain and the choice of approximation can be stored in a single row vector, allowing us to approximate&lt;br /&gt;
a function value from given support values $\b{u}$ with a single dot product. For any values $\b{u}$, the value $\b{\chi}(\vec{p}) \b{u}$&lt;br /&gt;
gives us the approximation $\hat{u}(\vec{p})$ of $u$ in the point $\vec{p}$.&lt;br /&gt;
Mathematically speaking, $\b{\chi}(\vec{p})$ is a functional, $\b{\chi}(\vec{p})\colon \R^n \to \R$, mapping $n$-tuples of known function values to&lt;br /&gt;
their approximations in point $\vec{p}$.&lt;br /&gt;
&lt;br /&gt;
The same approach works for any linear operator $\mathcal L$ applied to $u$; just replace every $b_i$ in the definition of $\b{\chi}$ with $\mathcal Lb_i$.&lt;br /&gt;
For example, take a $1$-dimensional case for approximation of derivatives with weight equal to $1$ and $n=m=3$, with equally spaced support values at distances $h$.&lt;br /&gt;
We wish to approximate $u''$ in the middle support point, just by making a weighted sum of the values, something like the finite difference&lt;br /&gt;
\[ u'' \approx \frac{u_1 - 2u_2 + u_3}{h^2}. \]&lt;br /&gt;
This is exactly the same formula as we would arrive at by computing $\b{\chi}$, except that our approach is a lot more general. But one should think of&lt;br /&gt;
$\b{\chi}$ as one would of a finite difference scheme: it is a rule telling us how to compute the derivative.&lt;br /&gt;
\[ u''(s_2) \approx \underbrace{\begin{bmatrix} \frac{1}{h^2} &amp;amp; \frac{-2}{h^2} &amp;amp; \frac{1}{h^2} \end{bmatrix}}_{\b{\chi}} \begin{bmatrix}u_1 \\ u_2 \\ u_3 \end{bmatrix}  \]&lt;br /&gt;
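The stencil above can indeed be recovered by computing $\b{\chi}$ directly (a NumPy sketch of the unweighted case with the monomial basis):&lt;br /&gt;

```python
import numpy as np

# chi = (L b)(0)^T B^+ for L = d^2/dx^2, with monomial basis (1, x, x^2)
# centered at the middle of three equally spaced nodes.
h = 0.1
offsets = np.array([-h, 0.0, h])              # s_j - p
B = np.vander(offsets, N=3, increasing=True)  # rows b(s_j - p)^T
Lb = np.array([0.0, 0.0, 2.0])                # (1)'' = 0, (x)'' = 0, (x^2)'' = 2

chi = Lb @ np.linalg.pinv(B)
print(chi * h**2)  # the central-difference stencil [1, -2, 1], scaled by h^2
```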
&lt;br /&gt;
The fact that $\b{\chi}$ is independent of the function values $\b{u}$ and depends only on the domain geometry means that&lt;br /&gt;
'''we can just compute the shape functions $\b{\chi}$ for points of interest and then approximate any linear operator&lt;br /&gt;
of any function, given its values, very fast, using only a single dot product.'''&lt;br /&gt;
&lt;br /&gt;
= MLS =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:comparisonMLSandWLS&amp;quot;&amp;gt;&lt;br /&gt;
[[File:mlswls.svg|thumb|upright=2|&amp;lt;caption&amp;gt;Comparison of WLS and MLS approximation&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using WLS, the approximation gets worse as we move away from the central point $\vec{p}$.&lt;br /&gt;
This is partially due to no longer being in the center of the support, and partially due to the weight&lt;br /&gt;
being distributed in such a way as to assign more importance to nodes closer to $\vec{p}$.&lt;br /&gt;
&lt;br /&gt;
We can battle this problem in two ways: when we wish to approximate in a new point that is sufficiently far&lt;br /&gt;
away from $\vec{p}$ we can compute new support, recompute the new coefficients $\b{\alpha}$ and approximate again.&lt;br /&gt;
This is very costly and we would like to avoid that. A partial fix is to keep support the same, but only&lt;br /&gt;
recompute the weight vector $\b{w}$, which will now assign higher values to nodes close to the new point.&lt;br /&gt;
We still need to recompute the coefficients $\b{\alpha}$, however we avoid the cost of setting up a new support&lt;br /&gt;
and function values and recomputing $B$. This approach is called Moving Least Squares due to recomputing&lt;br /&gt;
the weighted least squares problem whenever we move the point of approximation.&lt;br /&gt;
&lt;br /&gt;
Note that if our weight is constant, or if $n = m$, when the approximation reduces to interpolation, the weights do not play&lt;br /&gt;
any role and this method is redundant. In fact, its benefits arise when the supports are rather large.&lt;br /&gt;
&lt;br /&gt;
See &amp;lt;xr id=&amp;quot;fig:comparisonMLSandWLS&amp;quot;/&amp;gt; for a comparison between the MLS and WLS approximations. The MLS approximation remains close to&lt;br /&gt;
the actual function while still inside the support domain, while the WLS approximation deteriorates once&lt;br /&gt;
we move out of the reach of the weight function.&lt;br /&gt;
&lt;br /&gt;
{{reflist}}&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Weighted_Least_Squares_(WLS)&amp;diff=743</id>
		<title>Weighted Least Squares (WLS)</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Weighted_Least_Squares_(WLS)&amp;diff=743"/>
				<updated>2016-11-24T15:20:42Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;One of the most important building blocks of the meshless methods is the Moving Least Squares approximation, which is implemented in the [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classEngineMLS.html EngineMLS class]. Check [https://gitlab.com/e62Lab/e62numcodes/blob/master/test/mls_test.cpp EngineMLS unit tests] for examples.&lt;br /&gt;
&lt;br /&gt;
= Notation Cheat sheet =&lt;br /&gt;
\begin{align*}&lt;br /&gt;
  m \in \N                  &amp;amp; \dots \text{number of basis functions} \\&lt;br /&gt;
  n \geq m \in \N           &amp;amp; \dots \text{number of points in support domain} \\&lt;br /&gt;
  k \in \N                  &amp;amp; \dots \text{dimensionality of vector space} \\&lt;br /&gt;
  \vec s_j \in \R^k         &amp;amp; \dots \text{point in support domain } \quad j=1,\dots,n \\&lt;br /&gt;
  u_j \in \R                &amp;amp; \dots \text{value of function to approximate in }\vec{s}_j \quad j=1,\dots,n \\&lt;br /&gt;
  \vec p \in \R^k           &amp;amp; \dots \text{center point of approximation} \\&lt;br /&gt;
  b_i\colon \R^k \to \R     &amp;amp; \dots \text{basis functions } \quad i=1,\dots,m \\&lt;br /&gt;
  B_{j, i} \in \R           &amp;amp; \dots \text{value of basis functions in support points } b_i(\vec{s}_j-\vec{p}) \quad j=1,\dots,n, \quad i=1,\dots,m\\&lt;br /&gt;
  \omega \colon \R^k \to \R &amp;amp; \dots \text{weight function} \\&lt;br /&gt;
  w_j \in \R                &amp;amp; \dots \text{weights } \omega(\vec{s}_j-\vec{p})  \quad j=1,\dots,n \\&lt;br /&gt;
  \alpha_i \in \R           &amp;amp; \dots \text{expansion coefficients around point } \vec{p} \quad i=1,\dots,m \\&lt;br /&gt;
  \hat u\colon \R^k \to \R  &amp;amp; \dots \text{approximation function (best fit)} \\&lt;br /&gt;
  \chi_j \in \R          &amp;amp; \dots \text{shape coefficient for point }\vec{p} \quad j=1,\dots,n \\&lt;br /&gt;
\end{align*}&lt;br /&gt;
&lt;br /&gt;
We will also use \(\b{s}, \b{u}, \b{b}, \b{\alpha}, \b{\chi} \) to annotate a column of corresponding values,&lt;br /&gt;
$W$ as a $n\times n$ diagonal matrix filled with $w_j$ on the diagonal and $B$ as a $n\times m$ matrix filled with $B_{j, i}$.&lt;br /&gt;
&lt;br /&gt;
= Definition of local approximation =&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:1DWLS&amp;quot;&amp;gt;&lt;br /&gt;
[[File:image_1avhdsfej1b9cao01029m1e13o69.png|600px|thumb|upright=2|alt=1D MLS example|&amp;lt;caption&amp;gt;Example of 1D WLS approximation &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
Our wish is to approximate an unknown function $u\colon \R^k \to \R$ while knowing $n$ values $u(\vec{s}_j) := u_j$.&lt;br /&gt;
The vector of known values will be denoted by $\b{u}$ and the vector of coordinates where those values were achieved by $\b{s}$.&lt;br /&gt;
Note that $\b{s}$ is not a vector in the usual sense since its components $\vec{s}_j$ are elements of $\R^k$, but we will call it vector anyway.&lt;br /&gt;
The values of $\b{s}$ are called ''nodes'' or ''support nodes'' or ''support''. The known values $\b{u}$ are also called ''support values''.&lt;br /&gt;
&lt;br /&gt;
In general, an approximation function around point $\vec{p}\in\R^k$ can be&lt;br /&gt;
written as \[\hat{u} (\vec{x}) = \sum_{i=1}^m \alpha_i b_i(\vec{x}) = \b{b}(\vec{x})^\T \b{\alpha} \]&lt;br /&gt;
where $\b{b} = (b_i)_{i=1}^m$ is a set of ''basis functions'', $b_i\colon \R^k \to\R$, and $\b{\alpha} = (\alpha_i)_{i=1}^m$ are the unknown coefficients.&lt;br /&gt;
&lt;br /&gt;
In MLS the goal is to minimize the error of the approximation in the given values, $\b{e} = \hat u(\b{s}) - \b{u}$,&lt;br /&gt;
between the approximation function and the target function in the known points $\b{s}$. The error can also be written as $B\b{\alpha} - \b{u}$,&lt;br /&gt;
where $B$ is a rectangular matrix of dimensions $n \times m$ with rows containing the basis functions evaluated in the points $\vec{s}_j$.&lt;br /&gt;
\[ B =&lt;br /&gt;
\begin{bmatrix}&lt;br /&gt;
b_1(\vec{s}_1) &amp;amp; \ldots &amp;amp; b_m(\vec{s}_1) \\&lt;br /&gt;
\vdots &amp;amp; \ddots &amp;amp; \vdots \\&lt;br /&gt;
b_1(\vec{s}_n) &amp;amp; \ldots &amp;amp; b_m(\vec{s}_n)&lt;br /&gt;
\end{bmatrix} =&lt;br /&gt;
 [b_i(\vec{s}_j)]_{j=1,i=1}^{n,m} = [\b{b}(\vec{s}_j)^\T]_{j=1}^n. \]&lt;br /&gt;
&lt;br /&gt;
We can choose to minimize any norm of the error vector $\b{e}$&lt;br /&gt;
and usually choose to minimize the $2$-norm or square norm \[ \|\b{e}\| = \|\b{e}\|_2 = \sqrt{\sum_{j=1}^n e_j^2}. \]&lt;br /&gt;
Commonly, we also choose to minimize a weighted norm&lt;br /&gt;
&amp;lt;ref&amp;gt;Note that our definition is a bit unusual, usually weights are not&lt;br /&gt;
 squared with the values. However, we do this to avoid computing square&lt;br /&gt;
 roots when doing MLS. If you are used to the usual definition,&lt;br /&gt;
consider the weight to be $\omega^2$.&amp;lt;/ref&amp;gt;&lt;br /&gt;
instead \[ \|\b{e}\|_{2,w} = \|\b{e}\|_w = \sqrt{\sum_{j=1}^n (w_j e_j)^2}. \]&lt;br /&gt;
The ''weights'' $w_j$ are assumed to be non-negative and are assembled in a vector $\b{w}$ or a matrix $W = \operatorname{diag}(\b{w})$, and are usually obtained from a weight function.&lt;br /&gt;
A ''weight function'' is a function $\omega\colon \R^k \to[0,\infty)$. We calculate $w_j$ as $w_j := \omega(\vec{p}-\vec{s}_j)$, so&lt;br /&gt;
good choices for $\omega$ are functions which have higher values close to $0$ (making closer nodes more important), like the normal distribution.&lt;br /&gt;
If we choose $\omega \equiv 1$, we get the unweighted version.&lt;br /&gt;
&lt;br /&gt;
The choice of minimizing the square norm gave this method its name: Least Squares approximation. If we use the weighted version, we get Weighted Least Squares or WLS.&lt;br /&gt;
In the most general case we wish to minimize&lt;br /&gt;
\[ \|\b{e}\|_{2,w}^2 = \b{e}^\T W^2 \b{e} = (B\b{\alpha} - \b{u})^\T W^2(B\b{\alpha} - \b{u}) =  \sum_{j=1}^n w_j^2 (\hat{u}(\vec{s}_j) - u_j)^2  \]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The problem of finding the coefficients $\b{\alpha}$ that minimize the error $\b{e}$ can be solved with at least three approaches:&lt;br /&gt;
* Normal equations (fastest, less accurate) - using the Cholesky decomposition of $B^\T B$ (requires full rank and $m \leq n$)&lt;br /&gt;
* QR decomposition of $B$ (requires full rank and $m \leq n$, more precise)&lt;br /&gt;
* SVD decomposition of $B$ (more expensive, even more reliable, no rank demand)&lt;br /&gt;
&lt;br /&gt;
In MM we use SVD with regularization described below.&lt;br /&gt;
&lt;br /&gt;
= Computing approximation coefficients =&lt;br /&gt;
&lt;br /&gt;
== Normal equations ==&lt;br /&gt;
We seek the minimum of&lt;br /&gt;
\[ \|\b{e}\|_2^2 = (B\b{\alpha} - \b{u})^\T(B\b{\alpha} - \b{u}) \]&lt;br /&gt;
by setting the gradient with respect to the coefficients $\alpha_i$ to zero,&lt;br /&gt;
\[\frac{\partial}{\partial \alpha_i} (B\b{\alpha} - \b{u})^\T(B\b{\alpha} - \b{u})  = 0,\]&lt;br /&gt;
which results in the normal equations&lt;br /&gt;
\[ B^\T B\b{\alpha} = B^\T \b{u}. \]&lt;br /&gt;
The coefficient matrix $B^\T B$ is symmetric and positive definite. However, solving the above problem directly is&lt;br /&gt;
poorly behaved with respect to round-off errors since the condition number $\kappa(B^\T B)$ is the square&lt;br /&gt;
of $\kappa(B)$.&lt;br /&gt;
&lt;br /&gt;
In the case of WLS the equations become&lt;br /&gt;
\[ (WB)^\T WB \b{\alpha} = (WB)^\T W \b{u}. \]&lt;br /&gt;
&lt;br /&gt;
The complexity of the Cholesky decomposition of the $m \times m$ matrix $B^\T B$ is $\frac{m^3}{3}$, and assembling $B^\T B$ by matrix multiplication costs $nm^2$. To perform the Cholesky decomposition, $WB$ must have full rank.&lt;br /&gt;
&lt;br /&gt;
'''Pros:'''&lt;br /&gt;
* simple to implement&lt;br /&gt;
* low computational complexity&lt;br /&gt;
&lt;br /&gt;
'''Cons:'''&lt;br /&gt;
* numerically unstable&lt;br /&gt;
* full rank requirement&lt;br /&gt;
&lt;br /&gt;
== QR Decomposition ==&lt;br /&gt;
\[{\bf{B}} = {\bf{QR}} = \left[ {{{\bf{Q}}_1},{{\bf{Q}}_2}} \right]\left[ {\begin{array}{*{20}{c}}&lt;br /&gt;
{{{\bf{R}}_1}}\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}} \right]\]&lt;br /&gt;
\[{\bf{B}} = {{\bf{Q}}_1}{{\bf{R}}_1}\]&lt;br /&gt;
$\bf{Q}$ is a unitary matrix ($\bf{Q}^{-1}=\bf{Q}^\T$). A useful property of unitary matrices is that multiplying with them does not alter the (Euclidean) norm of a vector, i.e.,&lt;br /&gt;
\[\left\| {{\bf{Qx}}} \right\|{\bf{ = }}\left\| {\bf{x}} \right\|\]&lt;br /&gt;
and $\bf{R}$ consists of an upper triangular block $\bf{R}_1$,&lt;br /&gt;
\[\mathbf{R} = \begin{bmatrix} \mathbf{R}_1 \\ 0 \end{bmatrix},\]&lt;br /&gt;
therefore we can say&lt;br /&gt;
\[\begin{array}{l}&lt;br /&gt;
\left\| {{\bf{B\alpha }} - {\bf{u}}} \right\|^2 = \left\| {{{\bf{Q}}^{\rm{T}}}\left( {{\bf{B\alpha }} - {\bf{u}}} \right)} \right\|^2 = \left\| {{{\bf{Q}}^{\rm{T}}}{\bf{B\alpha }} - {{\bf{Q}}^{\rm{T}}}{\bf{u}}} \right\|^2\\&lt;br /&gt;
 = \left\| \begin{bmatrix} {{\bf{R}}_1} \\ 0 \end{bmatrix}{\bf{\alpha }} - \begin{bmatrix} {\bf{Q}}_1^{\rm{T}} \\ {\bf{Q}}_2^{\rm{T}} \end{bmatrix}{\bf{u}} \right\|^2 = \left\| {{{\bf{R}}_{\bf{1}}}{\bf{\alpha }} - {\bf{Q}}_1^{\rm{T}}{\bf{u}}} \right\|^2 + \left\| {{\bf{Q}}_2^{\rm{T}}{\bf{u}}} \right\|^2&lt;br /&gt;
\end{array}\]&lt;br /&gt;
Of the two terms on the right we have no control over the second, and we can render the first one&lt;br /&gt;
zero by solving&lt;br /&gt;
\[{{\bf{R}}_{\bf{1}}}{\bf{\alpha }} = {\bf{Q}}_{_{\bf{1}}}^{\rm{T}}{\bf{u}}\]&lt;br /&gt;
which attains the minimum. We could also compute it with the pseudoinverse,&lt;br /&gt;
	\[\mathbf{\alpha }={{\mathbf{B}}^{+}}\mathbf{u}\]&lt;br /&gt;
where the pseudoinverse is simply \[{{\mathbf{B}}^{+}}=\mathbf{R}_{1}^{-1}\mathbf{Q}_{1}^{\text{T}}\] (once again, $R_1$ is upper triangular, and $Q_1$ has orthonormal columns).&lt;br /&gt;
and for the weighted case&lt;br /&gt;
	\[\mathbf{\alpha }={{\left( \mathbf{W}\mathbf{B} \right)}^{+}}\left( \mathbf{W}\mathbf{u} \right)\]&lt;br /&gt;
&lt;br /&gt;
The complexity of the QR decomposition is \[2nm^2-\tfrac{2}{3}m^3=O(nm^2)\]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Pros:&amp;lt;/strong&amp;gt; better stability in comparison with normal equations. &amp;lt;strong&amp;gt;Cons:&amp;lt;/strong&amp;gt; higher complexity.&lt;br /&gt;
&lt;br /&gt;
== SVD decomposition ==&lt;br /&gt;
In linear algebra, the [https://en.wikipedia.org/wiki/Singular_value_decomposition singular value decomposition (SVD)]&lt;br /&gt;
is a factorization of a real or complex matrix. It has many useful&lt;br /&gt;
applications in signal processing and statistics.&lt;br /&gt;
&lt;br /&gt;
Formally, the singular value decomposition of an $n \times m$ real or complex&lt;br /&gt;
matrix $\bf{B}$ is a factorization of the form $\bf{B}= \bf{U\Sigma V^\T}$, where&lt;br /&gt;
$\bf{U}$ is an $n \times n$ real or complex unitary matrix, $\bf{\Sigma}$ is an $n \times m$&lt;br /&gt;
rectangular diagonal matrix with non-negative real numbers on the diagonal, and&lt;br /&gt;
$\bf{V}$ is an $m \times m$ real or complex unitary matrix. The diagonal entries&lt;br /&gt;
$\Sigma_{ii}$ are known as the singular values of $\bf{B}$. The $n$ columns of&lt;br /&gt;
$\bf{U}$ and the $m$ columns of $\bf{V}$ are called the left-singular vectors and&lt;br /&gt;
right-singular vectors of $\bf{B}$, respectively.&lt;br /&gt;
&lt;br /&gt;
The singular value decomposition and the eigen decomposition are closely&lt;br /&gt;
related. Namely:&lt;br /&gt;
&lt;br /&gt;
* The left-singular vectors of $\bf{B}$ are eigenvectors of $\bf{BB}^\T$.&lt;br /&gt;
* The right-singular vectors of $\bf{B}$ are eigenvectors of $\bf{B}^\T\bf{B}$.&lt;br /&gt;
* The non-zero singular values of $\bf{B}$ (found on the diagonal entries of $\bf{\Sigma}$) are the square roots of the non-zero eigenvalues of both $\bf{B}^\T\bf{B}$ and $\bf{B}\bf{B}^\T$.&lt;br /&gt;
&lt;br /&gt;
With SVD we can write $\bf{B}$ as \[\bf{B}=\bf{U\Sigma{{V}^{\T}}}\] where $\bf{U}$ and $\bf{V}$ are again unitary matrices and $\bf{\Sigma}$&lt;br /&gt;
is the diagonal matrix of singular values.&lt;br /&gt;
&lt;br /&gt;
Again, we can either solve the system directly or compute the pseudoinverse as&lt;br /&gt;
&lt;br /&gt;
\[ \bf{B}^{-1} = \left( \bf{U\Sigma V}^\T\right)^{-1} = \bf{V}\bf{\Sigma^{-1}U}^\T \]&lt;br /&gt;
where computing $\bf{\Sigma}^{-1}$ is trivial: replace every non-zero diagonal entry by&lt;br /&gt;
its reciprocal and transpose the resulting matrix. This is exactly where the stability&lt;br /&gt;
gain lies: one can set a threshold below which a singular value is&lt;br /&gt;
considered to be $0$, truncating all singular values below that value and&lt;br /&gt;
thus stabilizing the inverse.&lt;br /&gt;
&lt;br /&gt;
The complexity of the SVD decomposition is \[ 2mn^2+2n^3 = O(n^3) \]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Pros:&amp;lt;/strong&amp;gt; stable. &amp;lt;strong&amp;gt;Cons:&amp;lt;/strong&amp;gt; high complexity.&lt;br /&gt;
&lt;br /&gt;
== Method used in MM (SVD with regularization) ==&lt;br /&gt;
&lt;br /&gt;
= Weighted Least Squares =&lt;br /&gt;
Weighted least squares approximation is the simplest version of the procedure described above. Given a support $\b{s}$, values $\b{u}$&lt;br /&gt;
and an anchor point $\vec{p}$, we calculate the coefficients $\b{\alpha}$ using one of the methods above.&lt;br /&gt;
Then, to approximate a function in the neighbourhood of $\vec p$ we use the formula&lt;br /&gt;
\[&lt;br /&gt;
\hat{u}(\vec x) = \b{b}(\vec x)^\T \b{\alpha} = \sum_{i=1}^m \alpha_i b_i(\vec x).&lt;br /&gt;
\]&lt;br /&gt;
&lt;br /&gt;
To approximate the derivative $\frac{\partial u}{\partial x_i}$, or any linear partial differential operator $\mathcal L$ applied to $u$, we&lt;br /&gt;
simply take the same linear combination of the transformed basis functions $\mathcal L b_i$. Here we treat the coefficients $\alpha_i$ as&lt;br /&gt;
constants and apply the linearity of $\mathcal L$:&lt;br /&gt;
\[&lt;br /&gt;
 \widehat{\mathcal L u}(\vec x) = \sum_{i=1}^m \alpha_i (\mathcal L b_i)(\vec x).&lt;br /&gt;
\]&lt;br /&gt;
&lt;br /&gt;
= WLS at fixed point with fixed support and unknown function values =&lt;br /&gt;
Suppose now we are given a support $\b{s}$ and a point $\vec{p}$, and we want to construct the function approximation from values $\b{u}$.&lt;br /&gt;
We proceed as usual, solving the overdetermined system $WB \b{\alpha} = W\b{u}$ for the coefficients $\b{\alpha}$ using the pseudoinverse&lt;br /&gt;
\[ \b{\alpha} = (WB)^+W\b{u}, \]&lt;br /&gt;
where $A^+$ denotes the Moore-Penrose pseudoinverse that can be calculated using SVD.&lt;br /&gt;
&lt;br /&gt;
Writing down the approximation function $\hat{u}$ we get&lt;br /&gt;
\[&lt;br /&gt;
\hat u (\vec{p}) = \b{b}(\vec{p})^\T \b{\alpha} = \b{b}(\vec{p})^\T (WB)^+W\b{u} = \b{\chi}(\vec{p}) \b{u}.&lt;br /&gt;
\]&lt;br /&gt;
&lt;br /&gt;
We have defined $\b{\chi}$ to be&lt;br /&gt;
\[ \b{\chi}(\vec{p}) = \b{b}(\vec{p})^\T (WB)^+W. \]&lt;br /&gt;
Vector $\b{\chi}$ is a row vector, also called a ''shape function''. The name comes from the fact that all the information&lt;br /&gt;
about the shape of the domain and the choice of approximation is stored in a single row vector, which lets us approximate&lt;br /&gt;
a function value from given support values $\b{u}$ with a single dot product. For any values $\b{u}$, the value $\b{\chi}(\vec{p}) \b{u}$&lt;br /&gt;
gives us the approximation $\hat{u}(\vec{p})$ of $u$ at the point $\vec{p}$.&lt;br /&gt;
Mathematically speaking, $\b{\chi}(\vec{p})$ is a functional, $\b{\chi}(\vec{p})\colon \R^n \to \R$, mapping $n$-tuples of known function values to&lt;br /&gt;
their approximations at the point $\vec{p}$.&lt;br /&gt;
&lt;br /&gt;
The same approach works for any linear operator $\mathcal L$ applied to $u$; just replace every $b_i$ in the definition of $\b{\chi}$ with $\mathcal Lb_i$.&lt;br /&gt;
For example, take the $1$-dimensional case for the approximation of derivatives with weight equal to $1$ and $n=m=3$, with equally spaced support nodes at distance $h$.&lt;br /&gt;
We wish to approximate $u''$ at the middle support point just by taking a weighted sum of the values, as in the finite difference&lt;br /&gt;
\[ u'' \approx \frac{u_1 - 2u_2 + u_3}{h^2}. \]&lt;br /&gt;
This is exactly the formula we would arrive at by computing $\b{\chi}$, except that our approach is far more general. One should think of&lt;br /&gt;
$\b{\chi}$ as one would of a finite difference scheme: it is a rule telling us how to compute the derivative.&lt;br /&gt;
\[ u''(s_2) \approx \underbrace{\begin{bmatrix} \frac{1}{h^2} &amp;amp; \frac{-2}{h^2} &amp;amp; \frac{1}{h^2} \end{bmatrix}}_{\b{\chi}} \begin{bmatrix}u_1 \\ u_2 \\ u_3 \end{bmatrix}  \]&lt;br /&gt;
&lt;br /&gt;
The fact that $\b{\chi}$ is independent of the function values $\b{u}$ and depends only on the domain geometry means that&lt;br /&gt;
'''we can just compute the shape functions $\b{\chi}$ for points of interest and then approximate any linear operator&lt;br /&gt;
of any function, given its values, very fast, using only a single dot product.'''&lt;br /&gt;
&lt;br /&gt;
= MLS =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:comparisonMLSandWLS&amp;quot;&amp;gt;&lt;br /&gt;
[[File:mlswls.svg|thumb|upright=2|&amp;lt;caption&amp;gt;Comparison of WLS and MLS approximation&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using WLS, the approximation gets worse as we move away from the central point $\vec{p}$.&lt;br /&gt;
This is partially because we are no longer in the center of the support, and partially because the weight&lt;br /&gt;
is distributed in such a way as to assign more importance to nodes closer to $\vec{p}$.&lt;br /&gt;
&lt;br /&gt;
We can combat this problem in two ways: when we wish to approximate at a new point that is sufficiently far&lt;br /&gt;
away from $\vec{p}$, we can compute a new support, recompute the coefficients $\b{\alpha}$ and approximate again.&lt;br /&gt;
This is very costly and we would like to avoid it. A partial fix is to keep the support the same and only&lt;br /&gt;
recompute the weight vector $\b{w}$, which will now assign higher values to nodes close to the new point.&lt;br /&gt;
We still need to recompute the coefficients $\b{\alpha}$; however, we avoid the cost of setting up the new support&lt;br /&gt;
and function values and of recomputing $B$. This approach is called Moving Least Squares, due to recomputing&lt;br /&gt;
the weighted least squares problem whenever we move the point of approximation.&lt;br /&gt;
&lt;br /&gt;
Note that if our weight is constant, or if $n = m$, when the approximation reduces to interpolation, the weights do not play&lt;br /&gt;
any role and this method is redundant. In fact, its benefits arise when supports are rather large.&lt;br /&gt;
&lt;br /&gt;
See &amp;lt;xr id=&amp;quot;fig:comparisonMLSandWLS&amp;quot;/&amp;gt; for a comparison between the MLS and WLS approximations. The MLS approximation remains close to the&lt;br /&gt;
actual function while still inside the support domain, while the WLS approximation deteriorates once&lt;br /&gt;
we move out of reach of the weight function.&lt;br /&gt;
&lt;br /&gt;
{{reflist}}&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Frequently_asked_questions&amp;diff=742</id>
		<title>Frequently asked questions</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Frequently_asked_questions&amp;diff=742"/>
				<updated>2016-11-24T14:07:32Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* '''I get the following error when compiling:'''&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
  /usr/bin/ld: cannot find -lhdf5&lt;br /&gt;
  collect2: error: ld returned 1 exit status&lt;br /&gt;
  CMakeFiles/diffusion.dir/build.make:100: recipe for target XXXX failed&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Check that you have hdf5 installed. If running on Ubuntu, you might have libraries in a weird place. See [[how to build#hdf5]] for more details.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Point_contact&amp;diff=741</id>
		<title>Point contact</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Point_contact&amp;diff=741"/>
				<updated>2016-11-24T14:03:48Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Click here to return back to [[Solid Mechanics]]&lt;br /&gt;
&lt;br /&gt;
=Point contact on a 2D half-plane=&lt;br /&gt;
&lt;br /&gt;
A starting point for solving problems in contact mechanics is to understand the effect of a point load applied to a homogeneous, linear elastic, isotropic half-plane. This problem may be defined either as plane stress or plane strain (for the solution with FreeFem++ we have chosen the latter). The traction boundary conditions for this problem are:&lt;br /&gt;
\begin{equation}\label{eq:bc}&lt;br /&gt;
\sigma_{xy}(x,0) = 0, \quad \sigma_{yy}(x,y) = -P\delta(x,y)&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $\delta(x,y)$ is the Dirac delta function. Together these boundary conditions state that there is a singular normal force $P$ applied at $(x,y) = (0,0)$ and there are no shear stresses on the surface of the elastic half-plane.&lt;br /&gt;
&lt;br /&gt;
The analytical relations for the stresses can be found from the [https://en.wikipedia.org/wiki/Flamant_solution Flamant solution] (stress distributions in a linear elastic wedge loaded by point forces at the tip; when the &amp;quot;wedge&amp;quot; is flat we get a half-plane; the derivation uses polar coordinates) and are given as:&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{xx} = -\frac{2P}{\pi} \frac{x^2y}{\left(x^2+y^2\right)^2},&lt;br /&gt;
\end{equation}&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{yy} = -\frac{2P}{\pi} \frac{y^3}{\left(x^2+y^2\right)^2},&lt;br /&gt;
\end{equation}&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{xy} = -\frac{2P}{\pi} \frac{xy^2}{\left(x^2+y^2\right)^2},&lt;br /&gt;
\end{equation}&lt;br /&gt;
for some point $(x,y)$ in the half-plane. From this stress field the strain components and thus the displacements $(u_x,u_y)$ can be determined. The displacements are given by&lt;br /&gt;
\begin{align}&lt;br /&gt;
u_x &amp;amp;= -\frac{P}{4\pi\mu}\left((\kappa-1)\theta - \frac{2xy}{r^2}\right), \label{eq:dispx}\\&lt;br /&gt;
u_y &amp;amp;= -\frac{P}{4\pi\mu}\left((\kappa+1)\log r + \frac{2x^2}{r^2}\right), \label{eq:dispy}&lt;br /&gt;
\end{align}&lt;br /&gt;
where $$r = \sqrt{x^2+y^2}$$ and $$\tan \theta = \frac{x}{y}.$$ The symbol $\kappa$ is known as the Dundurs constant and is defined as&lt;br /&gt;
\[&lt;br /&gt;
\kappa = \begin{cases} 3 - 4\nu &amp;amp; \quad \text{(Plane strain)}, \\&lt;br /&gt;
                       \cfrac{3 - \nu}{1+\nu} &amp;amp; \quad \text{(Plane stress)}. \end{cases}&lt;br /&gt;
\]&lt;br /&gt;
The last remaining symbol is $\mu$ which represents the shear modulus (sometimes also denoted with $G$).&lt;br /&gt;
&lt;br /&gt;
==Numerical solution with [http://www.freefem.org/ FreeFem++]==&lt;br /&gt;
Due to the known analytical solution the point-contact problem can be used for benchmarking numerical PDE solvers in terms of accuracy (as well as computational efficiency). The purpose of this section is to compare the numerical solution obtained by FreeFem++ with the analytical solution, as well as provide a reference numerical solution for the [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/wiki/index.php/Main_Page C++ library] developed in our laboratory.&lt;br /&gt;
&lt;br /&gt;
For simplicity we limit ourselves to the domain $(x,y) \in \Omega = [-1,1] \times[-1,-0.1]$ and prescribe Dirichlet displacements on the boundaries $\Gamma_D$ from the known analytical solution (\ref{eq:dispx}, \ref{eq:dispy}). This way we avoid having to deal with the Dirac delta traction boundary condition (\ref{eq:bc}). The problem can be described as: find $\boldsymbol{u}(\boldsymbol{x})$ that satisfies&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\boldsymbol{\nabla}\cdot\boldsymbol{\sigma}= 0 \qquad \text{on }\Omega&lt;br /&gt;
\end{equation}&lt;br /&gt;
and&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\boldsymbol{u} = \boldsymbol{u}_{\text{analytical}} \qquad \text{on }\Gamma_D&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $\boldsymbol{u}_\text{analytical}$ is given in equations (\ref{eq:dispx}) and (\ref{eq:dispy}).&lt;br /&gt;
&lt;br /&gt;
To solve the point-contact problem in FreeFem++ we must first provide the weak form of the balance equation:&lt;br /&gt;
\begin{equation*}&lt;br /&gt;
\boldsymbol{\nabla}\cdot\boldsymbol{\sigma} + \boldsymbol{b} = 0.&lt;br /&gt;
\end{equation*}&lt;br /&gt;
The corresponding weak formulation is&lt;br /&gt;
\begin{equation}\label{eq:weak}&lt;br /&gt;
\int_\Omega \boldsymbol{\sigma} : \boldsymbol{\varepsilon}(\boldsymbol{v}) \, d\Omega - \int_\Omega \boldsymbol{b}\cdot\boldsymbol{v}\,d\Omega = 0&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $:$ denotes the tensor scalar product (tensor contraction), i.e. $\boldsymbol{A}:\boldsymbol{B} =\sum_{i,j} A_{ij}B_{ij}$. The vector $\boldsymbol{v}$ is the test function or so-called &amp;quot;virtual displacement&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Equation (\ref{eq:weak}) can be handed to FreeFem++ with the help of [https://en.wikipedia.org/wiki/Voigt_notation#Mandel_notation Voigt or Mandel notation], which reduces the symmetric tensors $\boldsymbol{\sigma}$ and $\boldsymbol{\varepsilon}$ to vectors. The benefit of [https://en.wikipedia.org/wiki/Voigt_notation#Mandel_notation Mandel notation] is that it allows the tensor scalar product to be performed as a scalar product of two vectors.&lt;br /&gt;
For this reason we create the following macros:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
 macro u [ux,uy] // displacements&lt;br /&gt;
 macro v [vx,vy] // test function&lt;br /&gt;
 macro b [bx,by] // body forces&lt;br /&gt;
 macro e(u) [dx(u[0]),dy(u[1]),(dx(u[1])+dy(u[0]))/2] // strain (for post-processing)&lt;br /&gt;
 macro em(u) [dx(u[0]),dy(u[1]),sqrt(2)*(dx(u[1])+dy(u[0]))/2] // strain in Mandel notation&lt;br /&gt;
 macro A [[2*mu+lambda,mu,0],[mu,2*mu+lambda,0],[0,0,2*mu]] // stress-strain matrix&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The weak form (\ref{eq:weak}) can then be expressed naturally in FreeFem++ syntax as &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt;int2d(Th)((A*em(u))'*em(v)) - int2d(Th)(b'*v)&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Numerical solution with [[Meshless Local Strong Form Method (MLSM)]]==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; line&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    RectangleDomain&amp;lt;vec_t&amp;gt; domain(O.domain_lo, O.domain_hi);&lt;br /&gt;
    domain.fillUniformInteriorWithStep(O.d_space);&lt;br /&gt;
    domain.fillUniformBoundaryWithStep(O.d_space);&lt;br /&gt;
    domain.findSupport(O.n, internal);&lt;br /&gt;
    domain.findSupport(O.n, boundary, internal, true); //search only among internal nodes + itself&lt;br /&gt;
&lt;br /&gt;
    EngineMLS&amp;lt;vec_t, Gaussians, Gaussians&amp;gt; mls(&lt;br /&gt;
        {pow(domain.characteristicDistance()*O.sigmaB,2), O.m},   // basis functions&lt;br /&gt;
        domain.positions[domain.support[0]],&lt;br /&gt;
        pow(domain.characteristicDistance()*O.sigmaW,2));       // weight function&lt;br /&gt;
&lt;br /&gt;
    auto mlsm = make_mlsm(domain, mls, internal);&lt;br /&gt;
    for (auto&amp;amp; i : boundary)    u_3[i]  = u_anal(i);&lt;br /&gt;
    /// [MAIN TEMPORAL LOOP]&lt;br /&gt;
    for (size_t step = 0; step * O.dt &amp;lt; O.time; step++) {&lt;br /&gt;
        int i;  &lt;br /&gt;
        ///[NAVIER-CAUCHY EQUATION :: explicit Plane Stress]&lt;br /&gt;
        #pragma omp parallel for private(i) schedule(static)&lt;br /&gt;
        for (i=0;i&amp;lt;internal.size();++i){&lt;br /&gt;
            u_3[i] = O.dt * O.dt / O.rho * (&lt;br /&gt;
                    O.mu * mlsm.lap(u_2,i) + O.E/(2-2*O.v ) * mlsm.graddiv(u_2,i) +&lt;br /&gt;
                    force[i] - O.dampCoef * (u_2[i] - u_1[i])/O.dt&lt;br /&gt;
                ) /// navier part&lt;br /&gt;
                + 2 * u_2[i] - u_1[i];   &lt;br /&gt;
        }&lt;br /&gt;
        ///[STEP FORWARD]&lt;br /&gt;
        u_1 = u_2;&lt;br /&gt;
        u_2 = u_3;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the operator grad(div(u)), which also requires mixed derivatives, is used. To obtain a stable solution, at least a second-order monomial basis is required.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Purely Dirichlet case ===&lt;br /&gt;
To start, we check the solution of the problem with Dirichlet boundary conditions. The conditions are obtained from the closed-form solution in [[#Point contact on a 2D half-plane]].&lt;br /&gt;
&lt;br /&gt;
[[File:image_1b19ssjsn1e4t7qvq0ffnb1pft9.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Comparison of the MLS-based MLSM, FEM and analytical solutions of the displacements at different cross-sections.&lt;br /&gt;
&lt;br /&gt;
[[File:v_for_y_05.png|400px]][[File:v_for_x_0.png|400px]][[File:u_for_y_05.png|400px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:point_contact_convergence&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the convergence study between the analytical and numerical solutions, we vary the number of nodes by increasing the grid size in both the $x$- and $y$-directions simultaneously by powers of two, from $2^2$ ($16$ nodes altogether) to $2^7$ ($16384$ nodes altogether).&lt;br /&gt;
The $L^2$ error norm is used to measure the &amp;quot;difference&amp;quot; between the solutions. Since the displacements are the variables we obtain from FreeFem++, we use the displacement magnitude $|\boldsymbol{u}| = \sqrt{u_x^2+u_y^2}$ to define our $L^2$ error norm. The exact equation we have used is&lt;br /&gt;
\begin{equation}&lt;br /&gt;
L^2\text{-norm} = \sqrt{\frac{\int_\Omega (|\boldsymbol{u_{\text{numerical}}}|-|\boldsymbol{u_{\text{analytical}}}|)^2d\Omega}{\int_\Omega|\boldsymbol{u_{\text{analytical}}}|^2d\Omega}}.  &lt;br /&gt;
\end{equation}&lt;br /&gt;
Results are shown in &amp;lt;xr id=&amp;quot;fig:point_contact_convergence&amp;quot;/&amp;gt;.&lt;br /&gt;
[[File:Convergence.png|400px|&amp;lt;caption&amp;gt;FEM Convergence results for the point contact problem. The colours blue, red and green represent linear, quadratic and cubic finite elements, respectively.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Point_contact&amp;diff=740</id>
		<title>Point contact</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Point_contact&amp;diff=740"/>
				<updated>2016-11-24T14:02:59Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Click here to return back to [[Solid Mechanics]]&lt;br /&gt;
&lt;br /&gt;
=Point contact on a 2D half-plane=&lt;br /&gt;
&lt;br /&gt;
A starting point for solving problems in contact mechanics is to understand the effect of a point load applied to a homogeneous, linear elastic, isotropic half-plane. This problem may be defined either as plane stress or plane strain (for the solution with FreeFem++ we have chosen the latter). The traction boundary conditions for this problem are:&lt;br /&gt;
\begin{equation}\label{eq:bc}&lt;br /&gt;
\sigma_{xy}(x,0) = 0, \quad \sigma_{yy}(x,y) = -P\delta(x,y)&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $\delta(x,y)$ is the Dirac delta function. Together these boundary conditions state that there is a singular normal force $P$ applied at $(x,y) = (0,0)$ and there are no shear stresses on the surface of the elastic half-plane.&lt;br /&gt;
&lt;br /&gt;
The analytical relations for the stresses can be found from the [https://en.wikipedia.org/wiki/Flamant_solution Flamant solution] (stress distributions in a linear elastic wedge loaded by point forces at the tip; when the &amp;quot;wedge&amp;quot; is flat we get a half-plane; the derivation uses polar coordinates) and are given as:&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{xx} = -\frac{2P}{\pi} \frac{x^2y}{\left(x^2+y^2\right)^2},&lt;br /&gt;
\end{equation}&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{yy} = -\frac{2P}{\pi} \frac{y^3}{\left(x^2+y^2\right)^2},&lt;br /&gt;
\end{equation}&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{xy} = -\frac{2P}{\pi} \frac{xy^2}{\left(x^2+y^2\right)^2},&lt;br /&gt;
\end{equation}&lt;br /&gt;
for some point $(x,y)$ in the half-plane. From this stress field the strain components and thus the displacements $(u_x,u_y)$ can be determined. The displacements are given by&lt;br /&gt;
\begin{align}&lt;br /&gt;
u_x &amp;amp;= -\frac{P}{4\pi\mu}\left((\kappa-1)\theta - \frac{2xy}{r^2}\right), \label{eq:dispx}\\&lt;br /&gt;
u_y &amp;amp;= -\frac{P}{4\pi\mu}\left((\kappa+1)\log r + \frac{2x^2}{r^2}\right), \label{eq:dispy}&lt;br /&gt;
\end{align}&lt;br /&gt;
where $$r = \sqrt{x^2+y^2}$$ and $$\tan \theta = \frac{x}{y}.$$ The symbol $\kappa$ is known as the Dundurs constant and is defined as&lt;br /&gt;
\[&lt;br /&gt;
\kappa = \begin{cases} 3 - 4\nu &amp;amp; \quad \text{(Plane strain)}, \\&lt;br /&gt;
                       \cfrac{3 - \nu}{1+\nu} &amp;amp; \quad \text{(Plane stress)}. \end{cases}&lt;br /&gt;
\]&lt;br /&gt;
The last remaining symbol is $\mu$ which represents the shear modulus (sometimes also denoted with $G$).&lt;br /&gt;
&lt;br /&gt;
==Numerical solution with [http://www.freefem.org/ FreeFem++]==&lt;br /&gt;
Due to the known analytical solution the point-contact problem can be used for benchmarking numerical PDE solvers in terms of accuracy (as well as computational efficiency). The purpose of this section is to compare the numerical solution obtained by FreeFem++ with the analytical solution, as well as provide a reference numerical solution for the [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/wiki/index.php/Main_Page C++ library] developed in our laboratory.&lt;br /&gt;
&lt;br /&gt;
For simplicity we limit ourselves to the domain $(x,y) \in \Omega = [-1,1] \times[-1,-0.1]$ and prescribe Dirichlet displacements on the boundaries $\Gamma_D$ from the known analytical solution (\ref{eq:dispx}, \ref{eq:dispy}). This way we avoid having to deal with the Dirac delta traction boundary condition (\ref{eq:bc}). The problem can be described as: find $\boldsymbol{u}(\boldsymbol{x})$ that satisfies&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\boldsymbol{\nabla}\cdot\boldsymbol{\sigma}= 0 \qquad \text{on }\Omega&lt;br /&gt;
\end{equation}&lt;br /&gt;
and&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\boldsymbol{u} = \boldsymbol{u}_{\text{analytical}} \qquad \text{on }\Gamma_D&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $\boldsymbol{u}_\text{analytical}$ is given in equations (\ref{eq:dispx}) and (\ref{eq:dispy}).&lt;br /&gt;
&lt;br /&gt;
To solve the point-contact problem in FreeFem++ we must first provide the weak form of the balance equation:&lt;br /&gt;
\begin{equation*}&lt;br /&gt;
\boldsymbol{\nabla}\cdot\boldsymbol{\sigma} + \boldsymbol{b} = 0.&lt;br /&gt;
\end{equation*}&lt;br /&gt;
The corresponding weak formulation is&lt;br /&gt;
\begin{equation}\label{eq:weak}&lt;br /&gt;
\int_\Omega \boldsymbol{\sigma} : \boldsymbol{\varepsilon}(\boldsymbol{v}) \, d\Omega - \int_\Omega \boldsymbol{b}\cdot\boldsymbol{v}\,d\Omega = 0&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $:$ denotes the tensor scalar product (tensor contraction), i.e. $\boldsymbol{A}:\boldsymbol{B} =\sum_{i,j} A_{ij}B_{ij}$. The vector $\boldsymbol{v}$ is the test function or so-called &amp;quot;virtual displacement&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Equation (\ref{eq:weak}) can be handed to FreeFem++ with the help of [https://en.wikipedia.org/wiki/Voigt_notation#Mandel_notation Voigt or Mandel notation], that reduces the symmetric tensors $\boldsymbol{\sigma}$ and $\boldsymbol{\varepsilon}$ to vectors. The benefit of [https://en.wikipedia.org/wiki/Voigt_notation#Mandel_notation Mandel notation] is that it allows the tensor scalar product to be performed as a scalar product of two vectors.&lt;br /&gt;
For this reason we create the following macros:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
 macro u [ux,uy] // displacements&lt;br /&gt;
 macro v [vx,vy] // test function&lt;br /&gt;
 macro b [bx,by] // body forces&lt;br /&gt;
 macro e(u) [dx(u[0]),dy(u[1]),(dx(u[1])+dy(u[0]))/2] // strain (for post-processing)&lt;br /&gt;
 macro em(u) [dx(u[0]),dy(u[1]),sqrt(2)*(dx(u[1])+dy(u[0]))/2] // strain in Mandel notation&lt;br /&gt;
 macro A [[2*mu+lambda,mu,0],[mu,2*mu+lambda,0],[0,0,2*mu]] // stress-strain matrix&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The weak form (\ref{eq:weak}) can then be expressed naturally in FreeFem++ syntax as&lt;br /&gt;
 int2d(Th)((A*em(u))'*em(v)) - int2d(Th)(b'*v)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Numerical solution with [[Meshless Local Strong Form Method (MLSM)]]==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; line&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    RectangleDomain&amp;lt;vec_t&amp;gt; domain(O.domain_lo, O.domain_hi);&lt;br /&gt;
    domain.fillUniformInteriorWithStep(O.d_space);&lt;br /&gt;
    domain.fillUniformBoundaryWithStep(O.d_space);&lt;br /&gt;
    domain.findSupport(O.n, internal);&lt;br /&gt;
    domain.findSupport(O.n, boundary, internal, true); //search only among internal nodes + itself&lt;br /&gt;
&lt;br /&gt;
    EngineMLS&amp;lt;vec_t, Gaussians, Gaussians&amp;gt; mls(&lt;br /&gt;
        {pow(domain.characteristicDistance()*O.sigmaB,2), O.m},   // basis functions&lt;br /&gt;
        domain.positions[domain.support[0]],&lt;br /&gt;
        pow(domain.characteristicDistance()*O.sigmaW,2));       // weight function&lt;br /&gt;
&lt;br /&gt;
    auto mlsm = make_mlsm(domain, mls, internal);&lt;br /&gt;
    for (auto&amp;amp; i : boundary)    u_3[i]  = u_anal(i);&lt;br /&gt;
    /// [MAIN TEMPORAL LOOP]&lt;br /&gt;
    for (size_t step = 0; step * O.dt &amp;lt; O.time; step++) {&lt;br /&gt;
        int i;  &lt;br /&gt;
        ///[NAVIER-CAUCHY EQUATION :: explicit Plane Stress]&lt;br /&gt;
        #pragma omp parallel for private(i) schedule(static)&lt;br /&gt;
        for (i=0;i&amp;lt;internal.size();++i){&lt;br /&gt;
            u_3[i] = O.dt * O.dt / O.rho * (&lt;br /&gt;
                    O.mu * mlsm.lap(u_2,i) + O.E/(2-2*O.v ) * mlsm.graddiv(u_2,i) +&lt;br /&gt;
                    force[i] - O.dampCoef * (u_2[i] - u_1[i])/O.dt&lt;br /&gt;
                ) /// navier part&lt;br /&gt;
                + 2 * u_2[i] - u_1[i];   &lt;br /&gt;
        }&lt;br /&gt;
        ///[STEP FORWARD]&lt;br /&gt;
        u_1 = u_2;&lt;br /&gt;
        u_2 = u_3;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the operator grad(div(u)), which also requires mixed derivatives, is used. To obtain a stable solution, at least a second-order monomial basis is required.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Purely Dirichlet case ===&lt;br /&gt;
To start, we check the solution of the problem with Dirichlet boundary conditions. The conditions are obtained from the closed-form solution in [[#Point contact on a 2D half-plane]].&lt;br /&gt;
&lt;br /&gt;
[[File:image_1b19ssjsn1e4t7qvq0ffnb1pft9.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Comparison of the MLS-based MLSM, FEM and analytical solutions of the displacements at different cross-sections.&lt;br /&gt;
&lt;br /&gt;
[[File:v_for_y_05.png|400px]][[File:v_for_x_0.png|400px]][[File:u_for_y_05.png|400px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:point_contact_convergence&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the convergence study between the analytical and numerical solutions, we vary the number of nodes by increasing the grid size in both the $x$- and $y$-directions simultaneously by powers of two, from $2^2$ ($16$ nodes altogether) to $2^7$ ($16384$ nodes altogether).&lt;br /&gt;
The $L^2$ error norm is used to measure the &amp;quot;difference&amp;quot; between the solutions. Since the displacements are the variables we obtain from FreeFem++, we use the displacement magnitude $|\boldsymbol{u}| = \sqrt{u_x^2+u_y^2}$ to define our $L^2$ error norm. The exact equation we have used is&lt;br /&gt;
\begin{equation}&lt;br /&gt;
L^2\text{-norm} = \sqrt{\frac{\int_\Omega (|\boldsymbol{u_{\text{numerical}}}|-|\boldsymbol{u_{\text{analytical}}}|)^2d\Omega}{\int_\Omega|\boldsymbol{u_{\text{analytical}}}|^2d\Omega}}.  &lt;br /&gt;
\end{equation}&lt;br /&gt;
Results are shown in &amp;lt;xr id=&amp;quot;fig:point_contact_convergence&amp;quot;/&amp;gt;.&lt;br /&gt;
[[File:Convergence.png|400px|&amp;lt;caption&amp;gt;FEM Convergence results for the point contact problem. The colours blue, red and green represent linear, quadratic and cubic finite elements, respectively.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Coding_style&amp;diff=739</id>
		<title>Coding style</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Coding_style&amp;diff=739"/>
				<updated>2016-11-24T14:01:17Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a brief description of our coding style, roughly following the [https://google.github.io/styleguide/cppguide.html Google C++ Style Guide].&lt;br /&gt;
&lt;br /&gt;
=General=&lt;br /&gt;
A soft $80$-character line width limit.&lt;br /&gt;
&lt;br /&gt;
=Indentation=&lt;br /&gt;
&lt;br /&gt;
Indent using spaces. Indentation width is $4$ spaces.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
for (int i = 0; i &amp;lt; 10; i++) {&lt;br /&gt;
    cout &amp;lt;&amp;lt; i &amp;lt;&amp;lt; endl;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Naming convention=&lt;br /&gt;
&lt;br /&gt;
Constants - &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt; UPPER_CASE &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
classes   - &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt;PascalCase&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
methods - &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt;camelCase&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
variables and stand-alone functions - &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt; underscore_separated &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
typedefs - lowercase underscore separated, usually one word with a trailing &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt; _t &amp;lt;/syntaxhighlight&amp;gt;, e.g. &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt; vec_t &amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
namespaces - one lowercase word, possibly shortened, e.g. &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt; op &amp;lt;/syntaxhighlight&amp;gt;. For internal implementation details use &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt; op_internal &amp;lt;/syntaxhighlight&amp;gt;. The namespace closing brace should be followed by the comment // namespace your_name. There is no indentation within namespaces.&lt;br /&gt;
&lt;br /&gt;
All standard abbreviations like [[Moving Least Squares (MLS)| Moving Least Squares (MLS)]] or [https://en.wikipedia.org/wiki/Finite_element_method Finite Element Method (FEM)] -&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt;UPPER_CASE&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Floating point and integer literals use lowercase suffixes, e.g. &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt; 0.0f, -1e8l, 45ull&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
#define MAX_VALUE 500&lt;br /&gt;
class MyClass {&lt;br /&gt;
  public:&lt;br /&gt;
    MyClass(int first_var, int second_var) {&lt;br /&gt;
        ...&lt;br /&gt;
    }&lt;br /&gt;
    int getSize();&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Comments=&lt;br /&gt;
&lt;br /&gt;
Comments are good. Use them to explain your code. Comments should have a space between the last&lt;br /&gt;
slash and the start of text.  Inline comments should have at least two spaces between end of code&lt;br /&gt;
and start of the comment.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
// This function will change the world&lt;br /&gt;
double change_the_world(bool skynet) {&lt;br /&gt;
    if (skynet) {&lt;br /&gt;
        return 0.0;  // Brace for the end of the world&lt;br /&gt;
    }&lt;br /&gt;
    ...&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use doxygen comments to generate [[documentation]].&lt;br /&gt;
&lt;br /&gt;
=Headers=&lt;br /&gt;
&lt;br /&gt;
All headers must contain a header guard of the form &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt;PATH_TO_FILENAME_HPP_&amp;lt;/syntaxhighlight&amp;gt; as enforced by the linter.&lt;br /&gt;
&lt;br /&gt;
Includes in header files are separated into two groups, with intra-project includes on top and other includes on the bottom.&lt;br /&gt;
The groups are separated by a blank line and includes are kept sorted within a group.&lt;br /&gt;
&lt;br /&gt;
=Misc=&lt;br /&gt;
&lt;br /&gt;
Avoid trailing whitespace. Opening curly brackets &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt; { &amp;lt;/syntaxhighlight&amp;gt; should be on the same line as &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt; for &amp;lt;/syntaxhighlight&amp;gt; loops, function&lt;br /&gt;
definitions and class names, separated by a space. Outermost binary operators should have spaces&lt;br /&gt;
around them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
int sumMe(int var) {  // yes&lt;br /&gt;
    if (var == 1)&lt;br /&gt;
    {                 // no&lt;br /&gt;
        return 1;&lt;br /&gt;
    }&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For null pointers we use &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt; nullptr &amp;lt;/syntaxhighlight&amp;gt; instead of the &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; inline&amp;gt; NULL &amp;lt;/syntaxhighlight&amp;gt; macro.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Cantilever_beam&amp;diff=738</id>
		<title>Cantilever beam</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Cantilever_beam&amp;diff=738"/>
				<updated>2016-11-24T14:00:22Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Do you want to go back to [[Solid Mechanics]]?&lt;br /&gt;
&lt;br /&gt;
On this page we conduct numerical studies of '''bending of a cantilever loaded at the end''', a common numerical benchmark in elastostatics. &amp;lt;ref&amp;gt; Augarde, Charles E. and Deeks, Andrew J.. &amp;quot;The use of Timoshenko's exact solution for a cantilever beam in adaptive analysis&amp;quot; , ''Finite Elements in Analysis and Design''. (2008), doi: [http://dx.doi.org/10.1016/j.finel.2008.01.010 10.1016/j.finel.2008.01.010] &amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Exact solution =&lt;br /&gt;
&lt;br /&gt;
The exact solution to this problem is given by Timoshenko (1951) where it is derived for '''plane stress''' conditions. &amp;lt;ref&amp;gt;Timoshenko, S. and Goodier, J. N. (1951). ''Theory of elasticity'', p. 35 - 39. McGraw-Hill, Inc., New York.&amp;lt;/ref&amp;gt; Consider a beam of dimensions $L \times D$ having a narrow rectangular cross section. The origin of the coordinate system is placed at $(x,y) = (0,D/2)$. The beam is bent by a force $P$ applied at the end $x = 0$ and the other end of the beam is fixed (at $x = L$). The stresses in such a beam are given as:&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{xx} = -\frac{Pxy}{I},&lt;br /&gt;
\end{equation}&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{yy} = 0,&lt;br /&gt;
\end{equation}&lt;br /&gt;
\begin{equation}\label{eq:sxy}&lt;br /&gt;
\sigma_{xy} = -\frac{P}{2I}\left(\frac{D^2}{4} - y^2 \right),&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $I = D^3/12$ is the moment of inertia.&lt;br /&gt;
&lt;br /&gt;
The exact solution in terms of the displacements in the $x$- and $y$-directions is&lt;br /&gt;
\begin{align}\label{eq:beam_a1}&lt;br /&gt;
u_x(x,y) &amp;amp;= -\frac{Py}{6EI}\left(3(x^2-L^2) -(2+\nu)y^2 + 6 (1+\nu) \frac{D^2}{4}\right) \\ \label{eq:beam_a2}&lt;br /&gt;
u_y(x,y) &amp;amp;= \frac{P}{6EI}\left(3\nu x y^2 + x^3 - 3L^2 x + 2L^3\right)&lt;br /&gt;
\end{align}&lt;br /&gt;
where $E$ is Young's modulus and $\nu$ is the Poisson ratio.&lt;br /&gt;
&lt;br /&gt;
Alternatively we may prefer to see the force applied on the right side at $x = L$, and have the left end at $(x,y) = (0,0)$ fixed. In this case the solution can be found in the following expandable section.&lt;br /&gt;
&amp;lt;div class=&amp;quot;toccolours mw-collapsible mw-collapsed&amp;quot;&amp;gt;&lt;br /&gt;
'''Solutions for cantilever beam with force applied on the right side at $x = L$'''&lt;br /&gt;
&amp;lt;div class=&amp;quot;mw-collapsible-content&amp;quot;&amp;gt;&lt;br /&gt;
Consider a cantilever beam of depth $D$, length $L$ and unit thickness, which is fully fixed at $x = 0$ and carries an end load $P$. The stress field in the cantilever is given by&lt;br /&gt;
\begin{align}&lt;br /&gt;
\sigma_{xx} &amp;amp;= \frac{P(L-x)y}{I}, \\&lt;br /&gt;
\sigma_{yy} &amp;amp;= 0, \\&lt;br /&gt;
\sigma_{xy} &amp;amp;= -\frac{P}{2I}\left(\frac{D^2}{4} - y^2\right)&lt;br /&gt;
\end{align}&lt;br /&gt;
where $I = D^3/12$ is the moment of inertia. The displacement field $(u_x, u_y)$ is given by&lt;br /&gt;
\begin{align} \label{eq:cb1}&lt;br /&gt;
u_x &amp;amp;= -\frac{Py}{6EI}\left((6L-3x)x + (2+\nu)\left(y^2-\frac{D^2}{4}\right)\right) \\ \label{eq:cb2}&lt;br /&gt;
u_y &amp;amp;= -\frac{P}{6EI}\left(3\nu y^2(L-x) + (4+5\nu)\frac{D^2 x}{4} +(3L-x)x^2\right)&lt;br /&gt;
\end{align}&lt;br /&gt;
where $E$ is Young's modulus and $\nu$ the Poisson ratio.&lt;br /&gt;
&lt;br /&gt;
From equations (\ref{eq:cb1}) and (\ref{eq:cb2}) we may find the essential boundary conditions for the fixed side $x = 0$&lt;br /&gt;
\begin{align}&lt;br /&gt;
u_x(0,y) &amp;amp;= -\frac{Py}{6EI}(2+\nu)\left(y^2 - \frac{D^2}{4}\right), \\&lt;br /&gt;
u_y(0,y) &amp;amp;= -\frac{P}{6EI}3\nu y^2 L.&lt;br /&gt;
\end{align}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Numerical solution =&lt;br /&gt;
&lt;br /&gt;
For the numerical solution we first choose the following parameters: &amp;lt;ref&amp;gt; Liu, Gui-Rong (2003). ''Mesh free methods: moving beyond the finite element method'', p. 161. CRC Press LLC, Boca Raton.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Loading: $P = -1000$ N&lt;br /&gt;
* Young's modulus: $E = 3 \times 10^7$ N/m&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&lt;br /&gt;
* Poisson's ratio: $\nu = 0.3$&lt;br /&gt;
* Depth of the beam: $D = 12$ m&lt;br /&gt;
* Length of the beam: $L = 48$ m&lt;br /&gt;
&lt;br /&gt;
The unloaded beam is discretized with $40 \times 10$ regular nodes. Since the right end of the beam at $x = L$ is fixed, the displacement boundary conditions are prescribed from the known analytical formulae (\ref{eq:beam_a1}) and (\ref{eq:beam_a2}) :&lt;br /&gt;
\begin{equation}&lt;br /&gt;
u_x(L,y) = -\frac{Py}{6EI}\left(-(2+\nu)y^2 + 6 (1+\nu) \frac{D^2}{4}\right); \qquad u_y(L,y) = \frac{P}{2EI}\nu L y^2&lt;br /&gt;
\end{equation}&lt;br /&gt;
The traction boundary at the left end of the beam ($x=0$) is given by (\ref{eq:sxy}): &lt;br /&gt;
\begin{equation}\label{eq:trac_a}&lt;br /&gt;
t_y(0,y) = -\frac{P}{2I}\left(\left(\frac{D}{2}\right)^2 - y^2 \right).&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
An indicator of the accuracy that can be employed is the strain energy error $e$:&lt;br /&gt;
\begin{equation}&lt;br /&gt;
e = \left[ \frac{1}{2} \int_\Omega (\b{\varepsilon}^\mathrm{num} - \b{\varepsilon}^\mathrm{exact})\b{C}(\b{\varepsilon}^\mathrm{num} - \b{\varepsilon}^\mathrm{exact}) d\Omega\right]^{1/2}&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $\b{\varepsilon}$ is the strain tensor in vector form, $\b{C}$ the reduced stiffness tensor (a matrix) and $\Omega$ the domain of the calculated solution.&lt;br /&gt;
&lt;br /&gt;
== FreeFem++ ==&lt;br /&gt;
&lt;br /&gt;
The required code to solve this problem in FreeFem++ is:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; line&amp;gt;&lt;br /&gt;
 // Parameters&lt;br /&gt;
 real E = 3.0e7, nu = 0.3;&lt;br /&gt;
 real P = -1000;&lt;br /&gt;
 real L = 48, D = 12;&lt;br /&gt;
 real I = D^3/12;&lt;br /&gt;
 &lt;br /&gt;
 // Mesh&lt;br /&gt;
 mesh Th = square(40,10,[L*x,-D/2+D*y]);&lt;br /&gt;
 &lt;br /&gt;
 // Macros&lt;br /&gt;
 macro u [ux,uy] // displacement&lt;br /&gt;
 macro v [vx,vy] // test function&lt;br /&gt;
 macro div(u) (dx(u[0])+dy(u[1])) // divergence&lt;br /&gt;
 macro eM(u) [dx(u[0]), dy(u[1]), sqrt(2)*(dx(u[1]) + dy(u[0]))/2] // strain tensor&lt;br /&gt;
 &lt;br /&gt;
 // Finite element space&lt;br /&gt;
 fespace Vh(Th,[P2,P2]);&lt;br /&gt;
 Vh u, v;&lt;br /&gt;
 &lt;br /&gt;
 // Boundary conditions&lt;br /&gt;
 func uxb = -P*y/(6*E*I)*(nu*y^2-2*(1+nu)*y^2+6*D^2/4*(1+nu));&lt;br /&gt;
 func uyb = P/(6*E*I)*(3*nu*L*y^2);&lt;br /&gt;
 func ty = -P/(2*I)*(D^2/4-y^2);&lt;br /&gt;
 &lt;br /&gt;
 // Convert E and nu to plane stress&lt;br /&gt;
 E = E/(1-nu^2); nu = nu/(1-nu);&lt;br /&gt;
 &lt;br /&gt;
 // Lame parameters&lt;br /&gt;
 real mu = E/(2*(1+nu));&lt;br /&gt;
 real lambda = E*nu/((1+nu)*(1-2*nu));&lt;br /&gt;
 &lt;br /&gt;
 // Solve problem&lt;br /&gt;
 solve cantileverBeam(u,v) =&lt;br /&gt;
    int2d(Th)(lambda*div(u)*div(v) + 2*mu*(eM(u)'*eM(v)))&lt;br /&gt;
    - int1d(Th,4)([0,-ty]'*v)&lt;br /&gt;
    + on(2,ux=uxb,uy=uyb);&lt;br /&gt;
 &lt;br /&gt;
 // Plot solution&lt;br /&gt;
 real coef = 1000;&lt;br /&gt;
 Th = movemesh(Th,[x+ux*coef,y+uy*coef]);&lt;br /&gt;
 plot(Th,wait=1);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The numerical solutions for the beam displacement $u_y(x,0)$ along the axis of the beam, and the shear stress $\sigma_{xy}(L/2,y)$ in the cross section at half the beam length, obtained with quadratic finite elements, are shown in the figure below. Analytic solutions are shown with continuous lines.&lt;br /&gt;
&lt;br /&gt;
[[File:cantilever_beam.png|800px]]&lt;br /&gt;
&lt;br /&gt;
The displaced mesh coloured with the value of the displacement magnitude is presented in the following figure. The displacements are magnified by a factor of 10.&lt;br /&gt;
&lt;br /&gt;
[[File:cantilever_beam_field.png|800px]]&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Hertzian_contact&amp;diff=737</id>
		<title>Hertzian contact</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Hertzian_contact&amp;diff=737"/>
				<updated>2016-11-24T13:56:42Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Click on [[Solid Mechanics]] to go back.&lt;br /&gt;
&lt;br /&gt;
= Contact of Cylinders - the Hertz problem =&lt;br /&gt;
&lt;br /&gt;
Detailed discussions of this problem can be found in Hills and Nowell (1994) as well as Williams and Dwyer-Joyce (2001). &amp;lt;ref&amp;gt; Hills, D. A. and Nowell, D. (1994). ''Mechanics of Fretting Fatigue'', p. 20-25. Springer Science+Business Media, Dordrecht.&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;Williams, John A. and Dwyer-Joyce, Rob S. (2001). ''Contact Between Solid Surfaces'', p. 121 in '''Modern Tribology Handbook: Volume 1, Principles of Tribology''', editor: Bushan, Bharat. CRC Press LLC, Boca Raton.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If two circular cylinders with radii $R_1$ and $R_2$ are pressed together by a force per unit length of magnitude $P$ with their axes parallel, then the contact patch will be of half-width $b$ such that&lt;br /&gt;
\begin{equation}&lt;br /&gt;
b = 2\sqrt{\frac{PR}{\pi E^*}}&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $R$ and $E^*$ are the reduced radius of contact and the contact modulus defined by&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\frac{1}{R} = \frac{1}{R_1} + \frac{1}{R_2},&lt;br /&gt;
\end{equation}&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\frac{1}{E^*} = \frac{1-{\nu_1}^2}{E_1} + \frac{1-{\nu_2}^2}{E_2}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
The resulting pressure distribution $p(x)$ is semielliptical, i.e., of the form&lt;br /&gt;
\begin{equation}&lt;br /&gt;
p(x) = p_0 \sqrt{1-\frac{x^2}{b^2}}&lt;br /&gt;
\end{equation}&lt;br /&gt;
where the peak pressure&lt;br /&gt;
\begin{equation}&lt;br /&gt;
p_0 = \sqrt{\frac{PE^*}{\pi R}}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
The coordinate $x$ is measured perpendicular to that of the cylinder axes. For the case of nominal contact between cylinders closed form analytical solutions are available. &lt;br /&gt;
&lt;br /&gt;
The surface stresses are given in the following equations. At the contact interface&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{xx} = \sigma_{zz} = -p(x);&lt;br /&gt;
\end{equation}&lt;br /&gt;
outside the contact region all the stress components at the surface are zero. Along the line of symmetry the following equations hold&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{xx} = -\frac{p_0}{b}\left((b^2+2z^2)(b^2+z^2)^{-1/2} - 2z\right)&lt;br /&gt;
\end{equation}&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{zz} = -p_0 b(b^2+z^2)^{-1/2}&lt;br /&gt;
\end{equation}&lt;br /&gt;
These are the principal stresses so that the principal shear stress $\tau_1$ is given by&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\tau_1 = \frac{p_0}{b}\left(z - z^2(b^2+z^2)^{-1/2}\right)&lt;br /&gt;
\end{equation}&lt;br /&gt;
from which \[(\tau_1)_\mathrm{max} = 0.30p_0, \quad \text{at } z = 0.78b\]&lt;br /&gt;
&lt;br /&gt;
Note that these stresses are all independent of Poisson's ratio although, for plane strain, the third principal stress \[\sigma_{yy} = \nu(\sigma_{xx} + \sigma_{zz})\].&lt;br /&gt;
&lt;br /&gt;
The surface stresses and subsurface stresses along the axis of symmetry are shown in the following two graphs. The $x$ and $z$ coordinates are normalized with the contact width $b$.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_2016-11-16_16-13-26.png|300px]]             [[File:Screenshot_2016-11-16_16-12-32.png|300px]]&lt;br /&gt;
&lt;br /&gt;
At a general point $(x,z)$ the stresses may be expressed in terms of $m$ and $n$, defined by&lt;br /&gt;
\begin{equation}&lt;br /&gt;
m^2 = \frac{1}{2}\left[\left\{(b^2 - x^2 + z^2)^2 + 4x^2z^2\right\}^{1/2} + (b^2 - x^2 + z^2)\right]&lt;br /&gt;
\end{equation}&lt;br /&gt;
\begin{equation}&lt;br /&gt;
n^2 = \frac{1}{2}\left[\left\{(b^2 - x^2 + z^2)^2 + 4x^2z^2\right\}^{1/2} - (b^2 - x^2 + z^2)\right]&lt;br /&gt;
\end{equation}&lt;br /&gt;
where the signs of $m$ and $n$ are the same as the signs of $z$ and $x$, respectively.&lt;br /&gt;
Whereupon&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{xx} = -\frac{p_0}{b}\left[m\left(1 + \frac{z^2 + n^2}{m^2 + n^2}\right)-2z\right]&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{zz} = -\frac{p_0}{b}m\left(1 - \frac{z^2 + n^2}{m^2 + n^2}\right)&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\sigma_{xz} = -\frac{p_0}{b}n\left(\frac{m^2 - z^2}{m^2 + n^2}\right)&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
= Contact of cylinders under partial slip =&lt;br /&gt;
&lt;br /&gt;
The second case we study is the application of a tangential force $Q$ to the previous problem. When the tangential force is less than the limiting force of friction, i.e., \[|Q| &amp;lt; \mu P,\] where $\mu$ is the coefficient of friction, sliding motion will not occur, but the contact will be divided into slip and stick zones that are unknown ''a priori''. For the case of cylinders the analysis is given in Hills &amp;amp; Nowell (1994), p. 44.&lt;br /&gt;
&lt;br /&gt;
Besides the normal traction $p(x)$ we now have an additional shear traction given by&lt;br /&gt;
\begin{equation}&lt;br /&gt;
q(x) = \begin{cases}&lt;br /&gt;
-\mu p_0 \sqrt{1 - \frac{x^2}{b^2}}, \quad c \leq |x| \leq b \\&lt;br /&gt;
-\mu p_0 \left(\sqrt{1 - \frac{x^2}{b^2}} - \frac{c}{b}\sqrt{1 - \frac{x^2}{c^2}}\right), \quad |x| &amp;lt; c&lt;br /&gt;
\end{cases}&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $b$ is the half-width of the whole contact, and $c$ the half-width of the central sticking region. The width of the central zone, i.e. the value of dimension $c$ is dependent on the applied tangential force $Q$:&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\frac{c}{b} = \sqrt{1 - \frac{Q}{\mu P}}&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
The distributions $q(x)$ and $p(x)$ as well as the widths of the stick and slip zones can be seen in the image below.&lt;br /&gt;
[[File:Screenshot_2016-11-17_10-43-18.png|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== The effect of bulk stress ==&lt;br /&gt;
&lt;br /&gt;
Additionally, we might be interested in the effect of bulk stress. This type of stress occurs in fretting fatigue experiments like the one shown below.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_2016-11-17_10-55-59.png|500px]]&lt;br /&gt;
&lt;br /&gt;
The previous solution for contact of cylinders under partial slip can be adjusted for the presence of bulk stresses $\sigma_\mathrm{axial}$. These cause an eccentricity $e$ to the solution given above. The shear traction $q(x)$ can be written as:&lt;br /&gt;
\begin{equation}&lt;br /&gt;
q(x) = \begin{cases}&lt;br /&gt;
-\mu p_0 \sqrt{1 - \frac{x^2}{b^2}}, \quad c \leq | x - e | \text{ and } |x| \leq b \\&lt;br /&gt;
-\mu p_0 \left[\sqrt{1 - \frac{x^2}{b^2}} - \frac{c}{b}\sqrt{1 - \frac{(x-e)^2}{c^2}}\right], \quad |x-e| &amp;lt; c&lt;br /&gt;
\end{cases}&lt;br /&gt;
\end{equation}&lt;br /&gt;
where once again \[ \frac{c}{b} = \sqrt{1 - \frac{Q}{\mu P}}\] and&lt;br /&gt;
\begin{equation}&lt;br /&gt;
e = \frac{b \sigma_\mathrm{axial}}{4 \mu p_0}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
If larger values of $\sigma_\mathrm{axial}$ are applied, one edge of the stick zone approaches the edge of the contact ($e$ becomes larger). The solution for the shear traction is therefore only valid if $e + c \leq b$, i.e.&lt;br /&gt;
\[\frac{\sigma_\mathrm{axial}}{\mu p_0} \leq 4\left(1 - \sqrt{1 - \frac{Q}{\mu P}}\right).\]&lt;br /&gt;
&lt;br /&gt;
= FreeFem++ numerical solution =&lt;br /&gt;
&lt;br /&gt;
For the numerical solution in FreeFem++ we choose parameters similar to those in Pereira et al. (2016): &amp;lt;ref&amp;gt; Pereira et al. (2016). '''On the convergence of stresses in fretting fatigue'''. Materials, 9, 639.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Modulus of elasticity: $E = 72.1$ GPa&lt;br /&gt;
*Poisson's ratio: $\nu = 0.33$&lt;br /&gt;
*Normal load: $P = 543$ N&lt;br /&gt;
*Coefficient of friction: $\mu = 0.85$&lt;br /&gt;
*Cylinder radius: $R = 50$ mm&lt;br /&gt;
*Specimen length: $L = 40$ mm&lt;br /&gt;
*Specimen height: $H = 10$ mm&lt;br /&gt;
&lt;br /&gt;
We assume that both the pad and the specimen are made of the same material, therefore the combined modulus is \[E^* = \frac{E}{2(1-\nu^2)}.\] According to Pereira et al. (2016), the contact width is now defined with the specimen height in the denominator:&lt;br /&gt;
\[b = 2\sqrt{\frac{PR}{H\pi E^*}}\]&lt;br /&gt;
\[p_0 = \sqrt{\frac{PE^*}{H\pi R}}\]&lt;br /&gt;
&lt;br /&gt;
(this might not be correct, because $p_0$ should be equal to $2P/(\pi b)$? Does $R/H$ perhaps represent a dimensionless radius?)&lt;br /&gt;
&lt;br /&gt;
For simplicity, we only use traction boundaries at the top surface. On the left, right and bottom sides of the specimen, Dirichlet boundaries are used. The center of the coordinate system is placed at $(x,y) = (L/2,0)$. The axes therefore go from $-L/2$ to $L/2$ in the $x$-direction and from $0$ to $H$ in the $y$-direction.&lt;br /&gt;
&lt;br /&gt;
                 top&lt;br /&gt;
       ________________________&lt;br /&gt;
      |                        |&lt;br /&gt;
      |                        |&lt;br /&gt;
 left |                        | right&lt;br /&gt;
      |                        |&lt;br /&gt;
      |________________________|&lt;br /&gt;
                 &lt;br /&gt;
                bottom&lt;br /&gt;
&lt;br /&gt;
Boundary conditions:&lt;br /&gt;
\[ &lt;br /&gt;
t_z(x) = &lt;br /&gt;
\begin{cases}&lt;br /&gt;
-p_0\sqrt{1 - \frac{x^2}{b^2}}, \quad |x| \leq b \\&lt;br /&gt;
0, \quad \text{otherwise} &lt;br /&gt;
\end{cases} \quad \text{on } \Gamma_\mathrm{top}&lt;br /&gt;
\]&lt;br /&gt;
\[&lt;br /&gt;
(u_x,u_z) = (0,0) \quad \text{on } \Gamma_\mathrm{left},\Gamma_\mathrm{right},\Gamma_\mathrm{bottom}&lt;br /&gt;
\]&lt;br /&gt;
&lt;br /&gt;
The numerical solution is obtained for plane '''strain''' conditions using quadratic finite elements. The final mesh after adaptation had ~$40000$ dof. The images below show the raw values of the surface stresses, sub-surface stresses along the axis of symmetry and a close-up of the maximum shear stress contours close to the contact. The contact width $b \approx 2.9$ mm for the parameters described at the beginning of this section. &lt;br /&gt;
&lt;br /&gt;
[[File:surface_stress.png|600px]]&lt;br /&gt;
&lt;br /&gt;
[[File:subsurface_stress.png|500px]]&lt;br /&gt;
&lt;br /&gt;
[[File:mss_contours.png|900px]]&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Dynamic_Thermal_Rating_of_over_head_lines&amp;diff=736</id>
		<title>Dynamic Thermal Rating of over head lines</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Dynamic_Thermal_Rating_of_over_head_lines&amp;diff=736"/>
				<updated>2016-11-24T13:52:29Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In February 2014 a severe icing storm hit Slovenia and caused damage of around $8.5$ million € to the power transmission network alone. A feasibility study performed by the Jožef Stefan Institute (JSI), the Milan Vidmar Electric Power Research Institute (EIMV), the Slovenian Environment Agency (SEA), and ELES confirmed that Joule heating could prevent icing; based on this, JSI further developed an operative software package termed DTRi (Dynamic Thermal Rating - icing). The core of DTRi is a physical model for the simulation of heat transfer within the transmission power line under realistic weather conditions related to icing, i.e., ambient temperatures between $-5$ °C and $5$ °C with supercooled rain present. DTRi accounts for Joule heating, convection, solar heating, evaporation, radiation and impinging supercooled precipitation. Essentially, DTRi solves the heat transport equation (a second-order partial differential equation) with non-linear boundary conditions describing the different heat terms due to the weather conditions. The results obtained with DTRi have been compared against available published data as well as measurements provided by EIMV, with promising results. &lt;br /&gt;
DTRi can run in two modes, i.e. as standalone software or as an embedded system within the SUMO framework, a heterogeneous collection of subsystems from different vendors that was developed to increase the safety, security and capacity of the existing transmission network. Its core is the integration platform, SUMO BUS, an enterprise integration bus used for orchestrating the subsystems and facilitating data exchange between them. The communication between SUMO BUS and the subsystems is based on web services; more precisely, the subsystems communicate with the bus via SOAP/HTTP interfaces. This technology enables different subsystem vendors to quickly and efficiently connect their subsystems using standardized and open means of communication and data exchange. For example, SUMO allows different Dynamic Thermal Rating (DTR) vendors to be incorporated into the system, each serving a different part of the grid. Currently $17$ services are implemented, providing approximately $115$ methods to the clients (subsystems). The system's internal state is held in a relational SQL database. &lt;br /&gt;
&lt;br /&gt;
More details can be found in user manual of the DTRi package -- [[File:Manual.pdf]] &lt;br /&gt;
&amp;lt;table&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:dtr1&amp;quot;&amp;gt;&lt;br /&gt;
[[File:image_1avhribvbu521ot91qj3onlivf9.png|500px|alt=Scheme of the model|&amp;lt;caption&amp;gt;Scheme of the model&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:dtr1&amp;quot;&amp;gt;&lt;br /&gt;
[[File:image_1avhrtmm9lesvm6mft1kqvblh9.png|500px|alt=Scheme of the DTRi implementation|&amp;lt;caption&amp;gt;Scheme of the DTRi implementation&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comparison of DTRi prediction and experimental measurement performed on EIMV testing site. &lt;br /&gt;
&lt;br /&gt;
[[File:image_1b197ev2p1su8p6f12cf1qsho9kk.png|500px]] [[File:image_1b197fl1tk1l9fajt1ss81b6v.png|500px]]&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Meshless_Local_Strong_Form_Method_(MLSM)&amp;diff=735</id>
		<title>Meshless Local Strong Form Method (MLSM)</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Meshless_Local_Strong_Form_Method_(MLSM)&amp;diff=735"/>
				<updated>2016-11-24T13:49:31Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Meshless Local Strong Form Method (MLSM) is a generalization of methods known in the literature as the Diffuse Approximate Method (DAM), Local Radial Basis Function Collocation Methods (LRBFCM), the Generalized FDM, Collocated Discrete Least Squares (CDLS) meshless, etc. Although each of the named methods possesses some unique properties, the basic concept of all local strong form methods is similar, namely to approximate the treated fields with &lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:meshless1&amp;quot;&amp;gt;&lt;br /&gt;
[[File:image_1avhoje8u10tg1cip1atj1qo51j8b9.png|500px|thumb|upright=2|alt=The scheme of local meshless principle.|&amp;lt;caption&amp;gt;The scheme of local meshless principle. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
nodal trial functions over the local support domain. The nodal trial function is then used to evaluate various operators, e.g. differentiation, integration and, ultimately, approximation of the considered field at an arbitrary position. The MLSM can easily be understood as a meshless generalization of the FDM, albeit a much more powerful one. MLSM aims to avoid using pre-defined relations between nodes and to shift this task into the solution procedure. The final goal of such an approach is higher flexibility in complex domains.&lt;br /&gt;
&lt;br /&gt;
The elegance of MLSM lies in its simplicity and generality. The presented methodology can also be easily upgraded or altered, e.g. with nodal adaptation, basis augmentation, conditioning of the approximation, etc., to treat anomalies such as sharp discontinuities or other obscure situations which might occur in complex simulations. In the MLSM, the type of approximation, the size of the support domain, and the type and number of basis functions are general. For example, the minimal support size for a 2D transport problem (a system of second-order PDEs) is five; however, larger support domains can be used to stabilize computations on scattered nodes at the cost of computational complexity. Various types of basis functions might appear in the calculation of the trial function; the most commonly used are multiquadrics, Gaussians and monomials. Some authors also enrich the radial basis with monomials to improve the performance of the method. All these features can be controlled on the fly during the simulation. From the computational point of view, the localization of the method reduces inter-processor communication, which is often a bottleneck of parallel algorithms.&lt;br /&gt;
&lt;br /&gt;
The core of the spatial discretization is a local [[Moving Least Squares (MLS)]] approximation of a considered field over the overlapping local support domains, i.e. in each node we use approximation over a small local sub-set of neighbouring $n$ nodes. The trial function is thus introduced as&lt;br /&gt;
	\[\hat{u}(\mathbf{p})=\sum\limits_{i}^{m}{{{\alpha }_{i}}{{b}_{i}}(\mathbf{p})}=\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}\mathbf{\alpha }\] &lt;br /&gt;
with $m,\,\,\mathbf{\alpha }\text{,}\,\,\mathbf{b},\,\,\mathbf{p}\left( {{p}_{x}},{{p}_{y}} \right)$ standing for the number of basis functions, approximation coefficients, basis functions and the position vector, respectively.  &lt;br /&gt;
&lt;br /&gt;
The problem can be written in matrix form (refer to [[Moving Least Squares (MLS)]] for more details) as &lt;br /&gt;
	\[~\mathbf{\alpha }={{\left( {{\mathbf{W}}^{0.5}}\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\mathbf{u}\]	&lt;br /&gt;
where $(\mathbf{W}^{0.5}\mathbf{B})^{+}$ stands for the Moore–Penrose pseudoinverse. &lt;br /&gt;
&lt;br /&gt;
== Shape functions ==&lt;br /&gt;
By explicitly expressing the coefficients $\alpha$ in the trial function     &lt;br /&gt;
	\[~\hat{u}\left( \mathbf{p} \right)=\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{u}=\mathbf{\chi }\left( \mathbf{p} \right)\mathbf{u}\]	&lt;br /&gt;
is obtained, where $\mathbf{\chi}$ stands for the shape functions. Now we can apply a partial differential operator, which is our goal, to the trial function &lt;br /&gt;
	\[L~\hat{u}\left( \mathbf{p} \right)=L\mathbf{\chi }\left( \mathbf{p} \right)\mathbf{u}\]&lt;br /&gt;
where $L$ stands for general differential operator. &lt;br /&gt;
For example:&lt;br /&gt;
	\[{{\mathbf{\chi }}^{\partial x}}\left( \mathbf{p} \right)=\frac{\partial }{\partial x}~\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\]&lt;br /&gt;
	\[{{\mathbf{\chi }}^{\partial y}}\left( \mathbf{p} \right)=\frac{\partial }{\partial y}~\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\]&lt;br /&gt;
	\[{{\mathbf{\chi }}^{{{\nabla }^{2}}}}\left( \mathbf{p} \right)={{\nabla }^{2}}\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\]&lt;br /&gt;
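To make the construction concrete, here is a small numerical sketch in Python with NumPy (illustrative only; this is not the library's code, and all names in it are invented for this example). It builds the Laplacian shape functions $\chi^{\nabla^2}$ for a single evaluation point in 1D and applies them to nodal values of $u(x) = x^2$: &lt;br /&gt;

```python
import numpy as np

# Support nodes around the evaluation point p (1D for brevity).
nodes = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])
p = 0.0

# Monomial basis b(x) = [1, x, x^2] and its second derivative at p.
def b(x):
    return np.array([1.0, x, x * x])

d2b = np.array([0.0, 0.0, 2.0])  # d^2/dx^2 of [1, x, x^2]

# Gaussian weights centered at p.
sigma = 0.1
w = np.exp(-(nodes - p) ** 2 / (2 * sigma ** 2))

# B has one row b(x_i) per support node; W^{1/2} scales the rows.
B = np.array([b(x) for x in nodes])   # shape (n, m)
W_half = np.diag(np.sqrt(w))

# chi^{L}(p) = L b(p)^T (W^{1/2} B)^+ W^{1/2}; here L = d^2/dx^2.
pinv = np.linalg.pinv(W_half @ B)
chi_lap = d2b @ pinv @ W_half         # one weight per support node

# Apply the shape functions to nodal values of u(x) = x^2.
u = nodes ** 2
print(chi_lap @ u)
```

Since $x^2$ lies in the monomial basis, the weighted least-squares fit reproduces it exactly and the shape functions return $u'' = 2$ up to round-off.&lt;br /&gt;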
&lt;br /&gt;
The presented formulation is convenient for implementation, since most of the complex operations, i.e. finding support nodes and building shape functions, are performed only when the nodal topology changes. In the main simulation, the pre-computed shape functions are then convolved with the vector of field values in the support to evaluate the desired operator. The presented approach is even easier to handle than the FDM; however, despite its simplicity, it offers many possibilities for treating challenging cases, e.g. nodal adaptivity to address regions with sharp discontinuities, or $p$-adaptivity to treat obscure anomalies in the physical field. Furthermore, the stability versus &lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:implementation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:image_1avhh0f97le11d68kgk18gon9h9.png|900px|centre|alt=The implementation diagram.|&amp;lt;caption&amp;gt;The implementation diagram. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
computational complexity and accuracy trade-off can be regulated simply by changing the number of support nodes, etc. All these features can be controlled on the fly during the simulation by re-computing the shape functions with different setups. However, such a re-setup is expensive, since \[\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\] has to be re-evaluated, with an asymptotic complexity of $O\left( {{N}_{D}}n{{m}^{2}} \right)$, where ${{N}_{D}}$ stands for the total number of discretization nodes. In addition, the determination of support domain nodes also consumes some time; for example, if a kD-tree [20] data structure is used, the tree is first built in $O\left( {{N}_{D}}\log {{N}_{D}} \right)$, and collecting the $n$ supporting nodes then takes an additional $O\left( {{N}_{D}}\left( \log {{N}_{D}}+n \right) \right)$.&lt;br /&gt;
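To make the last step concrete, here is a minimal 2D tree sketch in Python (illustrative only, not the library's kD-tree; all names are invented). It builds the tree by splitting on alternating coordinate axes and collects the $n$ nearest nodes with a max-heap, pruning subtrees that cannot beat the current $n$-th best distance: &lt;br /&gt;

```python
import heapq

# Minimal 2-d tree: split on alternating axes, query n nearest nodes.
def build(points, depth=0):
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda pt: pt[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build(points[:mid], depth + 1),
        "right": build(points[mid + 1:], depth + 1),
    }

def n_nearest(tree, target, n):
    heap = []  # max-heap of (-dist^2, point), keeps the n best so far

    def visit(node):
        if node is None:
            return
        px, py = node["point"]
        d2 = (px - target[0]) ** 2 + (py - target[1]) ** 2
        if len(heap) < n:
            heapq.heappush(heap, (-d2, node["point"]))
        elif d2 < -heap[0][0]:
            heapq.heapreplace(heap, (-d2, node["point"]))
        axis = node["axis"]
        diff = target[axis] - node["point"][axis]
        near, far = ((node["left"], node["right"]) if diff < 0
                     else (node["right"], node["left"]))
        visit(near)
        # Only cross the splitting plane if the n-th best could be beaten.
        if len(heap) < n or diff * diff < -heap[0][0]:
            visit(far)

    visit(tree)
    return sorted(pt for _, pt in heap)

# Regular 10 x 10 grid of nodes on [0, 1)^2.
pts = [(x / 10.0, y / 10.0) for x in range(10) for y in range(10)]
tree = build(pts)
print(n_nearest(tree, (0.0, 0.0), 3))  # the 3 grid nodes nearest the origin
```

The tree is built once before the solution procedure; during the solution, each support query then visits only a small part of the tree instead of sorting all nodal distances.&lt;br /&gt;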
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Kosec G., A local numerical solution of a fluid-flow problem on an irregular domain. Advances in Engineering Software. 2016;7; [http://comms.ijs.si/~gkosec/data/papers/29512743.pdf manuscript]&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering Analysis with Boundary Elements. 2012;36:310-321; [http://comms.ijs.si/~gkosec/data/papers/EABE2499.pdf manuscript]&lt;br /&gt;
* Kosec G, Sarler B. Solution of thermo-fluid problems by collocation with local pressure correction. International Journal of Numerical Methods for Heat &amp;amp; Fluid Flow. 2008;18:868-82 [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerNS2008.pdf manuscript]&lt;br /&gt;
*  Trobec R., Kosec G., Parallel Scientific Computing, ISBN: 978-3-319-17072-5 (Print) 978-3-319-17073-2.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Testing&amp;diff=734</id>
		<title>Testing</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Testing&amp;diff=734"/>
				<updated>2016-11-24T13:43:54Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;figure id=&amp;quot;fig:tests&amp;quot;&amp;gt;&lt;br /&gt;
[[File:tests.png|500px|thumb|upright=2|alt=Output of a successful ./run_tests.sh script run.|&amp;lt;caption&amp;gt;Output of a successful &amp;lt;code&amp;gt;./run_tests.sh&amp;lt;/code&amp;gt; script run.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We have 4 different kinds of tests in this library:&lt;br /&gt;
* unit tests&lt;br /&gt;
* style checks&lt;br /&gt;
* docs check&lt;br /&gt;
* system configuration check&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;./run_tests.sh&amp;lt;/code&amp;gt; script controls all tests:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Usage: ./run_tests.sh&lt;br /&gt;
Options:&lt;br /&gt;
  -c   run only configuration test&lt;br /&gt;
  -t   run only unit tests&lt;br /&gt;
  -s   run only stylechecks&lt;br /&gt;
  -d   run only docs check&lt;br /&gt;
  -h   print this help&lt;br /&gt;
Example:&lt;br /&gt;
 ./run_tests.sh -sd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before pushing run &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt;./run_tests.sh&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
This script builds and executes all &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt;&amp;lt;util_name&amp;gt;_test.cpp&amp;lt;/syntaxhighlight&amp;gt; test files, and checks coding style and&lt;br /&gt;
documentation. If anything is wrong you will get a red error message, but if you see green, as in &amp;lt;xr id=&amp;quot;fig:tests&amp;quot;/&amp;gt;, you're good to go.&lt;br /&gt;
&lt;br /&gt;
=Unit tests=&lt;br /&gt;
&lt;br /&gt;
All library code is tested by means of unit tests. Unit tests provide verification, are good examples and prevent regressions.&lt;br /&gt;
'''For any newly added functionality, a unit test testing that functionality must be added.'''&lt;br /&gt;
&lt;br /&gt;
==Writing unit tests==&lt;br /&gt;
Every new functionality (e.g. an added class, function or method) should have a unit test. Unit tests&lt;br /&gt;
* assure that code compiles&lt;br /&gt;
* assure that code executes without crashes&lt;br /&gt;
* assure that  code produces expected results&lt;br /&gt;
* define observable behaviour of the method, class, ...&lt;br /&gt;
* prevent future modifications of this code to change this behaviour accidentally&lt;br /&gt;
&lt;br /&gt;
Unit tests should test observable behaviour, e.g. if a function gets 1 and 3 as input, the output should be 6.&lt;br /&gt;
They should test edge cases and the most common cases, as well as expected death cases.&lt;br /&gt;
&lt;br /&gt;
We are using [https://github.com/google/googletest Google Test framework] for our unit tests. See their [https://github.com/google/googletest/blob/master/googletest/docs/Primer.md introduction to unit testing] for more details. &lt;br /&gt;
&lt;br /&gt;
The basic structure is &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
TEST(Group, Name) {&lt;br /&gt;
    EXPECT_EQ(a, b);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each header file should be accompanied by a &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt;&amp;lt;util_name&amp;gt;_test.cpp&amp;lt;/syntaxhighlight&amp;gt; with unit tests.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;When writing unit tests, always write them thoroughly and slowly, take your time.&lt;br /&gt;
Never copy your own code's output to the test; rather, produce it by hand or with another trusted tool.&lt;br /&gt;
Even if it seems obvious the code is correct, remember that you are writing tests also for the future.&lt;br /&gt;
If tests have a bug, it is much harder to debug!&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See our examples in [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/examples.html technical documentation].&lt;br /&gt;
&lt;br /&gt;
==Running unit tests==&lt;br /&gt;
&lt;br /&gt;
Tests can be run all at once via &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt;make run_all_tests&amp;lt;/syntaxhighlight&amp;gt; or individually via e.g. &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt;make basisfunc_run_tests&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Compiled binary supports running only specified test. Use &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; ./all_tests --gtest_filter=Domain*&amp;lt;/syntaxhighlight&amp;gt; for filtering and &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; ./all_tests --help &amp;lt;/syntaxhighlight&amp;gt; for more options.&lt;br /&gt;
&lt;br /&gt;
==Fixing bugs==&lt;br /&gt;
When you find a bug in the normal code, fix it and write a test for it. The test should fail before the fix, and pass after it.&lt;br /&gt;
&lt;br /&gt;
= Style check =&lt;br /&gt;
Before committing, the linter &amp;lt;code&amp;gt;cpplint.py&amp;lt;/code&amp;gt; is run on all source and test files to make sure that the code follows the [[coding style|style guide]].&lt;br /&gt;
The linter is not perfect, so if any errors are unjustified, feel free to comment the appropriate lines in the linter out and commit the change.&lt;br /&gt;
&lt;br /&gt;
= Docs check =&lt;br /&gt;
Every function, class or method should also have documentation, as enforced by Doxygen, in the header where it is defined.&lt;br /&gt;
In the comment block, all parameters and the return value should be meaningfully described. It can also contain a short example.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * Computes the force that point a inflicts on point b.&lt;br /&gt;
 *&lt;br /&gt;
 * @param a The index of the first point.&lt;br /&gt;
 * @param b The index of the second point.&lt;br /&gt;
 * @return The size of the force vector from a to b.&lt;br /&gt;
 */&lt;br /&gt;
double f(int a, int b);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any longer discussions about the method used or long examples belong to this wiki, but can be linked from the docs.&lt;br /&gt;
&lt;br /&gt;
= Configuration testing =&lt;br /&gt;
&lt;br /&gt;
The script [https://gitlab.com/e62Lab/e62numcodes/blob/master/scripts/configure.sh scripts/configure.sh] checks that&lt;br /&gt;
the computer has the appropriate packages and that very basic code examples (hdf5, sfml) compile.&lt;br /&gt;
It currently supports Arch-like and Ubuntu-like distros.&lt;br /&gt;
&lt;br /&gt;
If you find yourself modifying [https://gitlab.com/e62Lab/e62numcodes/blob/master/.gitlab-ci.yml .gitlab-ci.yml], then you should probably&lt;br /&gt;
update this check as well, along with some documentation on the [[how to build]] page, but this should not happen very often.&lt;br /&gt;
&lt;br /&gt;
= Continuous build =&lt;br /&gt;
We use GitLab Runner, an open source project that is used to run your jobs and send the results back to GitLab. It is used in conjunction with GitLab CI, the open-source continuous integration service included with GitLab that coordinates the jobs.&lt;br /&gt;
&lt;br /&gt;
[https://docs.gitlab.com/runner/#features Runner features]&lt;br /&gt;
&lt;br /&gt;
To configure the runner that performs MM tests on each commit, edit &amp;lt;strong&amp;gt;.gitlab-ci.yml&amp;lt;/strong&amp;gt; in the main (e62numcodes) repo.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Wiki_editing_guide&amp;diff=733</id>
		<title>Wiki editing guide</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Wiki_editing_guide&amp;diff=733"/>
				<updated>2016-11-24T13:41:03Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== MathJax ==&lt;br /&gt;
&amp;lt;nomathjax&amp;gt;Use either $ $ or \( \) for inline math and $$ $$ or \[ \] for display style math. You can also use environments, such as align, align*, equation.&lt;br /&gt;
Equations within numbered environments may be labeled with \label and referenced with \ref or, better, \eqref.&lt;br /&gt;
All numbers in text should be in $ $, e.g. $100$ m, $35$ kg.&amp;lt;/nomathjax&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== New commands ===&lt;br /&gt;
New $\LaTeX$ commands for the current document can be defined using \newcommand.&lt;br /&gt;
Globally, commands can be added as macros in the &amp;lt;code&amp;gt;wiki/MathJax/config/default.js&amp;lt;/code&amp;gt; file around line 544.&lt;br /&gt;
&lt;br /&gt;
Defined commands:&lt;br /&gt;
* \N, \Z, \Q, \R, \C for basic sets $\N, \Z, \Q, \R, \C$&lt;br /&gt;
* \T for matrix transpose $A^\T$.&lt;br /&gt;
* \b{x} for bold symbols (including Greek letters, e.g. $\b{\alpha}$).&lt;br /&gt;
&lt;br /&gt;
== Static pages ==&lt;br /&gt;
We have a script that [https://gitlab.com/e62Lab/e62numcodes/blob/master/scripts/backup_wiki_static.sh copies wiki as static pages]. It is located in &amp;lt;code&amp;gt;scripts/&amp;lt;/code&amp;gt; folder in our repo.&lt;br /&gt;
It can be run directly or by going into your build folder and running &amp;lt;code&amp;gt;make static_wiki&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Adding figures==&lt;br /&gt;
There is a drag&amp;amp;drop option to upload an image. To insert it into the article (so that you can refer to it later), use the following example&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;html&amp;quot; line&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:my_figure_label&amp;quot;&amp;gt;&lt;br /&gt;
[[File:name_of_my_figure.png|thumb|upright=2|alt=An alternative text, that appears if the figure cannot be shown|&amp;lt;caption&amp;gt;The caption under the figure&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To make a reference to the image, use the code&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;html&amp;quot; line&amp;gt;&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:my_figure_label&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Wiki backup guide ===&lt;br /&gt;
This guide covers two ways to back up this wiki: either as html files, only for viewing, or by creating a complete backup of the underlying system.&lt;br /&gt;
&lt;br /&gt;
== Backup as static files ==&lt;br /&gt;
This scrapes our wiki for html pages and saves them to ninestein &amp;lt;strong&amp;gt;home/mmachine/mm_wiki_backup/&amp;lt;/strong&amp;gt;. The [https://gitlab.com/e62Lab/e62numcodes/blob/master/scripts/backup_wiki_static.sh script] is in our scripts directory.&lt;br /&gt;
It can be run with&lt;br /&gt;
  make static_wiki&lt;br /&gt;
from the usual &amp;lt;code&amp;gt;build/&amp;lt;/code&amp;gt; folder.&lt;br /&gt;
The script runs for a few minutes and backs up the relevant part of the wiki.&lt;br /&gt;
&lt;br /&gt;
== Full mediawiki backup ==&lt;br /&gt;
The script for this is located in &lt;br /&gt;
 /var/www/html/ParallelAndDistributedSystems/MeshlessMachine/full_wiki_backup.sh&lt;br /&gt;
and runs weekly as a cron job. &lt;br /&gt;
To edit the cron job run&lt;br /&gt;
  crontab -e&lt;br /&gt;
while logged in as the appropriate user. The backup is copied to ninestein. Last 5 backups are stored, and this number can be changed in the script.&lt;br /&gt;
To backup manually, run the script and download the backup from ninestein.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Analysis_of_MLSM_performance&amp;diff=732</id>
		<title>Analysis of MLSM performance</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Analysis_of_MLSM_performance&amp;diff=732"/>
				<updated>2016-11-24T13:38:57Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Solving Diffusion equation=&lt;br /&gt;
For starters, we can solve the simple diffusion equation&lt;br /&gt;
$ \nabla^2 u = \frac{\partial u}{\partial t} $.&lt;br /&gt;
&lt;br /&gt;
We solved the equation on a square $\Omega = [0, a] \times [0, a]$ with&lt;br /&gt;
Dirichlet boundary conditions $ \left. u\right|_{\partial \Omega} = 0 $ and&lt;br /&gt;
initial state $ u(t = 0) = 1$.&lt;br /&gt;
&lt;br /&gt;
An analytical solution for this domain is known, and we use it to evaluate our&lt;br /&gt;
own solution. &lt;br /&gt;
\begin{equation} &lt;br /&gt;
u(\vec{p}, t) = \sum_{\substack{n=1 \\ n \text{&lt;br /&gt;
odd}}}^\infty\sum_{\substack{m=1 \\ m \text{ odd}}}^\infty \frac{1}{\pi^2}&lt;br /&gt;
\frac{16 a^2}{nm} \sin\left(\frac{\pi n}{a}p_x\right) \sin\left(\frac{\pi&lt;br /&gt;
m}{a}p_y\right) e^{-\frac{\pi^2 (n^2+m^2)}{a^2}t} &lt;br /&gt;
\end{equation}&lt;br /&gt;
Because the solution is given in series form, we compare only to a finite approximation, summing&lt;br /&gt;
to $N = 100$ instead of infinity. The solution is shown in &amp;lt;xr id=&amp;quot;fig:square_heat&amp;quot;/&amp;gt;.&lt;br /&gt;
See the code for solving diffusion [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion.cpp here].&lt;br /&gt;
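The truncated series is straightforward to evaluate; the following Python sketch (illustrative only; the function name is invented for this example) sums the odd terms up to $N = 100$ as described above: &lt;br /&gt;

```python
import math

# Truncated analytic series for the diffusion solution on the unit
# square (a = 1), summing odd n, m up to N = 100 as in the text.
def u_series(px, py, t, a=1.0, N=100):
    total = 0.0
    for n in range(1, N + 1, 2):
        for m in range(1, N + 1, 2):
            total += (16.0 * a * a / (math.pi ** 2 * n * m)
                      * math.sin(math.pi * n * px / a)
                      * math.sin(math.pi * m * py / a)
                      * math.exp(-math.pi ** 2 * (n * n + m * m) * t / (a * a)))
    return total

# At t = 0 the series reconstructs the initial state u = 1 away from
# the boundary; the field then decays towards 0 as t grows.
print(u_series(0.5, 0.5, 0.0))
print(u_series(0.5, 0.5, 0.1))
```

At the domain centre the truncated series reconstructs the initial state $u = 1$ at $t = 0$ to within about a percent, and decays as $t$ grows.&lt;br /&gt;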
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:square_heat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:square_heat.png|thumb|upright=2|alt=A square of nodes coloured according to the solution(with smaller and larger node density)|&amp;lt;caption&amp;gt;A picture of our solution (with smaller and larger node density)&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Analysis of our method==&lt;br /&gt;
&lt;br /&gt;
===Convergence with respect to number of nodes===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:node_convergence.png|thumb|upright=2|alt=Graph of errors with respect to number of nodes|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method with a fixed time step of $ \Delta t = 1\cdot 10^{-5}$&lt;br /&gt;
on a unit square ($a = 1$). The results are shown in &amp;lt;xr id=&amp;quot;fig:node_convergence&amp;quot;/&amp;gt;. A monomial basis of $6$ monomials was used, and the $12$&lt;br /&gt;
closest nodes counted as support for each node. At more than $250$ discretization nodes&lt;br /&gt;
in each dimension the method diverges, which is expected. The&lt;br /&gt;
stability criterion for diffusion equation in two dimensions is $\Delta t \leq&lt;br /&gt;
\frac{1}{4} \Delta x^2$, where $\Delta x$ is the spatial discretization&lt;br /&gt;
step in one dimension. In our case, at $250$ nodes per side, the right hand side&lt;br /&gt;
yields $\frac{1}{4}\cdot\frac{1}{250}\cdot\frac{1}{250} = 4\times 10^{-6}$,&lt;br /&gt;
so our method is stable within the expected region.&lt;br /&gt;
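The stability threshold can be illustrated with a short Python sketch (a stand-in only: it uses a five-point finite-difference Laplacian instead of the MLSM shape functions, and all names are invented for this example). Running just below the criterion keeps the solution bounded and decaying; running above it makes the solution blow up: &lt;br /&gt;

```python
import numpy as np

# Explicit Euler for u_t = lap(u) on the unit square, Dirichlet u = 0
# on the boundary, initial state u = 1 inside.  A five-point
# finite-difference Laplacian stands in for the MLS shape functions,
# purely to illustrate the dt <= dx^2/4 stability criterion.
def evolve_max(dt, dx, n_nodes, steps):
    u = np.ones((n_nodes, n_nodes))
    u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0
    for _ in range(steps):
        lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
               - 4.0 * u[1:-1, 1:-1]) / dx ** 2
        u[1:-1, 1:-1] += dt * lap
    return float(np.abs(u).max())

n = 26
dx = 1.0 / (n - 1)
stable_dt = 0.25 * dx ** 2                       # the criterion's threshold
print(evolve_max(0.9 * stable_dt, dx, n, 500))   # bounded, decaying
print(evolve_max(1.5 * stable_dt, dx, n, 500))   # grows without bound
```

With the time step just below the threshold the discrete maximum principle holds and the field decays towards the steady state $u = 0$; at $1.5$ times the threshold the highest spatial modes are amplified every step.&lt;br /&gt;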
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence_5&amp;quot;&amp;gt;&lt;br /&gt;
[[File:node_convergence_5.png|thumb|upright=2|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes for Gaussian and Monomial basis&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:node_convergence_5&amp;quot;/&amp;gt; shows another convergence plot, this time using the monomial basis $\{1, x, y, x^2, y^2\}$ and a Gaussian basis, with time step $\Delta t = 5 \cdot 10^{-6}$ and $5$ support nodes. The total node count was $N = 2500$.&lt;br /&gt;
The error was calculated after $0.01$ time units had elapsed.&lt;br /&gt;
&lt;br /&gt;
===Convergence with respect to number of time steps===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:timestep_convergence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:timestep_convergence.png|thumb|upright=2|&amp;lt;caption&amp;gt;Convergence with respect to different timesteps&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method on a fixed node count with different time steps on the same&lt;br /&gt;
domain as above. The results are shown in &amp;lt;xr id=&amp;quot;fig:timestep_convergence&amp;quot;/&amp;gt;. For large time steps the method diverges, but once it starts&lt;br /&gt;
converging the precision increases steadily as the time step decreases, until it&lt;br /&gt;
hits its lower limit. This behaviour is expected.  The error was calculated&lt;br /&gt;
against the analytical solution above after $0.005$ units of time have passed.  A&lt;br /&gt;
monomial basis up to order $2$ inclusive ($m = 6$) was used and the support&lt;br /&gt;
size was $n = 12$.&lt;br /&gt;
&lt;br /&gt;
===Using Gaussian basis===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:gauss_sigma_dependence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:gauss_sigma_dependence.png|thumb|upright=2|&amp;lt;caption&amp;gt; Graph of error with respect to Gaussian basis $\sigma$ &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method on a fixed node count of $N = 2500$ with spatial step&lt;br /&gt;
discretization of $\Delta x = \frac{1}{50}$. We used the Gaussian basis of&lt;br /&gt;
$m = 5$ functions with support size $n = 13$.  Error was calculated&lt;br /&gt;
against analytical solution above after $0.01$ time units.  A time step of&lt;br /&gt;
$\Delta t = 10^{-5}$ was used. The results are shown in &amp;lt;xr id=&amp;quot;fig:gauss_sigma_dependence&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As we can see from the graph, there exists an interval where the choice of&lt;br /&gt;
$\sigma$ does not matter much, but outside of this interval, the method diverges&lt;br /&gt;
rapidly. Care must be taken on the user's side to choose $\sigma$&lt;br /&gt;
appropriately, with respect to the plot above.&lt;br /&gt;
&lt;br /&gt;
===Solving in 3D===&lt;br /&gt;
&lt;br /&gt;
A 3-dimensional case on domain $[0, 1]^3$ was tested on $N = 20^3$&lt;br /&gt;
nodes, making the discretization step $\Delta x = 0.05$.  Support size of&lt;br /&gt;
$n=10$ with $m=10$ Gaussian basis functions was used. Their&lt;br /&gt;
normalization parameter was $\sigma = 60\Delta x$. A time step of $\Delta t = 10^{-5}$ and an explicit Euler method were used to calculate the solution up to $0.01$ time units. The resulting function is shown in &amp;lt;xr id=&amp;quot;fig:diffusion3d&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusion3d&amp;quot;&amp;gt;&lt;br /&gt;
[[File:diffusion3d.png|thumb|upright =2|center|&amp;lt;caption&amp;gt;A $ 3 $-dimensional solution with an explicit Euler method&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Solving Dirichlet with [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classEngineMLS.html EngineMLS]===&lt;br /&gt;
Example code using explicit stepping to reproduce the thermal images from the beginning: [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion.cpp example code]&lt;br /&gt;
&lt;br /&gt;
===Solving Dirichlet with MLSM operators===&lt;br /&gt;
Example code using explicit stepping and MLSM operators to reproduce the thermal images from the beginning: [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion_mlsm_operators.cpp example code]&lt;br /&gt;
&lt;br /&gt;
===Solving mixed boundary conditions with MLSM operators===&lt;br /&gt;
By using MLSM operators and their `[http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classmm_1_1MLSM.html MLSM::neumann]` method, we can also solve the&lt;br /&gt;
diffusion equation with mixed boundary conditions. The solution is shown in &amp;lt;xr id=&amp;quot;fig:quarter_diffusion&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:quarter_diffusion&amp;quot;&amp;gt;&lt;br /&gt;
[[File:quarter_diffusion.png|thumb|upright=2|&amp;lt;caption&amp;gt;Example of solving using Neumann boundary conditions &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example code showing the use of Neumann boundary conditions: [https://gitlab.com/e62Lab/e62numcodes/blob/master/test/mlsm_operators_test.cpp example code]&lt;br /&gt;
&lt;br /&gt;
We also support more interesting domains :) In &amp;lt;xr id=&amp;quot;fig:mikimouse_heat&amp;quot;/&amp;gt; we see a domain in the shape of Miki mouse.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:mikimouse_heat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:mikimouse_heat.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Miki mouse domain &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Convergence analysis for one dimensional diffusion equation==&lt;br /&gt;
Next, we solved the diffusion equation in one dimension&lt;br /&gt;
$u_t = u_{xx}$ on the interval $[0,1]$&lt;br /&gt;
with Dirichlet boundary conditions&lt;br /&gt;
$u(0, t) = u(1,t) = 0 $&lt;br /&gt;
and an initial state&lt;br /&gt;
$u(x,0) = \sin(\pi x).$&lt;br /&gt;
&lt;br /&gt;
An analytical solution for this domain is given in closed form &lt;br /&gt;
\begin{equation}&lt;br /&gt;
u(x,t) = \sin(\pi x) \exp(-\pi^2 t),&lt;br /&gt;
\end{equation}&lt;br /&gt;
so we can use it to evaluate our approximation.&lt;br /&gt;
&lt;br /&gt;
We tested the method with a fixed time step of $\Delta t = 1\cdot 10^{-8}$ on a unit interval, that was discretized into $ N $ nodes. Monomial basis of $m$ monomials was used and $n$ closest nodes counted as support for each node. &lt;br /&gt;
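As a sketch of this setup in Python (illustrative only: a standard second-order finite difference stands in for the MLS operator, and a larger time step than the $\Delta t = 10^{-8}$ from the text is used to keep the example fast): &lt;br /&gt;

```python
import numpy as np

# Explicit Euler for u_t = u_xx on [0, 1], u(0) = u(1) = 0,
# u(x, 0) = sin(pi x).  A second-order central difference stands in
# for the MLS operator; the error is measured against the closed-form
# solution u(x, t) = sin(pi x) exp(-pi^2 t) from the text.
N, dt, t_end = 50, 1e-5, 0.001
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
u = np.sin(np.pi * x)
for _ in range(int(round(t_end / dt))):
    u[1:-1] += dt * (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx ** 2

exact = np.sin(np.pi * x) * np.exp(-np.pi ** 2 * t_end)
err = np.abs(u - exact).max()
print(err)
```

The maximum error against the analytical solution stays well below $10^{-3}$ for this resolution.&lt;br /&gt;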
&lt;br /&gt;
&amp;lt;strong&amp;gt; Note: &amp;lt;/strong&amp;gt; the results are &amp;lt;strong&amp;gt;not confirmed &amp;lt;/strong&amp;gt; and serve merely as orientation. Errors may have occurred. &lt;br /&gt;
&lt;br /&gt;
===Convergence with respect to number of nodes and support size===&lt;br /&gt;
&lt;br /&gt;
Convergence of the method was tested for different numbers of nodes, $N = 15, 20, 25, \ldots, 200$. First we tested the method when the support size equals the number of monomials in the basis ($n = m$), which we call local interpolation, for different values of $n = 3, 5, 7$. We then compared that to local approximation, where we take $n = 12$ supporting nodes and change only the number of monomials. We used a Gaussian weight function with $\sigma = 1\cdot \Delta x$. The error was calculated after $0.001$ time units had elapsed. The results are shown in &amp;lt;xr id=&amp;quot;fig:diffusionConvergence1d&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusionConvergence1d&amp;quot;&amp;gt;&lt;br /&gt;
[[File:monomials_convergnece_dt_e-8_t_0-001_approxSupp_12.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Convergence analysis for $ 1 $-dimensional diffusion equation using monomial basis.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The larger the support size, the faster the convergence. This holds for local interpolation as well as for approximation. However, when the number of nodes is too large, the method diverges sooner for larger support sizes. The error for approximation is larger than for interpolation (when compared at the same basis size), but with interpolation the method starts diverging at a smaller number of nodes.&lt;br /&gt;
&lt;br /&gt;
===Convergence with respect to weight function===&lt;br /&gt;
&lt;br /&gt;
We also checked the convergence for different weight functions. We tested it for $N = 15, 20, 25, \ldots, 200$ nodes, with a monomial basis of $m = 5$, support size $n = 12$, and a Gaussian weight function with $\sigma = w\cdot \Delta x$ for $w = 0.1$. The error was calculated after $0.001$ time units had elapsed with $\Delta t = 10^{-8}$. The results are shown in &amp;lt;xr id=&amp;quot;fig:diffusionConvergence1dWeights&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusionConvergence1dWeights&amp;quot;&amp;gt;&lt;br /&gt;
[[File:monomials_convergnece_dt_e-8_t_0-001_approxSupp_12_weights.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Convergence analysis for $ 1 $-dimensional diffusion equation using monomial basis with respect to weight function. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When $\sigma$ is too small, the other nodes in the support domain are effectively not taken into consideration, so taking $n = 12$ or $n = 3$ gives the same results. However, when $\sigma$ is too large, all the nodes in the support domain contribute comparably and there is no significant difference between different weight functions.&lt;br /&gt;
&lt;br /&gt;
== Solving Electrostatics ==&lt;br /&gt;
&lt;br /&gt;
This example is taken from the [http://www.freefem.org/ff++/ftp/freefem++doc.pdf FreeFem++ manual (page 235)].&lt;br /&gt;
&lt;br /&gt;
Assuming there is no current and the charge distribution is time independent, the electric field $\b{E}$ satisfies&lt;br /&gt;
\begin{equation}\label{eq:electrostatics}&lt;br /&gt;
\b{\nabla}\cdot\b{E} = \frac{\rho}{\epsilon}, \quad \b{\nabla} \times \b{E} = 0&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $\rho$ is the charge density and $\epsilon$ is the permittivity. If we introduce an electrostatic potential $\phi$ such that&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\b{E} = -\b{\nabla}\phi,&lt;br /&gt;
\end{equation}&lt;br /&gt;
we can insert it into the first equation in (\ref{eq:electrostatics}) resulting in Poisson's equation&lt;br /&gt;
\begin{equation}\label{eq:poisson}&lt;br /&gt;
\b{\nabla}^2 \phi = -\frac{\rho}{\epsilon}.&lt;br /&gt;
\end{equation}&lt;br /&gt;
In the absence of unpaired electric charge equation (\ref{eq:poisson}) becomes Laplace's equation&lt;br /&gt;
\begin{equation}&lt;br /&gt;
\b{\nabla}^2 \phi = 0.&lt;br /&gt;
\end{equation}&lt;br /&gt;
&lt;br /&gt;
We now solve this equation for a circular enclosure with two rectangular holes. The boundary of the circular enclosure is held at constant potential $0$ V. The two rectangular holes are held at constant potentials $+1$ V and $-1$ V, respectively. A coloured scatter plot is available in figure &amp;lt;xr id=&amp;quot;fig:electro_statics&amp;quot;/&amp;gt;.&lt;br /&gt;
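A much-simplified sketch of such a computation in Python (illustrative only: Jacobi iteration on a square grid with two interior columns as "plates" stands in for the implicit MLSM solve on the circular geometry): &lt;br /&gt;

```python
import numpy as np

# Jacobi iteration for lap(phi) = 0 on a square grid: a simplified
# stand-in for the implicit MLSM solve of the enclosure problem.
# The outer boundary is held at 0 V and two interior columns of
# nodes ("plates") at +1 V and -1 V.
n = 41
phi = np.zeros((n, n))
plus = (slice(10, 31), 13)    # column of nodes held at +1 V
minus = (slice(10, 31), 27)   # column of nodes held at -1 V

for _ in range(2000):
    phi[plus], phi[minus] = 1.0, -1.0
    # Jacobi sweep: each interior node becomes the average of its
    # four neighbours (the right-hand side is built from the old
    # array before the assignment happens).
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                              + phi[1:-1, :-2] + phi[1:-1, 2:])
phi[plus], phi[minus] = 1.0, -1.0

# The potential interpolates between the plates and the grounded
# outer boundary; by antisymmetry it vanishes on the mid-column.
print(phi[20, 13], phi[20, 27], abs(phi[20, 20]))
```

By antisymmetry the potential vanishes on the column midway between the plates, mirroring the zero level set between the $+1$ V and $-1$ V electrodes in the figure.&lt;br /&gt;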
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:electro_statics&amp;quot;&amp;gt;&lt;br /&gt;
[[File:electro_statics.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Simple electrostatics problem solved by implicit method with 15 node monomial supports and gaussian weight functions. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Analysis_of_MLSM_performance&amp;diff=663</id>
		<title>Analysis of MLSM performance</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Analysis_of_MLSM_performance&amp;diff=663"/>
				<updated>2016-11-15T11:19:45Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Solving Diffusion equation=&lt;br /&gt;
For starters, we can solve the simple diffusion equation&lt;br /&gt;
$ \nabla^2 u = \frac{\partial u}{\partial t} $.&lt;br /&gt;
&lt;br /&gt;
We solved the equation on a square $\Omega = [0, a] \times [0, a]$ with&lt;br /&gt;
Dirichlet boundary conditions $ \left. u\right|_{\partial \Omega} = 0 $ and&lt;br /&gt;
initial state $ u(t = 0) = 1$.&lt;br /&gt;
&lt;br /&gt;
An analytical solution for this domain is known, and we use it to evaluate our&lt;br /&gt;
own solution. &lt;br /&gt;
\begin{equation} &lt;br /&gt;
u(\vec{p}, t) = \sum_{\substack{n=1 \\ n \text{&lt;br /&gt;
odd}}}^\infty\sum_{\substack{m=1 \\ m \text{ odd}}}^\infty \frac{1}{\pi^2}&lt;br /&gt;
\frac{16 a^2}{nm} \sin\left(\frac{\pi n}{a}p_x\right) \sin\left(\frac{\pi&lt;br /&gt;
m}{a}p_y\right) e^{-\frac{\pi^2 (n^2+m^2)}{a^2}t} &lt;br /&gt;
\end{equation}&lt;br /&gt;
Because the solution is&lt;br /&gt;
given in the series form, we only compare to the finite approximation, summing&lt;br /&gt;
to $N = 100$ instead of infinity. Solution is on &amp;lt;xr id=&amp;quot;fig:square_heat&amp;quot;/&amp;gt;.&lt;br /&gt;
See the code for solving diffusion [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion.cpp here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:square_heat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:square_heat.png|thumb|upright=2|alt=A square of nodes coloured according to the solution(with smaller and larger node density)|&amp;lt;caption&amp;gt;A picture of our solution (with smaller and larger node density)&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Analysis of our method==&lt;br /&gt;
&lt;br /&gt;
===Convergence with respect to number of nodes===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:node_convergence.png|thumb|upright=2|alt=Graph of errors with respect to number of nodes|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method with a fixed time step of $ \Delta t = 1\cdot 10^{-5}$&lt;br /&gt;
on a unit square ($a = 1$). Results are on &amp;lt;xr id=&amp;quot;fig:node_convergence&amp;quot;/&amp;gt;. A monomial basis of $6$ monomials was used and the $12$&lt;br /&gt;
closest nodes were counted as support for each node. After more than $250$ nodes of&lt;br /&gt;
discretization in each dimension the method diverges, which is expected. The&lt;br /&gt;
stability criterion for diffusion equation in two dimensions is $\Delta t \leq&lt;br /&gt;
\frac{1}{4} \Delta x^2$, where $\Delta x$ is the spatial discretization&lt;br /&gt;
step in one dimension. In our case, at $250$ nodes per side, the right hand side&lt;br /&gt;
yields $\frac{1}{4}\cdot\frac{1}{250}\cdot\frac{1}{250} = 4\times 10^{-6}$,&lt;br /&gt;
so our method is stable within the expected region.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:node_convergence_5&amp;quot;&amp;gt;&lt;br /&gt;
[[File:node_convergence_5.png|thumb|upright=2|&amp;lt;caption&amp;gt;Convergence with respect to number of nodes for Gaussian and Monomial basis&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another convergence plot is shown on &amp;lt;xr id=&amp;quot;fig:node_convergence_5&amp;quot;/&amp;gt;, this time using the monomial basis $\{1, x,&lt;br /&gt;
y, x^2, y^2\}$ and Gaussian basis with discretization step $ \Delta t = 5&lt;br /&gt;
\cdot 10^{-6}$ and 5 support nodes. Total node count was $N = 2500$.&lt;br /&gt;
Error was calculated after $0.01$ time units have elapsed.&lt;br /&gt;
&lt;br /&gt;
===Convergence with respect to number of time steps===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:timestep_convergence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:timestep_convergence.png|thumb|upright=2|&amp;lt;caption&amp;gt;Convergence with respect to different timesteps&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method on a fixed node count with different time steps on the same&lt;br /&gt;
domain as above. Results are on &amp;lt;xr id=&amp;quot;fig:timestep_convergence&amp;quot;/&amp;gt;. For large time steps the method diverges, but once it starts&lt;br /&gt;
converging the precision increases steadily as the time step decreases, until it&lt;br /&gt;
hits its lower limit. This behaviour is expected.  The error was calculated&lt;br /&gt;
against the analytical solution above after $0.005$ units of time have passed.  A&lt;br /&gt;
monomial basis up to order $2$ inclusive ($m = 6$) was used and the support&lt;br /&gt;
size was $n = 12$.&lt;br /&gt;
&lt;br /&gt;
===Using Gaussian basis===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:gauss_sigma_dependence&amp;quot;&amp;gt;&lt;br /&gt;
[[File:gauss_sigma_dependence.png|thumb|upright=2|&amp;lt;caption&amp;gt; Graph of error with respect to Gaussian basis $\sigma$ &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We tested the method on a fixed node count of $N = 2500$ with spatial step&lt;br /&gt;
discretization of $\Delta x = \frac{1}{50}$. We used the Gaussian basis of&lt;br /&gt;
$m = 5$ functions with support size $n = 13$.  Error was calculated&lt;br /&gt;
against analytical solution above after $0.01$ time units.  A time step of&lt;br /&gt;
$\Delta t = 10^{-5}$ was used. Results are on &amp;lt;xr id=&amp;quot;fig:gauss_sigma_dependence&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As we can see from the graph, there exists an interval where the choice of&lt;br /&gt;
$\sigma$ does not matter much, but outside of this interval the method diverges&lt;br /&gt;
rapidly. The user must therefore take care to choose $\sigma$&lt;br /&gt;
appropriately, guided by the plot above.&lt;br /&gt;
&lt;br /&gt;
===Solving in 3D===&lt;br /&gt;
&lt;br /&gt;
A 3-dimensional case on domain $[0, 1]^3$ was tested on $N = 20^3$&lt;br /&gt;
nodes, making the discretization step $\Delta x = 0.05$.  Support size of&lt;br /&gt;
$n=10$ with $m=10$ Gaussian basis functions was used. Their&lt;br /&gt;
normalization parameter was $\sigma = 60\Delta x$. A time step of $\Delta&lt;br /&gt;
t = 10^{-5}$ and an explicit Euler method were used to calculate the solution&lt;br /&gt;
up to $0.01$ time units. The resulting function is shown on &amp;lt;xr id=&amp;quot;fig:diffusion3d&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusion3d&amp;quot;&amp;gt;&lt;br /&gt;
[[File:diffusion3d.png|thumb|upright =2|center|&amp;lt;caption&amp;gt;A $ 3 $-dimensional solution with an explicit Euler method&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Solving Dirichlet with [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classEngineMLS.html EngineMLS]===&lt;br /&gt;
Example code using explicit stepping to reproduce the thermal images from the beginning: [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion.cpp example code]&lt;br /&gt;
&lt;br /&gt;
===Solving Dirichlet with MLSM operators===&lt;br /&gt;
Example code using explicit stepping and MLSM operators to reproduce the thermal images from the beginning: [https://gitlab.com/e62Lab/e62numcodes/blob/master/examples/diffusion/diffusion_mlsm_operators.cpp example code]&lt;br /&gt;
&lt;br /&gt;
===Solving mixed boundary conditions with MLSM operators===&lt;br /&gt;
By using MLSM operators and their `[http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/classmm_1_1MLSM.html MLSM::neumann]` method we can also solve&lt;br /&gt;
the diffusion equation with mixed (Dirichlet and Neumann) boundary conditions. The solution is shown on &amp;lt;xr id=&amp;quot;fig:quarter_diffusion&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:quarter_diffusion&amp;quot;&amp;gt;&lt;br /&gt;
[[File:quarter_diffusion.png|thumb|upright=2|&amp;lt;caption&amp;gt;Example of solving using Neumann boundary conditions &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example code showing the use of Neumann boundary conditions: [https://gitlab.com/e62Lab/e62numcodes/blob/master/test/mlsm_operators_test.cpp example code]&lt;br /&gt;
&lt;br /&gt;
We also support more interesting domains :) On &amp;lt;xr id=&amp;quot;fig:mikimouse_heat&amp;quot;/&amp;gt; we see a domain in the shape of Miki mouse.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:mikimouse_heat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:mikimouse_heat.png|thumb|upright=2|center|&amp;lt;caption&amp;gt;Miki mouse domain &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Convergence analysis for one dimensional diffusion equation==&lt;br /&gt;
We now solve the diffusion equation in one dimension &lt;br /&gt;
$u_t = u_{xx}$ on the interval $[0,1]$&lt;br /&gt;
with Dirichlet boundary conditions&lt;br /&gt;
$u(0, t) = u(1,t) = 0 $&lt;br /&gt;
and an initial state&lt;br /&gt;
$u(x,0) = \sin(\pi x).$&lt;br /&gt;
&lt;br /&gt;
An analytical solution for this domain is given in closed form &lt;br /&gt;
\begin{equation}&lt;br /&gt;
u(x,t) = \sin(\pi x) \exp(-\pi^2 t),&lt;br /&gt;
\end{equation}&lt;br /&gt;
so we can use it to evaluate our approximation.&lt;br /&gt;
&lt;br /&gt;
We tested the method with a fixed time step of $\Delta t = 1\cdot 10^{-8}$ on a unit interval that was discretized into $ N $ nodes. A monomial basis of $m$ monomials was used and the $n$ closest nodes were counted as support for each node. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt; Note: &amp;lt;/strong&amp;gt; the results are &amp;lt;strong&amp;gt;not confirmed &amp;lt;/strong&amp;gt; and serve merely as an orientation. Errors may have occurred. &lt;br /&gt;
&lt;br /&gt;
===Convergence with respect to number of nodes and support size===&lt;br /&gt;
&lt;br /&gt;
Convergence of the method was tested for different numbers of nodes $N = 15, 20, 25, \ldots, 200$. First we tested the method when the support size equals the number of monomials in the basis ($n = m$), which we call local interpolation, for $n = 3, 5, 7$. We then compared that to local approximation, where we take $n = 12$ support nodes and only vary the number of monomials. We used a Gaussian weight function with $\sigma = 1\cdot \Delta x$. The error was calculated after $0.001$ time units had elapsed. The results are shown on &amp;lt;xr id=&amp;quot;fig:diffusionConvergence1d&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusionConvergence1d&amp;quot;&amp;gt;&lt;br /&gt;
[[File:monomials_convergnece_dt_e-8_t_0-001_approxSupp_12.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Convergence analysis for $ 1 $-dimensional diffusion equation using monomial basis.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The larger the support size, the faster the convergence. This holds for local interpolation as well as for approximation. However, when the number of nodes is too large, the method diverges sooner for larger support sizes. The error for approximation is larger than for interpolation (comparing the same basis size), but with interpolation the method starts diverging at a smaller number of nodes.&lt;br /&gt;
&lt;br /&gt;
===Convergence with respect to weight function===&lt;br /&gt;
&lt;br /&gt;
We also checked the convergence for different weight functions. We tested it for $N = 15, 20, 25, \ldots, 200$ nodes, with a monomial basis of $m = 5$, support size $n = 12$ and a Gaussian weight function with $\sigma = w\cdot \Delta x$ for $w = 0.1$. The error was calculated after $0.001$ time units had elapsed, with $\Delta t = 10^{-8}$. The results are shown on &amp;lt;xr id=&amp;quot;fig:diffusionConvergence1dWeights&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:diffusionConvergence1dWeights&amp;quot;&amp;gt;&lt;br /&gt;
[[File:monomials_convergnece_dt_e-8_t_0-001_approxSupp_12_weights.png|thumb|upright=2|center|&amp;lt;caption&amp;gt; Convergence analysis for $ 1 $-dimensional diffusion equation using monomial basis with respect to weight function. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When $\sigma$ is too small, the other nodes in the support domain are not taken into consideration, so taking $n = 12$ or $n = 3$ would give the same results. However, when $\sigma$ is too large, all the nodes in the support domain contribute and there is no significant difference between different weight functions.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Attenuation_due_to_liquid_water_content_in_the_atmosphere&amp;diff=653</id>
		<title>Attenuation due to liquid water content in the atmosphere</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Attenuation_due_to_liquid_water_content_in_the_atmosphere&amp;diff=653"/>
				<updated>2016-11-15T09:28:52Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h1&amp;gt;Correlation between attenuation of 20 GHz satellite communication link and Liquid Water Content in the atmosphere&amp;lt;/h1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[mailto:maks.kolman@student.fmf.uni-lj.si Maks Kolman], [mailto:gregor.kosec@ijs.si Gregor Kosec], Jožef Stefan Institute, Department of Communication Systems, Ljubljana.&lt;br /&gt;
[[:File:mipro_attenuation.pdf|Full paper available for download here.]]&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The effect of Liquid Water Content (LWC), i.e. the mass of the water per volume&lt;br /&gt;
unit of the atmosphere, on the attenuation of a $20$ GHz communication link&lt;br /&gt;
between a ground antenna and communication satellite is tackled in this paper.&lt;br /&gt;
The wavelength of $20$ GHz electromagnetic radiation is comparable to the&lt;br /&gt;
droplet size, consequently the scattering plays an important role in the&lt;br /&gt;
attenuation. To better understand this phenomenon a correlation between&lt;br /&gt;
measured LWC and attenuation is analysed. The LWC is usually estimated from&lt;br /&gt;
pluviograph rain rate measurements, which capture only spatially localized,&lt;br /&gt;
ground-level information about the LWC. In this paper the LWC is also extracted&lt;br /&gt;
from the reflectivity measurements provided by a $5.6$ GHz weather radar&lt;br /&gt;
situated in Lisca, Slovenia. The radar measures reflectivity in 3D and&lt;br /&gt;
therefore a precise spatial dependency of LWC along the communication link is&lt;br /&gt;
considered. The attenuation is measured with an in-house receiver Ljubljana&lt;br /&gt;
Station SatProSi 1 that communicates with a geostationary communication&lt;br /&gt;
satellite ASTRA 3B on the $20$ GHz band.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
&lt;br /&gt;
The increasing demands for higher communication capabilities between terrestrial&lt;br /&gt;
and/or earth-satellite repeaters require the employment of frequency bands above&lt;br /&gt;
$10$ GHz. At such frequencies the wavelength of electromagnetic&lt;br /&gt;
radiation (EMR) becomes comparable to the size of water droplets in the&lt;br /&gt;
atmosphere. Consequently, EMR attenuation due to the scattering on the droplets&lt;br /&gt;
becomes a significant and ultimately dominant factor in the communication&lt;br /&gt;
quality. During their propagation, the EMR waves encounter different water&lt;br /&gt;
structures, where they can be absorbed or scattered, causing attenuation. In&lt;br /&gt;
general, water in all three states is present in the atmosphere, i.e.\ liquid in&lt;br /&gt;
form of rain, clouds and fog, solid in form of snow and ice crystals, and water&lt;br /&gt;
vapour, which makes the air humid. Regardless of the state, water causes considerable&lt;br /&gt;
attenuation that has to be considered in the design of the communication&lt;br /&gt;
strategy. Therefore, in order to effectively introduce high frequency&lt;br /&gt;
communications into operative regimes, adequate knowledge about&lt;br /&gt;
atmospheric effects on the attenuation has to be developed.&lt;br /&gt;
&lt;br /&gt;
In this paper we deal with the attenuation due to the scattering of EMR on a&lt;br /&gt;
myriad of droplets in the atmosphere that is characterised by LWC or more&lt;br /&gt;
precisely with Drop Size Distribution (DSD). A discussion on the physical&lt;br /&gt;
background of the DSD can be found in (E. Villermaux and B. Bossa. Single-drop&lt;br /&gt;
fragmentation determines size distribution of raindrops, 2009), where authors describe&lt;br /&gt;
basic mechanisms behind distribution of droplets. Despite the efforts to&lt;br /&gt;
understand the complex interplay between droplets, ultimately the empirical&lt;br /&gt;
relations are used. The LWC and DSD can be related to the only involved quantity&lt;br /&gt;
that we can reliably measure, the rain rate. Recently it has been demonstrated&lt;br /&gt;
that for high rain rates also the site location plays a role in the DSD due to&lt;br /&gt;
the local climate conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In general, raindrops can be considered as dielectric blobs of water that&lt;br /&gt;
polarize in the presence of an electric field. When introduced to an oscillating&lt;br /&gt;
electric field, such as electromagnetic waves, a droplet of water acts as an&lt;br /&gt;
antenna and re-radiates the received energy in an arbitrary direction, causing a net&lt;br /&gt;
loss of energy flux towards the receiver. Some part of energy can also be&lt;br /&gt;
absorbed by the raindrop, which results in heating. Absorption is the main cause&lt;br /&gt;
of energy loss when dealing with raindrops large compared to the wavelength,&lt;br /&gt;
whereas scattering is predominant with raindrops smaller than the wavelength.&lt;br /&gt;
The very first model for atmospheric scattering was introduced by Lord Rayleigh,&lt;br /&gt;
who assumed constant spatial polarization within the droplet. Such a&lt;br /&gt;
simplification limits the validity of the model to relatively small&lt;br /&gt;
droplets in comparison to the wavelength of the incident field, i.e.&lt;br /&gt;
approximately up to $5$ GHz when EMR scattering on the rain droplets&lt;br /&gt;
is considered. A more general model was developed by Mie in 1908, where a&lt;br /&gt;
spatial dependent polarization is considered within the droplet, extending the&lt;br /&gt;
validity of the model to higher droplet size/EMR wavelength ratios. Later, a&lt;br /&gt;
popular empirical model was presented in (J.S. Marshall and W.McK. Palmer. The&lt;br /&gt;
distribution of raindrops with size, 1948), where attenuation is related only to&lt;br /&gt;
the rain rate. The model, also referred to as Marshall-Palmer model, is widely&lt;br /&gt;
used in evaluation of rain rate from reflectivity measured by weather radars.&lt;br /&gt;
The Marshall-Palmer model simply states the relation between the attenuation and&lt;br /&gt;
rain rate in terms of a power function.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this paper we seek a correlation between the LWC and attenuation&lt;br /&gt;
measurements. LWC is extracted from reflectivity measurements provided by a&lt;br /&gt;
weather radar situated in Lisca and operated by Slovenian Environment Agency.&lt;br /&gt;
Attenuation is measured by in-house hardware that monitors the signal strength&lt;br /&gt;
between Ljubljana Station SatProSi 1 and communication satellite ASTRA 3B. The&lt;br /&gt;
main purpose of this paper is therefore to investigate the correlation between&lt;br /&gt;
precipitation measured in 3D with the meteorological radar and the measured&lt;br /&gt;
attenuation.&lt;br /&gt;
&lt;br /&gt;
=Governing models=&lt;br /&gt;
&lt;br /&gt;
Before we proceed to measurements some basic relations are discussed.&lt;br /&gt;
&lt;br /&gt;
Attenuation ($A$) is a quantity measured in [dB] that describes the loss of electromagnetic radiation propagating through a medium. It is defined in terms of the sent intensity $I_s$ and the received intensity after propagation $I_r$ as&lt;br /&gt;
\[&lt;br /&gt;
A = 10\log_{10}\frac{I_s}{I_r}.&lt;br /&gt;
\]&lt;br /&gt;
The specific attenuation ($\alpha=A/L$) measured in [dB/km] as a function of rain rate ($R$) measured in [mm/h] is commonly modelled as&lt;br /&gt;
\[&lt;br /&gt;
\alpha(R) \sim a \,R^{b} \ .&lt;br /&gt;
\]&lt;br /&gt;
Coefficients $a$ and $b$ are determined empirically by fitting the model to the experimental data. In general, coefficients depend on the incident wave frequency and polarization, and ambient temperature. Some example values for different frequencies are presented in Table 1.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Table 1: Value of coefficients for Marshal-Palmer relation $\alpha(R)$ at different frequencies.&lt;br /&gt;
|-&lt;br /&gt;
!f[GHz]||10||12||15||20||25||30 &lt;br /&gt;
|-&lt;br /&gt;
!$a$||0.0094||0.0177||0.0350||0.0722||0.1191||0.1789 &lt;br /&gt;
|-&lt;br /&gt;
!$b$||1.273||1.211||1.143||1.083||1.044||1.007 &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The simplest characterization of rain is through the rain rate $R$, measured in [mm/h]. However, the rain rate does not give any information about the type of rain. For example, a storm and a shower might have the same rain rate but be comprised of different droplets. Therefore, a more descriptive quantity is the Drop Size Distribution (DSD) that, unsurprisingly, describes the distribution of droplet sizes.&lt;br /&gt;
A simple DSD model is presented in (J.S. Marshall and W.McK. Palmer. The distribution of raindrops with size, 1948)&lt;br /&gt;
&lt;br /&gt;
\begin{equation}&lt;br /&gt;
N(D) = U \exp (-V \, R^{\delta} D),&lt;br /&gt;
\label{eq:dsdr}&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $D$ stands for the drop diameter measured in [mm], $N(D)$ describes the number of droplets of size $D$ to $D + \mathrm dD$ per unit of volume, measured in [$mm^{-1} m^{-3}$], and $R$ is the rain rate measured in [mm/h]. The values of the equation parameters were set to $U=8.3 \cdot 10^3$, $V=4.1$ and $\delta=-0.21$. The DSD was also determined experimentally for different rain rates. The experimental data is presented in &amp;lt;xr id=&amp;quot;fig:dsd&amp;quot;/&amp;gt;, where we can see that the typical diameter of droplets is in the range of millimetres. There is a discrepancy between the theoretical and experimental data for very small droplets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:dsd&amp;quot;&amp;gt;&lt;br /&gt;
[[File:dsd_manual.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt; DSD measured in Czech Republic (one year measurement, rain rate $R$ is the parameter of particular sets of points). Lines represent the theoretical value as determined by $(\ref{eq:dsdr})$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Measurements=&lt;br /&gt;
&lt;br /&gt;
== Measurements of signal attenuation==&lt;br /&gt;
&lt;br /&gt;
Jožef Stefan Institute (JSI) and European Space Agency (ESA) cooperate in the SatProSi-Alpha project that includes measuring the attenuation of the communication link between a ground antenna and a satellite, more precisely between the ASTRA 3B satellite and the SatProSi 1 station. ASTRA 3B is a geostationary communication satellite located at $23.5^\circ E$ longitude over the equator. It broadcasts the signal at $20$ GHz, which is received at SatProSi 1 with an in-house receiver, namely a $1.2$ m parabolic antenna with a gain of about $47$ dB, positioned on top of the JSI main building. SatProSi 1 has measured attenuation every $0.15$ seconds since 1. 10. 2011, resulting in over $500000$ daily records.&lt;br /&gt;
&lt;br /&gt;
== Measurements of rainfall rate ==&lt;br /&gt;
Two sources of rain measurements are used in this paper. The first one is a pluviograph installed locally in the proximity of the antenna. The rain rate is measured every five minutes.&lt;br /&gt;
&lt;br /&gt;
Other, much more sophisticated, measurements of rain characteristics are provided by meteorological radars. The basic idea behind such radars is to measure EMR that reflects from water droplets. The measured reflectivity is then related to the rain rate via the Marshall-Palmer relation.&lt;br /&gt;
Radar reflectivity factor $Z$ is formally defined as the sum of sixth powers of drop diameters over all droplets per unit of volume, which can be converted into an integral&lt;br /&gt;
\[&lt;br /&gt;
Z = \int_0^\infty N(D)D^6 \mathrm dD \ .&lt;br /&gt;
\]&lt;br /&gt;
Note that the form of relation follows the Rayleigh scattering model. $Z$ is usually measured in units $ mm^6m^{-3} $. When conducting measurements a so-called Equivalent Reflectivity Factor&lt;br /&gt;
\[&lt;br /&gt;
Z_e = \frac{\eta \lambda^4}{0.93 \pi^5}&lt;br /&gt;
\]&lt;br /&gt;
is used, where $\eta$ is the reflectivity, $\lambda$ is the radar wavelength and $0.93$ stands for the dielectric factor of water. As the name suggests, the two are equivalent for wavelengths large compared to the drop sizes, as in the Rayleigh model.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reflectivity factor and the rainfall rate are related through the Marshall-Palmer relation as&lt;br /&gt;
\[&lt;br /&gt;
Z_{[mm^6m^{-3}]} = \tilde a R_{[mm/h]}^{\tilde{b}}\ ,&lt;br /&gt;
\]&lt;br /&gt;
where $Z_{[mm^6m^{-3}]}$ is the reflectivity factor measured in $mm^6m^{-3}$ and $R_{[mm/h]}$ is the rainfall rate measured in mm/h. In general, the empirical coefficients $\tilde a$ and $\tilde b$ vary with location and/or season, but are independent of the rainfall rate $R$. The most widely used values are $\tilde a=200$ and $\tilde b=1.6$.&lt;br /&gt;
Meteorologists rather use a dimensionless logarithmic scale and define&lt;br /&gt;
\[&lt;br /&gt;
\mathit{dBZ} = 10 \, \log_{10} \frac{Z}{Z_0} = 10 \, \log_{10} Z_{[mm^6m^{-3}]}\ ,&lt;br /&gt;
\]&lt;br /&gt;
where $Z_0$ is the reflectivity factor equivalent to one droplet of diameter $1$ mm per cubic meter.&lt;br /&gt;
&lt;br /&gt;
The meteorological radar at Lisca emits short ($1$ microsecond) electromagnetic pulses with a frequency of $5.62$ GHz and measures the strength of the reflection from different points along their path. The radar collects roughly $650000$ spatial data points per atmosphere scan, performed every $10$ minutes. The exact location of each measurement is determined from the beam direction and the time it takes for the signal to reflect back to the radar.&lt;br /&gt;
&lt;br /&gt;
In addition to reflectivity, radars also measure the radial velocity of the reflecting particles via the Doppler shift of the received EMR, but this is a feature we will not be using.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data analysis=&lt;br /&gt;
The analysis begins with handling approximately $20$ GB of radar data for the academic year 2014/15, accompanied by $3$ GB of signal attenuation data for the same time period and approximately $5$ GB of attenuation and local rain gauge data for the years 2012 and 2013.&lt;br /&gt;
&lt;br /&gt;
== Preprocessing the radar spatial data ==&lt;br /&gt;
&lt;br /&gt;
Radar data was first reduced by eliminating spatial points far away from our point of interest, namely the JSI main building where the antenna is located. The geostationary orbit is $35786$ km above sea level, therefore the link between the antenna and the satellite has a steep elevation angle of $36.3^\circ$. In fact, just $20$ km south of the antenna the ray rises above $15$ km, which is the upper boundary for all weather activity. Knowing this, a smaller area of the map can be safely cropped out, reducing the number of data points from around $650000$ to approximately $6500$ for each radar scan covering a $40 \text{km} \times 40 \text{km}$ area.&lt;br /&gt;
&lt;br /&gt;
Although we have already greatly reduced the original data size, we must still reduce thousands of points into something tangible. The positions of both the antenna and the satellite are known at all times, a lovely consequence of them being stationary; therefore the link between them can be easily traced. Roughly $150$ points on the ray path are used as a discrete representation of the link, referred to as link points in future discussions. For each link point a median of the $n$ closest radar measurements is computed as a representative value.&lt;br /&gt;
The other way of extracting the reflectivity factor was simply to take the $n$ points closest to the antenna and select their median value. A visualisation of both methods is presented in &amp;lt;xr id=&amp;quot;fig:support_presentation&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Now we are left with multiple scalar quantities as a function of time: antenna attenuation every $0.15$ s, the local rain gauge every $5$ min and various extractions of the reflectivity factor every $10$ min. Note that the radar values are not averaged over $10$ minutes; the radar simply needs $10$ minutes to complete a single scan.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:att_ref_time&amp;quot;/&amp;gt; an example of the rainfall rate measured with the weather radar and the measured attenuation for a three day period is presented. A correlation between the quantities is clearly seen in the figure, but a closer inspection is needed to reveal more details about the correlation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:support_presentation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:support.png|600px|thumb|center|upright=2|alt= ???|&amp;lt;caption&amp;gt;Positions of radar measurements. The blue rectangle is the location of the antenna and the rain gauge. The $ 64 $ points closest to the antenna are enclosed in a red sphere and marked as red circles. Red dots mark the remainder of $ 512 $ closest points. The green line is the ray path between antenna and satellite with green circles representing corresponding support nodes for support size $n=4$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:att_ref_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_time_flow_1800_64_local.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Measured antenna attenuation and rain rate extracted from $ 64 $ radar measurements closest to the antenna. Both datasets have been sorted into $ 30 $ minute bins. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Correlation between rain and attenuation ==&lt;br /&gt;
In order to find a relation between rain rate and electromagnetic attenuation, measurements of both quantities must be paired. There is no obvious way of doing this since they are measured at vastly different time-scales. We ended up dividing time into bins of duration $t_0$ and pairing the measurements that fall within the same bin. The maximum value of each quantity was selected as the representative for the given time period.&lt;br /&gt;
&lt;br /&gt;
The correlation coefficient between two variables $X$ and $Y$ can be calculated using&lt;br /&gt;
\[&lt;br /&gt;
corr(X, Y)=\frac{\text{mean}((X - \text{mean}(X))\cdot(Y - \text{mean}(Y)))}{\text{std}(X)\text{std}(Y)}&lt;br /&gt;
\]&lt;br /&gt;
and is a good quantity for determining linear dependence between $X$ and $Y$.&lt;br /&gt;
&lt;br /&gt;
According to the Marshall-Palmer power law a linear relation exists between logarithms of rain rate and specific attenuation.&lt;br /&gt;
Our measurements are of the total attenuation $A$ and not of the specific attenuation, so we must adjust the equation. We assume a typical path length $L$ as the connecting factor between the two, which gives us&lt;br /&gt;
\[&lt;br /&gt;
\log_{10}A = \log_{10}(a L) + b\log_{10}R \ .&lt;br /&gt;
\]&lt;br /&gt;
The exact value of $L$ is not relevant, as only the parameter $b$ will interest us. Therefore the slope on a log-log graph, such as on &amp;lt;xr id=&amp;quot;fig:attenuation_rainrate&amp;quot;/&amp;gt;, is equal to the model parameter $b$. We used a least-squares linear fit on each set of data to get the corresponding values of $b$.&lt;br /&gt;
&lt;br /&gt;
In addition, the correlation between the logarithmic values of rain rate and attenuation&lt;br /&gt;
\[&lt;br /&gt;
corr\left(\log_{10}A_{[\text{dB}]}, \log_{10}R_{[\text{mm/h}]}\right)&lt;br /&gt;
\]&lt;br /&gt;
is used as a quality measure of their relation.&lt;br /&gt;
&lt;br /&gt;
=Results and discussion=&lt;br /&gt;
&lt;br /&gt;
Once we have paired the attenuation and rainfall data, we can scatter the points on a graph.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:attenuation_rainrate&amp;quot;/&amp;gt; the attenuation against the rain rate at an $8$ h bin size is presented. A support size of $n=2^6$ is used for the local radar representation and $n=2^2$ for the integral representation. The correlation can be clearly seen, however it is not as tight as one would expect if the measurements and the rain rate--reflectivity model were perfect.&lt;br /&gt;
Since we introduced two free parameters, namely the time bin $t_0$ and the spatial support size $n$ for the integral and local radar representations, a sensitivity analysis regarding those parameters is needed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:attenuation_rainrate&amp;quot;&amp;gt;&lt;br /&gt;
[[File:scatter_all.png|600px|thumb|center|upright=2|alt= ???|&amp;lt;caption&amp;gt;Attenuation dependency on the rain rate measured in three different ways. Local rain gauge (blue), path integration on each step selecting closest $ 4 $ points (green) and from $ 64 $ points closest to the antenna (red). All measurements have been put into $ 8 $ h bins. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:local_correlation&amp;quot;/&amp;gt; a correlation with respect to the number of local support nodes and time bin size is presented. The best correlation is obtained with $8$ h time bins and a local $n=2^6$ support size.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:local_correlation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_contour_local.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation with respect to the number of &amp;lt;b&amp;gt; local&amp;lt;/b&amp;gt; support size $n$ and time bin size $t_0$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, the correlation with respect to the number of integral support nodes and the time bin size is presented in &amp;lt;xr id=&amp;quot;fig:integrate_correlation&amp;quot;/&amp;gt;. Again, the best correlation is obtained with $8$ h time bins; however, with the integral model a small integral support, i.e. $n=2^2$, already suffices to obtain a fair correlation. Such behaviour is expected: in the integral mode we follow the ray and the support moves along with it, so there is no need to capture vast regions for each link point.&lt;br /&gt;
On the other hand, in the local approach only one support is used, and that support therefore has to be much bigger to capture enough detail about the rain conditions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:integrate_correlation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_contour_integrate.png|600px|thumb|center|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation with respect to the number of &amp;lt;b&amp;gt;integral&amp;lt;/b&amp;gt; support size $n$ and time bin size $t_0$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To compare the measurements acquired with the radar to those acquired with the local rain gauge, a simpler presentation of the correlation is shown in &amp;lt;xr id=&amp;quot;fig:correlation_time&amp;quot;/&amp;gt;. One set of data has the rain rate extracted from the radar using the integral method with support size $4$, and two sets use the closest $n=64$ or $n=512$ nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:correlation_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_time.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation as a function of time bin size $t_0$ for different ways of extracting the rain rate. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the next step we compare our measurements with the Marshall-Palmer model, specifically the exponent $b$. According to Table 1, at $20$ GHz the value $b_0=1.083$ should hold.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:fit_time&amp;quot;/&amp;gt; the differences between our measurements and $b_0$ with respect to the time bin size are presented for the same sets of data as were used in the correlation analysis of &amp;lt;xr id=&amp;quot;fig:correlation_time&amp;quot;/&amp;gt;. An order of magnitude improvement is visible between the local rain gauge and the data extracted from the radar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fit_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_fit_time.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Exponent in attenuation to rainfall relation $b$ compared to value $b_0$ from Table 1 for $ 20 $ GHz as a function of bin duration $t_0$ for a few ways of extracting rainfall. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
This paper deals with the correlation analysis between the EMR attenuation due to scattering on the ASTRA 3B - SatProSi 1 link and the measured rain rate. The main objective of the paper is to analyse the related measurements and to compare the results with the Marshall-Palmer model.&lt;br /&gt;
&lt;br /&gt;
The attenuation is measured directly with in-house equipment at a relatively high time resolution ($0.15$ s).&lt;br /&gt;
&lt;br /&gt;
The rain characteristics are measured with a rain gauge positioned next to the antenna and with the national meteorological radar. The rain gauge measures the average rain rate every five minutes at a single position, while the radar provides a full 3D scan of reflectivity every $10$ minutes.&lt;br /&gt;
&lt;br /&gt;
Although the attenuation depends mainly on the DSD, the rain rate is used as a reference quantity, since it is much more descriptive as well as easier to measure. The reflectivity measured with the radar is therefore transformed to the rain rate with the Marshall-Palmer relation. A more direct approach would be to relate the attenuation to the measured reflectivity directly; however, that would not change any of the conclusions, since, on a logarithmic scale, a simple power relation between reflectivity and rain rate reflects only as a linear transformation.&lt;br /&gt;
&lt;br /&gt;
The analysis of the support size and the time bin size showed a strong influence of those two parameters on the correlation. It is demonstrated that an $8$ h time bin and support sizes of $n=2^6$ and $n=2^2$ for the local and integral approach, respectively, provide a decent correlation ($0.6-0.7$) between the logarithms of the measured attenuation and rain rate.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the power model has been fitted over the measured data and the value of the exponent has been compared to the values reported in the literature. The model shows the best agreement with the Marshall-Palmer model when the rain rate is gathered from the integral along the communication link. Somewhat worse agreement is achieved with a local determination of the rain rate. Results obtained with the rain gauge are the furthest from the expected value, despite the fact that the correlation with the measured attenuation is highest for the rain gauge measurements. The localized information from the rain gauge simply cannot provide enough information to fully characterize the rain conditions along the link.&lt;br /&gt;
&lt;br /&gt;
There are still some open questions to resolve, e.g. what the reason is behind the $8$ h time bin giving the best result, how the correlation could be improved, whether different statistics could extract more information from the data, etc. All these topics will be addressed in future work.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Acknowledgment=&lt;br /&gt;
The authors acknowledge the financial support from the state budget by the&lt;br /&gt;
Slovenian Research Agency under Grant P2-0095. The attenuation data were collected in the framework of the ESA-PECS project SatProSi-Alpha. The Slovenian Environment Agency provided us with the data collected by their weather radars.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Medusa&amp;diff=651</id>
		<title>Medusa</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Medusa&amp;diff=651"/>
				<updated>2016-11-15T09:27:45Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--__NOTITLE__--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Library for solving PDEs&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the Parallel and Distributed Systems Laboratory we are working on a C++ library that is first and foremost focused on tools for solving Partial Differential Equations with meshless methods. The basic idea is to create generic code for the tools that are needed to solve not only PDEs but many other problems, e.g. Moving Least Squares approximation, kD-trees, domain generation engines, etc. Technical details about the code and examples can be found on our [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/ documentation page]&lt;br /&gt;
and in [https://gitlab.com/e62Lab/e62numcodes the code].&lt;br /&gt;
&lt;br /&gt;
This wiki site is meant for more relaxed discussions about general principles, possible and already implemented applications, preliminary analyses, etc.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
* [[Moving Least Squares (MLS)]]&lt;br /&gt;
* [[kd Tree]]&lt;br /&gt;
* [[Meshless Local Strong Form Method (MLSM)]]&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
* [[Analysis of MLSM performance | Solving Diffusion Equation]]&lt;br /&gt;
* [[Attenuation due to liquid water content in the atmosphere|Attenuation of satellite communication]]&lt;br /&gt;
* [[Heart rate variability detection]]&lt;br /&gt;
* [[Dynamic Thermal Rating of over head lines]]&lt;br /&gt;
* [[Fluid Flow]]&lt;br /&gt;
* [[Phase field tracking]]&lt;br /&gt;
* [[Solid Mechanics]]&lt;br /&gt;
** [[Point contact]]&lt;br /&gt;
** [[Hertzian contact]]&lt;br /&gt;
** [[Cantilever beam]]&lt;br /&gt;
** [[Bending of a square]]&lt;br /&gt;
&lt;br /&gt;
== Preliminary analyses ==&lt;br /&gt;
* [[Execution on Intel® Xeon Phi™ co-processor]]&lt;br /&gt;
* [[:File:tech_report.compressed.pdf|Execution overheads due to clumsy types]]&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
* [https://gitlab.com/e62Lab/e62numcodes Code and README on Gitlab]&lt;br /&gt;
* [[How to build]]&lt;br /&gt;
* [[Coding style | Coding style]]&lt;br /&gt;
* [[Testing]]&lt;br /&gt;
* [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/ Technical documentation]&lt;br /&gt;
* [[Wiki editing guide]]&lt;br /&gt;
* [[Wiki backup guide]]&lt;br /&gt;
&lt;br /&gt;
== FAQ ==&lt;br /&gt;
Also see [[Frequently asked questions]].&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Kosec G., A local numerical solution of a fluid-flow problem on an irregular domain. Advances in engineering software. 2016;7 ; [29512743] ; [http://comms.ijs.si/~gkosec/data/papers/29512743.pdf manuscript]&lt;br /&gt;
* Kosec G., Trobec R., Simulation of semiconductor devices with a local numerical approach. Engineering analysis with boundary elements. 2015;69-75; [27912487] ; [http://comms.ijs.si/~gkosec/data/papers/27912487.pdf manuscript]&lt;br /&gt;
* Kosec G., Šarler B., Simulation of macrosegregation with mesosegregates in binary metallic casts by a meshless method. Engineering analysis with boundary elements. 2014;36-44; [http://comms.ijs.si/~gkosec/data/papers/3218939.pdf manuscript]&lt;br /&gt;
* Kosec G., Depolli M., Rashkovska A., Trobec R., Super linear speedup in a local parallel meshless solution of thermo-fluid problem. Computers &amp;amp; Structures. 2014;133:30-38; [http://comms.ijs.si/~gkosec/data/papers/27339815.pdf manuscript]&lt;br /&gt;
* Kosec G., Zinterhof P., Local strong form meshless method on multiple Graphics Processing Units. Computer modeling in engineering &amp;amp; sciences. 2013;91:377-396; [http://comms.ijs.si/~gkosec/data/papers/26785063.pdf manuscript]&lt;br /&gt;
* Kosec G., Šarler B., H-adaptive local radial basis function collocation meshless method. Computers, materials &amp;amp; continua. 2011;26:227-253; [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerBurgers.pdf manuscript]&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321; [http://comms.ijs.si/~gkosec/data/papers/EABE2499.pdf manuscript]&lt;br /&gt;
* Kosec G, Zaloznik M, Sarler B, Combeau H. A Meshless Approach Towards Solution of Macrosegregation Phenomena. CMC: Computers, Materials, &amp;amp; Continua. 2011;580:1-27 [http://comms.ijs.si/~gkosec/data/papers/KosecZaloznikSarlerCombeauSegregation.pdf manuscript]&lt;br /&gt;
* Kosec G, Sarler B. Solution of thermo-fluid problems by collocation with local pressure correction. International Journal of Numerical Methods for Heat &amp;amp; Fluid Flow. 2008;18:868-82 [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerNS2008.pdf manuscript]&lt;br /&gt;
*  Trobec R., Kosec G., Parallel Scientific Computing, ISBN: 978-3-319-17072-5 (Print) 978-3-319-17073-2.&lt;br /&gt;
*  Slak, J., Kosec, G.. Detection of heart rate variability from a wearable differential ECG device., MIPRO 2016, 39th International Convention, 2016, Opatija, Croatia, ISSN 1847-3938, pp 450-455.&lt;br /&gt;
*  Kolman, M., Kosec, G. Correlation between attenuation of 20 GHz satellite communication link and liquid water content in the atmosphere. MIPRO 2016, 39th International Convention, 2016, Opatija, Croatia, ISSN 1847-3938. pp. 308-313.&lt;br /&gt;
* Trobec R., Šterk M., Robič B., Computational complexity and parallelization of the meshless local Petrov-Galerkin methods. Computers &amp;amp; Structures. 2009;87:81-90; [21895463]&lt;br /&gt;
* Šterk M., Trobec R., Meshless solution of a diffusion equation with parameter optimization and error analysis. Engineering analysis with boundary elements. 2008;32:567-577; [21305383]&lt;br /&gt;
&lt;br /&gt;
==Related pages==&lt;br /&gt;
* http://www-e6.ijs.si/ParallelAndDistributedSystems/#!NumericalMethods&lt;br /&gt;
* http://www-e6.ijs.si/ParallelAndDistributedSystems/#!utils&lt;br /&gt;
* http://www-e6.ijs.si/ParallelAndDistributedSystems/#!NUMA&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Medusa&amp;diff=611</id>
		<title>Medusa</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Medusa&amp;diff=611"/>
				<updated>2016-11-10T17:00:17Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--__NOTITLE__--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Library for solving PDEs&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the Parallel and Distributed Systems Laboratory we are working on a C++ library that is first and foremost focused on tools for solving Partial Differential Equations with meshless methods. The basic idea is to create generic code for the tools that are needed to solve not only PDEs but many other problems, e.g. Moving Least Squares approximation, kD-trees, domain generation engines, etc. Technical details about the code and examples can be found on our [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/ documentation page]&lt;br /&gt;
and in [https://gitlab.com/e62Lab/e62numcodes the code].&lt;br /&gt;
&lt;br /&gt;
This wiki site is meant for more relaxed discussions about general principles, possible and already implemented applications, preliminary analyses, etc.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
* [[Moving Least Squares (MLS)]]&lt;br /&gt;
* [[kd Tree]]&lt;br /&gt;
* [[Meshless Local Strong Form Method (MLSM)]]&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
* [[Analysis of MLSM performance | Solving Diffusion Equation]]&lt;br /&gt;
* [[Attenuation due to liquid water content in the atmosphere|Attenuation of satellite communication]]&lt;br /&gt;
* [[Heart rate variability detection]]&lt;br /&gt;
* [[Dynamic Thermal Rating of over head lines]]&lt;br /&gt;
* [[Fluid Flow]]&lt;br /&gt;
* [[Phase field tracking]]&lt;br /&gt;
* [[Solid Mechanics]]&lt;br /&gt;
** [[Point contact]]&lt;br /&gt;
** [[Hertzian contact]]&lt;br /&gt;
** [[Cantilever beam]]&lt;br /&gt;
** [[Bending of a square]]&lt;br /&gt;
&lt;br /&gt;
== Preliminary analyses ==&lt;br /&gt;
* [[Execution on Intel® Xeon Phi™ co-processor]]&lt;br /&gt;
* Execution overheads due to clumsy types&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
* [https://gitlab.com/e62Lab/e62numcodes Code and README on Gitlab]&lt;br /&gt;
* [[How to build]]&lt;br /&gt;
* [[Coding style | Coding style]]&lt;br /&gt;
* [[Testing]]&lt;br /&gt;
* [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/ Technical documentation]&lt;br /&gt;
* [[Wiki editing guide]]&lt;br /&gt;
* [[Wiki backup guide]]&lt;br /&gt;
&lt;br /&gt;
== FAQ ==&lt;br /&gt;
Also see [[Frequently asked questions]].&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Kosec G., A local numerical solution of a fluid-flow problem on an irregular domain. Advances in engineering software. 2016;7 ; [29512743] ; [http://comms.ijs.si/~gkosec/data/papers/29512743.pdf manuscript]&lt;br /&gt;
* Kosec G., Trobec R., Simulation of semiconductor devices with a local numerical approach. Engineering analysis with boundary elements. 2015;69-75; [27912487] ; [http://comms.ijs.si/~gkosec/data/papers/27912487.pdf manuscript]&lt;br /&gt;
* Kosec G., Šarler B., Simulation of macrosegregation with mesosegregates in binary metallic casts by a meshless method. Engineering analysis with boundary elements. 2014;36-44; [http://comms.ijs.si/~gkosec/data/papers/3218939.pdf manuscript]&lt;br /&gt;
* Kosec G., Depolli M., Rashkovska A., Trobec R., Super linear speedup in a local parallel meshless solution of thermo-fluid problem. Computers &amp;amp; Structures. 2014;133:30-38; [http://comms.ijs.si/~gkosec/data/papers/27339815.pdf manuscript]&lt;br /&gt;
* Kosec G., Zinterhof P., Local strong form meshless method on multiple Graphics Processing Units. Computer modeling in engineering &amp;amp; sciences. 2013;91:377-396; [http://comms.ijs.si/~gkosec/data/papers/26785063.pdf manuscript]&lt;br /&gt;
* Kosec G., Šarler B., H-adaptive local radial basis function collocation meshless method. Computers, materials &amp;amp; continua. 2011;26:227-253; [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerBurgers.pdf manuscript]&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321; [http://comms.ijs.si/~gkosec/data/papers/EABE2499.pdf manuscript]&lt;br /&gt;
* Kosec G, Zaloznik M, Sarler B, Combeau H. A Meshless Approach Towards Solution of Macrosegregation Phenomena. CMC: Computers, Materials, &amp;amp; Continua. 2011;580:1-27 [http://comms.ijs.si/~gkosec/data/papers/KosecZaloznikSarlerCombeauSegregation.pdf manuscript]&lt;br /&gt;
* Kosec G, Sarler B. Solution of thermo-fluid problems by collocation with local pressure correction. International Journal of Numerical Methods for Heat &amp;amp; Fluid Flow. 2008;18:868-82 [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerNS2008.pdf manuscript]&lt;br /&gt;
*  Trobec R., Kosec G., Parallel Scientific Computing, ISBN: 978-3-319-17072-5 (Print) 978-3-319-17073-2.&lt;br /&gt;
*  Slak, J., Kosec, G.. Detection of heart rate variability from a wearable differential ECG device., MIPRO 2016, 39th International Convention, 2016, Opatija, Croatia, ISSN 1847-3938, pp 450-455.&lt;br /&gt;
*  Kolman, M., Kosec, G. Correlation between attenuation of 20 GHz satellite communication link and liquid water content in the atmosphere. MIPRO 2016, 39th International Convention, 2016, Opatija, Croatia, ISSN 1847-3938. pp. 308-313.&lt;br /&gt;
* Trobec R., Šterk M., Robič B., Computational complexity and parallelization of the meshless local Petrov-Galerkin methods. Computers &amp;amp; Structures. 2009;87:81-90; [21895463]&lt;br /&gt;
* Šterk M., Trobec R., Meshless solution of a diffusion equation with parameter optimization and error analysis. Engineering analysis with boundary elements. 2008;32:567-577; [21305383]&lt;br /&gt;
&lt;br /&gt;
==Related pages==&lt;br /&gt;
* http://www-e6.ijs.si/ParallelAndDistributedSystems/#!NumericalMethods&lt;br /&gt;
* http://www-e6.ijs.si/ParallelAndDistributedSystems/#!utils&lt;br /&gt;
* http://www-e6.ijs.si/ParallelAndDistributedSystems/#!NUMA&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Medusa&amp;diff=610</id>
		<title>Medusa</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Medusa&amp;diff=610"/>
				<updated>2016-11-10T16:56:28Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--__NOTITLE__--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Library for solving PDEs&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the Parallel and Distributed Systems Laboratory we are working on a C++ library that is first and foremost focused on tools for solving Partial Differential Equations with meshless methods. The basic idea is to create generic code for the tools that are needed to solve not only PDEs but many other problems, e.g. Moving Least Squares approximation, kD-trees, domain generation engines, etc. Technical details about the code and examples can be found on our [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/ documentation page]&lt;br /&gt;
and in [https://gitlab.com/e62Lab/e62numcodes the code].&lt;br /&gt;
&lt;br /&gt;
This wiki site is meant for more relaxed discussions about general principles, possible and already implemented applications, preliminary analyses, etc.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
* [[Moving Least Squares (MLS)]]&lt;br /&gt;
* [[kd Tree]]&lt;br /&gt;
* [[Meshless Local Strong Form Method (MLSM)]]&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
* [[Analysis of MLSM performance | Solving Diffusion Equation]]&lt;br /&gt;
* [[Attenuation due to liquid water content in the atmosphere|Attenuation of satellite communication]]&lt;br /&gt;
* [[Heart rate variability detection]]&lt;br /&gt;
* [[Dynamic Thermal Rating of over head lines]]&lt;br /&gt;
* [[Fluid Flow]]&lt;br /&gt;
* [[Phase field tracking]]&lt;br /&gt;
* [[Solid Mechanics]]&lt;br /&gt;
** [[Point contact]]&lt;br /&gt;
** [[Hertzian contact]]&lt;br /&gt;
** [[Cantilever beam]]&lt;br /&gt;
** [[Bending of a square]]&lt;br /&gt;
&lt;br /&gt;
== Preliminary analyses ==&lt;br /&gt;
* [[Execution on Intel® Xeon Phi™ co-processor]]&lt;br /&gt;
* Execution overheads due to clumsy types&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
* [https://gitlab.com/e62Lab/e62numcodes Code and README on Gitlab]&lt;br /&gt;
* [[How to build]]&lt;br /&gt;
* [[Coding style | Coding style]]&lt;br /&gt;
* [[Testing]]&lt;br /&gt;
* [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/ Technical documentation]&lt;br /&gt;
* [[Wiki editing guide]]&lt;br /&gt;
* [[Wiki backup guide]]&lt;br /&gt;
&lt;br /&gt;
== FAQ ==&lt;br /&gt;
Also see [[Frequently asked questions]].&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Kosec G., A local numerical solution of a fluid-flow problem on an irregular domain. Advances in engineering software. 2016;7 ; [29512743] :: [http://comms.ijs.si/~gkosec/data/papers/29512743.pdf manuscript]&lt;br /&gt;
* Kosec G., Trobec R., Simulation of semiconductor devices with a local numerical approach. Engineering analysis with boundary elements. 2015;69-75; [27912487] :: [http://comms.ijs.si/~gkosec/data/papers/27912487.pdf manuscript]&lt;br /&gt;
* Kosec G., Šarler B., Simulation of macrosegregation with mesosegregates in binary metallic casts by a meshless method. Engineering analysis with boundary elements. 2014;36-44; [http://comms.ijs.si/~gkosec/data/papers/3218939.pdf manuscript]&lt;br /&gt;
* Kosec G., Depolli M., Rashkovska A., Trobec R., Super linear speedup in a local parallel meshless solution of thermo-fluid problem. Computers &amp;amp; Structures. 2014;133:30-38; [http://comms.ijs.si/~gkosec/data/papers/27339815.pdf manuscript]&lt;br /&gt;
* Kosec G., Zinterhof P., Local strong form meshless method on multiple Graphics Processing Units. Computer modeling in engineering &amp;amp; sciences. 2013;91:377-396; [http://comms.ijs.si/~gkosec/data/papers/26785063.pdf manuscript]&lt;br /&gt;
* Kosec G., Šarler B., H-adaptive local radial basis function collocation meshless method. Computers, materials &amp;amp; continua. 2011;26:227-253; [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerBurgers.pdf manuscript]&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321; [http://comms.ijs.si/~gkosec/data/papers/EABE2499.pdf manuscript]&lt;br /&gt;
* Kosec G, Zaloznik M, Sarler B, Combeau H. A Meshless Approach Towards Solution of Macrosegregation Phenomena. CMC: Computers, Materials, &amp;amp; Continua. 2011;580:1-27 [http://comms.ijs.si/~gkosec/data/papers/KosecZaloznikSarlerCombeauSegregation.pdf manuscript]&lt;br /&gt;
* Kosec G, Sarler B. Solution of thermo-fluid problems by collocation with local pressure correction. International Journal of Numerical Methods for Heat &amp;amp; Fluid Flow. 2008;18:868-82 [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerNS2008.pdf manuscript]&lt;br /&gt;
*  Trobec R., Kosec G., Parallel Scientific Computing, ISBN: 978-3-319-17072-5 (Print) 978-3-319-17073-2.&lt;br /&gt;
*  Slak, J., Kosec, G.. Detection of heart rate variability from a wearable differential ECG device., MIPRO 2016, 39th International Convention, 2016, Opatija, Croatia, ISSN 1847-3938, pp 450-455.&lt;br /&gt;
*  Kolman, M., Kosec, G. Correlation between attenuation of 20 GHz satellite communication link and liquid water content in the atmosphere. MIPRO 2016, 39th International Convention, 2016, Opatija, Croatia, ISSN 1847-3938. pp. 308-313.&lt;br /&gt;
&lt;br /&gt;
==Related pages==&lt;br /&gt;
* http://www-e6.ijs.si/ParallelAndDistributedSystems/#!NumericalMethods&lt;br /&gt;
* http://www-e6.ijs.si/ParallelAndDistributedSystems/#!utils&lt;br /&gt;
* http://www-e6.ijs.si/ParallelAndDistributedSystems/#!NUMA&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Testing&amp;diff=609</id>
		<title>Testing</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Testing&amp;diff=609"/>
				<updated>2016-11-10T16:54:28Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;figure id=&amp;quot;fig:tests&amp;quot;&amp;gt;&lt;br /&gt;
[[File:tests.png|500px|thumb|upright=2|alt=Output of a successful ./run_tests.sh script run.|&amp;lt;caption&amp;gt;Output of a successful &amp;lt;code&amp;gt;./run_tests.sh&amp;lt;/code&amp;gt; script run.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We have 4 different kinds of tests in this library:&lt;br /&gt;
* unit tests&lt;br /&gt;
* style checks&lt;br /&gt;
* docs check&lt;br /&gt;
* system configuration check&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;./run_tests.sh&amp;lt;/code&amp;gt; script controls all tests:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
Usage: ./run_tests.sh&lt;br /&gt;
Options:&lt;br /&gt;
  -c   run only configuration test&lt;br /&gt;
  -t   run only unit tests&lt;br /&gt;
  -s   run only stylechecks&lt;br /&gt;
  -d   run only docs check&lt;br /&gt;
  -h   print this help&lt;br /&gt;
Example:&lt;br /&gt;
 ./run_tests.sh -sd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before pushing run &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt;./run_tests.sh&amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
This script makes and executes all &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; &amp;lt;util_name&amp;gt;_test.cpp &amp;lt;/syntaxhighlight&amp;gt; test files, and checks the coding style and&lt;br /&gt;
documentation. If anything is wrong, you will get a pretty little red error, but if you see green, like in &amp;lt;xr id=&amp;quot;fig:tests&amp;quot;/&amp;gt;, you're good to go.&lt;br /&gt;
&lt;br /&gt;
=Unit tests=&lt;br /&gt;
&lt;br /&gt;
All library code is tested by means of unit tests. Unit tests provide verification, are good examples and prevent regressions.&lt;br /&gt;
'''For any newly added functionality, a unit test testing that functionality must be added.'''&lt;br /&gt;
&lt;br /&gt;
==Writing unit tests==&lt;br /&gt;
Every new functionality (e.g. an added class, function or method) should have a unit test. Unit tests&lt;br /&gt;
* assure that code compiles&lt;br /&gt;
* assure that code executes without crashes&lt;br /&gt;
* assure that code produces expected results&lt;br /&gt;
* define observable behaviour of the method, class, ...&lt;br /&gt;
* prevent future modifications of this code to change this behaviour accidentally&lt;br /&gt;
&lt;br /&gt;
Unit tests should test observable behaviour, e.g. if a function gets 1 and 3 as input, the output should be 6.&lt;br /&gt;
They should test for edge cases and the most common cases, as well as for expected death cases.&lt;br /&gt;
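As an illustration of these points (this is not library code, and the clamp helper is hypothetical), plain assert is used below so the sketch is self-contained; in the library the same checks would be Google Test EXPECT_* statements:

```cpp
#include <cassert>

// Hypothetical helper, used only to illustrate what a unit test should cover:
// clamp x to the interval [lo, hi].
int clamp(int x, int lo, int hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

// The tests pin down observable behaviour: the most common case plus the edges.
void clamp_tests() {
    assert(clamp(5, 0, 10) == 5);    // common case: value inside the interval
    assert(clamp(-3, 0, 10) == 0);   // edge case: below the lower bound
    assert(clamp(42, 0, 10) == 10);  // edge case: above the upper bound
    assert(clamp(0, 0, 10) == 0);    // boundary value is returned unchanged
}
```

Each assertion documents one piece of behaviour, so a future change that breaks any of them fails immediately.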
&lt;br /&gt;
We are using [https://github.com/google/googletest Google Test framework] for our unit tests. See their [https://github.com/google/googletest/blob/master/googletest/docs/Primer.md introduction to unit testing] for more details. &lt;br /&gt;
&lt;br /&gt;
The basic structure is &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
TEST(Group, Name) {&lt;br /&gt;
    EXPECT_EQ(a, b);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each header file should be accompanied by a &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt;&amp;lt;util_name&amp;gt;_test.cpp&amp;lt;/syntaxhighlight&amp;gt; with unit tests.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;When writing unit tests, always write them thoroughly and slowly, take your time.&lt;br /&gt;
Never copy your own code's output to the test; rather, produce it by hand or with another trusted tool.&lt;br /&gt;
Even if it seems obvious the code is correct, remember that you are writing tests also for the future.&lt;br /&gt;
If the tests themselves have a bug, it is much harder to debug!&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See our examples in the [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/examples.html technical documentation].&lt;br /&gt;
&lt;br /&gt;
==Running unit tests==&lt;br /&gt;
&lt;br /&gt;
Tests can be run all at once via &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt;make run_all_tests&amp;lt;/syntaxhighlight&amp;gt; or individually via e.g. &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt;make basisfunc_run_tests &amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The compiled binary supports running only specified tests. Use &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; ./all_tests --gtest_filter=Domain*&amp;lt;/syntaxhighlight&amp;gt; for filtering and &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; ./all_tests --help &amp;lt;/syntaxhighlight&amp;gt; for more options.&lt;br /&gt;
&lt;br /&gt;
==Fixing bugs==&lt;br /&gt;
When you find a bug in the normal code, fix it and write a test for it. The test should fail before the fix, and pass after it.&lt;br /&gt;
&lt;br /&gt;
= Style check =&lt;br /&gt;
Before committing, the linter &amp;lt;code&amp;gt;cpplint.py&amp;lt;/code&amp;gt; is run on all the source and test files to make sure that the code follows the [[coding style|style guide]].&lt;br /&gt;
The linter is not perfect, so if any errors are unjustified, feel free to comment out the appropriate lines in the linter and commit the change.&lt;br /&gt;
&lt;br /&gt;
= Docs check =&lt;br /&gt;
Every function, class or method should also have documentation, as enforced by Doxygen, in the header where it is defined.&lt;br /&gt;
In the comment block, all parameters and the return value should be meaningfully described. It can also contain a short example.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * Computes the force that point a inflicts on point b.&lt;br /&gt;
 *&lt;br /&gt;
 * @param a The index of the first point.&lt;br /&gt;
 * @param b The index of the second point.&lt;br /&gt;
 * @return The size of the force vector from a to b.&lt;br /&gt;
 */&lt;br /&gt;
double f(int a, int b);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any longer discussion about the method used, or longer examples, belongs in this wiki, but can be linked from the docs.&lt;br /&gt;
&lt;br /&gt;
= Configuration testing =&lt;br /&gt;
&lt;br /&gt;
The script [https://gitlab.com/e62Lab/e62numcodes/blob/master/scripts/configure.sh scripts/configure.sh] checks that&lt;br /&gt;
the computer has the appropriate packages and that very basic code examples (hdf5, sfml) compile.&lt;br /&gt;
It currently supports Arch-like and Ubuntu-like distros.&lt;br /&gt;
&lt;br /&gt;
If you find yourself making modifications to [https://gitlab.com/e62Lab/e62numcodes/blob/master/.gitlab-ci.yml .gitlab-ci.yml], you should probably&lt;br /&gt;
update this check as well, along with some documentation in the [[how to build]] page, but this should not happen very often.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Medusa&amp;diff=608</id>
		<title>Medusa</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Medusa&amp;diff=608"/>
				<updated>2016-11-10T16:47:10Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--__NOTITLE__--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Library for solving PDEs&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the Parallel and Distributed Systems Laboratory we are working on a C++ library that is first and foremost focused on tools for solving Partial Differential Equations by meshless methods. The basic idea is to create generic code for tools that are needed for solving not only PDEs but many other problems, e.g. Moving Least Squares approximation, kD-trees, domain generation engines, etc. Technical details about the code and examples can be found on our [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/ documentation page]&lt;br /&gt;
and [https://gitlab.com/e62Lab/e62numcodes the code].&lt;br /&gt;
&lt;br /&gt;
This wiki site is meant for more relaxed discussions about general principles, possible and already implemented applications, preliminary analyses, etc.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
* [[Moving Least Squares (MLS)]]&lt;br /&gt;
* [[kd Tree]]&lt;br /&gt;
* [[Meshless Local Strong Form Method (MLSM)]]&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
* [[Analysis of MLSM performance | Solving Diffusion Equation]]&lt;br /&gt;
* [[Attenuation due to liquid water content in the atmosphere|Attenuation of satellite communication]]&lt;br /&gt;
* [[Heart rate variability detection]]&lt;br /&gt;
* [[Dynamic Thermal Rating of over head lines]]&lt;br /&gt;
* [[Fluid Flow]]&lt;br /&gt;
* [[Phase field tracking]]&lt;br /&gt;
* [[Solid Mechanics]]&lt;br /&gt;
** [[Point contact]]&lt;br /&gt;
** [[Hertzian contact]]&lt;br /&gt;
** [[Cantilever beam]]&lt;br /&gt;
** [[Bending of a square]]&lt;br /&gt;
&lt;br /&gt;
== Preliminary analyses ==&lt;br /&gt;
* [[Execution on Intel® Xeon Phi™ co-processor]]&lt;br /&gt;
* Execution overheads due to clumsy types&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
* [https://gitlab.com/e62Lab/e62numcodes Code and README on Gitlab]&lt;br /&gt;
* [[How to build]]&lt;br /&gt;
* [[Coding style | Coding style]]&lt;br /&gt;
* [[Testing]]&lt;br /&gt;
* [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/ Technical documentation]&lt;br /&gt;
* [[Wiki editing guide]]&lt;br /&gt;
* [[Wiki backup guide]]&lt;br /&gt;
&lt;br /&gt;
== FAQ ==&lt;br /&gt;
Also see [[Frequently asked questions]].&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Kosec G., A local numerical solution of a fluid-flow problem on an irregular domain. Advances in engineering software. 2016;7 ; [29512743] :: [http://comms.ijs.si/~gkosec/data/papers/29512743.pdf manuscript]&lt;br /&gt;
* Kosec G., Trobec R., Simulation of semiconductor devices with a local numerical approach. Engineering analysis with boundary elements. 2015;69-75; [27912487] :: [http://comms.ijs.si/~gkosec/data/papers/27912487.pdf manuscript]&lt;br /&gt;
* Kosec G., Šarler B., Simulation of macrosegregation with mesosegregates in binary metallic casts by a meshless method. Engineering analysis with boundary elements. 2014;36-44; [http://comms.ijs.si/~gkosec/data/papers/3218939.pdf manuscript]&lt;br /&gt;
* Kosec G., Depolli M., Rashkovska A., Trobec R., Super linear speedup in a local parallel meshless solution of thermo-fluid problem. Computers &amp;amp; Structures. 2014;133:30-38; [http://comms.ijs.si/~gkosec/data/papers/27339815.pdf manuscript]&lt;br /&gt;
* Kosec G., Zinterhof P., Local strong form meshless method on multiple Graphics Processing Units. Computer modeling in engineering &amp;amp; sciences. 2013;91:377-396; [http://comms.ijs.si/~gkosec/data/papers/26785063.pdf manuscript]&lt;br /&gt;
* Kosec G., Šarler B., H-adaptive local radial basis function collocation meshless method. Computers, materials &amp;amp; continua. 2011;26:227-253; [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerBurgers.pdf manuscript]&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321; [http://comms.ijs.si/~gkosec/data/papers/EABE2499.pdf manuscript]&lt;br /&gt;
* Kosec G, Zaloznik M, Sarler B, Combeau H. A Meshless Approach Towards Solution of Macrosegregation Phenomena. CMC: Computers, Materials, &amp;amp; Continua. 2011;580:1-27 [http://comms.ijs.si/~gkosec/data/papers/KosecZaloznikSarlerCombeauSegregation.pdf manuscript]&lt;br /&gt;
* Kosec G, Sarler B. Solution of thermo-fluid problems by collocation with local pressure correction. International Journal of Numerical Methods for Heat &amp;amp; Fluid Flow. 2008;18:868-82 [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerNS2008.pdf manuscript]&lt;br /&gt;
*  Trobec R., Kosec G., Parallel Scientific Computing, ISBN: 978-3-319-17072-5 (Print) 978-3-319-17073-2.&lt;br /&gt;
*  Slak, J., Kosec, G.. Detection of heart rate variability from a wearable differential ECG device., MIPRO 2016, 39th International Convention, 2016, Opatija, Croatia, ISSN 1847-3938, pp 450-455.&lt;br /&gt;
*  Kolman, M., Kosec, G. Correlation between attenuation of 20 GHz satellite communication link and liquid water content in the atmosphere. MIPRO 2016, 39th International Convention, 2016, Opatija, Croatia, ISSN 1847-3938. pp. 308-313.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Execution_on_Intel%C2%AE_Xeon_Phi%E2%84%A2_co-processor&amp;diff=607</id>
		<title>Execution on Intel® Xeon Phi™ co-processor</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Execution_on_Intel%C2%AE_Xeon_Phi%E2%84%A2_co-processor&amp;diff=607"/>
				<updated>2016-11-10T16:46:21Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: Created page with &amp;quot;We tested the speedups on the Intel® Xeon Phi™ with the following code: &amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; line&amp;gt; #include &amp;lt;stdio.h&amp;gt; #include &amp;lt;stdlib.h&amp;gt; #include &amp;lt;string.h&amp;gt; #includ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We tested the speedups on the Intel® Xeon Phi™ with the following code:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot; line&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
#include &amp;lt;assert.h&amp;gt;&lt;br /&gt;
#include &amp;lt;omp.h&amp;gt;&lt;br /&gt;
#include &amp;lt;math.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char *argv[]) {&lt;br /&gt;
    int numthreads;&lt;br /&gt;
    int n;&lt;br /&gt;
&lt;br /&gt;
    assert(argc == 3 &amp;amp;&amp;amp; &amp;quot;args: numthreads n&amp;quot;);&lt;br /&gt;
    sscanf(argv[1], &amp;quot;%d&amp;quot;, &amp;amp;numthreads);&lt;br /&gt;
    sscanf(argv[2], &amp;quot;%d&amp;quot;, &amp;amp;n);&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;Init...\n&amp;quot;);&lt;br /&gt;
    printf(&amp;quot;Start (%d threads)...\n&amp;quot;, numthreads);&lt;br /&gt;
    printf(&amp;quot;%d test cases\n&amp;quot;, n);&lt;br /&gt;
&lt;br /&gt;
    int m = 1000000;&lt;br /&gt;
    double ttime = omp_get_wtime();&lt;br /&gt;
&lt;br /&gt;
    int i;&lt;br /&gt;
    double d = 0;&lt;br /&gt;
#pragma offload target(mic:0)&lt;br /&gt;
    {&lt;br /&gt;
#pragma omp parallel for private (i) schedule(static) num_threads(numthreads)&lt;br /&gt;
        for(i = 0; i &amp;lt; n; ++i) {&lt;br /&gt;
            for(int j = 0; j &amp;lt; m; ++j) {&lt;br /&gt;
                d = sin(d) + 0.1 + j;&lt;br /&gt;
                d = pow(0.2, d)*j;&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    double time = omp_get_wtime() - ttime;&lt;br /&gt;
    fprintf(stderr, &amp;quot;%d %d %.6f\n&amp;quot;, n, numthreads, time);&lt;br /&gt;
    printf(&amp;quot;time: %.6f s\n&amp;quot;, time);&lt;br /&gt;
    printf(&amp;quot;Done d = %.6lf.\n&amp;quot;, d);&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The code essentially distributes a problem of size $n\cdot m$ among &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt;numthreads&amp;lt;/syntaxhighlight&amp;gt; cores.&lt;br /&gt;
We tested the execution time for $n$ from the set $\{1, 10, 20, 50, 100, 200, 500, 1000\}$&lt;br /&gt;
and &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt;numthreads&amp;lt;/syntaxhighlight&amp;gt; from $1$ to $350$. The plots of execution times and speedups are shown below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;times&amp;quot;&amp;gt;&lt;br /&gt;
[[File:times.png|thumb|center|upright=3|alt=Plot of execution times|&amp;lt;caption&amp;gt;Execution times for different problem sizes $n$ and numbers of threads&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:speedups&amp;quot;&amp;gt;&lt;br /&gt;
[[File:speedups.png|thumb|center|upright=3|alt=Plot of speedups|&amp;lt;caption&amp;gt;Speedups for different problem sizes $n$ and numbers of threads&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The code was compiled using: &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; icc -openmp -O3 -qopt-report=2 -qopt-report-phase=vec -o test test.cpp&amp;lt;/syntaxhighlight&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
without warnings or errors. Then, in order to offload to the Intel Phi, the user must be logged in as root: &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; sudo su &amp;lt;/syntaxhighlight&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To run correctly, the Intel compiler and runtime variables must be sourced: &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; source /opt/intel/bin/compilervars.sh intel64&amp;lt;/syntaxhighlight&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Finally, the code was tested using the following command, where &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; test &amp;lt;/syntaxhighlight&amp;gt; is the name of the compiled executable: &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; for n in 1 10 20 50 100 200 500 1000; do for nt in {1..350}; do echo $nt $n; ./test $nt $n 2&amp;gt;&amp;gt; speedups.txt; done; done&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Times.png&amp;diff=606</id>
		<title>File:Times.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Times.png&amp;diff=606"/>
				<updated>2016-11-10T16:45:14Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Speedups.png&amp;diff=605</id>
		<title>File:Speedups.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Speedups.png&amp;diff=605"/>
				<updated>2016-11-10T16:45:13Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Attenuation_due_to_liquid_water_content_in_the_atmosphere&amp;diff=604</id>
		<title>Attenuation due to liquid water content in the atmosphere</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Attenuation_due_to_liquid_water_content_in_the_atmosphere&amp;diff=604"/>
				<updated>2016-11-10T16:25:43Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h1&amp;gt;Correlation between attenuation of 20 GHz satellite communication link and Liquid Water Content in the atmosphere&amp;lt;/h1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[mailto:maks.kolman@student.fmf.uni-lj.si Maks Kolman], [mailto:gregor.kosec@ijs.si Gregor Kosec], Jožef Stefan Institute, Department of Communication Systems, Ljubljana.&lt;br /&gt;
[[:File:mipro_attenuation.pdf|Full paper available for download here.]]&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The effect of Liquid Water Content (LWC), i.e. the mass of the water per volume&lt;br /&gt;
unit of the atmosphere, on the attenuation of a $20$ GHz communication link&lt;br /&gt;
between a ground antenna and communication satellite is tackled in this paper.&lt;br /&gt;
The wavelength of $20$ GHz electromagnetic radiation is comparable to the&lt;br /&gt;
droplet size, consequently the scattering plays an important role in the&lt;br /&gt;
attenuation. To better understand this phenomenon, a correlation between&lt;br /&gt;
measured LWC and attenuation is analysed. The LWC is usually estimated from&lt;br /&gt;
pluviograph rain rate measurements, which capture only spatially localized&lt;br /&gt;
and ground-level information about the LWC. In this paper the LWC is extracted&lt;br /&gt;
also from the reflectivity measurements provided by a $5.6$ GHz weather radar&lt;br /&gt;
situated in Lisca, Slovenia. The radar measures reflectivity in 3D and&lt;br /&gt;
therefore a precise spatial dependency of LWC along the communication link is&lt;br /&gt;
considered. The attenuation is measured with an in-house receiver Ljubljana&lt;br /&gt;
Station SatProSi 1 that communicates with a geostationary communication&lt;br /&gt;
satellite ASTRA 3B on the $20$ GHz band.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
&lt;br /&gt;
The increasing demands for higher communication capabilities between terrestrial&lt;br /&gt;
and/or earth-satellite repeaters require the use of frequency bands above&lt;br /&gt;
$10$ GHz. At such frequencies, the wavelength of electromagnetic&lt;br /&gt;
radiation (EMR) becomes comparable to the size of water droplets in the&lt;br /&gt;
atmosphere. Consequently, EMR attenuation due to scattering on the droplets&lt;br /&gt;
becomes a significant and ultimately dominant factor in communication&lt;br /&gt;
quality. During propagation, the EMR waves encounter different water&lt;br /&gt;
structures, where they can be absorbed or scattered, causing attenuation. In&lt;br /&gt;
general, water in all three states is present in the atmosphere, i.e.\ liquid in&lt;br /&gt;
the form of rain, clouds and fog, solid in the form of snow and ice crystals, and water&lt;br /&gt;
vapour, which makes the air humid. Regardless of the state, water causes considerable&lt;br /&gt;
attenuation that has to be accounted for in the design of the communication&lt;br /&gt;
strategy. Therefore, in order to effectively introduce high frequency&lt;br /&gt;
communications into operative regimes, adequate knowledge about&lt;br /&gt;
atmospheric effects on the attenuation has to be developed.&lt;br /&gt;
&lt;br /&gt;
In this paper we deal with the attenuation due to the scattering of EMR on a&lt;br /&gt;
myriad of droplets in the atmosphere, which is characterised by the LWC or, more&lt;br /&gt;
precisely, by the Drop Size Distribution (DSD). A discussion on the physical&lt;br /&gt;
background of the DSD can be found in (E. Villermaux and B. Bossa. Single-drop&lt;br /&gt;
fragmentation determines size distribution of raindrops, 2009), where the authors describe&lt;br /&gt;
the basic mechanisms behind the distribution of droplets. Despite the efforts to&lt;br /&gt;
understand the complex interplay between droplets, ultimately empirical&lt;br /&gt;
relations are used. The LWC and DSD can be related to the only involved quantity&lt;br /&gt;
that we can reliably measure, the rain rate. Recently it has been demonstrated&lt;br /&gt;
that for high rain rates the site location also plays a role in the DSD due to&lt;br /&gt;
local climate conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In general, raindrops can be considered as dielectric blobs of water that&lt;br /&gt;
polarize in the presence of an electric field. When introduced to an oscillating&lt;br /&gt;
electric field, such as electromagnetic waves, a droplet of water acts as an&lt;br /&gt;
antenna and re-radiates the received energy in an arbitrary direction, causing a net&lt;br /&gt;
loss of energy flux towards the receiver. Some part of energy can also be&lt;br /&gt;
absorbed by the raindrop, which results in heating. Absorption is the main cause&lt;br /&gt;
of energy loss when dealing with raindrops large compared to the wavelength,&lt;br /&gt;
whereas scattering is predominant with raindrops smaller than the wavelength.&lt;br /&gt;
The very first model for atmospheric scattering was introduced by Lord Rayleigh,&lt;br /&gt;
who assumed constant spatial polarization within the droplet. Such a&lt;br /&gt;
simplification limits the validity of the model to droplets relatively small&lt;br /&gt;
in comparison to the wavelength of the incident field, i.e.&lt;br /&gt;
approximately up to $5$ GHz when EMR scattering on rain droplets&lt;br /&gt;
is considered. A more general model was developed by Mie in 1908, where&lt;br /&gt;
spatially dependent polarization is considered within the droplet, extending the&lt;br /&gt;
validity of the model to higher droplet size/EMR wavelength ratios. Later, a&lt;br /&gt;
popular empirical model was presented in (J.S. Marshall and W.McK. Palmer. The&lt;br /&gt;
distribution of raindrops with size, 1948), where attenuation is related only to&lt;br /&gt;
the rain rate. The model, also referred to as the Marshall-Palmer model, is widely&lt;br /&gt;
used for evaluating rain rate from reflectivity measured by weather radars.&lt;br /&gt;
The Marshall-Palmer model simply states the relation between attenuation and&lt;br /&gt;
rain rate in terms of a power function.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this paper we seek a correlation between the LWC and attenuation&lt;br /&gt;
measurements. LWC is extracted from reflectivity measurements provided by a&lt;br /&gt;
weather radar situated in Lisca and operated by the Slovenian Environment Agency.&lt;br /&gt;
Attenuation is measured by in-house hardware that monitors the signal strength&lt;br /&gt;
between Ljubljana Station SatProSi 1 and communication satellite ASTRA 3B. The&lt;br /&gt;
main purpose of this paper is therefore to investigate the correlation between&lt;br /&gt;
precipitation measured in 3D with the meteorological radar and the measured&lt;br /&gt;
attenuation.&lt;br /&gt;
&lt;br /&gt;
=Governing models=&lt;br /&gt;
&lt;br /&gt;
Before we proceed to the measurements, some basic relations are discussed.&lt;br /&gt;
&lt;br /&gt;
Attenuation ($A$) is a quantity measured in [dB] that describes the loss of electromagnetic radiation propagating through a medium. It is defined with starting intensity $I_s$ and the intensity received after propagation $I_r$ as&lt;br /&gt;
\[&lt;br /&gt;
A = 10\log_{10}\frac{I_s}{I_r}.&lt;br /&gt;
\]&lt;br /&gt;
The specific attenuation ($\alpha=A/L$) measured in [dB/km] as a function of rain rate ($R$) measured in [mm/h] is commonly modelled as &amp;lt;i&amp;gt;citation&amp;lt;/i&amp;gt;{OndrejFiser}&lt;br /&gt;
\[&lt;br /&gt;
\alpha(R) \sim a \,R^{b} \ .&lt;br /&gt;
\]&lt;br /&gt;
Coefficients $a$ and $b$ are determined empirically by fitting the model to the experimental data. In general, coefficients depend on the incident wave frequency and polarization, and ambient temperature. Some example values for different frequencies are presented in Table 1.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Table 1: Values of coefficients for the Marshall-Palmer relation $\alpha(R)$ at different frequencies.&lt;br /&gt;
|-&lt;br /&gt;
!$f$ [GHz]||10||12||15||20||25||30 &lt;br /&gt;
|-&lt;br /&gt;
!$a$||0.0094||0.0177||0.0350||0.0722||0.1191||0.1789 &lt;br /&gt;
|-&lt;br /&gt;
!$b$||1.273||1.211||1.143||1.083||1.044||1.007 &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The simplest characterization of rain is through the rain rate $R$, measured in [mm/h]. However, the rain rate does not give any information about the type of rain. For example, a storm and a shower might have the same rain rate, but be comprised of different droplets. Therefore, a more descriptive quantity is the Drop Size Distribution (DSD) that, unsurprisingly, describes the distribution of droplet sizes.&lt;br /&gt;
A simple DSD model is presented in (J.S. Marshall and W.McK. Palmer. The distribution of raindrops with size, 1948)&lt;br /&gt;
&lt;br /&gt;
\[&lt;br /&gt;
N(D) = U \exp (-V \, R^{\delta} D),&lt;br /&gt;
\label{eq:dsdr}&lt;br /&gt;
\]&lt;br /&gt;
where $D$ stands for the drop diameter measured in [mm], $N(D)$ describes the number of droplets of size between $D$ and $D + \mathrm dD$ per unit volume, measured in [$mm^{-1} m^{-3}$], and $R$ is the rain rate measured in [mm/h]. The values of the equation parameters were set to $U=8.3 \cdot 10^3$, $V=4.1$ and $\delta=-0.21$. The DSD was also determined experimentally for different rain rates &amp;lt;i&amp;gt;citation&amp;lt;/i&amp;gt;{OndrejFiser}. The experimental data is presented in &amp;lt;xr id=&amp;quot;fig:dsd&amp;quot;/&amp;gt;, where we can see that the typical diameter of droplets is in the range of millimetres. There is a discrepancy between the theoretical and experimental data for very small droplets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:dsd&amp;quot;&amp;gt;&lt;br /&gt;
[[File:dsd_manual.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt; DSD measured in Czech Republic (one year measurement, rain rate $R$ is the parameter of particular sets of points) &amp;lt;i&amp;gt;citation&amp;lt;/i&amp;gt;{OndrejFiser}. Lines represent the theoretical value as determined by $(\ref{eq:dsdr})$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Measurements=&lt;br /&gt;
&lt;br /&gt;
== Measurements of signal attenuation==&lt;br /&gt;
&lt;br /&gt;
The Jožef Stefan Institute (JSI) and the European Space Agency (ESA) cooperate in the SatProSi-Alpha project, which includes measuring the attenuation of the communication link between a ground antenna and a satellite, more precisely between the ASTRA 3B satellite and the SatProSi 1 station. ASTRA 3B is a geostationary communication satellite located at $23.5^\circ E$ longitude over the equator. It broadcasts the signal at $20$ GHz, which is received at SatProSi 1 with an in-house receiver, namely a $1.2$ m parabolic antenna with a gain of about $47$ dB, positioned on top of the JSI main building. The station has measured attenuation every $0.15$ seconds since 1 October 2011, resulting in over $500000$ daily records.&lt;br /&gt;
&lt;br /&gt;
== Measurements of rainfall rate ==&lt;br /&gt;
Two sources of rain measurements are used in this paper. The first one is a pluviograph installed locally in the proximity of the antenna. The rain rate is measured every five minutes.&lt;br /&gt;
&lt;br /&gt;
Other, much more sophisticated measurements of rain characteristics are provided by meteorological radars. The basic idea behind such radars is to measure EMR that reflects from water droplets. The measured reflectivity is then related to the rain rate via the Marshall-Palmer relation.&lt;br /&gt;
The radar reflectivity factor $Z$ is formally defined as the sum of the sixth powers of the drop diameters over all droplets per unit of volume, which can be written as an integral&lt;br /&gt;
\[&lt;br /&gt;
Z = \int_0^\infty N(D)D^6 \mathrm dD \ .&lt;br /&gt;
\]&lt;br /&gt;
Note that the form of the relation follows the Rayleigh scattering model. $Z$ is usually measured in units of $ mm^6m^{-3} $. When conducting measurements, a so-called Equivalent Reflectivity Factor&lt;br /&gt;
\[&lt;br /&gt;
Z_e = \frac{\eta \lambda^4}{0.93 \pi^5}&lt;br /&gt;
\]&lt;br /&gt;
is used, where $\eta$ denotes the reflectivity, $\lambda$ is the radar wavelength and $0.93$ stands for the dielectric factor of water. As the name suggests, the two are equivalent for wavelengths large compared to the drop sizes, as in the Rayleigh model.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reflectivity factor and the rainfall rate are related through the Marshall-Palmer relation as&lt;br /&gt;
\[&lt;br /&gt;
Z_{[mm^6m^{-3}]} = \tilde a R_{[mm/h]}^{\tilde{b}}\ ,&lt;br /&gt;
\]&lt;br /&gt;
where $Z_{[mm^6m^{-3}]}$ is the reflectivity factor measured in $mm^6m^{-3}$ and $R_{[mm/h]}$ is the rainfall rate measured in mm/h. In general, the empirical coefficients $\tilde a$ and $\tilde b$ vary with location and/or season, but are independent of the rainfall rate $R$. The most widely used values are $\tilde a=200$ and $\tilde b=1.6$.&lt;br /&gt;
Meteorologists rather use a dimensionless logarithmic scale and define&lt;br /&gt;
\[&lt;br /&gt;
\mathit{dBZ} = 10 \, \log_{10} \frac{Z}{Z_0} = 10 \, \log_{10} Z_{[mm^6m^{-3}]}\ ,&lt;br /&gt;
\]&lt;br /&gt;
where $Z_0$ is reflectivity factor equivalent to one droplet of diameter $1$ mm per cubic meter.&lt;br /&gt;
&lt;br /&gt;
The meteorological radar at Lisca emits short ($1$ microsecond) electromagnetic pulses with a frequency of $5.62$ GHz and measures the strength of the reflection from different points along their path. The radar collects roughly $650000$ spatial data points per atmosphere scan, which it performs every $10$ minutes. The exact location of each measurement is determined from the beam direction and the time it takes for the signal to reflect back to the radar.&lt;br /&gt;
&lt;br /&gt;
In addition to reflectivity, the radar also measures the radial velocity of the reflecting particles via the Doppler shift of the received EMR, but this is a feature we will not be using.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data analysis=&lt;br /&gt;
The analysis begins with handling approximately $20$ GB of radar data for the academic year 2014/15, accompanied by $3$ GB of signal attenuation data for the same time period and approximately $5$ GB of attenuation and local rain gauge data for the years 2012 and 2013.&lt;br /&gt;
&lt;br /&gt;
== Preprocessing the radar spatial data ==&lt;br /&gt;
&lt;br /&gt;
The radar data was first reduced by eliminating spatial points far away from our point of interest, namely the JSI main building where the antenna is located. The geostationary orbit is $35786$ km above sea level, therefore the link between the antenna and the satellite has a steep elevation angle of $36.3^\circ$. In fact, just $20$ km south of the antenna the ray rises above $15$ km, which is the upper boundary of all weather activity. Knowing this, a smaller area of the map can be safely cropped out, reducing the number of data points from around $650000$ to approximately $6500$ per radar scan, covering a $40 \text{km} \times 40 \text{km}$ area.&lt;br /&gt;
&lt;br /&gt;
Although we have already greatly reduced the original data size, we must still distil thousands of points into something tangible. The positions of both the antenna and the satellite are known at all times, a convenient consequence of both being stationary; therefore the link between them can be easily traced. Roughly $150$ points on the ray path are used as a discrete representation of the link, referred to as link points in the discussion below. For each link point a median of the $n$ closest radar measurements is computed as its representative value.&lt;br /&gt;
The other way of extracting the reflectivity factor was simply to take the $n$ points closest to the antenna and select their median value. A visualisation of both methods is presented in &amp;lt;xr id=&amp;quot;fig:support_presentation&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Now we are left with multiple scalar quantities as functions of time: antenna attenuation every $0.15$ s, the local rain gauge every $5$ min, and various extractions of the reflectivity factor every $10$ min. Note that the radar values are not averaged over $10$ minutes; the radar simply needs $10$ minutes to complete a single scan.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:att_ref_time&amp;quot;/&amp;gt; an example of the rainfall rate measured with the weather radar and the measured attenuation over a three-day period is presented. A correlation between the quantities is clearly visible in the figure, but closer inspection is needed to reveal more details about the correlation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:support_presentation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:support.png|600px|thumb|center|upright=2|alt= ???|&amp;lt;caption&amp;gt;Positions of radar measurements. The blue rectangle is the location of the antenna and the rain gauge. The $ 64 $ points closest to the antenna are enclosed in a red sphere and marked as red circles. Red dots mark the remainder of $ 512 $ closest points. The green line is the ray path between antenna and satellite with green circles representing corresponding support nodes for support size $n=4$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:att_ref_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_time_flow_1800_64_local.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Measured antenna attenuation and rain rate extracted from $ 64 $ radar measurements closest to the antenna. Both datasets have been sorted into $ 30 $ minute bins. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Correlation between rain and attenuation ==&lt;br /&gt;
In order to find a relation between rain rate and electromagnetic attenuation, measurements of both quantities must be paired. There is no obvious way of doing this, since the two are measured at vastly different time scales. We ended up dividing time into bins of duration $t_0$ and pairing the measurements that fall within the same bin. The maximum value of each quantity was selected as the representative for the given time period.&lt;br /&gt;
&lt;br /&gt;
The correlation coefficient between two variables $X$ and $Y$ can be calculated using&lt;br /&gt;
\[&lt;br /&gt;
corr(X, Y)=\frac{\text{mean}((X - \text{mean}(X))\cdot(Y - \text{mean}(Y)))}{\text{std}(X)\text{std}(Y)}&lt;br /&gt;
\]&lt;br /&gt;
and is a good quantity for determining linear dependence between $X$ and $Y$.&lt;br /&gt;
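This is the ordinary Pearson correlation coefficient; as a sanity check, a direct transcription of the formula above (with hypothetical sample data):&lt;br /&gt;

```python
import numpy as np

def corr(X, Y):
    # Pearson correlation, written exactly as in the formula above
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    return np.mean((X - X.mean()) * (Y - Y.mean())) / (X.std() * Y.std())

x = np.array([1.0, 2.0, 3.0, 4.0])
print(corr(x, 2 * x + 1))   # perfectly linear data gives 1.0
```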
&lt;br /&gt;
According to the Marshall-Palmer power law, a linear relation exists between the logarithms of the rain rate and the specific attenuation.&lt;br /&gt;
Our measurements are of the total attenuation $A$ and not of the specific attenuation, so we must adjust the equation. We assume a typical distance $L$ as a connecting factor between the two, which gives us&lt;br /&gt;
\[&lt;br /&gt;
\log_{10}A = \log_{10}La + b\log_{10}R .&lt;br /&gt;
\]&lt;br /&gt;
The exact value of $L$ is not relevant, as only the parameter $b$ interests us. The slope on a log-log graph, such as in &amp;lt;xr id=&amp;quot;fig:attenuation_rainrate&amp;quot;/&amp;gt;, is therefore equal to the model parameter $b$. We used a least-squares linear fit on each set of data to obtain the corresponding values of $b$.&lt;br /&gt;
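The fit itself is a one-liner in log-log space; a sketch (with synthetic data generated from the model itself, so the recovered exponent is known in advance):&lt;br /&gt;

```python
import numpy as np

def fit_b(A, R):
    # slope of log10(A) against log10(R) is the exponent b;
    # the intercept absorbs log10(L*a) and is discarded
    # (assumes strictly positive A and R, as required by the logarithm)
    slope, _intercept = np.polyfit(np.log10(R), np.log10(A), 1)
    return slope

R = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # rain rate, mm/h
A = 10 * 0.0722 * R ** 1.083                     # exact model with b = 1.083
print(fit_b(A, R))   # recovers 1.083
```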
&lt;br /&gt;
In addition, correlation between logarithmic values of rain rate and attenuation&lt;br /&gt;
\[&lt;br /&gt;
corr\left(\log_{10}A_{[\text{dB}]}, \log_{10}R_{[\text{mm/h}]}\right)&lt;br /&gt;
\]&lt;br /&gt;
is used as a quality measure of their relation.&lt;br /&gt;
&lt;br /&gt;
=Results and discussion=&lt;br /&gt;
&lt;br /&gt;
Once the attenuation and rainfall data are paired, we can scatter the points on a graph.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:attenuation_rainrate&amp;quot;/&amp;gt; the attenuation against the rain rate at an $8$ h bin size is presented. A support size of $n=2^6$ is used for the local radar representation and $n=2^2$ for the integral representation. The correlation is clearly visible, although not as tight as one would expect if the measurements and the rain rate-reflectivity model were perfect.&lt;br /&gt;
Since we introduced two free parameters, namely the time bin size $t_0$ and the spatial support size $n$ (one for the integral and one for the local radar representation), a sensitivity analysis with respect to these parameters is needed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:attenuation_rainrate&amp;quot;&amp;gt;&lt;br /&gt;
[[File:scatter_all.png|600px|thumb|center|upright=2|alt= ???|&amp;lt;caption&amp;gt;Attenuation dependency on the rain rate measured in three different ways. Local rain gauge (blue), path integration on each step selecting closest $ 4 $ points (green) and from $ 64 $ points closest to the antenna (red). All measurements have been put into $ 8 $ h bins. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:local_correlation&amp;quot;/&amp;gt; a correlation with respect to the number of local support nodes and time bin size is presented. The best correlation is obtained with $8$ h time bins and a local $n=2^6$ support size.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:local_correlation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_contour_local.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation with respect to the &amp;lt;b&amp;gt;local&amp;lt;/b&amp;gt; support size $n$ and the time bin size $t_0$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, the correlation with respect to the number of integral support nodes and the time bin size is presented in &amp;lt;xr id=&amp;quot;fig:integrate_correlation&amp;quot;/&amp;gt;. Again, the best correlation is obtained with $8$ h time bins; with the integral model, however, a small support, i.e. $n=2^2$, already suffices to obtain a fair correlation. Such behaviour is expected: in the integral mode the support moves along the ray, so there is no need to capture vast regions for each link point.&lt;br /&gt;
In the local approach, on the other hand, only one support is used, and it therefore has to be much bigger to capture enough detail about the rain conditions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:integrate_correlation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_contour_integrate.png|600px|thumb|center|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation with respect to the &amp;lt;b&amp;gt;integral&amp;lt;/b&amp;gt; support size $n$ and the time bin size $t_0$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To compare the measurements acquired with the radar and those acquired with the local rain gauge, a simpler presentation of the correlation is shown in &amp;lt;xr id=&amp;quot;fig:correlation_time&amp;quot;/&amp;gt;. One set of data has the rain rate extracted from the radar using the integral method with support size $4$, and two sets use either the closest $n=64$ or $n=512$ nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:correlation_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_time.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation as a function of time bin size $t_0$ for different ways of extracting the rain rate. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the next step we compare our measurements with the Marshall-Palmer model, specifically the exponent $b$. According to Table 1, at $20$ GHz the value $b_0=1.083$ should hold.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:fit_time&amp;quot;/&amp;gt; the differences between our measurements and $b_0$ with respect to the time bin size are presented for the same sets of data as were used in the correlation analysis of &amp;lt;xr id=&amp;quot;fig:correlation_time&amp;quot;/&amp;gt;. An order-of-magnitude improvement is visible between the local rain gauge and the data extracted from the radar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fit_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_fit_time.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Exponent in attenuation to rainfall relation $b$ compared to value $b_0$ from Table 1 for $ 20 $ GHz as a function of bin duration $t_0$ for a few ways of extracting rainfall. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
This paper deals with the correlation between the EMR attenuation due to scattering on the ASTRA 3B - SatProSi 1 link and the measured rain rate. The main objective of the paper is to analyse the related measurements and to compare the results with the Marshall-Palmer model.&lt;br /&gt;
&lt;br /&gt;
The attenuation is measured directly with in-house equipment at a relatively high time resolution ($0.15$ s).&lt;br /&gt;
&lt;br /&gt;
The rain characteristics are measured with a rain gauge positioned next to the antenna and with the national meteorological radar. The rain gauge measures the average rain rate every five minutes at a single position, while the radar provides a full 3D scan of reflectivity every $10$ minutes.&lt;br /&gt;
&lt;br /&gt;
Although the attenuation depends mainly on the DSD, the rain rate is used as the reference quantity, since it is much more descriptive as well as easier to measure. The reflectivity measured with the radar is therefore transformed to the rain rate with the Marshall-Palmer relation. A more direct approach would be to relate the attenuation to the measured reflectivity directly; however, that would not change any of the conclusions, since on a logarithmic scale a simple power relation between reflectivity and rain rate amounts only to a linear transformation.&lt;br /&gt;
&lt;br /&gt;
The analysis of the support size and the time bin size showed a quite strong influence of these two parameters on the correlation. It is demonstrated that an $8$ h time bin and support sizes of $n=2^6$ and $n=2^2$ for the local and integral approach, respectively, provide a decent correlation ($0.6$-$0.7$) between the logarithms of the measured attenuation and rain rate.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the power model has been fitted over the measured data and the value of the exponent has been compared to the values reported in the literature. The fit shows the best agreement with the Marshall-Palmer model when the rain rate is gathered from the integral along the communication link. Somewhat worse agreement is achieved with the local determination of the rain rate. Results obtained with the rain gauge are the furthest from the expected value, despite the fact that the correlation with the measured attenuation is highest for the rain gauge measurements. The localized information from the rain gauge simply cannot provide enough information to fully characterize the rain conditions along the link.&lt;br /&gt;
&lt;br /&gt;
There are still some open questions to resolve, e.g. why the $8$ h time bin gives the best result, how the correlation could be improved, and whether different statistics could extract more information from the data. All these topics will be addressed in future work.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Acknowledgment=&lt;br /&gt;
The authors acknowledge the financial support from the state budget by the&lt;br /&gt;
Slovenian Research Agency under Grant P2-0095. Attenuation data was collected in the framework of the ESA-PECS project SatProSi-Alpha. Slovenian Environment Agency provided us with the data collected by their weather radars.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Attenuation_due_to_liquid_water_content_in_the_atmosphere&amp;diff=603</id>
		<title>Attenuation due to liquid water content in the atmosphere</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Attenuation_due_to_liquid_water_content_in_the_atmosphere&amp;diff=603"/>
				<updated>2016-11-10T16:23:00Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h1&amp;gt;Correlation between attenuation of 20 GHz satellite communication link and Liquid Water Content in the atmosphere&amp;lt;/h1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[mailto:maks.kolman@student.fmf.uni-lj.si Maks Kolman], [mailto:gregor.kosec@ijs.si Gregor Kosec], Jožef Stefan Institute, Department of Communication Systems, Ljubljana.&lt;br /&gt;
[[:File:mipro_attenuation.pdf|Full paper available for download here.]]&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The effect of Liquid Water Content (LWC), i.e. the mass of the water per volume&lt;br /&gt;
unit of the atmosphere, on the attenuation of a $20$ GHz communication link&lt;br /&gt;
between a ground antenna and communication satellite is tackled in this paper.&lt;br /&gt;
The wavelength of $20$ GHz electromagnetic radiation is comparable to the&lt;br /&gt;
droplet size, consequently the scattering plays an important role in the&lt;br /&gt;
attenuation. To better understand this phenomenon a correlation between&lt;br /&gt;
measured LWC and attenuation is analysed. The LWC is usually estimated from&lt;br /&gt;
pluviograph rain rate measurements, which capture only spatially localized,&lt;br /&gt;
ground-level information about the LWC. In this paper the LWC is extracted&lt;br /&gt;
also from the reflectivity measurements provided by a $5.6$ GHz weather radar&lt;br /&gt;
situated in Lisca, Slovenia. The radar measures reflectivity in 3D and&lt;br /&gt;
therefore a precise spatial dependency of LWC along the communication link is&lt;br /&gt;
considered. The attenuation is measured with an in-house receiver Ljubljana&lt;br /&gt;
Station SatProSi 1 that communicates with a geostationary communication&lt;br /&gt;
satellite ASTRA 3B on the $20$ GHz band.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
&lt;br /&gt;
The increasing demand for higher communication capabilities between terrestrial&lt;br /&gt;
and/or earth-satellite repeaters requires the employment of frequency bands above&lt;br /&gt;
$10$ GHz. At such frequencies the wavelength of electromagnetic&lt;br /&gt;
radiation (EMR) becomes comparable to the size of water droplets in the&lt;br /&gt;
atmosphere. Consequently, EMR attenuation due to scattering on the droplets&lt;br /&gt;
becomes a significant and ultimately the dominant factor in communication&lt;br /&gt;
quality. During their propagation, the EMR waves encounter different water&lt;br /&gt;
structures, where they can be absorbed or scattered, causing attenuation. In&lt;br /&gt;
general, water in all three states is present in the atmosphere, i.e.\ liquid in&lt;br /&gt;
form of rain, clouds and fog, solid in form of snow and ice crystals, and water&lt;br /&gt;
vapour, which makes the air humid. Regardless of its state, water causes considerable&lt;br /&gt;
attenuation that has to be considered when designing the communication&lt;br /&gt;
strategy. Therefore, in order to effectively introduce high-frequency&lt;br /&gt;
communications into operative regimes, adequate knowledge of the&lt;br /&gt;
atmospheric effects on attenuation has to be developed.&lt;br /&gt;
&lt;br /&gt;
In this paper we deal with the attenuation due to the scattering of EMR on a&lt;br /&gt;
myriad of droplets in the atmosphere that is characterised by LWC or more&lt;br /&gt;
precisely with Drop Size Distribution (DSD). A discussion on the physical&lt;br /&gt;
background of the DSD can be found in (E. Villermaux and B. Bossa. Single-drop&lt;br /&gt;
fragmentation determines size distribution of raindrops, 2009), where authors describe&lt;br /&gt;
basic mechanisms behind distribution of droplets. Despite the efforts to&lt;br /&gt;
understand the complex interplay between droplets, ultimately the empirical&lt;br /&gt;
relations are used. The LWC and DSD can be related to the only involved quantity&lt;br /&gt;
that we can reliably measure, the rain rate. Recently it has been demonstrated&lt;br /&gt;
that for high rain rates the site location also plays a role in the DSD due to&lt;br /&gt;
the local climate conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In general, raindrops can be considered as dielectric blobs of water that&lt;br /&gt;
polarize in the presence of an electric field. When introduced to an oscillating&lt;br /&gt;
electric field, such as electromagnetic waves, a droplet of water acts as an&lt;br /&gt;
antenna and re-radiates the received energy in arbitrary direction causing a net&lt;br /&gt;
loss of energy flux towards the receiver. Some part of energy can also be&lt;br /&gt;
absorbed by the raindrop, which results in heating. Absorption is the main cause&lt;br /&gt;
of energy loss when dealing with raindrops large compared to the wavelength,&lt;br /&gt;
whereas scattering is predominant with raindrops smaller than the wavelength.&lt;br /&gt;
The very first model for atmospheric scattering was introduced by Lord Rayleigh,&lt;br /&gt;
who assumed constant spatial polarization within the droplet. Such a&lt;br /&gt;
simplification limits the validity of the model to droplets relatively small&lt;br /&gt;
in comparison to the wavelength of the incident field, i.e.&lt;br /&gt;
approximately up to $5$ GHz when EMR scattering on the rain droplets&lt;br /&gt;
is considered. A more general model was developed by Mie in 1908, where a&lt;br /&gt;
spatial dependent polarization is considered within the droplet, extending the&lt;br /&gt;
validity of the model to higher droplet size/EMR wavelength ratios. Later, a&lt;br /&gt;
popular empirical model was presented in (J.S. Marshall and W.McK. Palmer. The&lt;br /&gt;
distribution of raindrops with size, 1948), where attenuation is related only to&lt;br /&gt;
the rain rate. The model, also referred to as Marshall-Palmer model, is widely&lt;br /&gt;
used in evaluation of rain rate from reflectivity measured by weather radars.&lt;br /&gt;
The Marshall-Palmer model simply states the relation between the attenuation and&lt;br /&gt;
the rain rate in terms of a power function.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this paper we seek a correlation between the LWC and attenuation&lt;br /&gt;
measurements. LWC is extracted from reflectivity measurements provided by a&lt;br /&gt;
weather radar situated in Lisca and operated by Slovenian Environment Agency.&lt;br /&gt;
Attenuation is measured by in-house hardware that monitors the signal strength&lt;br /&gt;
between Ljubljana Station SatProSi 1 and communication satellite ASTRA 3B. The&lt;br /&gt;
main purpose of this paper is therefore to investigate correlation between&lt;br /&gt;
precipitation measured in 3D with the meteorological radar and the measured&lt;br /&gt;
attenuation.&lt;br /&gt;
&lt;br /&gt;
=Governing models=&lt;br /&gt;
&lt;br /&gt;
Before we proceed to measurements some basic relations are discussed.&lt;br /&gt;
&lt;br /&gt;
Attenuation ($A$) is a quantity measured in [dB] that describes the loss of electromagnetic radiation propagating through a medium. It is defined with starting intensity $I_s$ and the intensity received after propagation $I_r$ as&lt;br /&gt;
\[&lt;br /&gt;
A = 10\log_{10}\frac{I_s}{I_r}.&lt;br /&gt;
\]&lt;br /&gt;
The specific attenuation ($\alpha=A/L$) measured in [dB/km] as a function of rain rate ($R$) measured in [mm/h] is commonly modelled as &amp;lt;i&amp;gt;citation&amp;lt;/i&amp;gt;{OndrejFiser}&lt;br /&gt;
\[&lt;br /&gt;
\alpha(R) \sim a \,R^{b} \ .&lt;br /&gt;
\]&lt;br /&gt;
Coefficients $a$ and $b$ are determined empirically by fitting the model to experimental data. In general, the coefficients depend on the frequency and polarization of the incident wave and on the ambient temperature. Some example values for different frequencies are presented in Table 1.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Table 1: Values of the coefficients for the Marshall-Palmer relation $\alpha(R)$ at different frequencies.&lt;br /&gt;
|-&lt;br /&gt;
!f[GHz]||10||12||15||20||25||30 &lt;br /&gt;
|-&lt;br /&gt;
!$a$||0.0094||0.0177||0.0350||0.0722||0.1191||0.1789 &lt;br /&gt;
|-&lt;br /&gt;
!$b$||1.273||1.211||1.143||1.083||1.044||1.007 &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
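With the tabulated coefficients, the power law can be evaluated directly. A small sketch (the dictionary layout and the $5$ km path length are our own illustrative choices):&lt;br /&gt;

```python
# coefficients (a, b) from Table 1, keyed by frequency in GHz
MP = {10: (0.0094, 1.273), 12: (0.0177, 1.211), 15: (0.0350, 1.143),
      20: (0.0722, 1.083), 25: (0.1191, 1.044), 30: (0.1789, 1.007)}

def specific_attenuation(R_mm_h, f_ghz):
    # alpha(R) = a * R^b, in dB/km, for rain rate R in mm/h
    a, b = MP[f_ghz]
    return a * R_mm_h ** b

# moderate rain of 10 mm/h on the 20 GHz link, over a hypothetical 5 km rainy path
alpha = specific_attenuation(10.0, 20)   # dB/km
print(alpha, alpha * 5.0)                # specific and total attenuation
```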
&lt;br /&gt;
The simplest characterization of rain is through the rain rate $R$, measured in [mm/h]. However, the rain rate does not give any information about the type of rain. For example, a storm and a shower might have the same rain rate but be comprised of different droplets. A more descriptive quantity is therefore the Drop Size Distribution (DSD), which, unsurprisingly, describes the distribution of droplet sizes.&lt;br /&gt;
A simple DSD model is presented in (J.S. Marshall and W.McK. Palmer. The distribution of raindrops with size, 1948)&lt;br /&gt;
&lt;br /&gt;
\[&lt;br /&gt;
\begin{equation}&lt;br /&gt;
N(D) = U \exp (-V \, R^{\delta} D),&lt;br /&gt;
\end{equation}&lt;br /&gt;
\label{eq:dsdr}&lt;br /&gt;
\]&lt;br /&gt;
where $D$ stands for the drop diameter measured in [mm], $N(D)$ describes the number of droplets with diameters between $D$ and $D + \mathrm dD$ per unit volume, measured in [$mm^{-1} m^{-3}$], and $R$ is the rain rate measured in [mm/h]. The equation parameters were set to $U=8.3 \cdot 10^3$, $V=4.1$ and $\delta=-0.21$. The DSD was also determined experimentally for different rain rates &amp;lt;i&amp;gt;citation&amp;lt;/i&amp;gt;{OndrejFiser}. The experimental data is presented in &amp;lt;xr id=&amp;quot;fig:dsd&amp;quot;/&amp;gt;, where we can see that the typical droplet diameter is in the range of millimetres. There is a discrepancy between the theoretical and experimental data for very small droplets.&lt;br /&gt;
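The model is easy to evaluate numerically; a short sketch (parameter values as given above; the sample diameters and rain rates are arbitrary):&lt;br /&gt;

```python
import numpy as np

U, V, DELTA = 8.3e3, 4.1, -0.21   # parameter values quoted above

def dsd(D_mm, R_mm_h):
    # N(D): droplets per mm of diameter per m^3 at rain rate R (mm/h)
    return U * np.exp(-V * R_mm_h ** DELTA * D_mm)

# heavier rain flattens the exponential, i.e. shifts weight toward larger drops
for R in (1.0, 10.0):
    print(R, dsd(1.0, R), dsd(3.0, R))
```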
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:dsd&amp;quot;&amp;gt;&lt;br /&gt;
[[File:dsd_manual.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt; DSD measured in Czech Republic (one year measurement, rain rate $R$ is the parameter of particular sets of points) &amp;lt;i&amp;gt;citation&amp;lt;/i&amp;gt;{OndrejFiser}. Lines represent the theoretical value as determined by $(\ref{eq:dsdr})$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Measurements=&lt;br /&gt;
&lt;br /&gt;
== Measurements of signal attenuation==&lt;br /&gt;
&lt;br /&gt;
The Jožef Stefan Institute (JSI) and the European Space Agency (ESA) cooperate in the SatProSi-Alpha project, which includes measuring the attenuation of the communication link between a ground antenna and a satellite, more precisely between the ASTRA 3B satellite and the SatProSi 1 station. ASTRA 3B is a geostationary communication satellite located at $23.5^\circ E$ longitude over the equator. It broadcasts the signal at $20$ GHz, which is received at SatProSi 1 with an in-house receiver, namely a $1.2$ m parabolic antenna with a gain of about $47$ dB positioned on top of the JSI main building. SatProSi 1 has measured the attenuation every $0.15$ seconds since 1 October 2011, resulting in over $500000$ records daily.&lt;br /&gt;
&lt;br /&gt;
== Measurements of rainfall rate ==&lt;br /&gt;
Two sources of rain measurements are used in this paper. The first one is a pluviograph installed locally in the proximity of the antenna. The rain rate is measured every five minutes.&lt;br /&gt;
&lt;br /&gt;
Other, much more sophisticated measurements of rain characteristics are provided by meteorological radars. The basic idea behind such radars is to measure the EMR that reflects from water droplets. The measured reflectivity is then related to the rain rate through the Marshall-Palmer relation.&lt;br /&gt;
Radar reflectivity factor $Z$ is formally defined as the sum of sixth powers of drop diameters over all droplets per unit of volume, which can be converted into an integral&lt;br /&gt;
\[&lt;br /&gt;
Z = \int_0^\infty N(D)D^6 \mathrm dD \ .&lt;br /&gt;
\]&lt;br /&gt;
Note that the form of the relation follows the Rayleigh scattering model. $Z$ is usually measured in units of $ mm^6m^{-3} $. When conducting measurements, a so-called Equivalent Reflectivity Factor&lt;br /&gt;
\[&lt;br /&gt;
Z_e = \frac{\eta \lambda^4}{0.93 \pi^5}&lt;br /&gt;
\]&lt;br /&gt;
is used, where $\eta$ denotes the reflectivity, $\lambda$ the radar wavelength and $0.93$ the dielectric factor of water. As the name suggests, the two are equivalent for wavelengths large compared to the drop sizes, as in the Rayleigh model.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Reflectivity factor and rainfall rate are related through Marshall-Palmer relation as&lt;br /&gt;
\[&lt;br /&gt;
Z_{[mm^6m^{-3}]} = \tilde a R_{[mm/h]}^{\tilde{b}}\ ,&lt;br /&gt;
\]&lt;br /&gt;
where $Z_{[mm^6m^{-3}]}$ is the reflectivity factor measured in $mm^6m^{-3}$ and $R_{[mm/h]}$ is the rainfall rate measured in mm/h. In general, the empirical coefficients $\tilde a$ and $\tilde b$ vary with location and/or season; however, they are independent of the rainfall rate $R$. The most widely used values are $\tilde a=200$ and $\tilde b=1.6$.&lt;br /&gt;
Meteorologists rather use a dimensionless logarithmic scale and define&lt;br /&gt;
\[&lt;br /&gt;
\mathit{dBZ} = 10 \, \log_{10} \frac{Z}{Z_0} = 10 \, \log_{10} Z_{[mm^6m^{-3}]}\ ,&lt;br /&gt;
\]&lt;br /&gt;
where $Z_0$ is reflectivity factor equivalent to one droplet of diameter $1$ mm per cubic meter.&lt;br /&gt;
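Combining the dBZ definition with the Z-R relation gives a direct conversion from a radar measurement to a rain rate; a sketch (using the standard values $\tilde a=200$, $\tilde b=1.6$ as defaults):&lt;br /&gt;

```python
def dbz_to_rain_rate(dbz, a=200.0, b=1.6):
    # invert dBZ to Z, then Z to R through the Marshall-Palmer Z-R relation
    Z = 10.0 ** (dbz / 10.0)        # reflectivity factor in mm^6 m^-3
    return (Z / a) ** (1.0 / b)     # rain rate in mm/h

print(dbz_to_rain_rate(23.0))   # roughly 1 mm/h
print(dbz_to_rain_rate(40.0))   # heavy rain, above 10 mm/h
```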
&lt;br /&gt;
The meteorological radar at Lisca emits short ($1$ microsecond) electromagnetic pulses at a frequency of $5.62$ GHz and measures the strength of the reflection from different points along their path. The radar collects roughly $650000$ spatial data points per atmosphere scan, which it performs every $10$ minutes. The exact location of each measurement is determined from the beam direction and the time it takes for the signal to reflect back to the radar.&lt;br /&gt;
&lt;br /&gt;
In addition to reflectivity, the radar also measures the radial velocity of the reflecting particles via the Doppler shift of the received EMR, but this is a feature we will not be using.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data analysis=&lt;br /&gt;
The analysis begins with handling approximately $20$ GB of radar data for the academic year 2014/15, accompanied by $3$ GB of signal attenuation data for the same time period and approximately $5$ GB of attenuation and local rain gauge data for the years 2012 and 2013.&lt;br /&gt;
&lt;br /&gt;
== Preprocessing the radar spatial data ==&lt;br /&gt;
&lt;br /&gt;
The radar data was first reduced by eliminating spatial points far away from our point of interest, namely the JSI main building where the antenna is located. The geostationary orbit is $35786$ km above sea level, therefore the link between the antenna and the satellite has a steep elevation angle of $36.3^\circ$. In fact, just $20$ km south of the antenna the ray rises above $15$ km, which is the upper boundary for all weather activity. Knowing this, a smaller area of the map can be safely cropped out, reducing the number of data points from around $650000$ to approximately $6500$ for each radar scan, covering a $40 \text{ km} \times 40 \text{ km}$ area.&lt;br /&gt;
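The cropping bound follows from elementary geometry; a sketch (flat-Earth approximation, which slightly underestimates the true height because Earth curvature is ignored):&lt;br /&gt;

```python
import math

ELEVATION_DEG = 36.3   # elevation angle of the antenna-satellite link

def ray_height_km(ground_dist_km, elev_deg=ELEVATION_DEG):
    # height of the ray above the antenna after a given ground distance
    return ground_dist_km * math.tan(math.radians(elev_deg))

print(ray_height_km(20.0))   # close to 15 km, the ceiling of weather activity
```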
&lt;br /&gt;
Although we have already greatly reduced the original data size, we must still reduce thousands of points to something tangible. The positions of both the antenna and the satellite are known at all times, a lovely consequence of them being stationary; therefore the link between them can be easily traced. Roughly $150$ points on the ray path are used as a discrete representation of the link, referred to as link points in further discussion. For each link point the median of the $n$ closest radar measurements is computed as a representative value.&lt;br /&gt;
The other way of extracting the reflectivity factor is simply to take the $n$ points closest to the antenna and select their median value. A visualisation of both methods is presented in &amp;lt;xr id=&amp;quot;fig:support_presentation&amp;quot;/&amp;gt;.&lt;br /&gt;
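Both extraction methods reduce to nearest-neighbour queries against one radar scan; a minimal numpy sketch with random stand-in data (in practice a kD tree, e.g. scipy.spatial.cKDTree, answers the same queries much faster than this brute-force version):&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
radar_xyz = rng.uniform(0.0, 40.0, size=(6500, 3))   # stand-in for one cropped scan
radar_dbz = rng.uniform(0.0, 50.0, size=6500)        # stand-in reflectivity values

def nearest_median(point, n):
    # median reflectivity of the n radar points closest to the given point
    d2 = np.sum((radar_xyz - point) ** 2, axis=1)
    idx = np.argpartition(d2, n)[:n]
    return np.median(radar_dbz[idx])

antenna = np.array([20.0, 20.0, 0.0])

# local method: one big support around the antenna
local = nearest_median(antenna, 64)

# integral method: a small support at each of ~150 points along the ray
direction = np.array([0.0, 0.806, 0.592])            # unit vector, 36.3 deg elevation
link_points = antenna + np.linspace(0.0, 25.0, 150)[:, None] * direction
integral = np.mean([nearest_median(p, 4) for p in link_points])
print(local, integral)
```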
&lt;br /&gt;
We are now left with several scalar quantities as functions of time: the antenna attenuation every $0.15$ s, the local rain gauge every $5$ min and various extractions of the reflectivity factor every $10$ min. Note that the radar values are not averaged over $10$ minutes; the radar simply needs $10$ minutes to complete a single scan.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:att_ref_time&amp;quot;/&amp;gt; an example of the rainfall rate measured with the weather radar and the measured attenuation for a three-day period is presented. A correlation between the quantities is clearly visible in the figure, but a closer inspection is needed to reveal more details about it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:support_presentation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:support.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Positions of radar measurements. The blue rectangle is the location of the antenna and the rain gauge. The $ 64 $ points closest to the antenna are enclosed in a red sphere and marked as red circles. Red dots mark the remainder of $ 512 $ closest points. The green line is the ray path between antenna and satellite with green circles representing corresponding support nodes for support size $n=4$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:att_ref_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_time_flow_1800_64_local.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Measured antenna attenuation and rain rate extracted from $ 64 $ radar measurements closest to the antenna. Both datasets have been sorted into $ 30 $ minute bins. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Correlation between rain and attenuation ==&lt;br /&gt;
In order to find a relation between the rain rate and the electromagnetic attenuation, measurements of both quantities must be paired. There is no obvious way of doing this, since the quantities are measured at vastly different time scales. We ended up dividing time into bins of duration $t_0$ and pairing the measurements that fall within the same bin. The maximum value of each quantity was selected as the representative for the given time period.&lt;br /&gt;
&lt;br /&gt;
The correlation coefficient between two variables $X$ and $Y$ can be calculated using&lt;br /&gt;
\[&lt;br /&gt;
corr(X, Y)=\frac{\text{mean}((X - \text{mean}(X))\cdot(Y - \text{mean}(Y)))}{\text{std}(X)\text{std}(Y)}&lt;br /&gt;
\]&lt;br /&gt;
and is a good quantity for determining linear dependence between $X$ and $Y$.&lt;br /&gt;
&lt;br /&gt;
According to the Marshall-Palmer power law, a linear relation exists between the logarithms of the rain rate and the specific attenuation.&lt;br /&gt;
Our measurements are of the total attenuation $A$ and not of the specific attenuation, so we must adjust the equation. We assume a typical distance $L$ as a connecting factor between the two, which gives us&lt;br /&gt;
\[&lt;br /&gt;
\log_{10}A = \log_{10}La + b\log_{10}R .&lt;br /&gt;
\]&lt;br /&gt;
The exact value of $L$ is not relevant, as only the parameter $b$ interests us. The slope on a log-log graph, such as in &amp;lt;xr id=&amp;quot;fig:attenuation_rainrate&amp;quot;/&amp;gt;, is therefore equal to the model parameter $b$. We used a least-squares linear fit on each set of data to obtain the corresponding values of $b$.&lt;br /&gt;
&lt;br /&gt;
In addition, correlation between logarithmic values of rain rate and attenuation&lt;br /&gt;
\[&lt;br /&gt;
corr\left(\log_{10}A_{[\text{dB}]}, \log_{10}R_{[\text{mm/h}]}\right)&lt;br /&gt;
\]&lt;br /&gt;
is used as a quality measure of their relation.&lt;br /&gt;
&lt;br /&gt;
=Results and discussion=&lt;br /&gt;
&lt;br /&gt;
Once the attenuation and rainfall data are paired, we can scatter the points on a graph.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:attenuation_rainrate&amp;quot;/&amp;gt; the attenuation against the rain rate at an $8$ h bin size is presented. A support size of $n=2^6$ is used for the local radar representation and $n=2^2$ for the integral representation. The correlation is clearly visible, although not as tight as one would expect if the measurements and the rain rate-reflectivity model were perfect.&lt;br /&gt;
Since we introduced two free parameters, namely the time bin size $t_0$ and the spatial support size $n$ (one for the integral and one for the local radar representation), a sensitivity analysis with respect to these parameters is needed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:attenuation_rainrate&amp;quot;&amp;gt;&lt;br /&gt;
[[File:scatter_all.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Attenuation dependency on the rain rate measured in three different ways. Local rain gauge (blue), path integration on each step selecting closest $ 4 $ points (green) and from $ 64 $ points closest to the antenna (red). All measurements have been put into $ 8 $ h bins. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:local_correlation&amp;quot;/&amp;gt; a correlation with respect to the number of local support nodes and time bin size is presented. The best correlation is obtained with $8$ h time bins and a local $n=2^6$ support size.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:local_correlation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_contour_local.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation with respect to the &amp;lt;b&amp;gt;local&amp;lt;/b&amp;gt; support size $n$ and the time bin size $t_0$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, the correlation with respect to the number of integral support nodes and the time bin size is presented in &amp;lt;xr id=&amp;quot;fig:integrate_correlation&amp;quot;/&amp;gt;. Again, the best correlation is obtained with $8$ h time bins; with the integral model, however, a small support, i.e. $n=2^2$, already suffices to obtain a fair correlation. Such behaviour is expected: in the integral mode the support moves along the ray, so there is no need to capture vast regions for each link point.&lt;br /&gt;
In the local approach, on the other hand, only one support is used, and it therefore has to be much bigger to capture enough detail about the rain conditions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:integrate_correlation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_contour_integrate.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation with respect to the number of &amp;lt;b&amp;gt;integral&amp;lt;/b&amp;gt; support size $n$ and time bin size $t_0$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To compare the measurements acquired with the radar to those acquired with the local rain gauge, a simpler presentation of the correlation is shown in &amp;lt;xr id=&amp;quot;fig:correlation_time&amp;quot;/&amp;gt;. One set of data has the rain rate extracted from the radar using the integral method with support size $4$, and two sets use the closest $n=64$ or $n=512$ nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:correlation_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_time.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation as a function of time bin size $t_0$ for different ways of extracting the rain rate. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the next step we compare our measurements with the Marshall-Palmer model, specifically the exponent $b$. According to Table 1, at $20$ GHz the value $b_0=1.083$ should hold.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:fit_time&amp;quot;/&amp;gt; the differences between our measurements and $b_0$ with respect to the time bin size are presented for the same sets of data as were used in the correlation analysis of &amp;lt;xr id=&amp;quot;fig:correlation_time&amp;quot;/&amp;gt;. An order of magnitude improvement is visible between the local rain gauge and the data extracted from the radar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fit_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_fit_time.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Exponent in attenuation to rainfall relation $b$ compared to value $b_0$ from Table 1 for $ 20 $ GHz as a function of bin duration $t_0$ for a few ways of extracting rainfall. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
This paper deals with the correlation between the EMR attenuation due to scattering on the ASTRA 3B - SatProSi 1 link and the measured rain rate. The main objective of the paper is to analyse the related measurements and to compare the results with the Marshall-Palmer model.&lt;br /&gt;
&lt;br /&gt;
The attenuation is measured directly with in-house equipment at a relatively high time resolution ($0.15$ s).&lt;br /&gt;
&lt;br /&gt;
The rain characteristics are measured with a rain gauge positioned next to the antenna and with the national meteorological radar. The rain gauge measures the average rain rate every five minutes at a single position, while the radar provides a full 3D scan of reflectivity every $10$ minutes.&lt;br /&gt;
&lt;br /&gt;
Although the attenuation depends mainly on the DSD, the rain rate is used as the reference quantity, since it is much more descriptive as well as easier to measure. The reflectivity measured with the radar is therefore transformed to the rain rate with the Marshall-Palmer relation. A more direct approach would be to relate the attenuation to the measured reflectivity directly; however, that would not change any of the conclusions, since, on a logarithmic scale, a simple power relation between reflectivity and rain rate acts only as a linear transformation.&lt;br /&gt;
&lt;br /&gt;
The analysis of support size and time bin size showed a strong influence of those two parameters on the correlation. It is demonstrated that $8$ h time bins and support sizes of $n=2^6$ and $n=2^2$ for the local and integral approach, respectively, provide a decent correlation ($0.6-0.7$) between the logarithms of measured attenuation and rain rate.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the power model has been fitted to the measured data and the value of the exponent has been compared to the values reported in the literature. The best agreement with the Marshall-Palmer model is obtained when the rain rate is gathered from the integral along the communication link. Somewhat worse agreement is achieved with a local determination of the rain rate. Results obtained with the rain gauge are the furthest from the expected value, despite the fact that the correlation with the measured attenuation is the highest for the rain gauge measurements. The localized information from the rain gauge simply cannot fully characterize the rain conditions along the link.&lt;br /&gt;
&lt;br /&gt;
There are still some open questions to resolve, e.g. why the $8$ h time bin gives the best result, and how the correlation could be improved, perhaps by using different statistics to extract more information from the data. These topics will be addressed in future work.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Acknowledgment=&lt;br /&gt;
The authors acknowledge the financial support from the state budget by the&lt;br /&gt;
Slovenian Research Agency under Grant P2-0095. Attenuation data was collected in the framework of the ESA-PECS project SatProSi-Alpha. Slovenian Environment Agency provided us with the data collected by their weather radars.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Attenuation_due_to_liquid_water_content_in_the_atmosphere&amp;diff=602</id>
		<title>Attenuation due to liquid water content in the atmosphere</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Attenuation_due_to_liquid_water_content_in_the_atmosphere&amp;diff=602"/>
				<updated>2016-11-10T16:02:46Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h1&amp;gt;Correlation between attenuation of 20 GHz satellite communication link and Liquid Water Content in the atmosphere&amp;lt;/h1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[mailto:maks.kolman@student.fmf.uni-lj.si Maks Kolman], [mailto:gregor.kosec@ijs.si Gregor Kosec], Jožef Stefan Institute, Department of Communication Systems, Ljubljana.&lt;br /&gt;
[[:File:mipro_attenuation.pdf|Full paper available for download here.]]&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The effect of Liquid Water Content (LWC), i.e. the mass of the water per volume&lt;br /&gt;
unit of the atmosphere, on the attenuation of a $20$ GHz communication link&lt;br /&gt;
between a ground antenna and communication satellite is tackled in this paper.&lt;br /&gt;
The wavelength of $20$ GHz electromagnetic radiation is comparable to the&lt;br /&gt;
droplet size; consequently, scattering plays an important role in the&lt;br /&gt;
attenuation. To better understand this phenomenon, the correlation between&lt;br /&gt;
measured LWC and attenuation is analysed. The LWC is usually estimated from&lt;br /&gt;
pluviograph rain rate measurements, which capture only spatially localized&lt;br /&gt;
and ground level information about the LWC. In this paper the LWC is extracted&lt;br /&gt;
also from the reflectivity measurements provided by a $5.6$ GHz weather radar&lt;br /&gt;
situated in Lisca, Slovenia. The radar measures reflectivity in 3D and&lt;br /&gt;
therefore a precise spatial dependency of LWC along the communication link is&lt;br /&gt;
considered. The attenuation is measured with an in-house receiver Ljubljana&lt;br /&gt;
Station SatProSi 1 that communicates with a geostationary communication&lt;br /&gt;
satellite ASTRA 3B on the $20$ GHz band.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
&lt;br /&gt;
The increasing demands for higher communication capabilities between terrestrial&lt;br /&gt;
and/or earth-satellite repeaters require the employment of frequency bands above&lt;br /&gt;
$10$ GHz. At such frequencies the wavelength of electromagnetic&lt;br /&gt;
radiation (EMR) becomes comparable to the size of water droplets in the&lt;br /&gt;
atmosphere. Consequently, EMR attenuation due to scattering on the droplets&lt;br /&gt;
becomes a significant and ultimately dominant factor in the communication&lt;br /&gt;
quality. During their propagation, the EMR waves encounter different water&lt;br /&gt;
structures, where they can be absorbed or scattered, causing attenuation. In&lt;br /&gt;
general, water in all three states is present in the atmosphere, i.e. liquid in&lt;br /&gt;
the form of rain, clouds and fog, solid in the form of snow and ice crystals,&lt;br /&gt;
and water vapour, which makes the air humid. Regardless of the state, water&lt;br /&gt;
causes considerable attenuation that has to be considered in the design of the&lt;br /&gt;
communication strategy. Therefore, in order to effectively introduce high&lt;br /&gt;
frequency communications into operative regimes, adequate knowledge about&lt;br /&gt;
atmospheric effects on the attenuation has to be developed.&lt;br /&gt;
&lt;br /&gt;
In this paper we deal with the attenuation due to the scattering of EMR on a&lt;br /&gt;
myriad of droplets in the atmosphere, which is characterised by the LWC or, more&lt;br /&gt;
precisely, by the Drop Size Distribution (DSD). A discussion on the physical&lt;br /&gt;
background of the DSD can be found in (E. Villermaux and B. Bossa. Single-drop&lt;br /&gt;
fragmentation determines size distribution of raindrops, 2009), where the&lt;br /&gt;
authors describe the basic mechanisms behind the distribution of droplets.&lt;br /&gt;
Despite the efforts to understand the complex interplay between droplets,&lt;br /&gt;
ultimately empirical relations are used. The LWC and DSD can be related to the&lt;br /&gt;
only involved quantity that we can reliably measure, the rain rate. Recently it&lt;br /&gt;
has been demonstrated that for high rain rates the site location also plays a&lt;br /&gt;
role in the DSD due to local climate conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In general, raindrops can be considered as dielectric blobs of water that&lt;br /&gt;
polarize in the presence of an electric field. When introduced to an oscillating&lt;br /&gt;
electric field, such as electromagnetic waves, a droplet of water acts as an&lt;br /&gt;
antenna and re-radiates the received energy in arbitrary directions, causing a net&lt;br /&gt;
loss of energy flux towards the receiver. Some part of energy can also be&lt;br /&gt;
absorbed by the raindrop, which results in heating. Absorption is the main cause&lt;br /&gt;
of energy loss when dealing with raindrops large compared to the wavelength,&lt;br /&gt;
whereas scattering is predominant with raindrops smaller than the wavelength.&lt;br /&gt;
The very first model for atmospheric scattering was introduced by Lord Rayleigh,&lt;br /&gt;
who assumed constant spatial polarization within the droplet. Such a&lt;br /&gt;
simplification limits the validity of the model to droplets relatively small&lt;br /&gt;
in comparison to the wavelength of the incident field, i.e.&lt;br /&gt;
approximately up to $5$ GHz when EMR scattering on rain droplets&lt;br /&gt;
is considered. A more general model was developed by Mie in 1908, where a&lt;br /&gt;
spatially dependent polarization is considered within the droplet, extending the&lt;br /&gt;
validity of the model to higher droplet size/EMR wavelength ratios. Later, a&lt;br /&gt;
popular empirical model was presented in (J.S. Marshall and W.McK. Palmer. The&lt;br /&gt;
distribution of raindrops with size, 1948), where attenuation is related only to&lt;br /&gt;
the rain rate. The model, also referred to as the Marshall-Palmer model, is&lt;br /&gt;
widely used in the evaluation of rain rate from reflectivity measured by weather&lt;br /&gt;
radars. The Marshall-Palmer model simply states the relation between the&lt;br /&gt;
attenuation and the rain rate in terms of a power function.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this paper we seek a correlation between the LWC and attenuation&lt;br /&gt;
measurements. LWC is extracted from reflectivity measurements provided by a&lt;br /&gt;
weather radar situated in Lisca and operated by the Slovenian Environment Agency.&lt;br /&gt;
Attenuation is measured by in-house hardware that monitors the signal strength&lt;br /&gt;
between Ljubljana Station SatProSi 1 and communication satellite ASTRA 3B. The&lt;br /&gt;
main purpose of this paper is therefore to investigate the correlation between&lt;br /&gt;
precipitation measured in 3D with the meteorological radar and the measured&lt;br /&gt;
attenuation.&lt;br /&gt;
&lt;br /&gt;
=Governing models=&lt;br /&gt;
&lt;br /&gt;
Before we proceed to the measurements, some basic relations are discussed.&lt;br /&gt;
&lt;br /&gt;
Attenuation ($A$) is a quantity measured in [dB] that describes the loss of electromagnetic radiation propagating through a medium. It is defined in terms of the transmitted intensity $I_s$ and the intensity $I_r$ received after propagation as&lt;br /&gt;
\[&lt;br /&gt;
A = 10\log_{10}\frac{I_s}{I_r}.&lt;br /&gt;
\]&lt;br /&gt;
The specific attenuation ($\alpha=A/L$) measured in [dB/km] as a function of rain rate ($R$) measured in [mm/h] is commonly modelled as &amp;lt;i&amp;gt;citation&amp;lt;/i&amp;gt;{OndrejFiser}&lt;br /&gt;
\[&lt;br /&gt;
\alpha(R) \sim a \,R^{b} \ .&lt;br /&gt;
\]&lt;br /&gt;
Coefficients $a$ and $b$ are determined empirically by fitting the model to experimental data. In general, the coefficients depend on the incident wave frequency, polarization and ambient temperature. Some example values for different frequencies are presented in Table (&amp;lt;a href=&amp;quot;#tab:marshalpalmer&amp;quot;&amp;gt;tab:marshalpalmer&amp;lt;/a&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;table&amp;gt;&lt;br /&gt;
     &amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;f[GHz]&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;10&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;12&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;15&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;20&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;25&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;30 &amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
     &amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;$a$&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;0.0094&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;0.0177&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;0.0350&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;0.0722&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;0.1191&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;0.1789 &amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
     &amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;$b$&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;1.273&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;1.211&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;1.143&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;1.083&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;1.044&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;1.007 &amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
 &amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;caption&amp;quot;&amp;gt;Values of the coefficients in the Marshall-Palmer relation $\alpha(R)$ at different frequencies.&lt;br /&gt;
&amp;lt;a name=&amp;quot;tab:marshalpalmer&amp;quot;&amp;gt;tab:marshalpalmer&amp;lt;/a&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
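As a quick numeric sketch of this power law (our illustration, not the paper's code; the function name is ours), the $20$ GHz coefficients from the table give roughly $0.87$ dB/km at a rain rate of $10$ mm/h:&lt;br /&gt;

```python
# Specific attenuation alpha(R) = a * R**b [dB/km] (Marshall-Palmer power law).
# Default coefficients a, b are the 20 GHz values from the table above.
def specific_attenuation(R, a=0.0722, b=1.083):
    """Specific attenuation in dB/km for rain rate R in mm/h."""
    return a * R**b

# Moderate rain of 10 mm/h at 20 GHz:
alpha = specific_attenuation(10.0)   # ~0.87 dB/km
# Total attenuation over an effective rainy path of length L [km] is A = alpha * L.
A = alpha * 5.0                      # ~4.4 dB over a 5 km path
```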
&lt;br /&gt;
&lt;br /&gt;
The simplest characterization of rain is through the rain rate $R$, measured in [mm/h]. However, the rain rate does not give any information about the type of rain. For example, a storm and a shower might have the same rain rate, but be comprised of different droplets. Therefore, a more descriptive quantity is the drop size distribution (DSD) that, unsurprisingly, describes the distribution of droplet sizes.&lt;br /&gt;
A simple DSD model is presented in (J.S. Marshall and W.McK. Palmer. The distribution of raindrops with size, 1948)&lt;br /&gt;
&lt;br /&gt;
\[&lt;br /&gt;
\begin{equation}&lt;br /&gt;
N(D) = U \exp (-V \, R^{\delta} D),&lt;br /&gt;
\end{equation}&lt;br /&gt;
\label{eq:dsdr}&lt;br /&gt;
\]&lt;br /&gt;
where $D$ stands for the drop diameter measured in [mm], $N(D)$ describes the number of droplets with diameters between $D$ and $D + \mathrm dD$ per unit volume, measured in [$mm^{-1} m^{-3}$], and $R$ is the rain rate measured in [mm/h]. The values of the equation parameters were set to $U=8.3 \cdot 10^3$, $V=4.1$ and $\delta=-0.21$. The DSD was also determined experimentally for different rain rates &amp;lt;i&amp;gt;citation&amp;lt;/i&amp;gt;{OndrejFiser}. The experimental data is presented in &amp;lt;xr id=&amp;quot;fig:dsd&amp;quot;/&amp;gt;, where we can see that the typical diameter of droplets is in the range of a few mm. There is a discrepancy between the theoretical and experimental data for very small droplets.&lt;br /&gt;
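For illustration (our sketch, not the paper's code), the DSD model can be evaluated directly with the parameter values stated above; the decay constant $V R^{\delta}$ sets the characteristic drop diameter:&lt;br /&gt;

```python
import math

# Marshall-Palmer DSD: N(D) = U * exp(-V * R**DELTA * D),
# with the parameter values from the text.
U, V, DELTA = 8.3e3, 4.1, -0.21

def dsd(D, R):
    """Number density [mm^-1 m^-3] of drops of diameter D [mm] at rain rate R [mm/h]."""
    return U * math.exp(-V * R**DELTA * D)

# At R = 5 mm/h the characteristic diameter 1 / (V * R**DELTA) is about 0.34 mm,
# so the distribution decays quickly for mm-sized drops:
dsd(0.5, 5.0)   # larger than dsd(1.0, 5.0)
```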
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:dsd&amp;quot;&amp;gt;&lt;br /&gt;
[[File:dsd_manual.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt; DSD measured in Czech Republic (one year measurement, rain rate $R$ is the parameter of particular sets of points) &amp;lt;i&amp;gt;citation&amp;lt;/i&amp;gt;{OndrejFiser}. Lines represent the theoretical value as determined by $(\ref{eq:dsdr})$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Measurements=&lt;br /&gt;
&lt;br /&gt;
== Measurements of signal attenuation==&lt;br /&gt;
&lt;br /&gt;
Jožef Stefan Institute (JSI) and the European Space Agency (ESA) cooperate in the SatProSi-Alpha project, which includes measuring the attenuation of the communication link between a ground antenna and a satellite, more precisely between the ASTRA 3B satellite and the SatProSi 1 station. The ASTRA 3B is a geostationary communication satellite located at $23.5^\circ E$ longitude over the equator. It broadcasts the signal at $20$ GHz, which is received at SatProSi 1 with an in-house receiver, namely a $1.2$ m parabolic antenna with a gain of about $47$ dB positioned on top of the JSI main building. SatProSi 1 has measured the attenuation every $0.15$ seconds since 1 October 2011, resulting in over $500000$ records daily.&lt;br /&gt;
&lt;br /&gt;
== Measurements of rainfall rate ==&lt;br /&gt;
Two sources of rain measurements are used in this paper. The first one is a pluviograph installed locally in the proximity of the antenna. The rain rate is measured every five minutes.&lt;br /&gt;
&lt;br /&gt;
Other, much more sophisticated measurements of the rain characteristics are provided by meteorological radars. The basic idea behind such radars is to measure the EMR that reflects from water droplets. The measured reflectivity is then related to the rain rate via the Marshall-Palmer relation.&lt;br /&gt;
The radar reflectivity factor $Z$ is formally defined as the sum of the sixth powers of the drop diameters over all droplets per unit volume, which can be written as the integral&lt;br /&gt;
\[&lt;br /&gt;
Z = \int_0^\infty N(D)D^6 \mathrm dD \ .&lt;br /&gt;
\]&lt;br /&gt;
Note that the form of the relation follows the Rayleigh scattering model. $Z$ is usually measured in units of $mm^6m^{-3}$. When conducting measurements, a so-called equivalent reflectivity factor&lt;br /&gt;
\[&lt;br /&gt;
Z_e = \frac{\eta \lambda^4}{0.93 \pi^5}&lt;br /&gt;
\]&lt;br /&gt;
is used, where $\eta$ denotes the reflectivity, $\lambda$ is the radar wavelength and $0.93$ stands for the dielectric factor of water. As the name suggests, the two are equivalent for wavelengths large compared to the drop sizes, as in the Rayleigh model.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reflectivity factor and the rainfall rate are related through the Marshall-Palmer relation as&lt;br /&gt;
\[&lt;br /&gt;
Z_{[mm^6m^{-3}]} = \tilde a R_{[mm/h]}^{\tilde{b}}\ ,&lt;br /&gt;
\]&lt;br /&gt;
where $Z_{[mm^6m^{-3}]}$ is the reflectivity factor measured in $mm^6m^{-3}$ and $R_{[mm/h]}$ is the rainfall rate measured in mm/h. In general, the empirical coefficients $\tilde a$ and $\tilde b$ vary with location and/or season; however, they are independent of the rainfall rate $R$. The most widely used values are $\tilde a=200$ and $\tilde b=1.6$.&lt;br /&gt;
Meteorologists rather use a dimensionless logarithmic scale and define&lt;br /&gt;
\[&lt;br /&gt;
\mathit{dBZ} = 10 \, \log_{10} \frac{Z}{Z_0} = 10 \, \log_{10} Z_{[mm^6m^{-3}]}\ ,&lt;br /&gt;
\]&lt;br /&gt;
where $Z_0$ is reflectivity factor equivalent to one droplet of diameter $1$ mm per cubic meter.&lt;br /&gt;
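The relations above can be cross-checked numerically. The sketch below is our illustration (function names are ours): for the exponential Marshall-Palmer DSD the integral $\int_0^\infty U e^{-\Lambda D} D^6\,\mathrm dD$ evaluates analytically to $720\,U/\Lambda^7$ with $\Lambda = V R^{\delta}$, which can be compared with the empirical $Z = 200\,R^{1.6}$ and converted to $\mathit{dBZ}$:&lt;br /&gt;

```python
import math

# Marshall-Palmer DSD parameters from the text: N(D) = U * exp(-V * R**DELTA * D).
U, V, DELTA = 8.3e3, 4.1, -0.21

def z_from_dsd(R):
    """Z = integral of N(D) * D**6 dD, analytic for an exponential DSD:
    U * 6! / lam**7, with lam = V * R**DELTA [1/mm]. Result in mm^6 m^-3."""
    lam = V * R**DELTA
    return U * math.factorial(6) / lam**7

def z_marshall_palmer(R, a_=200.0, b_=1.6):
    """Empirical Z-R relation Z = a * R**b."""
    return a_ * R**b_

def to_dBZ(Z):
    """Dimensionless logarithmic reflectivity."""
    return 10.0 * math.log10(Z)

def rain_rate_from_dBZ(dbz, a_=200.0, b_=1.6):
    """Invert the Z-R relation: R = (Z / a)**(1/b)."""
    return (10.0 ** (dbz / 10.0) / a_) ** (1.0 / b_)

# At R = 1 mm/h the two routes agree to within a factor of ~1.5:
# z_from_dsd(1.0) ~ 307 mm^6 m^-3 (~25 dBZ) vs z_marshall_palmer(1.0) = 200 (~23 dBZ).
```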
&lt;br /&gt;
The meteorological radar at Lisca emits short ($1$ microsecond) electromagnetic pulses at a frequency of $5.62$ GHz and measures the strength of the reflection from different points along their path. It collects roughly $650000$ spatial data points per atmosphere scan, which it performs every $10$ minutes. The exact location of each measurement is determined from the antenna direction and the time it takes for the signal to reflect back to the radar.&lt;br /&gt;
&lt;br /&gt;
In addition to reflectivity, the radar also measures the radial velocity of the reflecting particles via the Doppler shift of the received EMR, but we will not use this feature.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data analysis=&lt;br /&gt;
The analysis begins with handling approximately $20$ GB of radar data for the academic year 2014/15, accompanied by $3$ GB of signal attenuation data for the same period and approximately $5$ GB of attenuation and local rain gauge data for the years 2012 and 2013.&lt;br /&gt;
&lt;br /&gt;
== Preprocessing the radar spatial data ==&lt;br /&gt;
&lt;br /&gt;
Radar data was first reduced by eliminating spatial points far away from our point of interest, namely the JSI main building where the antenna is located. The geostationary orbit is $35786$ km above sea level; therefore the link between the antenna and the satellite has a steep elevation angle of $36.3^\circ$. In fact, just $20$ km south of the antenna the ray rises above $15$ km, the upper boundary for all weather activity. Knowing this, a smaller area of the map can be safely cropped out, reducing the number of data points from around $650000$ to approximately $6500$ for each radar scan, covering a $40 \text{km} \times 40 \text{km}$ area.&lt;br /&gt;
&lt;br /&gt;
Although we have already greatly reduced the original data size, we must still reduce thousands of points into something tangible. The positions of both the antenna and the satellite are known at all times, a lovely consequence of them being stationary; therefore the link between them can be easily traced. Roughly $150$ points on the ray path are used as a discrete representation of the link, referred to as link points in the following discussion. For each link point, the median of the $n$ closest radar measurements is computed as a representative value.&lt;br /&gt;
The other way of extracting the reflectivity factor is simply to take the $n$ points closest to the antenna and select their median value. A visualisation of both methods is presented in &amp;lt;xr id=&amp;quot;fig:support_presentation&amp;quot;/&amp;gt;.&lt;br /&gt;
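Both extraction methods can be sketched with a brute-force nearest-neighbour search (our illustration, with hypothetical names and toy data; a real implementation would use a spatial index such as a k-d tree for the queries):&lt;br /&gt;

```python
import math
import statistics

def nearest_median(points, center, n):
    """Median reflectivity of the n radar points closest to `center`.
    `points` is a list of (x, y, z, refl) tuples; coordinates in km."""
    ranked = sorted(points, key=lambda p: math.dist(p[:3], center))
    return statistics.median(p[3] for p in ranked[:n])

def link_profile(points, link_points, n=4):
    """Representative reflectivity along the ray: one median per link point."""
    return [nearest_median(points, lp, n) for lp in link_points]

# Toy data: three radar points, local support of size n=2 around the origin.
pts = [(0, 0, 0, 10.0), (1, 0, 0, 30.0), (5, 5, 5, 99.0)]
nearest_median(pts, (0, 0, 0), 2)   # median of {10.0, 30.0} = 20.0
```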
&lt;br /&gt;
We are now left with multiple scalar quantities as functions of time: antenna attenuation every $0.15$ s, local rain gauge readings every $5$ min, and various extractions of the reflectivity factor every $10$ min. Note that the radar values are not averaged over $10$ minutes; the radar simply needs $10$ minutes to complete a single scan.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:att_ref_time&amp;quot;/&amp;gt; an example of the rainfall rate measured with the weather radar and the measured attenuation for a three day period is presented. A correlation between the quantities is clearly visible in the figure, but a closer inspection is needed to reveal more details.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:support_presentation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:support.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Positions of radar measurements. The blue rectangle is the location of the antenna and the rain gauge. The $ 64 $ points closest to the antenna are enclosed in a red sphere and marked as red circles. Red dots mark the remainder of $ 512 $ closest points. The green line is the ray path between antenna and satellite with green circles representing corresponding support nodes for support size $n=4$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:att_ref_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_time_flow_1800_64_local.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Measured antenna attenuation and rain rate extracted from $ 64 $ radar measurements closest to the antenna. Both datasets have been sorted into $ 30 $ minute bins. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Correlation between rain and attenuation ==&lt;br /&gt;
In order to find a relation between the rain rate and the electromagnetic attenuation, measurements of both quantities must be paired. There is no obvious way of doing this, since the two are measured at vastly different time scales. We ended up dividing time into bins of duration $t_0$ and pairing the measurements that fall within the same bin. The maximum value of each quantity was selected as the representative for the given time period.&lt;br /&gt;
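The binning and pairing step can be sketched as follows (our illustration; function names and toy data are ours, not the paper's):&lt;br /&gt;

```python
from collections import defaultdict

def bin_maxima(samples, t0):
    """Reduce (time, value) samples to one representative per bin of
    duration t0 (same time unit): the maximum value in each bin."""
    bins = defaultdict(lambda: float("-inf"))
    for t, v in samples:
        k = int(t // t0)
        bins[k] = max(bins[k], v)
    return dict(bins)

def pair_series(a, b, t0):
    """Pair two time series by keeping only bins present in both."""
    ba, bb = bin_maxima(a, t0), bin_maxima(b, t0)
    common = sorted(ba.keys() & bb.keys())
    return [(ba[k], bb[k]) for k in common]

att  = [(0.0, 1.2), (0.15, 3.4), (7200.0, 0.5)]   # attenuation samples, t in s
rain = [(0.0, 0.0), (300.0, 2.0)]                  # rain rate samples, t in s
pair_series(att, rain, t0=3600.0)   # [(3.4, 2.0)] -- only bin 0 overlaps
```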
&lt;br /&gt;
The correlation coefficient between two variables $X$ and $Y$ can be calculated using&lt;br /&gt;
\[&lt;br /&gt;
corr(X, Y)=\frac{\text{mean}((X - \text{mean}(X))\cdot(Y - \text{mean}(Y)))}{\text{std}(X)\text{std}(Y)}&lt;br /&gt;
\]&lt;br /&gt;
and is a good quantity for determining linear dependence between $X$ and $Y$.&lt;br /&gt;
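The formula above transcribes directly into code (our sketch, not the paper's implementation):&lt;br /&gt;

```python
import math

def corr(X, Y):
    """Pearson correlation coefficient, written exactly as in the formula above."""
    mx, my = sum(X) / len(X), sum(Y) / len(Y)
    cov = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / len(X)
    sx = math.sqrt(sum((x - mx) ** 2 for x in X) / len(X))
    sy = math.sqrt(sum((y - my) ** 2 for y in Y) / len(Y))
    return cov / (sx * sy)

corr([1, 2, 3], [2, 4, 6])   # ~1.0: perfectly linear dependence
```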
&lt;br /&gt;
According to the Marshall-Palmer power law, a linear relation exists between the logarithms of the rain rate and the specific attenuation.&lt;br /&gt;
Our measurements are of the total attenuation $A$ and not of the specific attenuation, so we must adjust the equation. We assume a typical path length $L$ as the connecting factor between the two, which gives&lt;br /&gt;
\[&lt;br /&gt;
\log_{10}A = \log_{10}(La) + b\,\log_{10}R \ .&lt;br /&gt;
\]&lt;br /&gt;
The exact value of $L$ is not relevant, as only the parameter $b$ will interest us. The slope on a log-log graph, such as in &amp;lt;xr id=&amp;quot;fig:attenuation_rainrate&amp;quot;/&amp;gt;, is therefore equal to the model parameter $b$. We used a least squares linear fit on each set of data to get the corresponding values of $b$.&lt;br /&gt;
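The fitting step can be sketched as an ordinary least-squares slope in log-log space (our illustration; the prefactor value is arbitrary, the exponent is the $20$ GHz table value):&lt;br /&gt;

```python
import math

def fit_b(A, R):
    """Least-squares slope of log10(A) vs log10(R): the exponent b.
    A: total attenuations [dB], R: rain rates [mm/h], positive values only."""
    xs = [math.log10(r) for r in R]
    ys = [math.log10(a) for a in A]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic data following A = (L*a) * R**b with b = 1.083 recovers b:
La, b = 0.36, 1.083                 # arbitrary prefactor, table exponent
R = [0.5, 1, 2, 5, 10, 20]
A = [La * r**b for r in R]
fit_b(A, R)                          # ~1.083
```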
&lt;br /&gt;
In addition, correlation between logarithmic values of rain rate and attenuation&lt;br /&gt;
\[&lt;br /&gt;
corr\left(\log_{10}A_{[\text{dB}]}, \log_{10}R_{[\text{mm/h}]}\right)&lt;br /&gt;
\]&lt;br /&gt;
is used as a quality measure of their relation.&lt;br /&gt;
&lt;br /&gt;
=Results and discussion=&lt;br /&gt;
&lt;br /&gt;
Once we have paired the attenuation and rainfall data, we can scatter the points on a graph.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:attenuation_rainrate&amp;quot;/&amp;gt; the attenuation against rain rate at $8$ h bin size is presented. A support size of $n=2^6$ is used for the local radar representation and $n=2^2$ for the integral representation. A correlation is clearly visible; however, it is not as tight as one would expect if the measurements and the rain rate - reflectivity model were perfect.&lt;br /&gt;
Since we introduced two free parameters, namely the time bin size $t_0$ and the spatial support size $n$ (one for the integral and one for the local radar representation), a sensitivity analysis regarding these parameters is needed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:attenuation_rainrate&amp;quot;&amp;gt;&lt;br /&gt;
[[File:scatter_all.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Attenuation dependency on the rain rate measured in three different ways. Local rain gauge (blue), path integration on each step selecting closest $ 4 $ points (green) and from $ 64 $ points closest to the antenna (red). All measurements have been put into $ 8 $ h bins. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:local_correlation&amp;quot;/&amp;gt; a correlation with respect to the number of local support nodes and time bin size is presented. The best correlation is obtained with $8$ h time bins and a local $n=2^6$ support size.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:local_correlation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_contour_local.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation with respect to the number of &amp;lt;b&amp;gt; local&amp;lt;/b&amp;gt; support size $n$ and time bin size $t_0$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, the correlation with respect to the number of integral support nodes and the time bin size is presented in &amp;lt;xr id=&amp;quot;fig:integrate_correlation&amp;quot;/&amp;gt;. Again, the best correlation is obtained with $8$ h time bins; however, with the integral model a small support, i.e. $n=2^2$, already suffices to obtain a fair correlation. Such behaviour is expected: in the integral mode the support moves along the ray, so there is no need to capture vast regions around each link point.&lt;br /&gt;
On the other hand, the local approach uses only one support, which therefore has to be much bigger to capture enough detail about the rain conditions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:integrate_correlation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_contour_integrate.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation with respect to the number of &amp;lt;b&amp;gt;integral&amp;lt;/b&amp;gt; support size $n$ and time bin size $t_0$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To compare the measurements acquired with the radar to those acquired with the local rain gauge, a simpler presentation of the correlation is shown in &amp;lt;xr id=&amp;quot;fig:correlation_time&amp;quot;/&amp;gt;. One set of data has the rain rate extracted from the radar using the integral method with support size $4$, and two sets use the closest $n=64$ or $n=512$ nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:correlation_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_time.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation as a function of time bin size $t_0$ for different ways of extracting the rain rate. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the next step we compare our measurements with the Marshall-Palmer model, specifically the exponent $b$. According to &amp;lt;b&amp;gt;Table 1&amp;lt;/b&amp;gt;, at $20$ GHz the value $b_0=1.083$ should hold.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:fit_time&amp;quot;/&amp;gt; the differences between our measurements and $b_0$ with respect to the time bin size are presented for the same sets of data as were used in the correlation analysis of &amp;lt;xr id=&amp;quot;fig:correlation_time&amp;quot;/&amp;gt;. An order of magnitude improvement is visible between the local rain gauge and the data extracted from the radar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fit_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_fit_time.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Exponent in attenuation to rainfall relation $b$ compared to value $b_0$ from Table 1 for $ 20 $ GHz as a function of bin duration $t_0$ for a few ways of extracting rainfall. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
This paper deals with the correlation between the EMR attenuation due to scattering on the ASTRA 3B - SatProSi 1 link and the measured rain rate. The main objective of the paper is to analyse the related measurements and to compare the results with the Marshall-Palmer model.&lt;br /&gt;
&lt;br /&gt;
The attenuation is measured directly with in-house equipment at a relatively high time resolution ($0.15$ s).&lt;br /&gt;
&lt;br /&gt;
The rain characteristics are measured with a rain gauge positioned next to the antenna and with the national meteorological radar. The rain gauge measures the average rain rate every five minutes at a single position, while the radar provides a full 3D scan of reflectivity every $10$ minutes.&lt;br /&gt;
&lt;br /&gt;
Although the attenuation depends mainly on the DSD, the rain rate is used as the reference quantity, since it is much more illustrative, as well as easier to measure. The reflectivity measured with the radar is therefore transformed to the rain rate with the Marshall-Palmer relation. A more direct approach would be to relate the attenuation to the measured reflectivity directly; however, that would not change any of the conclusions, since, on a logarithmic scale, a simple power relation between reflectivity and rain rate reflects only as a linear transformation.&lt;br /&gt;
&lt;br /&gt;
The analysis of support size and time bin size showed a strong influence of those two parameters on the correlation. It is demonstrated that a time bin of $8$ h and support sizes of $n=2^6$ and $n=2^2$ for the local and integral approach, respectively, provide a decent correlation ($0.6-0.7$) between the logarithms of the measured attenuation and rain rate.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the power model has been fitted to the measured data and the value of the exponent has been compared to the values reported in the literature. The model shows the best agreement with the Marshall-Palmer model when the rain rate is gathered from the integral along the communication link. Somewhat worse agreement is achieved with a local determination of the rain rate. Results obtained with the rain gauge are the furthest from the expected value, despite the fact that the correlation with the measured attenuation is the highest for the rain gauge measurements. The localized information from the rain gauge simply cannot provide enough information to fully characterize the rain conditions along the link.&lt;br /&gt;
&lt;br /&gt;
There are still some open questions to resolve, e.g. what the reason is behind the $8$ h time bin giving the best result, and how the correlation could be improved, perhaps by using different statistics to extract more information from the data. All these topics will be addressed in future work.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Acknowledgment=&lt;br /&gt;
The authors acknowledge the financial support from the state budget by the&lt;br /&gt;
Slovenian Research Agency under Grant P2-0095. The attenuation data were collected in the framework of the ESA-PECS project SatProSi-Alpha. The Slovenian Environment Agency provided the data collected by its weather radars.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Attenuation_due_to_liquid_water_content_in_the_atmosphere&amp;diff=601</id>
		<title>Attenuation due to liquid water content in the atmosphere</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Attenuation_due_to_liquid_water_content_in_the_atmosphere&amp;diff=601"/>
				<updated>2016-11-10T15:59:35Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: Created page with &amp;quot;&amp;lt;h1&amp;gt;Correlation between attenuation of 20 GHz satellite communication link and Liquid Water Content in the atmosphere&amp;lt;/h1&amp;gt;  [mailto:maks.kolman@student.fmf.uni-lj.si Maks Kolm...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h1&amp;gt;Correlation between attenuation of 20 GHz satellite communication link and Liquid Water Content in the atmosphere&amp;lt;/h1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[mailto:maks.kolman@student.fmf.uni-lj.si Maks Kolman], [mailto:gregor.kosec@ijs.si Gregor Kosec], Jožef Stefan Institute, Department of Communication Systems, Ljubljana.&lt;br /&gt;
[[:File:mipro_attenuation.pdf|Full paper available for download here.]]&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The effect of Liquid Water Content (LWC), i.e. the mass of the water per volume&lt;br /&gt;
unit of the atmosphere, on the attenuation of a $20$ GHz communication link&lt;br /&gt;
between a ground antenna and communication satellite is tackled in this paper.&lt;br /&gt;
The wavelength of $20$ GHz electromagnetic radiation is comparable to the&lt;br /&gt;
droplet size, consequently the scattering plays an important role in the&lt;br /&gt;
attenuation. To better understand this phenomenon a correlation between&lt;br /&gt;
measured LWC and attenuation is analysed. The LWC is usually estimated from&lt;br /&gt;
the pluviograph rain rate measurements that capture only spatially localized&lt;br /&gt;
and ground level information about the LWC. In this paper the LWC is extracted&lt;br /&gt;
also from the reflectivity measurements provided by a $5.6$ GHz weather radar&lt;br /&gt;
situated in Lisca, Slovenia. The radar measures reflectivity in 3D and&lt;br /&gt;
therefore a precise spatial dependency of LWC along the communication link is&lt;br /&gt;
considered. The attenuation is measured with an in-house receiver Ljubljana&lt;br /&gt;
Station SatProSi 1 that communicates with a geostationary communication&lt;br /&gt;
satellite ASTRA 3B on the $20$ GHz band.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
&lt;br /&gt;
The increasing demand for higher communication capabilities between terrestrial&lt;br /&gt;
and/or earth-satellite repeaters requires the employment of frequency bands above&lt;br /&gt;
$10$ GHz. At such frequencies the wavelength of electromagnetic&lt;br /&gt;
radiation (EMR) becomes comparable to the size of water droplets in the&lt;br /&gt;
atmosphere. Consequently, EMR attenuation due to scattering on the droplets&lt;br /&gt;
becomes a significant and ultimately the dominant factor in the communication&lt;br /&gt;
quality. During their propagation, the EMR waves encounter different water&lt;br /&gt;
structures, where they can be absorbed or scattered, causing attenuation. In&lt;br /&gt;
general, water in all three states is present in the atmosphere, i.e.\ liquid in the&lt;br /&gt;
form of rain, clouds and fog, solid in the form of snow and ice crystals, and water&lt;br /&gt;
vapour, which makes the air humid. Regardless of the state, water causes considerable&lt;br /&gt;
attenuation that has to be considered when designing the communication&lt;br /&gt;
strategy. Therefore, in order to effectively introduce high frequency&lt;br /&gt;
communications into operative regimes, adequate knowledge about&lt;br /&gt;
atmospheric effects on the attenuation is required.&lt;br /&gt;
&lt;br /&gt;
In this paper we deal with the attenuation due to the scattering of EMR on a&lt;br /&gt;
myriad of droplets in the atmosphere, which is characterised by the LWC or, more&lt;br /&gt;
precisely, by the Drop Size Distribution (DSD). A discussion on the physical&lt;br /&gt;
background of the DSD can be found in (E. Villermaux and B. Bossa. Single-drop&lt;br /&gt;
fragmentation determines size distribution of raindrops, 2009), where the authors describe&lt;br /&gt;
the basic mechanisms behind the distribution of droplets. Despite the efforts to&lt;br /&gt;
understand the complex interplay between droplets, ultimately empirical&lt;br /&gt;
relations are used. The LWC and DSD can be related to the only involved quantity&lt;br /&gt;
that we can reliably measure, the rain rate. Recently it has been demonstrated&lt;br /&gt;
that for high rain rates the site location also plays a role in the DSD due to&lt;br /&gt;
the local climate conditions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In general, raindrops can be considered as dielectric blobs of water that&lt;br /&gt;
polarize in the presence of an electric field. When introduced to an oscillating&lt;br /&gt;
electric field, such as an electromagnetic wave, a droplet of water acts as an&lt;br /&gt;
antenna and re-radiates the received energy in arbitrary directions, causing a net&lt;br /&gt;
loss of energy flux towards the receiver. Some part of the energy can also be&lt;br /&gt;
absorbed by the raindrop, which results in heating. Absorption is the main cause&lt;br /&gt;
of energy loss for raindrops small compared to the wavelength,&lt;br /&gt;
whereas scattering becomes predominant as the drop size approaches the wavelength.&lt;br /&gt;
The very first model for atmospheric scattering was introduced by Lord Rayleigh,&lt;br /&gt;
who assumed constant spatial polarization within the droplet. Such a&lt;br /&gt;
simplification limits the validity of the model to droplets relatively small&lt;br /&gt;
in comparison to the wavelength of the incident field, i.e.&lt;br /&gt;
approximately up to $5$ GHz when EMR scattering on rain droplets&lt;br /&gt;
is considered. A more general model was developed by Mie in 1908, where a&lt;br /&gt;
spatially dependent polarization is considered within the droplet, extending the&lt;br /&gt;
validity of the model to higher droplet size to EMR wavelength ratios. Later, a&lt;br /&gt;
popular empirical model was presented in (J.S. Marshall and W.McK. Palmer. The&lt;br /&gt;
distribution of raindrops with size, 1948), where attenuation is related only to&lt;br /&gt;
the rain rate. The model, also referred to as the Marshall-Palmer model, is widely&lt;br /&gt;
used in the evaluation of rain rate from reflectivity measured by weather radars.&lt;br /&gt;
The Marshall-Palmer model simply states the relation between the attenuation and&lt;br /&gt;
the rain rate in terms of a power function.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this paper we seek a correlation between the LWC and attenuation&lt;br /&gt;
measurements. The LWC is extracted from reflectivity measurements provided by a&lt;br /&gt;
weather radar situated in Lisca and operated by the Slovenian Environment Agency.&lt;br /&gt;
The attenuation is measured by in-house hardware that monitors the signal strength&lt;br /&gt;
between the Ljubljana Station SatProSi 1 and the communication satellite ASTRA 3B. The&lt;br /&gt;
main purpose of this paper is therefore to investigate the correlation between&lt;br /&gt;
precipitation measured in 3D with the meteorological radar and the measured&lt;br /&gt;
attenuation.&lt;br /&gt;
&lt;br /&gt;
=Governing models=&lt;br /&gt;
&lt;br /&gt;
Before we proceed to the measurements, some basic relations are discussed.&lt;br /&gt;
&lt;br /&gt;
Attenuation ($A$) is a quantity measured in [dB] that describes the loss of electromagnetic radiation propagating through a medium. It is defined with starting intensity $I_s$ and the intensity received after propagation $I_r$ as&lt;br /&gt;
\[&lt;br /&gt;
A = 10\log_{10}\frac{I_s}{I_r}.&lt;br /&gt;
\]&lt;br /&gt;
The specific attenuation ($\alpha=A/L$) measured in [dB/km] as a function of rain rate ($R$) measured in [mm/h] is commonly modelled as &amp;lt;i&amp;gt;citation&amp;lt;/i&amp;gt;{OndrejFiser}&lt;br /&gt;
\[&lt;br /&gt;
\alpha(R) \sim a \,R^{b} \ .&lt;br /&gt;
\]&lt;br /&gt;
Coefficients $a$ and $b$ are determined empirically by fitting the model to experimental data. In general, the coefficients depend on the incident wave frequency, polarization, and ambient temperature. Some example values for different frequencies are presented in Table (&amp;lt;a href=&amp;quot;#tab:marshalpalmer&amp;quot;&amp;gt;tab:marshalpalmer&amp;lt;/a&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;table&amp;gt;&lt;br /&gt;
     &amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;f[GHz]&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;10&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;12&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;15&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;20&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;25&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;30 &amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
     &amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;$a$&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;0.0094&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;0.0177&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;0.0350&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;0.0722&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;0.1191&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;0.1789 &amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
     &amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;$b$&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;1.273&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;1.211&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;1.143&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;1.083&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;1.044&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;1.007 &amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
 &amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;caption&amp;quot;&amp;gt;Value of coefficients for Marshal-Palmer relation $\alpha(R)$ at different frequencies.&lt;br /&gt;
&amp;lt;a name=&amp;quot;tab:marshalpalmer&amp;quot;&amp;gt;tab:marshalpalmer&amp;lt;/a&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
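As a quick illustration, the power law and the tabulated coefficients can be combined into a few lines of Python (a sketch; the dictionary and function names are ours and not part of any library):

```python
# Marshall-Palmer coefficients (a, b) from the table above, keyed by frequency in GHz.
MP_COEFFS = {
    10: (0.0094, 1.273), 12: (0.0177, 1.211), 15: (0.0350, 1.143),
    20: (0.0722, 1.083), 25: (0.1191, 1.044), 30: (0.1789, 1.007),
}

def specific_attenuation(rain_rate_mm_h, freq_ghz):
    """Specific attenuation alpha = a * R^b in dB/km."""
    a, b = MP_COEFFS[freq_ghz]
    return a * rain_rate_mm_h ** b

# At the 20 GHz link frequency, moderate rain of 10 mm/h attenuates
# the signal by roughly 0.9 dB per km of rain-filled path.
alpha = specific_attenuation(10.0, 20)
```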
&lt;br /&gt;
&lt;br /&gt;
The simplest characterization of rain is through the rain rate $R$, measured in [mm/h]. However, the rain rate does not give any information about the type of rain. For example, a storm and a shower might have the same rain rate, but are comprised of different droplets. Therefore, a more descriptive quantity is the drop size distribution (DSD) that, unsurprisingly, describes the distribution of droplet sizes.&lt;br /&gt;
A simple DSD model is presented in (J.S. Marshall and W.McK. Palmer. The distribution of raindrops with size, 1948)&lt;br /&gt;
&lt;br /&gt;
\begin{equation}&lt;br /&gt;
N(D) = U \exp (-V \, R^{\delta} D),&lt;br /&gt;
\label{eq:dsdr}&lt;br /&gt;
\end{equation}&lt;br /&gt;
where $D$ stands for the drop diameter measured in [mm], $N(D)$ describes the number of droplets with diameter between $D$ and $D + \mathrm dD$ per unit of volume, measured in [$mm^{-1} m^{-3}$], and $R$ is the rain rate measured in [mm/h]. The values of the equation parameters were set to $U=8.3 \cdot 10^3$, $V=4.1$ and $\delta=-0.21$. The DSD was also determined experimentally for different rain rates &amp;lt;i&amp;gt;citation&amp;lt;/i&amp;gt;{OndrejFiser}. The experimental data is presented in &amp;lt;xr id=&amp;quot;fig:dsd&amp;quot;/&amp;gt;, where we can see that the typical diameter of droplets is in the range of a few mm. There is a discrepancy between the theoretical and experimental data for very small droplets.&lt;br /&gt;
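A minimal sketch of evaluating this DSD with the parameter values quoted above (the function name is ours):

```python
import math

# Marshall-Palmer DSD parameters from the text: N(D) = U * exp(-V * R^delta * D).
U, V, DELTA = 8.3e3, 4.1, -0.21

def dsd(diameter_mm, rain_rate_mm_h):
    """Number density N(D) in 1/(mm m^3) at drop diameter D and rain rate R."""
    return U * math.exp(-V * rain_rate_mm_h ** DELTA * diameter_mm)

# Large drops are exponentially rarer than small ones, and a higher rain
# rate flattens the distribution (relatively more large drops).
ratio_light = dsd(3.0, 1.0) / dsd(0.5, 1.0)
ratio_heavy = dsd(3.0, 20.0) / dsd(0.5, 20.0)
```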
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:dsd&amp;quot;&amp;gt;&lt;br /&gt;
[[File:dsd_manual.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt; DSD measured in Czech Republic (one year measurement, rain rate $R$ is the parameter of particular sets of points) &amp;lt;i&amp;gt;citation&amp;lt;/i&amp;gt;{OndrejFiser}. Lines represent the theoretical value as determined by $(\ref{eq:dsdr})$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Measurements=&lt;br /&gt;
&lt;br /&gt;
== Measurements of signal attenuation==&lt;br /&gt;
&lt;br /&gt;
Jožef Stefan Institute (JSI) and the European Space Agency (ESA) cooperate in the SatProSi-Alpha project that includes measuring the attenuation of the communication link between a ground antenna and a satellite, more precisely between the ASTRA 3B satellite and the SatProSi 1 station. ASTRA 3B is a geostationary communication satellite located at $23.5^\circ E$ longitude over the equator. It broadcasts the signal at $20$ GHz, which is received at SatProSi 1 with an in-house receiver, namely a $1.2$ m parabolic antenna positioned on top of the JSI main building with a gain of about $47$ dB. SatProSi 1 has measured the attenuation every $0.15$ seconds, resulting in over $500000$ daily records, since 1 October 2011.&lt;br /&gt;
&lt;br /&gt;
== Measurements of rainfall rate ==&lt;br /&gt;
Two sources of rain measurements are used in this paper. The first one is a pluviograph installed locally in the proximity of the antenna. The rain rate is measured every five minutes.&lt;br /&gt;
&lt;br /&gt;
Other, much more sophisticated measurements of rain characteristics are provided by meteorological radars. The basic idea behind such radars is to measure the EMR that reflects from water droplets. The measured reflectivity is then related to the rain rate through the Marshall-Palmer relation.&lt;br /&gt;
The radar reflectivity factor $Z$ is formally defined as the sum of the sixth powers of the drop diameters over all droplets per unit of volume, which can be written as an integral&lt;br /&gt;
\[&lt;br /&gt;
Z = \int_0^\infty N(D)D^6 \mathrm dD \ .&lt;br /&gt;
\]&lt;br /&gt;
Note that the form of the relation follows the Rayleigh scattering model. $Z$ is usually measured in units of $ mm^6m^{-3} $. When conducting measurements, a so-called equivalent reflectivity factor&lt;br /&gt;
\[&lt;br /&gt;
Z_e = \frac{\eta \lambda^4}{0.93 \pi^5}&lt;br /&gt;
\]&lt;br /&gt;
is used, where $\eta$ denotes the reflectivity, $\lambda$ is the radar wavelength and $0.93$ stands for the dielectric factor of water. As the name suggests, the two are equivalent for wavelengths large compared to the drop sizes, as in the Rayleigh model.&lt;br /&gt;
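Assuming the Marshall-Palmer DSD from the previous section, the definition of $Z$ can be evaluated numerically; the integral also has the closed form $U \cdot 720 / \lambda^7$ with $\lambda = V R^{\delta}$, which makes a convenient sanity check. A sketch with our own naming, using a simple trapezoidal rule:

```python
import math

U, V, DELTA = 8.3e3, 4.1, -0.21   # Marshall-Palmer DSD parameters

def reflectivity_factor(rain_rate_mm_h, d_max=10.0, steps=10000):
    """Z = integral of N(D) * D^6 dD in mm^6/m^3, trapezoidal rule on [0, d_max]."""
    lam = V * rain_rate_mm_h ** DELTA
    h = d_max / steps
    total = 0.0
    for i in range(steps):
        d0, d1 = i * h, (i + 1) * h
        f0 = U * math.exp(-lam * d0) * d0 ** 6
        f1 = U * math.exp(-lam * d1) * d1 ** 6
        total += 0.5 * (f0 + f1) * h
    return total
```

For $R = 10$ mm/h this lands within a percent of the closed-form value and is of the same order as the widely used $Z = 200\,R^{1.6}$ relation discussed below.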
&lt;br /&gt;
&lt;br /&gt;
Reflectivity factor and rainfall rate are related through Marshall-Palmer relation as&lt;br /&gt;
\[&lt;br /&gt;
Z_{[mm^6m^{-3}]} = \tilde a R_{[mm/h]}^{\tilde{b}}\ ,&lt;br /&gt;
\]&lt;br /&gt;
where $Z_{[mm^6m^{-3}]}$ is the reflectivity factor measured in $mm^6m^{-3}$ and $R_{[mm/h]}$ is the rainfall rate measured in mm/h. In general, the empirical coefficients $\tilde a$ and $\tilde b$ vary with location and/or season; however, they are independent of the rainfall rate $R$. The most widely used values are $\tilde a=200$ and $\tilde b=1.6$.&lt;br /&gt;
Meteorologists instead use a dimensionless logarithmic scale and define&lt;br /&gt;
\[&lt;br /&gt;
\mathit{dBZ} = 10 \, \log_{10} \frac{Z}{Z_0} = 10 \, \log_{10} Z_{[mm^6m^{-3}]}\ ,&lt;br /&gt;
\]&lt;br /&gt;
where $Z_0$ is reflectivity factor equivalent to one droplet of diameter $1$ mm per cubic meter.&lt;br /&gt;
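The two conversions just described (dBZ to $Z$, then $Z$ to $R$ through the Marshall-Palmer relation) amount to a couple of lines; a sketch with our own naming, using the coefficient values from the text:

```python
A_TILDE, B_TILDE = 200.0, 1.6   # widely used Z-R coefficients

def rain_rate_from_dbz(dbz):
    """Invert Z = a * R^b after recovering Z from the logarithmic dBZ scale."""
    z = 10.0 ** (dbz / 10.0)            # reflectivity factor in mm^6/m^3
    return (z / A_TILDE) ** (1.0 / B_TILDE)

# 23 dBZ corresponds to roughly 1 mm/h of rain with these coefficients.
r = rain_rate_from_dbz(23.0)
```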
&lt;br /&gt;
The meteorological radar at Lisca emits short ($1$ microsecond) electromagnetic pulses with a frequency of $5.62$ GHz and measures the strength of the reflection from different points along their path. The radar collects roughly $650000$ spatial data points per atmospheric scan, which it performs every $10$ minutes. The exact location of each measurement is determined from the beam direction and the time it takes for the signal to reflect back to the radar.&lt;br /&gt;
&lt;br /&gt;
In addition to reflectivity, radars also measure the radial velocity of the reflecting particles via the Doppler shift of the received EMR, but this feature is not used here.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Data analysis=&lt;br /&gt;
The analysis begins with handling approximately $20$ GB of radar data for the academic year 2014/15, accompanied by $3$ GB of signal attenuation data for the same time period and approximately $5$ GB of attenuation and local rain gauge data for the years 2012 and 2013.&lt;br /&gt;
&lt;br /&gt;
== Preprocessing the radar spatial data ==&lt;br /&gt;
&lt;br /&gt;
The radar data was first reduced by eliminating spatial points far away from our point of interest, namely the JSI main building where the antenna is located. The geostationary orbit is $35786$ km above sea level, therefore the link between the antenna and the satellite has a steep elevation angle of $36.3^\circ$. In fact, just $20$ km south of the antenna the ray rises above $15$ km, which is the upper boundary for all weather activity. Knowing this, a smaller area of the map can be safely cropped out, reducing the number of data points from around $650000$ to approximately $6500$ for each radar scan, covering a $40 \text{km} \times 40 \text{km}$ area.&lt;br /&gt;
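The cropping argument can be checked with elementary trigonometry. A flat-earth sketch (our own simplification, ignoring Earth curvature and the antenna altitude) gives the height of the ray above the antenna as a function of horizontal distance:

```python
import math

ELEVATION_DEG = 36.3   # elevation angle of the antenna-satellite link

def ray_height_km(horizontal_distance_km):
    """Height of the link ray above the antenna, flat-earth approximation."""
    return horizontal_distance_km * math.tan(math.radians(ELEVATION_DEG))

# Roughly 15 km of height is gained about 20 km from the antenna,
# in line with the estimate in the text.
h20 = ray_height_km(20.0)
```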
&lt;br /&gt;
Although we have already greatly reduced the original data size, we must still reduce thousands of points into something tangible. The positions of both the antenna and the satellite are known at all times, a convenient consequence of both being stationary; therefore the link between them can be easily traced. Roughly $150$ points on the ray path are used as a discrete representation of the link, referred to as link points in the following discussion. For each link point the median of the $n$ closest radar measurements is computed as a representative value.&lt;br /&gt;
The other way of extracting the reflectivity factor is simply to take the $n$ points closest to the antenna and select their median value. A visualisation of both methods is presented in &amp;lt;xr id=&amp;quot;fig:support_presentation&amp;quot;/&amp;gt;.&lt;br /&gt;
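The per-point extraction (median of the $n$ nearest radar measurements) can be sketched as follows. A brute-force neighbour search is shown for clarity; in practice a kD-tree (e.g. scipy.spatial.cKDTree) would accelerate the search. All names are ours:

```python
import math
import statistics

def median_of_nearest(target, points, values, n):
    """Median of the values at the n points closest to target (brute force)."""
    order = sorted(range(len(points)),
                   key=lambda i: math.dist(target, points[i]))
    return statistics.median(values[i] for i in order[:n])

# Representative value at one link point from four nearby radar samples.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
refl = [1.0, 5.0, 9.0, 100.0]
rep = median_of_nearest((0.0, 0.0), pts, refl, 3)   # median of 1, 5, 9
```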
&lt;br /&gt;
We are now left with multiple scalar quantities as functions of time: the antenna attenuation every $0.15$ s, the local rain gauge every $5$ min and various extractions of the reflectivity factor every $10$ min. Note that the radar values are not averaged over $10$ minutes; the radar simply needs $10$ minutes to complete a single scan.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:att_ref_time&amp;quot;/&amp;gt; an example of the rainfall rate measured with the weather radar and the measured attenuation for a three-day period is presented. A correlation between the quantities is clearly seen in the figure, but a closer inspection is needed to reveal more details about the correlation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:support_presentation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:support.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Positions of radar measurements. The blue rectangle is the location of the antenna and the rain gauge. The $ 64 $ points closest to the antenna are enclosed in a red sphere and marked as red circles. Red dots mark the remainder of $ 512 $ closest points. The green line is the ray path between antenna and satellite with green circles representing corresponding support nodes for support size $n=4$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:att_ref_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_time_flow_1800_64_local.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Measured antenna attenuation and rain rate extracted from $ 64 $ radar measurements closest to the antenna. Both datasets have been sorted into $ 30 $ minute bins. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Correlation between rain and attenuation ==&lt;br /&gt;
In order to find a relation between the rain rate and the electromagnetic attenuation, measurements of both quantities must be paired. There is no obvious way of doing this, since the two are measured at vastly different time scales. We ended up dividing time into bins of duration $t_0$ and pairing the measurements that fall within the same bin. The maximum value of each quantity was selected as the representative for the given time period.&lt;br /&gt;
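The pairing step can be sketched as follows (our own naming): each quantity is binned separately, keeping the maximum per bin, and the two resulting dictionaries are then joined on the bin indices they share:

```python
from collections import defaultdict

def bin_maxima(timestamps, values, bin_seconds):
    """Maximum value in each time bin of length bin_seconds, keyed by bin index."""
    bins = defaultdict(list)
    for t, v in zip(timestamps, values):
        bins[int(t // bin_seconds)].append(v)
    return {b: max(vs) for b, vs in bins.items()}

# Pair two differently sampled series on their common bins.
att = bin_maxima([0, 10, 70, 80], [1.0, 3.0, 2.0, 5.0], 60)
rain = bin_maxima([5, 65], [0.2, 0.8], 60)
pairs = [(att[b], rain[b]) for b in att if b in rain]
```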
&lt;br /&gt;
The correlation coefficient between two variables $X$ and $Y$ can be calculated using&lt;br /&gt;
\[&lt;br /&gt;
corr(X, Y)=\frac{\text{mean}((X - \text{mean}(X))\cdot(Y - \text{mean}(Y)))}{\text{std}(X)\text{std}(Y)}&lt;br /&gt;
\]&lt;br /&gt;
and is a good quantity for determining linear dependence between $X$ and $Y$.&lt;br /&gt;
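A direct transcription of this formula (the function name is ours):

```python
import math

def corr(xs, ys):
    """Pearson correlation coefficient, following the formula above."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)
```

Perfectly linearly related series give $\pm 1$, while unrelated series give values near $0$.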
&lt;br /&gt;
According to the Marshall-Palmer power law, a linear relation exists between the logarithms of the rain rate and the specific attenuation.&lt;br /&gt;
Our measurements are of the total attenuation $A$ and not of the specific attenuation, so we must adjust the equation. We assume a typical distance $L$ as a connecting factor between the two, which gives us&lt;br /&gt;
\[&lt;br /&gt;
\log_{10}A = \log_{10}(a L) + b\log_{10}R \ .&lt;br /&gt;
\]&lt;br /&gt;
The exact value of $L$ is not relevant, as only the parameter $b$ interests us. The slope on a log-log graph, such as in &amp;lt;xr id=&amp;quot;fig:attenuation_rainrate&amp;quot;/&amp;gt;, is therefore equal to the model parameter $b$. We used a least-squares linear fit on each set of data to get the corresponding values of $b$.&lt;br /&gt;
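A minimal sketch of this fitting step (names are ours). On synthetic data generated from a power law, the least-squares slope of $\log_{10}A$ against $\log_{10}R$ recovers the exponent:

```python
import math

def fit_loglog_slope(rain_rates, attenuations):
    """Least-squares slope of log10(A) versus log10(R); the slope estimates b."""
    xs = [math.log10(r) for r in rain_rates]
    ys = [math.log10(a) for a in attenuations]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic A = (L*a) * R^b with b = 1.083 is recovered by the fit.
rs = [1.0, 2.0, 5.0, 10.0, 20.0]
b_est = fit_loglog_slope(rs, [0.5 * r ** 1.083 for r in rs])
```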
&lt;br /&gt;
In addition, correlation between logarithmic values of rain rate and attenuation&lt;br /&gt;
\[&lt;br /&gt;
corr\left(\log_{10}A_{[\text{dB}]}, \log_{10}R_{[\text{mm/h}]}\right)&lt;br /&gt;
\]&lt;br /&gt;
is used as a quality measure of their relation.&lt;br /&gt;
&lt;br /&gt;
=Results and discussion=&lt;br /&gt;
&lt;br /&gt;
Once we have paired the attenuation and rainfall data, we can plot the points on a scatter graph.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:attenuation_rainrate&amp;quot;/&amp;gt; the attenuation against the rain rate at an $8$ h bin size is presented. For the local radar representation a support size of $n=2^6$ is used, and for the integral representation $n=2^2$. The correlation is clearly visible, although not as tight as one would expect if the measurements and the rain rate-reflectivity model were perfect.&lt;br /&gt;
Since we introduced two free parameters, namely the time bin size $t_0$ and the spatial support size $n$ for the integral and local radar representations, a sensitivity analysis regarding those parameters is needed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:attenuation_rainrate&amp;quot;&amp;gt;&lt;br /&gt;
[[File:scatter_all.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Attenuation dependency on the rain rate measured in three different ways. Local rain gauge (blue), path integration on each step selecting closest $ 4 $ points (green) and from $ 64 $ points closest to the antenna (red). All measurements have been put into $ 8 $ h bins. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:local_correlation&amp;quot;/&amp;gt; a correlation with respect to the number of local support nodes and time bin size is presented. The best correlation is obtained with $8$ h time bins and a local $n=2^6$ support size.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:local_correlation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_contour_local.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation with respect to the number of &amp;lt;b&amp;gt; local&amp;lt;/b&amp;gt; support size $n$ and time bin size $t_0$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, the correlation with respect to the number of integral support nodes and the time bin size is presented in &amp;lt;xr id=&amp;quot;fig:integrate_correlation&amp;quot;/&amp;gt;. Again, the best correlation is obtained with $8$ h time bins; however, with the integral model a small integral support, i.e.\ $n=2^2$, already suffices to obtain a fair correlation. Such behaviour is expected. In the integral mode we follow the ray and the support moves along with it, therefore there is no need to capture vast regions for each link point.&lt;br /&gt;
On the other hand, in the local approach only one support is used, and that support therefore has to be much bigger to capture enough detail about the rain conditions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:integrate_correlation&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_contour_integrate.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation with respect to the number of &amp;lt;b&amp;gt;integral&amp;lt;/b&amp;gt; support size $n$ and time bin size $t_0$. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To compare the measurements acquired with the radar and those acquired with the local rain gauge, a simpler presentation of the correlation is shown in &amp;lt;xr id=&amp;quot;fig:correlation_time&amp;quot;/&amp;gt;. One set of data has the rain rate extracted from the radar using the integral method with support size $4$, while the other two sets use the closest $n=64$ or $n=512$ nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:correlation_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_time.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Correlation between rain rate and attenuation as a function of time bin size $t_0$ for different ways of extracting the rain rate. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the next step we compare our measurements with the Marshall-Palmer model, specifically the exponent $b$. According to &amp;lt;b&amp;gt;Table 1&amp;lt;/b&amp;gt;, at $20$ GHz the value $b_0=1.083$ should hold.&lt;br /&gt;
In &amp;lt;xr id=&amp;quot;fig:fit_time&amp;quot;/&amp;gt; the differences between our measurements and $b_0$ with respect to the time bin size are presented for the same sets of data as were used in the correlation analysis of &amp;lt;xr id=&amp;quot;fig:correlation_time&amp;quot;/&amp;gt;. An order of magnitude improvement is visible between the local rain gauge and the data extracted from the radar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:fit_time&amp;quot;&amp;gt;&lt;br /&gt;
[[File:correlation_fit_time.png|600px|thumb|upright=2|alt= ???|&amp;lt;caption&amp;gt;Exponent in attenuation to rainfall relation $b$ compared to value $b_0$ from Table (1) for $ 20 $ GHz as a function of bin duration $t_0$ for a few ways of extracting rainfall. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
This paper deals with the correlation analysis between the EMR attenuation due to scattering on the ASTRA 3B - SatProSi 1 link and the measured rain rate. The main objective of the paper is to analyse the related measurements and to compare the results with the Marshall-Palmer model.&lt;br /&gt;
&lt;br /&gt;
The attenuation is measured directly with in-house equipment at a relatively high time resolution (0.15 s).&lt;br /&gt;
&lt;br /&gt;
The rain characteristics are measured with a rain gauge positioned next to the antenna and with the national meteorological radar. The rain gauge measures the average rain rate every five minutes at a single position, while the radar provides a full 3D scan of reflectivity every 10 minutes.&lt;br /&gt;
&lt;br /&gt;
Although the attenuation depends mainly on the DSD, the rain rate is used as the reference quantity, since it is much more illustrative, as well as easier to measure. The reflectivity measured with the radar is therefore transformed to the rain rate with the Marshall-Palmer relation. A more direct approach would be to relate the attenuation to the measured reflectivity directly; however, that would not change any of the conclusions, since, on a logarithmic scale, a simple power relation between reflectivity and rain rate reflects only as a linear transformation.&lt;br /&gt;
&lt;br /&gt;
The analysis of support size and time bin size showed a strong influence of those two parameters on the correlation. It is demonstrated that a time bin of $8$ h and support sizes of $n=2^6$ and $n=2^2$ for the local and integral approach, respectively, provide a decent correlation ($0.6-0.7$) between the logarithms of the measured attenuation and rain rate.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the power model has been fitted to the measured data and the value of the exponent has been compared to the values reported in the literature. The model shows the best agreement with the Marshall-Palmer model when the rain rate is gathered from the integral along the communication link. Somewhat worse agreement is achieved with a local determination of the rain rate. Results obtained with the rain gauge are the furthest from the expected value, despite the fact that the correlation with the measured attenuation is the highest for the rain gauge measurements. The localized information from the rain gauge simply cannot provide enough information to fully characterize the rain conditions along the link.&lt;br /&gt;
&lt;br /&gt;
There are still some open questions to resolve, e.g. what the reason is behind the 8 h time bin giving the best result, and how the correlation could be improved, perhaps by using different statistics to extract more information from the data. All these topics will be addressed in future work.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Acknowledgment=&lt;br /&gt;
The authors acknowledge the financial support from the state budget by the&lt;br /&gt;
Slovenian Research Agency under Grant P2-0095. The attenuation data were collected in the framework of the ESA-PECS project SatProSi-Alpha. The Slovenian Environment Agency provided us with the data collected by their weather radars.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Medusa&amp;diff=600</id>
		<title>Medusa</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Medusa&amp;diff=600"/>
				<updated>2016-11-10T15:59:19Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--__NOTITLE__--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Library for solving PDEs&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the Parallel and Distributed Systems Laboratory we are working on a C++ library that is first and foremost focused on tools for solving Partial Differential Equations with meshless methods. The basic idea is to create generic code for tools that are needed for solving not only PDEs but many other problems as well, e.g. Moving Least Squares approximation, kD-tree, domain generation engines, etc. Technical details about the code, along with examples, can be found on our [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/ documentation page]&lt;br /&gt;
and in [https://gitlab.com/e62Lab/e62numcodes the code repository].&lt;br /&gt;
&lt;br /&gt;
This wiki site is meant for more relaxed discussions about general principles, possible and already implemented applications, preliminary analyses, etc.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
* [[Moving Least Squares (MLS)]]&lt;br /&gt;
* [[kd Tree]]&lt;br /&gt;
* [[Meshless Local Strong Form Method (MLSM)]]&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
* [[Analysis of MLSM performance | Solving Diffusion Equation]]&lt;br /&gt;
* [[Attenuation due to liquid water content in the atmosphere|Attenuation of satellite communication]]&lt;br /&gt;
* [[Heart rate variability detection]]&lt;br /&gt;
* [[Dynamic Thermal Rating of over head lines|Dynamic Thermal Rating of overhead lines]]&lt;br /&gt;
* [[Fluid Flow]]&lt;br /&gt;
* [[Phase field tracking]]&lt;br /&gt;
* [[Solid Mechanics]]&lt;br /&gt;
** [[Point contact]]&lt;br /&gt;
** [[Hertzian contact]]&lt;br /&gt;
** [[Cantilever beam]]&lt;br /&gt;
** [[Bending of a square]]&lt;br /&gt;
&lt;br /&gt;
== Preliminary analyses ==&lt;br /&gt;
* Execution on Intel® Xeon Phi™ co-processor&lt;br /&gt;
* Execution overheads due to clumsy types&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
* [https://gitlab.com/e62Lab/e62numcodes Code and README on Gitlab]&lt;br /&gt;
* [[How to build]]&lt;br /&gt;
* [[Coding style | Coding style]]&lt;br /&gt;
* [[Testing]]&lt;br /&gt;
* [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/ Technical documentation]&lt;br /&gt;
* [[Wiki editing guide]]&lt;br /&gt;
* [[Wiki backup guide]]&lt;br /&gt;
&lt;br /&gt;
== FAQ ==&lt;br /&gt;
Also see [[Frequently asked questions]].&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Kosec G., A local numerical solution of a fluid-flow problem on an irregular domain. Advances in engineering software. 2016;7 ; [29512743] :: [http://comms.ijs.si/~gkosec/data/papers/29512743.pdf manuscript]&lt;br /&gt;
* Kosec G., Trobec R., Simulation of semiconductor devices with a local numerical approach. Engineering analysis with boundary elements. 2015;69-75; [27912487] :: [http://comms.ijs.si/~gkosec/data/papers/27912487.pdf manuscript]&lt;br /&gt;
* Kosec G., Šarler B., Simulation of macrosegregation with mesosegregates in binary metallic casts by a meshless method. Engineering analysis with boundary elements. 2014;36-44; [http://comms.ijs.si/~gkosec/data/papers/3218939.pdf manuscript]&lt;br /&gt;
* Kosec G., Depolli M., Rashkovska A., Trobec R., Super linear speedup in a local parallel meshless solution of thermo-fluid problem. Computers &amp;amp; Structures. 2014;133:30-38; [http://comms.ijs.si/~gkosec/data/papers/27339815.pdf manuscript]&lt;br /&gt;
* Kosec G., Zinterhof P., Local strong form meshless method on multiple Graphics Processing Units. Computer modeling in engineering &amp;amp; sciences. 2013;91:377-396; [http://comms.ijs.si/~gkosec/data/papers/26785063.pdf manuscript]&lt;br /&gt;
* Kosec G., Šarler B., H-adaptive local radial basis function collocation meshless method. Computers, materials &amp;amp; continua. 2011;26:227-253; [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerBurgers.pdf manuscript]&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321; [http://comms.ijs.si/~gkosec/data/papers/EABE2499.pdf manuscript]&lt;br /&gt;
* Kosec G, Zaloznik M, Sarler B, Combeau H. A Meshless Approach Towards Solution of Macrosegregation Phenomena. CMC: Computers, Materials, &amp;amp; Continua. 2011;580:1-27 [http://comms.ijs.si/~gkosec/data/papers/KosecZaloznikSarlerCombeauSegregation.pdf manuscript]&lt;br /&gt;
* Kosec G, Sarler B. Solution of thermo-fluid problems by collocation with local pressure correction. International Journal of Numerical Methods for Heat &amp;amp; Fluid Flow. 2008;18:868-82 [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerNS2008.pdf manuscript]&lt;br /&gt;
*  Trobec R., Kosec G., Parallel Scientific Computing, ISBN: 978-3-319-17072-5 (Print) 978-3-319-17073-2.&lt;br /&gt;
*  Slak, J., Kosec, G., Detection of heart rate variability from a wearable differential ECG device. MIPRO 2016, 39th International Convention, 2016, Opatija, Croatia, ISSN 1847-3938, pp. 450-455.&lt;br /&gt;
*  Kolman, M., Kosec, G. Correlation between attenuation of 20 GHz satellite communication link and liquid water content in the atmosphere. MIPRO 2016, 39th International Convention, 2016, Opatija, Croatia, ISSN 1847-3938. pp. 308-313.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Correlation_fit_time.png&amp;diff=599</id>
		<title>File:Correlation fit time.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Correlation_fit_time.png&amp;diff=599"/>
				<updated>2016-11-10T15:44:32Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Correlation_time.png&amp;diff=598</id>
		<title>File:Correlation time.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Correlation_time.png&amp;diff=598"/>
				<updated>2016-11-10T15:43:10Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Correlation_contour_integrate.png&amp;diff=597</id>
		<title>File:Correlation contour integrate.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Correlation_contour_integrate.png&amp;diff=597"/>
				<updated>2016-11-10T15:42:49Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Correlation_contour_local.png&amp;diff=596</id>
		<title>File:Correlation contour local.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Correlation_contour_local.png&amp;diff=596"/>
				<updated>2016-11-10T15:40:46Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Scatter_all.png&amp;diff=595</id>
		<title>File:Scatter all.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Scatter_all.png&amp;diff=595"/>
				<updated>2016-11-10T15:39:49Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Correlation_time_flow_1800_64_local.png&amp;diff=594</id>
		<title>File:Correlation time flow 1800 64 local.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Correlation_time_flow_1800_64_local.png&amp;diff=594"/>
				<updated>2016-11-10T15:31:08Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Support.png&amp;diff=593</id>
		<title>File:Support.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Support.png&amp;diff=593"/>
				<updated>2016-11-10T15:28:50Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Dsd_manual.png&amp;diff=592</id>
		<title>File:Dsd manual.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Dsd_manual.png&amp;diff=592"/>
				<updated>2016-11-10T15:17:01Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Heart_rate_variability_detection&amp;diff=535</id>
		<title>Heart rate variability detection</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Heart_rate_variability_detection&amp;diff=535"/>
				<updated>2016-11-07T18:15:59Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We used MLS and WLS approximation to extract heart rate variability from a wearable ECG sensor.&lt;br /&gt;
[[:File:heartratevar.pdf|Full paper available for download here.]]&lt;br /&gt;
[[:File:heartratevar_pres.pdf | Presentation available for download here.]]&lt;br /&gt;
&lt;br /&gt;
The code can be found in &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; /EKG/detect.cpp &amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:real&amp;quot;/&amp;gt; shows how we detected beat to beat times from an actual heartbeat.&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:real&amp;quot;&amp;gt;&lt;br /&gt;
[[File:real.png|600px|thumb|upright=2|alt=actual heartbeat detected beat to beat|&amp;lt;caption&amp;gt; We detected beat to beat times from an actual heartbeat in this way. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A slightly abridged version of the paper is presented below.&lt;br /&gt;
&lt;br /&gt;
=Detection of heart rate variability&amp;lt;br&amp;gt;from a wearable differential ECG device=&lt;br /&gt;
&lt;br /&gt;
[mailto:jure.slak@student.fmf.uni-lj.si Jure Slak], [mailto:gregor.kosec@ijs.si Gregor Kosec],&lt;br /&gt;
Jožef Stefan Institute, Department of Communication Systems, Ljubljana.&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
The precise heart rate variability is extracted from an ECG signal&lt;br /&gt;
measured by a wearable sensor that constantly records the heart activity of an&lt;br /&gt;
active subject for several days. Due to the limited resources of the wearable&lt;br /&gt;
ECG device the signal can only be sampled at a relatively low frequency of&lt;br /&gt;
approximately $100$ Hz. Besides the low sampling rate, the signal from a wearable sensor is&lt;br /&gt;
also burdened with much more noise than the standard $12$-channel ambulatory&lt;br /&gt;
ECG, mostly due to the design of the device, i.e. the electrodes are&lt;br /&gt;
positioned relatively close to each other, and the fact that the subject is&lt;br /&gt;
active during the measurements. To extract heart rate variability with $1$ ms&lt;br /&gt;
precision, i.e. $10$ times more accurate than the sample rate of the measured&lt;br /&gt;
signal, a two-step algorithm is proposed. In the first step an approximate global&lt;br /&gt;
search is performed, roughly determining the point of interest, followed by a&lt;br /&gt;
local search based on the Moving Least Squares approximation to refine the&lt;br /&gt;
result. The methodology is evaluated in terms of accuracy, noise sensitivity,&lt;br /&gt;
and computational complexity. All tests are performed on simulated as well as&lt;br /&gt;
measured data. It is demonstrated that the proposed algorithm provides&lt;br /&gt;
accurate results at a low computational cost and it is robust enough for&lt;br /&gt;
practical application.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Introduction&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:beat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:beat.png|600px|thumb|upright=2|alt= Beat graph|&amp;lt;caption&amp;gt;Beat to beat time between two characteristic points.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is well known that the morphology of ECG signals changes from beat to beat as&lt;br /&gt;
a consequence of physical activity, sensations, emotions, breathing, etc. of the&lt;br /&gt;
subject. The most straightforward measure of these changes is&lt;br /&gt;
the heart rate variability (HRV), i.e. small variations of beat duration. HRV&lt;br /&gt;
characterizes the timings of heart cell repolarization and depolarization&lt;br /&gt;
processes. The HRV is typically determined by measuring the intervals between&lt;br /&gt;
two consecutive R-waves (RRI) or intervals between R and T waves (RTI). Several&lt;br /&gt;
vital signals can be identified from the HRV and therefore it is often used as&lt;br /&gt;
a health status indicator in different fields of medicine, e.g. neurology,&lt;br /&gt;
cardiac surgery, heart transplantation and many more. Typical HRV values of&lt;br /&gt;
healthy subjects are approximately $40$ ms for RRI and $2$ ms for RTI (see&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:beat&amp;quot;/&amp;gt;). Therefore it is important to detect considered waves with at least $1$&lt;br /&gt;
ms accuracy for practical use.  This paper deals with the detection of HRV in&lt;br /&gt;
ECG signal provided by a Wearable ECG Device (WECGD) that is paired with a&lt;br /&gt;
personal digital assistant (PDA) via Bluetooth Smart protocol. The WECGD, due&lt;br /&gt;
to the hardware limitations, only measures the signal, while the PDA takes care&lt;br /&gt;
of data visualization, basic analysis and transmission of the data to a more&lt;br /&gt;
powerful server for further analyses. In contrast to a standard ambulatory&lt;br /&gt;
$12$-channel ECG measurement, where trained personnel prepare and supervise the&lt;br /&gt;
measurement of subject at rest, the WECGD works on a single channel, the&lt;br /&gt;
subject is active and since the WECGD is often placed by an untrained user its&lt;br /&gt;
orientation might be random, resulting in additional decrease of signal&lt;br /&gt;
quality.  In order to maintain several days of battery autonomy, the WECGD also&lt;br /&gt;
records the heart activity at a significantly lower frequency and resolution in&lt;br /&gt;
comparison to ambulatory measurements.  All these factors render the standard&lt;br /&gt;
ECG analysis algorithms ineffective. In this paper we analyse a possible local,&lt;br /&gt;
i.e. only short history of measurement data is required, algorithm for detection&lt;br /&gt;
of heart rate variability with $1$ ms precision of a signal recorded with $120$ Hz.&lt;br /&gt;
&lt;br /&gt;
=Detection method=&lt;br /&gt;
&lt;br /&gt;
In order to evaluate the HRV, the ''characteristic point'' of each heart&lt;br /&gt;
beat has to be detected in the signal, which is provided as values of electric&lt;br /&gt;
potential sampled at a frequency of $120$ Hz. Since the HRV is computed from&lt;br /&gt;
differences of consecutive characteristic points, the choice of the&lt;br /&gt;
characteristic point does not play any role, as long as it is the same in every&lt;br /&gt;
beat. In this work we choose to characterise the beat by the minimal first&lt;br /&gt;
derivative; in other words, we seek the points in the signal with the steepest&lt;br /&gt;
drop in electric potential (&amp;lt;xr id=&amp;quot;fig:beat&amp;quot;/&amp;gt;) that occurs between the R and S&lt;br /&gt;
peaks.&lt;br /&gt;
&lt;br /&gt;
The detection method is separated in two stages, namely global and local. The&lt;br /&gt;
goal of the global method is to approximately detect the characteristic point,&lt;br /&gt;
while the local method serves as a fine precision detection, enabling us to&lt;br /&gt;
detect HRV with much higher accuracy.&lt;br /&gt;
&lt;br /&gt;
==Coarse global search==&lt;br /&gt;
In the first step the algorithm finds the minimal first derivative of a given signal&lt;br /&gt;
to sample-rate accuracy, i.e. $\frac{1}{\nu}$. The global search method is next to&lt;br /&gt;
trivial. The algorithm simply travels along the signal, calculating the&lt;br /&gt;
discrete derivative and storing the position of the minimal value found so far.&lt;br /&gt;
Since the points are sampled equidistantly, minimizing $\frac{\Delta y}{\Delta t}$&lt;br /&gt;
is equivalent to minimizing $\Delta y$. The middle of the interval where the&lt;br /&gt;
largest drop was detected is taken as the global guess $t_G$. The results&lt;br /&gt;
of the global search are presented in &amp;lt;xr id=&amp;quot;fig:global&amp;quot;/&amp;gt;.&lt;br /&gt;
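A minimal sketch of this coarse search (a hypothetical helper, not the actual code from the referenced detect.cpp): since the samples are equidistant, it scans the buffer for the most negative successive difference and returns the midpoint of that interval.&lt;br /&gt;

```cpp
#include <cstddef>
#include <vector>

// Coarse global search: find the sample interval with the largest
// potential drop (most negative successive difference). Since samples
// are equidistant, minimizing dy/dt is the same as minimizing dy.
// Returns the midpoint of that interval, in units of the sample step.
double coarse_guess(const std::vector<double>& y) {
    std::size_t best = 0;
    for (std::size_t i = 1; i + 1 < y.size(); ++i)
        if (y[i + 1] - y[i] < y[best + 1] - y[best]) best = i;
    return best + 0.5;  // t_G = middle of interval [best, best + 1]
}
```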
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:global&amp;quot;&amp;gt;&lt;br /&gt;
[[File:global.png|600px|thumb|upright=2|alt= Beat graph|&amp;lt;caption&amp;gt;Global search detection of two beats.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Fine local search==&lt;br /&gt;
The global search provides only coarse positions of the characteristic points,&lt;br /&gt;
limited to the sample points. To push the accuracy beyond $1/\nu$, the signal has to be&lt;br /&gt;
represented also in between the sample points. A monomial approximation function&lt;br /&gt;
based on the Moving Least Squares approach is introduced for&lt;br /&gt;
that purpose.&lt;br /&gt;
&lt;br /&gt;
The value of the electrical potential at arbitrary time $t_0$ is approximated.&lt;br /&gt;
Denote the vector of $n$ known values near $t_0$ by $\boldsymbol{f}$ (called&lt;br /&gt;
&amp;lt;i&amp;gt;support&amp;lt;/i&amp;gt;), and the times at which they were measured by&lt;br /&gt;
$\boldsymbol{t}$.  The approximation $\hat{f}$ of $\boldsymbol{f}$ is introduced as a linear&lt;br /&gt;
combination of $m$, in general arbitrary, basis functions $(b_j)_{j=1}^m$;&lt;br /&gt;
however, in this work only monomials are considered.&lt;br /&gt;
\[\hat{f} = \sum_{j=1}^m\alpha_jb_j \]&lt;br /&gt;
&lt;br /&gt;
The most widely used approach to solving the above problem and finding the appropriate&lt;br /&gt;
$\hat{f}$ is to minimize the weighted 2-norm of the error, also known as the&lt;br /&gt;
Weighted Least Squares (WLS) method:&lt;br /&gt;
\[  \|\boldsymbol{f} - \hat{f}(\boldsymbol{t})\|_w^2 = \sum_{i=1}^n (f_i -\hat{f}(t_i))^2 w(t_i),  \]&lt;br /&gt;
where $w$ is a nonnegative weight function.&lt;br /&gt;
&lt;br /&gt;
The only unknown quantities are the $m$ coefficients $\boldsymbol{\alpha}$ of the linear&lt;br /&gt;
combination, which can be expressed as a solution of an overdetermined linear&lt;br /&gt;
system $W\!B\boldsymbol{\alpha} = W\!\boldsymbol{f}$, where $W$ is the $n\times n$ diagonal&lt;br /&gt;
weight matrix, $W_{ii} = \sqrt{w(t_i)}$ and $B$ is the $n\times m$ collocation&lt;br /&gt;
matrix, $B_{ij} = b_j(t_i)$. There are different approaches to finding the&lt;br /&gt;
solution. The fastest, but also the least stable and accurate, is to solve the&lt;br /&gt;
normal system $B^\mathsf{T} W^\mathsf{T} WB\boldsymbol{\alpha} = B^\mathsf{T} W^\mathsf{T} W\boldsymbol{f}$; a more&lt;br /&gt;
expensive but more stable approach uses the QR decomposition; and the most&lt;br /&gt;
expensive and most stable approach uses the SVD&lt;br /&gt;
decomposition. The resulting vector $\boldsymbol{\alpha}$ is then&lt;br /&gt;
used to calculate $\hat{f}(t)$ for any given $t$. The derivatives are&lt;br /&gt;
approximated simply by differentiating the approximating function, $\hat{f}' =&lt;br /&gt;
\sum_{j=1}^m\alpha_jb_j'$.&lt;br /&gt;
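The fitting step above can be sketched as follows (an illustrative stand-alone helper, not the library's implementation). For brevity it solves the normal system directly by Gaussian elimination, which, as noted, is the fastest but least stable of the listed options.&lt;br /&gt;

```cpp
#include <cmath>
#include <vector>

// Weighted least squares fit with a monomial basis b_j(t) = t^j.
// Builds the normal system (B^T W^T W B) alpha = B^T W^T W f and solves
// it by Gaussian elimination; for better stability use QR or SVD.
std::vector<double> wls_fit(const std::vector<double>& t,
                            const std::vector<double>& f,
                            const std::vector<double>& w,  // weights w(t_i)
                            int m) {
    // Augmented m x (m+1) normal system [A | rhs].
    std::vector<std::vector<double>> A(m, std::vector<double>(m + 1, 0.0));
    for (std::size_t i = 0; i < t.size(); ++i) {
        std::vector<double> b(m);              // one row of the collocation matrix B
        for (int j = 0; j < m; ++j) b[j] = std::pow(t[i], j);
        for (int r = 0; r < m; ++r) {          // accumulate B^T W^2 B and B^T W^2 f
            for (int c = 0; c < m; ++c) A[r][c] += w[i] * b[r] * b[c];
            A[r][m] += w[i] * b[r] * f[i];
        }
    }
    for (int k = 0; k < m; ++k)                // forward elimination
        for (int r = k + 1; r < m; ++r) {
            double q = A[r][k] / A[k][k];
            for (int c = k; c <= m; ++c) A[r][c] -= q * A[k][c];
        }
    std::vector<double> alpha(m);
    for (int k = m - 1; k >= 0; --k) {         // back substitution
        alpha[k] = A[k][m];
        for (int c = k + 1; c < m; ++c) alpha[k] -= A[k][c] * alpha[c];
        alpha[k] /= A[k][k];
    }
    return alpha;  // coefficients of hat{f}(t) = sum_j alpha_j t^j
}
```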
&lt;br /&gt;
The WLS approximation weights the influence of the support points using the weight&lt;br /&gt;
function $w$. Usually, the weight is chosen so that the points closest to $t_0$ are&lt;br /&gt;
more important in the norm than the nodes far away. Naturally, such an&lt;br /&gt;
approximation is valid only as long as the evaluation point is close to&lt;br /&gt;
$t_0$.&lt;br /&gt;
A more general approach is the [[Moving Least Squares (MLS)|Moving Least Squares (MLS)]] approximation, where&lt;br /&gt;
the coefficients $\alpha$ are no longer spatially independent, but are recomputed&lt;br /&gt;
for each evaluation point. Naturally, such an approach is considerably more expensive,&lt;br /&gt;
but also more precise. A comparison of both methods, i.e. WLS and MLS, is shown&lt;br /&gt;
in &amp;lt;xr id=&amp;quot;fig:mlswlsHeartvar&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:mlswlsHeartvar&amp;quot;&amp;gt;&lt;br /&gt;
[[File:mlswlsHeartvar.png|600px|thumb|upright=2|alt= Beat graph|&amp;lt;caption&amp;gt;MLS and WLS approximation of a heartbeat-like function&lt;br /&gt;
$ f(x) = \frac{\sin x}{x}\frac{\left| x+8\right| - \left| x-5\right|  +26&lt;br /&gt;
}{13 ((\frac{x-1}{7})^4+1)}+\frac{1}{10} $,&lt;br /&gt;
with measurements taken at points  $ \{-14, \ldots,  24\} $.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The task of finding the minimal value of the first derivative is equivalent to&lt;br /&gt;
the task of finding the zero of the second derivative. This zero will be our local&lt;br /&gt;
approximation $t_L$ of the beat time, $\hat{f}''(t_L) = 0$.&lt;br /&gt;
Therefore an approximation function with a non-constant second derivative,&lt;br /&gt;
i.e. an approximation function with at least a&lt;br /&gt;
3rd order monomial basis, is constructed. The most straightforward&lt;br /&gt;
approach to finding its root is simple bisection. Bisection requires initial&lt;br /&gt;
low and high bounds, which can be estimated from the characteristic point $t_G$&lt;br /&gt;
provided by the global method. Using the fact that QRS intervals last&lt;br /&gt;
approximately $\Delta t_{\text{QRS}}= 0.1$ s, we can search for the root of the second&lt;br /&gt;
derivative on the interval $[t_G - \Delta t_{\text{QRS}}/2, t_G + \Delta t_{\text{QRS}}/2]$; at the given&lt;br /&gt;
sample rate this translates to a search interval of two sample points away from $t_G$ in each direction.&lt;br /&gt;
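The bisection step can be sketched as below, assuming the fitted monomial coefficients $\alpha_j$ are already known (helper names are hypothetical): it evaluates $\hat{f}''(t) = \sum_j j(j-1)\alpha_j t^{j-2}$ and halves the bracket until the tolerance is reached.&lt;br /&gt;

```cpp
#include <cmath>
#include <vector>

// Second derivative of the monomial fit hat{f}(t) = sum_j alpha_j t^j.
double fpp(const std::vector<double>& alpha, double t) {
    double s = 0.0;
    for (std::size_t j = 2; j < alpha.size(); ++j)
        s += j * (j - 1) * alpha[j] * std::pow(t, j - 2);
    return s;
}

// Bisection for hat{f}''(t_L) = 0 on [lo, hi]; assumes a sign change
// inside the bracket, e.g. two sample steps around the global guess t_G.
double find_root(const std::vector<double>& alpha,
                 double lo, double hi, double eps = 1e-10) {
    while (hi - lo > eps) {
        double mid = 0.5 * (lo + hi);
        if (fpp(alpha, lo) * fpp(alpha, mid) <= 0.0) hi = mid;
        else lo = mid;
    }
    return 0.5 * (lo + hi);
}
```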
&lt;br /&gt;
==HRV calculation and error estimation==&lt;br /&gt;
Given a sampled heartbeat signal, the fine local search produces the vector of $\ell+1$ detected times&lt;br /&gt;
$\boldsymbol{t_L} := (t_{L,i})_{i=1}^{\ell+1}$ of the RS slopes. Their successive&lt;br /&gt;
differences represent a vector $\boldsymbol{\hat{r}}$ of detected beat to beat times,&lt;br /&gt;
the durations of RR intervals.&lt;br /&gt;
\[ \boldsymbol{\hat{r}} = (\hat{r}_{i})_{i=1}^\ell, \quad \hat{r}_i = t_{L,i+1} - t_{L,i} \]&lt;br /&gt;
Let $\boldsymbol{r}$ be the vector of (usually unknown) actual beat to beat times. Then the&lt;br /&gt;
heart rate variability (HRV) $h$ is defined as&lt;br /&gt;
\[ h := \text{std}(\boldsymbol{r}) = \sqrt{\frac{1}{\ell}\sum_{i=1}^{\ell} (r_i - \bar{r})^2}, \]&lt;br /&gt;
where $\bar{r}$ stands for the average beat to beat time, $\bar{r} =&lt;br /&gt;
\sum_{i=1}^\ell r_i / \ell$. The HRV estimation $\hat{h}$ is calculated as the&lt;br /&gt;
standard deviation of the detected times $\boldsymbol{\hat{r}}$.&lt;br /&gt;
&lt;br /&gt;
In the following analyses the actual vector $\boldsymbol{r}$ will be known, since the&lt;br /&gt;
synthesized heartbeat will be analysed. The most obvious error measures are the&lt;br /&gt;
absolute error of HRV, $e_{h} = |\hat{h} - h|$ and the absolute error of the&lt;br /&gt;
average heart beat $e_{\bar{r}} = |\bar{\hat{r}} - \bar{r}|$.  Using the vector&lt;br /&gt;
of errors $\boldsymbol{e} = |\boldsymbol{r} -\boldsymbol{\hat{r}}|$ the average error $e_a = \sum e_i&lt;br /&gt;
/ \ell$ and the maximal error $e_M = \max(\boldsymbol{e})$ can be assessed.&lt;br /&gt;
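The quantities defined above map directly to code; a minimal sketch (hypothetical helpers, not the paper's implementation) computes the beat to beat times $\hat{r}_i$ as successive differences and the HRV as their standard deviation.&lt;br /&gt;

```cpp
#include <cmath>
#include <vector>

// Beat to beat times: successive differences of the detected times t_L.
std::vector<double> beat_times(const std::vector<double>& tL) {
    std::vector<double> r;
    for (std::size_t i = 0; i + 1 < tL.size(); ++i)
        r.push_back(tL[i + 1] - tL[i]);
    return r;
}

// HRV = population standard deviation of the beat to beat times.
double hrv(const std::vector<double>& r) {
    double mean = 0.0;
    for (double x : r) mean += x;
    mean /= r.size();
    double var = 0.0;
    for (double x : r) var += (x - mean) * (x - mean);
    return std::sqrt(var / r.size());
}
```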
&lt;br /&gt;
=Results and discussion=&lt;br /&gt;
&lt;br /&gt;
==Approximation set-up==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:results&amp;quot;&amp;gt;&lt;br /&gt;
[[File:results.png|600px|thumb|upright=2|alt= Beat graph|&amp;lt;caption&amp;gt; Subfigures $\begin{bmatrix} a &amp;amp; b \\ c &amp;amp; d \end{bmatrix}$. &amp;lt;br&amp;gt;&lt;br /&gt;
Subfigure a. A sufficiently small support implies interpolation,&lt;br /&gt;
making the weight function useless and MLS equal to WLS. &amp;lt;br&amp;gt;&lt;br /&gt;
Subfigure b. MLS and WLS differ when approximating with a low order polynomial. &amp;lt;br&amp;gt;&lt;br /&gt;
Subfigure c. MLS and WLS match when approximating with a high order polynomial.&amp;lt;br&amp;gt;&lt;br /&gt;
Subfigure d. Expected bad behaviour with too many support points and a low order approximation.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the first step the free parameters of the approximation, i.e. the weight function, support&lt;br /&gt;
size, and number of basis functions, have to be assessed. A single heartbeat is&lt;br /&gt;
extracted and approximated with all possible combinations of basis functions&lt;br /&gt;
with orders from $2$ to $10$ and symmetric supports of sizes from $3$ to $15$ using both&lt;br /&gt;
WLS and MLS. The global algorithm described in the previous section was used&lt;br /&gt;
to produce the initial guesses. For demonstration four sample cases are&lt;br /&gt;
presented. The weight function was the same in all four cases, a Gaussian&lt;br /&gt;
distribution with $\mu = t_G$ and $\sigma = m/4$, which makes sure that all&lt;br /&gt;
support points are taken into account, but the central ones are more important.&lt;br /&gt;
&lt;br /&gt;
The simplest case is when the support size is the same as the number of basis&lt;br /&gt;
functions, resulting in an interpolation. In this case, the weight function is&lt;br /&gt;
not important, making WLS and MLS entirely equivalent as seen in&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:results&amp;quot;/&amp;gt;a.&lt;br /&gt;
&lt;br /&gt;
In the case of a small support and a low order of the monomial basis, WLS performs worse&lt;br /&gt;
than MLS and the approaches differ significantly. However, as we increase the&lt;br /&gt;
order of the polynomial basis the difference within the bisection interval&lt;br /&gt;
becomes negligible. This transition can be observed in &amp;lt;xr id=&amp;quot;fig:results&amp;quot;/&amp;gt;b and &amp;lt;xr id=&amp;quot;fig:results&amp;quot;/&amp;gt;c.&lt;br /&gt;
&lt;br /&gt;
As predicted, the support size is important. Both methods perform badly when&lt;br /&gt;
too many surrounding measurements are taken into account while still using a low&lt;br /&gt;
order polynomial approximation. Note that in &amp;lt;xr id=&amp;quot;fig:results&amp;quot;/&amp;gt;d the initial&lt;br /&gt;
guess is barely improved and the beat shape is skewed away from the RS drop.&lt;br /&gt;
&lt;br /&gt;
The conclusion is that for our purposes MLS approximation is unnecessary, as&lt;br /&gt;
WLS provides good enough results when used appropriately. Further analysis&lt;br /&gt;
to determine the best choice of parameters $m$ and $n$ is presented later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computational complexity==&lt;br /&gt;
The presented algorithm is a streaming algorithm, requiring a buffer to store&lt;br /&gt;
the current beat in the signal. Let $b$ be the number of measurements per&lt;br /&gt;
beat, stored in a buffer of length $b$. The global part of the algorithm&lt;br /&gt;
makes $O(b)$ operations, being a simple linear search.  The local part is&lt;br /&gt;
more expensive. First an $n \times m$ matrix is constructed in $O(mn)$&lt;br /&gt;
and the right hand side vector is copied from the buffer.  The system is then&lt;br /&gt;
solved using the SVD decomposition in $O(mn^2+n^3)$. Note that as $m =&lt;br /&gt;
O(n)$, this step takes $O(n^3)$. The minimal first derivative is found&lt;br /&gt;
using bisection. To achieve tolerance $\varepsilon$,&lt;br /&gt;
$\lceil\log_2(1/\varepsilon)\rceil$ function evaluations are needed, each&lt;br /&gt;
costing $O(m)$ operations. Total time complexity is therefore equal to&lt;br /&gt;
$O(b + n^3 + m\log(1/\varepsilon))$. Note, that using MLS would require&lt;br /&gt;
$O(n^3)$ for each function evaluation, resulting in a significantly worse&lt;br /&gt;
time complexity of $O(b+n^3\log_2(1/\varepsilon))$. The calculation of&lt;br /&gt;
average and variance is done later, after the wanted amount of signal has&lt;br /&gt;
already been analysed.&lt;br /&gt;
&lt;br /&gt;
In practice the algorithm executes very fast: using the typical values&lt;br /&gt;
$b = 150$, $m=6$, $n=11$ and $\varepsilon = 10^{-10}$, it takes&lt;br /&gt;
approximately $0.27$ s to analyze $1000$ heartbeats ($\approx 10^5$ data&lt;br /&gt;
points). The algorithm was compiled from C++ source code using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; g++5.3.0&amp;lt;/syntaxhighlight&amp;gt; with &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; -O2 &amp;lt;/syntaxhighlight&amp;gt; flag and run on&lt;br /&gt;
an &amp;lt;tt&amp;gt;Intel(R) Core(TM) i7-4700MQ&amp;lt;/tt&amp;gt; processor.&lt;br /&gt;
&lt;br /&gt;
==Simulated heartbeat with known variability==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:variabilityscatter&amp;quot;&amp;gt;&lt;br /&gt;
[[File:variabilityscatter.png|600px|thumb|upright=2|alt= Generated beat to beat times and their global detection|&amp;lt;caption&amp;gt;Generated beat to beat times and their global detection.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first set of tests for the presented method was performed using a simulated&lt;br /&gt;
heartbeat. A single real heartbeat was taken and then replicated a thousand&lt;br /&gt;
times, each time shifted by a random offset $T$, distributed normally&lt;br /&gt;
around zero, $T \sim \mathcal{N}(0, \sigma^2)$, with&lt;br /&gt;
$\sigma = \frac{1}{2\nu} = \frac{1}{2} \Delta t$.  This means that a decent amount of&lt;br /&gt;
measurements will be more than $\Delta t$ apart, a difference that must be&lt;br /&gt;
detected by the global search for the method to work.  However, around half of the&lt;br /&gt;
measurements are less than $\Delta t$ apart, forming suitable ground for&lt;br /&gt;
testing the precision of the local search. At the given sample frequency,&lt;br /&gt;
$\sigma$ equals $4.167$ ms.&lt;br /&gt;
&lt;br /&gt;
The generated, coarsely detected, and finely detected beat to beat times&lt;br /&gt;
are presented in &amp;lt;xr id=&amp;quot;fig:variabilityscatter&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Beat to beat time precision is significantly improved by the local search.&lt;br /&gt;
As seen in &amp;lt;xr id=&amp;quot;fig:variabilityhist&amp;quot;/&amp;gt;, the distributions of generated and detected&lt;br /&gt;
heart beats match very well.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:variabilityhist&amp;quot;&amp;gt;&lt;br /&gt;
[[File:variabilityhist.png|600px|thumb|center|upright=2|alt= Generated RRI times and their global detection|&amp;lt;caption&amp;gt;Generated RRI times and their global detection.&lt;br /&gt;
The two middle coarse detection columns extend off the chart and are truncated for clarity.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Results of RRI and HRV detection by global and local search are presented in&lt;br /&gt;
Table 1. The generated times were taken as exact and the&lt;br /&gt;
algorithm was run to produce global and local approximations. Then the average&lt;br /&gt;
RRI time and HRV were calculated for each data set separately.  The average RRI&lt;br /&gt;
time is estimated very well by both methods, but the precision of the global&lt;br /&gt;
method is not satisfactory when measuring heart rate variability. The precision&lt;br /&gt;
is significantly improved by the local search. A chart showing the average&lt;br /&gt;
error of the detected times is shown in &amp;lt;xr id=&amp;quot;fig:allerrs&amp;quot;/&amp;gt;. It can be seen&lt;br /&gt;
that MLS performs better on average, but WLS is very close and its loss of&lt;br /&gt;
precision is a reasonable tradeoff.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:allerrs&amp;quot;&amp;gt;&lt;br /&gt;
[[File:allerrs.png|600px|thumb|center|upright=2|alt= Comparison of WLS and MLS errors|&amp;lt;caption&amp;gt; Comparison of WLS and MLS errors using different orders and&lt;br /&gt;
support sizes.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The red values in &amp;lt;xr id=&amp;quot;fig:allerrs&amp;quot;/&amp;gt; indicate the invalid region, where there are more basis functions than support points.&lt;br /&gt;
Both MLS and WLS have the same region of validity and are precise in the predicted regime.&lt;br /&gt;
For very high order approximations the condition number of the matrix becomes critical&lt;br /&gt;
and the method becomes unstable, which explains the loss of precision for orders larger than $12$.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Table 1: Results and errors of the RRI and HRV detection.&lt;br /&gt;
|-&lt;br /&gt;
! quantity [s]&lt;br /&gt;
! generated&lt;br /&gt;
! coarse&lt;br /&gt;
! fine    &lt;br /&gt;
|-&lt;br /&gt;
|$\bar{r} $||$0.861136$||$0.861139$||$0.861136$ &lt;br /&gt;
|-&lt;br /&gt;
|$e_{\bar{r}}$||$0$||$3.34 \cdot 10^{-6}$||$2.83 \cdot 10^{-8}$ &lt;br /&gt;
|-&lt;br /&gt;
|$h$||$0.004102$||$0.005324$||$0.004137$ &lt;br /&gt;
|-&lt;br /&gt;
|$e_h $||$0$||$0.001222$||$3.52 \cdot 10^{-5}$ &lt;br /&gt;
|-&lt;br /&gt;
|$e_a $||$0$||$0.002969$||$0.000263$ &lt;br /&gt;
|-&lt;br /&gt;
|$e_M $||$0$||$0.007778$||$0.000829$&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Noise analysis==&lt;br /&gt;
The method presented in this paper relies heavily on the values of the&lt;br /&gt;
derivative, and derivatives are generally very sensitive to noise.&lt;br /&gt;
To analyse the noise sensitivity of the algorithm, a real heartbeat signal was&lt;br /&gt;
taken and normalized by subtracting the average and dividing by the maximal&lt;br /&gt;
absolute value, transforming the measurements onto the interval $[-1, 1]$.&lt;br /&gt;
Uniform noise of level $p$ was then applied to every measurement.&lt;br /&gt;
Specifically, a uniformly distributed random number on the interval $[-p, p]$,&lt;br /&gt;
for a fixed $p\in[0, 1]$, was added to every measurement point.&lt;br /&gt;
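The normalization and noising procedure can be sketched as follows (an illustrative C++ fragment; the function names and the fixed seed are our own choices):&lt;br /&gt;

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Sketch of the noise test set-up described above: normalize a signal to
// [-1, 1] (subtract the mean, divide by the maximal absolute value), then
// add uniform noise on [-p, p] to every sample. Names are illustrative.
void normalize(std::vector<double>& y) {
    double mean = 0.0;
    for (double v : y) mean += v;
    mean /= y.size();
    double amax = 0.0;
    for (double& v : y) { v -= mean; amax = std::max(amax, std::fabs(v)); }
    for (double& v : y) v /= amax;
}

void add_uniform_noise(std::vector<double>& y, double p, unsigned seed = 7) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> noise(-p, p);
    for (double& v : y) v += noise(gen);  // noise level p, as in the text
}
```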
&lt;br /&gt;
Example of the noised and normalized beat and the behaviour&lt;br /&gt;
of the algorithm is shown in  &amp;lt;xr id=&amp;quot;fig:noised&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:noised&amp;quot;&amp;gt;&lt;br /&gt;
[[File:noised.png|600px|thumb|upright=2|alt= Detecting heart beat in a 25% uniformly noised signal.|&amp;lt;caption&amp;gt;Detecting heart beat in a 25% uniformly noised signal.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An approximate calculation of the critical noise level can be made. Let $\nu$ be&lt;br /&gt;
the sample frequency and $p$ the noise level. The critical noise level is such&lt;br /&gt;
that the maximal noise derivative is comparable to the maximal derivative of the&lt;br /&gt;
signal. The maximal noise derivative is achieved when two subsequent measurements&lt;br /&gt;
take the extreme values $p$ and $-p$, resulting in a drop of magnitude $2p$.&lt;br /&gt;
In the actual heartbeat, the maximal drop appears between the R and S peaks.&lt;br /&gt;
Taking the maximal number of nodes in the QRS complex to be $\lceil 0.1 \nu \rceil$ as&lt;br /&gt;
before, and observing in &amp;lt;xr id=&amp;quot;fig:real&amp;quot;/&amp;gt; that the RS drop contains at&lt;br /&gt;
most one third of them, we approximate the maximal drop between two subsequent&lt;br /&gt;
measurements as the total RS drop divided by the number of nodes included. Taking&lt;br /&gt;
into account that the RS drop equals $2$ after normalization, we obtain&lt;br /&gt;
\[ \Delta y_{\text{max}} = \frac{2}{\frac{1}{3}\lceil 0.1\nu\rceil} \approx 60&lt;br /&gt;
\frac{1}{\nu}. \]&lt;br /&gt;
The critical noise level is reached when these two drops are approximately equal:&lt;br /&gt;
\[ 2p_{\text{crit}} \approx 60 \frac{1}{\nu}. \]&lt;br /&gt;
At sample frequency of $120$ Hz this yields&lt;br /&gt;
\[ p_{\text{crit}} \approx 0.25. \]&lt;br /&gt;
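The estimate above is easy to check numerically; the following small helper (our own, purely illustrative) reproduces $p_{\text{crit}} \approx 0.25$ at $\nu = 120$ Hz:&lt;br /&gt;

```cpp
#include <cassert>
#include <cmath>

// Numeric check of the estimate above: the maximal per-sample drop of the
// normalized signal and the resulting critical noise level at frequency nu.
double critical_noise(double nu) {
    double qrs_nodes = std::ceil(0.1 * nu);   // nodes in the QRS complex
    double dy_max = 2.0 / (qrs_nodes / 3.0);  // RS drop of 2 over a third of them
    return dy_max / 2.0;                      // from 2 * p_crit ≈ dy_max
}
```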
&lt;br /&gt;
The algorithm was tested as described above for noise levels $p \in [0, 1]$&lt;br /&gt;
with a step of $1$%.  Results are presented in &amp;lt;xr id=&amp;quot;fig:noiseanalysis&amp;quot;/&amp;gt;. Note that the rise in&lt;br /&gt;
error corresponds nicely with the predicted critical noise level.&lt;br /&gt;
&lt;br /&gt;
Slightly better results can be achieved using simple statistical analysis. When noise&lt;br /&gt;
levels are around the critical value, a few false beats may be recognized among the actual&lt;br /&gt;
ones. The RRI times before and after a false beat will deviate greatly from&lt;br /&gt;
the average RRI time. Therefore, we can choose to ignore extreme values, as they&lt;br /&gt;
are very likely to be wrong. The average and the standard deviation are both sensitive&lt;br /&gt;
to extreme data points; the median, however, is not, and since extreme values&lt;br /&gt;
occur rarely, the median should give a good sense of what the &amp;amp;quot;correct&amp;amp;quot; beat&lt;br /&gt;
to beat time is. To detect the extreme values, the Median Absolute Deviation (MAD)&lt;br /&gt;
was chosen as the simplest sufficiently robust approach. In this method, the&lt;br /&gt;
median of the absolute differences from the median is taken as the estimate of data&lt;br /&gt;
variability. All values that differ more than five times the MAD from the median are&lt;br /&gt;
ignored. The results of applying this technique are also shown in&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:noiseanalysis&amp;quot;/&amp;gt;.&lt;br /&gt;
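The MAD-based rejection described above can be sketched as follows (an illustrative C++ fragment; the function names and the default threshold argument are ours):&lt;br /&gt;

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Median of a copy of the data (sorting a copy keeps the input intact).
double median(std::vector<double> v) {
    std::sort(v.begin(), v.end());
    std::size_t n = v.size();
    return n % 2 ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

// MAD-based outlier rejection: values farther than k times the MAD from the
// median are dropped (k = 5 in the text above).
std::vector<double> reject_outliers(const std::vector<double>& rri, double k = 5.0) {
    double med = median(rri);
    std::vector<double> dev;
    for (double r : rri) dev.push_back(std::fabs(r - med));
    double mad = median(dev);  // median absolute deviation
    std::vector<double> kept;
    for (double r : rri)
        if (std::fabs(r - med) <= k * mad) kept.push_back(r);
    return kept;
}
```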
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:noiseanalysis&amp;quot;&amp;gt;&lt;br /&gt;
[[File:noiseanalysis.png|600px|thumb|upright=2|alt= Variability detection at different noise levels.|&amp;lt;caption&amp;gt;Variability detection at different noise levels.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Actual heartbeat analysis==&lt;br /&gt;
The presented method was tested on sample inputs from the wearable sensor.&lt;br /&gt;
Detected beats are presented in &amp;lt;xr id=&amp;quot;fig:real&amp;quot;/&amp;gt;. The average beat to beat time&lt;br /&gt;
of the subject was $\bar{\hat{r}} = 0.888264$ s and the heart rate variability&lt;br /&gt;
equals $\hat{h} = 0.282443$ s.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
In this paper a simple MLS based algorithm for accurate detection of HRV from a&lt;br /&gt;
low sample rate ECG signal, typically provided by a wearable sensor, is&lt;br /&gt;
demonstrated. The algorithm is formulated in a fairly general way, and most of the&lt;br /&gt;
approximation parameters can be easily changed. However, to keep the analyses&lt;br /&gt;
within reasonable limits, only the order of the monomial basis and the support&lt;br /&gt;
size are varied to find the optimal set-up. It is demonstrated that increasing&lt;br /&gt;
the order of the basis as well as the support size improves the results up to roughly $m=12$&lt;br /&gt;
and/or $n=20$, at which point the system matrix becomes ill-conditioned. Based&lt;br /&gt;
on the presented analyses, a basis of $10$th order supported with $15$ nodes is&lt;br /&gt;
claimed to be the optimal set-up. It is also demonstrated that using the much more&lt;br /&gt;
computationally expensive MLS, in comparison to WLS, does not improve accuracy&lt;br /&gt;
enough to justify it.&lt;br /&gt;
&lt;br /&gt;
Based on the results of the approximation analyses, a fast two-stage streaming&lt;br /&gt;
algorithm for HRV detection is developed. The algorithm is tested on synthetic&lt;br /&gt;
as well as actual data, achieving good performance. It is demonstrated that&lt;br /&gt;
the detected beat times of a simulated heartbeat differ from the actual ones&lt;br /&gt;
by an average absolute error of $0.263$ ms at a sample frequency of&lt;br /&gt;
$120$ Hz. In other words, the fine detection is roughly ten times more&lt;br /&gt;
accurate than the coarse one. The method is also stable up to&lt;br /&gt;
approximately $25$% noise and computationally very efficient, successfully&lt;br /&gt;
processing $1000$ heartbeats in approximately $0.27$ s on a standard laptop&lt;br /&gt;
computer.&lt;br /&gt;
&lt;br /&gt;
In future work we will focus on a full analysis, including different basis and&lt;br /&gt;
weight functions, as well as non-linear MLS to also fit the basis shape&lt;br /&gt;
parameters.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Noiseanalysis.png&amp;diff=533</id>
		<title>File:Noiseanalysis.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Noiseanalysis.png&amp;diff=533"/>
				<updated>2016-11-07T18:08:05Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Noised.png&amp;diff=534</id>
		<title>File:Noised.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Noised.png&amp;diff=534"/>
				<updated>2016-11-07T18:08:05Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Heart_rate_variability_detection&amp;diff=532</id>
		<title>Heart rate variability detection</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Heart_rate_variability_detection&amp;diff=532"/>
				<updated>2016-11-07T17:50:37Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We used MLS and WLS approximation to extract heart rate variability from a wearable ECG sensor.&lt;br /&gt;
[[:File:heartratevar.pdf|Full paper available for download here.]]&lt;br /&gt;
[[:File:heartratevar_pres.pdf | Presentation available for download here.]]&lt;br /&gt;
&lt;br /&gt;
The code can be found in &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; /EKG/detect.cpp &amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:real&amp;quot;/&amp;gt; shows how we detected beat to beat times from an actual heartbeat.&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:real&amp;quot;&amp;gt;&lt;br /&gt;
[[File:real.png|600px|thumb|upright=2|alt=actual heartbeat detected beat to beat|&amp;lt;caption&amp;gt; We detected beat to beat times from an actual heartbeat in this way. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A slightly abridged version of the paper is presented below.&lt;br /&gt;
&lt;br /&gt;
=Detection of heart rate variability&amp;lt;br&amp;gt;from a wearable differential ECG device=&lt;br /&gt;
&lt;br /&gt;
[mailto:jure.slak@student.fmf.uni-lj.si Jure Slak], [mailto:gregor.kosec@ijs.si Gregor Kosec],&lt;br /&gt;
Jožef Stefan Institute, Department of Communication Systems, Ljubljana.&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
Precise heart rate variability is extracted from an ECG signal&lt;br /&gt;
measured by a wearable sensor that constantly records the heart activity of an&lt;br /&gt;
active subject for several days. Due to the limited resources of the wearable&lt;br /&gt;
ECG device, the signal can only be sampled at a relatively low frequency,&lt;br /&gt;
approximately $100$ Hz. Besides the low sampling rate, the signal from a wearable sensor is&lt;br /&gt;
also burdened with much more noise than the standard $12$-channel ambulatory&lt;br /&gt;
ECG, mostly due to the design of the device, i.e. the electrodes are&lt;br /&gt;
positioned relatively close to each other, and the fact that the subject is&lt;br /&gt;
active during the measurements. To extract heart rate variability with $1$ ms&lt;br /&gt;
precision, i.e. $10$ times finer than the sampling interval of the measured&lt;br /&gt;
signal, a two-step algorithm is proposed. In the first step an approximate global&lt;br /&gt;
search is performed, roughly determining the point of interest, followed by a&lt;br /&gt;
local search based on the Moving Least Squares approximation to refine the&lt;br /&gt;
result. The methodology is evaluated in terms of accuracy, noise sensitivity,&lt;br /&gt;
and computational complexity. All tests are performed on simulated as well as&lt;br /&gt;
measured data. It is demonstrated that the proposed algorithm provides&lt;br /&gt;
accurate results at a low computational cost and is robust enough for&lt;br /&gt;
practical application.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;Introduction&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:beat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:beat.png|600px|thumb|upright=2|alt= Beat graph|&amp;lt;caption&amp;gt;Beat to beat time between two characteristic points.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is well known that the morphology of ECG signals changes from beat to beat as&lt;br /&gt;
a consequence of physical activity, sensations, emotions, breathing, etc. of the&lt;br /&gt;
subject. The most straightforward measure of these changes is&lt;br /&gt;
the heart rate variability (HRV), i.e. small variations of beat duration. HRV&lt;br /&gt;
characterizes the timings of the repolarization and depolarization processes of&lt;br /&gt;
heart cells. The HRV is typically determined by measuring the intervals between&lt;br /&gt;
two consecutive R-waves (RRI) or intervals between R and T waves (RTI). Several&lt;br /&gt;
vital signals can be identified from the HRV and therefore it is often used as&lt;br /&gt;
a health status indicator in different fields of medicine, e.g. neurology,&lt;br /&gt;
cardiac surgery, heart transplantation and many more. Typical HRV values of&lt;br /&gt;
healthy subjects are approximately $40$ ms for RRI and $2$ ms for RTI (see&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:beat&amp;quot;/&amp;gt;). Therefore it is important to detect the considered waves with at least $1$&lt;br /&gt;
ms accuracy for practical use. This paper deals with the detection of HRV in&lt;br /&gt;
ECG signal provided by a Wearable ECG Device (WECGD) that is paired with a&lt;br /&gt;
personal digital assistant (PDA) via Bluetooth Smart protocol. The WECGD, due&lt;br /&gt;
to the hardware limitations, only measures the signal, while the PDA takes care&lt;br /&gt;
of data visualization, basic analysis and transmission of the data to a more&lt;br /&gt;
powerful server for further analyses. In contrast to a standard ambulatory&lt;br /&gt;
$12$-channel ECG measurement, where trained personnel prepare and supervise the&lt;br /&gt;
measurement of subject at rest, the WECGD works on a single channel, the&lt;br /&gt;
subject is active, and since the WECGD is often placed by an untrained user, its&lt;br /&gt;
orientation might be random, resulting in an additional decrease of signal&lt;br /&gt;
quality.  In order to maintain several days of battery autonomy, the WECGD also&lt;br /&gt;
records the heart activity with significantly lower frequency and resolution in&lt;br /&gt;
comparison to ambulatory measurements.  All these factors render the standard&lt;br /&gt;
ECG analysis algorithms ineffective. In this paper we analyse a possible local&lt;br /&gt;
algorithm, i.e. one requiring only a short history of measurement data, for the detection&lt;br /&gt;
of heart rate variability with $1$ ms precision from a signal recorded at $120$ Hz.&lt;br /&gt;
&lt;br /&gt;
=Detection method=&lt;br /&gt;
&lt;br /&gt;
In order to evaluate the HRV, the ''characteristic point'' of each heart&lt;br /&gt;
beat has to be detected in the signal, which is provided as values of electric&lt;br /&gt;
potential sampled at a frequency of $120$ Hz. Since the HRV is computed from&lt;br /&gt;
differences of consecutive characteristic points, the choice of the&lt;br /&gt;
characteristic point does not play any role, as long as it is the same in every&lt;br /&gt;
beat. In this work we choose to characterise the beat by the minimal first&lt;br /&gt;
derivative; in other words, we seek the points in the signal with the most&lt;br /&gt;
violent drop in electric potential (&amp;lt;xr id=&amp;quot;fig:beat&amp;quot;/&amp;gt;) that occurs between the R and S&lt;br /&gt;
peaks.&lt;br /&gt;
&lt;br /&gt;
The detection method is separated into two stages, global and local. The&lt;br /&gt;
goal of the global stage is to approximately detect the characteristic point,&lt;br /&gt;
while the local stage serves as a fine precision detection, enabling us to&lt;br /&gt;
detect HRV with much higher accuracy.&lt;br /&gt;
&lt;br /&gt;
==Coarse global search==&lt;br /&gt;
In the first step the algorithm finds the minimal first derivative of a given signal&lt;br /&gt;
at sample rate accuracy, i.e. $\frac{1}{\nu}$. The global search method is next to&lt;br /&gt;
trivial: the algorithm simply travels along the signal, calculating the&lt;br /&gt;
discrete derivative and storing the position of the minimal value found so far.&lt;br /&gt;
Since the points are sampled equidistantly, minimizing $\frac{\Delta y}{\Delta&lt;br /&gt;
t}$ is equivalent to minimizing $\Delta y$. The middle of the interval where the&lt;br /&gt;
largest drop was detected is taken as the global guess $t_G$. The results&lt;br /&gt;
of the global search are presented in &amp;lt;xr id=&amp;quot;fig:global&amp;quot;/&amp;gt;.&lt;br /&gt;
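For a single buffered beat, the coarse search can be sketched as follows (a minimal illustration; the function name and the sample-unit return value are our own simplifications):&lt;br /&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal sketch of the coarse search over one buffered beat: scan the
// signal, track the index i minimizing y[i+1] - y[i] (the largest drop),
// and return the midpoint of that sampling interval as the global guess
// t_G, expressed in sample units.
double coarse_search(const std::vector<double>& y) {
    std::size_t best = 0;
    for (std::size_t i = 1; i + 1 < y.size(); ++i)
        if (y[i + 1] - y[i] < y[best + 1] - y[best]) best = i;
    return best + 0.5;  // middle of the steepest interval
}
```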
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:global&amp;quot;&amp;gt;&lt;br /&gt;
[[File:global.png|600px|thumb|upright=2|alt= Beat graph|&amp;lt;caption&amp;gt;Global search detection of two beats.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Fine local search==&lt;br /&gt;
The global search provides only coarse positions of the characteristic points,&lt;br /&gt;
limited to the sample points. To push the accuracy beyond $1/\nu$, the signal has to be&lt;br /&gt;
represented also between the sample points. A monomial approximation function&lt;br /&gt;
based on the Moving Least Squares approach is introduced for&lt;br /&gt;
that purpose.&lt;br /&gt;
&lt;br /&gt;
The value of the electric potential at an arbitrary time $t_0$ is approximated.&lt;br /&gt;
Denote the vector of $n$ known values near $t_0$ by $\boldsymbol{f}$ (called&lt;br /&gt;
the &amp;lt;i&amp;gt;support&amp;lt;/i&amp;gt;), and the times at which they were measured by&lt;br /&gt;
$\boldsymbol{t}$.  The approximation $\hat{f}$ of $\boldsymbol{f}$ is introduced as a linear&lt;br /&gt;
combination of $m$ generally arbitrary basis functions $(b_j)_{j=1}^m$;&lt;br /&gt;
however, in this work only monomials are considered.&lt;br /&gt;
\[\hat{f} = \sum_{j=1}^m\alpha_jb_j \]&lt;br /&gt;
&lt;br /&gt;
The most widely used approach to solving the above problem and finding the appropriate&lt;br /&gt;
$\hat{f}$ is to minimize the weighted 2-norm of the error, also known as the&lt;br /&gt;
Weighted Least Squares (WLS) method:&lt;br /&gt;
\[  \|\boldsymbol{f} - \hat{f}(\boldsymbol{t})\|_w^2 = \sum_{i=1}^n (f_i -\hat{f}(t_i))^2 w(t_i),  \]&lt;br /&gt;
where $w$ is a nonnegative weight function.&lt;br /&gt;
&lt;br /&gt;
The only unknown quantities are the $m$ coefficients $\boldsymbol{\alpha}$ of the linear&lt;br /&gt;
combination, which can be expressed as the solution of an overdetermined linear&lt;br /&gt;
system $W\!B\boldsymbol{\alpha} = W\!\boldsymbol{f}$, where $W$ is the $n\times n$ diagonal&lt;br /&gt;
weight matrix, $W_{ii} = \sqrt{w(t_i)}$, and $B$ is the $n\times m$ collocation&lt;br /&gt;
matrix, $B_{ij} = b_j(t_i)$. There are different approaches to finding the&lt;br /&gt;
solution: the fastest, but least stable and accurate, is to solve the&lt;br /&gt;
normal system $B^\mathsf{T} W^\mathsf{T} WB\boldsymbol{\alpha} = B^\mathsf{T} W^\mathsf{T} W\boldsymbol{f}$; a more&lt;br /&gt;
expensive but more stable approach uses QR decomposition; and the most&lt;br /&gt;
expensive and most stable uses SVD&lt;br /&gt;
decomposition. The resulting vector $\boldsymbol{\alpha}$ is then&lt;br /&gt;
used to calculate $\hat{f}(t)$ for any given $t$. The derivatives are&lt;br /&gt;
approximated simply by differentiating the approximating function, $\hat{f}' =&lt;br /&gt;
\sum_{j=1}^m\alpha_jb_j'$.&lt;br /&gt;
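For illustration, a WLS fit with a monomial basis can be sketched by solving the (small) normal system with Gaussian elimination; as noted above this is the least stable of the listed options, and it is used here only to keep the sketch short (the function name and argument layout are ours):&lt;br /&gt;

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Illustrative WLS fit with monomial basis b_j(t) = t^(j-1): build the
// normal system B^T W^T W B a = B^T W^T W f (W^T W = diag(w(t_i))) and
// solve the m-by-m system by Gaussian elimination with partial pivoting.
std::vector<double> wls_fit(const std::vector<double>& t,
                            const std::vector<double>& f,
                            const std::vector<double>& w, int m) {
    int n = static_cast<int>(t.size());
    std::vector<std::vector<double>> A(m, std::vector<double>(m, 0.0));
    std::vector<double> b(m, 0.0);
    for (int i = 0; i < n; ++i) {
        std::vector<double> basis(m);
        double p = 1.0;
        for (int j = 0; j < m; ++j) { basis[j] = p; p *= t[i]; }
        for (int j = 0; j < m; ++j) {
            b[j] += w[i] * basis[j] * f[i];
            for (int k = 0; k < m; ++k)
                A[j][k] += w[i] * basis[j] * basis[k];
        }
    }
    // Forward elimination with partial pivoting.
    for (int c = 0; c < m; ++c) {
        int piv = c;
        for (int r = c + 1; r < m; ++r)
            if (std::fabs(A[r][c]) > std::fabs(A[piv][c])) piv = r;
        std::swap(A[c], A[piv]);
        std::swap(b[c], b[piv]);
        for (int r = c + 1; r < m; ++r) {
            double q = A[r][c] / A[c][c];
            for (int k = c; k < m; ++k) A[r][k] -= q * A[c][k];
            b[r] -= q * b[c];
        }
    }
    // Back substitution yields the coefficients alpha.
    std::vector<double> a(m);
    for (int c = m - 1; c >= 0; --c) {
        a[c] = b[c];
        for (int k = c + 1; k < m; ++k) a[c] -= A[c][k] * a[k];
        a[c] /= A[c][c];
    }
    return a;
}
```

With data generated exactly by a polynomial of the fitted order, the sketch recovers the coefficients exactly, which is a convenient sanity check.&lt;br /&gt;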
&lt;br /&gt;
The WLS approximation weights the influence of the support points using the weight&lt;br /&gt;
function $w$. Usually, the weight is chosen so that points closest to $t_0$ are&lt;br /&gt;
more important in the norm than nodes far away. Naturally, such an&lt;br /&gt;
approximation is valid only as long as the evaluation point is close to&lt;br /&gt;
$t_0$.&lt;br /&gt;
A more general approach is the [[Moving Least Squares (MLS)|Moving Least Squares (MLS)]] approximation, where the&lt;br /&gt;
coefficients $\boldsymbol{\alpha}$ are no longer spatially independent, but are recomputed&lt;br /&gt;
for each evaluation point. Naturally, such an approach is considerably more expensive,&lt;br /&gt;
but also more precise. A comparison of both methods, i.e. WLS and MLS, is shown&lt;br /&gt;
in &amp;lt;xr id=&amp;quot;fig:mlswlsHeartvar&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:mlswlsHeartvar&amp;quot;&amp;gt;&lt;br /&gt;
[[File:mlswlsHeartvar.png|600px|thumb|upright=2|alt= Beat graph|&amp;lt;caption&amp;gt;MLS and WLS approximation of a heartbeat-like function&lt;br /&gt;
$ f(x) = \frac{\sin x}{x}\frac{\left| x+8\right| - \left| x-5\right|  +26&lt;br /&gt;
}{13 ((\frac{x-1}{7})^4+1)}+\frac{1}{10} $,&lt;br /&gt;
with measurements taken at points  $ \{-14, \ldots,  24\} $.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The task of finding the minimal value of the first derivative is equivalent to&lt;br /&gt;
the task of finding the zero of the second derivative. This zero will be our local&lt;br /&gt;
approximation $t_L$ of the beat time, $\hat{f}''(t_L) = 0$.&lt;br /&gt;
Therefore an approximation function with a non-constant second derivative,&lt;br /&gt;
i.e. one with at least a&lt;br /&gt;
3rd order monomial basis, is constructed. The most straightforward&lt;br /&gt;
approach to finding its root is simple bisection. Bisection requires initial&lt;br /&gt;
low and high bounds, which can be estimated from the characteristic point $t_G$&lt;br /&gt;
provided by the global method. Using the fact that QRS intervals last&lt;br /&gt;
approximately $\Delta t_{\text{QRS}}= 0.1$ s, we can seek the root of the second&lt;br /&gt;
derivative on the interval $[t_G - \Delta t_{\text{QRS}}/2, t_G + \Delta&lt;br /&gt;
t_{\text{QRS}}/2]$; at the given sample rate this translates to a search&lt;br /&gt;
interval of two sample points away from $t_G$ in each direction.&lt;br /&gt;
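The refinement step can be sketched as follows (an illustrative fragment; it assumes the monomial coefficients from the fit and a sign change of $\hat{f}''$ on the given interval, and the function names are ours):&lt;br /&gt;

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Second derivative of the fitted polynomial: with monomial basis
// b_j(t) = t^j, f''(t) = sum_j a[j] * j * (j-1) * t^(j-2).
double second_derivative(const std::vector<double>& a, double t) {
    double s = 0.0;
    for (std::size_t j = 2; j < a.size(); ++j)
        s += a[j] * j * (j - 1) * std::pow(t, static_cast<double>(j - 2));
    return s;
}

// Bisection for the root of f'' on [lo, hi]; assumes f'' changes sign there.
double bisect_root(const std::vector<double>& a, double lo, double hi,
                   double eps = 1e-10) {
    double flo = second_derivative(a, lo);
    while (hi - lo > eps) {
        double mid = 0.5 * (lo + hi);
        double fm = second_derivative(a, mid);
        if ((flo < 0) == (fm < 0)) { lo = mid; flo = fm; }  // same sign: move lo
        else hi = mid;
    }
    return 0.5 * (lo + hi);  // local approximation t_L
}
```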
&lt;br /&gt;
==HRV calculation and error estimation==&lt;br /&gt;
Given a sampled heartbeat, the fine local search produces the vector of $\ell+1$ detected times&lt;br /&gt;
$\boldsymbol{t_L} := (t_{L,i})_{i=1}^{\ell+1}$ of the RS slopes. Their successive&lt;br /&gt;
differences form the vector $\boldsymbol{\hat{r}}$ of detected beat to beat times,&lt;br /&gt;
i.e. the durations of the RR intervals.&lt;br /&gt;
\[ \boldsymbol{\hat{r}} = (\hat{r}_{i})_{i=1}^\ell, \quad \hat{r}_i = t_{L,i+1} - t_{L,i} \]&lt;br /&gt;
Let $\boldsymbol{r}$ be the vector of (usually unknown) actual beat to beat times. Then the&lt;br /&gt;
heart rate variability (HRV) $h$ is defined as&lt;br /&gt;
\[ h := \text{std}(\boldsymbol{r}) = \sqrt{\frac{1}{\ell}\sum_{i=1}^{\ell} (r_i - \bar{r})^2}, \]&lt;br /&gt;
where $\bar{r}$ stands for the average beat to beat time, $\bar{r} =&lt;br /&gt;
\sum_{i=1}^\ell r_i / \ell$. The HRV estimation $\hat{h}$ is calculated as the&lt;br /&gt;
standard deviation of the detected times $\boldsymbol{\hat{r}}$.&lt;br /&gt;
&lt;br /&gt;
In the following analyses the actual vector $\boldsymbol{r}$ will be known, since the&lt;br /&gt;
synthesized heartbeat will be analysed. The most obvious error measures are the&lt;br /&gt;
absolute error of HRV, $e_{h} = |\hat{h} - h|$ and the absolute error of the&lt;br /&gt;
average heart beat $e_{\bar{r}} = |\bar{\hat{r}} - \bar{r}|$.  Using the vector&lt;br /&gt;
of errors $\boldsymbol{e} = |\boldsymbol{r} -\boldsymbol{\hat{r}}|$ the average error $e_a = \sum e_i&lt;br /&gt;
/ \ell$ and the maximal error $e_M = \max(\boldsymbol{e})$ can be assessed.&lt;br /&gt;
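These definitions translate directly into code; a minimal sketch (with our own helper names) of the RRI and HRV computation:&lt;br /&gt;

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// RR intervals: successive differences of the detected times t_L.
std::vector<double> rr_intervals(const std::vector<double>& tL) {
    std::vector<double> r;
    for (std::size_t i = 1; i < tL.size(); ++i)
        r.push_back(tL[i] - tL[i - 1]);
    return r;
}

// HRV as defined above: the (population) standard deviation of the
// beat-to-beat times, h = std(r).
double hrv(const std::vector<double>& r) {
    double mean = 0.0;
    for (double v : r) mean += v;
    mean /= r.size();
    double var = 0.0;
    for (double v : r) var += (v - mean) * (v - mean);
    return std::sqrt(var / r.size());
}
```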
&lt;br /&gt;
=Results and discussion=&lt;br /&gt;
&lt;br /&gt;
==Approximation set-up==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:results&amp;quot;&amp;gt;&lt;br /&gt;
[[File:results.png|600px|thumb|upright=2|alt= Beat graph|&amp;lt;caption&amp;gt; Subfigures $\begin{bmatrix} a &amp;amp; b \\ c &amp;amp; d \end{bmatrix}$. &amp;lt;br&amp;gt;&lt;br /&gt;
Subfigure a. A sufficiently small support implies interpolation,&lt;br /&gt;
making the weight function useless and MLS equal to WLS. &amp;lt;br&amp;gt;&lt;br /&gt;
Subfigure b. MLS and WLS differ when approximating with a low order polynomial. &amp;lt;br&amp;gt;&lt;br /&gt;
Subfigure c. MLS and WLS match when approximating with a high order polynomial.&amp;lt;br&amp;gt;&lt;br /&gt;
Subfigure d. Expected bad behaviour with too many support points and a low order approximation.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the first step the free approximation parameters, i.e. the weight function, support&lt;br /&gt;
size, and number of basis functions, have to be assessed. A single heartbeat is&lt;br /&gt;
extracted and approximated with all possible combinations of basis functions&lt;br /&gt;
of orders from $2$ to $10$ and symmetric supports of sizes from $3$ to $15$, using both&lt;br /&gt;
WLS and MLS. The global algorithm described in the previous section was used&lt;br /&gt;
to produce the initial guesses. For demonstration, four sample cases are&lt;br /&gt;
presented. The weight function was the same in all four cases, a Gaussian&lt;br /&gt;
distribution with $\mu = t_G$ and $\sigma = m/4$, which ensures that all&lt;br /&gt;
support points are taken into account, but the central ones are more important.&lt;br /&gt;
&lt;br /&gt;
The simplest case is when the support size equals the number of basis&lt;br /&gt;
functions, resulting in interpolation. In this case the weight function is&lt;br /&gt;
not important, making WLS and MLS entirely equivalent, as seen in&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:results&amp;quot;/&amp;gt;a.&lt;br /&gt;
&lt;br /&gt;
In the case of a small support and a low order of the monomial basis, WLS performs worse&lt;br /&gt;
than MLS and the approaches differ significantly. However, as we increase the&lt;br /&gt;
order of the polynomial basis, the difference within the bisection interval&lt;br /&gt;
becomes negligible. This transition can be observed in &amp;lt;xr id=&amp;quot;fig:results&amp;quot;/&amp;gt;b and &amp;lt;xr id=&amp;quot;fig:results&amp;quot;/&amp;gt;c.&lt;br /&gt;
&lt;br /&gt;
As predicted, the support size is important. Both methods perform badly when&lt;br /&gt;
too many surrounding measurements are taken into account while still using a low&lt;br /&gt;
order polynomial approximation. Note that in &amp;lt;xr id=&amp;quot;fig:results&amp;quot;/&amp;gt;d the initial&lt;br /&gt;
guess is barely improved and the beat shape is skewed away from the RS drop.&lt;br /&gt;
&lt;br /&gt;
The conclusion is that, for our purposes, MLS approximation is unnecessary, as&lt;br /&gt;
WLS provides good enough results when used appropriately. Further analysis&lt;br /&gt;
to determine the best choice of the parameters $m$ and $n$ is presented later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computational complexity==&lt;br /&gt;
The presented algorithm is a streaming algorithm, requiring a buffer to store&lt;br /&gt;
the current beat in the signal. Let $b$ be the number of measurements per&lt;br /&gt;
beat, stored in a buffer of length $b$. The global part of the algorithm&lt;br /&gt;
performs $O(b)$ operations, being a simple linear search.  The local part is&lt;br /&gt;
more expensive. First an $n \times m$ matrix is constructed in $O(mn)$ operations&lt;br /&gt;
and the right hand side vector is copied from the buffer.  The system is then&lt;br /&gt;
solved using SVD decomposition in $O(mn^2+n^3)$.  Note that as $m =&lt;br /&gt;
O(n)$, this step takes $O(n^3)$. The minimal first derivative is found&lt;br /&gt;
using bisection. To achieve tolerance $\varepsilon$,&lt;br /&gt;
$\lceil\log_2(1/\varepsilon)\rceil$ function evaluations are needed, each&lt;br /&gt;
costing $O(m)$ operations. The total time complexity is therefore&lt;br /&gt;
$O(b + n^3 + m\log(1/\varepsilon))$. Note that using MLS would require&lt;br /&gt;
$O(n^3)$ operations for each function evaluation, resulting in a significantly worse&lt;br /&gt;
time complexity of $O(b+n^3\log_2(1/\varepsilon))$. The calculation of the&lt;br /&gt;
average and variance is done later, after the desired amount of signal has&lt;br /&gt;
been analysed.&lt;br /&gt;
&lt;br /&gt;
In practice the algorithm executes very fast: using typical values $b =&lt;br /&gt;
150$, $m=6$, $n=11$ and $\varepsilon = 10^{-10}$, it takes&lt;br /&gt;
approximately $0.27$ s to analyze $1000$ heartbeats ($\approx 10^5$ data&lt;br /&gt;
points). The algorithm was compiled from C++ source code using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; g++ 5.3.0&amp;lt;/syntaxhighlight&amp;gt; with the &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; -O2 &amp;lt;/syntaxhighlight&amp;gt; flag and run on&lt;br /&gt;
an &amp;lt;tt&amp;gt;Intel(R) Core(TM) i7-4700MQ&amp;lt;/tt&amp;gt; processor.&lt;br /&gt;
&lt;br /&gt;
==Simulated heartbeat with known variability==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:variabilityscatter&amp;quot;&amp;gt;&lt;br /&gt;
[[File:variabilityscatter.png|600px|thumb|upright=2|alt= Generated beat to beat times and their global detection|&amp;lt;caption&amp;gt;Generated beat to beat times and their global detection.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first set of tests for the presented method was performed using a simulated&lt;br /&gt;
heartbeat. A single real heartbeat was taken and then replicated a thousand&lt;br /&gt;
times, each time shifted by a random offset $T$, distributed normally&lt;br /&gt;
around zero, $T \sim \mathcal{N}(0, \sigma^2)$, with $\sigma =&lt;br /&gt;
\frac{1}{2\nu} = \frac{1}{2} \Delta t$.  This means that a sizeable fraction of the&lt;br /&gt;
measurements will be more than $\Delta t$ apart, a difference that must be&lt;br /&gt;
detected by the global search for the method to work.  However, around half of the&lt;br /&gt;
measurements are less than $\Delta t$ apart, providing suitable ground for&lt;br /&gt;
testing the precision of the local search. At the given sample frequency,&lt;br /&gt;
$\sigma$ equals $4.167$ ms.&lt;br /&gt;
&lt;br /&gt;
The generated, coarsely detected and finely detected beat to beat times&lt;br /&gt;
are presented in &amp;lt;xr id=&amp;quot;fig:variabilityscatter&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Beat to beat time precision is significantly improved by the local search.&lt;br /&gt;
As seen in &amp;lt;xr id=&amp;quot;fig:variabilityhist&amp;quot;/&amp;gt;, the distributions of generated and detected&lt;br /&gt;
heart beats match very well.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:variabilityhist&amp;quot;&amp;gt;&lt;br /&gt;
[[File:variabilityhist.png|600px|thumb|center|upright=2|alt= Generated RRI times and their global detection|&amp;lt;caption&amp;gt;Generated RRI times and their global detection.&lt;br /&gt;
The middle two coarse detection columns continue off the chart, but are not shown completely for clarity.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Results of RRI and HRV detection by global and local search are presented in&lt;br /&gt;
Table 1. The generated times were taken as precise and the&lt;br /&gt;
algorithm was run to produce global and local approximations. Then the average&lt;br /&gt;
RRI time and HRV were calculated for each data set separately.  The average RRI&lt;br /&gt;
time is estimated very well with both methods, but the precision of the global&lt;br /&gt;
method is not satisfactory when measuring heart rate variability. The precision&lt;br /&gt;
is significantly improved using the local search. A chart showing the average&lt;br /&gt;
error of the detected times is shown in &amp;lt;xr id=&amp;quot;fig:allerrs&amp;quot;/&amp;gt;. It can be seen&lt;br /&gt;
that MLS performs better on average, but WLS is very close, and this small loss of&lt;br /&gt;
precision is a reasonable tradeoff for the lower computational cost.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:allerrs&amp;quot;&amp;gt;&lt;br /&gt;
[[File:allerrs.png|600px|thumb|center|upright=2|alt= Comparison of WLS and MLS errors|&amp;lt;caption&amp;gt; Comparison of WLS and MLS errors using different orders and&lt;br /&gt;
support sizes.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The red values in &amp;lt;xr id=&amp;quot;fig:allerrs&amp;quot;/&amp;gt; indicate the invalid region, where there are more basis functions than support points.&lt;br /&gt;
Both MLS and WLS have the same region of validity and are precise in the predicted regime.&lt;br /&gt;
For very high order approximations the condition number of the matrix becomes critical&lt;br /&gt;
and the method becomes unstable, which explains the loss of precision for orders larger than $12$.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Results and errors of the RRI and HRV detection.&lt;br /&gt;
|-&lt;br /&gt;
! quantity [s]&lt;br /&gt;
! generated&lt;br /&gt;
! coarse&lt;br /&gt;
! fine    &lt;br /&gt;
|-&lt;br /&gt;
|$\bar{r} $||$0.861136$||$0.861139$||$0.861136$ &lt;br /&gt;
|-&lt;br /&gt;
|$e_{\bar{r}}$||$0$||$3.34 \cdot 10^{-6}$||$2.83 \cdot 10^{-8}$ &lt;br /&gt;
|-&lt;br /&gt;
|$h$||$0.004102$||$0.005324$||$0.004137$ &lt;br /&gt;
|-&lt;br /&gt;
|$e_h $||$0$||$0.001222$||$3.52 \cdot 10^{-5}$ &lt;br /&gt;
|-&lt;br /&gt;
|$e_a $||$0$||$0.002969$||$0.000263$ &lt;br /&gt;
|-&lt;br /&gt;
|$e_M $||$0$||$0.007778$||$0.000829$&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Allerrs.png&amp;diff=531</id>
		<title>File:Allerrs.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Allerrs.png&amp;diff=531"/>
				<updated>2016-11-07T17:38:14Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Heart_rate_variability_detection&amp;diff=530</id>
		<title>Heart rate variability detection</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Heart_rate_variability_detection&amp;diff=530"/>
				<updated>2016-11-07T17:36:38Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We used MLS and WLS approximation to extract heart rate variability from a wearable ECG sensor.&lt;br /&gt;
[[:File:heartratevar.pdf|Full paper available for download here.]]&lt;br /&gt;
[[:File:heartratevar_pres.pdf | Presentation available for download here.]]&lt;br /&gt;
&lt;br /&gt;
The code can be found in &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; /EKG/detect.cpp &amp;lt;/syntaxhighlight&amp;gt;.&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:real&amp;quot;/&amp;gt; shows how we detected beat to beat times from an actual heartbeat.&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:real&amp;quot;&amp;gt;&lt;br /&gt;
[[File:real.png|600px|thumb|upright=2|alt=actual heartbeat detected beat to beat|&amp;lt;caption&amp;gt; We detected beat to beat times from an actual heartbeat in this way. &amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A slightly abridged version of the paper is presented below.&lt;br /&gt;
&lt;br /&gt;
=Detection of heart rate variability&amp;lt;br&amp;gt;from a wearable differential ECG device=&lt;br /&gt;
&lt;br /&gt;
[mailto:jure.slak@student.fmf.uni-lj.si Jure Slak], [mailto:gregor.kosec@ijs.si Gregor Kosec],&lt;br /&gt;
Jožef Stefan Institute, Department of Communication Systems, Ljubljana.&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
The precise heart rate variability is extracted from an ECG signal&lt;br /&gt;
measured by a wearable sensor that constantly records the heart activity of an&lt;br /&gt;
active subject for several days. Due to the limited resources of the wearable&lt;br /&gt;
ECG device the signal can only be sampled at a relatively low frequency of&lt;br /&gt;
approximately $100$ Hz. Besides the low sampling rate, the signal from a wearable sensor is&lt;br /&gt;
also burdened with much more noise than the standard $12$-channel ambulatory&lt;br /&gt;
ECG, mostly due to the design of the device, i.e. the electrodes are&lt;br /&gt;
positioned relatively close to each other, and the fact that the subject is&lt;br /&gt;
active during the measurements. To extract heart rate variability with $1$ ms&lt;br /&gt;
precision, i.e. $10$ times finer than the sampling period of the measured&lt;br /&gt;
signal, a two-step algorithm is proposed. In the first step an approximate global&lt;br /&gt;
search is performed, roughly determining the point of interest, followed by a&lt;br /&gt;
local search based on the Moving Least Squares approximation to refine the&lt;br /&gt;
result. The methodology is evaluated in terms of accuracy, noise sensitivity,&lt;br /&gt;
and computational complexity. All tests are performed on simulated as well as&lt;br /&gt;
measured data. It is demonstrated that the proposed algorithm provides&lt;br /&gt;
accurate results at a low computational cost and it is robust enough for&lt;br /&gt;
practical application.&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:beat&amp;quot;&amp;gt;&lt;br /&gt;
[[File:beat.png|600px|thumb|upright=2|alt= Beat graph|&amp;lt;caption&amp;gt;Beat to beat time between two characteristic points.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is well known that the morphology of ECG signals changes from beat to beat as&lt;br /&gt;
a consequence of physical activity, sensations, emotions, breathing, etc. of the&lt;br /&gt;
subject. The most straightforward measure of these changes is&lt;br /&gt;
the heart rate variability (HRV), i.e. small variations of beat duration. HRV&lt;br /&gt;
characterizes the timing of heart cell repolarization and depolarization&lt;br /&gt;
processes. The HRV is typically determined by measuring the intervals between&lt;br /&gt;
two consecutive R-waves (RRI) or intervals between R and T waves (RTI). Several&lt;br /&gt;
vital signals can be identified from the HRV and therefore it is often used as&lt;br /&gt;
a health status indicator in different fields of medicine, e.g. neurology,&lt;br /&gt;
cardiac surgery, heart transplantation and many more. Typical HRV values of&lt;br /&gt;
healthy subjects are approximately $40$ ms for RRI and $2$ ms for RTI (see&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:beat&amp;quot;/&amp;gt;). Therefore it is important to detect the considered waves with at least $1$&lt;br /&gt;
ms accuracy for practical use.  This paper deals with the detection of HRV in&lt;br /&gt;
ECG signal provided by a Wearable ECG Device (WECGD) that is paired with a&lt;br /&gt;
personal digital assistant (PDA) via Bluetooth Smart protocol. The WECGD, due&lt;br /&gt;
to the hardware limitations, only measures the signal, while the PDA takes care&lt;br /&gt;
of data visualization, basic analysis and transmission of the data to a more&lt;br /&gt;
powerful server for further analyses. In contrast to a standard ambulatory&lt;br /&gt;
$12$-channel ECG measurement, where trained personnel prepare and supervise the&lt;br /&gt;
measurement of a subject at rest, the WECGD works on a single channel, the&lt;br /&gt;
subject is active and since the WECGD is often placed by an untrained user its&lt;br /&gt;
orientation might be random, resulting in additional decrease of signal&lt;br /&gt;
quality.  In order to maintain several days of battery autonomy, the WECGD also&lt;br /&gt;
records the heart activity at a significantly lower frequency and resolution in&lt;br /&gt;
comparison to ambulatory measurements.  All these factors render the standard&lt;br /&gt;
ECG analysis algorithms ineffective. In this paper we analyse a possible local&lt;br /&gt;
algorithm, i.e. one requiring only a short history of measurement data, for detection&lt;br /&gt;
of heart rate variability with $1$ ms precision from a signal recorded at $120$ Hz.&lt;br /&gt;
&lt;br /&gt;
=Detection method=&lt;br /&gt;
&lt;br /&gt;
In order to evaluate the HRV, the ''characteristic point'' of each heart&lt;br /&gt;
beat has to be detected in the signal, which is provided as values of electric&lt;br /&gt;
potential sampled at a frequency of $120$ Hz. Since the HRV is computed from&lt;br /&gt;
differences of consecutive characteristic points, the choice of the&lt;br /&gt;
characteristic point does not play any role, as long as it is the same in every&lt;br /&gt;
beat. In this work we choose to characterise the beat by the minimal first&lt;br /&gt;
derivative; in other words, we seek the points in the signal with the steepest&lt;br /&gt;
drop in electric potential (&amp;lt;xr id=&amp;quot;fig:beat&amp;quot;/&amp;gt;), which occurs between the R and S&lt;br /&gt;
peaks.&lt;br /&gt;
&lt;br /&gt;
The detection method is separated into two stages, namely global and local. The&lt;br /&gt;
goal of the global method is to approximately detect the characteristic point,&lt;br /&gt;
while the local method serves as a fine precision detection, enabling us to&lt;br /&gt;
detect HRV with much higher accuracy.&lt;br /&gt;
&lt;br /&gt;
==Coarse global search==&lt;br /&gt;
In the first step the algorithm finds the minimal first derivative of a given signal&lt;br /&gt;
to sample rate accuracy, i.e. $\frac{1}{\nu}$. The global search method is next to&lt;br /&gt;
trivial. The algorithm simply travels along the signal, calculating the&lt;br /&gt;
discrete derivative and storing the position of minimal values found so far.&lt;br /&gt;
Since the points are sampled equidistantly, minimizing $\frac{\Delta y}{\Delta&lt;br /&gt;
t}$ is equivalent to minimizing $\Delta y$. The middle of the interval where the&lt;br /&gt;
largest drop was detected is taken as the global guess $t_G$. The results&lt;br /&gt;
of the global search are presented in &amp;lt;xr id=&amp;quot;fig:global&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
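The scan described above can be sketched in a few lines of C++. This is a minimal illustration under our own naming (the function coarse_global_search is hypothetical), not the implementation from the repository:

```cpp
#include <cstddef>
#include <vector>

// Coarse global search sketch: scan the equidistantly sampled signal and find
// the index minimizing the discrete difference y[i+1] - y[i]. Since the points
// are equidistant, minimizing dy/dt is equivalent to minimizing dy. The global
// guess t_G is the middle of the steepest-drop interval.
double coarse_global_search(const std::vector<double>& y, double dt) {
    std::size_t best = 0;
    double min_drop = y[1] - y[0];
    for (std::size_t i = 1; i + 1 < y.size(); ++i) {
        double drop = y[i + 1] - y[i];
        if (drop < min_drop) { min_drop = drop; best = i; }
    }
    return (best + 0.5) * dt;  // middle of the interval with the largest drop
}
```

The scan keeps only the running minimum, so a single pass over the buffer suffices, matching the $O(b)$ cost stated later.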
&amp;lt;figure id=&amp;quot;fig:global&amp;quot;&amp;gt;&lt;br /&gt;
[[File:global.png|600px|thumb|upright=2|alt= Beat graph|&amp;lt;caption&amp;gt;Global search detection of two beats.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Fine local search==&lt;br /&gt;
The global search provides only coarse positions of the characteristic points,&lt;br /&gt;
limited to the sample points. To push the accuracy beyond $1/\nu$, the signal has to be&lt;br /&gt;
represented between the sample points as well. A monomial approximation function&lt;br /&gt;
based on a Moving Least Squares approach is introduced for&lt;br /&gt;
that purpose.&lt;br /&gt;
&lt;br /&gt;
The value of the electrical potential at arbitrary time $t_0$ is approximated.&lt;br /&gt;
Denote the vector of $n$ known values near $t_0$ by $\boldsymbol{f}$ (called&lt;br /&gt;
&amp;lt;i&amp;gt;support&amp;lt;/i&amp;gt;), and the times at which they were measured by&lt;br /&gt;
$\boldsymbol{t}$.  The approximation $\hat{f}$ of $\boldsymbol{f}$ is introduced as a linear&lt;br /&gt;
combination of $m$, in general arbitrary, basis functions $(b_j)_{j=1}^m$;&lt;br /&gt;
however, in this work only monomials are considered:&lt;br /&gt;
\[\hat{f} = \sum_{j=1}^m\alpha_jb_j \]&lt;br /&gt;
&lt;br /&gt;
The most widely used approach to solve above problem and find the appropriate&lt;br /&gt;
$\hat{f}$ is to minimize the weighted 2-norm of the error, also known as the&lt;br /&gt;
Weighted Least Squares (WLS) method.&lt;br /&gt;
\[  \|\boldsymbol{f} - \hat{f}(\boldsymbol{t})\|_w^2 = \sum_{i=1}^n (f_i -\hat{f}(t_i))^2 w(t_i),  \]&lt;br /&gt;
where $w$ is a nonnegative weight function.&lt;br /&gt;
&lt;br /&gt;
The only unknown quantities are the $m$ coefficients $\boldsymbol{\alpha}$ of the linear&lt;br /&gt;
combination, which can be expressed as a solution of an overdetermined linear&lt;br /&gt;
system $W\!B\boldsymbol{\alpha} = W\!\boldsymbol{f}$, where $W$ is the $n\times n$ diagonal&lt;br /&gt;
weight matrix, $W_{ii} = \sqrt{w(t_i)}$ and $B$ is the $n\times m$ collocation&lt;br /&gt;
matrix, $B_{ij} = b_j(t_i)$. There are different approaches towards finding the&lt;br /&gt;
solution.  The fastest, but least stable and accurate, is to solve the&lt;br /&gt;
normal system $B^\mathsf{T} W^\mathsf{T} WB\boldsymbol{\alpha} = B^\mathsf{T} W^\mathsf{T} W\boldsymbol{f}$; a more&lt;br /&gt;
expensive but more stable option is QR decomposition; and the most&lt;br /&gt;
expensive and most stable is SVD&lt;br /&gt;
decomposition. The resulting vector $\boldsymbol{\alpha}$ is then&lt;br /&gt;
used to calculate $\hat{f}(t)$ for any given $t$. The derivatives are&lt;br /&gt;
approximated simply by differentiating the approximating function, $\hat{f}' =&lt;br /&gt;
\sum_{j=1}^m\alpha_jb_j'$.&lt;br /&gt;
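As an illustration, the WLS fit with a monomial basis can be sketched as below. For brevity this sketch solves the normal equations by Gaussian elimination, which, as noted above, is the fastest but least stable route; the name wls_fit and the nested-vector matrices are our own illustrative choices, not the paper's implementation:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// WLS sketch with monomial basis b_j(t) = t^(j-1): assemble the normal system
// B^T W^T W B alpha = B^T W^T W f (here w[i] plays the role of the squared
// diagonal weight) and solve the small m x m system directly.
std::vector<double> wls_fit(const std::vector<double>& t,
                            const std::vector<double>& f,
                            const std::vector<double>& w, int m) {
    int n = static_cast<int>(t.size());
    std::vector<std::vector<double>> A(m, std::vector<double>(m, 0.0));
    std::vector<double> rhs(m, 0.0);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j) {
            double bj = std::pow(t[i], j);
            rhs[j] += w[i] * bj * f[i];
            for (int k = 0; k < m; ++k) A[j][k] += w[i] * bj * std::pow(t[i], k);
        }
    // Gaussian elimination with partial pivoting on the m x m system.
    for (int c = 0; c < m; ++c) {
        int p = c;
        for (int r = c + 1; r < m; ++r)
            if (std::abs(A[r][c]) > std::abs(A[p][c])) p = r;
        std::swap(A[c], A[p]); std::swap(rhs[c], rhs[p]);
        for (int r = c + 1; r < m; ++r) {
            double fac = A[r][c] / A[c][c];
            for (int k = c; k < m; ++k) A[r][k] -= fac * A[c][k];
            rhs[r] -= fac * rhs[c];
        }
    }
    std::vector<double> alpha(m);
    for (int c = m - 1; c >= 0; --c) {
        double s = rhs[c];
        for (int k = c + 1; k < m; ++k) s -= A[c][k] * alpha[k];
        alpha[c] = s / A[c][c];
    }
    return alpha;  // hat f(t) = sum_j alpha_j t^j
}
```

A production implementation would replace the elimination step with the SVD route for the stability reasons given above.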
&lt;br /&gt;
The WLS approximation weights the influence of support points using the weight&lt;br /&gt;
function $w$. Usually the weight is chosen so that points closest to $t_0$ carry&lt;br /&gt;
more weight in the norm than nodes farther away. Naturally, such an&lt;br /&gt;
approximation is valid only as long as the evaluation point is close to&lt;br /&gt;
$t_0$.&lt;br /&gt;
A more general approach is a [[Moving Least Squares (MLS)|Moving Least Square (MLS)]] approximation, where&lt;br /&gt;
coefficients $\boldsymbol{\alpha}$ are no longer spatially independent, but are recomputed&lt;br /&gt;
for each evaluation point. Naturally, such an approach is considerably more expensive,&lt;br /&gt;
but also more precise. A comparison of both methods, i.e. WLS and MLS, is shown&lt;br /&gt;
in &amp;lt;xr id=&amp;quot;fig:mlswlsHeartvar&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:mlswlsHeartvar&amp;quot;&amp;gt;&lt;br /&gt;
[[File:mlswlsHeartvar.png|600px|thumb|upright=2|alt= Beat graph|&amp;lt;caption&amp;gt;MLS and WLS approximation of a heartbeat-like function&lt;br /&gt;
$ f(x) = \frac{\sin x}{x}\frac{\left| x+8\right| - \left| x-5\right|  +26&lt;br /&gt;
}{13 ((\frac{x-1}{7})^4+1)}+\frac{1}{10} $,&lt;br /&gt;
with measurements taken at points  $ \{-14, \ldots,  24\} $.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The task of finding the minimal value of the first derivative is equivalent to&lt;br /&gt;
the task of finding the zero of the second derivative. This zero will be our local&lt;br /&gt;
approximation $t_L$ of the beat time, $\hat{f}''(t_L) = 0$.&lt;br /&gt;
Therefore an approximation function with a non-constant second derivative,&lt;br /&gt;
i.e. an approximation function with at least&lt;br /&gt;
a 3rd order monomial basis, is constructed. The most straightforward&lt;br /&gt;
approach to finding its root is simple bisection. Bisection requires initial&lt;br /&gt;
low and high bounds, which can be estimated from the characteristic point $t_G$&lt;br /&gt;
provided by the global method. Using the fact that QRS intervals last&lt;br /&gt;
approximately $\Delta t_{\text{QRS}}= 0.1$ s, we can seek the root of the second&lt;br /&gt;
derivative on the interval $[t_G - \Delta t_{\text{QRS}}/2, t_G + \Delta&lt;br /&gt;
t_{\text{QRS}}/2]$; at the given sample rate this translates to a search&lt;br /&gt;
interval of two sample points away from $t_G$ in each direction.&lt;br /&gt;
&lt;br /&gt;
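The bisection step can be sketched as follows; this is a minimal illustration (the name bisect_zero is ours) assuming the second derivative changes sign exactly once on the bracketing interval:

```cpp
#include <functional>

// Bisection sketch: find the zero of the approximant's second derivative on
// [lo, hi] = [t_G - dtQRS/2, t_G + dtQRS/2]. Each halving costs one function
// evaluation, so ceil(log2((hi - lo)/eps)) evaluations reach tolerance eps.
double bisect_zero(const std::function<double(double)>& d2f,
                   double lo, double hi, double eps) {
    bool rising = d2f(hi) > 0;  // orient the bracket by the sign at the high end
    while (hi - lo > eps) {
        double mid = 0.5 * (lo + hi);
        if ((d2f(mid) > 0) == rising) hi = mid; else lo = mid;
    }
    return 0.5 * (lo + hi);
}
```

With a WLS approximant, d2f is cheap ($O(m)$ per evaluation) because the coefficients are fixed; with MLS they would be refitted at every midpoint, which is the cost difference quantified in the complexity section.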
==HRV calculation and error estimation==&lt;br /&gt;
Given a sampled heartbeat, the fine local search produces the vector of $\ell+1$ detected times&lt;br /&gt;
$\boldsymbol{t_L} := (t_{L,i})_{i=1}^{\ell+1}$ of the RS slopes. Their successive&lt;br /&gt;
differences represent a vector $\boldsymbol{\hat{r}}$ of detected beat to beat times,&lt;br /&gt;
the durations of RR intervals.&lt;br /&gt;
\[ \boldsymbol{\hat{r}} = (\hat{r}_{i})_{i=1}^\ell, \quad \hat{r}_i = t_{L,i+1} - t_{L,i} \]&lt;br /&gt;
Let $\boldsymbol{r}$ be the vector of (usually unknown) actual beat to beat times. Then the&lt;br /&gt;
heart rate variability (HRV) $h$ is defined as&lt;br /&gt;
\[ h := \text{std}(\boldsymbol{r}) = \sqrt{\frac{1}{\ell}\sum_{i=1}^{\ell} (r_i - \bar{r})^2}, \]&lt;br /&gt;
where $\bar{r}$ stands for the average beat to beat time, $\bar{r} =&lt;br /&gt;
\sum_{i=1}^\ell r_i / \ell$. The HRV estimation $\hat{h}$ is calculated as the&lt;br /&gt;
standard deviation of the detected times $\boldsymbol{\hat{r}}$.&lt;br /&gt;
&lt;br /&gt;
In the following analyses the actual vector $\boldsymbol{r}$ will be known, since the&lt;br /&gt;
synthesized heartbeat will be analysed. The most obvious error measures are the&lt;br /&gt;
absolute error of HRV, $e_{h} = |\hat{h} - h|$ and the absolute error of the&lt;br /&gt;
average heart beat $e_{\bar{r}} = |\bar{\hat{r}} - \bar{r}|$.  Using the vector&lt;br /&gt;
of errors $\boldsymbol{e} = |\boldsymbol{r} -\boldsymbol{\hat{r}}|$ the average error $e_a = \sum e_i&lt;br /&gt;
/ \ell$ and the maximal error $e_M = \max(\boldsymbol{e})$ can be assessed.&lt;br /&gt;
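The definitions above translate directly into code; a minimal sketch (the name hrv is ours):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// HRV sketch: successive differences of the detected times give the beat to
// beat times r_i; the HRV estimate h is their (population) standard deviation.
double hrv(const std::vector<double>& tL) {
    std::size_t ell = tL.size() - 1;
    std::vector<double> r(ell);
    double mean = 0.0;
    for (std::size_t i = 0; i < ell; ++i) { r[i] = tL[i + 1] - tL[i]; mean += r[i]; }
    mean /= ell;  // average beat to beat time r-bar
    double var = 0.0;
    for (double ri : r) var += (ri - mean) * (ri - mean);
    return std::sqrt(var / ell);  // h = std(r)
}
```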
&lt;br /&gt;
=Results and discussion=&lt;br /&gt;
&lt;br /&gt;
==Approximation set-up==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:results&amp;quot;&amp;gt;&lt;br /&gt;
[[File:results.png|600px|thumb|upright=2|alt= Beat graph|&amp;lt;caption&amp;gt; Subfigures are arranged as $\begin{bmatrix} a &amp;amp; b \\ c &amp;amp; d \end{bmatrix}$. &amp;lt;br&amp;gt;&lt;br /&gt;
Subfigure a. A sufficiently small support implies interpolation,&lt;br /&gt;
making the weight function irrelevant and MLS equal to WLS. &amp;lt;br&amp;gt;&lt;br /&gt;
Subfigure b. MLS and WLS differ when approximating with a low order polynomial. &amp;lt;br&amp;gt;&lt;br /&gt;
Subfigure c. MLS and WLS match when approximating with a high order polynomial.&amp;lt;br&amp;gt;&lt;br /&gt;
Subfigure d. Expected bad behaviour with too many support points and a low order approximation.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the first step the free parameters of the approximation, i.e. the weight function, support&lt;br /&gt;
size, and number of basis functions, have to be assessed. A single heartbeat is&lt;br /&gt;
extracted and approximated with all possible combinations of basis functions&lt;br /&gt;
with orders from $2$ to $10$ and symmetric supports of sizes from $3$ to $15$ using both&lt;br /&gt;
WLS and MLS. The global algorithm described in the previous section was used&lt;br /&gt;
to produce the initial guesses. For demonstration four sample cases are&lt;br /&gt;
presented. The weight function was the same in all four cases, a Gaussian&lt;br /&gt;
distribution with $\mu = t_G$ and $\sigma = m/4$, which makes sure that all&lt;br /&gt;
support points are taken into account, but the central ones are more important.&lt;br /&gt;
&lt;br /&gt;
The simplest case is when the support size is the same as the number of basis&lt;br /&gt;
functions, resulting in interpolation. In this case the weight function is&lt;br /&gt;
irrelevant, making WLS and MLS entirely equivalent, as seen in&lt;br /&gt;
&amp;lt;xr id=&amp;quot;fig:results&amp;quot;/&amp;gt;a.&lt;br /&gt;
&lt;br /&gt;
In the case of a small support and a low order monomial basis, WLS performs worse&lt;br /&gt;
than MLS and the approaches differ significantly. However, as we increase the&lt;br /&gt;
order of the polynomial basis the difference within the bisection interval&lt;br /&gt;
becomes negligible. This transition can be observed in &amp;lt;xr id=&amp;quot;fig:results&amp;quot;/&amp;gt;b and &amp;lt;xr id=&amp;quot;fig:results&amp;quot;/&amp;gt;c.&lt;br /&gt;
&lt;br /&gt;
As predicted, the support size is important. Both methods perform badly when&lt;br /&gt;
too many surrounding measurements are taken into account while still using a low&lt;br /&gt;
order polynomial approximation. Note that in &amp;lt;xr id=&amp;quot;fig:results&amp;quot;/&amp;gt;d the initial&lt;br /&gt;
guess is barely improved and the beat shape is skewed away from the RS drop.&lt;br /&gt;
&lt;br /&gt;
The conclusion is that for our purposes the MLS approximation is unnecessary, as&lt;br /&gt;
WLS provides good enough results when used appropriately. Further analysis&lt;br /&gt;
to determine the best choice of parameters $m$ and $n$ is presented later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Computational complexity==&lt;br /&gt;
The presented algorithm is a streaming algorithm, requiring a buffer to store&lt;br /&gt;
the current beat in the signal. Let $b$ be the number of measurements per&lt;br /&gt;
beat, stored in a buffer of length $b$. The global part of the algorithm&lt;br /&gt;
performs $O(b)$ operations, being a simple linear search.  The local part is&lt;br /&gt;
more expensive. First an $n \times m$ matrix is constructed in $O(mn)$&lt;br /&gt;
and the right hand side vector is copied from the buffer.  The system is then&lt;br /&gt;
solved using SVD decomposition in $O(mn^2+n^3)$.  Note that as $m =&lt;br /&gt;
O(n)$, this step takes $O(n^3)$. The minimal first derivative is found&lt;br /&gt;
using bisection. To achieve tolerance $\varepsilon$,&lt;br /&gt;
$\lceil\log_2(1/\varepsilon)\rceil$ function evaluations are needed, each&lt;br /&gt;
costing $O(m)$ operations. Total time complexity is therefore equal to&lt;br /&gt;
$O(b + n^3 + m\log(1/\varepsilon))$. Note that using MLS would require&lt;br /&gt;
$O(n^3)$ for each function evaluation, resulting in a significantly worse&lt;br /&gt;
time complexity of $O(b+n^3\log_2(1/\varepsilon))$. The calculation of the&lt;br /&gt;
average and variance is done later, after the desired amount of signal has&lt;br /&gt;
been analysed.&lt;br /&gt;
&lt;br /&gt;
In practice the algorithm executes very fast: using typical values $b =&lt;br /&gt;
150$, $m=6$, $n=11$ and $\varepsilon = 10^{-10}$, it takes&lt;br /&gt;
approximately $0.27$ s to analyse $1000$ heartbeats ($\approx 10^5$ data&lt;br /&gt;
points). The algorithm was compiled from C++ source code using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; g++ 5.3.0&amp;lt;/syntaxhighlight&amp;gt; with the &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; inline&amp;gt; -O2 &amp;lt;/syntaxhighlight&amp;gt; flag and run on&lt;br /&gt;
an &amp;lt;tt&amp;gt;Intel(R) Core(TM) i7-4700MQ&amp;lt;/tt&amp;gt; processor.&lt;br /&gt;
&lt;br /&gt;
==Simulated heartbeat with known variability==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:variabilityscatter&amp;quot;&amp;gt;&lt;br /&gt;
[[File:variabilityscatter.png|600px|thumb|upright=2|alt= Generated beat to beat times and their global detection|&amp;lt;caption&amp;gt;Generated beat to beat times and their global detection.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first set of tests for the presented method was performed using a simulated&lt;br /&gt;
heartbeat. A single real heartbeat was taken and then replicated a thousand&lt;br /&gt;
times, each time shifted by a random offset $T$, distributed normally&lt;br /&gt;
around zero, $T \sim \mathcal{N}(0, \sigma^2)$, with $\sigma =&lt;br /&gt;
\frac{1}{2\nu} = \frac{1}{2} \Delta t$.  This means that a sizeable fraction of the&lt;br /&gt;
measurements will be more than $\Delta t$ apart, a difference that must be&lt;br /&gt;
detected by the global search for the method to work.  However, around half of the&lt;br /&gt;
measurements are less than $\Delta t$ apart, providing suitable ground for&lt;br /&gt;
testing the precision of the local search. At the given sample frequency,&lt;br /&gt;
$\sigma$ equals $4.167$ ms.&lt;br /&gt;
&lt;br /&gt;
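The shifting procedure described above can be sketched with the standard library's normal distribution; a minimal illustration (the name generate_offsets is ours, and dt = 1/120 s follows the sample rate stated earlier):

```cpp
#include <random>
#include <vector>

// Test-signal sketch: each replicated beat is shifted by a random offset
// T ~ N(0, sigma^2) with sigma = dt/2, which at 120 Hz gives sigma = 4.167 ms.
std::vector<double> generate_offsets(int beats, double dt, unsigned seed) {
    std::mt19937 gen(seed);
    std::normal_distribution<double> T(0.0, dt / 2.0);
    std::vector<double> offsets(beats);
    for (double& o : offsets) o = T(gen);
    return offsets;
}
```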
The generated, coarsely detected and finely detected beat to beat times&lt;br /&gt;
are presented in &amp;lt;xr id=&amp;quot;fig:variabilityscatter&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Beat to beat time precision is significantly improved by the local search.&lt;br /&gt;
As seen in &amp;lt;xr id=&amp;quot;fig:variabilityhist&amp;quot;/&amp;gt;, the distributions of generated and detected&lt;br /&gt;
heart beats match very well.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;figure id=&amp;quot;fig:variabilityhist&amp;quot;&amp;gt;&lt;br /&gt;
[[File:variabilityhist.png|600px|thumb|center|upright=2|alt= Generated RRI times and their global detection|&amp;lt;caption&amp;gt;Generated RRI times and their global detection.&lt;br /&gt;
The middle two coarse detection columns continue off the chart, but are not shown completely for clarity.&amp;lt;/caption&amp;gt;]]&lt;br /&gt;
&amp;lt;/figure&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Variabilityscatter.png&amp;diff=528</id>
		<title>File:Variabilityscatter.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Variabilityscatter.png&amp;diff=528"/>
				<updated>2016-11-07T17:31:20Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Variabilityhist.png&amp;diff=529</id>
		<title>File:Variabilityhist.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Variabilityhist.png&amp;diff=529"/>
				<updated>2016-11-07T17:31:20Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=File:Results.png&amp;diff=524</id>
		<title>File:Results.png</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=File:Results.png&amp;diff=524"/>
				<updated>2016-11-07T16:56:28Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: File uploaded with MsUpload&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;File uploaded with MsUpload&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	<entry>
		<id>http://e6.ijs.si/medusa/wiki/index.php?title=Medusa&amp;diff=523</id>
		<title>Medusa</title>
		<link rel="alternate" type="text/html" href="http://e6.ijs.si/medusa/wiki/index.php?title=Medusa&amp;diff=523"/>
				<updated>2016-11-07T16:48:38Z</updated>
		
		<summary type="html">&lt;p&gt;Anja: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--__NOTITLE__--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Library for solving PDEs&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the Parallel and Distributed Systems Laboratory we are working on a C++ library that is first and foremost focused on tools for solving Partial Differential Equations with meshless methods. The basic idea is to create generic code for tools that are needed for solving not only PDEs but many other problems, e.g. Moving Least Squares approximation, kD-tree, domain generation engines, etc. Technical details about the code, examples, and more can be found on our [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/html/ documentation page]&lt;br /&gt;
and in [https://gitlab.com/e62Lab/e62numcodes the code repository].&lt;br /&gt;
&lt;br /&gt;
This wiki site is meant for more relaxed discussions about general principles, possible and already implemented applications, preliminary analyses, etc.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
* [[Moving Least Squares (MLS)]]&lt;br /&gt;
* [[kd Tree]]&lt;br /&gt;
* [[Meshless Local Strong Form Method (MLSM)]]&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
* [[Analysis of MLSM performance | Solving Diffusion Equation]]&lt;br /&gt;
* Attenuation of satellite communication&lt;br /&gt;
* [[Heart rate variability detection]]&lt;br /&gt;
* [[Dynamic Thermal Rating of over head lines]]&lt;br /&gt;
* [[Fluid Flow]]&lt;br /&gt;
* [[Phase field tracking]]&lt;br /&gt;
* [[Solid Mechanics]]&lt;br /&gt;
&lt;br /&gt;
== Preliminary analyses ==&lt;br /&gt;
* Execution on Intel® Xeon Phi™ co-processor&lt;br /&gt;
* Execution overheads due to clumsy types&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
* [https://gitlab.com/e62Lab/e62numcodes Code and README on Gitlab]&lt;br /&gt;
* [[How to build]]&lt;br /&gt;
* [[Coding style | Coding style]]&lt;br /&gt;
* [[Testing]]&lt;br /&gt;
* [http://www-e6.ijs.si/ParallelAndDistributedSystems/MeshlessMachine/technical_docs/ Technical documentation]&lt;br /&gt;
* [[Wiki editing guide]]&lt;br /&gt;
* [[Wiki backup guide]]&lt;br /&gt;
&lt;br /&gt;
== FAQ ==&lt;br /&gt;
Also see [[Frequently asked questions]].&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Kosec G., A local numerical solution of a fluid-flow problem on an irregular domain. Advances in engineering software. 2016;7 ; [29512743] :: [http://comms.ijs.si/~gkosec/data/papers/29512743.pdf manuscript]&lt;br /&gt;
* Kosec G., Trobec R., Simulation of semiconductor devices with a local numerical approach. Engineering analysis with boundary elements. 2015;69-75; [27912487] :: [http://comms.ijs.si/~gkosec/data/papers/27912487.pdf manuscript]&lt;br /&gt;
* Kosec G., Šarler B., Simulation of macrosegregation with mesosegregates in binary metallic casts by a meshless method. Engineering analysis with boundary elements. 2014;36-44; [http://comms.ijs.si/~gkosec/data/papers/3218939.pdf manuscript]&lt;br /&gt;
* Kosec G., Depolli M., Rashkovska A., Trobec R., Super linear speedup in a local parallel meshless solution of thermo-fluid problem. Computers &amp;amp; Structures. 2014;133:30-38; [http://comms.ijs.si/~gkosec/data/papers/27339815.pdf manuscript]&lt;br /&gt;
* Kosec G., Zinterhof P., Local strong form meshless method on multiple Graphics Processing Units. Computer modeling in engineering &amp;amp; sciences. 2013;91:377-396; [http://comms.ijs.si/~gkosec/data/papers/26785063.pdf manuscript]&lt;br /&gt;
* Kosec G., Šarler B., H-adaptive local radial basis function collocation meshless method. Computers, materials &amp;amp; continua. 2011;26:227-253; [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerBurgers.pdf manuscript]&lt;br /&gt;
* Trobec R., Kosec G., Šterk M., Šarler B., Comparison of local weak and strong form meshless methods for 2-D diffusion equation. Engineering analysis with boundary elements. 2012;36:310-321; [http://comms.ijs.si/~gkosec/data/papers/EABE2499.pdf manuscript]&lt;br /&gt;
* Kosec G., Založnik M., Šarler B., Combeau H., A Meshless Approach Towards Solution of Macrosegregation Phenomena. CMC: Computers, Materials, &amp;amp; Continua. 2011;580:1-27; [http://comms.ijs.si/~gkosec/data/papers/KosecZaloznikSarlerCombeauSegregation.pdf manuscript]&lt;br /&gt;
* Kosec G., Šarler B., Solution of thermo-fluid problems by collocation with local pressure correction. International Journal of Numerical Methods for Heat &amp;amp; Fluid Flow. 2008;18:868-882; [http://comms.ijs.si/~gkosec/data/papers/KosecSarlerNS2008.pdf manuscript]&lt;br /&gt;
* Trobec R., Kosec G., Parallel Scientific Computing, ISBN: 978-3-319-17072-5 (Print), 978-3-319-17073-2.&lt;br /&gt;
* Slak J., Kosec G., Detection of heart rate variability from a wearable differential ECG device. MIPRO 2016, 39th International Convention, 2016, Opatija, Croatia, ISSN 1847-3938, pp. 450-455.&lt;br /&gt;
* Kolman M., Kosec G., Correlation between attenuation of 20 GHz satellite communication link and liquid water content in the atmosphere. MIPRO 2016, 39th International Convention, 2016, Opatija, Croatia, ISSN 1847-3938, pp. 308-313.&lt;/div&gt;</summary>
		<author><name>Anja</name></author>	</entry>

	</feed>