Positioning of computational nodes

From Medusa: Coordinate Free Meshless Method implementation
{{Box-round|title= Related papers |
[https://e6.ijs.si/ParallelAndDistributedSystems/publications/98533123.pdf M. Depolli, J. Slak, G. Kosec; Parallel domain discretization algorithm for RBF-FD and other meshless numerical methods for solving PDEs, Computers & Structures, 2022 [DOI: 10.1016/j.compstruc.2022.106773]]

[https://e6.ijs.si/ParallelAndDistributedSystems/publications/32782887.pdf J. Slak, G. Kosec; On generation of node distributions for meshless PDE discretizations, SIAM Journal on Scientific Computing, vol. 41, 2019 [DOI: 10.1137/18M1231456]]

[https://e6.ijs.si/ParallelAndDistributedSystems/publications/56730115.pdf U. Duh, G. Kosec, J. Slak; Fast variable density node generation on parametric surfaces with application to mesh-free methods, SIAM Journal on Scientific Computing, vol. 43, 2021 [DOI: 10.1137/20M1325642]]

[https://e6.ijs.si/ParallelAndDistributedSystems/publications/69777155.pdf J. Slak, G. Kosec; Medusa: A C++ library for solving PDEs using strong form mesh-free methods, ACM Transactions on Mathematical Software, vol. 47, 2021 [DOI: 10.1145/3450966]]
}}
__TOC__
Since one of the most attractive features of mesh-free methods is the ability to use nodes without any connectivity information, node placement was long considered much easier than mesh generation, or was simply delegated to existing mesh generation tools, and the problem was thus often disregarded, sometimes with the implication that any nodes could be used, even if placed at random. It soon turned out that this is not the case, especially for strong form methods, since many methods require regular nodes for good performance and bad distributions can impact their stability.
  
One of the key successes of RBF-based mesh-free methods, such as RBF-generated finite differences (RBF-FD), is the ability to use highly spatially variable node distributions, which can adapt to irregular geometries and allow for refinement in critical areas.

We present our algorithms below. Their goal is to fill an arbitrary domain $\Omega \subseteq \R^d$ with nodes following the given target spacing function $h(\b{p}): \Omega \to (0, \infty)$, which maps points from $\Omega$ to the desired distance to neighboring points.

Please refer to the papers listed above for more details.
== Measures of node regularity ==

To analyze the regularity of nodes locally, we find the $c$ nearest neighbors $\b{p}_{i, j}, j = 1, 2, \dots, c$, of each node $\b{p}_i$. Local regularity can now be measured with
\begin{align*}
\bar{d}_i = \frac{1}{c} \sum_{j =1}^{c}\|\b{p}_i - \b{p}_{i, j}\| & \quad \dots \quad \text{average distance to} \ c \ \text{nearest neighbors}, \\
d_i^{\text{min}} = \min_{j=1, \dots, c} \|\b{p}_i - \b{p}_{i, j}\| & \quad \dots \quad \text{minimum distance to} \ c \ \text{nearest neighbors}, \\
d_i^{\text{max}} = \max_{j=1, \dots, c} \|\b{p}_i - \b{p}_{i, j}\| & \quad \dots \quad \text{maximum distance to} \ c \ \text{nearest neighbors}.
\end{align*}
We can also normalize these quantities by scaling them with $h$, obtaining $d'_i = d_i / h(\b{p}_i)$. It is therefore desirable that all the normalized quantities above are approximately equal to $1$.
  
Global regularity can be assessed by plotting distributions of local regularity measures. For example, a desirable distribution of $\bar{d}'_i$ would have its maximum and average approximately equal to $1$ and a low standard deviation. If $h$ is a constant function, a discretization of $\Omega$ with a point set $\mathcal{X} = \{x_1, \dots, x_N\} \subseteq \Omega$ can also be assessed with standard concepts such as<ref name="ScatteredData">H. Wendland, Scattered data approximation, vol. 17, Cambridge University Press, 2004.</ref>
\begin{align*}
r_{\max, \mathcal{X}} = \sup_{x \in \Omega} \min_{1 \leq j \leq N} \|x - x_j\| & \quad \dots \quad \text{maximum empty sphere radius}, \\
r_{\min, \mathcal{X}} = \frac{1}{2} \min_{i \neq j} \| x_i - x_j \| & \quad \dots \quad \text{separation distance}.
\end{align*}
These can also be normalized, obtaining $r' = r / h$. It is desirable that both $r'_\max$ and $r'_\min$ are approximately equal to $\frac{1}{2}$; however, values of $r'_\max$ around $2$ (especially when $N$ is small) are also considered good enough. In practice, the maximum empty sphere radius can be numerically estimated by discretizing $\Omega$ with a much smaller nodal spacing $h$ and calculating the maximum empty sphere radius with its center in one of the generated nodes.

== Filling domain interior ==

We start with a simple algorithm based on '''Poisson Disk Sampling''' (PDS) that results in a relatively tightly packed distribution of nodes.

The algorithm begins with a given non-empty set of nodes called "seed nodes". A single seed node placed anywhere in the domain interior is enough to begin the algorithm, and if none are provided, one can be chosen at random. However, in the context of PDE discretizations, some nodes on the boundary are usually already known and can be used as seed nodes, possibly along with additional nodes in the interior.

The initial nodes are put in a queue. In each iteration $i$, a node $\b{p}_i$ is dequeued. Its desired nodal spacing $r_i$ is obtained from the function $h$, $r_i = h(\b{p}_i)$. A set $C_i$ of $n$ new candidates is generated, which lie on the sphere with center $\b{p}_i$ and radius $r_i$. New candidates are spaced uniformly, with a random rotation. Candidates that lie outside of the domain or are too close to already existing nodes are rejected. Nearest neighbor search is performed by a spatial search structure, usually a [[K-d tree|''k''-d tree]]. The remaining candidates are enqueued and node $\b{p}_i$ is marked as "expanded". The iteration continues until the queue is empty.
  
An illustration of the algorithm's progress on a unit square can be seen in <xr id="fig:gf_generation"/><ref name="GeneralFill">J. Slak and G. Kosec, On generation of node distributions for meshless PDE discretizations, SIAM Journal on Scientific Computing, 41 (2019), pp. A3202–A3229, https://doi.org/10.1137/18M1231456.</ref>.

<figure id="fig:gf_generation">
[[File:gf_generation.png|1000px|thumb|center|<caption> Progress of the interior filling algorithm on a unit square. </caption>]]
</figure>

See <xr id="fig:gf_examples_1"/> and <xr id="fig:gf_examples_2"/> for examples of discretized 2D and 3D domains.

<figure id="fig:gf_examples_1">
[[File:2d_poisson_disk_sampling.png|thumb|<caption> Examples of 2D domains filled by the interior filling algorithm. </caption>]]
</figure>

<figure id="fig:gf_examples_2">
[[File:3d_poisson_disk_sampling.png|thumb|<caption> Example of a 3D domain filled by the interior filling algorithm. </caption>]]
</figure>

The interior filling algorithm is thoroughly analyzed in <ref name="GeneralFill"/>. It is implemented in Medusa as [http://e6.ijs.si/medusa/docs/html/classmm_1_1GeneralFill.html GeneralFill].
  
=== Regularity analysis ===

See <xr id="fig:gf_hist"/><ref name="GeneralFill"/> for the distribution of normalized average distances to $c = 3$ nearest neighbors on a 2D unit square (left) and to $c = 6$ nearest neighbors on a 3D unit cube (right), both filled with constant $h$. The distributions are satisfactory, since they have a low standard deviation and a maximum around $1$. See also the table below for other global measures of regularity for the same cases as those in the histograms.

<figure id="fig:gf_hist">
[[File:gf_hist.png|thumb|center|600px|<caption> Distribution of normalized distances to $c = 3$ nearest neighbors on a 2D unit square (left) and $c = 6$ nearest neighbors on a 3D unit cube (right), filled with constant $h$. </caption>]]
</figure>
{| class="wikitable" style="margin-left: auto; margin-right: auto; border: none;"
!colspan="6"|Measures of regularity
|-
|dim.
|$\text{mean} \, \bar{d}'_i$
|$\text{std} \, \bar{d}'_i$
|$\text{mean} \left(\left(d_i^{\text{max}}\right)' - \left(d_i^{\text{min}}\right)'\right)$
|$r'_\min$
|$r'_\max$
|-
|2D
|$1.0416$
|$0.0344$
|$0.0832$
|$0.5000$
|$2.0656$
|-
|3D
|$1.0508$
|$0.0418$
|$0.0849$
|$0.5000$
|$2.1023$
|}
  
=== Computational complexity ===

The computational complexity of the interior filling algorithm is<ref name="GeneralFill"/>
\begin{equation}
T_{\text{interior}} = O(P(N) + NnQ(N) + NI(N)),
\end{equation}
where $N$ is the number of generated nodes, $P$ is the precomputation complexity of the spatial search structure, $Q$ is the computational complexity of a radius (nearest neighbor) query and $I$ is the computational complexity of insertions into the spatial search structure. When using a [[K-d tree|''k''-d tree]] spatial search structure, this simplifies to
\begin{equation}
T_{\text{interior, tree}} = O(nN \log N),
\end{equation}
and when using a uniform-grid based spatial search structure<ref name="GeneralFill"/> ([http://e6.ijs.si/medusa/docs/html/classmm_1_1KDGrid.html KDGrid] in Medusa) it simplifies to
\begin{equation}
T_{\text{interior, grid}} = O\left(\frac{|\text{obb}\, \Omega|}{|\Omega|}N + nN\right),
\end{equation}
where $\text{obb}\, \Omega$ is the oriented bounding box of $\Omega$.
Below are some computational times for filling a box with a hole with roughly $100 \, 000$ nodes on a laptop computer, given as a rough reference:

* 2D, KDTree: $1.57$ s, of which $5\%$ is candidate generation, $85\%$ spatial queries and $4\%$ spatial inserts
* 2D, KDGrid: $0.35$ s, of which $19\%$ is candidate generation, $56\%$ spatial queries and $0.002\%$ spatial inserts
* 3D, KDTree: $7.87$ s, of which $6\%$ is candidate generation, $89\%$ spatial queries and $1\%$ spatial inserts
* 3D, KDGrid: $2.58$ s, of which $16\%$ is candidate generation, $70\%$ spatial queries and $0.001\%$ spatial inserts

The percentages vary slightly with $N$, with larger $N$ increasing the share of spatial queries by about $2$ percentage points.
  
== Filling parametric surfaces ==

The algorithm from the previous section can be modified to work on domain boundaries, for example curves in 2D and surfaces in 3D. Let $\partial \Omega$ be a domain boundary parametrized by a regular parametrization $\boldsymbol{r}: \Lambda \subset \mathbb{R}^{d - 1} \to \partial \Omega \subset \mathbb{R}^{d}$ and let $h(\boldsymbol{p})$ be our spacing function.

We can consider our problem as filling the parametric domain $\Lambda$ in such a way that its nodes, when mapped by $\boldsymbol{r}$, are approximately $h$ apart. The general logic of iteratively expanding nodes can thus stay the same; we only need to generate candidates differently. Let $\boldsymbol{\lambda}_i \in \Lambda$ be the parameter we wish to expand. We want to generate candidates $\boldsymbol{\eta}_{i,j} \in H_i \subset \Lambda$ so that
\begin{equation}
||\boldsymbol{r}(\boldsymbol{\eta}_{i,j}) - \boldsymbol{r}(\boldsymbol{\lambda}_i)|| = h(\boldsymbol{r}(\boldsymbol{\lambda}_i)).
\end{equation}
Let
\begin{equation}
\boldsymbol{\eta}_{i,j} = \boldsymbol{\lambda}_i + \alpha_{i, j} \vec{s}_{i,j},
\end{equation}
where $\vec{s}_{i,j}$ is a unit vector and $\alpha_{i, j} > 0$. Using the first order Taylor expansion of $\boldsymbol{r}$, we get
\begin{align}
h(\boldsymbol{r}(\boldsymbol{\lambda}_i)) &\approx ||\boldsymbol{r}(\boldsymbol{\lambda}_i) + \alpha_{i, j} \nabla \boldsymbol{r}(\boldsymbol{\lambda}_i) \vec s_{i, j} - \boldsymbol{r}(\boldsymbol{\lambda}_i)|| = \alpha_{i, j} ||\nabla \boldsymbol{r}(\boldsymbol{\lambda}_i) \vec s_{i, j}||, \\
\alpha_{i, j} &\approx \frac{h(\boldsymbol{r}(\boldsymbol{\lambda}_i))}{||\nabla \boldsymbol{r}(\boldsymbol{\lambda}_i) \vec s_{i, j}||}.
\end{align}
Therefore, the set of candidates for the expansion of $\boldsymbol{\lambda}_i$ can be expressed as
\begin{equation}
H_i = \left\{ \boldsymbol{\lambda}_i + \frac{h(\boldsymbol{r}(\boldsymbol{\lambda}_i))}{||\nabla \boldsymbol{r}(\boldsymbol{\lambda}_i) \vec s_{i, j}||} \vec{s}_{i, j} ;\ \vec{s}_{i,j} \in S_i \right\},
\end{equation}
where $S_i$ is a random discretization of the unit sphere and $|S_i| = n$. Candidates whose mapped positions are at least $h(\b{r}(\b{\lambda}_i))$ away from all already existing nodes are accepted and enqueued for expansion. The boundary filling algorithm terminates when the queue is empty.
  
An illustration of the algorithm's progress on a part of a unit sphere can be seen in <xr id="fig:gsf_generation"/><ref name="GeneralSurfaceFill">U. Duh, G. Kosec and J. Slak, Fast variable density node generation on parametric surfaces with application to mesh-free methods, SIAM Journal on Scientific Computing, 43 (2021), https://doi.org/10.1137/20M1325642.</ref>.

<figure id="fig:gsf_generation">
[[File:gsf_generation.png|1000px|thumb|center|<caption> Progress of the boundary filling algorithm in the parametric domain $\Lambda$ (bottom) and the main domain $\partial \Omega$ (top). Part of a unit sphere was discretized with $h = 0.08$. </caption>]]
</figure>

See <xr id="fig:gsf_examples_1"/><ref name="GeneralSurfaceFill"/>, <xr id="fig:gsf_examples_2"/> and <xr id="fig:gsf_examples_3"/><ref name="GeneralSurfaceFill"/> for examples of discretized curves in 2D and surfaces in 3D.

<figure id="fig:gsf_examples_1">
[[File:gsf_2d_article.png|thumb|<caption> Example of a 2D domain filled by the boundary filling algorithm. </caption>]]
</figure>

<figure id="fig:gsf_examples_2">
[[File:3d_simple.png|thumb|<caption> Example of a 3D domain filled by the boundary filling algorithm. </caption>]]
</figure>

<figure id="fig:gsf_examples_3">
[[File:gsf_3d_article.png|thumb|<caption> Example of a 3D domain filled by the boundary filling algorithm. </caption>]]
</figure>

The boundary filling algorithm can also be used to fill surfaces defined by multiple patches, such as non-uniform rational basis spline (NURBS) models generated by computer aided design (CAD) software. It is usually beneficial to discretize patch boundaries ($\partial \partial \Omega$) first, since this ensures that no gaps of size between $h(\boldsymbol{p})$ and $2h(\boldsymbol{p})$ appear on patch boundaries.

The algorithm is thoroughly analyzed in <ref name="GeneralSurfaceFill"/>. It is implemented in Medusa as [http://e6.ijs.si/medusa/docs/html/classmm_1_1GeneralSurfaceFill.html GeneralSurfaceFill]. See also the [[parametric domains]] and [[NURBS domains]] examples.
  
=== Regularity analysis ===

See <xr id="fig:gsf_hist"/><ref name="GeneralSurfaceFill"/> for the distribution of normalized average distances to $c = 2$ nearest neighbors in 2D on the curve from <xr id="fig:gsf_examples_1"/> (left) and to $c = 3$ nearest neighbors in 3D on the heart-like surface from <xr id="fig:gsf_examples_3"/> (right), both filled with constant $h$. The distributions are satisfactory, since they have a low standard deviation and a maximum around $1$, and they improve further for larger $N$ (smaller $h$). See also the table below for other global measures of regularity for the same cases as those in the histograms.

<figure id="fig:gsf_hist">
[[File:gsf_hist.png|thumb|center|600px|<caption> Distribution of normalized distances to $c = 2$ nearest neighbors in 2D (left) and $c = 3$ nearest neighbors in 3D (right), filled with constant $h$. </caption>]]
</figure>
{| class="wikitable" style="margin-left: auto; margin-right: auto; border: none;"
!colspan="6"|Measures of regularity
|-
|dim.
|$\text{mean} \, \bar{d}'_i$
|$\text{std} \, \bar{d}'_i$
|$\text{mean} \left(\left(d_i^{\text{max}}\right)' - \left(d_i^{\text{min}}\right)'\right)$
|$r'_\min$
|$r'_\max$
|-
|2D
|$1.0001$
|$5.1483 \times 10^{-4}$
|$1.1136 \times 10^{-10}$
|$0.4622$
|$0.7808$
|-
|3D
|$1.0357$
|$0.0374$
|$3.8888 \times 10^{-4}$
|$0.3522$
|$1.5730$
|}
  
=== Computational complexity ===

Since the boundary filling algorithm is based on the interior filling algorithm, its computational complexity is likewise<ref name="GeneralSurfaceFill"/>
\begin{equation}
T_{\text{boundary}} = O(P(N) + NnQ(N) + NI(N)),
\end{equation}
where $N$ is the number of generated nodes, $P$ is the precomputation complexity of the spatial search structure, $Q$ is the computational complexity of a radius (nearest neighbor) query and $I$ is the computational complexity of insertions into the spatial search structure. When using a [[K-d tree|''k''-d tree]] spatial search structure, this simplifies to
\begin{equation}
T_{\text{boundary, tree}} = O(nN \log N).
\end{equation}

Below are some computational times for filling the examples from <xr id="fig:gsf_examples_1"/> and <xr id="fig:gsf_examples_3"/> with $100 \, 000$ nodes on a laptop computer, given as a rough reference:

* 2D ($n = 2$): $0.28$ s
* 3D ($n = 15$): $1.32$ s

=References=
<references/>
 

Latest revision as of 18:01, 4 December 2022


Related papers

M. Depolli, J. Slak, G. Kosec; Parallel domain discretization algorithm for RBF-FD and other meshless numerical methods for solving PDEs, Computers & Structures, 2022 [DOI: 10.1016/j.compstruc.2022.106773]

J. Slak, G. Kosec; On generation of node distributions for meshless PDE discretizations, SIAM journal on scientific computing, vol. 41, 2019 [DOI: 10.1137/18M1231456]

U. Duh, G. Kosec, J. Slak; Fast variable density node generation on parametric surfaces with application to mesh-free methods, SIAM journal on scientific computing, vol. 43, 2021 [DOI: 10.1137/20M1325642]

J. Slak, G. Kosec; Medusa : A C++ library for solving PDEs using strong form mesh-free methods, ACM transactions on mathematical software, vol. 47, 2021 [DOI: 10.1145/3450966]


Since one of the most attractive features of mesh-free methods is the ability to use nodes without any connectivity information, node placement was considered much easier than mesh generation, or existing mesh generation tools were simply reused, and the problem was thus often disregarded, sometimes implying that any nodes could be used, even if placed at random. It soon turned out that this is not the case, especially for strong form methods, since many methods require regular nodes for good performance and bad distributions can impact their stability.


One of the key successes of RBF-based mesh-free methods, such as RBF-generated finite differences (RBF-FD), is the ability to use highly spatially variable node distributions which can adapt to irregular geometries and allow for refinement in critical areas. We present our algorithms below. Their goal is to fill an arbitrary domain $\Omega \subseteq \R^d$ with nodes following a given target spacing function $h(\b{p}): \Omega \to (0, \infty)$, which maps points of $\Omega$ to the desired distance to neighboring points.

Please refer to the papers listed above for more details.


Measures of node regularity

To analyze the regularity of nodes locally, we find the $c$ nearest neighbors $\b{p}_{i, j}, j = 1, 2, \dots, c$ of each node $\b{p}_i$. Local regularity can now be measured with \begin{align*} \bar{d}_i = \frac{1}{c} \sum_{j =1}^{c}\|\b{p}_i - \b{p}_{i, j}\| & \quad \dots \quad \text{average distance to} \ c \ \text{nearest neighbors}, \\ d_i^{\text{min}} = \min_{j=1, \dots, c} \|\b{p}_i - \b{p}_{i, j}\| & \quad \dots \quad \text{minimum distance to} \ c \ \text{nearest neighbors}, \\ d_i^{\text{max}} = \max_{j=1, \dots, c} \|\b{p}_i - \b{p}_{i, j}\| & \quad \dots \quad \text{maximum distance to} \ c \ \text{nearest neighbors}. \\ \end{align*} We can also normalize these quantities by dividing them by $h$, obtaining e.g. $d'_i = d_i / h(\b{p}_i)$. It is desirable that all of the above normalized quantities are approximately equal to $1$.

Global regularity can be assessed by plotting distributions of local regularity measures. For example, a desirable distribution of $\bar{d}'_i$ would have its maximum and average approximately equal to $1$ and a low standard deviation. If $h$ is a constant function, a discretization of $\Omega$ with a point set $\mathcal{X} = \{x_1, \dots, x_N\} \subseteq \Omega$ can also be assessed with standard concepts such as[1] \begin{align*} r_{\max, \mathcal{X}} = \sup_{x \in \Omega} \min_{1 \leq j \leq N} \|x - x_j\| & \quad \dots \quad \text{maximum empty sphere radius}, \\ r_{\min, \mathcal{X}} = \frac{1}{2} \min_{i \neq j} \| x_i - x_j \| & \quad \dots \quad \text{separation distance}. \\ \end{align*} These can also be normalized, giving $r' = r / h$. It is desirable that both $r'_\max$ and $r'_\min$ are approximately equal to $\frac{1}{2}$; however, values of $r'_\max$ around $2$ (especially when $N$ is small) are also considered good enough. In practice, the maximum empty sphere radius can be numerically estimated by discretizing $\Omega$ with a much smaller nodal spacing than $h$ and calculating the maximum empty sphere radius with centers in the generated nodes.
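These local measures are straightforward to compute directly. The sketch below is a hypothetical Python helper (the name `regularity` and the brute-force neighbor search are ours, not Medusa's) that evaluates $\bar{d}_i$, $d_i^{\text{min}}$ and $d_i^{\text{max}}$, normalized by a constant spacing $h$; on a regular grid, interior nodes yield exactly $1$ for all three.

```python
# Sketch of the local regularity measures, assuming a small point set and a
# constant spacing h; a brute-force search stands in for a k-d tree.
import numpy as np

def regularity(points, c, h=1.0):
    """Return (d_bar, d_min, d_max): normalized average, minimum and maximum
    distance to the c nearest neighbors of each node."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)   # all pairwise distances
    np.fill_diagonal(dist, np.inf)         # exclude each node itself
    nearest = np.sort(dist, axis=1)[:, :c] # distances to c nearest neighbors
    return nearest.mean(axis=1) / h, nearest[:, 0] / h, nearest[:, -1] / h

# Unit-square grid with spacing h = 0.25: each interior node has 4 nearest
# neighbors exactly h away, so its normalized measures are all equal to 1.
h = 0.25
grid = [(i * h, j * h) for i in range(5) for j in range(5)]
d_bar, d_min, d_max = regularity(grid, c=4, h=h)
```

Boundary and corner nodes score worse than $1$, since some of their $c$ nearest neighbors lie diagonally, which is exactly the behavior these measures are meant to expose.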

Filling domain interior

We start with a simple algorithm based on Poisson Disk Sampling (PDS) that results in a relatively tightly packed distribution of nodes. The algorithm begins with a given non-empty set of nodes called "seed nodes". A single seed node placed anywhere in the domain interior is enough to begin the algorithm, and if none are provided, one can be chosen at random. However, in the context of PDE discretizations, some nodes on the boundary are usually already known and can be used as seed nodes, possibly along with additional nodes in the interior.

The initial nodes are put in a queue. In each iteration $i$, a new node $\b{p}_i$ is dequeued. Its desired nodal spacing $r_i$ is obtained from the function $h$, $r_i = h(\b{p}_i)$. A set $C_i$ of $n$ new candidates is generated, lying on the sphere with center $\b{p}_i$ and radius $r_i$, spaced uniformly with a random rotation. Candidates that lie outside of the domain or are too close to already existing nodes are rejected. Nearest neighbor searches are performed using a spatial search structure, usually a k-d tree. The remaining candidates are enqueued and node $\b{p}_i$ is marked as "expanded". Iteration continues until the queue is empty.
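The expansion loop above can be sketched in a few lines. The following is a minimal 2D Python illustration, not Medusa's implementation: a brute-force proximity scan stands in for the k-d tree, all names are ours, and a small tolerance guards the rejection test against floating-point rounding of candidates that lie exactly $r_i$ away from their parent.

```python
# Minimal 2D sketch of the interior filling (PDS-based) algorithm.
import math, random
from collections import deque

def fill_interior(seeds, h, inside, n=12, seed=0):
    rng = random.Random(seed)
    nodes = list(seeds)                    # accepted nodes
    queue = deque(seeds)                   # nodes waiting to be expanded
    while queue:
        p = queue.popleft()
        r = h(p)                           # desired local spacing
        phi0 = rng.uniform(0, 2 * math.pi) # random rotation of candidates
        for k in range(n):                 # n uniformly spaced candidates
            phi = phi0 + 2 * math.pi * k / n
            c = (p[0] + r * math.cos(phi), p[1] + r * math.sin(phi))
            if not inside(c):
                continue                   # reject: outside the domain
            # reject: too close to an existing node (tolerance for rounding)
            if any(math.dist(c, q) < 0.99 * r for q in nodes):
                continue
            nodes.append(c)
            queue.append(c)                # new node will be expanded later
    return nodes

# Fill the unit square with constant spacing h = 0.05 from a single seed.
pts = fill_interior([(0.5, 0.5)], lambda p: 0.05,
                    lambda p: 0 <= p[0] <= 1 and 0 <= p[1] <= 1)
```

In a real implementation the `any(...)` scan is the part replaced by a radius query on a spatial search structure, which is where the complexity bounds discussed below come from.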

An illustration of the algorithm's progress on a unit square can be seen in Figure 1[2].

Figure 1: Progress of the interior filling algorithm on a unit square.

See Figure 2 and Figure 3 for examples of discretized 2D and 3D domains.

Figure 2: Examples of 2D domains filled by the interior filling algorithm.
Figure 3: Example of a 3D domain filled by the interior filling algorithm.

The interior filling algorithm is thoroughly analyzed in [2]. It is implemented in Medusa as GeneralFill.

Regularity analysis

See Figure 4[2] for a distribution of normalized average distances to $c = 3$ nearest neighbors on a 2D unit square (left) and $c = 6$ nearest neighbors on a 3D unit cube (right) filled with constant $h$. The distribution is satisfactory, since it has a low standard deviation and a maximum around $1$. See also the table below for other global measures of regularity for the same cases as those on the histograms.

Figure 4: Distribution of normalized distances to $c = 3$ nearest neighbors on a 2D unit square (left) and $c = 6$ nearest neighbors on a 3D unit cube (right) filled with constant $h$.
Measures of regularity
dim. | $\text{mean} \, \bar{d}'_i$ | $\text{std} \, \bar{d}'_i$ | $\text{mean} \left(\left(d_i^{\text{max}}\right)' - \left(d_i^{\text{min}}\right)'\right)$ | $r'_\min$ | $r'_\max$
2D | $1.0416$ | $0.0344$ | $0.0832$ | $0.5000$ | $2.0656$
3D | $1.0508$ | $0.0418$ | $0.0849$ | $0.5000$ | $2.1023$

Computational complexity

The computational complexity of the interior filling algorithm is[2] \begin{equation} T_{\text{interior}} = O(P(N) + NnQ(N)+NI(N)), \end{equation} where $N$ is the number of generated nodes, $P$ is the precomputation complexity of the spatial search structure, $Q$ is the computational complexity of a radius (nearest neighbor) query and $I$ is the computational complexity of insertions into the spatial search structure. When using a k-d tree spatial search structure this simplifies to \begin{equation} T_{\text{interior, tree}} = O(nN \log N), \end{equation} and when using a uniform-grid based spatial search structure[2] (KDGrid in Medusa) it simplifies to \begin{equation} T_{\text{interior, grid}} = O\left(\frac{|\text{obb} \Omega|}{|\Omega|}N + nN\right), \end{equation} where $\text{obb} \Omega$ is the oriented bounding box of $\Omega$.

Below are some computational times, measured on a laptop computer, for filling a box with a hole with roughly $100 \, 000$ nodes, given as a rough reference.

  • 2D, KDTree: $1.57$ s, of which $5\%$ is candidate generation, $85\%$ is spatial queries and $4\%$ is spatial inserts
  • 2D, KDGrid: $0.35$ s, of which $19\%$ is candidate generation, $56\%$ is spatial queries and $0.002\%$ is spatial inserts
  • 3D, KDTree: $7.87$ s, of which $6\%$ is candidate generation, $89\%$ is spatial queries and $1\%$ is spatial inserts
  • 3D, KDGrid: $2.58$ s, of which $16\%$ is candidate generation, $70\%$ is spatial queries and $0.001\%$ is spatial inserts

Percentages vary slightly with $N$; larger $N$ increases the spatial query share by about $2$ percentage points.

Filling parametric surfaces

The algorithm from the previous section can be modified to work on domain boundaries, for example curves in 2D and surfaces in 3D. Let $\partial \Omega$ be a domain boundary parametrized with a regular parametrization $\boldsymbol{r}: \Lambda \subset \mathbb{R}^{d - 1} \to \partial \Omega \subset \mathbb{R}^{d}$ and let $h(\boldsymbol{p})$ be our spacing function.

We can consider our problem as filling the domain $\Lambda$ in such a way that, when its nodes are mapped by $\boldsymbol{r}$, they are approximately $h$ apart. The general logic of iteratively expanding nodes can thus stay the same; we only need to generate different candidates. Let $\boldsymbol{\lambda}_i \in \Lambda$ be the parameter we wish to expand. We want to generate candidates $\boldsymbol{\eta}_{i,j} \in H_i \subset \Lambda$ so that \begin{equation} ||\boldsymbol{r}(\boldsymbol{\eta}_{i,j}) - \boldsymbol{r}(\boldsymbol{\lambda}_i)|| = h(\boldsymbol{r}(\boldsymbol{\lambda}_i)). \end{equation} Let \begin{equation} \boldsymbol{\eta}_{i,j} = \boldsymbol{\lambda}_i + \alpha_{i, j} \vec{s}_{i,j}, \end{equation} where $\vec{s}_{i,j}$ is a unit vector and $\alpha_{i, j} > 0$. Using the first order Taylor expansion we get \begin{align} h(\boldsymbol{r}(\boldsymbol{\lambda}_i)) &\approx ||\boldsymbol{r}(\boldsymbol{\lambda}_i) + \alpha_{i, j} \nabla \boldsymbol{r}(\boldsymbol{\lambda}_i) \vec s_{i, j} - \boldsymbol{r}(\boldsymbol{\lambda}_i)|| = \alpha_{i, j} ||\nabla \boldsymbol{r}(\boldsymbol{\lambda}_i) \vec s_{i, j}||, \\ \alpha_{i, j} &\approx \frac{h(\boldsymbol{r}(\boldsymbol{\lambda}_i))}{||\nabla \boldsymbol{r}(\boldsymbol{\lambda}_i) \vec s_{i, j}||}. \end{align} Therefore, our set of candidates for the expansion of $\boldsymbol{\lambda}_i$ can be expressed as \begin{equation} H_i = \left\{ \boldsymbol{\lambda}_i + \frac{h(\boldsymbol{r}(\boldsymbol{\lambda}_i))}{||\nabla \boldsymbol{r}(\boldsymbol{\lambda}_i) \vec s_{i, j}||} \vec{s}_{i, j} ; \ \vec{s}_{i,j} \in S_i \right\}, \end{equation} where $S_i$ is a random discretization of the unit sphere with $|S_i| = n$. Now, we can accept candidates whose mapped positions are at least $h(\b{r}(\b{\lambda}_i))$ away from all already existing nodes and enqueue them for expansion. The boundary filling algorithm terminates when the queue is empty.
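The candidate construction above is easy to check on a curve, where $d = 2$, the parametric domain is one-dimensional and the "unit sphere" $S_i$ is just $\{-1, +1\}$. The sketch below uses illustrative names and a circle of radius $R$ as an assumed parametrization; it verifies that the mapped candidates land approximately $h$ away from the expanded point.

```python
# First-order candidate generation on a parametric curve: a circle r(lam)
# of radius R, so grad r(lam) is the tangent vector of length R.
import math

R = 2.0
r = lambda lam: (R * math.cos(lam), R * math.sin(lam))
dr = lambda lam: (-R * math.sin(lam), R * math.cos(lam))  # Jacobian (tangent)

def candidates(lam, h):
    """Expand parameter lam into eta = lam + alpha * s for s in {-1, +1},
    with alpha = h / ||grad r(lam) s||, so ||r(eta) - r(lam)|| ~ h."""
    norm = math.hypot(*dr(lam))            # ||grad r(lam) s|| for unit s
    return [lam + s * h / norm for s in (-1.0, 1.0)]

h = 0.1
etas = candidates(0.3, h)
# the construction is first-order, so the gaps are only approximately h
gaps = [math.dist(r(e), r(0.3)) for e in etas]
```

For the circle the error of the approximation is the difference between arc length and chord length, which shrinks rapidly as $h$ decreases, consistent with the Taylor expansion argument above.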

An illustration of the algorithm's progress on a part of a unit sphere can be seen in Figure 5 [3].

Figure 5: Progress of the boundary filling algorithm in parametric domain $\Lambda$ (bottom) and main domain $\partial \Omega$ (top). Part of a unit sphere was discretized with $h = 0.08$.

See Figure 6[3], Figure 7 and Figure 8[3], for examples of discretized curves in 2D and surfaces in 3D.

Figure 6: Example of a 2D domain filled by the boundary filling algorithm.
Figure 7: Example of a 3D domain filled by the boundary filling algorithm.
Figure 8: Example of a 3D domain filled by the boundary filling algorithm.

The boundary filling algorithm can also be used to fill surfaces defined by multiple patches, such as non-uniform rational basis spline (NURBS) models generated by computer aided design (CAD) software. It is usually beneficial to discretize the patch boundaries ($\partial \partial \Omega$) first, since this ensures that no gaps of size between $h(\boldsymbol{p})$ and $2h(\boldsymbol{p})$ appear along the patch boundaries.

The algorithm is thoroughly analyzed in [3]. It is implemented in Medusa as GeneralSurfaceFill. See also parametric domains example and NURBS domains example.

Regularity analysis

See Figure 9[3] for a distribution of normalized average distances to $c = 2$ nearest neighbors in 2D on a curve from Figure 6 (left) and $c = 3$ nearest neighbors in 3D on a heart-like surface from Figure 8 (right) filled with constant $h$. The distribution is satisfactory, since it has a low standard deviation and a maximum around $1$. It is further improved for larger $N$ (smaller $h$). See also the table below for other global measures of regularity for the same cases as those on the histograms.

Figure 9: Distribution of normalized distances to $c = 2$ nearest neighbors in 2D (left) and $c = 3$ nearest neighbors in 3D (right) filled with constant $h$.
Measures of regularity
dim. | $\text{mean} \, \bar{d}'_i$ | $\text{std} \, \bar{d}'_i$ | $\text{mean} \left(\left(d_i^{\text{max}}\right)' - \left(d_i^{\text{min}}\right)'\right)$ | $r'_\min$ | $r'_\max$
2D | $1.0001$ | $5.1483 \times 10^{-4}$ | $1.1136 \times 10^{-10}$ | $0.4622$ | $0.7808$
3D | $1.0357$ | $0.0374$ | $3.8888 \times 10^{-4}$ | $0.3522$ | $1.5730$

Computational complexity

Since the boundary filling algorithm is based on the interior filling algorithm, its computational complexity is also equal to[3] \begin{equation} T_{\text{boundary}} = O(P(N) + NnQ(N)+NI(N)), \end{equation} where $N$ is the number of generated nodes, $P$ is the precomputation complexity of the spatial search structure, $Q$ is the computational complexity of a radius (nearest neighbor) query and $I$ is the computational complexity of insertions into the spatial search structure. When using a k-d tree spatial search structure this simplifies to \begin{equation} T_{\text{boundary, tree}} = O(nN \log N). \end{equation}

Below are some computational times on a laptop computer of filling examples from Figure 6 and Figure 8 with $100 \, 000$ nodes, given as a rough reference.

  • 2D ($n = 2$): $0.28$ s
  • 3D ($n = 15$): $1.32$ s

References

  1. H. Wendland, Scattered data approximation, vol. 17, Cambridge university press, 2004.
  2. J. Slak and G. Kosec, On generation of node distributions for meshless PDE discretizations, SIAM Journal on Scientific Computing, 41 (2019), pp. A3202–A3229, https://doi.org/10.1137/18M1231456.
  3. U. Duh, G. Kosec and J. Slak, Fast variable density node generation on parametric surfaces with application to mesh-free methods, SIAM Journal on Scientific Computing, 43 (2021), https://doi.org/10.1137/20M1325642.