Meshless Local Strong Form Method (MLSM)

From Medusa: Coordinate Free Meshless Method implementation

Revision as of 20:36, 20 October 2016

The Meshless Local Strong Form Method (MLSM) is a generalization of methods known in the literature as the Diffuse Approximate Method (DAM), the Local Radial Basis Function Collocation Method (LRBFCM), the Generalized FDM, the Collocated Discrete Least Squares (CDLS) meshless method, etc. Although each of the named methods possesses some unique properties, the basic concept of all local strong form methods is similar, namely to approximate the treated fields with nodal trial functions over local support domains. The nodal trial function is then used to evaluate various operators, e.g. differentiation or integration, and, after all, to approximate the considered field at an arbitrary position. The MLSM can be understood as a meshless generalization of the FDM, although a much more powerful one: MLSM aims to avoid pre-defined relations between nodes and shifts this task into the solution procedure. The final goal of such an approach is higher flexibility in complex domains.

Figure 1: The scheme of the local meshless principle.

The elegance of MLSM lies in its simplicity and generality. The presented methodology can also be easily upgraded or altered, e.g. with nodal adaptation, basis augmentation, or conditioning of the approximation, to treat anomalies such as sharp discontinuities or other troublesome situations that might occur in complex simulations. In the MLSM, the type of approximation, the size of the support domain, and the type and number of basis functions are all free parameters. For example, the minimal support size for a 2D transport problem (a system of second-order PDEs) is five, but larger support domains can be used to stabilize computations on scattered nodes, at the cost of computational complexity. Various types of basis functions can appear in the calculation of the trial function; the most commonly used are multiquadrics, Gaussians and monomials. Some authors also augment the radial basis with monomials to improve the performance of the method. All these features can be controlled on the fly during the simulation. From the computational point of view, the localization of the method reduces inter-processor communication, which is often a bottleneck of parallel algorithms.

The core of the spatial discretization used in this paper is a local approximation of the considered field over overlapping local support domains, i.e. in each node we use an approximation over a small local subset of $n$ neighbouring nodes. The trial function is thus introduced as

\[\hat{u}(\mathbf{p})=\sum\limits_{i}^{m}{{{\alpha }_{i}}{{b}_{i}}(\mathbf{p})}=\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}\mathbf{\alpha }\], (1)

with $m,\,\,\mathbf{\alpha }\text{,}\,\,\mathbf{b},\,\,\mathbf{p}\left( {{p}_{x}},{{p}_{y}} \right)$ standing for the number of basis functions, the approximation coefficients, the basis functions and the position vector, respectively. When the number of basis functions equals the number of support nodes, $n=m$, the determination of the coefficients $\mathbf{\alpha }$ simplifies to solving the system of $n$ linear equations that results from expressing Eq. (1) in each support node,

\[u\left( {{\mathbf{p}}_{j}} \right)=\mathbf{b}{{\left( {{\mathbf{p}}_{j}} \right)}^{\text{T}}}\mathbf{\alpha }\], (2)

where ${{\mathbf{p}}_{j}}$ are the positions of the support nodes and $\mathbf{u}$ collects the values of the considered field at those positions. The above system can be written in matrix form as

\[\mathbf{u}=\mathbf{B\alpha }\], (3)

where $\mathbf{B}$ stands for the coefficient matrix with elements ${{B}_{ji}}={{b}_{i}}\left( {{\mathbf{p}}_{j}} \right)$. System (3) can be solved efficiently with a Cholesky decomposition. The best-known method that uses such an approach is the LRBFCM, which has recently been applied to various problems [16, 17]. If the number of support nodes is higher than the number of basis functions, $n>m$, a Weighted Least Squares (WLS) approximation is used to solve the over-determined problem. An example of such an approach is DAM [5], which was originally formulated to solve fluid flow in porous media. DAM uses six monomials as the basis and nine-node support domains to evaluate the first and second derivatives of the physical fields required to solve the problem at hand.
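The collocation case ($n=m$) can be illustrated with a short self-contained Python sketch; the six-node support layout, the sampled field and all variable names below are assumptions chosen for the example, not taken from the paper. The sketch builds $\mathbf{B}$ from the second-order monomial basis, solves $\mathbf{B\alpha}=\mathbf{u}$ and evaluates the trial function (1):

```python
def basis(p):
    # Second-order monomial basis (1, x, y, x^2, y^2, xy), m = 6.
    x, y = p
    return [1.0, x, y, x * x, y * y, x * y]

def solve(A, b):
    # Gaussian elimination with partial pivoting; fine for small dense systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical six-node support around a central node (n = m = 6).
nodes = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (-0.1, 0.0), (0.0, -0.1), (0.1, 0.1)]
u = lambda q: 1 + 2 * q[0] + 3 * q[1] + q[0] ** 2   # sample field in the basis span

B = [basis(q) for q in nodes]            # B_ji = b_i(p_j), Eq. (3)
alpha = solve(B, [u(q) for q in nodes])  # collocation: B alpha = u

p = (0.05, 0.02)
u_hat = sum(a * b for a, b in zip(alpha, basis(p)))   # trial function, Eq. (1)
assert abs(u_hat - u(p)) < 1e-10  # exact: the field lies in the basis span
```

Since the sampled field lies in the span of the basis, the collocation approximation reproduces it exactly; for general fields the trial function is a local approximation.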
To determine the approximation coefficients, the weighted norm

\[{{r}^{2}}=\sum\limits_{j}^{n}{W\left( {{\mathbf{p}}_{j}} \right){{\left( u({{\mathbf{p}}_{j}})-\hat{u}({{\mathbf{p}}_{j}}) \right)}^{2}}}={{\left( \mathbf{B\alpha }-\mathbf{u} \right)}^{\text{T}}}\mathbf{W}\left( \mathbf{B\alpha }-\mathbf{u} \right)\], (4)

is minimized, where $\mathbf{W}$ is a diagonal matrix with elements ${{W}_{jj}}=W\left( {{\mathbf{p}}_{j}} \right)$ and

\[W\left( \mathbf{p} \right)=\exp \left( -{{\left( \frac{\left\| {{\mathbf{p}}_{0}}-\mathbf{p} \right\|}{\sigma {{p}_{\min }}} \right)}^{2}} \right)\], (5)

where $\sigma$ stands for the weight parameter, ${{\mathbf{p}}_{0}}$ for the centre of the support domain and ${{p}_{\min }}$ for the distance to the nearest support node. There are different approaches to solving (4). The most intuitive, and also the computationally cheapest, is to set the gradient of (4) with respect to $\mathbf{\alpha}$ to zero, resulting in the normal equations

\[{{\mathbf{B}}^{\text{T}}}\mathbf{WB\alpha }={{\mathbf{B}}^{\text{T}}}\mathbf{Wu}\]. (6)

The problem with this approach is poor conditioning. A more stable but more expensive approach is QR decomposition; even more stable is the SVD, which is, of course, also more expensive. In either case, the solution can be written in matrix form as

\[\mathbf{\alpha }={{\left( {{\mathbf{W}}^{0.5}}\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\mathbf{u}\], (7)

where ${{\left( {{\mathbf{W}}^{0.5}}\mathbf{B} \right)}^{+}}$ stands for the Moore–Penrose pseudoinverse. Substituting $\mathbf{\alpha}$ back into the trial function (1), the equation

\[\hat{u}\left( \mathbf{p} \right)=\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{u}=\mathbf{\chi }\left( \mathbf{p} \right)\mathbf{u}\], (8)

is obtained, where $\mathbf{\chi}$ stands for the vector of shape functions.
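A minimal sketch of the WLS branch, assuming a DAM-style nine-node support on a regular $3\times 3$ layout and a weight parameter $\sigma=1$ (both assumptions for the example, as are all names below). It assembles the Gaussian weights from Eq. (5) and solves the normal equations $\mathbf{B}^{\text{T}}\mathbf{WB\alpha}=\mathbf{B}^{\text{T}}\mathbf{Wu}$:

```python
import math

def basis(p):
    # Second-order monomial basis (1, x, y, x^2, y^2, xy), m = 6.
    x, y = p
    return [1.0, x, y, x * x, y * y, x * y]

def solve(A, b):
    # Gaussian elimination with partial pivoting; fine for small dense systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical 3x3 support (n = 9) centred at p0, with spacing h.
h = 0.1
nodes = [(i * h, j * h) for i in (-1, 0, 1) for j in (-1, 0, 1)]
p0 = (0.0, 0.0)

# Gaussian weight, Eq. (5); sigma = 1 is an assumed value of the weight parameter.
p_min = min(math.hypot(x - p0[0], y - p0[1]) for (x, y) in nodes if (x, y) != p0)
sigma = 1.0
W = [math.exp(-(math.hypot(x - p0[0], y - p0[1]) / (sigma * p_min)) ** 2)
     for (x, y) in nodes]

B = [basis(q) for q in nodes]                            # 9 x 6, over-determined
u = [2 + q[0] - 3 * q[1] + q[0] * q[1] for q in nodes]   # field in the basis span

# Normal equations: B^T W B alpha = B^T W u  (W is diagonal).
m, n = 6, 9
BtWB = [[sum(W[j] * B[j][i] * B[j][k] for j in range(n)) for k in range(m)]
        for i in range(m)]
BtWu = [sum(W[j] * B[j][i] * u[j] for j in range(n)) for i in range(m)]
alpha = solve(BtWB, BtWu)

u_hat = sum(a * b for a, b in zip(alpha, basis(p0)))
assert abs(u_hat - 2.0) < 1e-9  # WLS reproduces fields in the basis span exactly
```

In practice the normal equations are used here only for brevity; as noted above, QR or SVD (i.e. the pseudoinverse of Eq. (7)) is preferable when conditioning matters.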
Now we can apply a partial differential operator, which is our goal, to the trial function,

\[L\hat{u}\left( \mathbf{p} \right)=L\mathbf{\chi }\left( \mathbf{p} \right)\mathbf{u}\], (9)

where $L$ stands for a general differential operator. In this paper we deal with the Navier-Stokes equations, and therefore only the shape functions for the Laplace operator and the first derivatives are needed; these are pre-computed and stored:

\[{{\mathbf{\chi }}^{\partial x}}\left( \mathbf{p} \right)=\frac{\partial }{\partial x}\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\], (10)

\[{{\mathbf{\chi }}^{\partial y}}\left( \mathbf{p} \right)=\frac{\partial }{\partial y}\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\], (11)

\[{{\mathbf{\chi }}^{{{\nabla }^{2}}}}\left( \mathbf{p} \right)={{\nabla }^{2}}\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}\]. (12)

Although the selection of the basis functions $\mathbf{b}$ is in principle free, several researchers follow the results of Franke's analysis [18] and use Hardy's multiquadrics; in this work, however, the monomials up to second order $\left( 1,\,{{p}_{x}},\,{{p}_{y}},\,p_{x}^{2},\,p_{y}^{2},\,{{p}_{x}}{{p}_{y}} \right)$ are used, based on the results presented in [19]. The presented formulation is convenient for implementation, since the most complex operations, i.e. finding the support nodes and building the shape functions, are performed only when the nodal topology changes. In the main simulation, the pre-computed shape functions are simply multiplied with the vector of field values in the support to evaluate the desired operator; refer to Equation (21) for an example.
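The derivative shape functions can be made concrete with a short sketch that applies $\partial/\partial x$ to the monomial basis, as in Eq. (10), and evaluates the derivative of the trial function. It uses the collocation variant ($n=m$) for brevity, and the node layout and test field are hypothetical:

```python
def basis(p):
    # Second-order monomial basis (1, x, y, x^2, y^2, xy), m = 6.
    x, y = p
    return [1.0, x, y, x * x, y * y, x * y]

def dbasis_dx(p):
    # d/dx of each monomial in the basis above.
    x, y = p
    return [0.0, 1.0, 0.0, 2 * x, 0.0, y]

def solve(A, b):
    # Gaussian elimination with partial pivoting; fine for small dense systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical six-node support (n = m = 6).
nodes = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (-0.1, 0.0), (0.0, -0.1), (0.1, 0.1)]
f = lambda q: q[0] ** 2 + 2 * q[0] * q[1] - q[1]   # quadratic test field
fx = lambda q: 2 * q[0] + 2 * q[1]                 # its exact x-derivative

B = [basis(q) for q in nodes]
alpha = solve(B, [f(q) for q in nodes])            # collocation coefficients

p = (0.05, -0.03)
dudx = sum(a * d for a, d in zip(alpha, dbasis_dx(p)))   # L u-hat with L = d/dx
assert abs(dudx - fx(p)) < 1e-10  # exact for fields in the basis span
```

The same pattern gives $\partial/\partial y$ and $\nabla^2$ by differentiating the basis accordingly; in the actual method these rows are pre-computed as the shape functions $\mathbf{\chi}$ and reused.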
The presented approach is even easier to handle than the FDM; however, despite its simplicity, it offers many possibilities for treating challenging cases, e.g. nodal adaptivity to address regions with sharp discontinuities, or p-adaptivity to treat local anomalies in the physical field. Furthermore, the trade-off between stability, computational complexity and accuracy can be regulated simply by changing the number of support nodes. All these features can be controlled on the fly during the simulation by recomputing the shape functions with a different setup. However, such a re-setup is expensive, since $\mathbf{b}{{\left( \mathbf{p} \right)}^{\text{T}}}{{\left( {{\mathbf{W}}^{0.5}}\left( \mathbf{p} \right)\mathbf{B} \right)}^{+}}{{\mathbf{W}}^{0.5}}$ has to be re-evaluated, with an asymptotic complexity of $O\left( {{N}_{D}}n{{m}^{2}} \right)$, where ${{N}_{D}}$ stands for the total number of discretization nodes. In addition, the determination of the support domain nodes also takes time: for example, if a kD-tree [20] data structure is used, the tree is first built in $O\left( {{N}_{D}}\log {{N}_{D}} \right)$, and collecting the $n$ support nodes for all nodes then takes an additional $O\left( {{N}_{D}}\left( \log {{N}_{D}}+n \right) \right)$.
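The support-collection step can be sketched with a brute-force nearest-neighbour search; this costs $O(N_D)$ per query node, which is exactly what a kD-tree reduces to $O(\log N_D + n)$. The scattered node set and the support size $n=9$ below are assumptions for the example:

```python
import math
import random

def nearest(nodes, p0, n):
    # Brute-force n-nearest-neighbour query: sort all N_D nodes by distance
    # to p0 and keep the n closest. O(N_D log N_D) per query as written.
    return sorted(nodes, key=lambda q: math.hypot(q[0] - p0[0], q[1] - p0[1]))[:n]

random.seed(0)
nodes = [(random.random(), random.random()) for _ in range(500)]  # scattered nodes
support = nearest(nodes, (0.5, 0.5), 9)   # n = 9 support nodes (assumed size)

# Sanity check: every selected node is at least as close as every rejected one.
d = lambda q: math.hypot(q[0] - 0.5, q[1] - 0.5)
assert max(d(q) for q in support) <= min(d(q) for q in nodes if q not in support)
```

For the node counts typical of real simulations, a spatial structure such as a kD-tree replaces this linear scan, giving the build and query complexities quoted above.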