Weighted Least Squares (WLS)
 


One of the most important building blocks of meshless methods is the Moving Least Squares (MLS) approximation, which is implemented in the EngineMLS class.

Figure 1: Example of a 1D MLS approximation.

In general, the approximation function can be written as
\[\hat u(\mathbf{p}) = \sum_{i=1}^m \alpha_i b_i(\mathbf{p}) = \mathbf{b}^{\mathrm{T}} \boldsymbol{\alpha},\]
where $\hat u$, $b_i$ and $\alpha_i$ stand for the approximation function, the basis functions and the coefficients, respectively. We minimize the sum of squared residuals, i.e., the sum of squares of the differences between the approximation and the target function. In addition, a weight function can be included that controls the significance of individual nodes, i.e.,
\[r^2 = \sum_{j=1}^n W(\mathbf{p}_j) \left( u(\mathbf{p}_j) - \hat u(\mathbf{p}_j) \right)^2 = \left( \mathbf{B} \boldsymbol{\alpha} - \mathbf{u} \right)^{\mathrm{T}} \mathbf{W} \left( \mathbf{B} \boldsymbol{\alpha} - \mathbf{u} \right),\]
where $\mathbf{B}$ is the $n \times m$ matrix with entries $B_{ji} = b_i(\mathbf{p}_j)$, $\mathbf{W}$ is the diagonal matrix of weights $W(\mathbf{p}_j)$, and $\mathbf{u}$ is the vector of target values $u(\mathbf{p}_j)$.
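
Setting the gradient of $r^2$ with respect to $\boldsymbol{\alpha}$ to zero yields the normal equations $\mathbf{B}^{\mathrm{T}} \mathbf{W} \mathbf{B} \boldsymbol{\alpha} = \mathbf{B}^{\mathrm{T}} \mathbf{W} \mathbf{u}$. The snippet below is a minimal standalone sketch of this computation in C++ with Eigen; the 1D sample data, the monomial basis and the Gaussian weight are illustrative assumptions, and it does not show the actual interface of the EngineMLS class.

#include <Eigen/Dense>
#include <cmath>
#include <iostream>
#include <vector>

// Weighted least squares fit of a quadratic monomial basis {1, x, x^2}
// to scattered 1D data, with a Gaussian weight centred at the evaluation
// point. Illustrative sketch only (hypothetical data and parameters).
int main() {
    using namespace Eigen;

    std::vector<double> p = {0.0, 0.25, 0.5, 0.75, 1.0};    // support nodes p_j
    std::vector<double> u = {1.0, 1.10, 1.35, 1.55, 1.90};  // target values u(p_j)
    const int n = static_cast<int>(p.size());  // number of support nodes
    const int m = 3;                           // number of basis functions

    const double p0 = 0.5;     // point where the approximation is evaluated
    const double sigma = 0.5;  // weight function shape parameter

    MatrixXd B(n, m);          // B(j, i) = b_i(p_j)
    VectorXd w(n), rhs(n);
    for (int j = 0; j < n; ++j) {
        for (int i = 0; i < m; ++i) B(j, i) = std::pow(p[j], i);
        w(j) = std::exp(-std::pow((p[j] - p0) / sigma, 2));  // W(p_j)
        rhs(j) = u[j];
    }

    // Normal equations: (B^T W B) alpha = B^T W u.
    MatrixXd BtW = B.transpose() * w.asDiagonal();
    MatrixXd A = BtW * B;
    VectorXd alpha = A.ldlt().solve(BtW * rhs);

    // Approximation at p0: u_hat(p0) = b(p0)^T alpha.
    VectorXd b0(m);
    for (int i = 0; i < m; ++i) b0(i) = std::pow(p0, i);
    std::cout << "u_hat(" << p0 << ") = " << b0.dot(alpha) << std::endl;
    return 0;
}

A Cholesky-type solve of the normal equations is used above for brevity; a QR or SVD decomposition of $\sqrt{\mathbf{W}}\,\mathbf{B}$ is a more robust alternative when the basis is close to degenerate.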