Weighted Least Squares (WLS)
One of the most important building blocks of meshless methods is the Moving Least Squares approximation, which is implemented in the EngineMLS class. See the EngineMLS unit tests for examples: https://gitlab.com/e62Lab/e62numcodes/blob/master/test/mls_test.cpp
Basics
In general, the approximation function can be written as \hat{u}(\mathbf{p}) = \sum\limits_{i=1}^{m} \alpha_i b_i(\mathbf{p}) = \mathbf{b}^{\mathrm{T}} \boldsymbol{\alpha}, where the b_i are basis functions and the \alpha_i are the unknown coefficients. The coefficients are determined by solving the (generally over-determined) linear system \mathbf{B}\boldsymbol{\alpha} = \mathbf{u}; this can be done in several ways:
- Normal equations (fastest – least accurate) – using Cholesky decomposition
- QR decomposition of \mathbf{B} (requires rank(\mathbf{B}) = m, the number of basis functions)
- SVD decomposition of \mathbf{B} (more expensive – more reliable, no rank requirement)
In MM we use SVD.
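As a small illustration of the approximation above, here is a minimal sketch of evaluating \hat{u}(\mathbf{p}) = \mathbf{b}^{\mathrm{T}}\boldsymbol{\alpha} for a 1-D monomial basis {1, p, p^2}. It assumes the Eigen library; the coefficient values and names are ours for illustration and are not the EngineMLS API.

#include <Eigen/Dense>
#include <iostream>

int main() {
    // Hypothetical coefficients alpha for the basis {1, p, p^2}.
    Eigen::Vector3d alpha(1.0, 2.0, 0.5);
    double p = 1.5;                   // evaluation point
    Eigen::Vector3d b(1.0, p, p * p); // basis functions evaluated at p
    double u_hat = b.dot(alpha);      // u_hat(p) = b^T alpha
    std::cout << "u_hat(1.5) = " << u_hat << std::endl;  // 1 + 3 + 1.125 = 5.125
    return 0;
}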
Normal equations
We seek the minimum of the residual norm \left\| \mathbf{r} \right\|^2, where \mathbf{r} = \mathbf{u} - \mathbf{B}\boldsymbol{\alpha}. Setting the gradient with respect to \boldsymbol{\alpha} to zero yields the normal equations \mathbf{B}^{\mathrm{T}}\mathbf{B}\boldsymbol{\alpha} = \mathbf{B}^{\mathrm{T}}\mathbf{u}, which can be solved with Cholesky decomposition, since \mathbf{B}^{\mathrm{T}}\mathbf{B} is symmetric positive definite when \mathbf{B} has full rank.
The complexity of the Cholesky decomposition is \frac{n^3}{3}, and the complexity of forming the matrix product \mathbf{B}^{\mathrm{T}}\mathbf{B} is nm^2.
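A sketch of the normal-equations route, assuming Eigen (the function name is illustrative, not part of EngineMLS):

#include <Eigen/Dense>

// Solve B alpha = u in the least-squares sense via the normal equations.
// B is tall with full column rank, so B^T B is symmetric positive definite.
Eigen::VectorXd solveNormal(const Eigen::MatrixXd& B, const Eigen::VectorXd& u) {
    // Form B^T B, factor it with Cholesky (LLT), and back-substitute.
    return (B.transpose() * B).llt().solve(B.transpose() * u);
}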
QR decomposition
\mathbf{B} = \mathbf{Q}\mathbf{R} = \left[ \mathbf{Q}_1, \mathbf{Q}_2 \right] \begin{bmatrix} \mathbf{R}_1 \\ \mathbf{0} \end{bmatrix}, where \mathbf{Q} is orthogonal and \mathbf{R}_1 is upper triangular. The least-squares solution then follows from the triangular system \mathbf{R}_1 \boldsymbol{\alpha} = \mathbf{Q}_1^{\mathrm{T}} \mathbf{u}.
The complexity of the QR decomposition is \frac{2}{3}mn^2 + n^2 + \frac{1}{3}n - 2 = O(n^3).
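A corresponding sketch using Eigen's Householder QR (again, an illustrative name, not the EngineMLS API):

#include <Eigen/Dense>

// Solve B alpha = u in the least-squares sense via QR; requires rank(B) = m.
Eigen::VectorXd solveQR(const Eigen::MatrixXd& B, const Eigen::VectorXd& u) {
    // Internally applies Q^T to u and back-substitutes the triangular R1.
    return B.householderQr().solve(u);
}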
SVD decomposition
In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix. It has many useful applications in signal processing and statistics.
Formally, the singular value decomposition of an m × n real or complex matrix \mathbf{B} is a factorization of the form \mathbf{B} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{*}, where \mathbf{U} is an m × m real or complex unitary matrix, \boldsymbol{\Sigma} is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and \mathbf{V}^{*} is an n × n real or complex unitary matrix. The diagonal entries \Sigma_{i,i} of \boldsymbol{\Sigma} are known as the singular values of \mathbf{B}. The m columns of \mathbf{U} and the n columns of \mathbf{V} are called the left-singular vectors and right-singular vectors of \mathbf{B}, respectively.
The singular value decomposition and the eigendecomposition are closely related. Namely:
- The left-singular vectors of \mathbf{B} are eigenvectors of \mathbf{B}\mathbf{B}^{*}.
- The right-singular vectors of \mathbf{B} are eigenvectors of \mathbf{B}^{*}\mathbf{B}.
- The non-zero singular values of \mathbf{B} (found on the diagonal entries of \boldsymbol{\Sigma}) are the square roots of the non-zero eigenvalues of both \mathbf{B}^{*}\mathbf{B} and \mathbf{B}\mathbf{B}^{*}.
With SVD we can write \mathbf{B} as \mathbf{B} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\mathrm{T}}.
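This factorization makes the relations above a one-line check; for the right-singular vectors, for example, orthogonality of \mathbf{U} gives

\mathbf{B}^{\mathrm{T}}\mathbf{B} = \mathbf{V}\boldsymbol{\Sigma}^{\mathrm{T}}\mathbf{U}^{\mathrm{T}}\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\mathrm{T}} = \mathbf{V}\left(\boldsymbol{\Sigma}^{\mathrm{T}}\boldsymbol{\Sigma}\right)\mathbf{V}^{\mathrm{T}},

so the columns of \mathbf{V} are eigenvectors of \mathbf{B}^{\mathrm{T}}\mathbf{B} with eigenvalues \sigma_i^2.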
Again, we can either solve the system or compute the pseudoinverse as \mathbf{B}^{+} = \left( \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\mathrm{T}} \right)^{+} = \mathbf{V}\boldsymbol{\Sigma}^{+}\mathbf{U}^{\mathrm{T}}, where \boldsymbol{\Sigma}^{+} is trivial to compute: replace every non-zero diagonal entry by its reciprocal and transpose the resulting matrix. The stability gain lies exactly here: one can set a threshold below which a singular value is treated as 0, i.e., truncate all singular values below some tolerance, and thus stabilize the inverse.
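A minimal sketch of this truncated pseudoinverse solve, assuming Eigen (the function name and the tolerance value are ours for illustration, not the EngineMLS API):

#include <Eigen/Dense>

// Solve B alpha = u via a truncated-SVD pseudoinverse. Singular values
// below tol * sigma_max are treated as zero, stabilizing the inverse.
Eigen::VectorXd solveSVD(const Eigen::MatrixXd& B, const Eigen::VectorXd& u,
                         double tol = 1e-12) {
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(
        B, Eigen::ComputeThinU | Eigen::ComputeThinV);
    Eigen::ArrayXd s = svd.singularValues().array();
    double cutoff = tol * s(0);  // s(0) is the largest singular value
    // Reciprocal of each kept singular value, 0 for the truncated ones.
    Eigen::ArrayXd s_inv = (s > cutoff).select(1.0 / s, 0.0);
    // alpha = V * Sigma^+ * U^T * u
    return svd.matrixV() * s_inv.matrix().asDiagonal()
           * (svd.matrixU().transpose() * u);
}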
The complexity of the SVD decomposition is 2mn^2 + 2n^3 = O(n^3).