
Generalized minimal residual method

In mathematics, the generalized minimal residual method (GMRES) is an iterative method for the numerical solution of an indefinite nonsymmetric system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual. The Arnoldi iteration is used to find this vector.

The GMRES method was developed by Yousef Saad and Martin H. Schultz in 1986.[1] It is a generalization and improvement of the MINRES method due to Paige and Saunders in 1975.[2][3] The MINRES method requires that the matrix be symmetric, but has the advantage that it only requires handling of three vectors. GMRES is a special case of the DIIS method developed by Peter Pulay in 1980. DIIS is applicable to non-linear systems.

The method

Denote the Euclidean norm of any vector $v$ by $\|v\|$. Denote the (square) system of linear equations to be solved by

$$Ax = b.$$

The matrix $A$ is assumed to be invertible of size $m$-by-$m$. Furthermore, it is assumed that $b$ is normalized, i.e., that $\|b\| = 1$.

The n-th Krylov subspace for this problem is

$$K_n = K_n(A, r_0) = \operatorname{span}\{r_0, Ar_0, A^2 r_0, \ldots, A^{n-1} r_0\},$$

where $r_0 = b - Ax_0$ is the initial residual given an initial guess $x_0 \neq 0$. Clearly $r_0 = b$ if $x_0 = 0$.

GMRES approximates the exact solution of $Ax = b$ by the vector $x_n \in K_n$ that minimizes the Euclidean norm of the residual $r_n = b - Ax_n$.

The vectors $r_0, Ar_0, \ldots, A^{n-1} r_0$ might be close to linearly dependent, so instead of this basis, the Arnoldi iteration is used to find orthonormal vectors $q_1, q_2, \ldots, q_n$ which form a basis for $K_n$. In particular, $q_1 = \|r_0\|_2^{-1} r_0$.
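
The following sketch (illustrative only; the matrix and starting vector are arbitrary stand-ins) shows why: the raw Krylov basis becomes severely ill-conditioned after only a few powers, while an orthonormal basis of the same subspace does not.

m = 100;
A = 2 * eye(m) + randn(m) / sqrt(m);   % arbitrary well-conditioned test matrix
r0 = randn(m, 1);
K = r0;
for j = 1:9
  K = [K, A * K(:, end)];              % columns r0, A*r0, ..., A^9*r0
end
cond(K)                                % very large: the columns are nearly dependent
[Q, ~] = qr(K, 0);                     % an orthonormal basis of the same subspace
cond(Q)                                % exactly 1 by construction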

Therefore, the vector $x_n \in K_n$ can be written as $x_n = x_0 + Q_n y_n$ with $y_n \in \mathbb{R}^n$, where $Q_n$ is the $m$-by-$n$ matrix formed by $q_1, \ldots, q_n$.

The Arnoldi process also produces an $(n+1)$-by-$n$ upper Hessenberg matrix $\tilde{H}_n$ with

$$AQ_n = Q_{n+1} \tilde{H}_n.$$

For symmetric matrices, the Arnoldi process actually produces a symmetric tridiagonal matrix, resulting in the MINRES method.
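
A quick numerical check of this, as a sketch with an arbitrary random symmetric matrix: a few Arnoldi steps leave only the tridiagonal part of the Hessenberg matrix nonzero, up to rounding.

m = 50;
S = randn(m); A = S + S';              % arbitrary symmetric test matrix
q = randn(m, 1); Q = q / norm(q);
H = zeros(7, 6);
for k = 1:6                            % six Arnoldi steps
  v = A * Q(:, k);
  for i = 1:k
    H(i, k) = Q(:, i)' * v;
    v = v - H(i, k) * Q(:, i);
  end
  H(k + 1, k) = norm(v);
  Q(:, k + 1) = v / H(k + 1, k);
end
max(max(abs(triu(H(1:6, 1:6), 2))))    % ~1e-15: H is (numerically) tridiagonal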

Because the columns of $Q_n$ are orthonormal, we have

$$\|r_n\| = \|b - Ax_n\| = \|b - A(x_0 + Q_n y_n)\| = \|r_0 - AQ_n y_n\| = \|\beta q_1 - AQ_n y_n\| = \|\beta q_1 - Q_{n+1} \tilde{H}_n y_n\| = \|Q_{n+1}(\beta e_1 - \tilde{H}_n y_n)\| = \|\beta e_1 - \tilde{H}_n y_n\|,$$

where

$$e_1 = (1, 0, 0, \ldots, 0)^T$$

is the first vector in the standard basis of $\mathbb{R}^{n+1}$, and

$$\beta = \|r_0\|,$$

with $r_0$ being the residual of the initial trial vector $x_0$ (usually zero). Hence, $x_n$ can be found by minimizing the Euclidean norm of the residual

$$r_n = \tilde{H}_n y_n - \beta e_1.$$

This is a linear least squares problem of size n.
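
In MATLAB / GNU Octave this small problem can be solved directly with the backslash operator; a minimal sketch, with a stand-in Hessenberg matrix and a stand-in $\beta$:

n = 5;
Htilde = triu(randn(n + 1, n), -1);    % stand-in (n+1)-by-n upper Hessenberg matrix
beta = 1.0;                            % stand-in residual norm
e1 = [1; zeros(n, 1)];
y = Htilde \ (beta * e1);              % least-squares solution: minimizes ||Htilde*y - beta*e1||
norm(Htilde * y - beta * e1)           % the minimal residual norm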

This yields the GMRES method. On the $n$-th iteration:

  1. calculate $q_n$ with the Arnoldi method;
  2. find the $y_n$ which minimizes $\|r_n\|$;
  3. compute $x_n = x_0 + Q_n y_n$;
  4. repeat if the residual is not yet small enough.

At every iteration, a matrix-vector product $Aq_n$ must be computed. This costs about $2m^2$ floating-point operations for general dense matrices of size $m$, but the cost can decrease to $O(m)$ for sparse matrices. In addition to the matrix-vector product, $O(nm)$ floating-point operations must be computed at the $n$-th iteration.
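
The difference is easy to observe; the following rough timing sketch uses an arbitrary tridiagonal matrix as an example of a sparse matrix:

m = 4000;
e = ones(m, 1);
As = spdiags([-e, 2*e, -e], -1:1, m, m);   % sparse: about 3m nonzeros
Ad = full(As);                             % the same matrix stored densely
v = randn(m, 1);
tic; w = As * v; toc                       % O(m) work
tic; w = Ad * v; toc                       % about 2*m^2 flops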

Convergence

The $n$-th iterate minimizes the residual in the Krylov subspace $K_n$. Since every subspace is contained in the next subspace, the residual does not increase. After $m$ iterations, where $m$ is the size of the matrix $A$, the Krylov space $K_m$ is the whole of $\mathbb{R}^m$ and hence the GMRES method arrives at the exact solution. However, the idea is that after a small number of iterations (relative to $m$), the vector $x_n$ is already a good approximation to the exact solution.

This does not happen in general. Indeed, a theorem of Greenbaum, Pták and Strakoš states that for every nonincreasing sequence $a_1, \ldots, a_{m-1}, a_m = 0$, one can find a matrix $A$ such that $\|r_n\| = a_n$ for all $n$, where $r_n$ is the residual defined above. In particular, it is possible to find a matrix for which the residual stays constant for $m - 1$ iterations and only drops to zero at the last iteration.

In practice, though, GMRES often performs well. This can be proven in specific situations. If the symmetric part of $A$, that is $(A^T + A)/2$, is positive definite, then

$$\|r_n\| \leq \left(1 - \frac{\lambda_{\min}^2\left(\tfrac{1}{2}(A^T + A)\right)}{\lambda_{\max}(A^T A)}\right)^{n/2} \|r_0\|,$$

where $\lambda_{\min}(M)$ and $\lambda_{\max}(M)$ denote the smallest and largest eigenvalue of the matrix $M$, respectively.[4]
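
This bound can be checked numerically with the built-in gmres solver; in the sketch below the test matrix is an arbitrary random matrix whose symmetric part is positive definite by construction:

m = 200;
A = 2 * eye(m) + randn(m) / (2 * sqrt(m));   % symmetric part positive definite
b = randn(m, 1); b = b / norm(b);
[x, flag, relres, iter, resvec] = gmres(A, b, [], 1e-10, m);
lmin = min(eig((A + A') / 2));
lmax = max(eig(A' * A));
rate = sqrt(1 - lmin^2 / lmax);
n = (0:numel(resvec) - 1)';
all(resvec <= rate .^ n * resvec(1) + 1e-12) % 1: every residual obeys the bound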

If A is symmetric and positive definite, then we even have

$$\|r_n\| \leq \left(\frac{\kappa_2(A)^2 - 1}{\kappa_2(A)^2}\right)^{n/2} \|r_0\|,$$

where $\kappa_2(A)$ denotes the condition number of $A$ in the Euclidean norm.

In the general case, where A is not positive definite, we have

$$\frac{\|r_n\|}{\|b\|} \leq \inf_{p \in P_n} \|p(A)\| \leq \kappa_2(V) \inf_{p \in P_n} \max_{\lambda \in \sigma(A)} |p(\lambda)|,$$

where $P_n$ denotes the set of polynomials of degree at most $n$ with $p(0) = 1$, $V$ is the matrix appearing in the spectral decomposition of $A$, and $\sigma(A)$ is the spectrum of $A$. Roughly speaking, this says that fast convergence occurs when the eigenvalues of $A$ are clustered away from the origin and $A$ is not too far from normality.[5]

All these inequalities bound only the residuals, not the actual error, that is, the distance between the current iterate $x_n$ and the exact solution.

Extensions of the method

Like other iterative methods, GMRES is usually combined with a preconditioning method in order to speed up convergence.
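
For example, the built-in gmres solver accepts a preconditioner directly. The sketch below is one common pattern, using an arbitrary sparse 2-D Laplacian as the test matrix and an incomplete LU factorization as the preconditioner:

n1 = 30;
e = ones(n1, 1);
T = spdiags([-e, 2*e, -e], -1:1, n1, n1);
A = kron(speye(n1), T) + kron(T, speye(n1));  % sparse 2-D Laplacian, 900-by-900
b = ones(size(A, 1), 1);
[L, U] = ilu(A);                              % incomplete LU preconditioner
tol = 1e-8; maxit = 300;
[x0, fl0, rr0, it0] = gmres(A, b, [], tol, maxit);        % no preconditioner
[x1, fl1, rr1, it1] = gmres(A, b, [], tol, maxit, L, U);  % preconditioned
[it0(2), it1(2)]                              % the preconditioned run needs fewer iterations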

The cost of the iterations grows as $O(n^2)$, where $n$ is the iteration number. Therefore, the method is sometimes restarted after a number, say $k$, of iterations, with $x_k$ as initial guess. The resulting method is called GMRES($k$) or Restarted GMRES. For non-positive definite matrices, this method may suffer from stagnation in convergence as the restarted subspace is often close to the earlier subspace.
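
With the built-in solver, the restart length is simply the third argument. A short sketch, reusing $A$ and $b$ from the preconditioning sketch above:

k = 20;                                  % restart length
tol = 1e-8; maxit = 50;                  % maxit counts outer (restart) cycles here
[x, flag, relres, iter] = gmres(A, b, k, tol, maxit);
iter                                     % [outer, inner] iteration counts of GMRES(20)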

The shortcomings of GMRES and restarted GMRES are addressed by the recycling of Krylov subspaces in GCRO-type methods such as GCROT and GCRODR.[6] Recycling of Krylov subspaces in GMRES can also speed up convergence when sequences of linear systems need to be solved.[7]

Comparison with other solvers

The Arnoldi iteration reduces to the Lanczos iteration for symmetric matrices. The corresponding Krylov subspace method is the minimal residual method (MinRes) of Paige and Saunders. Unlike the unsymmetric case, the MinRes method is given by a three-term recurrence relation. It can be shown that there is no Krylov subspace method for general matrices that is given by a short recurrence relation and yet minimizes the norms of the residuals, as GMRES does.

Another class of methods builds on the unsymmetric Lanczos iteration, in particular the BiCG method. These use a three-term recurrence relation, but they do not attain the minimum residual, and hence the residual does not decrease monotonically for these methods. Convergence is not even guaranteed.

The third class is formed by methods like CGS and BiCGSTAB. These also work with a three-term recurrence relation (hence, without optimality) and they can even terminate prematurely without achieving convergence. The idea behind these methods is to choose the generating polynomials of the iteration sequence suitably.

None of these three classes is the best for all matrices; there are always examples in which one class outperforms the others. Therefore, multiple solvers are tried in practice to see which one is the best for a given problem.
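
Such a comparison takes only a few lines in MATLAB / GNU Octave, since GMRES, BiCGSTAB and CGS are all built in; the sparse test matrix below is an arbitrary example:

m = 500;
A = speye(m) + sprandn(m, m, 0.01) / 4;  % arbitrary nonsymmetric sparse test matrix
b = randn(m, 1);
tol = 1e-8; maxit = 400;
[x1, f1] = gmres(A, b, [], tol, min(maxit, m));
[x2, f2] = bicgstab(A, b, tol, maxit);
[x3, f3] = cgs(A, b, tol, maxit);
[f1, f2, f3]                             % a flag of 0 means that solver converged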

Solving the least squares problem

One part of the GMRES method is to find the vector $y_n$ which minimizes

$$\|\tilde{H}_n y_n - \beta e_1\|.$$

Note that $\tilde{H}_n$ is an $(n+1)$-by-$n$ matrix, hence it gives an over-constrained linear system of $n+1$ equations for $n$ unknowns.

The minimum can be computed using a QR decomposition: find an $(n+1)$-by-$(n+1)$ orthogonal matrix $\Omega_n$ and an $(n+1)$-by-$n$ upper triangular matrix $\tilde{R}_n$ such that

$$\Omega_n \tilde{H}_n = \tilde{R}_n.$$

The triangular matrix has one more row than it has columns, so its bottom row consists of zeros. Hence, it can be decomposed as

$$\tilde{R}_n = \begin{bmatrix} R_n \\ 0 \end{bmatrix},$$

where $R_n$ is an $n$-by-$n$ (thus square) triangular matrix.

The QR decomposition can be updated cheaply from one iteration to the next, because the Hessenberg matrices differ only by a row of zeros and a column:

$$\tilde{H}_{n+1} = \begin{bmatrix} \tilde{H}_n & h_{n+1} \\ 0 & h_{n+2,n+1} \end{bmatrix},$$

where $h_{n+1} = (h_{1,n+1}, \ldots, h_{n+1,n+1})^T$. This implies that premultiplying the Hessenberg matrix with $\Omega_n$, augmented with an extra row and column whose only nonzero entry is a one on the diagonal, yields an almost triangular matrix:

$$\begin{bmatrix} \Omega_n & 0 \\ 0 & 1 \end{bmatrix} \tilde{H}_{n+1} = \begin{bmatrix} R_n & r_{n+1} \\ 0 & \rho \\ 0 & \sigma \end{bmatrix}$$

This would be triangular if $\sigma$ were zero. To remedy this, one needs the Givens rotation

$$G_n = \begin{bmatrix} I_n & 0 & 0 \\ 0 & c_n & s_n \\ 0 & -s_n & c_n \end{bmatrix}$$

where

$$c_n = \frac{\rho}{\sqrt{\rho^2 + \sigma^2}} \quad \text{and} \quad s_n = \frac{\sigma}{\sqrt{\rho^2 + \sigma^2}}.$$

With this Givens rotation, we form

$$\Omega_{n+1} = G_n \begin{bmatrix} \Omega_n & 0 \\ 0 & 1 \end{bmatrix}.$$

Indeed,

$$\Omega_{n+1} \tilde{H}_{n+1} = \begin{bmatrix} R_n & r_{n+1} \\ 0 & r_{n+1,n+1} \\ 0 & 0 \end{bmatrix} \quad \text{with} \quad r_{n+1,n+1} = \sqrt{\rho^2 + \sigma^2}$$

is a triangular matrix.
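
The 2-by-2 core of this rotation is available as the built-in planerot function; the sketch below uses it to check the formulas for $c_n$ and $s_n$ with arbitrary stand-in values of $\rho$ and $\sigma$:

rho = 3; sigma = 4;                % arbitrary stand-in values
[G, y] = planerot([rho; sigma]);   % G * [rho; sigma] = [sqrt(rho^2+sigma^2); 0]
[G(1,1), G(1,2)]                   % equals [c_n, s_n] = [0.6, 0.8]
y                                  % [5; 0]: the subdiagonal entry is eliminated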

Given the QR decomposition, the minimization problem is easily solved by noting that

$$\|\tilde{H}_n y_n - \beta e_1\| = \|\Omega_n(\tilde{H}_n y_n - \beta e_1)\| = \|\tilde{R}_n y_n - \beta \Omega_n e_1\|.$$

Denoting the vector $\beta \Omega_n e_1$ by

$$\tilde{g}_n = \begin{bmatrix} g_n \\ \gamma_n \end{bmatrix}$$

with $g_n \in \mathbb{R}^n$ and $\gamma_n \in \mathbb{R}$, this is

$$\|\tilde{H}_n y_n - \beta e_1\| = \|\tilde{R}_n y_n - \beta \Omega_n e_1\| = \left\| \begin{bmatrix} R_n \\ 0 \end{bmatrix} y_n - \begin{bmatrix} g_n \\ \gamma_n \end{bmatrix} \right\|.$$

The vector $y_n$ that minimizes this expression is given by

$$y_n = R_n^{-1} g_n.$$

Again, the vectors $g_n$ are easy to update.[8]
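
The update amounts to applying the newest Givens rotation to the last entry of $\tilde{g}_n$, as in this tiny sketch with stand-in values (compare the beta updates in the example code below):

c = 0.6; s = 0.8;        % stand-in Givens coefficients c_n, s_n
gamma = 2.0;             % stand-in bottom entry gamma_n of gtilde_n
g_new = c * gamma        % appended to g_n to form g_{n+1}
gamma_new = -s * gamma   % the new bottom entry gamma_{n+1}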

Example code

Regular GMRES (MATLAB / GNU Octave)

function [x, e] = gmres(A, b, x, max_iterations, threshold)
  n = length(A);
  m = max_iterations;

  % use x as the initial vector
  r = b - A * x;

  b_norm = norm(b);
  error = norm(r) / b_norm;

  % initialize the 1D vectors
  sn = zeros(m, 1);
  cs = zeros(m, 1);
  % e1 = zeros(n, 1);
  e1 = zeros(m + 1, 1);
  e1(1) = 1;
  e = [error];
  r_norm = norm(r);
  Q(:, 1) = r / r_norm;
  % Note: this is not the beta scalar in section "The method" above,
  % but the beta scalar multiplied by e1
  beta = r_norm * e1;

  for k = 1:m
    % run arnoldi
    [H(1:k+1, k), Q(:, k+1)] = arnoldi(A, Q, k);

    % eliminate the last element in the k-th column of H and update the rotation matrix
    [H(1:k+1, k), cs(k), sn(k)] = apply_givens_rotation(H(1:k+1, k), cs, sn, k);

    % update the residual vector
    beta(k + 1) = -sn(k) * beta(k);
    beta(k)     = cs(k) * beta(k);
    error       = abs(beta(k + 1)) / b_norm;

    % save the error
    e = [e; error];

    if (error <= threshold)
      break;
    end
  end
  % if threshold is not reached, k = m at this point (and not m+1)

  % calculate the result
  y = H(1:k, 1:k) \ beta(1:k);
  x = x + Q(:, 1:k) * y;
end

%----------------------------------------------------%
%                  Arnoldi Function                   %
%----------------------------------------------------%
function [h, q] = arnoldi(A, Q, k)
  q = A * Q(:, k);   % Krylov vector
  for i = 1:k        % Modified Gram-Schmidt, keeping the Hessenberg matrix
    h(i) = q' * Q(:, i);
    q = q - h(i) * Q(:, i);
  end
  h(k + 1) = norm(q);
  q = q / h(k + 1);
end

%---------------------------------------------------------------------%
%                  Applying Givens Rotation to H col                   %
%---------------------------------------------------------------------%
function [h, cs_k, sn_k] = apply_givens_rotation(h, cs, sn, k)
  % apply the previous rotations to the new column
  for i = 1:k-1
    temp     =  cs(i) * h(i) + sn(i) * h(i + 1);
    h(i + 1) = -sn(i) * h(i) + cs(i) * h(i + 1);
    h(i)     = temp;
  end

  % update the next sin cos values for rotation
  [cs_k, sn_k] = givens_rotation(h(k), h(k + 1));

  % eliminate H(k + 1, k)
  h(k)     = cs_k * h(k) + sn_k * h(k + 1);
  h(k + 1) = 0.0;
end

%%----Calculate the Givens rotation matrix----%%
function [cs, sn] = givens_rotation(v1, v2)
%  if (v1 == 0)
%    cs = 0;
%    sn = 1;
%  else
    t = sqrt(v1^2 + v2^2);
%    cs = abs(v1) / t;
%    sn = cs * v2 / v1;
    cs = v1 / t;   % see http://www.netlib.org/eispack/comqr.f
    sn = v2 / t;
%  end
end

See also

  • Biconjugate gradient method

References

  1. ^ Y. Saad and M.H. Schultz, "GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems", SIAM J. Sci. Stat. Comput., 7:856–869, 1986. doi:10.1137/0907058.
  2. ^ Paige and Saunders, "Solution of Sparse Indefinite Systems of Linear Equations", SIAM J. Numer. Anal., vol 12, page 617 (1975) https://doi.org/10.1137/0712047
  3. ^ N. Nifa. "Doctoral Thesis" (PDF).
  4. ^ Eisenstat, Elman & Schultz, Thm 3.3. NB all results for GCR also hold for GMRES, cf. Saad & Schultz
  5. ^ Trefethen & Bau, Thm 35.2
  6. ^ Amritkar, Amit; de Sturler, Eric; Świrydowicz, Katarzyna; Tafti, Danesh; Ahuja, Kapil (2015). "Recycling Krylov subspaces for CFD applications and a new hybrid recycling solver". Journal of Computational Physics. 303: 222. arXiv:1501.03358. Bibcode:2015JCoPh.303..222A. doi:10.1016/j.jcp.2015.09.040.
  7. ^ Gaul, André (2014). Recycling Krylov subspace methods for sequences of linear systems (Ph.D.). TU Berlin. doi:10.14279/depositonce-4147.
  8. ^ Stoer and Bulirsch, §8.7.2

Notes

  • A. Meister, Numerik linearer Gleichungssysteme, 2nd edition, Vieweg 2005, ISBN 978-3-528-13135-7.
  • Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd edition, Society for Industrial and Applied Mathematics, 2003. ISBN 978-0-89871-534-7.
  • Y. Saad and M.H. Schultz, "GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems", SIAM J. Sci. Stat. Comput., 7:856–869, 1986. doi:10.1137/0907058.
  • S. C. Eisenstat, H.C. Elman and M.H. Schultz, "Variational iterative methods for nonsymmetric systems of linear equations", SIAM Journal on Numerical Analysis, 20(2), 345–357, 1983.
  • J. Stoer and R. Bulirsch, Introduction to numerical analysis, 3rd edition, Springer, New York, 2002. ISBN 978-0-387-95452-3.
  • Lloyd N. Trefethen and David Bau, III, Numerical Linear Algebra, Society for Industrial and Applied Mathematics, 1997. ISBN 978-0-89871-361-9.
  • Dongarra et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition, SIAM, Philadelphia, 1994.


Source: https://en.wikipedia.org/wiki/Generalized_minimal_residual_method