
When is it easy to invert a sparse matrix?

Computational Science Asked on February 7, 2021

(Crossposted on cstheory.SE)

When is it easy to invert a sparse matrix? Specifically, I’m wondering about the cases in which matrix inversion has similar cost to sparse matrix multiplication, hence much lower cost than full matrix inversion.

If the pattern of non-zeros corresponds to a bounded tree-width graph, exact inversion is linear in the number of non-zeros.

For unbounded tree-width but diagonally dominant matrices, the Gauss-Seidel and Jacobi iterations converge exponentially fast. For the larger class of "walk-summable" matrices (which restricts the magnitude of the off-diagonal entries), Gaussian belief propagation converges exponentially fast (but gives a biased estimate of the inverse).
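To make the diagonal-dominance case concrete, here is a minimal Jacobi sketch in Python; the matrix, tolerance, and iteration cap are only illustrative, not part of any particular result.

    # Minimal sketch: Jacobi iteration on a strictly diagonally dominant
    # sparse matrix (illustrative matrix, tolerance, and iteration cap).
    import numpy as np
    import scipy.sparse as sp

    n = 1000
    # Strictly diagonally dominant tridiagonal matrix: |4| > |-1| + |-1| in every row.
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    D = A.diagonal()          # diagonal part
    R = A - sp.diags(D)       # off-diagonal remainder
    x = np.zeros(n)
    for k in range(1, 201):
        x_new = (b - R @ x) / D           # x_{k+1} = D^{-1} (b - R x_k)
        if np.linalg.norm(x_new - x, np.inf) < 1e-10:
            x = x_new
            break
        x = x_new

    print(f"{k} iterations, residual = {np.linalg.norm(A @ x - b):.2e}")

Each iteration costs one sparse matrix-vector product, and for a strictly diagonally dominant matrix the error shrinks by a constant factor per step, which is the exponential convergence referred to above.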

What are other interesting conditions for easy invertibility besides tree-width/diagonal dominance?

2 Answers

One such case is when the sparse matrix is banded. For example, tridiagonal linear systems can be solved in linear time using the Thomas algorithm. More generally, for any fixed bandwidth there is a linear-time solver, although the hidden constant grows as the bandwidth grows.
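For concreteness, here is a minimal sketch of the Thomas algorithm in Python; the helper name and the small test problem are illustrative, and no pivoting is assumed (e.g. the matrix is diagonally dominant).

    # Minimal sketch of the Thomas algorithm for a tridiagonal system Ax = b.
    # Assumes no pivoting is needed; names and test problem are illustrative.
    import numpy as np

    def thomas(lower, diag, upper, b):
        """Solve a tridiagonal system in O(n) time.
        lower: sub-diagonal (length n-1), diag: main diagonal (length n),
        upper: super-diagonal (length n-1), b: right-hand side (length n)."""
        n = len(diag)
        c = upper.astype(float)
        d = diag.astype(float)
        rhs = b.astype(float)
        # Forward elimination: eliminate the sub-diagonal row by row.
        for i in range(1, n):
            w = lower[i - 1] / d[i - 1]
            d[i] -= w * c[i - 1]
            rhs[i] -= w * rhs[i - 1]
        # Back substitution.
        x = np.empty(n)
        x[-1] = rhs[-1] / d[-1]
        for i in range(n - 2, -1, -1):
            x[i] = (rhs[i] - c[i] * x[i + 1]) / d[i]
        return x

    # Small test with the -1, 2, -1 tridiagonal matrix.
    n = 5
    print(thomas(-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1), np.ones(n)))

The cost is a single forward and backward sweep over the diagonals, which is where the linear-time claim comes from.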

The literature on the topic is active and, as far as I can see, there are many characterizations, some of which you have already found. Maybe you should add reference-request as a tag.

Answered by Abdullah Ali Sivas on February 7, 2021

There are many ways to solve systems with sparse matrices, so there is no way to answer this definitively and exhaust all possibilities. I'll add Krylov methods as one answer, though: many Krylov methods converge quickly under the right conditions.

The conditions under which Krylov methods perform well have been asked about here before. Every Krylov solver is a little different, but I'll use GMRES as an example.

For GMRES, if your input matrix has the following properties:

  1. Normal (or approximately normal)
  2. Eigenvalues not too close to 0
  3. Eigenvalue clustering (easily identified "groups" of eigenvalues that lie close together)

then GMRES will likely converge very rapidly, regardless of the right-hand-side vector. Note that conditions (1) through (3) say nothing about the sparsity of the matrix at all, but since the sparse matrix-vector product is cheap, GMRES is usually well suited to the sparse case.
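As a minimal sketch of what this looks like in practice, the following uses SciPy's gmres on a symmetric (hence normal) sparse matrix whose eigenvalues cluster tightly around 1, well away from 0; the size, density, and perturbation scale are made up purely for illustration.

    # Minimal sketch: GMRES on a sparse matrix built to satisfy (1)-(3).
    # The test matrix and its parameters are illustrative.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import gmres

    rng = np.random.default_rng(0)
    n = 2000

    # Identity plus a small symmetric sparse perturbation:
    # symmetric (hence normal), eigenvalues clustered near 1, far from 0.
    P = sp.random(n, n, density=1e-3, random_state=rng, format="csr")
    A = sp.eye(n, format="csr") + 1e-3 * (P + P.T)
    b = rng.standard_normal(n)

    residuals = []
    x, info = gmres(A, b, callback=lambda r: residuals.append(r),
                    callback_type="pr_norm")   # callback counts inner iterations

    print(f"info = {info}, iterations = {len(residuals)}")
    print("relative residual =", np.linalg.norm(A @ x - b) / np.linalg.norm(b))

Because the spectrum is a single tight cluster away from the origin, GMRES reaches the default tolerance in a handful of iterations, each dominated by one cheap sparse matrix-vector product.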

Answered by Reid.Atcheson on February 7, 2021
