
How to create a good preconditioner for a system of linear equations arising from FEM applied to the time-harmonic Maxwell equation?

Computational Science: Asked by CuteCompute on June 28, 2021

I set out to solve the time-harmonic Maxwell equation numerically, discretized with FEM using Nedelec elements as basis and test functions. The equation reads:
$$ \nabla \times \nabla \times E \;-\; k_0^2 E = 0 \quad \text{in } \Omega $$
subject to the boundary conditions
$$ n \times E = 0 \quad \text{on } \Gamma_D $$
$$ n \times \nabla \times E = \alpha \, n \times n \times E + g \quad \text{on } \Gamma_N $$
on different parts of the boundary, with $k_0$ a real number and $\alpha$ and $g$ complex.
Writing $E = E_{\mathrm{real}} + i E_{\mathrm{imag}}$, we can split the electric field into real and imaginary parts; applying the FEM procedure then gives the following system:
$$
\begin{aligned}
\int \nabla \times v \cdot \nabla \times E_{\mathrm{real}} \;-\; k_0^2 \int v \cdot E_{\mathrm{real}} \;+\; \alpha \oint \nabla \times v \cdot \nabla \times E_{\mathrm{imag}} &= 0 \\
-\alpha \oint \nabla \times v \cdot \nabla \times E_{\mathrm{real}} \;+\; \int \nabla \times v \cdot \nabla \times E_{\mathrm{imag}} \;-\; k_0^2 \int v \cdot E_{\mathrm{imag}} &= \oint v \, g
\end{aligned}
$$
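In block form, this can be restated as (writing $A$ for the curl-curl/mass block, $C$ for the boundary coupling block, and $b$ for the load vector; these symbols are just shorthand for the integrals above):
$$
\begin{pmatrix} A & C \\ -C & A \end{pmatrix}
\begin{pmatrix} E_{\mathrm{real}} \\ E_{\mathrm{imag}} \end{pmatrix}
=
\begin{pmatrix} 0 \\ b \end{pmatrix},
\qquad
A_{ij} = \int \nabla \times v_i \cdot \nabla \times v_j - k_0^2 \int v_i \cdot v_j,
\quad
C_{ij} = \alpha \oint \nabla \times v_i \cdot \nabla \times v_j,
\quad
b_i = \oint v_i \, g .
$$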

This 2x2 block system converges poorly in iterative solvers without a good preconditioner. I am building my matrices with deal.II and using Trilinos for the linear algebra.

2 Answers

As a preamble, I would not expect that splitting $E$ into real/imaginary parts is very profitable. Normally, block 2x2 systems are motivated because one block of unknowns is "easier" to solve than the other in some sense (better conditioned? smaller in cardinality? etc.). This is not the case for time-harmonic Maxwell; I think you'd be better off remaining in the complex field. If you can't do that (maybe these frameworks don't support complex variables?), the next best thing would probably be to "interleave" the real and imaginary parts and use point-like preconditioners with two degrees of freedom (1 real, 1 imaginary) per point. I'd expect this to have much the same effect as working in the complex field to begin with.
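To illustrate the interleaving idea (this is just the standard real-equivalent form of a complex number, not something specific to your formulation): each complex matrix entry $a + ib$ becomes a small 2x2 real block,
$$
a + ib \;\longmapsto\; \begin{pmatrix} a & -b \\ b & a \end{pmatrix},
$$
so point-block smoothers or point-block ILU applied to the interleaved ordering act on these 2x2 blocks and roughly mimic the corresponding complex-valued preconditioner.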

All that said, a lot of classical preconditioners that you might try at this point (smoothing, incomplete factorizations, multigrid) are not very effective on time-harmonic Maxwell, because it has basically the same spectral properties as the (bad) Helmholtz equation: unbounded spectrum, indefiniteness, oscillatory solutions. An excellent survey of these troubles can be found here; you can basically apply the same arguments to time-harmonic Maxwell.

In fact, Maxwell is even a little more difficult, in two ways. The minor way is that it is vector-valued and thus has more unknowns than a Helmholtz problem of equal size and wavenumber. The major way is that Maxwell's $\nabla \times \nabla \times$ operator is more complicated than Helmholtz's $\nabla \cdot \nabla$ operator, because of the presence of an infinite-dimensional nullspace. Helmholtz's gradient operator has a one-dimensional nullspace (spanned by the constant function $1$), but Maxwell's curl operator has an infinite-dimensional nullspace (spanned by the gradients of scalar functions, $\nabla f$). Many preconditioning schemes rely upon "coarsening" the problem down (in a multigrid sense), and the Helmholtz equation enjoys the fact that its nullspace (the constant function) can be represented with no error on a grid that is arbitrarily coarse. In contrast, when you discretize the Maxwell equation on a fine grid, the nullspace of the discrete operator will not be exactly representable on a coarser grid, and thus your exchange operations (restriction/prolongation) must be constructed more carefully (otherwise you are likely to take exactly-zero eigencomponents on one grid and map them to close-but-not-quite-zero eigencomponents on another, which wrecks convergence... see here for some discussion).
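As a one-line reminder of where that nullspace comes from: for any sufficiently smooth scalar field $f$,
$$ \nabla \times (\nabla f) = 0 , $$
so every gradient field lies in the nullspace of the curl-curl operator, and there are infinitely many linearly independent such fields.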

On a less pessimistic note, a reliable FE solver for low/medium-wavenumber time-harmonic Maxwell is an excellent tool to have in your pocket, because it can be readily hybridized with high-wavenumber methods (modal expansions, integral equations, asymptotics, etc.) and complements their deficiencies (unlike FE, these methods typically only work for homogeneous media). Along those lines, the two approaches I've used most successfully for solving Maxwell's equations iteratively are (i) p-multiplicative Schwarz (pMUS) and (ii) domain decomposition (DDM).

The former (pMUS) is basically multigrid, but applied in polynomial order (p) instead of mesh size (h). The basic idea is to use a low order p=0 solution to precondition a high order p=(0+1+...) solution. It's easy to implement, but requires some formulation-level effort to tabulate your (Nedelec) basis functions in such a way that they separate the nullspace/range of the curl operator as described earlier (see this paper for a good representative in this class of methods).
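To make the two-level structure concrete, here is a rough, library-agnostic sketch of a single cycle (a p-order smoother wrapped around a p=0 coarse solve). All operator names (apply_A_p, solve_A_0, restrict_to_p0, prolong_to_p, smooth) are placeholders you would supply from your own discretization; this is not deal.II/Trilinos API:

```cpp
// Hypothetical sketch of one two-level p-multigrid / pMUS-style cycle.
#include <cstddef>
#include <functional>
#include <vector>

using Vec = std::vector<double>;

struct PMultigridPreconditioner
{
  std::function<Vec(const Vec &)> apply_A_p;      // y = A_p x      (high-order operator)
  std::function<Vec(const Vec &)> solve_A_0;      // y = A_0^{-1} x (coarse p=0 solve, e.g. direct)
  std::function<Vec(const Vec &)> restrict_to_p0; // y = P^T x      (restriction to p=0 space)
  std::function<Vec(const Vec &)> prolong_to_p;   // y = P x        (embedding of p=0 functions)
  std::function<void(Vec &, const Vec &)> smooth; // in-place smoothing of A_p x = b

  // Apply the preconditioner to a residual r:
  // pre-smooth, coarse-order correction, post-smooth.
  Vec vmult(const Vec &r) const
  {
    Vec x(r.size(), 0.0);
    smooth(x, r);                                 // pre-smoothing on the p-order system

    Vec residual = r;                             // residual = r - A_p x
    const Vec Ax = apply_A_p(x);
    for (std::size_t i = 0; i < r.size(); ++i)
      residual[i] -= Ax[i];

    const Vec coarse_correction =
      prolong_to_p(solve_A_0(restrict_to_p0(residual)));
    for (std::size_t i = 0; i < x.size(); ++i)
      x[i] += coarse_correction[i];

    smooth(x, r);                                 // post-smoothing
    return x;
  }
};
```

The effectiveness of such a cycle hinges entirely on how the basis is constructed: the smoother must damp the gradient-type (nullspace) components that the p=0 correction cannot see, which is the formulation-level effort mentioned above.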

The latter (DDM) is basically a deflation-like scheme, wherein you partition space into non-overlapping domains, eliminate/substructure away the interior unknowns, then solve the resulting interface-only problem to re-establish the correct field continuity / global solution. Much thought has been put into the right kinds of "transmission conditions" that should be used to terminate the subdomains and match them back together; much of the work on Maxwell has been adapted from similar work on the Helmholtz equation. See here for pioneering work on Helmholtz, and here for its evolution into Maxwell.
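In its simplest algebraic form (leaving aside the transmission-condition machinery), the substructuring step is just a Schur complement: ordering the unknowns into interior ($I$) and interface ($\Gamma$) blocks,
$$
\begin{pmatrix} A_{II} & A_{I\Gamma} \\ A_{\Gamma I} & A_{\Gamma\Gamma} \end{pmatrix}
\begin{pmatrix} u_I \\ u_\Gamma \end{pmatrix}
=
\begin{pmatrix} f_I \\ f_\Gamma \end{pmatrix}
\quad\Longrightarrow\quad
\underbrace{\left(A_{\Gamma\Gamma} - A_{\Gamma I} A_{II}^{-1} A_{I\Gamma}\right)}_{S}\, u_\Gamma
= f_\Gamma - A_{\Gamma I} A_{II}^{-1} f_I ,
$$
and the iterative solve is carried out on the interface operator $S$; the choice of transmission conditions governs how well-conditioned $S$ ends up being.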

Answered by rchilton1980 on June 28, 2021

Sparse approximate inverse (SAI) and incomplete LU factorization (ILU) are standard approaches to preconditioning iterative solutions of large sparse matrix equations. See https://www5.in.tum.de/lehre/vorlesungen/parnum/WS10/PARNUM_10.pdf for more details.
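Since the question mentions deal.II with Trilinos, a minimal sketch of wiring an ILU preconditioner into a GMRES solve might look like the following. It assumes an already-assembled TrilinosWrappers::SparseMatrix and matching vectors, and the AdditionalData defaults may differ between deal.II versions, so treat this as a starting point rather than a recipe:

```cpp
// Minimal sketch: ILU-preconditioned GMRES on a deal.II/Trilinos system.
// Assumes system_matrix, solution, and system_rhs have already been assembled.
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/solver_gmres.h>
#include <deal.II/lac/trilinos_precondition.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_vector.h>

using namespace dealii;

void solve_with_ilu(const TrilinosWrappers::SparseMatrix &system_matrix,
                    TrilinosWrappers::MPI::Vector &solution,
                    const TrilinosWrappers::MPI::Vector &system_rhs)
{
  TrilinosWrappers::PreconditionILU preconditioner;
  TrilinosWrappers::PreconditionILU::AdditionalData ilu_data;
  ilu_data.ilu_fill = 1;   // allow one extra level of fill-in
  ilu_data.overlap  = 1;   // subdomain overlap for the parallel ILU
  preconditioner.initialize(system_matrix, ilu_data);

  SolverControl solver_control(2000, 1e-8 * system_rhs.l2_norm());
  SolverGMRES<TrilinosWrappers::MPI::Vector> solver(solver_control);
  solver.solve(system_matrix, solution, system_rhs, preconditioner);
}
```

Note, though, that for the indefinite curl-curl system plain ILU often stagnates unless the fill level is raised considerably, consistent with the caveats in the other answer.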

Answered by sssssssssssss on June 28, 2021
