Definitions and usage

Mathematical background

The elliptic field equation is defined as

$$c_0 \left[ -\nabla \cdot \left( \lambda\, \nabla \phi \right) + \xi\, \phi \right] = b.$$

This equation must be solved for the potential $\phi$, with $b$ as a known source term and known coefficients $c_0$, $\lambda$ and $\xi$, which may vary in time and space. The domain represents an individual 2D poloidal plane, where the gradient $\nabla$ denotes the in-plane (perpendicular) gradient $\nabla_\perp$. At the domain boundaries, Dirichlet ($\phi = \phi_D$) and Neumann ($\lambda\, \nabla\phi \cdot \mathbf{n} = g$) boundary conditions are prescribed, with $\mathbf{n}$ denoting the outward normal vector to the domain.

For theoretical considerations we may assume $c_0 = 1$; this coefficient is only present for practical reasons and could be absorbed into $\lambda$ and $\xi$. Under this assumption, the weak form of the equation becomes

$$\int_\Omega \left( \lambda\, \nabla\phi \cdot \nabla v + \xi\, \phi\, v \right) \mathrm{d}V \;-\; \oint_{\partial\Omega} v\, \lambda\, \nabla\phi \cdot \mathbf{n}\, \mathrm{d}S \;=\; \int_\Omega b\, v\, \mathrm{d}V \qquad \forall\, v.$$

Apart from the boundary term, the problem is positive definite for $\lambda > 0$ and $\xi \geq 0$, which corresponds to the class of problems considered in this context.
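The positive-definiteness statement can be checked numerically. The following sketch is plain NumPy, not PARALLAX code; the restriction to 1D, the grid size and the random coefficients are illustrative assumptions. It assembles the second-order finite-difference operator for $-(\lambda\,\phi')' + \xi\,\phi$ with homogeneous Dirichlet boundaries and verifies that the resulting matrix is symmetric positive definite for $\lambda > 0$ and $\xi \geq 0$.

```python
# Illustrative sketch (plain NumPy, not PARALLAX code): the discretised
# operator -(lambda*phi')' + xi*phi with homogeneous Dirichlet boundaries
# yields a symmetric positive definite matrix for lambda > 0 and xi >= 0.
import numpy as np

def assemble_1d(lam, xi, h):
    """Assemble the operator on n inner points.
    lam: face values lambda_{i+1/2}, length n+1; xi: point values, length n."""
    n = len(xi)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (lam[i] + lam[i + 1]) / h**2 + xi[i]
        if i > 0:
            A[i, i - 1] = -lam[i] / h**2          # coupling to left neighbour
        if i < n - 1:
            A[i, i + 1] = -lam[i + 1] / h**2      # coupling to right neighbour
    return A

n, h = 16, 1.0 / 17
rng = np.random.default_rng(0)
lam = 1.0 + rng.random(n + 1)   # lambda > 0, varying in space
xi = rng.random(n)              # xi >= 0
A = assemble_1d(lam, xi, h)

assert np.allclose(A, A.T)               # symmetric
assert np.linalg.eigvalsh(A).min() > 0   # positive definite
```

With zero Neumann data or homogeneous Dirichlet conditions the boundary term of the weak form vanishes, so the assertions above mirror the continuous statement.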

Discretisation

The mesh in PARALLAX is logically unstructured but locally Cartesian. For didactic purposes, we will use the locally Cartesian indices $(i, j)$ throughout the following discussion. For details on how to map to the unstructured index and obtain adjacent points, please refer to the mesh section.

For inner grid points, we use a standard second-order finite difference scheme to discretise the elliptic equation. With grid spacing $h$ (assumed equal in both directions), the discretised equation for a given inner point $(i, j)$ is

$$c_{0,ij} \left[ -\frac{\lambda_{i+\frac12,j}\left(\phi_{i+1,j}-\phi_{i,j}\right) - \lambda_{i-\frac12,j}\left(\phi_{i,j}-\phi_{i-1,j}\right)}{h^2} -\frac{\lambda_{i,j+\frac12}\left(\phi_{i,j+1}-\phi_{i,j}\right) - \lambda_{i,j-\frac12}\left(\phi_{i,j}-\phi_{i,j-1}\right)}{h^2} + \xi_{ij}\,\phi_{i,j} \right] = b_{i,j},$$

where half-integer indices denote values of $\lambda$ at the cell faces, obtained by averaging the two neighbouring grid points.

For boundary points with Dirichlet boundary conditions $\phi = \phi_D$, the value of the potential is directly set to the Dirichlet value as follows: $\phi_{i,j} = b_{i,j}$, where $b$ contains the Dirichlet boundary values $\phi_D$.

For boundary points with Neumann boundary conditions $\lambda\, \nabla\phi \cdot \mathbf{n} = g$, we discretise the normal derivative using a finite difference approximation: $\lambda_{i,j}\, \frac{\phi_{i,j} - \phi_{i',j'}}{h} = b_{i,j}$, where $(i', j')$ denotes the inner neighbour of the boundary point along the normal direction, and $b$ contains the flux values $g$. For more details on the discretisation of Neumann boundary conditions, please refer to the Boundary conditions section.

Finally, ghost points are used to extend the domain for computational purposes. For these points, the discrete field operator is simply defined as the identity: $\phi_{i,j} = b_{i,j}$. The ghost points can be further refined or modified later using the extrapolate_ghost_points routine.
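The three row types of the discrete operator (inner stencil rows, Dirichlet identity rows, and identity rows for points outside the inner region) can be sketched together on a small Cartesian grid. This is plain NumPy, not the PARALLAX data structures; the grid size, the choices $\lambda = 1$, $\xi = 0$, $c_0 = 1$ and the boundary function are illustrative assumptions. Since $\phi(x, y) = x + y$ is harmonic and the 5-point stencil is exact for linear functions, the assembled system reproduces it exactly.

```python
# Illustrative sketch (plain NumPy, not PARALLAX code): assembling the
# discrete field operator row by row on a small locally Cartesian grid.
import numpy as np

nx = ny = 8                      # grid including one boundary layer
h = 1.0 / (nx - 1)
N = nx * ny
idx = lambda i, j: i * ny + j    # locally Cartesian (i, j) -> linear index

phi_D = lambda x, y: x + y       # hypothetical Dirichlet boundary values

A = np.zeros((N, N))
rhs = np.zeros(N)
for i in range(nx):
    for j in range(ny):
        k = idx(i, j)
        if 0 < i < nx - 1 and 0 < j < ny - 1:
            # inner row: 5-point stencil for -Laplacian(phi) = b
            # (lambda = 1, xi = 0, c0 = 1)
            A[k, k] = 4.0 / h**2
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[k, idx(i + di, j + dj)] = -1.0 / h**2
            rhs[k] = 0.0         # source term b = 0
        else:
            # boundary row (identity): phi = phi_D; ghost points would
            # likewise receive identity rows, phi = b
            A[k, k] = 1.0
            rhs[k] = phi_D(i * h, j * h)

phi = np.linalg.solve(A, rhs)
exact = np.array([phi_D(i * h, j * h) for i in range(nx) for j in range(ny)])
assert np.allclose(phi, exact)   # discrete solution reproduces phi = x + y
```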

Usage of the field solver

The following example demonstrates the basic usage of the field solver within a time-stepping environment, which is the typical use case for the solver:

use params_hsolver_m, only : read_params_hsolver, select_hsolver
use helmholtz_solver_m, only : helmholtz_solver_t
use helmholtz_solver_factory_m, only : helmholtz_solver_factory, parameters_helmholtz_solver_factory
...
type(parameters_helmholtz_solver_factory) :: params_hsolver
class(helmholtz_solver_t), allocatable :: hsolver
real(GP), dimension(mesh%get_n_points()) :: co
real(GP), dimension(mesh%get_n_points_inner()) :: lambda, xi
real(GP), dimension(mesh%get_n_points()) :: phi, b
...
! Read solver parameters from input file 'params.in'
call read_params_hsolver('params.in')

! Create the field solver using the factory routine
call helmholtz_solver_factory(multigrid, &
                              BND_TYPE_DIRICHLET_ZERO, &
                              BND_TYPE_DIRICHLET_ZERO, &
                              BND_TYPE_DIRICHLET_ZERO, &
                              BND_TYPE_DIRICHLET_ZERO, &
                              co, lambda, xi, &
                              params_hsolver, select_hsolver, hsolver)
...
! Time-stepping loop (this could be placed in a separate subroutine or module, passing hsolver)
do t = 1, ...
    ! Optionally update the coefficients co, lambda, xi
    ...

    ! Optionally recompute boundary values and descriptors
    ...

    ! Update coefficients and boundary conditions in the solver
    call hsolver%update(co, lambda, xi, &
                        bnd_type_core, bnd_type_wall, &
                        bnd_type_dome, bnd_type_out)

    ! Compute the right-hand side (b) and initial guess for phi
    ...

    ! Solve the system and compute the potential phi
    call hsolver%solve(b, phi, res, info)

    ! Optionally check the residual and convergence status (info)
    ...

end do

When calling the factory routine, note that the coefficients and boundary types can be assigned somewhat arbitrary initial values, as they can be updated later using the update routine. The multigrid object must be created beforehand (refer to multigrid). Additionally, the parameter select_hsolver allows you to choose from a selection of solvers:

  • MGMRES: The flexible GMRES solver from MKL, utilizing a reverse communication interface (MKL FGMRES). This solver employs the in-house developed multigrid preconditioner, offering a robust and efficient solution method.
  • DIRECT: A direct sparse solver, such as the PARDISO solver from MKL.
  • MGMRES_CXX, MGMRES_GPU: These solvers are based on the same algorithms as MGMRES, but with GPU support via different backends provided by the PAccX library. Available only if PARALLAX is compiled with -DENABLE_PACCX=ON.
  • ROCALUTION_GPU: An algebraic multigrid-preconditioned FGMRES solver using the rocALUTION (https://rocm.docs.amd.com/projects/rocALUTION/en/latest/) backend within PAccX. Only works with AMD hardware. Available only if PARALLAX is compiled with PARALLAX_ENABLE_PACCX=ON PACCX_ENABLE_HIP=ON PACCX_ENABLE_ROCALUTION=ON.
  • PETSC_PCMG, PETSC_PCRC: Experimental solvers utilizing the PETSc library. Available only if PARALLAX is compiled with -DENABLE_PETSC=ON.

Further solver-specific parameters, such as tolerances, are passed via the variables stored within params_hsolver.