Introductory example

Example overview

In this example, we illustrate how to compute the parallel gradient using a central finite difference along magnetic field lines. While this method is neither the most efficient nor suitable for every discretization need, it provides a simple introduction to PARALLAX's FCI capabilities. The example is not optimized for performance, nor is it parallelized with OpenMP for in-plane parallelism, since the focus here is on readability. It does, however, employ MPI domain decomposition along the toroidal direction, so that each MPI process manages its own poloidal mesh. The field being differentiated is stored as a one-dimensional array u on each MPI process, with halo exchanges between adjacent planes. The example does not assume axisymmetry, so the size of the mesh and of the array u may differ from plane to plane.
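Before turning to the actual snippet, the following minimal sketch (not PARALLAX code; the program and variable names are purely illustrative) shows the rank-to-plane layout this example assumes: one poloidal plane per MPI rank, with forward and backward neighbours wrapping around in the (presumably periodic) toroidal direction.

program toroidal_layout
    use mpi
    implicit none
    integer :: ierr, my_rank, n_planes, rank_fwd, rank_bwd

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, n_planes, ierr)

    ! Plane k is owned by rank k; the toroidal direction is periodic,
    ! so the neighbouring planes wrap around the torus
    rank_fwd = mod(my_rank + 1, n_planes)             ! plane k+1
    rank_bwd = mod(my_rank - 1 + n_planes, n_planes)  ! plane k-1

    call MPI_Finalize(ierr)
end program toroidal_layout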

The discrete parallel gradient at a point \(l\) of a given plane (= MPI process) is calculated from the mapped values as

\[
\left(\nabla_\parallel u\right)_l \approx \frac{u_l^{+} - u_l^{-}}{s_l^{+} + s_l^{-}},
\]

where \(u_l^{\pm}\) represent the values at the points mapped to the forward and backward adjacent planes, obtained through field line tracing and interpolation, and \(s_l^{\pm}\) denote the distances along the magnetic field line to these mapped points.
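For reference, this expression is a finite difference approximation of the continuous parallel gradient, i.e. the derivative of \(u\) along the magnetic field,

\[
\nabla_\parallel u = \hat{\mathbf{b}} \cdot \nabla u = \frac{\partial u}{\partial s},
\]

where \(\hat{\mathbf{b}} = \mathbf{B}/|\mathbf{B}|\) is the unit vector along the magnetic field and \(s\) the arc length along the field line.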

Code example

In the code snippet below, we focus on the FCI-related routines. It is assumed that the equilibrium, the meshes, and the other necessary components have already been set up in a preceding step. In addition, only the variables and routines needed to understand the example are shown in the declarations.

! Variable and its parallel gradient
real(FP), dimension(mesh%get_n_points()) :: u, pargrad_u

! Values of u at mapped points
real(FP), dimension(mesh%get_n_points()) :: u_map_fwd, u_map_bwd

! Lengths along field lines to adjacent planes
real(FP), dimension(mesh%get_n_points()) :: ds_fwd, ds_bwd

! Poloidal meshes on adjacent planes
type(mesh_t) :: mesh_fwd, mesh_bwd

! Matrices in Compressed Sparse Row format to compute values at mapped points
type(csrmat_t) :: qcsr_fwd, qcsr_bwd

! Halo fields; these will be allocated within the `getdata_fwdbwdplane` routine
! with the sizes of mesh_fwd and mesh_bwd, respectively
real(FP), allocatable, dimension(:) :: u_halo_fwd, u_halo_bwd

! Number of halo points returned by `getdata_fwdbwdplane` and loop index
integer :: n_fetch, l

...

! Exchange the adjacent meshes (fwd: k+1) and (bwd: k-1),
! as the problem may not be axisymmetric
call mesh%exchange_mesh_mpi(MPI_COMM_WORLD, step=-1, mesh_received=mesh_bwd)
call mesh%exchange_mesh_mpi(MPI_COMM_WORLD, step=+1, mesh_received=mesh_fwd)

! Build map matrix for the forward (k+1) direction
! and return lengths along field lines
call create_map_matrix(qcsr_fwd, MPI_COMM_WORLD, equi, mesh, mesh_fwd, &
                       phi_direction=+1, intorder=1, arclength_array=ds_fwd)

! Build map matrix for the backward (k-1) direction
call create_map_matrix(qcsr_bwd, MPI_COMM_WORLD, equi, mesh, mesh_bwd, &
                       phi_direction=-1, intorder=1, arclength_array=ds_bwd)

! Allocate and fetch the halo array from the backward (k-1) plane.
! u_halo_bwd is allocated within the routine with
! size n_fetch = mesh_bwd%get_n_points()
call getdata_fwdbwdplane(MPI_COMM_WORLD, -1, mesh%get_n_points(), u, &
                         n_fetch, u_halo_bwd)

! On return: size of u_halo_fwd = n_fetch = mesh_fwd%get_n_points()
call getdata_fwdbwdplane(MPI_COMM_WORLD, +1, mesh%get_n_points(), u, &
                         n_fetch, u_halo_fwd)

! Compute values at mapped points
call csr_times_vec(qcsr_bwd, u_halo_bwd, u_map_bwd)
call csr_times_vec(qcsr_fwd, u_halo_fwd, u_map_fwd)

! Compute parallel gradient using central finite difference
do l = 1, mesh%get_n_points()
    pargrad_u(l) = (u_map_fwd(l) - u_map_bwd(l)) / (ds_fwd(l) + ds_bwd(l))
enddo

The core component of the FCI functionality in PARALLAX is the create_map_matrix routine, which internally performs the field line tracing and interpolation in a variety of flavours. In this example, simple bilinear interpolation (intorder=1) is used.
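As an illustration of what intorder=1 implies, suppose (as a sketch of the general idea, not a statement about PARALLAX internals) that a traced field line lands inside a quadrilateral cell of the adjacent poloidal mesh with local coordinates \((\xi, \eta) \in [0,1]^2\). The corresponding row of the map matrix then contains the four bilinear weights

\[
w_1 = (1-\xi)(1-\eta), \qquad w_2 = \xi(1-\eta), \qquad w_3 = (1-\xi)\eta, \qquad w_4 = \xi\eta,
\]

so that multiplying the matrix by the halo array, as done above with csr_times_vec, evaluates the bilinear interpolant at every mapped point in a single sparse matrix-vector product.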