PARALLAX is primarily designed as a library to be integrated into other codes, such as GRILLIX or GENE-X. As such, it provides only a small set of executables with limited functionality, intended mainly for testing and benchmarking. In addition to the unit test executables, which are built when compiling with -DPARALLAX_ENABLE_UTESTS=ON, PARALLAX includes the following three executables:
The test_diffusion program is intended to test and illustrate the discretization of parallel operators within the FCI approach. It simulates the time evolution of the parallel (along magnetic field lines) diffusion equation:
\[ \partial_t u = \nabla\cdot(\mathbf{b}\nabla_\parallel u). \]
The program is run with MPI, and the number of processes must match the number of poloidal planes. To execute it, for example with mpirun, use the following command:
$ mpirun -np <nplanes> ./test_diffusion
If executed without any further input, the program runs a default case with 4 poloidal planes and produces no output other than to the screen. For additional examples, refer to the itests/diffusion directory, where the params.in file provides the input via Fortran namelists. For a detailed description of all parameters, please consult the source code in src/integration_tests/test_diffusion.f90. Below is a brief overview of the most relevant parameters, which allow you to experiment with the program:
- geometry: Specifies the type of magnetic field geometry (equilibrium).
- nplanes: Defines the number of poloidal planes to be used. This must match the number of MPI processes.
- write_case_to_files: Set this to .true. to save the mesh and solution data to files.
- spacing_f: Defines the Cartesian mesh spacing within the poloidal planes.
- xc_gauss, yc_gauss, wx_gauss, wy_gauss: These parameters define the initial Gaussian blob on the first plane, controlling its location and width.
- dtau, nsnaps, nt_per_snaps: These parameters control the timestep, the number of timesteps to be performed, and the snapshot frequency.

An illustrative example is provided in itests/diffusion/circtor, which simulates parallel diffusion for a magnetic field configuration with circular flux surfaces in toroidal geometry. The domain spans a flux shell, with the safety factor profile ranging from q=2.5 at the inner flux surface to q=3.5 at the outer flux surface, with q=3 at the center. If the example is run with write_case_to_files = .true., the program generates mesh and snapshot files for each plane in NetCDF format, which can be used for further analysis and visualization. The figure below shows the solution at different toroidal locations (from left to right) at t=0 (top row) and t=20 (bottom row). The structure clearly elongates along the magnetic field lines, as expected.
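For orientation, a params.in built from the parameters above might look roughly like the following sketch. The namelist group name &diffusion_params and all values are illustrative assumptions only; the working inputs in itests/diffusion and the source file are the authoritative reference.

! Hypothetical params.in sketch: the group name &diffusion_params and all
! values below are placeholders, not taken from the repository.
&diffusion_params
  geometry            = 'circular'  ! placeholder; consult the source for valid geometry names
  nplanes             = 4           ! must match the number of MPI processes
  write_case_to_files = .true.      ! write mesh and snapshot files (NetCDF)
  spacing_f           = 0.01        ! Cartesian mesh spacing within the poloidal planes
  xc_gauss            = 0.0         ! location of the initial Gaussian blob ...
  yc_gauss            = 0.0
  wx_gauss            = 0.1         ! ... and its width in x and y
  wy_gauss            = 0.1
  dtau                = 0.1         ! timestep
  nsnaps              = 10          ! number of snapshots
  nt_per_snaps        = 20          ! timesteps per snapshot
/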

Note that the test_diffusion program runs with the parallel diffusion operator discretized using both the support operator method and the direct method. In the screen output of the examples, you can observe that the support operator method exactly conserves the L1-norm of the solution, while the L1-norm decays with the direct discretization.
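The exact conservation seen with the support operator method mirrors a property of the continuum equation. As a brief sketch, assuming no parallel flux through the domain boundary, integrating the diffusion equation over the domain and applying the divergence theorem gives

\[ \frac{\mathrm{d}}{\mathrm{d}t}\int_\Omega u\,\mathrm{d}V = \int_\Omega \nabla\cdot\left(\mathbf{b}\,\nabla_\parallel u\right)\mathrm{d}V = \oint_{\partial\Omega} \nabla_\parallel u\;\mathbf{b}\cdot\mathrm{d}\mathbf{S} = 0, \]

so the volume integral of u, which coincides with the L1-norm for a non-negative solution, is conserved in time. The support operator method is constructed to mimic this divergence-gradient duality at the discrete level, which is consistent with the exact conservation observed, whereas the direct discretization does not share this property.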
The benchmark_helmholtz_solvers program is designed to test the integrity and benchmark the computational performance of the elliptic field solver, which is a critical component of plasma turbulence codes. This program runs with a single MPI process. Similar to the test_diffusion program, it executes a default case if no further input is provided. Additional examples can be found in the itests/helmholtz directory. The key parameters are:
- read_case_from_files: If set to .false., the program solves a case for the specified geometry, with test fields generated internally. If set to .true., it reads the mesh, solver data, and fields from the files multigrid.nc and helmholtz_data.nc.
- write_case_to_files: If set to .true., the program writes the meshes, fields, and solution (stored in the output file as the variable guess) to the output files multigrid_out.nc and helmholtz_data_out.nc for possible reuse or further analysis.
- &solver_params: This namelist contains various options to configure the field solver. TODO
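As a rough illustration of the input format, a namelist file for this benchmark might look like the sketch below. The group name &benchmark_params is an assumption, and no concrete &solver_params options are shown since they are not described here; the examples in itests/helmholtz are the authoritative reference.

! Hypothetical input sketch: the group name &benchmark_params is assumed and
! only the parameters described above are shown.
&benchmark_params
  read_case_from_files = .false.  ! generate mesh and test fields internally
  write_case_to_files  = .true.   ! write multigrid_out.nc and helmholtz_data_out.nc
/
&solver_params
  ! field solver options (see itests/helmholtz for working examples)
/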