Auxiliary routines for MPI mapping across planes
Overloaded routine getdata_fwdbwdplane for integer and real data
Communicates data (real, kind FP) between planes: receives data from plane rank + step and sends data to plane rank - step, with periodicity in the communication.

| Type | Intent | Optional | Attributes | Name | Description |
|---|---|---|---|---|---|
| integer | intent(in) | | | comm | MPI communicator |
| integer | intent(in) | | | step | Step size of the communication |
| integer | intent(in) | | | nsend | Dimension of the array to be sent |
| real(kind=FP) | intent(in) | | dimension(nsend) | usend | Array to be sent |
| integer | intent(out) | | | nrecv | Dimension of the received array |
| real(kind=FP) | intent(out) | | allocatable, dimension(:) | urecv | Array to be received |
Communicates data (integer) between planes: receives data from plane rank + step and sends data to plane rank - step, with periodicity in the communication.

| Type | Intent | Optional | Attributes | Name | Description |
|---|---|---|---|---|---|
| integer | intent(in) | | | comm | MPI communicator |
| integer | intent(in) | | | step | Step size of the communication |
| integer | intent(in) | | | nsend | Dimension of the array to be sent |
| integer | intent(in) | | dimension(nsend) | usend | Array to be sent |
| integer | intent(out) | | | nrecv | Dimension of the received array |
| integer | intent(out) | | allocatable, dimension(:) | urecv | Array to be received |
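
A minimal usage sketch of the two overloads documented above. The argument lists follow the tables; the module name `plane_comm_mod`, the import of the kind parameter `FP`, and the surrounding MPI setup are assumptions for illustration only.

```fortran
! Sketch only: plane_comm_mod and the FP import are assumed names.
program demo_getdata_fwdbwdplane
   use mpi
   use plane_comm_mod, only: getdata_fwdbwdplane, FP   ! assumed module name
   implicit none
   integer :: ierr, nsend, nrecv, nrecv_i
   real(kind=FP), allocatable :: usend(:), urecv(:)
   integer,       allocatable :: isend(:), irecv(:)

   call MPI_Init(ierr)

   ! Send 10 real values to plane rank-1 and receive from plane rank+1
   ! (step = 1, periodic in the communication).
   nsend = 10
   allocate(usend(nsend)); usend = 1.0_FP
   call getdata_fwdbwdplane(MPI_COMM_WORLD, 1, nsend, usend, nrecv, urecv)

   ! Integer overload: identical calling sequence with integer buffers.
   allocate(isend(nsend)); isend = 1
   call getdata_fwdbwdplane(MPI_COMM_WORLD, 1, nsend, isend, nrecv_i, irecv)

   call MPI_Finalize(ierr)
end program demo_getdata_fwdbwdplane
```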
Communicates data (CSR matrix) between planes: receives data from plane rank + step and sends data to plane rank - step, with periodicity in the communication.

| Type | Intent | Optional | Attributes | Name | Description |
|---|---|---|---|---|---|
| integer | intent(in) | | | comm | MPI communicator |
| integer | intent(in) | | | step | Step size of the communication |
| type(csrmat_t) | intent(in) | | | acsr_send | CSR matrix to be sent |
| type(csrmat_t) | intent(out) | | allocatable | acsr_recv | CSR matrix to be received |
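
A sketch of the CSR exchange. The specific routine name for the CSR variant is not stated in this section; the sketch assumes it is reachable through the same generic name, and the module name is again an assumption.

```fortran
! Sketch only: the generic name for the CSR variant and plane_comm_mod are assumptions.
subroutine shift_csr_one_plane(comm, acsr_mine, acsr_from_next)
   use plane_comm_mod, only: getdata_fwdbwdplane, csrmat_t   ! assumed module name
   implicit none
   integer,        intent(in)               :: comm
   type(csrmat_t), intent(in)               :: acsr_mine
   type(csrmat_t), intent(out), allocatable :: acsr_from_next

   ! Receive the CSR matrix of plane rank+1, send ours to plane rank-1.
   call getdata_fwdbwdplane(comm, 1, acsr_mine, acsr_from_next)
end subroutine shift_csr_one_plane
```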
Divides an integer range into chunks that can be worked on by separate MPI processes.

| Type | Intent | Optional | Attributes | Name | Description |
|---|---|---|---|---|---|
| integer | intent(in) | | | comm | MPI communicator |
| integer | intent(in) | | | istart | Initial index of the range |
| integer | intent(in) | | | iend | Final index of the range |
| integer | intent(out) | | | n | Global length of the range |
| integer | intent(out) | | | n_loc | Local length of the range |
| integer | intent(out) | | | iloc_start | Local start of the MPI chunk |
| integer | intent(out) | | | iloc_end | Local end of the MPI chunk |
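
A sketch of how the range splitting could be used to parallelise a loop. The routine name `split_range` is a placeholder for the routine documented above (its actual name is not given in this section), and the module name is an assumption.

```fortran
! Sketch only: split_range and plane_comm_mod are placeholder names.
subroutine demo_split_range(comm)
   use plane_comm_mod, only: split_range   ! assumed names
   implicit none
   integer, intent(in) :: comm
   integer :: n, n_loc, iloc_start, iloc_end, i

   ! Split the global index range 1..1000 among the processes in comm.
   call split_range(comm, 1, 1000, n, n_loc, iloc_start, iloc_end)

   ! Each process works only on its own chunk of indices.
   do i = iloc_start, iloc_end
      ! ... per-index work ...
   end do
end subroutine demo_split_range
```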
Assembles a global CSR matrix that was built partially on the individual processes.

| Type | Intent | Optional | Attributes | Name | Description |
|---|---|---|---|---|---|
| integer | intent(in) | | | comm | MPI communicator |
| integer | intent(in) | | | ndim_glob | Dimension of the global matrix |
| integer | intent(in) | | | ndim_loc | Local dimension of the partial matrix |
| integer | intent(in) | | | nz_al | Dimension of jcsr and val; must be larger than the number of non-zeros of the global matrix |
| integer | intent(inout) | | dimension(ndim_glob+1) | icsr | On input: i-indices (CSR format) of the partial matrix. On output: i-indices (CSR format) of the global matrix |
| integer | intent(inout) | | dimension(nz_al) | jcsr | On input: column indices (CSR format) of the partial matrix. On output: column indices (CSR format) of the global matrix |
| real(kind=FP) | intent(inout) | | dimension(nz_al) | val | On input: values (CSR format) of the partial matrix. On output: values (CSR format) of the global matrix |
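
A sketch of the assembly call following the argument table above. The routine name `assemble_csr_global`, the module name, and the oversizing factor for `nz_al` are placeholders, not part of the documented interface.

```fortran
! Sketch only: assemble_csr_global, plane_comm_mod and the FP import are placeholders.
subroutine demo_assemble(comm, ndim_glob, ndim_loc)
   use plane_comm_mod, only: assemble_csr_global, FP   ! assumed names
   implicit none
   integer, intent(in) :: comm, ndim_glob, ndim_loc
   integer :: nz_al
   integer,       allocatable :: icsr(:), jcsr(:)
   real(kind=FP), allocatable :: val(:)

   ! nz_al must be larger than the number of non-zeros of the global matrix;
   ! the factor used here is purely illustrative.
   nz_al = 50 * ndim_glob
   allocate(icsr(ndim_glob+1), jcsr(nz_al), val(nz_al))
   icsr = 0; jcsr = 0; val = 0.0_FP

   ! ... fill icsr/jcsr/val with the locally built partial CSR matrix ...

   ! On return, icsr/jcsr/val describe the assembled global CSR matrix.
   call assemble_csr_global(comm, ndim_glob, ndim_loc, nz_al, icsr, jcsr, val)
end subroutine demo_assemble
```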