waLBerla 7.2
Encapsulates MPI rank/communicator information.

Every process has two ranks/communicators:
- World: this communicator/rank is valid after calling activateMPI, usually at the beginning of the program. It never changes.
- Custom: can be adapted to the block structure. During block structure setup, either the Cartesian setup has to be chosen using createCartesianComm() or the world communicator has to be used via useWorldComm().
#include <MPIManager.h>
Public Member Functions

~MPIManager ()
void initializeMPI (int *argc, char ***argv, bool abortOnException=true)
    Configures the class and initializes numProcesses and worldRank; the rank and comm variables remain invalid until a custom communicator is set up.
void finalizeMPI ()
void resetMPI ()
void abort ()
Cartesian Communicator

void createCartesianComm (const std::array< int, 3 > &, const std::array< int, 3 > &)
void createCartesianComm (const uint_t xProcesses, const uint_t yProcesses, const uint_t zProcesses, const bool xPeriodic=false, const bool yPeriodic=false, const bool zPeriodic=false)
void cartesianCoord (std::array< int, 3 > &coordOut) const
    Cartesian coordinates of own rank.
void cartesianCoord (int rank, std::array< int, 3 > &coordOut) const
    Cartesian coordinates of given rank.
int cartesianRank (std::array< int, 3 > &coords) const
    Translates Cartesian coordinates to a rank.
int cartesianRank (const uint_t x, const uint_t y, const uint_t z) const
    Translates Cartesian coordinates to a rank.
World Communicator

void useWorldComm ()

Getter Functions

int worldRank () const
int numProcesses () const
int rank () const
MPI_Comm comm () const
uint_t bitsNeededToRepresentRank () const
bool isMPIInitialized () const
bool hasCartesianSetup () const
bool rankValid () const
    The rank is valid after calling createCartesianComm() or useWorldComm().
bool hasWorldCommSetup () const
bool isCommMPIIOValid () const
    Indicates whether MPI-IO can be used with the current MPI communicator; certain versions of OpenMPI produce segmentation faults when using MPI-IO with a 3D Cartesian MPI communicator (see waLBerla issue #73).
template<typename CType >
MPI_Datatype getCustomType () const
    Returns the custom MPI_Datatype stored in customMPITypes_, defined by the user and passed to commitCustomType().
template<typename CType >
MPI_Op getCustomOperation (mpi::Operation op) const
    Returns the custom MPI_Op stored in customMPIOperations_, defined by the user and passed to commitCustomOperation().
Public Attributes

WALBERLA_BEFRIEND_SINGLETON

Private Attributes

int worldRank_ {0}
    Rank in MPI_COMM_WORLD.
int rank_ {-1}
    Rank in the custom communicator.
int numProcesses_ {1}
    Total number of processes.
MPI_Comm comm_
    Use this communicator for all MPI calls. It is in general not equal to MPI_COMM_WORLD and may change during domain setup, when a custom communicator adapted to the domain is created.
bool isMPIInitialized_ {false}
    Indicates whether initializeMPI has been called. If true, MPI_Finalize is called upon destruction.
bool cartesianSetup_ {false}
    Indicates whether a Cartesian communicator has been created.
bool currentlyAborting_ {false}
bool finalizeOnDestruction_ {false}
std::map< std::type_index, walberla::mpi::Datatype > customMPITypes_ {}
    It is possible to commit custom datatypes to MPI that are not part of the standard.
std::map< walberla::mpi::Operation, walberla::mpi::MPIOperation > customMPIOperations_ {}
Private Member Functions

template<typename CType , class ConstructorArgumentType >
void commitCustomType (ConstructorArgumentType &argument)
    Initializes a custom MPI_Datatype and logs it in the customMPITypes_ map.
template<typename CType >
void commitCustomOperation (mpi::Operation op, MPI_User_function *fct)
    Initializes a custom MPI_Op and logs it in the customMPIOperations_ map.
static std::string getMPIErrorString (int errorCode)
static std::string getMPICommName (MPI_Comm comm)
MPIManager ()
Constructor & Destructor Documentation

walberla::mpi::MPIManager::~MPIManager () [inline, private]

Member Function Documentation

void walberla::mpi::MPIManager::abort () [inline]
void walberla::mpi::MPIManager::cartesianCoord (int rank, std::array< int, 3 > &coordOut) const
    Cartesian coordinates of given rank.

void walberla::mpi::MPIManager::cartesianCoord (std::array< int, 3 > &coordOut) const
    Cartesian coordinates of own rank.

int walberla::mpi::MPIManager::cartesianRank (const uint_t x, const uint_t y, const uint_t z) const
    Translates Cartesian coordinates to a rank.

int walberla::mpi::MPIManager::cartesianRank (std::array< int, 3 > &coords) const
    Translates Cartesian coordinates to a rank.
MPI_Comm walberla::mpi::MPIManager::comm () const [inline]

template<typename CType >
void walberla::mpi::MPIManager::commitCustomOperation (mpi::Operation op, MPI_User_function *fct) [inline]
    Initializes a custom MPI_Op and logs it in the customMPIOperations_ map.
    Parameters:
        op: an operator, e.g. SUM or MIN.
        fct: the MPI_User_function that defines this operator.

template<typename CType , class ConstructorArgumentType >
void walberla::mpi::MPIManager::commitCustomType (ConstructorArgumentType &argument) [inline]
    Initializes a custom MPI_Datatype and logs it in the customMPITypes_ map.
    Parameters:
        argument: the argument expected by the constructor of mpi::Datatype. At the point of creation (26.01.2024) this is either MPI_Datatype or const int.
void walberla::mpi::MPIManager::createCartesianComm (const std::array< int, 3 > &dims, const std::array< int, 3 > &periodicity)

void walberla::mpi::MPIManager::createCartesianComm (const uint_t xProcesses, const uint_t yProcesses, const uint_t zProcesses, const bool xPeriodic = false, const bool yPeriodic = false, const bool zPeriodic = false)

void walberla::mpi::MPIManager::finalizeMPI ()
    Frees the custom types and operators.
template<typename CType >
MPI_Op walberla::mpi::MPIManager::getCustomOperation (mpi::Operation op) const [inline]
    Returns the custom MPI_Op stored in customMPIOperations_, defined by the user and passed to commitCustomOperation().

template<typename CType >
MPI_Datatype walberla::mpi::MPIManager::getCustomType () const [inline]
    Returns the custom MPI_Datatype stored in customMPITypes_, defined by the user and passed to commitCustomType().

std::string walberla::mpi::MPIManager::getMPICommName (MPI_Comm comm) [static]

std::string walberla::mpi::MPIManager::getMPIErrorString (int errorCode) [static]
bool walberla::mpi::MPIManager::hasCartesianSetup () const [inline]

bool walberla::mpi::MPIManager::hasWorldCommSetup () const [inline]

void walberla::mpi::MPIManager::initializeMPI (int *argc, char ***argv, bool abortOnException = true)
    Configures the class and initializes numProcesses and worldRank; the rank and comm variables remain invalid until a custom communicator is set up.
    Parameters:
        abortOnException: if true, MPI_Abort is called in case of an uncaught exception.
bool walberla::mpi::MPIManager::isCommMPIIOValid () const
    Indicates whether MPI-IO can be used with the current MPI communicator; certain versions of OpenMPI produce segmentation faults when using MPI-IO with a 3D Cartesian MPI communicator (see waLBerla issue #73).
bool walberla::mpi::MPIManager::isMPIInitialized () const [inline]

int walberla::mpi::MPIManager::numProcesses () const [inline]

int walberla::mpi::MPIManager::rank () const [inline]

bool walberla::mpi::MPIManager::rankValid () const [inline]
    The rank is valid after calling createCartesianComm() or useWorldComm().
void walberla::mpi::MPIManager::resetMPI ()

void walberla::mpi::MPIManager::useWorldComm () [inline]

int walberla::mpi::MPIManager::worldRank () const [inline]
Member Data Documentation

bool walberla::mpi::MPIManager::cartesianSetup_ {false} [private]
    Indicates whether a Cartesian communicator has been created.

MPI_Comm walberla::mpi::MPIManager::comm_ [private]
    Use this communicator for all MPI calls. It is in general not equal to MPI_COMM_WORLD and may change during domain setup, when a custom communicator adapted to the domain is created.

bool walberla::mpi::MPIManager::currentlyAborting_ {false} [private]

std::map< walberla::mpi::Operation, walberla::mpi::MPIOperation > walberla::mpi::MPIManager::customMPIOperations_ {} [private]

std::map< std::type_index, walberla::mpi::Datatype > walberla::mpi::MPIManager::customMPITypes_ {} [private]
    It is possible to commit custom datatypes to MPI that are not part of the standard; one example would be float16. With these maps it is possible to track self-defined MPI_Datatypes and MPI_Ops and access them at any time and place in the program; they are also freed automatically once MPIManager::finalizeMPI is called. To initialize types or operations and add them to the maps, the functions commitCustomType() and commitCustomOperation() should be used. This can, for example, be done in the specialization of the MPITrait of the newly defined type; for an example see MPIWrapper.cpp.
bool walberla::mpi::MPIManager::finalizeOnDestruction_ {false} [private]

bool walberla::mpi::MPIManager::isMPIInitialized_ {false} [private]
    Indicates whether initializeMPI has been called. If true, MPI_Finalize is called upon destruction.

int walberla::mpi::MPIManager::numProcesses_ {1} [private]
    Total number of processes.

int walberla::mpi::MPIManager::rank_ {-1} [private]
    Rank in the custom communicator.

walberla::mpi::MPIManager::WALBERLA_BEFRIEND_SINGLETON

int walberla::mpi::MPIManager::worldRank_ {0} [private]
    Rank in MPI_COMM_WORLD.