Collects (gathers) data from multiple blocks onto a single process.
Usage / Restrictions:
- each block sends a fixed amount of data to a collection process
- this amount has to be the same for all time steps (i.e. for every call of communicate())
- to use the scheme, implement the GatherPackInfo interface or use one of the existing implementations (a minimal usage sketch follows below)
- the collect operation is performed every time communicate() is called; if the result is not needed immediately but only at the end of the simulation, consider using the FileCollectorScheme instead
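A minimal usage sketch, assuming a hypothetical constructor that takes the block storage and a hypothetical GatherPackInfo implementation (myPackInfo); consult the constructor and GatherPackInfo documentation for the actual signatures:

// Sketch only: constructor arguments and the concrete GatherPackInfo
// implementation are assumptions, not taken from this documentation page.
#include <MPIGatherScheme.h>

using namespace walberla;

void gatherExample( const shared_ptr< StructuredBlockForest > & blocks,
                    const shared_ptr< gather::GatherPackInfo > & myPackInfo )
{
   // assumed constructor: block storage (and possibly gather rank / interval)
   gather::MPIGatherScheme gatherScheme( blocks->getBlockStorage() );

   // register all pack infos before the first communicate() call,
   // so that the expensive setup phase runs only once
   gatherScheme.addPackInfo( myPackInfo );

   const uint_t nrOfTimeSteps = 100;
   for( uint_t t = 0; t < nrOfTimeSteps; ++t )
   {
      // ... run the simulation sweeps for this time step ...
      gatherScheme.communicate(); // gathers and unpacks the registered data
   }
}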
Implementation:
- when communicate() is called for the first time, the amount of data that each process sends is determined
- then an MPI_Allgather operation over all processes determines which processes participate; this potentially very expensive operation is done only once, during the setup phase
- an MPI communicator is created for all processes participating in the gather operation (i.e. those that packed something), and the amount of data that each process sends is transmitted to the gathering process
- subsequent calls of communicate() use that communicator for an MPI_Gatherv operation (a plain-MPI illustration of this mechanism follows below)
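For illustration only, a plain-MPI sketch of the mechanism described above; this is not the actual implementation, and buffer handling as well as the choice of the gather rank are simplified:

// Illustration of the setup phase (Allgather + communicator creation)
// followed by the per-call MPI_Gatherv, using plain MPI.
#include <mpi.h>
#include <numeric>
#include <vector>

void gatherSketch( MPI_Comm worldComm, const std::vector<char> & localData, int gatherRankInWorld )
{
   int worldSize, worldRank;
   MPI_Comm_size( worldComm, &worldSize );
   MPI_Comm_rank( worldComm, &worldRank );

   // Setup phase (done once): every process announces how many bytes it sends
   int myBytes = static_cast<int>( localData.size() );
   std::vector<int> bytesPerProcess( worldSize );
   MPI_Allgather( &myBytes, 1, MPI_INT, bytesPerProcess.data(), 1, MPI_INT, worldComm );

   // Build a communicator containing only the participating processes
   // (plus the gathering process, which always takes part)
   int participates = ( myBytes > 0 || worldRank == gatherRankInWorld ) ? 1 : 0;
   MPI_Comm gatherComm;
   MPI_Comm_split( worldComm, participates, worldRank, &gatherComm );
   if( !participates ) { MPI_Comm_free( &gatherComm ); return; }

   int gatherSize, gatherRank;
   MPI_Comm_size( gatherComm, &gatherSize );
   MPI_Comm_rank( gatherComm, &gatherRank );

   // Per-process byte counts and displacements inside the new communicator
   std::vector<int> sendBytes( gatherSize ), displ( gatherSize, 0 );
   MPI_Allgather( &myBytes, 1, MPI_INT, sendBytes.data(), 1, MPI_INT, gatherComm );
   for( int i = 1; i < gatherSize; ++i )
      displ[i] = displ[i-1] + sendBytes[i-1];

   // Every communicate() call: gather the packed bytes on the gather rank
   const int root = 0; // simplification: rank 0 of gatherComm gathers
   std::vector<char> recvBuffer;
   if( gatherRank == root )
      recvBuffer.resize( std::accumulate( sendBytes.begin(), sendBytes.end(), 0 ) );
   MPI_Gatherv( localData.data(), myBytes, MPI_BYTE,
                recvBuffer.data(), sendBytes.data(), displ.data(), MPI_BYTE,
                root, gatherComm );
   MPI_Comm_free( &gatherComm );
}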
#include <MPIGatherScheme.h>
◆ PackInfoVector
◆ MPIGatherScheme()
◆ ~MPIGatherScheme()
walberla::gather::MPIGatherScheme::~MPIGatherScheme()
◆ addPackInfo()
void walberla::gather::MPIGatherScheme::addPackInfo( const shared_ptr< GatherPackInfo > & pi )
Registers a GatherPackInfo.
- ownership of the passed pack info is transferred to the scheme, i.e. the pack info is deleted by the scheme
- calling addPackInfo() after communicate() forces the expensive setup phase to run again, so register all pack infos first and only then start calling communicate()
◆ communicate()
void walberla::gather::MPIGatherScheme::communicate()
Performs the gather operation.
Collects all data from the sending processes according to the information given in the pack infos and unpacks it on the process that holds the root block (1,1,1).
◆ operator()()
void walberla::gather::MPIGatherScheme::operator()()
inline
Similar to communicate(), but executes only every everyNTimestep-th call (see constructor).
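For example, continuing the hypothetical gatherScheme and nrOfTimeSteps names from the usage sketch above, the functor form allows the scheme to be invoked unconditionally every time step, with the everyNTimestep filtering handled internally:

// gatherScheme and nrOfTimeSteps are hypothetical names from the sketch above
for( uint_t t = 0; t < nrOfTimeSteps; ++t )
{
   // ... simulation sweeps ...
   gatherScheme();   // runs the gather only every everyNTimestep-th call
}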
◆ runSetupPhase()
void walberla::gather::MPIGatherScheme::runSetupPhase()
private
◆ setupGatherCommunicator()
void walberla::gather::MPIGatherScheme::setupGatherCommunicator( bool thisProcessParticipates, MPI_Comm & commOut, int & newRank )
private
◆ blocks_
◆ bytesToSend_
int walberla::gather::MPIGatherScheme::bytesToSend_
private |
number of bytes sent by this process (set on all processes)
◆ displacementVector_
std::vector<int> walberla::gather::MPIGatherScheme::displacementVector_
private |
byte offsets of each participating process's data in the receive buffer, as required by MPI_Gatherv (only set on the gather process)
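A small illustration with made-up values, assuming three participating processes that send 4, 8 and 2 bytes respectively:

sendBytesPerProcess_ = { 4, 8, 2 }    (bytes sent by each participating process)
displacementVector_  = { 0, 4, 12 }   (prefix sums: byte offsets in the receive buffer)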
◆ everyNTimestep_
uint_t walberla::gather::MPIGatherScheme::everyNTimestep_
private |
◆ gatherCommunicator_
MPI_Comm walberla::gather::MPIGatherScheme::gatherCommunicator_
private |
communicator containing only participating processes
◆ gatherMsgSize_
int walberla::gather::MPIGatherScheme::gatherMsgSize_
private |
total size of gather message ( only on gather process )
◆ gatherRank_
int walberla::gather::MPIGatherScheme::gatherRank_
private |
rank in gatherCommunicator_ that gathers the data
◆ gatherRankInGlobalComm_
int walberla::gather::MPIGatherScheme::gatherRankInGlobalComm_
private |
the gather process's rank in mpiManager->comm()
◆ packInfos_
◆ sendBytesPerProcess_
std::vector<int> walberla::gather::MPIGatherScheme::sendBytesPerProcess_
private |
For each process in gatherCommunicator_ the number of bytes to send ( only on gather process )
◆ setupPhaseDone_
bool walberla::gather::MPIGatherScheme::setupPhaseDone_
private |