openPMD-api can be used within an MPI-parallel environment. The following API contracts need to be followed.
Generally, opening and closing the Series object with an MPI communicator is a collective operation. The communicator will be duplicated via MPI_Comm_dup in the constructor and freed in the destructor.
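A minimal sketch of this open/close contract, assuming the HDF5 backend and a hypothetical file name pattern `data_%T.h5`:

```cpp
#include <openPMD/openPMD.hpp>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    {
        // Collective: every rank in the communicator must participate in
        // opening the Series. The constructor duplicates MPI_COMM_WORLD
        // internally via MPI_Comm_dup.
        openPMD::Series series(
            "data_%T.h5", openPMD::Access::CREATE, MPI_COMM_WORLD);

        // ... parallel I/O operations ...

    } // Collective: the Series destructor closes the Series and frees
      // the duplicated communicator.
    MPI_Finalize();
    return 0;
}
```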
Accessing objects such as Iteration::particles (e.g. via operator[]) for the first time will create or open openPMD objects, which is a collective operation. For read-only series, this constraint might be relaxed, but more tests have to be added first.
flush() should be treated as a collective operation. Only after <openPMD-api::object>::flush() returns may exchanged memory buffers (e.g. from loadChunk()) be manipulated or freed. While loadChunk() calls are non-collective in most cases, we do not have sufficient test coverage to ensure that flush() can be called in a non-collective manner.
Also see GitHub issue #490 for limitations.
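The buffer-lifetime rule above can be sketched as follows. This is a write-side fragment, not a complete program: it assumes an already opened parallel openPMD::Series named `series` and hypothetical variables `mpi_rank` / `mpi_size`; the mesh name and shapes are illustrative.

```cpp
// Assumed context: open parallel Series `series`, ints mpi_rank / mpi_size.
auto E_x = series.iterations[100].meshes["E"]["x"];
E_x.resetDataset(
    {openPMD::determineDatatype<double>(),
     {static_cast<size_t>(mpi_size), 100}});

std::vector<double> local(100, static_cast<double>(mpi_rank));
// Non-collective: each rank registers its own chunk of the dataset.
E_x.storeChunk(local, {static_cast<size_t>(mpi_rank), 0}, {1, 100});

// Collective: call flush() on all ranks.
series.flush();
// Only now may `local` be modified or freed.
```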
Attribute writes should be treated as collective operations until further tests have been performed. They are likely non-collective in ADIOS1 & ADIOS2, where a single writing MPI rank would be sufficient (needs tests in CI). Attribute reads should generally be non-collective (this also needs tests in CI).
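Until the non-collective behavior is verified, the safe pattern is to issue each attribute write on all ranks with identical values. A fragment, assuming an open parallel Series `series` (attribute names and values are illustrative):

```cpp
// Treat as collective: every rank sets the same attribute with the
// same value.
series.setAttribute("author", std::string("Jane Doe <jane@example.com>"));
series.setAttribute("dx", 1.0e-6);

// Do NOT guard attribute writes with `if (mpi_rank == 0)` until
// single-rank attribute writes are confirmed to work for your backend.
```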
Store and Load Chunk
::storeChunk() and ::loadChunk() are designed to be non-collective.
Participating MPI ranks can skip calls to these member functions or call them independently multiple times.
If you rely on parallel HDF5 output with non-collective calls to these member functions, please see its limitations and control options with various MPI-I/O implementations and set appropriate runtime options.
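For example, on the read side some ranks may request chunks while others skip the call entirely. A sketch under the same assumptions as above (hypothetical file name, mesh name, and `mpi_rank` variable); note that flush() itself is still treated as collective:

```cpp
// Collective: all ranks open the Series for reading.
openPMD::Series series(
    "data_%T.h5", openPMD::Access::READ_ONLY, MPI_COMM_WORLD);
auto E_x = series.iterations[100].meshes["E"]["x"];

std::shared_ptr<double> chunk;
if (mpi_rank < 2)
{
    // Non-collective: only some ranks request data; others skip.
    chunk = E_x.loadChunk<double>(
        {static_cast<size_t>(mpi_rank), 0}, {1, 100});
}

// Collective: call flush() on all ranks, including those that
// skipped loadChunk().
series.flush();
// Where requested, `chunk` is filled only after the flush.
```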