MPI
Collective Behavior
openPMD-api is designed to support both serial and parallel I/O. The latter is implemented through the Message Passing Interface (MPI).
A collective operation needs to be executed by all MPI ranks of the MPI communicator that was passed to openPMD::Series.
In contrast, independent operations can also be called by a subset of these MPI ranks.
For more information, please see the MPI standard documents, for example MPI-3.1 in “Section 2.4 - Semantic Terms”.
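As a minimal sketch, opening a parallel Series, flushing it, and letting it close at end of scope looks like this (the file name and access mode are placeholders):

```cpp
#include <openPMD/openPMD.hpp>

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    {
        // collective: all ranks of the communicator open the Series together
        openPMD::Series series(
            "data_%T.h5", openPMD::Access::CREATE, MPI_COMM_WORLD);

        // ... independent declarations and data operations go here ...

        // collective: all ranks participate in the flush
        series.flush();
    } // collective: the Series is closed when the last handle goes out of scope

    MPI_Finalize();
    return 0;
}
```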
Functionality | Behavior | Description
---|---|---
Series | collective | open and close
::flush() | collective | read and write
::setRankTable() | collective | write, performed at flush
::close() | coll/indep. | behavior specified by bool param
Iteration | independent | declare and open
::open() | collective | explicit open
Mesh | independent | declare, open, write
ParticleSpecies | independent | declare, open, write
::setAttribute | backend-specific | declare, write
::getAttribute() | independent | open, reading
RecordComponent | independent | declare, open, write
::resetDataset | backend-specific | declare, write
::makeConstant | backend-specific | declare, write
::storeChunk | independent | write
::loadChunk | independent | read
::availableChunks() | collective | read, immediate result
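The following sketch illustrates how several rows of this table combine in a typical write path; the file name, iteration index, mesh name, and extents are made up for the example:

```cpp
#include <openPMD/openPMD.hpp>

#include <mpi.h>

#include <cstdint>
#include <vector>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // collective: open the Series
    openPMD::Series series(
        "data_%T.h5", openPMD::Access::CREATE, MPI_COMM_WORLD);

    // independent: declare iteration, mesh and record component
    // (but consistently across all ranks that declare them)
    auto E_x = series.iterations[0].meshes["E"]["x"];

    // backend-specific: declare the dataset and its global extent
    std::uint64_t const localExtent = 1000;
    std::uint64_t const globalExtent =
        static_cast<std::uint64_t>(size) * localExtent;
    E_x.resetDataset({openPMD::determineDatatype<double>(), {globalExtent}});

    // independent: each rank schedules the write of its own slab
    std::vector<double> local(localExtent, static_cast<double>(rank));
    E_x.storeChunk(
        local,
        openPMD::Offset{static_cast<std::uint64_t>(rank) * localExtent},
        openPMD::Extent{localExtent});

    // collective: perform the actual write
    series.flush();

    MPI_Finalize();
    return 0;
}
```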
Warning
The openPMD-api will by default flush only those Iterations which are dirty, i.e. have been written to.
This is somewhat unfortunate in parallel setups since only the dirty status of the current MPI rank can be considered.
As a workaround, use Attributable::seriesFlush() on an Iteration (or an object contained within an Iteration) to force-flush that Iteration regardless of its dirty status.
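A sketch of this workaround, using a hypothetical helper function (the iteration index is whatever step the application just wrote):

```cpp
#include <openPMD/openPMD.hpp>

#include <cstdint>

// `series` is assumed to be a Series that was opened with an MPI communicator
void forceFlushStep(openPMD::Series &series, std::uint64_t step)
{
    // any object contained in the iteration (e.g. a mesh record component)
    // could be used instead of the iteration itself
    openPMD::Iteration iteration = series.iterations[step];

    // force-flush this iteration on every rank, regardless of whether the
    // local rank has written to it (i.e. regardless of its dirty status)
    iteration.seriesFlush();
}
```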
Tip
Just because an operation is independent does not mean it is allowed to be inconsistent.
For example, undefined behavior will occur if ranks pass differing values to ::setAttribute
or try to use differing names to describe the same mesh.
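For illustration, a hypothetical helper that sets an attribute consistently on all ranks (the attribute name and value are made up):

```cpp
#include <openPMD/openPMD.hpp>

#include <string>

// `series` is assumed to be a Series opened with an MPI communicator
void annotateRun(openPMD::Series &series)
{
    // fine: every rank passes an identical key and value
    series.setAttribute("simulation_name", std::string("example run"));

    // undefined behavior would result if, for instance, each rank passed
    // its own MPI rank number as the value here, or if ranks declared the
    // same mesh under differing names
}
```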
Efficient Parallel I/O Patterns
Note
This section is a stub. We will improve it in future versions.
Write data set chunks as large as possible in ::storeChunk operations.
Read in large, non-overlapping subsets of the stored data (::loadChunk); see the sketch below.
Ideally, read the same chunk extents as were written, e.g. through ParticlePatches (example to-do).
See the implemented I/O backends for individual tuning options.
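A sketch of such a read pattern, splitting a 1D record into one large, contiguous, non-overlapping slab per rank; the file name, iteration index, mesh name, and datatype are assumptions matching the write sketch above:

```cpp
#include <openPMD/openPMD.hpp>

#include <mpi.h>

#include <cstdint>
#include <memory>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // collective: open the Series for reading
    openPMD::Series series(
        "data_%T.h5", openPMD::Access::READ_ONLY, MPI_COMM_WORLD);

    auto E_x = series.iterations[0].meshes["E"]["x"];

    // split the global extent into one large, contiguous, non-overlapping
    // slab per rank (assumes the extent divides evenly, for brevity)
    std::uint64_t const global = E_x.getExtent().at(0);
    std::uint64_t const local = global / static_cast<std::uint64_t>(size);
    openPMD::Offset const offset{static_cast<std::uint64_t>(rank) * local};
    openPMD::Extent const extent{local};

    // independent: schedule a large read, ideally matching the extents
    // that were written (assumes the record was stored as double)
    std::shared_ptr<double> data = E_x.loadChunk<double>(offset, extent);

    // collective: perform the actual read; afterwards `data` is filled
    series.flush();

    MPI_Finalize();
    return 0;
}
```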