MPI

Collective Behavior

openPMD-api is designed to support both serial and parallel I/O. The latter is implemented through the Message Passing Interface (MPI).

A collective operation needs to be executed by all MPI ranks of the MPI communicator that was passed to openPMD::Series. In contrast, independent operations can also be called by only a subset of these MPI ranks. For more information, please see the MPI standard documents, for example MPI-3.1 in “Section 2.4 - Semantic Terms”.
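
The following is a minimal sketch of opening and closing such an MPI-parallel Series; the file name "data_%T.h5" and the use of MPI_COMM_WORLD are illustrative choices, not requirements.

    #include <openPMD/openPMD.hpp>

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        int mpi_rank{0}, mpi_size{1};
        MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);

        {
            // collective: all ranks of the communicator open the Series together
            openPMD::Series series(
                "data_%T.h5", openPMD::Access::CREATE, MPI_COMM_WORLD);

            // ... independent and collective operations, see the table below ...

        } // collective: the Series is closed by all ranks when it leaves scope

        MPI_Finalize();
        return 0;
    }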

Functionality         Behavior          Description
Series                collective        open and close
::flush()             collective        read and write
Iteration [1]         independent       declare and open
Mesh [1]              independent       declare, open, write
ParticleSpecies [1]   independent       declare, open, write
::setAttribute [2]    backend-specific  declare, write
::getAttribute        independent       open, reading
::storeChunk [1]      independent       write
::loadChunk           independent       read
[1] Individual backends, e.g. HDF5, will only support independent operations if the default, non-collective behavior is kept. (Otherwise these operations are collective.)
[2] HDF5 only supports collective attribute definitions/writes; ADIOS1 and ADIOS2 attributes can be written independently. If you want to support all backends equally, treat ::setAttribute as a collective operation.
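
The table rows translate to code roughly as in the following fragment; it is assumed to run inside the scope of the Series opened in the sketch above (reusing series, mpi_rank and mpi_size), and the mesh name "E"/"x" as well as the extents are purely illustrative.

    // independent (with default backend settings): declare a mesh record component
    auto E_x = series.iterations[1].meshes["E"]["x"];

    // all ranks must agree on the global dataset extent
    uint64_t const local_extent = 1000u;
    uint64_t const global_extent = local_extent * mpi_size;
    E_x.resetDataset(
        openPMD::Dataset(openPMD::Datatype::DOUBLE, {global_extent}));

    // independent: each rank stores only its own, non-overlapping chunk
    std::vector<double> local_data(local_extent, double(mpi_rank)); // needs <vector>
    E_x.storeChunk(local_data, {mpi_rank * local_extent}, {local_extent});

    // collective: all ranks must participate in the flush
    series.flush();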

Tip

Just because an operation is independent does not mean it is allowed to be inconsistent. For example, undefined behavior will occur if ranks pass differing values to ::setAttribute or try to use differing names to describe the same mesh.
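
A short fragment illustrating the point, continuing the sketches above (the attribute name "myCustomAttribute" is made up for illustration):

    // fine: every rank passes exactly the same name and value
    series.setAttribute("myCustomAttribute", 42);

    // undefined behavior: ranks would pass differing values for the same attribute
    // series.setAttribute("myCustomAttribute", mpi_rank);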

Efficient Parallel I/O Patterns

Note

This section is a stub. We will improve it in future versions.

In ::storeChunk operations, write data set chunks that are as large as possible.

Read in large, non-overlapping subsets of the stored data (::loadChunk). Ideally, read the same chunk extents as were written, e.g. through ParticlePatches (example to-do).
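
A read-side sketch of this pattern, under the assumption that the data was written as in the fragments above (same file name, record names, and an even distribution of the global extent across ranks):

    // collective: all ranks open the Series for reading
    openPMD::Series series(
        "data_%T.h5", openPMD::Access::READ_ONLY, MPI_COMM_WORLD);

    auto E_x = series.iterations[1].meshes["E"]["x"];

    // each rank requests exactly the non-overlapping chunk extents it wrote before
    uint64_t const local_extent = E_x.getExtent()[0] / mpi_size;
    auto chunk = E_x.loadChunk<double>({mpi_rank * local_extent}, {local_extent});

    // collective: the data behind `chunk` is valid only after the flush
    series.flush();

    double const first_value = chunk.get()[0];  // access the loaded chunk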

See the implemented I/O backends for individual tuning options.