MPI

Collective Behavior

openPMD-api is designed to support both serial and parallel I/O. The latter is implemented through the Message Passing Interface (MPI).

A collective operation needs to be executed by all MPI ranks of the MPI communicator that was passed to openPMD::Series. In contrast, independent operations can also be called by a subset of these MPI ranks. For more information, please refer to the MPI standard documents, for example MPI-3.1, “Section 2.4 - Semantic Terms”.
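
As a rough, minimal sketch of how these semantics play out in a parallel write (the file name data_%T.h5, the mesh record E/x and all extents are placeholders chosen for illustration), the following labels each call with the behavior listed in the table below:

    #include <openPMD/openPMD.hpp>
    #include <mpi.h>

    #include <cstdint>
    #include <vector>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
        int mpi_rank = 0, mpi_size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);

        {
            // collective: every rank of the communicator opens the Series
            openPMD::Series series(
                "data_%T.h5", openPMD::Access::CREATE, MPI_COMM_WORLD);

            // independent: declaring an Iteration and a mesh record component
            auto E_x = series.iterations[100].meshes["E"]["x"];

            // backend-specific: declare the global dataset extent
            // (collective in parallel HDF5 unless configured otherwise)
            openPMD::Extent global = {
                static_cast<std::uint64_t>(mpi_size) * 100, 300};
            E_x.resetDataset({openPMD::Datatype::DOUBLE, global});

            // independent: each rank queues its own, non-overlapping chunk
            std::vector<double> local(100 * 300, double(mpi_rank));
            openPMD::Offset offset = {
                static_cast<std::uint64_t>(mpi_rank) * 100, 0};
            openPMD::Extent extent = {100, 300};
            E_x.storeChunk(local, offset, extent);

            // collective: perform the queued writes
            series.flush();
        } // collective: the Series is closed on destruction by all ranks

        MPI_Finalize();
        return 0;
    }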

Functionality            Behavior          Description
-----------------------  ----------------  --------------------------------
Series                   collective        open and close
::flush()                collective        read and write
::setRankTable()         collective        write, performed at flush
::rankTable()            coll./indep.      behavior specified by bool param
Iteration [1]            independent       declare and open
::open() [4]             collective        explicit open
Mesh [1]                 independent       declare, open, write
ParticleSpecies [1]      independent       declare, open, write
::setAttribute [3]       backend-specific  declare, write
::getAttribute           independent       open, reading
RecordComponent [1]      independent       declare, open, write
::resetDataset [1] [2]   backend-specific  declare, write
::makeConstant [3]       backend-specific  declare, write
::storeChunk [1]         independent       write
::loadChunk              independent       read
::availableChunks [4]    collective        read, immediate result
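
On the reading side, the explicit, collective ::open() and the collective ::availableChunks() from the table above can be combined as in the following sketch (assuming a Series series that was opened for reading with an MPI communicator and a record E/x; all names are placeholders):

    // collective: explicitly open the Iteration on all ranks
    openPMD::Iteration it = series.iterations[100];
    it.open();

    // independent: navigate to a record component
    auto E_x = it.meshes["E"]["x"];

    // collective, immediate result: query the chunks as they were written
    openPMD::ChunkTable chunks = E_x.availableChunks();
    for (openPMD::WrittenChunkInfo const &chunk : chunks)
    {
        // chunk.offset and chunk.extent describe one stored block and can
        // be used to partition the subsequent ::loadChunk calls
        (void)chunk;
    }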

Warning

By default, the openPMD-api flushes only those Iterations that are dirty, i.e. that have been written to. This is somewhat unfortunate in parallel setups, since each rank can only consider its own dirty status: a rank that did not write to an Iteration will skip it while other ranks flush it. As a workaround, call Attributable::seriesFlush() on an Iteration (or on an object contained within an Iteration) to force-flush that Iteration regardless of its dirty status.
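
A sketch of this workaround, assuming a Series series that was opened with an MPI communicator; the condition and the data/offset/extent variables are hypothetical placeholders:

    openPMD::Iteration it = series.iterations[100];
    auto E_x = it.meshes["E"]["x"];

    // hypothetical: only some ranks contribute data in this step
    if (have_local_data)
    {
        E_x.storeChunk(local_data, chunk_offset, chunk_extent);
    }

    // Force this Iteration into the next flush on every rank, even on ranks
    // whose local copy was never written to and is thus not marked dirty.
    it.seriesFlush();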

Tip

That an operation is independent does not mean it may be performed inconsistently across ranks. For example, undefined behavior will occur if ranks pass differing values to ::setAttribute or use differing names to describe the same mesh.
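
For example (the attribute name and value are placeholders):

    // Consistent: every rank passes the same attribute name and value.
    series.setAttribute("mySimulationId", 42);

    // Undefined behavior: the value differs between ranks.
    // series.setAttribute("mySimulationId", mpi_rank);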

Efficient Parallel I/O Patterns

Note

This section is a stub. We will improve it in future versions.

Write data set chunks that are as large as possible in ::storeChunk operations.

Read in large, non-overlapping subsets of the stored data (::loadChunk). Ideally, read the same chunk extents as were written, e.g. through ParticlePatches (example to-do); a rough slab-read sketch follows after this list.

See the implemented I/O backends for individual tuning options.
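
A rough sketch of such a slab read, assuming a 2D record E/x whose first axis is evenly divisible by the number of ranks (all names are placeholders, and series is assumed to be opened for reading with an MPI communicator):

    auto E_x = series.iterations[100].meshes["E"]["x"];

    // Partition the first axis into one large, contiguous slab per rank.
    openPMD::Extent global = E_x.getExtent();
    std::uint64_t slab = global[0] / static_cast<std::uint64_t>(mpi_size);
    openPMD::Offset offset = {static_cast<std::uint64_t>(mpi_rank) * slab, 0};
    openPMD::Extent extent = {slab, global[1]};

    // independent: queue one large, non-overlapping read per rank
    auto data = E_x.loadChunk<double>(offset, extent);

    // collective: perform the queued reads
    series.flush();

    // data.get() now holds slab * global[1] values local to this rank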