Use of MPI for Metadata I/O

Background:


All of HDF5's I/O operations that store or retrieve metadata in the file are performed through the HDF5 "metadata cache".  This central component coordinates access to all HDF5 metadata and enforces the rules governing metadata creation and access.  The metadata cache provides deserialized metadata objects to other parts of the HDF5 library, either by reading metadata from the file and deserializing it into a metadata object, or by returning an already deserialized object that it has cached from a prior use.  When other parts of the HDF5 library are finished using a metadata object, they release it back to the metadata cache, which may hold it for future use.
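One observable effect of this reuse is the metadata cache hit rate, which can be queried through the public API.  Below is a minimal sketch, assuming a serial build for simplicity; the file name "example.h5" and group name "/some_group" are placeholders, and error checking is omitted for brevity.

    #include "hdf5.h"
    #include <stdio.h>

    int main(void)
    {
        hid_t  file = H5Fopen("example.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        double hit_rate = 0.0;

        /* Each open touches group metadata; after the first read, the
         * cached, already deserialized object is expected to be reused. */
        for (int i = 0; i < 10; i++) {
            hid_t grp = H5Gopen2(file, "/some_group", H5P_DEFAULT);
            H5Gclose(grp);
        }

        H5Fget_mdc_hit_rate(file, &hit_rate);
        printf("metadata cache hit rate: %.2f\n", hit_rate);

        H5Fclose(file);
        return 0;
    }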


Eventually, as the limits of the cache are reached, metadata objects that haven't been used recently are evicted from the metadata cache.  If a metadata object has been modified, the metadata cache serializes it and writes the serial form back to the HDF5 file.  Unmodified metadata objects are destroyed without accessing the file.
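The limits that trigger eviction can be inspected and adjusted through the file access property list.  The sketch below is illustrative only: the size values and file name are placeholders, and all other configuration fields are left at the values returned by H5Pget_mdc_config.

    #include "hdf5.h"

    int main(void)
    {
        hid_t               fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5AC_cache_config_t config;

        /* Retrieve the current cache configuration, then adjust the
         * size limits that govern when eviction begins. */
        config.version = H5AC__CURR_CACHE_CONFIG_VERSION;
        H5Pget_mdc_config(fapl, &config);

        config.set_initial_size = 1;                  /* use initial_size      */
        config.initial_size     = 8 * 1024 * 1024;    /* illustrative: 8 MiB   */
        config.max_size         = 32 * 1024 * 1024;   /* upper bound on cache  */
        config.min_size         = 1 * 1024 * 1024;

        H5Pset_mdc_config(fapl, &config);

        hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        /* ... metadata-heavy work: on eviction, clean entries are simply
         *     discarded while dirty entries are serialized and written ... */
        H5Fclose(file);
        H5Pclose(fapl);
        return 0;
    }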



HDF5 API Calls with an MPI Application:


When an MPI application creates or modifies metadata in an HDF5 file, all processes must perform the HDF5 API call collectively.
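For example, creating a group modifies the file's metadata, so every rank must make the call.  The following is a minimal sketch of the collective pattern, assuming a file access property list configured for MPI-IO; the file and group names are placeholders and error checking is omitted.

    #include "hdf5.h"
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

        /* Collective: all ranks create the file together. */
        hid_t file = H5Fcreate("parallel.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* Collective: creating a group modifies file metadata, so every
         * rank participates in the same call, in the same order. */
        hid_t group = H5Gcreate2(file, "/results", H5P_DEFAULT, H5P_DEFAULT,
                                 H5P_DEFAULT);

        H5Gclose(group);
        H5Fclose(file);
        H5Pclose(fapl);

        MPI_Finalize();
        return 0;
    }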


One process in an MPI application may perform metadata operations that open or read objects in an HDF5 file independently of other processes.
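The sketch below illustrates this for a read-only metadata query issued by rank 0 alone, under the assumption above that such operations need not be collective; the file and group names are placeholders.

    #include "hdf5.h"
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

        /* Collective: all ranks open the file together. */
        hid_t file = H5Fopen("parallel.h5", H5F_ACC_RDONLY, fapl);

        if (rank == 0) {
            H5G_info_t info;

            /* Reading group metadata does not modify the file, so only
             * rank 0 makes this call; other ranks proceed independently. */
            H5Gget_info_by_name(file, "/results", &info, H5P_DEFAULT);
            printf("group has %llu links\n", (unsigned long long)info.nlinks);
        }

        H5Fclose(file);   /* collective: all ranks close the file */
        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }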



The HDF5 Metadata Cache's Use of MPI:


The metadata cache in HDF5 can use MPI to synchronize the I/O operations that are performed when evicting metadata objects; this behavior is covered in the following Overview of the HDF5 Metadata Cache.
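As a related illustration (not part of that overview), HDF5 1.10 and later expose file access property list settings that control whether metadata writes, performed when dirty entries are flushed or evicted, and metadata reads are carried out collectively.  A minimal sketch follows; the file name is a placeholder.

    #include "hdf5.h"
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

        /* Write modified metadata collectively when it is flushed or
         * evicted from the metadata cache. */
        H5Pset_coll_metadata_write(fapl, 1);

        /* Optionally make metadata reads collective as well. */
        H5Pset_all_coll_metadata_ops(fapl, 1);

        hid_t file = H5Fcreate("parallel.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* ... collective object creation and raw data I/O as usual ... */

        H5Fclose(file);
        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }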
