HDF5 documents and links 
Introduction to HDF5 
HDF5 User Guide 
And in this document, the HDF5 Reference Manual 
H5DS   H5IM   H5LT   H5PT   H5TB 
H5   H5A   H5D   H5E   H5F   H5G   H5I 
H5L   H5O   H5P   H5R   H5S   H5T   H5Z 
Tools   Datatypes   Fortran   Compatibility Macros 
(Printable PDF of this Reference Manual) 

HDF5 Tools

HDF5 Tool Interfaces

HDF5-related tools are available to assist the user in a variety of activities, including examining or managing HDF5 files, converting raw data between HDF5 and other special-purpose formats, moving data and files between the HDF4 and HDF5 formats, measuring HDF5 library performance, and managing HDF5 library and application compilation, installation and configuration. Unless otherwise specified below, these tools are distributed and installed with HDF5.


Tool Name: h5dump
Syntax:
h5dump [OPTIONS] file
Purpose:
Displays HDF5 file contents.
Description:
h5dump enables the user to examine the contents of an HDF5 file and dump those contents, in human readable form, to an ASCII file.

h5dump dumps HDF5 file content to standard output. It can display the contents of the entire HDF5 file or selected objects, which can be groups, datasets, a subset of a dataset, links, attributes, or datatypes.

The --header option displays object header information only.

Names are the absolute names of the objects. h5dump displays objects in the same order as they are listed on the command line. If a name does not begin with a slash, h5dump begins searching for the specified object at the root group.

If an object is hard linked with multiple names, h5dump displays the content of the object at its first occurrence; later occurrences show only the link information.

h5dump assigns a name for any unnamed datatype in the form of #oid1:oid2, where oid1 and oid2 are the object identifiers assigned by the library. The unnamed types are displayed within the root group.

Datatypes are displayed with standard type names. For example, if a dataset is created with H5T_NATIVE_INT type and the standard type name for integer on that machine is H5T_STD_I32BE, h5dump displays H5T_STD_I32BE as the type of the dataset.

h5dump can also dump a subset of a dataset. This feature operates in much the same way as hyperslabs in HDF5; the parameters specified on the command line are passed to the function H5Sselect_hyperslab and the resulting selection is displayed.
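The mapping from subsetting parameters to selected elements can be sketched in shell. This is a one-dimensional illustration only (real selections are n-dimensional), mirroring how start/stride/count/block are passed to H5Sselect_hyperslab:

```shell
# Sketch: enumerate the 1-D element indices selected by the subsetting
# parameters. One dimension only; real selections are n-D.
hyperslab_indices() {
  # $1=start $2=stride $3=count $4=block
  out=""
  c=0
  while [ "$c" -lt "$3" ]; do
    b=0
    while [ "$b" -lt "$4" ]; do
      out="$out $(( $1 + c * $2 + b ))"
      b=$((b + 1))
    done
    c=$((c + 1))
  done
  echo $out   # unquoted: collapses the leading space
}
hyperslab_indices 1 3 2 2   # prints "1 2 4 5"
```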

The h5dump output is described in detail in the DDL for HDF5, the Data Description Language document.

Note: It is not permissible to specify multiple attributes, datasets, datatypes, groups, or soft links with one flag. For example, one may not issue the command
         WRONG:   h5dump -a /attr1 /attr2 foo.h5
to display both /attr1 and /attr2. One must issue the following command:
         CORRECT:   h5dump -a /attr1 -a /attr2 foo.h5
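When the object names come from a list, the one-flag-per-object rule is easy to script: emit a separate -a flag for each attribute. (foo.h5 and the attribute names are placeholders.)

```shell
# Build one -a flag per attribute rather than listing several names
# after a single flag.
args=""
for attr in /attr1 /attr2; do
  args="$args -a $attr"
done
echo "h5dump$args foo.h5"   # prints "h5dump -a /attr1 -a /attr2 foo.h5"
```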

You can select the file driver with which to open the HDF5 file by using the --filedriver (-f) command-line option. Acceptable values for the --filedriver option are "sec2", "family", "split", and "multi". If the file driver flag is not specified, the file is opened with each driver in turn, in the order listed above, until one driver succeeds in opening the file.
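The fallback order can be sketched as a loop. Here try_open is a hypothetical stand-in for the library's per-driver open attempt, not a real command; the stub pretends only "split" succeeds.

```shell
# Sketch of the fallback: try each driver in order until one opens the file.
try_open() { [ "$1" = "split" ]; }   # stub: pretend only "split" works

open_with_fallback() {
  for drv in sec2 family split multi; do
    if try_open "$drv" "$1"; then
      echo "$drv"
      return 0
    fi
  done
  return 1
}
open_with_fallback example.h5   # prints "split" with the stub above
```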

One-byte integer type data is displayed in decimal by default. When displayed in ASCII, a non-printable code is displayed as 3 octal digits preceded by a backslash, unless there is a C language escape sequence for it: for example, CR and LF are printed as \r and \n. Although the NUL code is represented as \0 in C, it is printed as \000 to avoid ambiguity, as illustrated by the following 1-byte char data (since this is not a string, an embedded NUL is possible):

	141 142 143 000 060 061 062 012
	  a   b   c  \0   0   1   2  \n 
h5dump prints them as "abc\000012\n". But if h5dump prints NUL as \0, the output is "abc\0012\n" which is ambiguous.
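The ambiguity is easy to verify with shell printf, which uses the same C-style octal escapes: "\0012" parses as octal \001 followed by the character "2", not as NUL followed by "012", so the ambiguous form yields 6 bytes instead of the intended 8.

```shell
# Count the bytes produced by the ambiguous escaping "abc\0012\n":
# a, b, c, \001, '2', '\n' = 6 bytes, not the intended 8.
ambiguous=$(printf 'abc\0012\n' | wc -c)
echo "$((ambiguous)) bytes"   # prints "6 bytes"
```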

XML Output:
With the --xml option, h5dump generates XML output. This output contains a complete description of the file, marked up in XML. The XML conforms to the HDF5 Document Type Definition (DTD) available at http://www.hdfgroup.org/DTDs/HDF5-File.dtd.

The XML output is suitable for use with other tools, including the HDF5 Java Tools.

Options and Parameters:

Examples:
  1. Dump the group /GroupFoo/GroupBar in the file quux.h5:
         h5dump -g /GroupFoo/GroupBar quux.h5

  2. Dump the dataset Fnord, which is in the group /GroupFoo/GroupBar in the file quux.h5:
         h5dump -d /GroupFoo/GroupBar/Fnord quux.h5

  3. Dump the attribute metadata of the dataset Fnord, which is in the group /GroupFoo/GroupBar in the file quux.h5:
         h5dump -a /GroupFoo/GroupBar/Fnord/metadata quux.h5

  4. Dump the attribute metadata which is an attribute of the root group in the file quux.h5:
         h5dump -a /metadata quux.h5

  5. Produce an XML listing of the file bobo.h5, saving the listing in the file bobo.h5.xml:
         h5dump --xml bobo.h5 > bobo.h5.xml

  6. Dump a subset of the dataset /GroupFoo/databar/ in the file quux.h5:
         h5dump -d /GroupFoo/databar --start="1,1" --stride="2,3"
             --count="3,19" --block="1,1" quux.h5


  7. The same example, using the short form to specify the subsetting parameters:
         h5dump -d "/GroupFoo/databar[1,1;2,3;3,19;1,1]" quux.h5

  8. Dump a binary copy of the dataset /GroupD/FreshData/ in the file quux.h5, with data written in little-endian form, to the output file FreshDataD.bin:
         h5dump -d "/GroupD/FreshData" -b LE
             -o "FreshDataD.bin" quux.h5


Current Status:
The current version of h5dump displays the following information:
See Also:
History:

Tool Name: h5ls
Syntax:
h5ls [OPTIONS] file [OBJECTS...]
Purpose:
Prints information about a file or dataset.
Description:
h5ls prints selected information about file objects in the specified format.
Options and Parameters:

Tool Name: h5diff    
Syntax:
h5diff [OPTIONS] file1 file2 [object1 [object2 ] ]

ph5diff [OPTIONS] file1 file2 [object1 [object2 ] ]

Purpose:
Compare two HDF5 files and report the differences.

Description:
h5diff and ph5diff are command line tools that compare two HDF5 files, file1 and file2, and report the differences between them. h5diff is for serial use while ph5diff is for use in parallel environments.

Optionally, h5diff and ph5diff will compare two objects within these files. If only one object, object1, is specified, h5diff will compare object1 in file1 with object1 in file2. If two objects, object1 and object2, are specified, h5diff will compare object1 in file1 with object2 in file2.

object1 and object2 can be groups, datasets, named datatypes, or links and must be expressed as absolute paths from the respective file’s root group.

h5diff and ph5diff have the following output modes:

Normal mode Prints the number of differences found and where they occurred.
Report mode (-r) Prints the above plus the differences.
Verbose mode (-v)     Prints all of the above plus a list of objects and warnings.
Quiet mode (-q) Prints no output.
(h5diff always returns an exit code of 1 when differences are found.)

h5diff and NaNs:
h5diff detects when a value in a dataset is a NaN (a "not a number" value), but does not differentiate among various types of NaNs. Thus, when one NaN is compared with another NaN, h5diff treats them as equal; when a NaN is compared with a valid number, h5diff treats them as not equal.

Difference between h5diff and ph5diff:
With the following exception, h5diff and ph5diff behave identically. With ph5diff, the comparison of objects is shared across multiple processors, with the comparison of each pair of objects assigned to a single processor. I.e., the comparison of a single object, even a very large dataset, in each file is not shared.

Options and Parameters:
    -h   or  --help Print help message.
    -V   or  --version Print version number and exit.
    -r   or  --report Report mode — Print the differences.
    -v   or  --verbose Verbose mode — Print the differences, a list of objects, and warnings.
    -q   or  --quiet Quiet mode — Do not print output.
    -n count   or
    --count=count
    Print differences, stopping after count differences are found. count must be a positive integer.
    -d delta   or
    --delta=delta
    Print only differences that are greater than the limit delta. delta must be a positive number. The comparison criterion is whether the absolute value of the difference between two corresponding values is greater than delta (i.e., |a-b| > delta, where a is a value in file1 and b is a value in file2).
    -p relative   or
    --relative=relative        
    Print only differences that are greater than a relative error. relative must be a positive number. The comparison criterion is whether the absolute value of the difference between 1 and the ratio of two corresponding values is greater than relative (i.e., |1-(b/a)| > relative, where a is a value in file1 and b is a value in file2).
    file1    file2 The HDF5 files to be compared.
    object1    object2 Specific object(s) within the files to be compared.
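The -d criterion can be sketched with awk as a rough stand-in (dataset traversal and datatype handling are omitted): flag value pairs where |a - b| > delta, with column 1 playing the role of file1's value and column 2 file2's.

```shell
# Report the line numbers of value pairs where |a - b| > delta.
printf '1.00 1.005\n2.00 2.10\n' |
awk -v delta=0.01 '{ d = $1 - $2; if (d < 0) d = -d; if (d > delta) print NR }'
# prints "2": only the second pair differs by more than 0.01
```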

Returns:
Returns 1 (one) if differences are found, 0 (zero) if no differences are found, and 2 (two) if the call fails.
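In a script, the documented exit codes can be mapped to messages with a small wrapper; here "$@" is the full h5diff command line, and true below merely stands in for an h5diff run that exits 0.

```shell
# Map h5diff's documented exit status (0, 1, 2) to a message.
classify_diff() {
  "$@"
  case $? in
    0) echo "no differences" ;;
    1) echo "differences found" ;;
    *) echo "comparison failed" ;;
  esac
}
classify_diff true   # stands in for an h5diff run; prints "no differences"
```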

Examples:
The following h5diff call compares the object /a/b in file1 with the object /a/c in file2:
    h5diff file1 file2 /a/b /a/c
This h5diff call compares the object /a/b in file1 with the same object in file2:
    h5diff file1 file2 /a/b
And this h5diff call compares all objects in both files:
    h5diff file1 file2

History:
    Release     Change
    1.6.0 h5diff introduced in this release.
    1.8.0 ph5diff introduced in this release.
    h5diff command line syntax changed in this release.
    1.8.2 and
      1.6.8
    Return value on failure changed in this release.

Tool Name: h5repack    
Syntax:
h5repack [OPTIONS] in_file out_file

h5repack -i in_file -o out_file [OPTIONS]

Purpose:
Copies an HDF5 file to a new file with or without compression and/or chunking.

Description:
h5repack is a command line tool that applies HDF5 filters to an input file in_file, saving the output in a new output file, out_file.

Options and Parameters:
-i in_file
Input HDF5 file

-o out_file
Output HDF5 file

-h   or  --help
Print help message.

-v   or  --verbose
Print verbose output.

-V   or  --version
Print version number.

-n   or  --native
Use native HDF5 datatypes when repacking.
(Default behavior is to use original file datatypes.)
Note that this is a change in default behavior; prior to Release 1.6.6, h5repack generated files only with native datatypes.

-L   or  --latest
Use latest version of the HDF5 file format.

-c max_compact_links   or  --compact=max_compact_links
Set the maximum number of links, max_compact_links, that can be stored in a group header message (compact format).

-d min_indexed_links   or  --indexed=min_indexed_links
Set the minimum number of links, min_indexed_links, in the indexed format.

max_compact_links and min_indexed_links are closely related and the first must be equal to or greater than the second. In the general case, however, performance will suffer, possibly dramatically, if they are equal; performance can be improved by tuning the gap between the two values to minimize unnecessary thrashing between the compact storage and indexed storage modes as group size waxes and wanes. The relationship between max_compact_links and min_indexed_links is most important when group sizes are highly dynamic; that relationship is much less important in files with a stable structure. Compact mode is space and performance-efficient when groups have small numbers of members; indexed mode requires slightly more storage space, but provides increasingly better performance as the number of members in each group increases.

-m number   or  --threshold=number
Apply filter(s) only to objects whose size in bytes is equal to or greater than number. If no size is specified, a threshold of 1024 bytes is assumed.

-s min_size[:header_type]   or  --ssize=min_size[:header_type]
Set the minimum size of optionally specified types of shared object header messages.

min_size is the minimum size, in bytes, of a shared object header message. Header messages smaller than the specified size will not be shared.

header_type specifies the type(s) of header message that this minimum size is to be applied to. Valid values of header_type are any of the following:
  dspace  for dataspace header messages
  dtype   for datatype header messages
  fill    for fill values
  pline   for property list header messages
  attr    for attribute header messages
If header_type is not specified, min_size will be applied to all header messages.

-f filter   or  --filter=filter
Filter type

filter is a string of the following format:

list_of_objects : name_of_filter[=filter_parameters]

list_of_objects is a comma separated list of object names meaning apply the filter(s) only to those objects. If no object names are specified, the filter is applied to all objects.

name_of_filter can be one of the following:
     GZIP, to apply the HDF5 GZIP filter (GZIP compression)
     SZIP, to apply the HDF5 SZIP filter (SZIP compression)
     SHUF, to apply the HDF5 shuffle filter
     FLET, to apply the HDF5 checksum filter
     NBIT, to apply the HDF5 N-bit filter
     SOFF, to apply the HDF5 scale/offset filter
     NONE, to remove any filter(s)

filter_parameters conveys optional compression information:
     GZIP=deflation_level from 1-9
     SZIP=pixels_per_block,coding_method
         pixels_per_block is an even number in the range 2-32.
         coding_method is EC or NN.
     SHUF (no parameter)
     FLET (no parameter)
     NBIT (no parameter)
     SOFF=scale_factor,scale_type
         scale_factor is an integer.
         scale_type is either IN or DS.
     NONE (no parameter)
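Composing the -f argument from its parts can clarify the list_of_objects : name_of_filter[=filter_parameters] format (the object names and deflation level below are placeholders):

```shell
# Compose an h5repack filter argument: objects, filter name, parameters.
objs="dset1,dset2"
flt="GZIP=6"
echo "h5repack -f ${objs}:${flt} in.h5 out.h5"
# prints "h5repack -f dset1,dset2:GZIP=6 in.h5 out.h5"
```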

-l layout   or  --layout=layout
Layout type

layout is a string of the following format:

list_of_objects : layout_type[=layout_parameters]

list_of_objects is a comma separated list of object names, meaning that layout information is supplied for those objects. If no object names are specified, the layout is applied to all objects.

layout_type can be one of the following:
     CHUNK, to apply chunking layout
     COMPA, to apply compact layout
     CONTI, to apply contiguous layout

layout_parameters is present only in the CHUNK case and specifies the chunk size of each dimension in the following format with no intervening spaces:
     dim_1 x dim_2 x ... x dim_n

-e file
File containing the -f and -l options (only filter and layout flags)

in_file
Input HDF5 file

out_file
Output HDF5 file

Examples:
  1. h5repack -f GZIP=1 -v file1 file2
    Applies GZIP compression to all objects in file1 and saves the output in file2. Prints verbose output.
     
  2. h5repack -f dset1:SZIP=8,NN file1 file2
    Applies SZIP compression only to object dset1.
     
  3. h5repack -l dset1,dset2:CHUNK=20x10 file1 file2
    Applies chunked layout to objects dset1 and dset2.

History:
    Release     Command Line Tool
    1.6.2 h5repack introduced in this release.
    1.8.0 h5repack command line syntax changed in this release.
    1.8.1 Original syntax restored; both the new and the original syntax are now supported.

Tool Name: h5repart
Syntax:
h5repart [-v] [-V] [-[b|m]N[g|m|k]] [-family_to_sec2] source_file dest_file
Purpose:
Repartitions a file or family of files.
Description:
h5repart joins a family of files into a single file, or copies one family of files to another while changing the size of the family members. h5repart can also be used to copy a single file to a single file with holes. At this stage, h5repart cannot split a single non-family file into a family of files.

To convert a family of files to a single non-family file (a sec2 file), use the -family_to_sec2 option.

Sizes associated with the -b and -m options may be suffixed with g for gigabytes, m for megabytes, or k for kilobytes.
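The suffix arithmetic can be sketched as follows (assuming binary multiples, i.e. 1k = 1024 bytes; the tool's exact interpretation is not restated here):

```shell
# Expand the g/m/k suffixes accepted by -b and -m into byte counts.
to_bytes() {
  case $1 in
    *g) echo $(( ${1%g} * 1073741824 )) ;;
    *m) echo $(( ${1%m} * 1048576 )) ;;
    *k) echo $(( ${1%k} * 1024 )) ;;
    *)  echo "$1" ;;                      # bare number: already in bytes
  esac
}
to_bytes 2m   # prints "2097152"
```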

File family names include an integer printf format such as %d.

Options and Parameters:
    -v Produce verbose output.
    -V Print a version number and exit.
    -bN The I/O block size. Defaults to 1kB.
    -mN The destination member size. Defaults to 1GB.
    -family_to_sec2 Convert file driver from family to sec2
    source_file     The name of the source file
    dest_file The name of the destination file(s)

Tool Name: h5jam/h5unjam
Syntax:
h5jam -u user_block -i in_file.h5 [-o out_file.h5] [--clobber]
h5jam -h
 
h5unjam -i in_file.h5 [-u user_block | --delete] [-o out_file.h5]
h5unjam -h
Purpose:
h5jam: Adds a user block to the front of an HDF5 file, creating a new concatenated file.
h5unjam: Splits a user block and HDF5 file into two files: user block data and HDF5 data.
Description:
h5jam concatenates a user_block file and an HDF5 file to create an HDF5 file with a user block. The user block can be either binary or text. The output file is padded so that the HDF5 header begins on byte 512, 1024, 2048, etc. (See the HDF5 File Format.)
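Assuming the boundary doubles from 512 as the sequence above suggests, the smallest valid header offset for a user block of n bytes can be sketched as:

```shell
# Sketch (assumption: boundaries are 512, 1024, 2048, ...): find the
# smallest boundary large enough to hold an n-byte user block.
next_boundary() {
  n=$1
  b=512
  while [ "$b" -lt "$n" ]; do
    b=$((b * 2))
  done
  echo "$b"
}
next_boundary 600   # prints "1024"
```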

If out_file.h5 is given, a new file is created with the user_block followed by the contents of in_file.h5. In this case, in_file.h5 is unchanged.

If out_file.h5 is not specified, the user_block is added to in_file.h5.

If in_file.h5 already has a user block, the contents of user_block will be added to the end of the existing user block, and the file shifted to the next boundary. If --clobber is set, any existing user block will be overwritten.

h5unjam splits an HDF5 file, writing the user block to a file or to stdout and the HDF5 file to an HDF5 file with a header at byte zero (0, i.e., with no user block).

If out_file.h5 is given, a new file is created with the contents of in_file.h5 without the user block. In this case, in_file.h5 is unchanged.

If out_file.h5 is not specified, the user_block is removed and in_file.h5 is rewritten, starting at byte 0.

If user_block is set, the user block will be written to user_block. If user_block is not set, the user block, if any, will be written to stdout. If --delete is selected, the user block will not be written.

Examples:
Create new file, newfile.h5, with the text in file mytext.txt as the user block for the HDF5 file file.h5.
    h5jam -u mytext.txt -i file.h5 -o newfile.h5
Add the text in file mytext.txt to the front of the HDF5 file file.h5.
    h5jam -u mytext.txt -i file.h5 
Overwrite the user block, if any, in file.h5 with the contents of mytext.txt.
    h5jam -u mytext.txt -i file.h5 --clobber
For an HDF5 file, with_ub.h5, with a user block, extract the user block to user_block.txt and the HDF5 portion of the file to wo_ub.h5.
    h5unjam -i with_ub.h5 -u user_block.txt -o wo_ub.h5
Return Value:
h5jam returns the size of the output file, or -1 if an error occurs.

h5unjam returns the size of the output file, or -1 if an error occurs.

Caveats
These tools copy all the data sequentially in the file(s) to new offsets. For a large file, this copy will take a long time.

The most efficient way to create a user block is to create the file with a user block (see H5Pset_user_block), and write the user block data into that space from a program.

The user block is completely opaque to the HDF5 library and to the h5jam and h5unjam tools. The user block is simply read or written as a string of bytes, which could be text or any kind of binary data; it is up to the user to know what the contents of the user block means and how to process it.

When the user block is extracted, all the data is written to the output, including any padding or unwritten data.

This tool moves the HDF5 portion of the file through byte copies; i.e., it does not read or interpret the HDF5 objects.


Tool Name: h5copy
Syntax:
h5copy [OPTIONS] [OBJECTS]
Purpose:
Copy an object from one HDF5 file to another HDF5 file.
Description:
h5copy copies an HDF5 object (dataset, named datatype, or group) from one HDF5 file to another HDF5 file; the destination file may already exist or may be created by h5copy.
Arguments:
Options and Parameters:
-h   or   --help
Print a usage message and exit.
-v   or   --verbose
Produce verbose output, printing information regarding the specified options and objects.
-f flag_type   or   --flag=flag_type
Specify one or more of several copy options; flag_type may be one of the flag_type values listed below, or a logical AND of two or more.
-V   or   --Version
Print version information.

Objects (all required):
-i input_file   or   --input=input_file
Input HDF5 file name
-o output_file   or   --output=output_file
Output HDF5 file name (existing or non-existing)
-s source_object   or   --source=source_object
Input HDF5 object name within the source file
-d destination_object   or   --destination=destination_object
Output HDF5 object name within the destination file
 
Example Usage
In verbose mode, create a new file, test1.out.h5, containing the object array in the root group, copied from the existing file test1.h5 and object array.
    h5copy -v -i "test1.h5" -o "test1.out.h5" -s "/array" -d "/array"

In verbose mode and using the flag shallow to prevent recursion in the file hierarchy, create a new file, test1.out.h5, containing the object array in the root group, copied from the object array in the existing file test1.h5.

    h5copy -v -f shallow -i "test1.h5" -s "/array" -o "test1.out.h5" -d "/array"
flag_type values:
    shallow  Copy only immediate members of a group.
    (Default: Recursively copy all objects below the group.)
    soft  Expand soft links to copy target objects.
    (Default: Keep soft links as they are.)
    ext  Expand external links to copy external objects.
    (Default: Keep external links as they are.)
    ref  Copy objects that are pointed to by references.
    (Default: Update only the values of object references.)
    attr  Copy objects without copying attributes.
    (Default: Copy objects and all attributes.)
    allflags   Switch each setting above from the default to the setting described in this table.
    Equivalent to logical AND of all flags above.

History:
    Release     Command Line Tool
    1.8.0 Tool introduced in this release.

    Tool Name: h5mkgrp
    Syntax:
    h5mkgrp [OPTIONS] file_name group_name...
    Purpose:
    Creates new group(s) in an HDF5 file.

    Description:
    h5mkgrp  creates one or more new groups in an HDF5 file.

    Options and Parameters:
    file_name
    Name of HDF5 file within which new group is to be created.
    group_name
    Name of group to be created; specified as full path name from the root group, i.e., starting with a slash (/).
     
    Options:
    -h, --help
    Print a usage message and exit.
    -l, --latest
    Use latest version of file format to create new group.
    -p, --parents
    Create parent or intervening groups as needed. Issue no error if intervening groups or new group already exist.
    -v, --verbose
    Print verbose output, including information about file, group(s), and options.
    -V, --version
    Print tool version number then exit. Tool version number is that of the corresponding HDF5 Library.

    Example Usage
    Create a new group, new_group,  within the existing group /a/b in the file HDF5_file.
        h5mkgrp "HDF5_file" "/a/b/new_group"
    Create a new group, new_group,  within the group /a/b in the file HDF5_file. Create the groups a and b if they do not already exist. Issue no error if the intervening groups or the new group already exist.
        h5mkgrp -p "HDF5_file" "/a/b/new_group"
    Create the new groups /a/b/new_c  and /a/x/new_4  in the file HDF5_file. The groups /a/b  and /a/x  must already exist.
        h5mkgrp -p "HDF5_file" "/a/b/new_c" "/a/x/new_4"

    History:
      Release     Command Line Tool
      1.8.0 Tool introduced in this release.

    Tool Name: h5import
    Syntax:
    h5import infile in_options [infile in_options ...] -o outfile
    h5import infile in_options [infile in_options ...] -outfile outfile
    h5import -h
    h5import -help
    Purpose:
    Imports data into an existing or new HDF5 file.
    Description:
    h5import converts data from one or more ASCII or binary files, infile, into the same number of HDF5 datasets in the existing or new HDF5 file, outfile. Data conversion is performed in accordance with the user-specified type and storage properties specified in in_options.

    The primary objective of h5import is to import floating point or integer data. The utility's design allows for future versions that accept ASCII text files and store the contents as a compact array of one-dimensional strings, but that capability is not implemented in HDF5 Release 1.6.

    Input data and options:
    Input data can be provided in one of the following forms:

    • As an ASCII, or plain-text, file containing either floating point or integer data
    • As a binary file containing either 32-bit or 64-bit native floating point data
    • As a binary file containing native integer data, signed or unsigned and 8-bit, 16-bit, 32-bit, or 64-bit.
    • As an ASCII, or plain-text, file containing text data. (This feature is not implemented in HDF5 Release 1.6.)
    Each input file, infile, contains a single n-dimensional array of values of one of the above types expressed in the order of fastest-changing dimensions first.

    Floating point data in an ASCII input file may be expressed either in fixed-point form (e.g., 323.56) or in scientific notation (e.g., 3.23E+02).

    Each input file can be associated with options specifying the datatype and storage properties. These options can be specified either as command line arguments or in a configuration file. Note that exactly one of these approaches must be used with a single input file.

    Command line arguments, best used with simple input files, can be used to specify the class, size, and dimensions of the input data and a path identifying the output dataset.

    The recommended means of specifying input data options is in a configuration file; this is also the only means of specifying advanced storage features. See further discussion in "The configuration file" below.

    The only required option for input data is dimension sizes; defaults are available for all others.

    h5import will accept up to 30 input files in a single call. Other considerations, such as the maximum length of a command line, may impose a more stringent limitation.

    Output data and options:
    The name of the output file is specified following the -o or -output option in outfile. The data from each input file is stored as a separate dataset in this output file. outfile may be an existing file. If it does not yet exist, h5import will create it.

    Output dataset information and storage properties can be specified only by means of a configuration file.
      Dataset path If the groups in the path leading to the dataset do not exist, h5import will create them.
    If no group is specified, the dataset will be created as a member of the root group.
    If no dataset name is specified, the default name is dataset0 for the first input dataset, dataset1 for the second input dataset, dataset2 for the third input dataset, etc.
    h5import does not overwrite a pre-existing dataset of the specified or default name. When an existing dataset of a conflicting name is encountered, h5import quits with an error; the current input file and any subsequent input files are not processed.
      Output type Datatype parameters for output data
          Output data class Signed or unsigned integer or floating point
          Output data size 8-, 16-, 32-, or 64-bit integer
    32- or 64-bit floating point
          Output architecture IEEE
    STD
    NATIVE (Default)
    Other architectures are included in the h5import design but are not implemented in this release.
          Output byte order Little- or big-endian.
    Relevant only if output architecture is IEEE, UNIX, or STD; fixed for other architectures.
      Dataset layout and storage  
            properties
    Denote how raw data is to be organized on the disk. If none of the following are specified, the default configuration is contiguous layout and with no compression.
          Layout Contiguous (Default)
    Chunked
          External storage Allows raw data to be stored in a non-HDF5 file or in an external HDF5 file.
    Requires contiguous layout.
          Compressed Sets the type of compression and the level to which the dataset must be compressed.
    Requires chunked layout.
          Extendable Allows the dimensions of the dataset to increase over time and/or to be unlimited.
    Requires chunked layout.
          Compressed and
            extendable
    Requires chunked layout.
       

    Command-line arguments:
    The h5import syntax for the command-line arguments, in_options, is as follows:
         h5import infile -d dim_list [-p pathname] [-t input_class] [-s input_size] [infile ...] -o outfile
    or
    h5import infile -dims dim_list [-path pathname] [-type input_class] [-size input_size] [infile ...] -outfile outfile
    or
    h5import infile -c config_file [infile ...] -outfile outfile
    Note the following: If the -c config_file option is used with an input file, no other argument can be used with that input file. If the -c config_file option is not used with an input data file, the -d dim_list argument (or -dims dim_list) must be used and any combination of the remaining options may be used. Any arguments used must appear in exactly the order used in the syntax declarations immediately above.

    The configuration file:
    A configuration file is specified with the -c config_file option:
         h5import infile -c config_file [infile -c config_file2 ...] -outfile outfile

    The configuration file is an ASCII file and must be organized as "Configuration_Keyword Value" pairs, with one pair on each line. For example, the line indicating that the input data class (configuration keyword INPUT-CLASS) is floating point in a text file (value TEXTFP) would appear as follows:
        INPUT-CLASS TEXTFP
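    A configuration file of this form can be generated from a script with a here-document; the path, rank, and sizes below are placeholders. RANK and DIMENSION-SIZES are the two required entries (see the keyword table below):

```shell
# Write a minimal h5import configuration file: one Keyword Value pair
# per line. Values here are illustrative placeholders.
cat > import.conf <<'EOF'
RANK 2
DIMENSION-SIZES 4 38
PATH grp1/dataset1
INPUT-CLASS TEXTFP
EOF
```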

    A configuration file may have the following keywords each followed by one of the following defined values. One entry for each of the first two keywords, RANK and DIMENSION-SIZES, is required; all other keywords are optional.


    Keyword  
        Value

    Description

    RANK  

    The number of dimensions in the dataset. (Required)
        rank An integer specifying the number of dimensions in the dataset.
    Example:   4   for a 4-dimensional dataset.

    DIMENSION-SIZES

    Sizes of the dataset dimensions. (Required)
        dim_sizes A string of space-separated integers specifying the sizes of the dimensions in the dataset. The number of sizes in this entry must match the value in the RANK entry. The fastest-changing dimension must be listed first.
    Example:   4 3 4 38   for a 38x4x3x4 dataset.

    PATH

    Path of the output dataset.
        path The full HDF5 pathname identifying the output dataset relative to the root group within the output file.
    I.e., path is a string consisting of optional group names, each followed by a slash, and ending with a dataset name. If the groups in the path do not exist, they will be created.
    If PATH is not specified, the output dataset is stored as a member of the root group and the default dataset name is dataset0 for the first input dataset, dataset1 for the second input dataset, dataset2 for the third input dataset, etc.
    Note that h5import does not overwrite a pre-existing dataset of the specified or default name. When an existing dataset of a conflicting name is encountered, h5import quits with an error; the current input file and any subsequent input files are not processed.
    Example: The configuration file entry
         PATH grp1/grp2/dataset1
    indicates that the output dataset dataset1 will be written in the group grp2/ which is in the group grp1/, a member of the root group in the output file.

    INPUT-CLASS  

    A string denoting the type of input data.
        TEXTIN Input is signed integer data in an ASCII file.
        TEXTUIN Input is unsigned integer data in an ASCII file.
        TEXTFP Input is floating point data in either fixed-point notation (e.g., 325.34) or scientific notation (e.g., 3.2534E+02) in an ASCII file.
        IN Input is signed integer data in a binary file.
        UIN Input is unsigned integer data in a binary file.
        FP Input is floating point data in a binary file. (Default)
        STR Input is character data in an ASCII file. With this value, the configuration keywords RANK, DIMENSION-SIZES, OUTPUT-CLASS, OUTPUT-SIZE, OUTPUT-ARCHITECTURE, and OUTPUT-BYTE-ORDER will be ignored.
    (Not implemented in this release.)

    INPUT-SIZE

    An integer denoting the size of the input data, in bits.
        8, 16, 32, or 64 For signed and unsigned integer data: TEXTIN, TEXTUIN, IN, or UIN. (Default: 32)
        32 or 64 For floating point data: TEXTFP or FP. (Default: 32)

    OUTPUT-CLASS  

    A string denoting the type of output data.
        IN Output is signed integer data.
    (Default if INPUT-CLASS is IN or TEXTIN)
        UIN Output is unsigned integer data.
    (Default if INPUT-CLASS is UIN or TEXTUIN)
        FP Output is floating point data.
    (Default if INPUT-CLASS is not specified or is FP or TEXTFP)
        STR Output is character data, to be written as a 1-dimensional array of strings.
    (Default if INPUT-CLASS is STR)
    (Not implemented in this release.)

    OUTPUT-SIZE

    An integer denoting the size of the output data, in bits.
        8, 16, 32, or 64 For signed and unsigned integer data: IN or UIN. (Default: Same as INPUT-SIZE, else 32)
        32 or 64 For floating point data: FP. (Default: Same as INPUT-SIZE, else 32)

    OUTPUT-ARCHITECTURE

    A string denoting the type of output architecture.
        NATIVE
        STD
        IEEE
        INTEL *
        CRAY *
        MIPS *
        ALPHA *
        UNIX *
    See the "Predefined Atomic Types" section in the "HDF5 Datatypes" chapter of the HDF5 User's Guide for a discussion of these architectures.
    Values marked with an asterisk (*) are not implemented in this release.
    (Default: NATIVE)

    OUTPUT-BYTE-ORDER

    A string denoting the output byte order. This entry is ignored unless OUTPUT-ARCHITECTURE is specified as IEEE, UNIX, or STD.
        BE Big-endian. (Default)
        LE Little-endian.

    The following options are disabled by default, making the default storage properties no chunking, no compression, no external storage, and no extensible dimensions.

    CHUNKED-DIMENSION-SIZES

    Dimension sizes of the chunk for chunked output data.
        chunk_dims A string of space-separated integers specifying the dimension sizes of the chunk for chunked output data. The number of dimensions must correspond to the value of RANK.
    The presence of this field indicates that the output dataset is to be stored in chunked layout; if this configuration field is absent, the dataset will be stored in contiguous layout.

    COMPRESSION-TYPE

    Type of compression to be used with chunked storage. Requires that CHUNKED-DIMENSION-SIZES be specified.
        GZIP Gzip compression.
    Other compression algorithms are not implemented in this release of h5import.

    COMPRESSION-PARAM

    Compression level. Required if COMPRESSION-TYPE is specified.
        1 through 9 Gzip compression levels: 1 will result in the fastest compression while 9 will result in the best compression ratio.
    (Default: 6. Note that not all compression methods have a default level.)

    EXTERNAL-STORAGE

    Name of an external file in which to create the output dataset. Cannot be used with CHUNKED-DIMENSION-SIZES, COMPRESSION-TYPE, or MAXIMUM-DIMENSIONS.
        external_file        A string specifying the name of an external file.

    MAXIMUM-DIMENSIONS

    Maximum sizes of all dimensions. Requires that CHUNKED-DIMENSION-SIZES be specified.
        max_dims A string of space-separated integers specifying the maximum size of each dimension of the output dataset. A value of -1 for any dimension implies unlimited size for that particular dimension.
    The number of dimensions must correspond to the value of RANK.
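    The keyword/value layout above is line-oriented and easy to generate programmatically. The following is a minimal sketch of a helper that writes such a configuration file; the function name write_h5import_config is hypothetical, and h5import itself performs the authoritative validation. The sketch only enforces the stated rule that DIMENSION-SIZES must list exactly RANK integers.

```python
# Sketch: emit an h5import configuration file from keyword/value pairs.
# write_h5import_config is an illustrative helper, not part of any HDF5 tool.

def write_h5import_config(path, rank, dim_sizes, **keywords):
    """Write a configuration file; RANK and DIMENSION-SIZES are required."""
    if len(dim_sizes) != rank:
        raise ValueError("DIMENSION-SIZES must list exactly RANK integers")
    lines = ["RANK %d" % rank,
             "DIMENSION-SIZES %s" % " ".join(str(d) for d in dim_sizes)]
    for key, value in keywords.items():
        # Python identifiers cannot contain '-', so map OUTPUT_SIZE
        # to the keyword OUTPUT-SIZE, and so on.
        lines.append("%s %s" % (key.replace("_", "-"), value))
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

write_h5import_config("first-set.conf", rank=3, dim_sizes=[5, 2, 4],
                      INPUT_CLASS="TEXTFP", OUTPUT_CLASS="FP",
                      OUTPUT_SIZE=64)
```

The resulting file can then be passed to h5import with the -c option.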


    Options and Parameters:
      infile(s) Name of the input file(s).
      in_options Input options. Note that while only the -dims argument is required, arguments must be used in the order in which they are listed below.
        -d dim_list  
        -dims dim_list Input data dimensions. dim_list is a string of comma-separated numbers with no spaces describing the dimensions of the input data. For example, a 50 x 100 2-dimensional array would be specified as -dims 50,100.
      Required argument: if no configuration file is used, this command-line argument is mandatory.
        -p pathname  
        -pathname pathname  
                            
      pathname is a string consisting of one or more strings separated by slashes (/) specifying the path of the dataset in the output file. If the groups in the path do not exist, they will be created.
      Optional argument: if not specified, the default path is dataset1 for the first input dataset, dataset2 for the second input dataset, dataset3 for the third input dataset, etc.
      h5import does not overwrite a pre-existing dataset of the specified or default name. When an existing dataset of a conflicting name is encountered, h5import quits with an error; the current input file and any subsequent input files are not processed.
        -t input_class  
        -type input_class   input_class specifies the class of the input data and determines the class of the output data.
      Valid values are as defined in the Keyword/Values table in the section "The configuration file" above.
      Optional argument: if not specified, the default value is FP.
        -s input_size  
        -size input_size input_size specifies the size in bits of the input data and determines the size of the output data.
      Valid values for signed or unsigned integers are 8, 16, 32, and 64.
      Valid values for floating point data are 32 and 64.
      Optional argument: if not specified, the default value is 32.
        -c config_file config_file specifies a configuration file.
      This argument replaces all other arguments except infile and -o outfile.
        -h  
        -help Prints the h5import usage summary:
      h5import -h[elp], OR
      h5import <infile> <options> [<infile> <options>...] -o[utfile] <outfile>

      Then exits.
      outfile Name of the HDF5 output file.
    Examples:
    Using command-line arguments:
    h5import infile -dims 2,3,4 -type TEXTIN -size 32 -o out1
         This command creates a file out1 containing a single 2x3x4 32-bit integer dataset. Since no pathname is specified, the dataset is stored in out1 as /dataset1.
    h5import infile -dims 20,50 -path bin1/dset1 -type FP -size 64 -o out2
         This command creates a file out2 containing a single 20x50 64-bit floating point dataset. The dataset is stored in out2 as /bin1/dset1.
    Sample configuration files:
    The following configuration file specifies the following:
    – The input data is a 5x2x4 floating point array in an ASCII file.
    – The output dataset will be saved in chunked layout, with chunk dimension sizes of 2x2x2.
    – The output datatype will be 64-bit floating point, little-endian, IEEE.
    – The output dataset will be stored in outfile at /work/h5/pkamat/First-set.
    – The maximum dimension sizes of the output dataset will be 8x8x(unlimited).
                PATH work/h5/pkamat/First-set
                INPUT-CLASS TEXTFP
                RANK 3
                DIMENSION-SIZES 5 2 4
                OUTPUT-CLASS FP
                OUTPUT-SIZE 64
                OUTPUT-ARCHITECTURE IEEE
                OUTPUT-BYTE-ORDER LE
                CHUNKED-DIMENSION-SIZES 2 2 2 
                MAXIMUM-DIMENSIONS 8 8 -1
            
    The next configuration file specifies the following:
    – The input data is a 6x3x5x2x4 integer array in a binary file.
    – The output dataset will be saved in chunked layout, with chunk dimension sizes of 2x2x2x2x2.
    – The output datatype will be 32-bit integer in NATIVE format (as the output architecture is not specified).
    – The output dataset will be compressed using Gzip compression with a compression level of 7.
    – The output dataset will be stored in outfile at /Second-set.
                PATH Second-set
                INPUT-CLASS IN
                RANK 5
                DIMENSION-SIZES 6 3 5 2 4
                OUTPUT-CLASS IN
                OUTPUT-SIZE 32
                CHUNKED-DIMENSION-SIZES 2 2 2 2 2
                COMPRESSION-TYPE GZIP
                COMPRESSION-PARAM 7
            
    History:
      Release     Command Line Tool
      1.6.0 Tool introduced in this release.

    Tool Name: gif2h5
    Syntax:
    gif2h5 gif_file h5_file
    Purpose:
    Converts a GIF file to an HDF5 file.
    Description:
    gif2h5 accepts as input the GIF file gif_file and produces the HDF5 file h5_file as output.
    Options and Parameters:
      gif_file     The name of the input GIF file
      h5_file The name of the output HDF5 file

    Tool Name: h52gif
    Syntax:
    h52gif h5_file gif_file -i h5_image [-p h5_palette]
    Purpose:
    Converts an HDF5 file to a GIF file.
    Description:
    h52gif accepts as input the HDF5 file h5_file and the names of images and associated palettes within that file as input and produces the GIF file gif_file, containing those images, as output.

    h52gif expects at least one h5_image. You may repeat
         -i h5_image [-p h5_palette]
    up to 50 times, for a maximum of 50 images.

    Options and Parameters:
      h5_file The name of the input HDF5 file
      gif_file The name of the output GIF file
      -i h5_image Image option, specifying the name of an HDF5 image or dataset containing an image to be converted
      -p h5_palette     Palette option, specifying the name of an HDF5 dataset containing a palette to be used in an image conversion

    Tool Name: h5toh4
    Syntax:
    h5toh4 -h
    h5toh4 h5file h4file
    h5toh4 h5file
    h5toh4 -m h5file1 h5file2 h5file3 ...
    Purpose:
    Converts an HDF5 file into an HDF4 file.
    Description:
    h5toh4 is an HDF5 utility which reads an HDF5 file, h5file, and converts all supported objects and pathways to produce an HDF4 file, h4file. If h4file already exists, it will be replaced.

    If only one file name is given, the name must end in .h5 and is assumed to represent the HDF5 input file. h5toh4 replaces the .h5 suffix with .hdf to form the name of the resulting HDF4 file and proceeds as above. If a file with the name of the intended HDF4 file already exists, h5toh4 exits with an error without changing the contents of any file.

    The -m option allows multiple HDF5 file arguments. Each file name is treated the same as the single file name case above.

    The -h option causes the following syntax summary to be displayed:

                  h5toh4 file.h5 file.hdf
                  h5toh4 file.h5
                  h5toh4 -m file1.h5 file2.h5 ...

    The following HDF5 objects occurring in an HDF5 file are converted to HDF4 objects in the HDF4 file:

    • HDF5 group objects are converted into HDF4 Vgroup objects. HDF5 hard links and soft links pointing to objects are converted to HDF4 Vgroup references.
    • HDF5 dataset objects of integer datatype are converted into HDF4 SDS objects. These datasets may have up to 32 fixed dimensions. The slowest varying dimension may be extendable. 8-bit, 16-bit, and 32-bit integer datatypes are supported.
    • HDF5 dataset objects of floating point datatype are converted into HDF4 SDS objects. These datasets may have up to 32 fixed dimensions. The slowest varying dimension may be extendable. 32-bit and 64-bit floating point datatypes are supported.
    • HDF5 dataset objects of single dimension and compound datatype are converted into HDF4 Vdata objects. The length of that single dimension may be fixed or extendable. The members of the compound datatype are constrained to be no more than rank 4.
    • HDF5 dataset objects of single dimension and fixed length string datatype are converted into HDF4 Vdata objects. The HDF4 Vdata is a single field whose order is the length of the HDF5 string type. The number of records of the Vdata is the length of the single dimension which may be fixed or extendable.
    Other objects are not converted and are not recorded in the resulting h4file.

    Attributes associated with any of the supported HDF5 objects are carried over to the HDF4 objects. Attributes may be of integer, floating point, or fixed length string datatype and they may have up to 32 fixed dimensions.

    All datatypes are converted to big-endian. Floating point datatypes are converted to IEEE format.

    Note:
    The h5toh4 and h4toh5 utilities are no longer part of the HDF5 product; they are distributed separately through the page Converting between HDF (4.x) and HDF5.

    Options and Parameters:
      -h Displays a syntax summary.
      -m Converts multiple HDF5 files to multiple HDF4 files.
      h5file     The HDF5 file to be converted.
      h4file The HDF4 file to be created.

    Tool Name: h4toh5
    Syntax:
    h4toh5 -h
    h4toh5 h4file h5file
    h4toh5 h4file
    Purpose:
    Converts an HDF4 file to an HDF5 file.
    Description:
    h4toh5 is a file conversion utility that reads an HDF4 file, h4file (input.hdf for example), and writes an HDF5 file, h5file (output.h5 for example), containing the same data.

    If no output file h5file is specified, h4toh5 uses the input filename to designate the output file, replacing the extension .hdf with .h5. For example, if the input file scheme3.hdf is specified with no output filename, h4toh5 will name the output file scheme3.h5.

    The -h option causes a syntax summary similar to the following to be displayed:

                  h4toh5 inputfile.hdf outputfile.h5
                  h4toh5 inputfile.hdf                 

    Each object in the HDF4 file is converted to an equivalent HDF5 object, according to the mapping described in Mapping HDF4 Objects to HDF5 Objects.

    h4toh5 converts the following HDF4 objects:

    HDF4 Object Resulting HDF5 Object
    SDS Dataset
    GR, RI8, and RI24 image Dataset
    Vdata Dataset
    Vgroup Group
    Annotation Attribute
    Palette Dataset
    Note:
    The h4toh5 and h5toh4 utilities are no longer part of the HDF5 product; they are distributed separately through the page Converting between HDF (4.x) and HDF5.

    Options and Parameters:
      -h Displays a syntax summary.
      h4file     The HDF4 file to be converted.
      h5file The HDF5 file to be created.

    Tool Name: h5stat
    Syntax:
    h5stat [OPTIONS] file

    Purpose:
    Reports HDF5 file and object statistics.

    Description:
    h5stat reports statistics regarding an HDF5 file and the objects in that file.

    See “RFC: h5stat tool” for a complete description of this tool.

    Options and Parameters:
      -V   or   --version Print version number and exit.
      -f   or   --file Print file information.
      -F   or   --filemetadata Print file meta data.
      -g   or   --group Print group information.
      -G   or   --groupmetadata    Print group meta data.
      -d   or   --dset Print dataset information.
      -D   or   --dsetmetadata Print dataset meta data.
      -T   or   --dtypemetadata Print datatype meta data.
      -A   or   --attribute Print attribute information.

    History:
      Release     Command Line Tool
      1.8.0 Tool introduced in this release.

    Tool Name: h5perf
    Syntax:
    h5perf [-h | --help]
    h5perf [options]

    Purpose:
    Tests Parallel HDF5 performance.

    Description:
    h5perf is a tool for testing the performance of the Parallel HDF5 Library. The tool can perform testing with 1-dimensional and 2-dimensional buffers and datasets. For details regarding data organization and access, see “h5perf, a Parallel File System Benchmarking Tool.”

    The following environment variables have the following effects on h5perf behavior:
         HDF5_NOCLEANUP If set, h5perf does not remove data files.
    (Default: Data files are removed.)
      HDF5_MPI_INFO Must be set to a string containing a list of semi-colon separated key=value pairs for the MPI INFO object.
    Example:
      HDF5_PARAPREFIX   Sets the prefix for parallel output data files.

    Options and Parameters:
      These terms are used as follows in this section:
      file   A filename
      size A size specifier, expressed as an integer greater than or equal to 0 (zero) followed by a size indicator:
           K for kilobytes (1024 bytes)
           M for megabytes (1048576 bytes)
           G for gigabytes (1073741824 bytes)
      Example: 37M specifies 37 megabytes or 38797312 bytes.
      N An integer greater than or equal to 0 (zero)
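    The size specifier convention above (an integer followed by K, M, or G) can be sketched as a small parser; parse_size is an illustrative helper name, and the multipliers are exactly those defined in the list above.

```python
# Sketch: convert an h5perf size specifier (integer >= 0, optionally
# followed by K, M, or G) into a byte count.

MULTIPLIERS = {"K": 1024, "M": 1048576, "G": 1073741824}

def parse_size(spec):
    spec = spec.strip().upper()
    if spec and spec[-1] in MULTIPLIERS:
        return int(spec[:-1]) * MULTIPLIERS[spec[-1]]
    return int(spec)

print(parse_size("37M"))  # 38797312, matching the example above
```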

      -h, --help
               Prints a usage message and exits.
      -a size, --align=size
               Specifies the alignment of objects in the HDF5 file.
      (Default: 1)
      -A api_list, --api=api_list
               Specifies which APIs to test. api_list is a comma-separated list with the following valid values:
           phdf5   Parallel HDF5
        mpiio MPI-I/O
        posix POSIX
      (Default: All APIs)

      Example: --api=mpiio,phdf5 specifies that the MPI I/O and Parallel HDF5 APIs are to be monitored.
      -B size, --block-size=size
               Controls the block size within the transfer buffer.
      (Default: Half the number of bytes per process per dataset)

      Block size versus transfer buffer size:
      The transfer buffer size is the size of a buffer in memory. The data in that buffer is broken into block size pieces and written to the file.

      Transfer buffer size is discussed below with the -x (or --min-xfer-size) and -X (or --max-xfer-size) options.

      The pattern in which the blocks are written to the file is described in the discussion of the -I (or --interleaved) option.

      -c, --chunk
               Creates HDF5 datasets in chunked layout.
      (Default: Off)
      -C, --collective
               Use collective I/O for the MPI I/O and Parallel HDF5 APIs.
      (Default: Off, i.e., independent I/O)

      If this option is set and the MPI-I/O and PHDF5 APIs are in use, all the blocks of every process will be written at once with an MPI derived type.

      -d N, --num-dsets=N
               Sets the number of datasets per file.
      (Default: 1)
      -D debug_flags, --debug=debug_flags
               Sets the debugging level. debug_flags is a comma-separated list of debugging flags with the following valid values:
           1   Minimal debugging
        2 Moderate debugging (“not quite everything”)
        3 Extensive debugging (“everything”)
        4 All possible debugging (“the kitchen sink”)
        r Raw data I/O throughput information
        t Times, in addition to throughputs
        v Verify data correctness
      (Default: No debugging)

      Example: --debug=2,r,t specifies to run a moderate level of debugging while collecting raw data I/O throughput information and verifying the correctness of the data.

      Throughput values are computed by dividing the total amount of transferred data (excluding metadata) over the time spent by the slowest process. Several time counters are defined to measure the data transfer time and the total elapsed time; the latter includes the time spent during file open and close operations. A number of iterations can be specified with the option -i (or --num-iterations) to create the desired population of measurements from which maximum, minimum, and average values can be obtained. The timing scheme is the following:

          for each iteration
              initialize elapsed time counter
              initialize data transfer time counter
              for each file
                  start and accumulate elapsed time counter
                      file open
                      start and accumulate data transfer time counter
                          access entire file
                      stop data transfer time counter
                      file close
                  stop elapsed time counter
              end file
              save elapsed time counter
              save data transfer time counter
          end iteration
                  

      The reported write throughput is based on the accumulated data transfer time, while the write open-close throughput uses the accumulated elapsed time.
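    The throughput definition above (total transferred data divided by the time spent by the slowest process) can be sketched as follows; the function name and the byte and timing figures are made up for illustration.

```python
# Sketch of h5perf's throughput definition: total transferred data
# (metadata excluded) divided by the time of the slowest process.

def throughput(bytes_per_process, process_times):
    total_bytes = bytes_per_process * len(process_times)
    return total_bytes / max(process_times)  # slowest process dominates

# 4 processes, 256 KB each = 1 MiB total; slowest took 0.5 s -> 2 MiB/s
print(throughput(256 * 1024, [0.31, 0.42, 0.50, 0.28]))  # 2097152.0 B/s
```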

      -e size, --num-bytes=size
               Specifies the number of bytes per process per dataset.
      (Default: 256K for 1D, 8K for 2D)

      Depending on the selected geometry, each test dataset can be a linear array of size bytes-per-process * num-processes or a square array of size (bytes-per-process * num-processes) × (bytes-per-process * num-processes). The number of processes is set by the -p (or --min-num-processes) and -P (or --max-num-processes) options.

      -F N, --num-files=N
               Specifies the number of files.
      (Default: 1)
      -g, --geometry
               Selects 2D geometry for testing.
      (Default: Off, i.e., 1D geometry)
      -i N, --num-iterations=N
               Sets the number of iterations to perform.
      (Default: 1)
      -I, --interleaved
               Sets interleaved block I/O.
      (Default: Contiguous block I/O)

      Interleaved and contiguous patterns in 1D geometry:
      When a contiguous access pattern is chosen, the dataset is evenly divided into num-processes regions and each process writes data to its assigned region. When interleaved blocks are written to a dataset, space for the first block of the first process is allocated in the dataset, then space is allocated for the first block of the second process, etc., until space is allocated for the first block of each process, then space is allocated for the second block of the first process, the second block of the second process, etc.

      For example, with a three-process run, 512KB per process, a 256KB transfer buffer, and a 64KB block size, each process must issue two transfer requests to complete access to the dataset.

      Contiguous blocks of the first transfer request are written as follows:
          1111----2222----3333----

      Interleaved blocks of the first transfer request are written as follows:
          123123123123------------

      The actual number of I/O operations involved in a transfer request depends on the access pattern and communication mode. When using independent I/O with an interleaved access pattern, each process performs four small non-contiguous I/O operations per transfer request. If collective I/O is turned on, the combined content of the buffers of the three processes will be written using one collective I/O operation per transfer request.
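    The two 1D patterns above can be reproduced with a short sketch. Each character stands for one 64 KB block, and the figures match the three-process example (512 KB per process, 256 KB transfer buffer); the function name first_request is hypothetical.

```python
# Sketch: lay out the blocks written by the first transfer request in
# contiguous versus interleaved 1D access patterns, as described above.

def first_request(nprocs, bytes_per_proc, xfer_size, block_size, interleaved):
    blocks_per_proc = bytes_per_proc // block_size   # 8 blocks per process
    blocks_per_xfer = xfer_size // block_size        # 4 blocks per request
    out = ["-"] * (nprocs * blocks_per_proc)         # 24 blocks in total
    for p in range(nprocs):
        for b in range(blocks_per_xfer):
            if interleaved:
                pos = b * nprocs + p                 # round-robin allocation
            else:
                pos = p * blocks_per_proc + b        # each process owns a region
            out[pos] = str(p + 1)
    return "".join(out)

KB = 1024
print(first_request(3, 512 * KB, 256 * KB, 64 * KB, False))
# 1111----2222----3333----
print(first_request(3, 512 * KB, 256 * KB, 64 * KB, True))
# 123123123123------------
```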

      For details regarding the impact of performance and access patterns in 2D, see “h5perf, a Parallel File System Benchmarking Tool.”

      -m, --mpi-posix Sets use of MPI-posix driver for HDF5 I/O.
      (Default: MPI-I/O driver)
      -n, --no-fill Specifies to not write fill values to HDF5 datasets. This option is supported only in HDF5 Release v1.6 or later.
      (Default: Off, i.e., write fill values)
      -o file, --output=file Sets the output file for raw data to file.
      (Default: None)
      -p N, --min-num-processes=N Sets the minimum number of processes to be used.
      (Default: 1)
      -P N, --max-num-processes=N
                                    
      Sets the maximum number of processes to be used.
      (Default: All MPI_COMM_WORLD processes)
      -T size, --threshold=size Sets the threshold for alignment of objects in the HDF5 file.
      (Default: 1)
      -w, --write-only Performs only write tests, not read tests.
      (Default: Read and write tests)
      -x size, --min-xfer-size=size Sets the minimum transfer buffer size.
      (Default: Half the number of bytes per process per dataset)

      This option and the -X size option (or --max-xfer-size=size) control transfer-buffer-size, the size of the transfer buffer in memory. In 1D geometry, the transfer buffer is a linear array of size transfer-buffer-size. In 2D geometry, the transfer buffer is a rectangular array of size block-size × transfer-buffer-size, or transfer-buffer-size × block-size if the interleaved access pattern is selected.

      -X size, --max-xfer-size=size Sets the maximum transfer buffer size.
      (Default: The number of bytes per process per dataset)

    History:
      Release               Change
      1.6.0 Tool introduced in this release.
      1.6.8 and 1.8.0 Option -g, --geometry introduced in this release.

    Tool Name: h5perf_serial
    Syntax:
    h5perf_serial [-h | --help]
    h5perf_serial [options]

    Purpose:
    Tests HDF5 serial performance.

    Description:
    h5perf_serial provides tools for testing the performance of the HDF5 Library in serial mode.

    See “h5perf_serial, a Serial File System Benchmarking Tool” for a complete description of this tool.

    The following environment variables have the following effects on h5perf_serial behavior:
         HDF5_NOCLEANUP      If set, h5perf_serial does not remove data files.
    (Default: Data files are removed.)
         HDF5_PREFIX      Sets the prefix for output data files.

    Options and Parameters:
      The term size specifier is used as follows in this section: A size specifier is an integer greater than or equal to 0 (zero) followed by a size indicator:
           K for kilobytes (1024 bytes)
           M for megabytes (1048576 bytes)
           G for gigabytes (1073741824 bytes)
      Example: 37M specifies 37 megabytes or 38797312 bytes.

      -A api_list    Specifies which APIs to test. api_list is a comma-separated list with the following valid values:
           hdf5   HDF5 Library APIs
        posix POSIX APIs
      (Default: All APIs are monitored.)

      Example: -A hdf5,posix specifies that the HDF5 and POSIX APIs are to be monitored.

      -c chunk_size_list Specifies chunked storage and defines chunks dimensions and sizes.
      (Default: Chunking is off.)

      chunk_size_list is a comma-separated list of size specifiers. For example, a chunk_size_list value of
          2K,4K,6M
      specifies that chunking is turned on and that chunk size is 2 kilobytes by 4 kilobytes by 6 megabytes.

      -e dataset_size_list    Specifies dataset dimensionality and dataset dimension sizes.
      (Default dataset size is 100x200, or 100,200.)

      dataset_size_list is a comma-separated list of size specifiers, which are defined above.

      For example, a dataset_size_list value of
          2K,4K,6M
      specifies a 2 kilobytes by 4 kilobytes by 6 megabytes dataset.

      -i iterations Specifies the number of iterations to perform.
      (Default: A single iteration, 1, is performed.)

      iterations is an integer specifying the number of iterations.

      -r access_order Specifies dimension access order.
      (Default: 1,2)

      access_order is a comma-separated list of integers specifying the order of access. For example,
          -r 1,3,2
      specifies the traversal of dimension 1 first, then dimension 3, and finally dimension 2.

      -t Selects extendable HDF5 dataset dimensions.
      (Default: Datasets are fixed size.)

      -v file_driver Selects HDF5 driver to be used for HDF5 file access.
      (Default: sec2)

      Valid values are as follows:
         sec2      
         stdio     
         core      
         split     
         multi     
         family    
         direct    

      -w Performs write tests only; read performance will not be tested.
      (Default: Both write and read tests are performed.)
      -x buffer_size_list Specifies transfer buffer dimensions and sizes.
      (Default: 10,20)
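    The -c and -e options above take comma-separated lists of size specifiers. A small sketch of expanding such a list, using the K/M/G multipliers defined earlier (parse_size_list is an illustrative helper name):

```python
# Sketch: expand a comma-separated list of h5perf_serial size specifiers
# into per-dimension sizes in bytes.

MULT = {"K": 1024, "M": 1048576, "G": 1073741824}

def parse_size_list(spec_list):
    sizes = []
    for spec in spec_list.split(","):
        spec = spec.strip().upper()
        if spec and spec[-1] in MULT:
            sizes.append(int(spec[:-1]) * MULT[spec[-1]])
        else:
            sizes.append(int(spec))
    return sizes

print(parse_size_list("2K,4K,6M"))  # [2048, 4096, 6291456]
```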

    History:
      Release     Command Line Tool
      1.8.1 Tool introduced in this release.

    Tool Name: h5redeploy
    Syntax:
    h5redeploy [help | -help]
    h5redeploy [-echo] [-force] [-prefix=dir] [-tool=tool] [-show]
    Purpose:
    Updates HDF5 compiler tools after an HDF5 software installation in a new location.
    Description:
    h5redeploy updates the HDF5 compiler tools after the HDF5 software has been installed in a new location.
    Options and Parameters:
      help, -help Prints a help message.
      -echo Shows all the shell commands executed.
      -force Performs the requested action without offering any prompt requesting confirmation.
      -prefix=dir     Specifies a new directory in which to find the HDF5 subdirectories lib/ and include/.
      (Default: current working directory)
      -tool=tool Specifies the tool to update. tool must be in the current directory and must be writable.
      (Default: h5cc)
      -show Shows all of the shell commands to be executed without actually executing them.
    History:
      Release     Command Line Tool
      1.6.0 Tool introduced in this release.

    Tool Name: h5cc and h5pcc
    Syntax:
    h5cc [OPTIONS] <compile line>
    h5pcc [OPTIONS] <compile_line>
    Purpose:
    Helper scripts to compile HDF5 applications.
    Description:
    h5cc and h5pcc can be used in much the same way as MPICH's mpicc to compile an HDF5 program. These tools take care of specifying on the command line the locations of the HDF5 header files and libraries. h5cc is for use in serial computing environments; h5pcc is for parallel environments.

    h5cc and h5pcc subsume all other compiler scripts in that if you have used a set of scripts to compile the HDF5 library, then h5cc and h5pcc also use those scripts. For example, when compiling an MPICH program, you use the mpicc script. If you have built HDF5 using MPICH, then h5cc uses the MPICH program for compilation.

    Some programs use HDF5 in only a few modules. It is not necessary to use h5cc or h5pcc to compile those modules which do not use HDF5. In fact, since h5cc and h5pcc are only convenience scripts, you can still compile HDF5 modules in the normal manner, though you will have to specify the HDF5 libraries and include paths yourself. Use the -show option to see the details.

    An example of how to use h5cc to compile the program hdf_prog, which consists of the modules prog1.c and prog2.c and uses the HDF5 shared library, would be as follows. h5pcc is used in an identical manner.

            # h5cc -c prog1.c
            # h5cc -c prog2.c
            # h5cc -shlib -o hdf_prog prog1.o prog2.o
    Options and Parameters:
      -help Prints a help message.
      -echo Show all the shell commands executed.
      -prefix=DIR Use the directory DIR to find the HDF5 lib/ and include/ subdirectories.
      Default: prefix specified when configuring HDF5.
      -show Show the commands without executing them.
      -shlib Compile using shared HDF5 libraries.
      -noshlib Compile using static HDF5 libraries [default].
      <compile line>     The normal compile line options for your compiler. h5cc and h5pcc use the same compiler you used to compile HDF5. Check your compiler's manual for more information on which options are needed.
    Environment Variables:
    When set, these environment variables override some of the built-in h5cc and h5pcc defaults.
      HDF5_CC Use a different C compiler.
      HDF5_CLINKER Use a different linker.
      HDF5_USE_SHLIB=[yes|no]     Use shared version of the HDF5 library [default: no].

    Tool Name: h5fc and h5pfc
    Syntax:
    h5fc [OPTIONS] <compile line>
    h5pfc [OPTIONS] <compile_line>
    Purpose:
    Helper scripts to compile HDF5 Fortran90 applications.
    Description:
    h5fc and h5pfc can be used in much the same way as MPICH's mpif90 script to compile an HDF5 program. These tools take care of specifying on the command line the locations of the HDF5 header files and libraries. h5fc is for use in serial computing environments; h5pfc is for parallel environments.

    h5fc and h5pfc subsume all other compiler scripts in that if you have used a set of scripts to compile the HDF5 Fortran library, then h5fc and h5pfc also use those scripts. For example, if you built the HDF5 Fortran library with MPICH, you used the mpif90 script to compile it; h5fc will then invoke mpif90 in turn when compiling your application.

    Some programs use HDF5 in only a few modules. It is not necessary to use h5fc and h5pfc to compile those modules which do not use HDF5. In fact, since h5fc and h5pfc are only convenience scripts, you can still compile HDF5 Fortran modules in the normal manner, though you will have to specify the HDF5 libraries and include paths yourself. Use the -show option to see the details.

    An example of how to use h5fc to compile the program hdf_prog, which consists of the modules prog1.f90 and prog2.f90 and uses the HDF5 Fortran library, would be as follows. h5pfc is used in an identical manner.

            # h5fc -c prog1.f90
            # h5fc -c prog2.f90
            # h5fc -o hdf_prog prog1.o prog2.o 
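    In a parallel environment the same sequence goes through h5pfc instead. A sketch, assuming HDF5 was configured for parallel I/O and that mpiexec is the local MPI launcher:

```shell
# Parallel build of the same program with h5pfc
h5pfc -c prog1.f90
h5pfc -c prog2.f90
h5pfc -o hdf_prog prog1.o prog2.o

# Launch on 4 processes (the launcher name is an assumption;
# your site may use mpirun, srun, or similar)
mpiexec -n 4 ./hdf_prog
```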
    Options and Parameters:
      -help Prints a help message.
      -echo Show all the shell commands executed.
      -prefix=DIR Use the directory DIR to find the HDF5 lib/ and include/ subdirectories.
      Default: prefix specified when configuring HDF5.
      -show Show the commands without executing them.
      <compile line>     The normal compile line options for your compiler. h5fc and h5pfc use the same compiler you used to compile HDF5. Check your compiler's manual for more information on which options are needed.
    Environment Variables:
    When set, these environment variables override some of the built-in h5fc and h5pfc defaults.
      HDF5_FC Use a different Fortran90 compiler.
      HDF5_FLINKER     Use a different linker.
    History:
      Release     Command Line Tool
      1.6.0 Tool introduced in this release.

    Tool Name: h5c++
    Syntax:
    h5c++ [OPTIONS] <compile line>
    Purpose:
    Helper script to compile HDF5 C++ applications.
    Description:

    h5c++ can be used in much the same way as MPICH's mpiCC script to compile an HDF5 program. It takes care of specifying on the command line the locations of the HDF5 header files and libraries.

    h5c++ subsumes all other compiler scripts in that if you have used one set of compiler scripts to compile the HDF5 C++ library, then h5c++ uses those same scripts. For example, if you built the HDF5 C++ library with MPICH, you used the mpiCC script to compile it; h5c++ will then invoke mpiCC in turn.

    Some programs use HDF5 in only a few modules. It is not necessary to use h5c++ to compile those modules which do not use HDF5. In fact, since h5c++ is only a convenience script, you can still compile HDF5 C++ modules in the normal manner, though you will have to specify the HDF5 libraries and include paths yourself. Use the -show option to see the details.

    An example of how to use h5c++ to compile the program hdf_prog, which consists of modules prog1.cpp and prog2.cpp and uses the HDF5 C++ library, would be as follows:

            # h5c++ -c prog1.cpp
            # h5c++ -c prog2.cpp
            # h5c++ -o hdf_prog prog1.o prog2.o
    Options and Parameters:
      -help Prints a help message.
      -echo Show all the shell commands executed.
      -prefix=DIR Use the directory DIR to find the HDF5 lib/ and include/ subdirectories.
      Default: prefix specified when configuring HDF5.
      -show Show the commands without executing them.
      <compile line>
                      
      The normal compile line options for your compiler. h5c++ uses the same compiler you used to compile HDF5. Check your compiler's manual for more information on which options are needed.
    Environment Variables:
    When set, these environment variables override some of the built-in defaults of h5c++.
      HDF5_CXX Use a different C++ compiler.
      HDF5_CXXLINKER     Use a different linker.
    History:
      Release     Command Line Tool
      1.6.0 Tool introduced in this release.


    Describes HDF5 Release 1.8.2, November 2008.