public class H5ScalarDS extends ScalarDS
The library predefines a modest number of datatypes. For details, read HDF5 Datatypes in the HDF5 User's Guide.
Fields inherited from class ScalarDS:
fillValue, imageDataRange, interlace, INTERLACE_LINE, INTERLACE_PIXEL, INTERLACE_PLANE, isDefaultImageOrder, isFillValueConverted, isImage, isImageDisplay, isText, isTrueColor, palette, unsignedConverted

Fields inherited from class Dataset:
chunkSize, compression, compression_gzip_txt, convertByteToString, convertedBuf, data, datatype, dimNames, dims, filters, inited, isDataLoaded, maxDims, nPoints, originalBuf, rank, selectedDims, selectedIndex, selectedStride, startDims, storage, storage_layout

Fields inherited from class HObject:
fileFormat, linkTargetObjName, oid, separator

| Constructor and Description |
|---|
| H5ScalarDS(FileFormat theFile, String theName, String thePath) Constructs an instance of an H5 scalar dataset with the given file, dataset name and path. |
| H5ScalarDS(FileFormat theFile, String theName, String thePath, long[] oid) Deprecated. Not for public use in the future. Use H5ScalarDS(FileFormat, String, String) instead. |
| Modifier and Type | Method and Description |
|---|---|
| void | clear() Clears memory held by the dataset, such as the data buffer. |
| void | close(long did) Closes access to the object. |
| Dataset | copy(Group pgroup, String dstName, long[] dims, Object buff) Creates a new dataset and writes the data buffer to the new dataset. |
| static Dataset | create(String name, Group pgroup, Datatype type, long[] dims, long[] maxdims, long[] chunks, int gzip, Object data) |
| static Dataset | create(String name, Group pgroup, Datatype type, long[] dims, long[] maxdims, long[] chunks, int gzip, Object fillValue, Object data) Creates a scalar dataset in a file with/without chunking and compression. |
| void | extend(long[] newDims) Extends the dataset to the given dimension sizes; H5Dset_extent verifies that the dataset is at least of the requested size, extending it if necessary. |
| Datatype | getDatatype() Returns the datatype of the data object. |
| List<Attribute> | getMetadata() Retrieves the object's metadata, such as attributes, from the file. |
| List<Attribute> | getMetadata(int... attrPropList) |
| byte[][] | getPalette() Returns the palette of this scalar dataset, or null if the palette does not exist. |
| String | getPaletteName(int idx) Gets the name of a specific image palette from the file. |
| byte[] | getPaletteRefs() Returns the byte array of palette refs. |
| String | getVirtualFilename(int index) |
| int | getVirtualMaps() |
| boolean | hasAttribute() Checks if the object has any attributes attached. |
| void | init() Retrieves datatype and dataspace information from the file and sets the dataset in memory. |
| boolean | isVirtual() |
| long | open() Opens an existing object such as a dataset or group for access. |
| Object | read() Reads the data from the file. |
| byte[] | readBytes() Reads the raw data of the dataset from the file into a byte array. |
| byte[][] | readPalette(int idx) Reads a specific image palette from the file. |
| void | removeMetadata(Object info) Deletes an existing piece of metadata from this object. |
| void | setName(String newName) Sets the name of the object. |
| void | updateMetadata(Object info) Updates an existing piece of metadata attached to this object. |
| void | write(Object buf) Writes the given data buffer into this dataset in a file. |
| void | writeMetadata(Object info) Writes a specific piece of metadata (such as an attribute) into the file. |
Methods inherited from class ScalarDS:
addFilteredImageValue, clearData, convertFromUnsignedC, convertToUnsignedC, getFillValue, getFilteredImageValues, getImageDataRange, getInterlace, isDefaultImageOrder, isImage, isImageDisplay, isTrueColor, setImageDataRange, setIsImage, setIsImageDisplay, setPalette

Methods inherited from class Dataset:
byteToString, convertFromUnsignedC, convertToUnsignedC, getChunkSize, getCompression, getConvertByteToString, getData, getDimNames, getDims, getFilters, getHeight, getMaxDims, getOriginalClass, getRank, getSelectedDims, getSelectedIndex, getSize, getStartDims, getStorage, getStorageLayout, getStride, getWidth, isInited, isString, setConvertByteToString, setData, stringToByte, write

Methods inherited from class HObject:
debug, equals, equalsOID, getFID, getFile, getFileFormat, getFullName, getLinkTargetObjName, getName, getOID, getPath, setLinkTargetObjName, setPath, toString

public H5ScalarDS(FileFormat theFile, String theName, String thePath)

Constructs an instance of an H5 scalar dataset with the given file, dataset name and path. For example, in H5ScalarDS(h5file, "dset", "/arrays/"), "dset" is the name of the dataset and "/arrays" is the group path of the dataset.

Parameters:
theFile - the file that contains the data object.
theName - the name of the data object, e.g. "dset".
thePath - the full path of the data object, e.g. "/arrays/".
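For instance, a minimal sketch of constructing and reading such a dataset (the file name hdf5_test.h5 is hypothetical):

H5File h5file = new H5File("hdf5_test.h5", FileFormat.READ); // hypothetical file name
h5file.open();
H5ScalarDS dset = new H5ScalarDS(h5file, "dset", "/arrays/");
dset.init();                  // load datatype and dataspace information
Object data = dset.getData(); // read the dataset values into memory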
@Deprecated
public H5ScalarDS(FileFormat theFile, String theName, String thePath, long[] oid)

Deprecated. Not for public use in the future. Use H5ScalarDS(FileFormat, String, String) instead.

Parameters:
theFile - the file that contains the data object.
theName - the name of the data object, e.g. "dset".
thePath - the full path of the data object, e.g. "/arrays/".
oid - the oid of the data object.

public long open()

Opens an existing object such as a dataset or group for access. (Description copied from class HObject.)

Overrides: open in class HObject
See Also: HObject.close(long)

public void close(long did)

Closes access to the object. (Description copied from class HObject.) Sub-classes must implement this method because different data objects have their own ways of closing their data resources.

For example, H5Group.close() calls the hdf.hdf5lib.H5.H5Gclose() method and closes the group resource specified by the group id.
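A minimal sketch of the open/close pairing, assuming dset is an H5ScalarDS obtained from an open file:

long did = dset.open(); // HDF5 dataset identifier, negative on failure
if (did >= 0) {
    try {
        // ... use did with low-level hdf.hdf5lib.H5 calls ...
    }
    finally {
        dset.close(did); // always release the library resource
    }
}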
public void init()
init() is designed to support lazy operation in a dataset object. When a data object is retrieved from a file, the datatype, dataspace and raw data are not loaded into memory. When the object is asked to read the raw data from the file, init() is first called to get the datatype and dataspace information, and then the raw data is loaded from the file.

init() is also used to reset the selection of a dataset (start, stride and count) to the default, which is the entire dataset for 1D or 2D datasets. In the following example, init() at step 1) retrieves datatype and dataspace information from the file. getData() at step 3) reads only one data point. init() at step 4) resets the selection to the whole dataset. getData() at step 6) reads the values of the whole dataset into memory.
dset = (Dataset) file.get(NAME_DATASET);
// 1) get datatype and dataspace information from file
dset.init();
rank = dset.getRank(); // rank = 2, a 2D dataset
count = dset.getSelectedDims();
start = dset.getStartDims();
dims = dset.getDims();
// 2) select only one data point
for (int i = 0; i < rank; i++) {
    start[i] = 0; // start at the origin of every dimension
    count[i] = 1; // select a single data point
}
// 3) read one data point
data = dset.getData();
// 4) reset selection to the whole dataset
dset.init();
// 5) clean the memory data buffer
dset.clearData();
// 6) Read the whole dataset
data = dset.getData();
public boolean hasAttribute()
Checks if the object has any attributes attached. Specified by: hasAttribute in interface MetaDataContainer

public Datatype getDatatype()

Returns the datatype of the data object.
Specified by: getDatatype in interface DataFormat
Overrides: getDatatype in class Dataset

public void clear()

Clears memory held by the dataset, such as the data buffer. Overrides: clear in class Dataset

public byte[] readBytes() throws hdf.hdf5lib.exceptions.HDF5Exception
readBytes() reads raw data to an array of bytes instead of an array of its datatype. For example, for a one-dimensional 32-bit integer dataset of size 5, readBytes() returns a byte array of size 20 instead of an int array of 5.

readBytes() can be used to copy data from one dataset to another efficiently, because the raw data is not converted to its native type; this saves memory space and CPU time.
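For example, a sketch assuming dset is an initialized one-dimensional 32-bit integer dataset of size 5 (the cast reflects that datatype):

byte[] raw = dset.readBytes();     // 20 bytes: five elements of four bytes each
int[] typed = (int[]) dset.read(); // the same five values as an int array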
public Object read() throws Exception
read() reads the data from file to a memory buffer and returns the memory buffer. The dataset object does not hold the memory buffer. To store the memory buffer in the dataset object, one must call getData().
By default, the whole dataset is read into memory. Users can also select a subset to read. Subsetting is done in an implicit way.
How to Select a Subset
A selection is specified by three arrays: start, stride and count.
The following example shows how to make a subset. In the example, the
dataset is a 4-dimensional array of [200][100][50][10], i.e. dims[0]=200;
dims[1]=100; dims[2]=50; dims[3]=10;
We want to select every other data point in dims[1] and dims[2]
int rank = dataset.getRank(); // number of dimensions of the dataset
long[] dims = dataset.getDims(); // the dimension sizes of the dataset
long[] selected = dataset.getSelectedDims(); // the selected size of the
// dataset
long[] start = dataset.getStartDims(); // the offset of the selection
long[] stride = dataset.getStride(); // the stride of the dataset
int[] selectedIndex = dataset.getSelectedIndex(); // the selected
// dimensions for
// display
// select dim1 and dim2 as 2D data for display, and slice through dim0
selectedIndex[0] = 1;
selectedIndex[1] = 2;
selectedIndex[2] = 0;
// reset the selection arrays
for (int i = 0; i < rank; i++) {
start[i] = 0;
selected[i] = 1;
stride[i] = 1;
}
// set stride to 2 on dim1 and dim2 so that every other data point is
// selected.
stride[1] = 2;
stride[2] = 2;
// set the selection size of dim1 and dim2
selected[1] = dims[1] / stride[1];
selected[2] = dims[2] / stride[2];
// when dataset.getData() is called, the selection above will be used, since
// the dimension arrays are passed by reference. Changes of these arrays
// outside the dataset object directly change the values of these arrays
// in the dataset object.
For ScalarDS, the memory data buffer is a one-dimensional array of byte, short, int, float, double or String type based on the datatype of the dataset.
For CompoundDS, the memory data object is a java.util.List object. Each element of the list is a data array that corresponds to a compound field.
For example, if compound dataset "comp" has the following nested structure and member datatypes

comp --> m01 (int)
comp --> m02 (float)
comp --> nest1 --> m11 (char)
comp --> nest1 --> m12 (String)
comp --> nest1 --> nest2 --> m21 (long)
comp --> nest1 --> nest2 --> m22 (double)

then getData() returns a list of six arrays: {int[], float[], char[], String[], long[] and double[]}.
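A short sketch of handling the returned buffer for a scalar dataset, assuming dset is an initialized H5ScalarDS:

Object buf = dset.read(); // a one-dimensional array matching the datatype
if (buf instanceof int[]) {
    int[] values = (int[]) buf;
    // process values ...
}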
Throws: Exception - if object can not be read
See Also: Dataset.getData(), DataFormat.read()

public void write(Object buf) throws hdf.hdf5lib.exceptions.HDF5Exception

Writes the given data buffer into this dataset in a file.

Parameters: buf - The buffer that contains the data values.
Throws: hdf.hdf5lib.exceptions.HDF5Exception - If there is an error at the HDF5 library level.
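A minimal sketch, assuming dset is an initialized one-dimensional 32-bit integer dataset of size 10:

int[] values = new int[10];
for (int i = 0; i < values.length; i++)
    values[i] = i * i; // arbitrary sample values
dset.write(values);    // write the buffer into the dataset in the file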
public List<Attribute> getMetadata() throws hdf.hdf5lib.exceptions.HDF5Exception

Retrieves the object's metadata, such as attributes, from the file. Metadata, such as attributes, is stored in a List.

Specified by: getMetadata in interface MetaDataContainer
Throws: hdf.hdf5lib.exceptions.HDF5Exception

public List<Attribute> getMetadata(int... attrPropList) throws hdf.hdf5lib.exceptions.HDF5Exception
Throws: hdf.hdf5lib.exceptions.HDF5Exception

public void writeMetadata(Object info) throws Exception

Writes a specific piece of metadata (such as an attribute) into the file. Specified by: writeMetadata in interface MetaDataContainer

Parameters: info - the metadata to write.
Throws: Exception - if the metadata can not be written

public void removeMetadata(Object info) throws hdf.hdf5lib.exceptions.HDF5Exception

Deletes an existing piece of metadata from this object. Specified by: removeMetadata in interface MetaDataContainer

Parameters: info - the metadata to delete.
Throws: hdf.hdf5lib.exceptions.HDF5Exception

public void updateMetadata(Object info) throws hdf.hdf5lib.exceptions.HDF5Exception

Updates an existing piece of metadata attached to this object. Specified by: updateMetadata in interface MetaDataContainer

Parameters: info - the metadata to update.
Throws: hdf.hdf5lib.exceptions.HDF5Exception
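A short sketch tying the metadata methods together, assuming dset is an H5ScalarDS from an open file:

List<Attribute> attrs = dset.getMetadata();
for (Attribute attr : attrs) {
    System.out.println(attr.getName()); // print each attribute name
}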
public void setName(String newName) throws Exception

Sets the name of the object. Overrides: setName in class HObject

public static Dataset create(String name, Group pgroup, Datatype type, long[] dims, long[] maxdims, long[] chunks, int gzip, Object data) throws Exception

Throws: Exception

public static Dataset create(String name, Group pgroup, Datatype type, long[] dims, long[] maxdims, long[] chunks, int gzip, Object fillValue, Object data) throws Exception

Creates a scalar dataset in a file with/without chunking and compression.
The following example shows how to create a string dataset using this function.
H5File file = new H5File("test.h5", H5File.CREATE);
int max_str_len = 120;
Datatype strType = new H5Datatype(Datatype.CLASS_STRING, max_str_len, -1, -1);
int size = 10000;
long dims[] = { size };
long chunks[] = { 1000 };
int gzip = 9;
String strs[] = new String[size];

for (int i = 0; i < size; i++)
    strs[i] = String.valueOf(i);

file.open();
file.createScalarDS("/1D scalar strings", null, strType, dims, null, chunks, gzip, strs);

try {
    file.close();
}
catch (Exception ex) {
    // ignore errors on close
}
Parameters:
name - the name of the dataset to create.
pgroup - parent group where the new dataset is created.
type - the datatype of the dataset.
dims - the dimension size of the dataset.
maxdims - the max dimension size of the dataset. maxdims is set to dims if maxdims = null.
chunks - the chunk size of the dataset. No chunking if chunks = null.
gzip - GZIP compression level (1 to 9). No compression if gzip <= 0.
fillValue - the default data value.
data - the array of data values.

Throws: Exception - if there is a failure.

public Dataset copy(Group pgroup, String dstName, long[] dims, Object buff) throws Exception
This function allows applications to create a new dataset for a given data buffer. For example, users can select a specific interesting part from a large image and create a new image with the selection.

The new dataset retains the datatype and dataset creation properties of this dataset.

Overrides: copy in class Dataset

Parameters:
pgroup - the group which the dataset is copied to.
dstName - the name of the new dataset.
dims - the dimension sizes of the new dataset.
buff - the data values of the subset to be copied.

Throws: Exception - if dataset can not be copied
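A sketch of copying the current selection into a new dataset; destGroup, the name dset_copy and the sizes are hypothetical:

long[] subDims = { 100, 50 };  // hypothetical size of the selected subset
Object buff = dset.read();     // read the current selection into memory
Dataset copied = dset.copy(destGroup, "dset_copy", subDims, buff);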
public byte[][] getPalette()

Returns the palette of this scalar dataset, or null if the palette does not exist. A scalar dataset can be displayed as spreadsheet data or as an image. When a scalar dataset is displayed as an image, the palette or color table may be needed to translate a pixel value to color components (for example, red, green and blue). Some scalar datasets have no palette and some have one or more palettes. If an associated palette exists but is not loaded, this method retrieves the palette from the file and returns it. If the palette is already loaded, it returns the loaded palette. It returns null if there is no palette associated with the dataset.

The current implementation only supports the indexed RGB palette model with 256 colors. Other models such as "YUV", "CMY", "CMYK", "YCbCr" and "HSV" will be supported in the future.

The palette values are stored in a two-dimensional byte array and are arranged by color components of red, green and blue: palette = byte[3][256], where palette[0][], palette[1][] and palette[2][] are the red, green and blue components respectively.

Sub-classes have to implement this method because HDF4 and HDF5 images use different libraries to retrieve the associated palette.

Overrides: getPalette in class ScalarDS
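A sketch of translating a pixel value to color components with this layout, assuming dset is an image-like scalar dataset with an attached palette:

byte[][] palette = dset.getPalette();
if (palette != null) {
    int pixel = 42;                // hypothetical pixel value in [0, 255]
    byte r = palette[0][pixel];
    byte g = palette[1][pixel];
    byte b = palette[2][pixel];
}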
public String getPaletteName(int idx)

Gets the name of a specific image palette from the file. A scalar dataset may have multiple palettes attached to it. getPaletteName(int idx) returns the name of a specific palette identified by its index.

Overrides: getPaletteName in class ScalarDS
Parameters: idx - the index of the palette whose name is retrieved.

public byte[][] readPalette(int idx)
Reads a specific image palette from the file. A scalar dataset may have multiple palettes attached to it. readPalette(int idx) returns a specific palette identified by its index.

Overrides: readPalette in class ScalarDS
Parameters: idx - the index of the palette to read.
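For example, a sketch reading the first attached palette and its name (index 0 is assumed to exist):

byte[][] firstPalette = dset.readPalette(0); // the first attached palette
String firstName = dset.getPaletteName(0);   // its name as stored in the file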
public byte[] getPaletteRefs()

Returns the byte array of palette refs. A palette reference is an object reference that points to the palette dataset.

For example, dataset "Iceberg" has an attribute of object reference type named "Palette". The attribute "Palette" has the value "2538", which is the object reference of the palette dataset "Iceberg Palette".

Overrides: getPaletteRefs in class ScalarDS

public void extend(long[] newDims) throws hdf.hdf5lib.exceptions.HDF5Exception

Extends the dataset to the given dimension sizes. H5Dset_extent verifies that the dataset is at least of the requested size, extending it if necessary.
Parameters: newDims - the target dimension sizes
Throws: hdf.hdf5lib.exceptions.HDF5Exception - If there is an error at the HDF5 library level.
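A sketch of growing a dataset, assuming dset was created with maxdims larger than its current dims (the sizes are hypothetical):

dset.init();
long[] dims = dset.getDims();       // current sizes, e.g. { 1000 }
long[] newDims = { dims[0] + 500 }; // grow the first dimension by 500 rows
dset.extend(newDims);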
public String getVirtualFilename(int index)

Overrides: getVirtualFilename in class Dataset

public int getVirtualMaps()

Overrides: getVirtualMaps in class Dataset