imas.db_entry.DBEntry

class imas.db_entry.DBEntry(uri: str, mode: str, *, dd_version: str | None = None, xml_path: str | Path | None = None)
class imas.db_entry.DBEntry(backend_id: int, db_name: str, pulse: int, run: int, user_name: str | None = None, data_version: str | None = None, *, shot: int | None = None, dd_version: str | None = None, xml_path: str | Path | None = None)

Bases: object

Represents an IMAS database entry, which is a collection of stored IDSs.

A DBEntry can be used as a context manager:

import imas

# old constructor:
with imas.DBEntry(imas.ids_defs.HDF5_BACKEND, "test", 1, 12) as dbentry:
    # dbentry is now opened and can be used for reading data:
    ids = dbentry.get(...)
# The dbentry is now closed

# new constructor also allows creating the Data Entry with the mode
# argument
with imas.DBEntry("imas:hdf5?path=testdb", "w") as dbentry:
    # dbentry is now created and can be used for writing data:
    dbentry.put(ids)
# The dbentry is now closed
__init__(uri: str, mode: str, *, dd_version: str | None = None, xml_path: str | Path | None = None)
__init__(backend_id: int, db_name: str, pulse: int, run: int, user_name: str | None = None, data_version: str | None = None, *, shot: int | None = None, dd_version: str | None = None, xml_path: str | Path | None = None)

Open or create a Data Entry based on the provided URI and mode, or prepare a DBEntry using legacy parameters.

Note

When using legacy parameters (backend_id, db_name, pulse, run), the DBEntry is not opened. You have to call open() or create() after creating the DBEntry object before you can use it for reading or writing data.

Parameters:
uri: str

URI to the data entry, see explanation above.

mode: str

Mode to open the Data Entry in (a short usage sketch follows the parameter list below):

  • "r": Open an existing data entry. Raises an error when the data entry does not exist.

    Note

    The opened data entry is not read-only; it can be written to.

  • "a": Open an existing data entry, create the data entry if it does not exist.

  • "w": Create a data entry, overwriting any existing.

    Caution

    This will irreversibly delete any existing data.

  • "x": Create a data entry. Raises an error when a data entry already exists.

backend_id: int

ID of the backend to use. See Backend identifiers.

db_name: str

Database name, e.g. “ITER”.

pulse: int

Pulse number of the database entry.

run: int

Run number of the database entry.

user_name: str | None = None

User name of the database, retrieved from the environment when not supplied.

data_version: str | None = None

Major version of the Data Dictionary used by the Access Layer.

Keyword Arguments:
shot: int | None = None

Legacy alternative for pulse.

dd_version: str | None = None

Use a specific Data Dictionary version instead of the default one. See Working with multiple data dictionary versions.

xml_path: str | Path | None = None

Use a specific Data Dictionary build by pointing to the IDSDef.xml. See Using custom builds of the Data Dictionary.
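Example

A minimal sketch of the mode argument with the URI constructor, as referenced in the mode description above. The HDF5 path is illustrative only:

import imas

uri = "imas:hdf5?path=mode-demo"  # illustrative path

# "x": create a new data entry; raises an error if one already exists
with imas.DBEntry(uri, "x") as entry:
    pass  # put() IDSs here

# "a": open the entry again, creating it if it did not exist
with imas.DBEntry(uri, "a") as entry:
    pass  # read with get() or write with put()/put_slice()

# "r": open the existing entry; raises an error if the URI does not exist
with imas.DBEntry(uri, "r") as entry:
    pass  # get() IDSs here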

Methods

__init__()

Open or create a Data Entry based on the provided URI and mode, or prepare a DBEntry using legacy parameters.

close(*[, erase])

Close this Database Entry.

create(*[, options, force])

Create a new database entry.

delete_data(ids_name[, occurrence])

Delete the provided IDS occurrence from this IMAS database entry.

get(ids_name[, occurrence, lazy, ...])

Read the contents of an IDS into memory.

get_sample(ids_name, tmin, tmax[, dtime, ...])

Read a range of time slices from an IDS in this Database Entry.

get_slice(ids_name, time_requested, ...[, ...])

Read a single time slice from an IDS in this Database Entry.

list_all_occurrences()

List all non-empty occurrences of an IDS

open([mode, options, force])

Open an existing database entry.

put(ids[, occurrence])

Write the contents of an IDS into this Database Entry.

put_slice(ids[, occurrence])

Append a time slice of the provided IDS to the Database Entry.

Attributes

dd_version

Get the DD version used by this DB entry

factory

Get the IDS factory used by this DB entry.

close(*, erase=False)

Close this Database Entry.

Keyword Arguments:
erase=False

Remove the pulse file from the database. Note: this parameter may be ignored by the backend, so it is best not to use it.
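Example

A short sketch of explicit closing when the context manager is not used (the URI is illustrative):

import imas

entry = imas.DBEntry("imas:hdf5?path=testdb", "r")
try:
    core_profiles = entry.get("core_profiles")
finally:
    entry.close()  # release backend resources even if get() fails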

create(*, options=None, force=True) None

Create a new database entry.

This method may not be called when using the URI constructor of DBEntry.

Caution

This method erases the previous entry if it existed!

Keyword Arguments:
options=None

Backend specific options.

force=True

Whether to force create the database entry.

Example

import imas
from imas.ids_defs import HDF5_BACKEND

imas_entry = imas.DBEntry(HDF5_BACKEND, "test", 1, 1234)
imas_entry.create()
property dd_version : str

Get the DD version used by this DB entry

delete_data(ids_name: str, occurrence: int = 0) None

Delete the provided IDS occurrence from this IMAS database entry.

Parameters:
ids_name: str

Name of the IDS to delete from the backend.

occurrence: int = 0

Which occurrence of the IDS to delete.
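Example

A minimal sketch, assuming occurrence 1 of the core_profiles IDS exists in this (illustrative) entry:

import imas

with imas.DBEntry("imas:hdf5?path=testdb", "a") as entry:
    # Remove occurrence 1 only; other occurrences are left untouched
    entry.delete_data("core_profiles", 1)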

property factory : IDSFactory

Get the IDS factory used by this DB entry.
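Example

A small sketch combining the dd_version and factory properties; IDSs created via the factory use the same Data Dictionary version as this entry (the URI is illustrative):

import imas

with imas.DBEntry("imas:hdf5?path=testdb", "a") as entry:
    print(entry.dd_version)  # Data Dictionary version used by this entry
    ids = entry.factory.core_profiles()  # IDS built for that same DD version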

get(ids_name: str, occurrence: int = 0, *, lazy: bool = False, autoconvert: bool = True, ignore_unknown_dd_version: bool = False, destination: IDSToplevel | None = None) IDSToplevel

Read the contents of an IDS into memory.

This method fetches an IDS in its entirety, with all time slices it may contain. See get_slice() for reading a specific time slice.

Parameters:
ids_name: str

Name of the IDS to read from the backend.

occurrence: int = 0

Which occurrence of the IDS to read.

Keyword Arguments:
lazy: bool = False

When set to True, values in this IDS will be retrieved only when needed (instead of getting the full IDS immediately). See Lazy loading for more details, and the short sketch after the example below.

Note

Lazy loading is not supported by the ASCII backend.

autoconvert: bool = True

Automatically convert IDSs.

If enabled (default), a call to get() or get_slice() will return an IDS from the Data Dictionary version attached to this Data Entry. Data is automatically converted between the on-disk version and the in-memory version.

When set to False, the IDS will be returned in the DD version it was stored in.

ignore_unknown_dd_version: bool = False

When an IDS is stored with an unknown DD version, do not attempt automatic conversion and fetch the data in the Data Dictionary version attached to this Data Entry.

destination: IDSToplevel | None = None

Populate this IDSToplevel instead of creating an empty one.

Returns:

The loaded IDS.

Example

import imas

imas_entry = imas.DBEntry(imas.ids_defs.MDSPLUS_BACKEND, "ITER", 131024, 41, "public")
imas_entry.open()
core_profiles = imas_entry.get("core_profiles")
get_sample(ids_name: str, tmin: float, tmax: float, dtime: float | ndarray | None = None, interpolation_method: int | None = None, occurrence: int = 0, *, lazy: bool = False, autoconvert: bool = True, ignore_unknown_dd_version: bool = False, destination: IDSToplevel | None = None) IDSToplevel

Read a range of time slices from an IDS in this Database Entry.

This method has three different modes, depending on the provided arguments:

  1. No interpolation. This mode is selected when dtime and interpolation_method are not provided.

    This mode returns an IDS object with all constant/static data filled. The dynamic data is retrieved for the provided time range [tmin, tmax].

  2. Interpolate dynamic data on a uniform time base. This mode is selected when dtime and interpolation_method are provided, and dtime is a number or a numpy array of size 1.

    This mode will generate an IDS with a homogeneous time vector [tmin, tmin + dtime, tmin + 2*dtime, ...] up to tmax. The chosen interpolation method will have no effect on the time vector, but may have an impact on the other dynamic values. The returned IDS always has ids_properties.homogeneous_time = 1.

  3. Interpolate dynamic data on an explicit time base. This mode is selected when dtime and interpolation_method are provided, and dtime is a numpy array of size larger than 1.

    This mode will generate an IDS with a homogeneous time vector equal to dtime. tmin and tmax are ignored in this mode. The chosen interpolation method will have no effect on the time vector, but may have an impact on the other dynamic values. The returned IDS always has ids_properties.homogeneous_time = 1.

Parameters:
ids_name: str

Name of the IDS to read from the backend

tmin: float

Lower bound of the requested time range

tmax: float

Upper bound of the requested time range, must be larger than or equal to tmin

dtime: float | ndarray | None = None

Time interval to use when interpolating (must be positive), or a numpy array containing an explicit time base to interpolate on.

interpolation_method: int | None = None

Interpolation method to use, e.g. CLOSEST_INTERP, PREVIOUS_INTERP or LINEAR_INTERP from imas.ids_defs (see the examples below).

occurrence: int = 0

Which occurrence of the IDS to read.

Keyword Arguments:
lazy: bool = False

When set to True, values in this IDS will be retrieved only when needed (instead of getting the full IDS immediately). See Lazy loading for more details.

autoconvert: bool = True

Automatically convert IDSs.

If enabled (default), a call to get_sample() will return an IDS from the Data Dictionary version attached to this Data Entry. Data is automatically converted between the on-disk version and the in-memory version.

When set to False, the IDS will be returned in the DD version it was stored in.

ignore_unknown_dd_version: bool = False

When an IDS is stored with an unknown DD version, do not attempt automatic conversion and fetch the data in the Data Dictionary version attached to this Data Entry.

destination: IDSToplevel | None = None

Populate this IDSToplevel instead of creating an empty one.

Returns:

The loaded IDS.

Example

import imas
import numpy
from imas import ids_defs

imas_entry = imas.DBEntry(
    "imas:mdsplus?user=public;pulse=131024;run=41;database=ITER", "r")

# All time slices between t=200 and t=370
core_profiles = imas_entry.get_sample("core_profiles", 200, 370)

# Closest points to [0, 100, 200, ..., 1000]
core_profiles_interp = imas_entry.get_sample(
    "core_profiles", 0, 1000, 100, ids_defs.CLOSEST_INTERP)

# Linear interpolation for [10, 11, 12, 14, 16, 20, 30, 40, 50]
times = numpy.array([10, 11, 12, 14, 16, 20, 30, 40, 50])
core_profiles_interp = imas_entry.get_sample(
    "core_profiles", 0, 0, times, ids_defs.LINEAR_INTERP)
get_slice(ids_name: str, time_requested: float, interpolation_method: int, occurrence: int = 0, *, lazy: bool = False, autoconvert: bool = True, ignore_unknown_dd_version: bool = False, destination: IDSToplevel | None = None) IDSToplevel

Read a single time slice from an IDS in this Database Entry.

This method returns an IDS object with all constant/static data filled. The dynamic data is interpolated on the requested time slice. This means that the size of the time dimension in the returned data is 1.

Parameters:
ids_name: str

Name of the IDS to read from the backend.

time_requested: float

Requested time slice

interpolation_method: int

Interpolation method to use, e.g. CLOSEST_INTERP, PREVIOUS_INTERP or LINEAR_INTERP from imas.ids_defs (see the example below).

occurrence: int = 0

Which occurrence of the IDS to read.

Keyword Arguments:
lazy: bool = False

When set to True, values in this IDS will be retrieved only when needed (instead of getting the full IDS immediately). See Lazy loading for more details.

autoconvert: bool = True

Automatically convert IDSs.

If enabled (default), a call to get() or get_slice() will return an IDS from the Data Dictionary version attached to this Data Entry. Data is automatically converted between the on-disk version and the in-memory version.

When set to False, the IDS will be returned in the DD version it was stored in.

ignore_unknown_dd_version: bool = False

When an IDS is stored with an unknown DD version, do not attempt automatic conversion and fetch the data in the Data Dictionary version attached to this Data Entry.

destination: IDSToplevel | None = None

Populate this IDSToplevel instead of creating an empty one.

Returns:

The loaded IDS.

Example

import imas

imas_entry = imas.DBEntry(imas.ids_defs.MDSPLUS_BACKEND, "ITER", 131024, 41, "public")
imas_entry.open()
core_profiles = imas_entry.get_slice("core_profiles", 370, imas.ids_defs.PREVIOUS_INTERP)
list_all_occurrences(ids_name: str, node_path: None = None) list[int]
list_all_occurrences(ids_name: str, node_path: str) tuple[list[int], list[imas.ids_base.IDSBase]]

List all non-empty occurrences of an IDS

Note: this is only available with Access Layer core version 5.1 or newer.

Parameters:
ids_name: str

name of the IDS (e.g. “magnetics”, “core_profiles” or “equilibrium”)

node_path: None = None
node_path: str

path to a Data-Dictionary node (e.g. “ids_properties/comment”, “code/name”, “ids_properties/provider”).

Returns:

When no node_path is supplied, a (sorted) list with non-empty occurrence numbers is returned.

When node_path is supplied, a tuple (occurrence_list, node_content_list) is returned. The occurrence_list is a (sorted) list of non-empty occurrence numbers. The node_content_list contains the contents of the node in the corresponding occurrences.

Return type:

tuple or list

Example

dbentry = imas.DBEntry(uri, "r")
occurrence_list, node_content_list = \
    dbentry.list_all_occurrences("magnetics", "ids_properties/comment")
dbentry.close()
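When node_path is omitted, only the sorted list of non-empty occurrence numbers is returned; a minimal sketch:

dbentry = imas.DBEntry(uri, "r")
occurrence_list = dbentry.list_all_occurrences("magnetics")
dbentry.close()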
open(mode=40, *, options=None, force=False) None

Open an existing database entry.

This method may not be called when using the URI constructor of DBEntry.

Keyword Arguments:
options=None

Backend specific options.

force=False

Whether to force open the database entry.

Example

import imas
from imas.ids_defs import HDF5_BACKEND

imas_entry = imas.DBEntry(HDF5_BACKEND, "test", 1, 1234)
imas_entry.open()
put(ids: IDSToplevel, occurrence: int = 0) None

Write the contents of an IDS into this Database Entry.

The IDS is written entirely, with all time slices it may contain.

Caution

The put method deletes any previously existing data within the target IDS occurrence in the Database Entry.

Parameters:
ids: IDSToplevel

IDS object to put.

occurrence: int = 0

Which occurrence of the IDS to write to.

Example

ids = imas.IDSFactory().pf_active()
...  # fill the pf_active IDS here
imas_entry.put(ids)
put_slice(ids: IDSToplevel, occurrence: int = 0) None

Append a time slice of the provided IDS to the Database Entry.

Time slices must be appended in strictly increasing time order, since the Access Layer does not reorder time arrays. Doing otherwise will result in non-monotonic time arrays, which will create confusion and cause subsequent get_slice() calls to fail.

Although it is put progressively, time slice by time slice, the final IDS must be compliant with the data dictionary. A typical error when constructing IDS variables time slice by time slice is to change the size of the IDS fields during the time loop, which is not allowed, except for the children of an array of structures that has time as its coordinate.

The put_slice() command appends data, so it does not modify previously existing data within the target IDS occurrence in the Data Entry.

It is possible to append several time slices to a node of the IDS in one put_slice() call; however, the user must ensure that the size of the time dimension of the node remains consistent with the size of its timebase.

Parameters:
ids: IDSToplevel

IDS object to put.

occurrence: int = 0

Which occurrence of the IDS to write to.

Example

A frequent use case is storing IMAS data progressively in a time loop. You can fill the constant and static values only once and progressively append the dynamic values calculated in each step of the time loop with put_slice().

ids = imas.IDSFactory().pf_active()
...  # fill the static data of the pf_active IDS here
for i in range(N):
    ...  # fill time slice of the pf_active IDS
    imas_entry.put_slice(ids)
