webknossos.dataset.dataset

#   class Dataset:

A dataset is the entry point of the Dataset API. An existing dataset on disk can be opened or new datasets can be created.

A dataset stores its data in .wkw files on disk, with metadata in datasource-properties.json. The information in those files is kept in sync with the object.

Each dataset consists of one or more layers (webknossos.dataset.layer.Layer), which themselves can comprise multiple magnifications (webknossos.dataset.mag_view.MagView).

#   Dataset(dataset_path: Union[str, pathlib.Path])

To open an existing dataset on disk, simply call the constructor of Dataset. This requires that the datasource-properties.json exists. Based on the datasource-properties.json, a dataset object is constructed. Only layers and magnifications that are listed in the properties are loaded (even though there might exist more layers or magnifications on disk).

The dataset_path refers to the top level directory of the dataset (excluding layer or magnification names).
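A minimal sketch of opening a dataset, assuming the webknossos package is installed, a dataset directory with a datasource-properties.json exists at the (hypothetical) path used below, and it contains a layer named "color":

```python
from webknossos.dataset.dataset import Dataset

# Pass the top-level dataset directory, not a layer or magnification directory.
dataset = Dataset("path/to/dataset")

print(dataset.path)                 # location of the dataset on disk
layer = dataset.get_layer("color")  # raises IndexError if no such layer exists
```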

#   path

Location of the dataset

#   layers

Getter for the dictionary containing all layers.

#   def upload(self) -> str:
#   def get_layer(self, layer_name: str) -> webknossos.dataset.layer.Layer:

Returns the layer called layer_name of this dataset. The return type is webknossos.dataset.layer.Layer.

This function raises an IndexError if the specified layer_name does not exist.

#   def add_layer( self, layer_name: str, category: Literal['color', 'segmentation'], dtype_per_layer: Union[str, numpy.dtype, type, NoneType] = None, dtype_per_channel: Union[str, numpy.dtype, type, NoneType] = None, num_channels: Union[int, NoneType] = None, **kwargs: Any ) -> webknossos.dataset.layer.Layer:

Creates a new layer called layer_name and adds it to the dataset. The dtype can either be specified per layer or per channel. If neither of them is specified, uint8 per channel is used by default. When creating a segmentation layer (category="segmentation"), the parameter largest_segment_id must also be specified.

Creates the folder layer_name in the directory of self.path.

The return type is webknossos.dataset.layer.Layer.

This function raises an IndexError if the specified layer_name already exists.
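The following sketch shows both dtype variants, assuming a freshly created dataset at a hypothetical path; largest_segment_id is forwarded via **kwargs for the segmentation layer:

```python
from webknossos.dataset.dataset import Dataset

dataset = Dataset.create("testoutput/my_dataset", scale=(11.2, 11.2, 25.0))

# Color layer with the default dtype (uint8 per channel):
color_layer = dataset.add_layer("color", category="color", num_channels=3)

# Segmentation layers additionally require largest_segment_id:
segmentation_layer = dataset.add_layer(
    "segmentation",
    category="segmentation",
    dtype_per_channel="uint32",
    largest_segment_id=1000,  # required for category="segmentation"
)
```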

#   def get_or_add_layer( self, layer_name: str, category: Literal['color', 'segmentation'], dtype_per_layer: Union[str, numpy.dtype, type, NoneType] = None, dtype_per_channel: Union[str, numpy.dtype, type, NoneType] = None, num_channels: Union[int, NoneType] = None, **kwargs: Any ) -> webknossos.dataset.layer.Layer:

Creates a new layer called layer_name and adds it to the dataset, unless a layer with that name already exists; then returns the layer.

For more information see add_layer.

#   def add_layer_like( self, other_layer: webknossos.dataset.layer.Layer, layer_name: str ) -> webknossos.dataset.layer.Layer:
#   def add_layer_for_existing_files( self, layer_name: str, category: Literal['color', 'segmentation'], **kwargs: Any ) -> webknossos.dataset.layer.Layer:
#   def get_segmentation_layer(self) -> webknossos.dataset.layer.SegmentationLayer:

Returns the only segmentation layer.

Fails with an IndexError if there are multiple segmentation layers or none.

#   def get_color_layer(self) -> webknossos.dataset.layer.Layer:

Returns the only color layer.

Fails with a RuntimeError if there are multiple color layers or none.

#   def delete_layer(self, layer_name: str) -> None:

Deletes the layer from the datasource-properties.json and the data from disk.

#   def add_copy_layer( self, foreign_layer: Union[str, pathlib.Path, webknossos.dataset.layer.Layer], new_layer_name: Union[str, NoneType] = None ) -> webknossos.dataset.layer.Layer:

Copies the data at foreign_layer, which belongs to another dataset, into the current dataset. Additionally, the relevant information from the datasource-properties.json of the other dataset is copied as well. If new_layer_name is None, the name of the foreign layer is used.

#   def copy_dataset( self, new_dataset_path: Union[str, pathlib.Path], scale: Union[Tuple[float, float, float], NoneType] = None, block_len: Union[int, NoneType] = None, file_len: Union[int, NoneType] = None, compress: Union[bool, NoneType] = None, args: Union[argparse.Namespace, NoneType] = None ) -> webknossos.dataset.dataset.Dataset:

Creates a new dataset at new_dataset_path and copies the data from the current dataset into it. If not specified otherwise, the scale, block_len, file_len, and compression settings of the current dataset are also used for the new dataset.
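A sketch of a full copy, assuming an existing dataset at a hypothetical source path; omitted parameters fall back to the source dataset's settings:

```python
from webknossos.dataset.dataset import Dataset

dataset = Dataset("path/to/dataset")
copy = dataset.copy_dataset(
    "path/to/copy",
    compress=True,  # re-encode the copied .wkw data compressed
)
```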

#   def shallow_copy_dataset( self, new_dataset_path: pathlib.Path, name: Union[str, NoneType] = None, make_relative: bool = False, layers_to_ignore: Union[List[str], NoneType] = None ) -> webknossos.dataset.dataset.Dataset:

Creates a new dataset at the given path and links all mags of all existing layers. In addition, all other directories in the layer directories are linked as well, to make this method robust against additional files, e.g. layer/mappings/agglomerate_view.hdf5. This method is useful when exposing a dataset to webknossos.
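A sketch of a symlinked shallow copy, assuming an existing dataset at a hypothetical source path; the layer name in layers_to_ignore is likewise hypothetical:

```python
from pathlib import Path
from webknossos.dataset.dataset import Dataset

dataset = Dataset("path/to/dataset")
linked = dataset.shallow_copy_dataset(
    Path("path/to/shallow_copy"),
    make_relative=True,         # store relative instead of absolute symlinks
    layers_to_ignore=["temp"],  # hypothetical layer name, skipped while linking
)
```

Because only symlinks are created, the copy stays cheap regardless of dataset size, but it remains dependent on the original data staying in place.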

#   scale: Tuple[float, float, float]
#   name: str
#   default_view_configuration: Union[webknossos.dataset.properties.DatasetViewConfiguration, NoneType]
#   @classmethod
#   def create( cls, dataset_path: Union[str, pathlib.Path], scale: Tuple[float, float, float], name: Union[str, NoneType] = None ) -> webknossos.dataset.dataset.Dataset:

Creates a new dataset and the associated datasource-properties.json.
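A sketch of creating a fresh dataset at a hypothetical output path; scale is the voxel size as an (x, y, z) tuple, and the call writes the datasource-properties.json to disk:

```python
from webknossos.dataset.dataset import Dataset

dataset = Dataset.create(
    "testoutput/new_dataset",
    scale=(11.2, 11.2, 25.0),
    name="new_dataset",  # optional display name for the dataset
)
```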

#   @classmethod
#   def get_or_create( cls, dataset_path: Union[str, pathlib.Path], scale: Tuple[float, float, float], name: Union[str, NoneType] = None ) -> webknossos.dataset.dataset.Dataset:

Creates a new Dataset, in case it did not exist before, and then returns it. The presence of datasource-properties.json is used to check whether the dataset already exists.
