webknossos.dataset.mag_view
¶
Classes:
- MagView – A MagView contains all information about the data of a single magnification of a Layer.
MagView
¶
MagView(layer: Layer, mag: Mag, chunk_shape: Vec3Int, chunks_per_shard: Vec3Int, compression_mode: bool, create: bool = False, path: Optional[UPath] = None)
Bases: View
A MagView contains all information about the data of a single magnification of a Layer.
MagView inherits from View. The main difference is that a MagView has a reference to its Layer. Therefore, a MagView can write outside the specified bounding box (unlike a normal View), resizing the layer's bounding box.
If necessary, the properties are automatically updated (e.g. if the bounding box changed).
Do not use this constructor manually. Instead use webknossos.dataset.layer.Layer.add_mag().
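Example (sketch of obtaining a MagView via add_mag() and writing data; the dataset path, layer name, voxel size, and data below are hypothetical):
import numpy as np
import webknossos as wk

dataset = wk.Dataset("path/to/new_dataset", voxel_size=(11.2, 11.2, 25.0))
layer = dataset.add_layer("my_layer", "color", dtype_per_channel="uint8")
mag1 = layer.add_mag("1")  # returns a MagView

# Writing outside the current bounding box resizes the layer's bounding box.
data = np.zeros((64, 64, 64), dtype="uint8")
mag1.write(data, absolute_offset=(0, 0, 0))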
Methods:
- chunk – This method chunks the view into multiple sub-views of size chunk_shape (in Mag(1)).
- compress – Compresses the files on disk. This has consequences for writing data (see write).
- content_is_equal –
- for_each_chunk – The view is chunked into multiple sub-views of size chunk_shape (in Mag(1)).
- for_zipped_chunks – This method is similar to for_each_chunk in the sense that it delegates work to smaller chunks.
- get_bounding_boxes_on_disk – Returns a Mag(1) bounding box for each file on disk.
- get_buffered_slice_reader – The returned reader yields slices of data along a specified axis.
- get_buffered_slice_writer – The returned writer buffers multiple slices before they are written to disk.
- get_dtype – Returns the dtype per channel of the data. For example uint8.
- get_view –
- get_views_on_disk – Yields a view for each file on disk, which can be used for efficient parallelization.
- get_zarr_array – Directly access the underlying Zarr array. Only available for Zarr-based datasets.
- map_chunk – The view is chunked into multiple sub-views of size chunk_shape (in Mag(1)).
- merge_chunk –
- merge_with_view –
- read –
- read_bbox – ⚠️ Deprecated. Please use read() with relative_bounding_box or absolute_bounding_box in Mag(1) instead.
- read_xyz – The user can specify the bounding box in the dataset's coordinate system.
- write –
Attributes:
- bounding_box (NDBoundingBox) –
- global_offset (Vec3Int) – ⚠️ Deprecated, use Vec3Int.zeros() instead.
- header (Header) – ⚠️ Deprecated, use info instead.
- info (ArrayInfo) –
- is_remote_to_dataset (bool) –
- layer (Layer) –
- mag (Mag) –
- name (str) –
- path (Path) –
- read_only (bool) –
- size (VecInt) – ⚠️ Deprecated, use mag_view.bounding_box.in_mag(mag_view.mag).bottomright instead.
size
property
¶
size: VecInt
⚠️ Deprecated, use mag_view.bounding_box.in_mag(mag_view.mag).bottomright instead.
chunk
¶
chunk(chunk_shape: VecIntLike, chunk_border_alignments: Optional[VecIntLike] = None, read_only: bool = False) -> Generator[View, None, None]
This method chunks the view into multiple sub-views of size chunk_shape (in Mag(1)).
The chunk_border_alignments parameter specifies the alignment of the chunks.
The default is to align the chunks to the origin (0, 0, 0).
Example:
# ...
# let 'mag1' be a `MagView`
chunks = mag1.chunk(chunk_shape=(100, 100, 100), chunk_border_alignments=(50, 50, 50))
compress
¶
compress(target_path: Optional[Union[str, Path]] = None, args: Optional[Namespace] = None, executor: Optional[Executor] = None) -> None
Compresses the files on disk. This has consequences for writing data (see write).
The data gets compressed in place if target_path is None. Otherwise, it is written to target_path/layer_name/mag.
Compressing mags on remote file systems requires a target_path.
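Example (sketch; assumes 'mag1' is a MagView, the target path is hypothetical):
# Compress in place (local file system):
mag1.compress()

# ... or write the compressed copy to a separate path (required for remote file systems):
mag1.compress(target_path="path/to/compressed_dataset")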
content_is_equal
¶
content_is_equal(other: View, args: Optional[Namespace] = None, executor: Optional[Executor] = None) -> bool
for_each_chunk
¶
for_each_chunk(func_per_chunk: Callable[[Tuple[View, int]], None], chunk_shape: Optional[Vec3IntLike] = None, executor: Optional[Executor] = None, progress_desc: Optional[str] = None, *, chunk_size: Optional[Vec3IntLike] = None) -> None
The view is chunked into multiple sub-views of size chunk_shape (in Mag(1)), by default one chunk per file.
Then, func_per_chunk is performed on each sub-view.
Besides the view, the counter i is passed to func_per_chunk, which can be used for logging.
Additional parameters for func_per_chunk can be specified using functools.partial.
The computation of each chunk has to be independent of the others. Therefore, the work can be parallelized with executor.
If the View is of type MagView, only the bounding box from the properties is chunked.
Example:
from typing import Tuple

from webknossos.dataset import View
from webknossos.utils import named_partial

def some_work(args: Tuple[View, int], some_parameter: int) -> None:
    view_of_single_chunk, i = args
    # perform operations on the view
    ...

# ...
# let 'mag1' be a `MagView`
func = named_partial(some_work, some_parameter=42)
mag1.for_each_chunk(
    func,
)
for_zipped_chunks
¶
for_zipped_chunks(func_per_chunk: Callable[[Tuple[View, View, int]], None], target_view: View, source_chunk_shape: Optional[Vec3IntLike] = None, target_chunk_shape: Optional[Vec3IntLike] = None, executor: Optional[Executor] = None, progress_desc: Optional[str] = None, *, source_chunk_size: Optional[Vec3IntLike] = None, target_chunk_size: Optional[Vec3IntLike] = None) -> None
This method is similar to for_each_chunk in the sense that it delegates work to smaller chunks,
given by source_chunk_shape and target_chunk_shape (both in Mag(1), by default using the larger of the source_view's and the target_view's file sizes).
However, this method also takes another view as a parameter. Both views are chunked simultaneously and a matching pair of chunks is then passed to the function that shall be executed.
This is useful if data from one view should be (transformed and) written to a different view, assuming that the transformation of the data can be handled at the chunk level.
In addition to the two views, the counter i is passed to func_per_chunk, which can be used for logging.
The mapping of chunks from the source view to the target view is bijective.
The ratio between the size of the source_view (self) and the source_chunk_shape must be equal to the ratio between the target_view and the target_chunk_shape. This guarantees that the number of chunks in the source_view is equal to the number of chunks in the target_view.
The target_chunk_shape must be a multiple of the file size on disk to avoid concurrent writes.
Example use case: downsampling from Mag(1) to Mag(2)
- size of the views: 16384³ (8192³ in Mag(2) for the target_view)
- automatic chunk sizes: 2048³, assuming default file-lengths (1024³ in Mag(2), which fits the default file-length of 32*32)
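Example (sketch of such a zipped-chunk pass as a naive 2x downsampling; 'mag1'/'mag2' and the striding-based downsampling are hypothetical stand-ins, the library ships its own downsampling routines):
from typing import Tuple

from webknossos.dataset import View

def downsample_chunk(args: Tuple[View, View, int]) -> None:
    source_view, target_view, i = args
    data = source_view.read()  # channel-first numpy array of the source chunk
    # Naive downsampling by keeping every second voxel (illustrative only).
    target_view.write(data[:, ::2, ::2, ::2])

# ...
# let 'mag1' and 'mag2' be `MagView`s of the same layer
mag1.for_zipped_chunks(
    downsample_chunk,
    target_view=mag2.get_view(),
)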
get_bounding_boxes_on_disk
¶
get_bounding_boxes_on_disk() -> Iterator[NDBoundingBox]
Returns a Mag(1) bounding box for each file on disk.
This differs from the bounding box in the properties, which is an "overall" bounding box, abstracting from the files on disk.
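Example (sketch; assumes 'mag1' is a MagView):
for bbox in mag1.get_bounding_boxes_on_disk():
    print(bbox)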
get_buffered_slice_reader
¶
get_buffered_slice_reader(offset: Optional[Vec3IntLike] = None, size: Optional[Vec3IntLike] = None, buffer_size: int = 32, dimension: int = 2, *, relative_bounding_box: Optional[NDBoundingBox] = None, absolute_bounding_box: Optional[NDBoundingBox] = None, use_logging: bool = False) -> BufferedSliceReader
The returned reader yields slices of data along a specified axis. Internally, it reads multiple slices from disk at once and buffers the data.
Arguments:
- The user can specify where the reader should start:
  - relative_bounding_box in Mag(1)
  - absolute_bounding_box in Mag(1)
  - ⚠️ deprecated: offset and size in the current Mag; offset used to be relative for View and absolute for MagView
- buffer_size: number of slices that get buffered
- dimension: dimension along which the data is sliced (x: 0, y: 1, z: 2; default is 2)
The reader must be used as a context manager using the with syntax (see example below).
Entering the context returns an iterator yielding slices (np.ndarray).
Usage:
view = ...
with view.get_buffered_slice_reader() as reader:
    for slice_data in reader:
        ...
get_buffered_slice_writer
¶
get_buffered_slice_writer(offset: Optional[Vec3IntLike] = None, buffer_size: int = 32, dimension: int = 2, json_update_allowed: bool = True, *, relative_offset: Optional[Vec3IntLike] = None, absolute_offset: Optional[Vec3IntLike] = None, relative_bounding_box: Optional[NDBoundingBox] = None, absolute_bounding_box: Optional[NDBoundingBox] = None, use_logging: bool = False) -> BufferedSliceWriter
The returned writer buffers multiple slices before they are written to disk. As soon as the buffer is full, the data gets written to disk.
Arguments:
- The user can specify where the writer should start:
  - relative_offset in Mag(1)
  - absolute_offset in Mag(1)
  - relative_bounding_box in Mag(1)
  - absolute_bounding_box in Mag(1)
  - ⚠️ deprecated: offset in the current Mag; used to be relative for View and absolute for MagView
- buffer_size: number of slices that get buffered
- dimension: dimension along which the data is sliced (x: 0, y: 1, z: 2; default is 2)
The writer must be used as a context manager using the with syntax (see example below), which results in a generator consuming np.ndarray slices via writer.send(slice).
Exiting the context will automatically flush any remaining buffered data to disk.
Usage:
data_cube = ...
view = ...
with view.get_buffered_slice_writer() as writer:
    for data_slice in data_cube:
        writer.send(data_slice)
get_view
¶
get_view(offset: Optional[Vec3IntLike] = None, size: Optional[Vec3IntLike] = None, *, relative_offset: Optional[Vec3IntLike] = None, absolute_offset: Optional[Vec3IntLike] = None, relative_bounding_box: Optional[NDBoundingBox] = None, absolute_bounding_box: Optional[NDBoundingBox] = None, read_only: Optional[bool] = None) -> View
get_views_on_disk
¶
get_views_on_disk(read_only: Optional[bool] = None) -> Iterator[View]
Yields a view for each file on disk, which can be used for efficient parallelization.
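Example (sketch; assumes 'mag1' is a MagView, the per-file processing is hypothetical):
for file_view in mag1.get_views_on_disk(read_only=True):
    data = file_view.read()
    # process this file's data independently, e.g. submit it to an executor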
get_zarr_array
¶
get_zarr_array() -> NDArrayLike
Directly access the underlying Zarr array. Only available for Zarr-based datasets.
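Example (sketch; assumes 'mag1' is a MagView of a Zarr-based dataset):
zarr_array = mag1.get_zarr_array()
print(zarr_array.shape, zarr_array.dtype)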
map_chunk
¶
map_chunk(func_per_chunk: Callable[[View], Any], chunk_shape: Optional[Vec3IntLike] = None, executor: Optional[Executor] = None, progress_desc: Optional[str] = None) -> List[Any]
The view is chunked into multiple sub-views of size chunk_shape (in Mag(1)), by default one chunk per file.
Then, func_per_chunk is performed on each sub-view and the results are collected in a list.
Additional parameters for func_per_chunk can be specified using functools.partial.
The computation of each chunk has to be independent of the others. Therefore, the work can be parallelized with executor.
If the View is of type MagView, only the bounding box from the properties is chunked.
Example:
from webknossos.dataset import View
from webknossos.utils import named_partial

def some_work(view: View, some_parameter: int) -> None:
    # perform operations on the view
    ...

# ...
# let 'mag1' be a `MagView`
func = named_partial(some_work, some_parameter=42)
results = mag1.map_chunk(
    func,
)
read
¶
read(offset: Optional[Vec3IntLike] = None, size: Optional[Vec3IntLike] = None, *, relative_offset: Optional[Vec3IntLike] = None, absolute_offset: Optional[Vec3IntLike] = None, relative_bounding_box: Optional[NDBoundingBox] = None, absolute_bounding_box: Optional[NDBoundingBox] = None) -> ndarray
read_bbox
¶
read_bbox(bounding_box: Optional[BoundingBox] = None) -> ndarray
⚠️ Deprecated. Please use read() with relative_bounding_box or absolute_bounding_box in Mag(1) instead.
The user can specify the bounding_box of the requested data in the current mag.
See read() for more details.
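Example (sketch of migrating to read(); the bounding box values are hypothetical):
from webknossos import BoundingBox

# Deprecated: bounding box in the current mag
data = mag1.read_bbox(BoundingBox((0, 0, 0), (512, 512, 64)))

# Preferred: bounding box in Mag(1)
data = mag1.read(absolute_bounding_box=BoundingBox((0, 0, 0), (512, 512, 64)))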
read_xyz
¶
read_xyz(relative_bounding_box: Optional[NDBoundingBox] = None, absolute_bounding_box: Optional[NDBoundingBox] = None) -> ndarray
The user can specify the bounding box in the dataset's coordinate system.
The default is to read all data of the view's bounding box.
Alternatively, one can supply one of the following keyword arguments:
* relative_bounding_box in Mag(1)
* absolute_bounding_box in Mag(1)
Returns the specified data as a np.array.
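Example (sketch; assumes 'mag1' is a MagView, the coordinates are hypothetical):
from webknossos import BoundingBox

# Read the full bounding box of the view:
data = mag1.read_xyz()

# Read a sub-region, specified in Mag(1):
data = mag1.read_xyz(absolute_bounding_box=BoundingBox((0, 0, 0), (256, 256, 32)))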
write
¶
write(data: ndarray, offset: Optional[Vec3IntLike] = None, json_update_allowed: bool = True, *, relative_offset: Optional[Vec3IntLike] = None, absolute_offset: Optional[Vec3IntLike] = None, relative_bounding_box: Optional[NDBoundingBox] = None, absolute_bounding_box: Optional[NDBoundingBox] = None) -> None
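Example (sketch; assumes 'mag1' is a Mag(1) MagView of a single-channel uint8 layer, the region and data are hypothetical):
import numpy as np
from webknossos import BoundingBox

# For a Mag(1) view, the Mag(1) bounding box and the data shape coincide.
data = np.zeros((128, 128, 32), dtype="uint8")
mag1.write(data, absolute_bounding_box=BoundingBox((0, 0, 0), (128, 128, 32)))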