albumentations.augmentations.transforms3d.transforms


class CenterCrop3D

CenterCrop3D(
    size: tuple[int, int, int],
    pad_if_needed: bool = False,
    fill: tuple[float, ...] | float = 0,
    fill_mask: tuple[float, ...] | float = 0,
    p: float = 1.0
)

Crop the center of 3D volume.

Parameters

size (tuple[int, int, int], required)
    Desired output size of the crop in format (depth, height, width).
pad_if_needed (bool, default False)
    Whether to pad if the volume is smaller than the desired crop size.
fill (tuple[float, ...] | float, default 0)
    Padding value for the volume if pad_if_needed is True.
fill_mask (tuple[float, ...] | float, default 0)
    Padding value for the mask if pad_if_needed is True.
p (float, default 1.0)
    Probability of applying the transform.

Examples

>>> import numpy as np
>>> import albumentations as A
>>>
>>> # Prepare sample data
>>> volume = np.random.randint(0, 256, (20, 200, 200), dtype=np.uint8)  # (D, H, W)
>>> mask3d = np.random.randint(0, 2, (20, 200, 200), dtype=np.uint8)    # (D, H, W)
>>> keypoints = np.array([[100, 100, 10], [150, 150, 15]], dtype=np.float32)  # (x, y, z)
>>> keypoint_labels = [1, 2]  # Labels for each keypoint
>>>
>>> # Create the transform - crop to 16x128x128 from center
>>> transform = A.Compose([
...     A.CenterCrop3D(
...         size=(16, 128, 128),        # Output size (depth, height, width)
...         pad_if_needed=True,         # Pad if input is smaller than crop size
...         fill=0,                     # Fill value for volume padding
...         fill_mask=1,                # Fill value for mask padding
...         p=1.0
...     )
... ], keypoint_params=A.KeypointParams(coord_format='xyz', label_fields=['keypoint_labels']))
>>>
>>> # Apply the transform
>>> transformed = transform(
...     volume=volume,
...     mask3d=mask3d,
...     keypoints=keypoints,
...     keypoint_labels=keypoint_labels
... )
>>>
>>> # Get the transformed data
>>> cropped_volume = transformed["volume"]           # Shape: (16, 128, 128)
>>> cropped_mask3d = transformed["mask3d"]           # Shape: (16, 128, 128)
>>> cropped_keypoints = transformed["keypoints"]     # Keypoints shifted relative to center crop
>>> cropped_keypoint_labels = transformed["keypoint_labels"]  # Labels remain unchanged
>>>
>>> # Example with a small volume that requires padding
>>> small_volume = np.random.randint(0, 256, (10, 100, 100), dtype=np.uint8)
>>> small_transform = A.Compose([
...     A.CenterCrop3D(
...         size=(16, 128, 128),
...         pad_if_needed=True,   # Will pad since the input is smaller
...         fill=0,
...         p=1.0
...     )
... ])
>>> small_result = small_transform(volume=small_volume)
>>> padded_and_cropped = small_result["volume"]  # Shape: (16, 128, 128), padded to size

Notes

If you want to perform cropping only in the XY plane while preserving all slices along the Z axis, consider using CenterCrop instead. CenterCrop will apply the same XY crop to each slice independently, maintaining the full depth of the volume.
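The note above can be illustrated with plain NumPy: an XY-only center crop keeps every slice along Z intact. This is a simplified sketch of the indexing involved, not the library's implementation:

```python
import numpy as np

def center_crop_xy(volume, height, width):
    """Center-crop every slice in the XY plane, keeping the full depth.

    volume: array of shape (D, H, W), optionally with a trailing channel axis.
    """
    h, w = volume.shape[1], volume.shape[2]
    y0 = (h - height) // 2
    x0 = (w - width) // 2
    return volume[:, y0:y0 + height, x0:x0 + width]

volume = np.arange(20 * 200 * 200, dtype=np.uint32).reshape(20, 200, 200)
cropped = center_crop_xy(volume, 128, 128)
print(cropped.shape)  # (20, 128, 128) -- depth preserved
```

By contrast, CenterCrop3D with size=(16, 128, 128) would also crop the depth axis to 16 slices.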

class CoarseDropout3D

CoarseDropout3D(
    num_holes_range: tuple[int, int] = (1, 1),
    hole_depth_range: tuple[float, float] = (0.1, 0.2),
    hole_height_range: tuple[float, float] = (0.1, 0.2),
    hole_width_range: tuple[float, float] = (0.1, 0.2),
    fill: tuple[float, ...] | float = 0,
    fill_mask: tuple[float, ...] | float | None = None,
    p: float = 0.5
)

CoarseDropout3D randomly drops out cuboid regions from a 3D volume and, optionally, the corresponding regions in an associated 3D mask, to simulate occlusion and the varied object sizes found in real-world volumetric data.

Parameters

num_holes_range (tuple[int, int], default (1, 1))
    Range (min, max) for the number of cuboid regions to drop out.
hole_depth_range (tuple[float, float], default (0.1, 0.2))
    Range (min, max) for the depth of dropout regions as a fraction of the volume depth (between 0 and 1).
hole_height_range (tuple[float, float], default (0.1, 0.2))
    Range (min, max) for the height of dropout regions as a fraction of the volume height (between 0 and 1).
hole_width_range (tuple[float, float], default (0.1, 0.2))
    Range (min, max) for the width of dropout regions as a fraction of the volume width (between 0 and 1).
fill (tuple[float, ...] | float, default 0)
    Value for the dropped voxels. Can be:
      - int or float: all channels are filled with this value
      - tuple: one value per channel
fill_mask (tuple[float, ...] | float | None, default None)
    Fill value for dropout regions in the 3D mask. If None, mask regions corresponding to volume dropouts are left unchanged.
p (float, default 0.5)
    Probability of applying the transform.

Examples

>>> import numpy as np
>>> import albumentations as A
>>> volume = np.random.randint(0, 256, (10, 100, 100), dtype=np.uint8)  # (D, H, W)
>>> mask3d = np.random.randint(0, 2, (10, 100, 100), dtype=np.uint8)    # (D, H, W)
>>> aug = A.CoarseDropout3D(
...     num_holes_range=(3, 6),
...     hole_depth_range=(0.1, 0.2),
...     hole_height_range=(0.1, 0.2),
...     hole_width_range=(0.1, 0.2),
...     fill=0,
...     p=1.0
... )
>>> transformed = aug(volume=volume, mask3d=mask3d)
>>> transformed_volume, transformed_mask3d = transformed["volume"], transformed["mask3d"]

Notes

- The actual number and size of dropout regions are randomly chosen within the specified ranges.
- All values in hole_depth_range, hole_height_range and hole_width_range must be between 0 and 1.
- If you want to apply dropout only in the XY plane while preserving the full depth dimension, consider using CoarseDropout instead. CoarseDropout applies the same rectangular dropout to each slice independently, effectively creating cylindrical dropout regions that extend through the entire depth of the volume.
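The cuboid-dropout behaviour described above can be sketched in plain NumPy. This is a simplified stand-in using the same range parameters; the real transform also handles channels, masks, and per-channel fill values:

```python
import numpy as np

def coarse_dropout_3d(volume, num_holes, depth_frac, height_frac, width_frac,
                      fill=0, rng=None):
    """Drop random cuboid regions from a (D, H, W) volume.

    The *_frac arguments are (min, max) fractions of each dimension,
    mirroring CoarseDropout3D's hole_*_range parameters.
    """
    rng = np.random.default_rng(rng)
    out = volume.copy()
    d, h, w = volume.shape[:3]
    for _ in range(rng.integers(num_holes[0], num_holes[1] + 1)):
        # Sample the hole size, then a position where it fits entirely.
        hd = int(d * rng.uniform(*depth_frac))
        hh = int(h * rng.uniform(*height_frac))
        hw = int(w * rng.uniform(*width_frac))
        z0 = rng.integers(0, d - hd + 1)
        y0 = rng.integers(0, h - hh + 1)
        x0 = rng.integers(0, w - hw + 1)
        out[z0:z0 + hd, y0:y0 + hh, x0:x0 + hw] = fill
    return out

volume = np.full((10, 100, 100), 255, dtype=np.uint8)
dropped = coarse_dropout_3d(volume, (3, 6), (0.1, 0.2), (0.1, 0.2), (0.1, 0.2),
                            fill=0, rng=42)
print(dropped.shape)  # (10, 100, 100) -- shape is preserved, holes are zeroed
```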

class CubicSymmetry

CubicSymmetry(
    p: float = 1.0
)

Applies a random cubic symmetry transformation to a 3D volume. This transform is a 3D extension of D4: while D4 handles the 8 symmetries of a square (4 rotations x 2 reflections), CubicSymmetry handles all 48 symmetries of a cube. Like D4, this transform does not create any interpolation artifacts, as it only remaps voxels from one position to another.

The 48 transformations consist of:
- 24 rotations (orientation-preserving): the identity, 9 rotations about face axes (3 axes x rotations of 90, 180 and 270 degrees), 8 rotations about body diagonals (4 axes x 2 rotations), and 6 rotations about edge axes (6 axes x 1 rotation)
- 24 rotoreflections (orientation-reversing): a reflection through a plane composed with any of the 24 rotations

For a cube, these transformations preserve the set of face centers (6), vertex positions (8), and edge centers (12). Works with 3D volumes and masks of shape (D, H, W) or (D, H, W, C).

Parameters

p (float, default 1.0)
    Probability of applying the transform.

Examples

>>> import numpy as np
>>> import albumentations as A
>>> volume = np.random.randint(0, 256, (10, 100, 100), dtype=np.uint8)  # (D, H, W)
>>> mask3d = np.random.randint(0, 2, (10, 100, 100), dtype=np.uint8)    # (D, H, W)
>>> transform = A.CubicSymmetry(p=1.0)
>>> transformed = transform(volume=volume, mask3d=mask3d)
>>> transformed_volume = transformed["volume"]
>>> transformed_mask3d = transformed["mask3d"]

Notes

- This transform is particularly useful for data augmentation in 3D medical imaging, crystallography, and voxel-based 3D modeling where the object's orientation is arbitrary.
- All transformations preserve the object's chirality (handedness) when using pure rotations (indices 0-23) and invert it when using rotoreflections (indices 24-47).
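The group structure can be checked directly in NumPy: the 48 symmetries of a cube are exactly the 6 axis permutations combined with the 8 axis-flip patterns. This is an illustrative enumeration, not the library's internal indexing scheme:

```python
import itertools
import numpy as np

def cubic_symmetries(volume):
    """Yield all 48 orientations of a cubic (N, N, N) array.

    6 axis permutations x 8 flip patterns = 48 transformations; 24 of
    these are pure rotations and 24 are rotoreflections.
    """
    for perm in itertools.permutations(range(3)):
        for flips in itertools.product([False, True], repeat=3):
            out = np.transpose(volume, perm)
            axes = [i for i, f in enumerate(flips) if f]
            if axes:
                out = np.flip(out, axis=axes)
            yield out

# For an array with all-distinct entries, every symmetry yields a
# distinct result, so we recover the full group order.
cube = np.arange(27).reshape(3, 3, 3)
variants = {v.tobytes() for v in cubic_symmetries(cube)}
print(len(variants))  # 48
```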

class GridShuffle3D

GridShuffle3D(
    grid_zyx: tuple[int, int, int] = (2, 2, 2),
    p: float = 0.5
)

Randomly shuffles the grid's cells on a 3D volume, mask3d, or keypoints, effectively rearranging patches within the volume. This transformation divides the volume into a 3D grid and then permutes these grid cells based on a random mapping. Unlike the 2D version, this does not support bounding boxes as 3D bounding boxes are not yet implemented.

Parameters

grid_zyx (tuple[int, int, int], default (2, 2, 2))
    Size of the grid for splitting the volume into cells along the (Z, Y, X) axes, corresponding to the (depth, height, width) dimensions. Each cell is shuffled randomly. For example, (2, 3, 3) will divide the volume into 2 slices along Z, 3 along Y, and 3 along X, resulting in 18 cells to be shuffled.
p (float, default 0.5)
    Probability that the transform will be applied. Should be in the range [0, 1].

Examples

>>> import numpy as np
>>> import albumentations as A
>>> # Prepare sample data
>>> volume = np.random.randint(0, 256, (10, 100, 100), dtype=np.uint8)  # (D, H, W)
>>> mask3d = np.random.randint(0, 2, (10, 100, 100), dtype=np.uint8)    # (D, H, W)
>>> keypoints = np.array([[20, 30, 5], [60, 70, 8]], dtype=np.float32)  # (x, y, z)
>>> keypoint_labels = [1, 2]  # Labels for each keypoint
>>>
>>> # Define transform with grid_zyx as a tuple (Z, Y, X)
>>> transform = A.Compose([
...     A.GridShuffle3D(grid_zyx=(2, 3, 3), p=1.0),
... ], keypoint_params=A.KeypointParams(coord_format='xyz', label_fields=['keypoint_labels']))
>>>
>>> # Apply the transform
>>> transformed = transform(
...     volume=volume,
...     mask3d=mask3d,
...     keypoints=keypoints,
...     keypoint_labels=keypoint_labels
... )
>>>
>>> # Get the transformed data
>>> transformed_volume = transformed['volume']           # Grid-shuffled volume
>>> transformed_mask3d = transformed['mask3d']           # Grid-shuffled mask
>>> transformed_keypoints = transformed['keypoints']     # Grid-shuffled keypoints
>>> transformed_keypoint_labels = transformed['keypoint_labels']  # Labels remain unchanged

Notes

- This transform maintains consistency across all targets. If applied to a volume and its corresponding mask3d or keypoints, the same shuffling is applied to all of them.
- The grid should contain at least 2 cells (i.e., grid_zyx should be at least (1, 1, 2), (1, 2, 1), or (2, 1, 1)) for the transform to have any effect.
- Keypoints are moved along with their corresponding grid cell.
- The grid_zyx parameter corresponds to volume dimensions: Z (depth), Y (height), X (width).
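The cell-shuffling idea can be sketched in plain NumPy for dimensions that divide evenly by the grid (the actual transform also handles uneven splits, masks, and keypoints):

```python
import numpy as np

def grid_shuffle_3d(volume, grid_zyx, rng=None):
    """Shuffle grid cells of a (D, H, W) volume.

    Assumes each dimension is divisible by the corresponding grid size.
    """
    rng = np.random.default_rng(rng)
    gz, gy, gx = grid_zyx
    d, h, w = volume.shape
    # Split into (gz * gy * gx) cells of shape (d//gz, h//gy, w//gx).
    cells = (volume
             .reshape(gz, d // gz, gy, h // gy, gx, w // gx)
             .transpose(0, 2, 4, 1, 3, 5)
             .reshape(gz * gy * gx, d // gz, h // gy, w // gx))
    # Permute the cells, then reassemble the volume.
    cells = cells[rng.permutation(len(cells))]
    return (cells
            .reshape(gz, gy, gx, d // gz, h // gy, w // gx)
            .transpose(0, 3, 1, 4, 2, 5)
            .reshape(d, h, w))

volume = np.arange(8 * 90 * 90).reshape(8, 90, 90)
shuffled = grid_shuffle_3d(volume, (2, 3, 3), rng=0)
print(shuffled.shape)  # (8, 90, 90) -- same voxels, rearranged by cell
```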

class Pad3D

Pad3D(
    padding: int | tuple[int, int, int] | tuple[int, int, int, int, int, int],
    fill: tuple[float, ...] | float = 0,
    fill_mask: tuple[float, ...] | float = 0,
    p: float = 1.0
)

Pad the sides of a 3D volume by a specified number of voxels.

Parameters

padding (int | tuple[int, int, int] | tuple[int, int, int, int, int, int], required)
    Padding values. Can be:
      - int: pad all sides by this value
      - tuple[int, int, int]: symmetric padding (depth, height, width), where each value is applied to both sides of the corresponding dimension
      - tuple[int, int, int, int, int, int]: explicit padding per side, in order (depth_front, depth_back, height_top, height_bottom, width_left, width_right)
fill (tuple[float, ...] | float, default 0)
    Padding value for the volume.
fill_mask (tuple[float, ...] | float, default 0)
    Padding value for the mask.
p (float, default 1.0)
    Probability of applying the transform.

Examples

>>> import numpy as np
>>> import albumentations as A
>>>
>>> # Prepare sample data
>>> volume = np.random.randint(0, 256, (10, 100, 100), dtype=np.uint8)  # (D, H, W)
>>> mask3d = np.random.randint(0, 2, (10, 100, 100), dtype=np.uint8)    # (D, H, W)
>>> keypoints = np.array([[20, 30, 5], [60, 70, 8]], dtype=np.float32)  # (x, y, z)
>>> keypoint_labels = [1, 2]  # Labels for each keypoint
>>>
>>> # Create the transform with symmetric padding
>>> transform = A.Compose([
...     A.Pad3D(
...         padding=(2, 5, 10),  # (depth, height, width) applied symmetrically
...         fill=0,
...         fill_mask=1,
...         p=1.0
...     )
... ], keypoint_params=A.KeypointParams(coord_format='xyz', label_fields=['keypoint_labels']))
>>>
>>> # Apply the transform
>>> transformed = transform(
...     volume=volume,
...     mask3d=mask3d,
...     keypoints=keypoints,
...     keypoint_labels=keypoint_labels
... )
>>>
>>> # Get the transformed data
>>> padded_volume = transformed["volume"]  # Shape: (14, 110, 120)
>>> padded_mask3d = transformed["mask3d"]  # Shape: (14, 110, 120)
>>> padded_keypoints = transformed["keypoints"]  # Keypoints shifted by padding
>>> padded_keypoint_labels = transformed["keypoint_labels"]  # Labels remain unchanged

Notes

Input volume should be a numpy array with dimensions ordered as (z, y, x) or (depth, height, width), with optional channel dimension as the last axis.
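The three padding formats map directly onto NumPy's pad_width convention. The helper below is an illustrative converter, not the library's code:

```python
import numpy as np

def to_pad_width(padding):
    """Convert Pad3D's padding formats to the pad_width list np.pad expects."""
    if isinstance(padding, int):
        # One value for all six sides.
        return [(padding, padding)] * 3
    if len(padding) == 3:
        # Symmetric (depth, height, width): each value pads both sides.
        return [(p, p) for p in padding]
    # Explicit per-side: (front, back, top, bottom, left, right).
    front, back, top, bottom, left, right = padding
    return [(front, back), (top, bottom), (left, right)]

volume = np.zeros((10, 100, 100), dtype=np.uint8)
padded = np.pad(volume, to_pad_width((2, 5, 10)), constant_values=0)
print(padded.shape)  # (14, 110, 120), matching the example above
```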

class PadIfNeeded3D

PadIfNeeded3D(
    min_zyx: tuple[int, int, int] | None = None,
    pad_divisor_zyx: tuple[int, int, int] | None = None,
    position: 'center' | 'random' = 'center',
    fill: tuple[float, ...] | float = 0,
    fill_mask: tuple[float, ...] | float = 0,
    p: float = 1.0
)

Pads the sides of a 3D volume if its dimensions are less than specified minimum dimensions. If the pad_divisor_zyx is specified, the function additionally ensures that the volume dimensions are divisible by these values.

Parameters

min_zyx (tuple[int, int, int] | None, default None)
    Minimum desired size as (depth, height, width). Ensures volume dimensions are at least these values. If not specified, pad_divisor_zyx must be provided.
pad_divisor_zyx (tuple[int, int, int] | None, default None)
    If set, pads each dimension to make it divisible by the corresponding value, in format (depth_div, height_div, width_div). If not specified, min_zyx must be provided.
position ('center' | 'random', default 'center')
    Position where the volume is to be placed after padding.
fill (tuple[float, ...] | float, default 0)
    Value to fill the border voxels of the volume.
fill_mask (tuple[float, ...] | float, default 0)
    Value to fill the border voxels of masks.
p (float, default 1.0)
    Probability of applying the transform.

Examples

>>> import numpy as np
>>> import albumentations as A
>>>
>>> # Prepare sample data
>>> volume = np.random.randint(0, 256, (10, 100, 100), dtype=np.uint8)  # (D, H, W)
>>> mask3d = np.random.randint(0, 2, (10, 100, 100), dtype=np.uint8)    # (D, H, W)
>>> keypoints = np.array([[20, 30, 5], [60, 70, 8]], dtype=np.float32)  # (x, y, z)
>>> keypoint_labels = [1, 2]  # Labels for each keypoint
>>>
>>> # Create a transform with both min_zyx and pad_divisor_zyx
>>> transform = A.Compose([
...     A.PadIfNeeded3D(
...         min_zyx=(16, 128, 128),        # Minimum size (depth, height, width)
...         pad_divisor_zyx=(8, 16, 16),   # Make dimensions divisible by these values
...         position="center",              # Center the volume in the padded space
...         fill=0,                         # Fill value for volume
...         fill_mask=1,                    # Fill value for mask
...         p=1.0
...     )
... ], keypoint_params=A.KeypointParams(coord_format='xyz', label_fields=['keypoint_labels']))
>>>
>>> # Apply the transform
>>> transformed = transform(
...     volume=volume,
...     mask3d=mask3d,
...     keypoints=keypoints,
...     keypoint_labels=keypoint_labels
... )
>>>
>>> # Get the transformed data
>>> padded_volume = transformed["volume"]           # Shape: (16, 128, 128)
>>> padded_mask3d = transformed["mask3d"]           # Shape: (16, 128, 128)
>>> padded_keypoints = transformed["keypoints"]     # Keypoints shifted by padding
>>> padded_keypoint_labels = transformed["keypoint_labels"]  # Labels remain unchanged

Notes

Input volume should be a numpy array with dimensions ordered as (z, y, x) or (depth, height, width), with optional channel dimension as the last axis.
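The target shape is each input dimension raised to the minimum, then rounded up to the nearest multiple of the divisor. A small sketch of that computation (illustrative, not the library's code):

```python
import math

def target_shape(shape_zyx, min_zyx=None, pad_divisor_zyx=None):
    """Compute the padded output shape PadIfNeeded3D aims for."""
    out = []
    for i, dim in enumerate(shape_zyx):
        if min_zyx is not None:
            # Raise the dimension to the requested minimum.
            dim = max(dim, min_zyx[i])
        if pad_divisor_zyx is not None:
            # Round up to the next multiple of the divisor.
            div = pad_divisor_zyx[i]
            dim = math.ceil(dim / div) * div
        out.append(dim)
    return tuple(out)

print(target_shape((10, 100, 100), (16, 128, 128), (8, 16, 16)))
# (16, 128, 128), matching the example above
```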

class RandomCrop3D

RandomCrop3D(
    size: tuple[int, int, int],
    pad_if_needed: bool = False,
    fill: tuple[float, ...] | float = 0,
    fill_mask: tuple[float, ...] | float = 0,
    p: float = 1.0
)

Crop random part of 3D volume.

Parameters

size (tuple[int, int, int], required)
    Desired output size of the crop in format (depth, height, width).
pad_if_needed (bool, default False)
    Whether to pad if the volume is smaller than the desired crop size.
fill (tuple[float, ...] | float, default 0)
    Padding value for the volume if pad_if_needed is True.
fill_mask (tuple[float, ...] | float, default 0)
    Padding value for the mask if pad_if_needed is True.
p (float, default 1.0)
    Probability of applying the transform.

Examples

>>> import numpy as np
>>> import albumentations as A
>>>
>>> # Prepare sample data
>>> volume = np.random.randint(0, 256, (20, 200, 200), dtype=np.uint8)  # (D, H, W)
>>> mask3d = np.random.randint(0, 2, (20, 200, 200), dtype=np.uint8)    # (D, H, W)
>>> keypoints = np.array([[100, 100, 10], [150, 150, 15]], dtype=np.float32)  # (x, y, z)
>>> keypoint_labels = [1, 2]  # Labels for each keypoint
>>>
>>> # Create the transform with random crop and padding if needed
>>> transform = A.Compose([
...     A.RandomCrop3D(
...         size=(16, 128, 128),        # Output size (depth, height, width)
...         pad_if_needed=True,         # Pad if input is smaller than crop size
...         fill=0,                     # Fill value for volume padding
...         fill_mask=1,                # Fill value for mask padding
...         p=1.0
...     )
... ], keypoint_params=A.KeypointParams(coord_format='xyz', label_fields=['keypoint_labels']))
>>>
>>> # Apply the transform
>>> transformed = transform(
...     volume=volume,
...     mask3d=mask3d,
...     keypoints=keypoints,
...     keypoint_labels=keypoint_labels
... )
>>>
>>> # Get the transformed data
>>> cropped_volume = transformed["volume"]           # Shape: (16, 128, 128)
>>> cropped_mask3d = transformed["mask3d"]           # Shape: (16, 128, 128)
>>> cropped_keypoints = transformed["keypoints"]     # Keypoints shifted relative to random crop
>>> cropped_keypoint_labels = transformed["keypoint_labels"]  # Labels remain unchanged

Notes

If you want to perform random cropping only in the XY plane while preserving all slices along the Z axis, consider using RandomCrop instead. RandomCrop will apply the same XY crop to each slice independently, maintaining the full depth of the volume.
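For reference, the random-offset indexing behind a 3D random crop can be sketched in plain NumPy (no padding handling; illustrative only):

```python
import numpy as np

def random_crop_3d(volume, size, rng=None):
    """Crop a random (d, h, w) region from a (D, H, W) volume.

    Assumes the volume is at least as large as the crop in every dimension.
    """
    rng = np.random.default_rng(rng)
    d, h, w = size
    # Pick a random corner such that the crop fits entirely in the volume.
    z0 = rng.integers(0, volume.shape[0] - d + 1)
    y0 = rng.integers(0, volume.shape[1] - h + 1)
    x0 = rng.integers(0, volume.shape[2] - w + 1)
    return volume[z0:z0 + d, y0:y0 + h, x0:x0 + w]

volume = np.random.default_rng(0).integers(0, 256, (20, 200, 200), dtype=np.uint8)
crop = random_crop_3d(volume, (16, 128, 128), rng=1)
print(crop.shape)  # (16, 128, 128)
```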