
ChannelDropout augmentation (augmentations.dropout.channel_dropout)

class ChannelDropout (channel_drop_range=(1, 1), fill_value=0, always_apply=None, p=0.5)

Randomly drop channels in the input image.

This transform randomly selects a number of channels to drop from the input image and replaces them with a specified fill value. This can improve model robustness to missing or corrupted channels.

The technique is conceptually similar to:

  • Dropout layers in neural networks, which randomly set input units to 0 during training.
  • CoarseDropout augmentation, which drops out regions in the spatial dimensions of the image.

However, ChannelDropout operates on the channel dimension, effectively "dropping out" entire color channels or feature maps.
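
To make the channel-wise behaviour concrete, here is a minimal NumPy sketch of the idea (an illustrative stand-in, not the library's implementation): the selected channels are overwritten with a constant fill value while the remaining channels are left untouched.

Python
>>> import numpy as np
>>> image = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
>>> def drop_channels(img, channels_to_drop, fill_value=0):
...     out = img.copy()
...     out[..., list(channels_to_drop)] = fill_value  # overwrite the selected channels
...     return out
>>> dropped = drop_channels(image, channels_to_drop=(1,), fill_value=0)
>>> assert np.all(dropped[..., 1] == 0)                    # dropped channel is constant
>>> assert np.array_equal(dropped[..., 0], image[..., 0])  # untouched channels are unchanged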

Parameters:

  • channel_drop_range (tuple[int, int]): Range from which to choose the number of channels to drop. The actual number will be randomly selected from the inclusive range [min, max]. Default: (1, 1).
  • fill_value (float): Pixel value used to fill the dropped channels. Default: 0.
  • p (float): Probability of applying the transform. Must be in the range [0, 1]. Default: 0.5.

Exceptions:

  • NotImplementedError: If the input image has only one channel.
  • ValueError: If the upper bound of channel_drop_range is greater than or equal to the number of channels in the input image.
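
Both errors are raised when the transform is applied to an image rather than at construction time, since the checks live in get_params_dependent_on_data (see the source below). A short hedged sketch of the two failure modes:

Python
>>> import numpy as np
>>> import albumentations as A
>>> rgb = np.zeros((8, 8, 3), dtype=np.uint8)
>>> gray = np.zeros((8, 8), dtype=np.uint8)
>>> try:
...     A.ChannelDropout(channel_drop_range=(3, 3), p=1.0)(image=rgb)
... except ValueError:
...     pass  # upper bound of channel_drop_range must be smaller than the channel count
>>> try:
...     A.ChannelDropout(p=1.0)(image=gray)
... except NotImplementedError:
...     pass  # single-channel (grayscale) images are not supported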

Targets

image

Image types: uint8, float32

Examples:

Python
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> transform = A.ChannelDropout(channel_drop_range=(1, 2), fill_value=128, p=1.0)
>>> result = transform(image=image)
>>> dropped_image = result['image']
>>> assert dropped_image.shape == image.shape
>>> assert np.any(dropped_image != image)  # Some channels should be different

Note

  • The number of channels to drop is randomly chosen within the specified range.
  • Channels are randomly selected for dropping.
  • This transform is not applicable to single-channel (grayscale) images.
  • The transform will raise an error if it's not possible to drop the specified number of channels (e.g., trying to drop 3 channels from an RGB image).
  • This augmentation can be particularly useful for training models to be robust against missing or corrupted channel data in multi-spectral or hyperspectral imagery (see the sketch below).
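
As an illustration of that last point, the following hedged sketch applies ChannelDropout to a synthetic six-band float32 image inside a small pipeline; the band count and parameter values are arbitrary choices for the example, not recommendations.

Python
>>> import numpy as np
>>> import albumentations as A
>>> bands = np.random.rand(64, 64, 6).astype(np.float32)  # synthetic 6-band image in [0, 1]
>>> pipeline = A.Compose([A.ChannelDropout(channel_drop_range=(1, 2), fill_value=0.0, p=0.5)])
>>> augmented = pipeline(image=bands)["image"]
>>> assert augmented.shape == bands.shape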


Source code in albumentations/augmentations/dropout/channel_dropout.py
Python
class ChannelDropout(ImageOnlyTransform):
    """Randomly drop channels in the input image.

    This transform randomly selects a number of channels to drop from the input image
    and replaces them with a specified fill value. This can improve model robustness
    to missing or corrupted channels.

    The technique is conceptually similar to:
    - Dropout layers in neural networks, which randomly set input units to 0 during training.
    - CoarseDropout augmentation, which drops out regions in the spatial dimensions of the image.

    However, ChannelDropout operates on the channel dimension, effectively "dropping out"
    entire color channels or feature maps.

    Args:
        channel_drop_range (tuple[int, int]): Range from which to choose the number
            of channels to drop. The actual number will be randomly selected from
            the inclusive range [min, max]. Default: (1, 1).
        fill_value (float): Pixel value used to fill the dropped channels.
            Default: 0.
        p (float): Probability of applying the transform. Must be in the range
            [0, 1]. Default: 0.5.

    Raises:
        NotImplementedError: If the input image has only one channel.
        ValueError: If the upper bound of channel_drop_range is greater than or
            equal to the number of channels in the input image.

    Targets:
        image

    Image types:
        uint8, float32

    Example:
        >>> import numpy as np
        >>> import albumentations as A
        >>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
        >>> transform = A.ChannelDropout(channel_drop_range=(1, 2), fill_value=128, p=1.0)
        >>> result = transform(image=image)
        >>> dropped_image = result['image']
        >>> assert dropped_image.shape == image.shape
        >>> assert np.any(dropped_image != image)  # Some channels should be different

    Note:
        - The number of channels to drop is randomly chosen within the specified range.
        - Channels are randomly selected for dropping.
        - This transform is not applicable to single-channel (grayscale) images.
        - The transform will raise an error if it's not possible to drop the specified
          number of channels (e.g., trying to drop 3 channels from an RGB image).
        - This augmentation can be particularly useful for training models to be robust
          against missing or corrupted channel data in multi-spectral or hyperspectral imagery.

    """

    class InitSchema(BaseTransformInitSchema):
        channel_drop_range: Annotated[tuple[int, int], AfterValidator(check_1plus)]
        fill_value: Annotated[float, Field(description="Pixel value for the dropped channel.")]

    def __init__(
        self,
        channel_drop_range: tuple[int, int] = (1, 1),
        fill_value: float = 0,
        always_apply: bool | None = None,
        p: float = 0.5,
    ):
        super().__init__(p=p, always_apply=always_apply)

        self.channel_drop_range = channel_drop_range
        self.fill_value = fill_value

    def apply(self, img: np.ndarray, channels_to_drop: tuple[int, ...], **params: Any) -> np.ndarray:
        return channel_dropout(img, channels_to_drop, self.fill_value)

    def get_params_dependent_on_data(self, params: Mapping[str, Any], data: Mapping[str, Any]) -> dict[str, Any]:
        image = data["image"] if "image" in data else data["images"][0]
        num_channels = get_num_channels(image)

        if num_channels == 1:
            msg = "Images has one channel. ChannelDropout is not defined."
            raise NotImplementedError(msg)

        if self.channel_drop_range[1] >= num_channels:
            msg = "Can not drop all channels in ChannelDropout."
            raise ValueError(msg)

        num_drop_channels = random.randint(*self.channel_drop_range)

        channels_to_drop = random.sample(range(num_channels), k=num_drop_channels)

        return {"channels_to_drop": channels_to_drop}

    def get_transform_init_args_names(self) -> tuple[str, ...]:
        return "channel_drop_range", "fill_value"

class InitSchema


Source code in albumentations/augmentations/dropout/channel_dropout.py
Python
class InitSchema(BaseTransformInitSchema):
    channel_drop_range: Annotated[tuple[int, int], AfterValidator(check_1plus)]
    fill_value: Annotated[float, Field(description="Pixel value for the dropped channel.")]

apply (self, img, channels_to_drop, **params)

Apply transform on image.

Source code in albumentations/augmentations/dropout/channel_dropout.py
Python
def apply(self, img: np.ndarray, channels_to_drop: tuple[int, ...], **params: Any) -> np.ndarray:
    return channel_dropout(img, channels_to_drop, self.fill_value)

get_params_dependent_on_data (self, params, data)

Returns parameters dependent on input.

Source code in albumentations/augmentations/dropout/channel_dropout.py
Python
def get_params_dependent_on_data(self, params: Mapping[str, Any], data: Mapping[str, Any]) -> dict[str, Any]:
    image = data["image"] if "image" in data else data["images"][0]
    num_channels = get_num_channels(image)

    if num_channels == 1:
        msg = "Images has one channel. ChannelDropout is not defined."
        raise NotImplementedError(msg)

    if self.channel_drop_range[1] >= num_channels:
        msg = "Can not drop all channels in ChannelDropout."
        raise ValueError(msg)

    num_drop_channels = random.randint(*self.channel_drop_range)

    channels_to_drop = random.sample(range(num_channels), k=num_drop_channels)

    return {"channels_to_drop": channels_to_drop}

get_transform_init_args_names (self)

Returns the names of the arguments used in the transform's __init__ method.

Source code in albumentations/augmentations/dropout/channel_dropout.py
Python
def get_transform_init_args_names(self) -> tuple[str, ...]:
    return "channel_drop_range", "fill_value"