albumentations.augmentations.blur.transforms
Transform classes for applying various blur operations to images. This module contains transform classes that implement different blur effects including standard blur, motion blur, median blur, Gaussian blur, glass blur, advanced blur, defocus, and zoom blur. These transforms are designed to work within the albumentations pipeline and support parameters for controlling the intensity and properties of the blur effects.
Members
- class AdvancedBlur
- class Blur
- class BlurInitSchema
- class Defocus
- class GaussianBlur
- class GlassBlur
- class MedianBlur
- class MotionBlur
- class ZoomBlur
AdvancedBlur class
AdvancedBlur(
blur_limit: tuple[int, int] | int = (3, 7),
sigma_x_limit: tuple[float, float] | float = (0.2, 1.0),
sigma_y_limit: tuple[float, float] | float = (0.2, 1.0),
rotate_limit: tuple[int, int] | int = (-90, 90),
beta_limit: tuple[float, float] | float = (0.5, 8.0),
noise_limit: tuple[float, float] | float = (0.9, 1.1),
p: float = 0.5
)
Applies a Generalized Gaussian blur to the input image with randomized parameters for advanced data augmentation. This transform creates a custom blur kernel based on the Generalized Gaussian distribution, which allows for a wide range of blur effects beyond standard Gaussian blur. It then applies this kernel to the input image through convolution. The transform also incorporates noise into the kernel, resulting in a unique combination of blurring and noise injection.
Key features of this augmentation:
1. Generalized Gaussian Kernel: Uses a generalized normal distribution to create kernels that can range from box-like blurs to very peaked blurs, controlled by the beta parameter.
2. Anisotropic Blurring: Allows for different blur strengths in the horizontal and vertical directions (controlled by sigma_x and sigma_y), as well as rotation of the kernel.
3. Kernel Noise: Adds multiplicative noise to the kernel before applying it to the image, creating more diverse and realistic blur effects.
Implementation Details: The kernel is generated using a 2D Generalized Gaussian function. The process involves:
1. Creating a 2D grid based on the kernel size
2. Applying rotation to this grid
3. Calculating the kernel values using the Generalized Gaussian formula
4. Adding multiplicative noise to the kernel
5. Normalizing the kernel
The resulting kernel is then applied to the image using convolution.
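The following is a minimal NumPy sketch of the five-step procedure above. It is an illustration only, not the library's internal code; the function name, defaults, and formula details are assumptions made for readability.

import numpy as np

def generalized_gaussian_kernel(ksize=5, sigma_x=0.5, sigma_y=0.5,
                                angle_deg=30.0, beta=2.0, noise=(0.9, 1.1)):
    # 1. Build a 2D grid centered on the kernel
    ax = np.arange(ksize) - (ksize - 1) / 2.0
    grid = np.stack(np.meshgrid(ax, ax), axis=-1)          # shape (ksize, ksize, 2)

    # 2. Rotate the grid through the inverse covariance matrix
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    inv_cov = rot @ np.diag([1.0 / sigma_x**2, 1.0 / sigma_y**2]) @ rot.T

    # 3. Generalized Gaussian values: exp(-0.5 * (x^T S^-1 x) ** beta)
    quad = np.einsum("...i,ij,...j->...", grid, inv_cov, grid)
    kernel = np.exp(-0.5 * quad**beta)

    # 4. Multiplicative noise on the kernel
    kernel *= np.random.uniform(noise[0], noise[1], size=kernel.shape)

    # 5. Normalize so the kernel sums to 1
    return kernel / kernel.sum()

The resulting kernel could then be applied with a standard convolution such as cv2.filter2D(image, -1, kernel).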
Parameters
Name | Type | Default | Description |
---|---|---|---|
blur_limit | One of: tuple[int, int], int | (3, 7) | Controls the size of the blur kernel. If a single int is provided, the kernel size will be randomly chosen between 3 and that value. Must be odd and ≥ 3. Larger values create stronger blur effects. Default: (3, 7) |
sigma_x_limit | One of: tuple[float, float], float | (0.2, 1.0) | Controls the spread of the blur in the x direction. Higher values increase blur strength. If a single float is provided, the range will be (0, limit). Default: (0.2, 1.0) |
sigma_y_limit | One of: tuple[float, float], float | (0.2, 1.0) | Controls the spread of the blur in the y direction. Higher values increase blur strength. If a single float is provided, the range will be (0, limit). Default: (0.2, 1.0) |
rotate_limit | One of: tuple[int, int], int | (-90, 90) | Range of angles (in degrees) for rotating the kernel. This rotation allows for diagonal blur directions. If a single int is provided, an angle is picked from (-rotate_limit, rotate_limit). Default: (-90, 90) |
beta_limit | One of: tuple[float, float], float | (0.5, 8.0) | Shape parameter of the Generalized Gaussian distribution. beta = 1 gives a standard Gaussian distribution; beta < 1 creates heavier tails, resulting in a more uniform, box-like blur; beta > 1 creates lighter tails, resulting in a more peaked, focused blur. Default: (0.5, 8.0) |
noise_limit | One of: tuple[float, float], float | (0.9, 1.1) | Controls the strength of multiplicative noise applied to the kernel. Values around 1.0 keep the original kernel mostly intact, while values further from 1.0 introduce more variation. Default: (0.9, 1.1) |
p | float | 0.5 | Probability of applying the transform. Default: 0.5 |
Notes
- This transform is particularly useful for simulating complex, real-world blur effects that go beyond simple Gaussian blur.
- The combination of blur and noise can help in creating more robust models by simulating a wider range of image degradations.
- Extreme values, especially for beta and noise, may result in unrealistic effects and should be used cautiously.
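Example
A usage sketch in the same style as the other examples in this module; the parameter values are illustrative picks within the documented ranges.
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> transform = A.AdvancedBlur(blur_limit=(3, 7), beta_limit=(0.5, 8.0), noise_limit=(0.9, 1.1), p=1.0)
>>> result = transform(image=image)
>>> blurred_image = result["image"]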
Blur class
Blur(
blur_limit: tuple[int, int] | int = (3, 7),
p: float = 0.5
)
Apply uniform box blur to the input image using a randomly sized square kernel. This transform uses OpenCV's cv2.blur function, which performs a simple box filter blur. The size of the blur kernel is randomly selected for each application, allowing for varying degrees of blur intensity.
Parameters
Name | Type | Default | Description |
---|---|---|---|
blur_limit | One of: tuple[int, int], int | (3, 7) | Controls the range of the blur kernel size. If a single int is provided, the kernel size will be randomly chosen between 3 and that value. If a tuple of two ints is provided, it defines the inclusive range of possible kernel sizes. The kernel size must be odd and greater than or equal to 3. Larger kernel sizes produce stronger blur effects. Default: (3, 7) |
p | float | 0.5 | Probability of applying the transform. Default: 0.5 |
Example
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> transform = A.Blur(blur_limit=(3, 7), p=1.0)
>>> result = transform(image=image)
>>> blurred_image = result["image"]
Notes
- The blur kernel is always square (same width and height).
- Only odd kernel sizes are used to ensure the blur has a clear center pixel.
- Box blur is faster than Gaussian blur but may produce less natural results.
- This blur method averages all pixels under the kernel area, which can reduce noise but also reduce image detail.
BlurInitSchema class
BlurInitSchema(
p: Annotated,
strict: bool = False,
blur_limit: tuple[int, int] | int
)
Parameters
Name | Type | Default | Description |
---|---|---|---|
p | Annotated | - | - |
strict | bool | False | - |
blur_limit | One of: tuple[int, int], int | - | - |
Defocus class
Defocus(
radius: tuple[int, int] | int = (3, 10),
alias_blur: tuple[float, float] | float = (0.1, 0.5),
p: float = 0.5
)
Apply defocus blur to the input image. This transform simulates the effect of an out-of-focus camera by applying a defocus blur to the image. It uses a combination of disc kernels and Gaussian blur to create a realistic defocus effect.
Parameters
Name | Type | Default | Description |
---|---|---|---|
radius | One of: tuple[int, int], int | (3, 10) | Range for the radius of the defocus blur. If a single int is provided, the range will be [1, radius]. Larger values create a stronger blur effect. Default: (3, 10) |
alias_blur | One of: tuple[float, float], float | (0.1, 0.5) | Range for the standard deviation of the Gaussian blur applied after the main defocus blur. This helps to reduce aliasing artifacts. If a single float is provided, the range will be (0, alias_blur). Larger values create a smoother effect. Default: (0.1, 0.5) |
p | float | 0.5 | Probability of applying the transform. Should be in the range [0, 1]. Default: 0.5 |
Example
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> transform = A.Defocus(radius=(4, 8), alias_blur=(0.2, 0.4))
>>> result = transform(image=image)
>>> defocused_image = result['image']
Notes
- The defocus effect is created using a disc kernel, which simulates the shape of a camera's aperture.
- The additional Gaussian blur (alias_blur) helps to soften the edges of the disc kernel, creating a more natural-looking defocus effect.
- Larger radius values will create a stronger, more noticeable defocus effect.
- The alias_blur parameter can be used to fine-tune the appearance of the defocus, with larger values creating a smoother, potentially more realistic effect.
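A minimal OpenCV sketch of the disc-kernel idea described in the notes. The function name and the fixed parameter values are illustrative; the actual transform samples radius and alias_blur from the documented ranges.

import cv2
import numpy as np

def defocus_sketch(image, radius=5, alias_blur=0.3):
    # Disc (aperture-shaped) kernel with the given radius
    size = 2 * radius + 1
    ax = np.arange(size) - radius
    xx, yy = np.meshgrid(ax, ax)
    disc = (xx**2 + yy**2 <= radius**2).astype(np.float32)

    # Soften the disc edges with a small Gaussian (the alias_blur step),
    # then normalize so the kernel sums to 1
    disc = cv2.GaussianBlur(disc, (3, 3), alias_blur)
    disc /= disc.sum()

    # Convolve the image with the disc kernel
    return cv2.filter2D(image, -1, disc)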
References
- Defocus aberration: https://en.wikipedia.org/wiki/Defocus_aberration
GaussianBlur class
GaussianBlur(
blur_limit: tuple[int, int] | int = 0,
sigma_limit: tuple[float, float] | float = (0.5, 3.0),
p: float = 0.5
)
Apply Gaussian blur to the input image using a randomly sized kernel. This transform blurs the input image using a Gaussian filter with a random kernel size and sigma value. Gaussian blur is a widely used image processing technique that reduces image noise and detail, creating a smoothing effect.
Parameters
Name | Type | Default | Description |
---|---|---|---|
blur_limit | One of: tuple[int, int], int | 0 | Controls the range of the Gaussian kernel size. If a single int is provided, the kernel size will be randomly chosen between 0 and that value. If a tuple of two ints is provided, it defines the inclusive range of possible kernel sizes. Must be zero or odd and in range [0, inf). If set to 0 (default), the kernel size is computed from sigma as `int(sigma * 3.5) * 2 + 1` to exactly match PIL's implementation. Default: 0 |
sigma_limit | One of: tuple[float, float], float | (0.5, 3.0) | Range for the Gaussian kernel standard deviation (sigma). Must be greater than or equal to 0. If a single float is provided, sigma will be randomly chosen between 0 and that value. If a tuple of two floats is provided, it defines the inclusive range of possible sigma values. Default: (0.5, 3.0) |
p | float | 0.5 | Probability of applying the transform. Default: 0.5 |
Example
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> # Default behavior: matches PIL's GaussianBlur
>>> transform = A.GaussianBlur(p=1.0, sigma_limit=(0.5, 3.0))
>>> # Or manual kernel size range
>>> transform = A.GaussianBlur(blur_limit=(3, 7), sigma_limit=(0.5, 3.0), p=1.0)
>>> result = transform(image=image)
>>> blurred_image = result["image"]
Notes
- When blur_limit=0 (default), this implementation exactly matches PIL's Gaussian blur behavior, deriving the kernel size from sigma rather than sampling it independently.
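For reference, the kernel size derived from sigma when blur_limit=0, using the formula quoted in the parameter table above:
>>> sigma = 2.0
>>> kernel_size = int(sigma * 3.5) * 2 + 1  # 15 for sigma=2.0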
GlassBlur class
GlassBlur(
sigma: float = 0.7,
max_delta: int = 4,
iterations: int = 2,
mode: Literal['fast', 'exact'] = 'fast',
p: float = 0.5
)
Apply a glass blur effect to the input image. This transform simulates the effect of looking through textured glass by locally shuffling pixels in the image. It creates a distorted, frosted glass-like appearance.
Parameters
Name | Type | Default | Description |
---|---|---|---|
sigma | float | 0.7 | Standard deviation for the Gaussian kernel used in the process. Higher values increase the blur effect. Must be non-negative. Default: 0.7 |
max_delta | int | 4 | Maximum distance in pixels for shuffling. Determines how far pixels can be moved. Larger values create more distortion. Must be a positive integer. Default: 4 |
iterations | int | 2 | Number of times to apply the glass blur effect. More iterations create a stronger effect but increase computation time. Must be a positive integer. Default: 2 |
mode | Literal['fast', 'exact'] | 'fast' | Mode of computation. Options are: "fast" uses a faster but potentially less accurate method; "exact" uses a slower but more precise method. Default: "fast" |
p | float | 0.5 | Probability of applying the transform. Should be in the range [0, 1]. Default: 0.5 |
Example
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> transform = A.GlassBlur(sigma=0.7, max_delta=4, iterations=3, mode="fast", p=1)
>>> result = transform(image=image)
>>> glass_blurred_image = result["image"]
Notes
- This transform is particularly effective for creating a 'looking through glass' effect or simulating the view through a frosted window.
- The 'fast' mode is recommended for most use cases as it provides a good balance between effect quality and computation speed.
- Increasing 'iterations' will strengthen the effect but also increase the processing time linearly.
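A readable sketch of the local pixel-shuffling idea used in the referenced implementation: blur, repeatedly swap each pixel with a random neighbour within max_delta, then blur again. This is an assumption-laden, slow pure-Python illustration, not the library's optimized "fast" mode.

import cv2
import numpy as np

def glass_blur_sketch(image, sigma=0.7, max_delta=4, iterations=2):
    # Initial Gaussian smoothing (ksize (0, 0) lets OpenCV derive it from sigma)
    out = cv2.GaussianBlur(image, (0, 0), sigma)
    h, w = out.shape[:2]

    # Locally shuffle pixels: swap each pixel with a random neighbour
    # at most max_delta pixels away, repeated `iterations` times
    for _ in range(iterations):
        for y in range(h - max_delta, max_delta, -1):
            for x in range(w - max_delta, max_delta, -1):
                dy, dx = np.random.randint(-max_delta, max_delta, size=2)
                yp, xp = y + dy, x + dx
                out[y, x], out[yp, xp] = out[yp, xp].copy(), out[y, x].copy()

    # Final smoothing pass
    return cv2.GaussianBlur(out, (0, 0), sigma)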
References
- This implementation is based on the technique described in: "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness" https://arxiv.org/abs/1903.12261
- Original implementation: https://github.com/hendrycks/robustness/blob/master/ImageNet-C/create_c/make_imagenet_c.py
MedianBlur class
MedianBlur(
blur_limit: tuple[int, int] | int = (3, 7),
p: float = 0.5
)
Apply median blur to the input image. This transform uses a median filter to blur the input image. Median filtering is particularly effective at removing salt-and-pepper noise while preserving edges, making it a popular choice for noise reduction in image processing.
Parameters
Name | Type | Default | Description |
---|---|---|---|
blur_limit | One of: tuple[int, int], int | (3, 7) | Maximum aperture linear size for blurring the input image. Must be odd and in the range [3, inf). If a single int is provided, the kernel size will be randomly chosen between 3 and that value. If a tuple of two ints is provided, it defines the inclusive range of possible kernel sizes. Default: (3, 7) |
p | float | 0.5 | Probability of applying the transform. Default: 0.5 |
Example
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> transform = A.MedianBlur(blur_limit=(3, 7), p=0.5)
>>> result = transform(image=image)
>>> blurred_image = result["image"]
Notes
- The kernel size (aperture linear size) must always be odd and greater than 1.
- Unlike mean blur or Gaussian blur, median blur uses the median of all pixels under the kernel area, making it more robust to outliers.
- This transform is particularly useful for:
  * Removing salt-and-pepper noise
  * Preserving edges while smoothing images
  * Pre-processing images for edge detection algorithms
- For color images, the median is calculated independently for each channel.
- Larger kernel sizes result in stronger blurring effects but may also remove fine details from the image.
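A small standalone OpenCV illustration (not part of the library) of the salt-and-pepper behaviour noted above, comparing a median filter with a plain box blur:

import cv2
import numpy as np

# Grey image with roughly 5% salt-and-pepper noise
image = np.full((100, 100, 3), 128, dtype=np.uint8)
noise_mask = np.random.rand(100, 100) < 0.05
image[noise_mask] = np.random.choice([0, 255], size=(int(noise_mask.sum()), 1)).astype(np.uint8)

median = cv2.medianBlur(image, 5)  # impulse pixels are mostly removed
box = cv2.blur(image, (5, 5))      # impulse pixels are smeared into grey smudges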
MotionBlur class
MotionBlur(
blur_limit: tuple[int, int] | int = (3, 7),
allow_shifted: bool = True,
angle_range: tuple[float, float] = (0, 360),
direction_range: tuple[float, float] = (-1.0, 1.0),
p: float = 0.5
)
Apply motion blur to the input image using a directional kernel. This transform simulates motion blur effects that occur during image capture, such as camera shake or object movement. It creates a directional blur using a line-shaped kernel with controllable angle, direction, and position.
Parameters
Name | Type | Default | Description |
---|---|---|---|
blur_limit | One of: tuple[int, int], int | (3, 7) | Maximum kernel size for blurring. Should be in range [3, inf). If int: kernel size will be randomly chosen from [3, blur_limit]. If tuple: kernel size will be randomly chosen from [min, max]. Larger values create stronger blur effects. Default: (3, 7) |
allow_shifted | bool | True | Allow random kernel position shifts. - If True: Kernel can be randomly offset from center - If False: Kernel will always be centered Default: True |
angle_range | tuple[float, float] | (0, 360) | Range of possible angles in degrees. Controls the rotation of the motion blur line: - 0°: Horizontal motion blur → - 45°: Diagonal motion blur ↗ - 90°: Vertical motion blur ↑ - 135°: Diagonal motion blur ↖ Default: (0, 360) |
direction_range | tuple[float, float] | (-1.0, 1.0) | Range for motion bias. Controls how the blur extends from the center: - -1.0: Blur extends only backward (←) - 0.0: Blur extends equally in both directions (←→) - 1.0: Blur extends only forward (→) For example, with angle=0: - direction=-1.0: ←• - direction=0.0: ←•→ - direction=1.0: •→ Default: (-1.0, 1.0) |
p | float | 0.5 | Probability of applying the transform. Default: 0.5 |
Example
>>> import albumentations as A
>>> # Horizontal camera shake (symmetric)
>>> transform = A.MotionBlur(
... angle_range=(-5, 5), # Near-horizontal motion
... direction_range=(0, 0), # Symmetric blur
... p=1.0
... )
>>>
>>> # Object moving right
>>> transform = A.MotionBlur(
... angle_range=(0, 0), # Horizontal motion
... direction_range=(0.8, 1.0), # Strong forward bias
... p=1.0
... )
Notes
- angle controls the orientation of the motion line.
- direction controls the distribution of the blur along that line.
- Together they can simulate various motion effects:
  * Camera shake: small angle range + direction near 0
  * Object motion: specific angle + direction = 1.0
  * Complex motion: random angle + random direction
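A minimal sketch of the symmetric, centered case (direction = 0, allow_shifted = False): a line-shaped kernel is drawn at the requested angle, normalized, and convolved with the image. The function name and defaults are illustrative; the library additionally biases the line along its direction and can shift it off-center.

import cv2
import numpy as np

def motion_blur_sketch(image, ksize=7, angle_deg=0.0):
    # Draw a line through the kernel center at the requested angle
    kernel = np.zeros((ksize, ksize), dtype=np.float32)
    c = (ksize - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    dx, dy = np.cos(theta), -np.sin(theta)  # image y axis points down
    p1 = (int(round(c - dx * c)), int(round(c - dy * c)))
    p2 = (int(round(c + dx * c)), int(round(c + dy * c)))
    cv2.line(kernel, p1, p2, 1.0, 1)

    # Normalize so brightness is preserved, then convolve
    kernel /= kernel.sum()
    return cv2.filter2D(image, -1, kernel)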
References
- Motion blur fundamentals: https://en.wikipedia.org/wiki/Motion_blur
- Directional blur kernels: https://www.sciencedirect.com/topics/computer-science/directional-blur
- OpenCV filter2D (used for convolution): https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#ga27c049795ce870216ddfb366086b5a04
- Research on motion blur simulation: "Understanding and Evaluating Blind Deconvolution Algorithms" (CVPR 2009) https://doi.org/10.1109/CVPR.2009.5206815
- Motion blur in photography: "The Manual of Photography", Chapter 7: Motion in Photography ISBN: 978-0240520377
- Kornia's implementation (similar approach): https://kornia.readthedocs.io/en/latest/augmentation.html#kornia.augmentation.RandomMotionBlur
ZoomBlur class
ZoomBlur(
max_factor: tuple[float, float] | float = (1, 1.31),
step_factor: tuple[float, float] | float = (0.01, 0.03),
p: float = 0.5
)
Apply zoom blur to the input image. This transform simulates the radial blur produced by a camera zooming during exposure by blending progressively zoomed versions of the image.
Parameters
Name | Type | Default | Description |
---|---|---|---|
max_factor | One of: tuple[float, float], float | (1, 1.31) | Range for the maximum zoom factor used for blurring. If max_factor is a single float, the range will be (1, limit). All max_factor values should be larger than 1. Default: (1, 1.31) |
step_factor | One of: tuple[float, float], float | (0.01, 0.03) | If a single float, it is used as the step parameter for np.arange. If a tuple of floats, step_factor is sampled from the range `[step_factor[0], step_factor[1])`. All step_factor values should be positive. Default: (0.01, 0.03) |
p | float | 0.5 | Probability of applying the transform. Default: 0.5 |
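Example
A usage sketch in the same style as the other examples in this module; the parameter values are illustrative picks within the documented ranges.
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> transform = A.ZoomBlur(max_factor=(1.05, 1.2), step_factor=(0.01, 0.02), p=1.0)
>>> result = transform(image=image)
>>> zoom_blurred_image = result["image"]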