albumentations.augmentations.pixel.weather
Add depth-dependent fog via the atmospheric scattering equation and a synthetic depth map. Use for outdoor and driving robustness to haze.
Members
- AtmosphericFog
- RandomFog
- RandomGravel
- RandomRain
- RandomShadow
- RandomSnow
- RandomSunFlare
- Spatter
class AtmosphericFog
AtmosphericFog(
density_range: tuple[float, float] = (1.0, 3.0),
fog_color: tuple[int, ...] = (200, 200, 200),
depth_mode: 'linear' | 'diagonal' | 'radial' = 'linear',
p: float = 0.5
)
Add depth-dependent fog via the atmospheric scattering equation and a synthetic depth map. Use for outdoor and driving robustness to haze. Unlike RandomFog (which overlays circular fog patches), this transform uses a physically based scattering model: farther pixels (by synthetic depth) get more fog, producing realistic distance-dependent haze. Depth is derived from image position (linear, diagonal, or radial), not from a real depth map. Formula: `result = image * exp(-density * depth) + fog_color * (1 - exp(-density * depth))`
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| density_range | tuple[float, float] | (1.0, 3.0) | Range for fog density. Higher values give thicker fog. Default: (1.0, 3.0). |
| fog_color | tuple[int, ...] | (200, 200, 200) | Fog color per channel, e.g. (R, G, B) for 3 channels. Length must match image channels. Default: (200, 200, 200). |
| depth_mode | 'linear' \| 'diagonal' \| 'radial' | 'linear' | How synthetic depth is generated: "linear" = top of image far, bottom near (sky vs ground); "diagonal" = top-left far; "radial" = center near, edges far. Default: "linear". |
| p | float | 0.5 | Probability of applying the transform. Default: 0.5. |
Examples
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>> transform = A.AtmosphericFog(density_range=(1.0, 2.5), depth_mode="linear", p=1.0)
>>> result = transform(image=image)["image"]
>>> # Radial fog (center clear, edges foggy)
>>> transform_radial = A.AtmosphericFog(density_range=(1.5, 3.0), depth_mode="radial", p=1.0)
>>> result_radial = transform_radial(image=image)["image"]
Notes
- Depth is synthetic (from pixel position), not from scene geometry.
- For typical outdoor frames, "linear" matches sky far / ground near.
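The scattering equation above is simple to reproduce. The sketch below (plain NumPy; `apply_atmospheric_fog` and the hand-built linear depth map are illustrative helpers, not library API) shows how transmission falls off with synthetic depth:

```python
import numpy as np

def apply_atmospheric_fog(image, density, fog_color, depth):
    """Blend fog via the scattering equation: farther pixels keep less signal.

    depth is an (H, W) map in [0, 1]; transmission = exp(-density * depth).
    """
    transmission = np.exp(-density * depth)[..., None]        # (H, W, 1)
    fog = np.asarray(fog_color, dtype=np.float32)             # (C,)
    out = image.astype(np.float32) * transmission + fog * (1.0 - transmission)
    return np.clip(out, 0, 255).astype(np.uint8)

# "linear" synthetic depth: top row = far (1.0), bottom row = near (0.0).
h, w = 100, 100
depth = np.repeat(np.linspace(1.0, 0.0, h)[:, None], w, axis=1)
image = np.zeros((h, w, 3), dtype=np.uint8)
foggy = apply_atmospheric_fog(image, density=2.0, fog_color=(200, 200, 200), depth=depth)
```

On the black test image the top row ends up close to the fog color while the bottom row is untouched, matching the linear mode's sky-far / ground-near convention.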
class RandomFog
RandomFog(
alpha_coef: float = 0.08,
fog_coef_range: tuple[float, float] = (0.3, 1),
p: float = 0.5
)
Simulate fog by overlaying semi-transparent circles and blending with a fog color. Good for driving or outdoor robustness to weather. Fog is built from random circles with controllable intensity; an image-size-dependent Gaussian blur is applied to the result. Patch-based (no depth); for distance-dependent fog use AtmosphericFog.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| alpha_coef | float | 0.08 | Transparency of the fog circles in [0, 1]. Default: 0.08. |
| fog_coef_range | tuple[float, float] | (0.3, 1) | Range for fog intensity coefficient in [0, 1]. Default: (0.3, 1). |
| p | float | 0.5 | Probability of applying the transform. Default: 0.5. |
Examples
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, [100, 100, 3], dtype=np.uint8)
>>> # Default usage
>>> transform = A.RandomFog(p=1.0)
>>> foggy_image = transform(image=image)["image"]
>>> # Custom fog intensity range
>>> transform = A.RandomFog(fog_coef_range=(0.3, 0.8), p=1.0)
>>> foggy_image = transform(image=image)["image"]
>>> # Adjust fog transparency
>>> transform = A.RandomFog(fog_coef_range=(0.2, 0.5), alpha_coef=0.1, p=1.0)
>>> foggy_image = transform(image=image)["image"]
Notes
- Fog is created by overlaying semi-transparent circles at random positions with random radii; alpha is controlled by alpha_coef.
- Higher fog_coef values give denser fog; the effect is typically strongest toward the center and gradually decreases toward the edges.
- A Gaussian blur (dependent on the shorter image dimension) is applied after blending to reduce sharpness.
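A minimal sketch of the patch-based idea (plain NumPy; `overlay_fog_circles`, its circle count, and its radius range are illustrative choices, not the library's implementation, and the follow-up blur is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def overlay_fog_circles(image, fog_coef, alpha_coef, fog_color=220, n_circles=30):
    """Alpha-blend random circles of a flat fog color into the image."""
    h, w = image.shape[:2]
    out = image.astype(np.float32)
    yy, xx = np.mgrid[0:h, 0:w]
    alpha = alpha_coef * fog_coef                    # per-circle transparency
    for _ in range(n_circles):
        cx, cy = rng.integers(0, w), rng.integers(0, h)
        radius = rng.integers(h // 20, h // 5)
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
        out[mask] = (1 - alpha) * out[mask] + alpha * fog_color
    return np.clip(out, 0, 255).astype(np.uint8)

image = np.zeros((100, 100, 3), dtype=np.uint8)
foggy = overlay_fog_circles(image, fog_coef=0.8, alpha_coef=0.08)
```

Because each circle blends with alpha `alpha_coef * fog_coef`, overlapping circles accumulate into denser fog, which is why higher fog_coef values read as thicker haze.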
References
- Fog: https://en.wikipedia.org/wiki/Fog
- Atmospheric perspective: https://en.wikipedia.org/wiki/Aerial_perspective
class RandomGravel
RandomGravel(
gravel_roi: tuple[float, float, float, float] = (0.1, 0.4, 0.9, 0.9),
number_of_patches: int = 2,
p: float = 0.5
)
Add gravel-like particle artifacts to the image. The number and size of particles and the ROI are configurable. Simulates dirt or debris on a lens or surface. This transform simulates the appearance of gravel or small stones scattered across specific regions of an image. It's particularly useful for augmenting datasets of road or terrain images, adding realistic texture variations.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| gravel_roi | tuple[float, float, float, float] | (0.1, 0.4, 0.9, 0.9) | Region of interest where gravel will be added, specified as (x_min, y_min, x_max, y_max) in relative coordinates [0, 1]. Default: (0.1, 0.4, 0.9, 0.9). |
| number_of_patches | int | 2 | Number of gravel patch regions to generate within the ROI. Each patch will contain multiple gravel particles. Default: 2. |
| p | float | 0.5 | Probability of applying the transform. Default: 0.5. |
Examples
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, [100, 100, 3], dtype=np.uint8)
>>> # Default usage
>>> transform = A.RandomGravel(p=1.0)
>>> augmented_image = transform(image=image)["image"]
>>> # Custom ROI and number of patches
>>> transform = A.RandomGravel(
... gravel_roi=(0.2, 0.2, 0.8, 0.8),
... number_of_patches=5,
... p=1.0
... )
>>> augmented_image = transform(image=image)["image"]
>>> # Combining with other transforms
>>> transform = A.Compose([
... A.RandomGravel(p=0.7),
... A.RandomBrightnessContrast(p=0.5),
... ])
>>> augmented_image = transform(image=image)["image"]
Notes
- The gravel effect is created by modifying the saturation channel in the HLS color space.
- Gravel particles are distributed within randomly generated patches inside the specified ROI.
- This transform is particularly useful for:
  * Augmenting datasets for road condition analysis
  * Simulating terrain variations for computer vision tasks
  * Adding realistic texture to synthetic images of outdoor scenes
References
- Road surface textures: https://en.wikipedia.org/wiki/Road_surface
- HLS color space: https://en.wikipedia.org/wiki/HSL_and_HSV
class RandomRain
RandomRain(
slant_range: tuple[float, float] = (-10, 10),
drop_length: int | None = None,
drop_width: int = 1,
drop_color: tuple[int, int, int] = (200, 200, 200),
blur_value: int = 7,
brightness_coefficient: float = 0.7,
rain_type: 'drizzle' | 'heavy' | 'torrential' | 'default' = 'default',
p: float = 0.5
)
Add rain streaks (semi-transparent lines) with optional blur and brightness reduction. Good for outdoor or driving robustness to rainy conditions. Streaks are drawn with configurable slant, length, and width; blur and darkening simulate wet, low-contrast views. Density and style are configurable (e.g. drizzle, heavy, torrential).
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| slant_range | tuple[float, float] | (-10, 10) | Range for the rain slant angle in degrees. Negative values slant to the left, positive to the right. Default: (-10, 10). |
| drop_length | int \| None | None | Length of the rain drops in pixels. If None, the drop length is computed as height // 8 so the rain effect scales with the image size. Default: None. |
| drop_width | int | 1 | Width of the rain drops in pixels. Default: 1. |
| drop_color | tuple[int, int, int] | (200, 200, 200) | Color of the rain drops in RGB format. Default: (200, 200, 200). |
| blur_value | int | 7 | Blur value for simulating rain effect. Rainy views are typically blurry. Default: 7. |
| brightness_coefficient | float | 0.7 | Coefficient to adjust the brightness of the image. Rainy scenes are usually darker. Should be in the range (0, 1]. Default: 0.7. |
| rain_type | 'drizzle' \| 'heavy' \| 'torrential' \| 'default' | 'default' | Type of rain to simulate; controls drop count and style. Default: "default". |
| p | float | 0.5 | Probability of applying the transform. Default: 0.5. |
Examples
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
>>>
>>> # Default usage
>>> transform = A.RandomRain(p=1.0)
>>> rainy_image = transform(image=image)["image"]
>>>
>>> # Custom rain parameters
>>> transform = A.RandomRain(
... slant_range=(-15, 15),
... drop_length=30,
... drop_width=2,
... drop_color=(180, 180, 180),
... blur_value=5,
... brightness_coefficient=0.8,
... p=1.0
... )
>>> rainy_image = transform(image=image)["image"]
>>>
>>> # Heavy rain
>>> transform = A.RandomRain(rain_type="heavy", p=1.0)
>>> heavy_rain_image = transform(image=image)["image"]
Notes
- Rain is drawn as semi-transparent lines; slant simulates wind.
- rain_type (drizzle, heavy, torrential, default) controls drop count and style.
- Blur and brightness reduction mimic wet, darker scenes.
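The darken-then-streak mechanics can be sketched in plain NumPy. This is a simplification under stated assumptions: vertical streaks stand in for slanted lines, a fixed 50/50 blend stands in for the drop transparency, and `add_rain_streaks` with its `n_drops` parameter is a hypothetical helper, not the library routine:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_rain_streaks(image, n_drops=100, drop_length=None,
                     drop_color=(200, 200, 200), brightness_coefficient=0.7):
    """Darken the frame, then paint short vertical streaks at random positions."""
    h, w = image.shape[:2]
    if drop_length is None:
        drop_length = h // 8                  # same auto-scaling rule as the docs
    out = image.astype(np.float32) * brightness_coefficient
    color = np.asarray(drop_color, dtype=np.float32)
    for _ in range(n_drops):
        x = rng.integers(0, w)
        y = rng.integers(0, h - drop_length)
        # 50/50 blend approximates a semi-transparent streak
        out[y:y + drop_length, x] = 0.5 * out[y:y + drop_length, x] + 0.5 * color
    return np.clip(out, 0, 255).round().astype(np.uint8)

image = np.full((100, 100, 3), 100, dtype=np.uint8)
rainy = add_rain_streaks(image)
```

Note the ordering: the whole frame is darkened first, then streaks are painted, so drops stay brighter than the rainy background.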
References
- Rain visualization techniques: https://developer.nvidia.com/gpugems/gpugems3/part-iv-image-effects/chapter-27-real-time-rain-rendering
- Weather effects in computer vision: https://www.sciencedirect.com/science/article/pii/S1077314220300692
class RandomShadow
RandomShadow(
shadow_roi: tuple[float, float, float, float] = (0, 0.5, 1, 1),
num_shadows_limit: tuple[int, int] = (1, 2),
shadow_dimension: int = 5,
shadow_intensity_range: tuple[float, float] = (0.5, 0.5),
p: float = 0.5
)
Simulate cast shadows by darkening random polygonal regions. shadow_roi, num_shadows_limit, and shadow_dimension control placement and shape. Improves lighting robustness. This transform adds realistic shadow effects to images, which can be useful for augmenting datasets for outdoor scene analysis, autonomous driving, or any computer vision task where shadows may be present.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| shadow_roi | tuple[float, float, float, float] | (0, 0.5, 1, 1) | Region of the image where shadows will appear (x_min, y_min, x_max, y_max). All values should be in range [0, 1]. Default: (0, 0.5, 1, 1). |
| num_shadows_limit | tuple[int, int] | (1, 2) | Lower and upper limits for the possible number of shadows. Default: (1, 2). |
| shadow_dimension | int | 5 | Number of edges in the shadow polygons. Default: 5. |
| shadow_intensity_range | tuple[float, float] | (0.5, 0.5) | Range for the shadow intensity. Larger value means darker shadow. Should be two float values between 0 and 1. Default: (0.5, 0.5). |
| p | float | 0.5 | Probability of applying the transform. Default: 0.5. |
Examples
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, [100, 100, 3], dtype=np.uint8)
>>> # Default usage
>>> transform = A.RandomShadow(p=1.0)
>>> shadowed_image = transform(image=image)["image"]
>>> # Custom shadow parameters
>>> transform = A.RandomShadow(
... shadow_roi=(0.2, 0.2, 0.8, 0.8),
... num_shadows_limit=(2, 4),
... shadow_dimension=8,
... shadow_intensity_range=(0.3, 0.7),
... p=1.0
... )
>>> shadowed_image = transform(image=image)["image"]
>>> # Combining with other transforms
>>> transform = A.Compose([
... A.RandomShadow(p=0.5),
... A.RandomBrightnessContrast(p=0.5),
... ])
>>> augmented_image = transform(image=image)["image"]
Notes
- Shadows are created by generating random polygons within the specified ROI and reducing the brightness of the image in these areas.
- The number of shadows, their shapes, and their intensities are randomized for variety.
- This transform is particularly useful for:
  * Augmenting datasets for outdoor scene understanding
  * Improving robustness of object detection models to shadowed conditions
  * Simulating different lighting conditions in synthetic datasets
References
- Shadow detection and removal: https://www.sciencedirect.com/science/article/pii/S1047320315002035
- Shadows in computer vision: https://en.wikipedia.org/wiki/Shadow_detection
class RandomSnow
RandomSnow(
brightness_coeff: float = 2.5,
snow_point_range: tuple[float, float] = (0.1, 0.3),
method: 'bleach' | 'texture' = 'bleach',
p: float = 0.5
)
Add snow overlay via bleach (brightness threshold) or texture (noise-based overlay). Good for winter or snowy-scene robustness in outdoor imagery. Two methods: "bleach" brightens pixels above a threshold (faster, simpler); "texture" adds a depth-weighted snow layer with sparkle (more realistic, heavier).
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| brightness_coeff | float | 2.5 | Brightness multiplier for snow; must be > 0. Default: 2.5. |
| snow_point_range | tuple[float, float] | (0.1, 0.3) | Range for snow intensity threshold in (0, 1). Default: (0.1, 0.3). |
| method | 'bleach' \| 'texture' | 'bleach' | "bleach" = threshold + brighten; "texture" = noise-based overlay with depth and sparkle. Default: "bleach". |
| p | float | 0.5 | Probability of applying the transform. Default: 0.5. |
Examples
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, [100, 100, 3], dtype=np.uint8)
>>> # Default usage (bleach method)
>>> transform = A.RandomSnow(p=1.0)
>>> snowy_image = transform(image=image)["image"]
>>> # Using texture method with custom parameters
>>> transform = A.RandomSnow(
... snow_point_range=(0.2, 0.4),
... brightness_coeff=2.0,
... method="texture",
... p=1.0
... )
>>> snowy_image = transform(image=image)["image"]
Notes
- "bleach": brightness threshold in HLS; pixels above snow_point are scaled by brightness_coeff. Fast, less realistic.
- "texture": HSV brightness boost, Gaussian noise texture, depth gradient (stronger at top), alpha blending, blue tint, and sparkle. More realistic, heavier.
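A rough sketch of the bleach idea, under a stated simplification: the per-pixel channel mean stands in for the HLS lightness channel, and `bleach_snow` is an illustrative helper, not the library routine:

```python
import numpy as np

def bleach_snow(image, snow_point=0.2, brightness_coeff=2.0):
    """Brighten pixels whose normalized lightness exceeds snow_point.

    Approximates the "bleach" method with a simple per-pixel mean
    instead of converting to HLS and thresholding the L channel.
    """
    img = image.astype(np.float32)
    lightness = img.mean(axis=-1) / 255.0     # crude stand-in for HLS L
    mask = lightness > snow_point
    img[mask] *= brightness_coeff             # "bleach" the bright regions
    return np.clip(img, 0, 255).astype(np.uint8)

# Top half moderately bright (becomes "snow"), bottom half dark (unchanged).
image = np.zeros((100, 100, 3), dtype=np.uint8)
image[:50] = 100
snowy = bleach_snow(image, snow_point=0.2, brightness_coeff=2.0)
```

This shows why bleach is cheap: one threshold and one multiply, with no noise texture or depth weighting as in the "texture" method.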
References
- Bleach method: https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
- Texture method: inspired by computer graphics techniques for snow rendering and atmospheric scattering simulations.
class RandomSunFlare
RandomSunFlare(
flare_roi: tuple[float, float, float, float] = (0, 0, 1, 0.5),
src_radius: int = 400,
src_color: tuple[int, ...] = (255, 255, 255),
angle_range: tuple[float, float] = (0, 1),
num_flare_circles_range: tuple[int, int] = (6, 10),
method: 'overlay' | 'physics_based' = 'overlay',
p: float = 0.5
)
Simulate lens flare: circles of light and rays. src_radius, num_flare_circles_range, and angle_range control the effect. Good for outdoor robustness. This transform creates a sun flare effect by overlaying multiple semi-transparent circles of varying sizes and intensities along a line originating from a "sun" point. It offers two methods: a simple overlay technique and a more complex physics-based approach.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| flare_roi | tuple[float, float, float, float] | (0, 0, 1, 0.5) | Region of interest where the sun flare can appear. Values are in the range [0, 1] and represent (x_min, y_min, x_max, y_max) in relative coordinates. Default: (0, 0, 1, 0.5). |
| src_radius | int | 400 | Radius of the sun circle in pixels. Default: 400. |
| src_color | tuple[int, ...] | (255, 255, 255) | Color of the sun in RGB format. Default: (255, 255, 255). |
| angle_range | tuple[float, float] | (0, 1) | Range of angles (in radians) for the flare direction. Values should be in the range [0, 1], where 0 represents 0 radians and 1 represents 2π radians. Default: (0, 1). |
| num_flare_circles_range | tuple[int, int] | (6, 10) | Range for the number of flare circles to generate. Default: (6, 10). |
| method | 'overlay' \| 'physics_based' | 'overlay' | Method used to generate the sun flare. "overlay" uses simple alpha blending, while "physics_based" simulates more realistic optical phenomena. Default: "overlay". |
| p | float | 0.5 | Probability of applying the transform. Default: 0.5. |
Examples
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, [1000, 1000, 3], dtype=np.uint8)
>>> # Default sun flare (overlay method)
>>> transform = A.RandomSunFlare(p=1.0)
>>> flared_image = transform(image=image)["image"]
>>> # Physics-based sun flare with custom parameters
>>> transform = A.RandomSunFlare(
... flare_roi=(0.1, 0, 0.9, 0.3),
... angle_range=(0.25, 0.75),
... num_flare_circles_range=(5, 15),
... src_radius=200,
... src_color=(255, 200, 100),
... method="physics_based",
... p=1.0
... )
>>> flared_image = transform(image=image)["image"]
References
- Lens flare: https://en.wikipedia.org/wiki/Lens_flare
- Alpha compositing: https://en.wikipedia.org/wiki/Alpha_compositing
- Diffraction: https://en.wikipedia.org/wiki/Diffraction
- Chromatic aberration: https://en.wikipedia.org/wiki/Chromatic_aberration
- Screen blending: https://en.wikipedia.org/wiki/Blend_modes#Screen
class Spatter
Spatter(
mean: tuple[float, float] | float = (0.65, 0.65),
std: tuple[float, float] | float = (0.3, 0.3),
gauss_sigma: tuple[float, float] | float = (2, 2),
cutout_threshold: tuple[float, float] | float = (0.68, 0.68),
intensity: tuple[float, float] | float = (0.6, 0.6),
mode: 'rain' | 'mud' = 'rain',
color: Sequence | None = None,
p: float = 0.5
)
Simulate lens occlusion from rain or mud: splatter patterns and optional blur. mean, std, and cutout_threshold control coverage and drop count. Good for dirty or wet lens robustness.
Parameters
| Name | Type | Default | Description |
|---|---|---|---|
| mean | tuple[float, float] \| float | (0.65, 0.65) | Mean of the normal distribution used to generate the liquid layer. If a single float, the value is sampled from `(0, mean)`; if a tuple, from `(mean[0], mean[1])`. Use `(mean, mean)` for a constant value. Default: (0.65, 0.65). |
| std | tuple[float, float] \| float | (0.3, 0.3) | Standard deviation of the normal distribution used to generate the liquid layer. If a single float, the value is sampled from `(0, std)`; if a tuple, from `(std[0], std[1])`. Use `(std, std)` for a constant value. Default: (0.3, 0.3). |
| gauss_sigma | tuple[float, float] \| float | (2, 2) | Sigma for Gaussian filtering of the liquid layer. If a single float, the value is sampled from `(0, gauss_sigma)`; if a tuple, from `(gauss_sigma[0], gauss_sigma[1])`. Use `(gauss_sigma, gauss_sigma)` for a constant value. Default: (2, 2). |
| cutout_threshold | tuple[float, float] \| float | (0.68, 0.68) | Threshold for filtering the liquid layer (determines the number of drops). If a single float, the value is sampled from `(0, cutout_threshold)`; if a tuple, from `(cutout_threshold[0], cutout_threshold[1])`. Use `(cutout_threshold, cutout_threshold)` for a constant value. Default: (0.68, 0.68). |
| intensity | tuple[float, float] \| float | (0.6, 0.6) | Intensity of the corruption. If a single float, the value is sampled from `(0, intensity)`; if a tuple, from `(intensity[0], intensity[1])`. Use `(intensity, intensity)` for a constant value. Default: (0.6, 0.6). |
| mode | 'rain' \| 'mud' | 'rain' | Type of corruption. Default: "rain". |
| color | Sequence \| None | None | Color of the corruption elements. If a sequence, it is used as the effect color; if None, defaults are chosen based on mode (rain: (238, 238, 175), mud: (20, 42, 63)). |
| p | float | 0.5 | Probability of applying the transform. Default: 0.5. |
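The liquid-layer pipeline that these parameters describe (normal noise, Gaussian smoothing, then a cutout threshold) can be sketched as follows. `liquid_layer` and the hand-rolled separable blur are illustrative, not the library's implementation:

```python
import numpy as np

rng = np.random.default_rng(7)

def _gaussian_blur_axis(a, sigma, axis):
    """Separable Gaussian blur along one axis via 1-D convolution."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    return np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), axis, a
    )

def liquid_layer(shape, mean=0.65, std=0.3, sigma=2.0, cutout_threshold=0.68):
    """Noise -> blur -> threshold: a binary mask of spatter locations."""
    layer = rng.normal(mean, std, size=shape)
    layer = _gaussian_blur_axis(_gaussian_blur_axis(layer, sigma, 0), sigma, 1)
    return (layer > cutout_threshold).astype(np.float32)

mask = liquid_layer((100, 100))
```

Raising mean or lowering cutout_threshold grows the covered fraction, matching the parameter descriptions above; the resulting mask would then be colored (rain or mud) and blended into the image according to intensity.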
Examples
>>> import numpy as np
>>> import albumentations as A
>>> import cv2
>>>
>>> # Create a sample image
>>> image = np.ones((300, 300, 3), dtype=np.uint8) * 200 # Light gray background
>>> # Add some gradient to make effects more visible
>>> for i in range(300):
... image[i, :, :] = np.clip(image[i, :, :] - i // 3, 0, 255)
>>>
>>> # Example 1: Rain effect with default parameters
>>> rain_transform = A.Spatter(
... mode="rain",
... p=1.0
... )
>>> rain_result = rain_transform(image=image)
>>> rain_image = rain_result['image'] # Image with rain drops
>>>
>>> # Example 2: Heavy rain with custom parameters
>>> heavy_rain = A.Spatter(
... mode="rain",
... mean=(0.7, 0.7), # Higher mean = more coverage
... std=(0.2, 0.2), # Lower std = more uniform effect
... cutout_threshold=(0.65, 0.65), # Lower threshold = more drops
... intensity=(0.8, 0.8), # Higher intensity = more visible effect
... color=(200, 200, 255), # Blueish rain drops
... p=1.0
... )
>>> heavy_rain_result = heavy_rain(image=image)
>>> heavy_rain_image = heavy_rain_result['image']
>>>
>>> # Example 3: Mud effect
>>> mud_transform = A.Spatter(
... mode="mud",
... mean=(0.6, 0.6),
... std=(0.3, 0.3),
... cutout_threshold=(0.62, 0.62),
... intensity=(0.7, 0.7),
... p=1.0
... )
>>> mud_result = mud_transform(image=image)
>>> mud_image = mud_result['image'] # Image with mud splatters
>>>
>>> # Example 4: Custom colored mud
>>> red_mud = A.Spatter(
... mode="mud",
... mean=(0.55, 0.55),
... std=(0.25, 0.25),
... cutout_threshold=(0.7, 0.7),
... intensity=(0.6, 0.6),
... color=(120, 40, 40), # Reddish-brown mud
... p=1.0
... )
>>> red_mud_result = red_mud(image=image)
>>> red_mud_image = red_mud_result['image']
>>>
>>> # Example 5: Random effect (50% chance of applying)
>>> random_spatter = A.Compose([
... A.Spatter(
... mode="rain" if np.random.random() < 0.5 else "mud",
... p=0.5
... )
... ])
>>> random_result = random_spatter(image=image)
>>> result_image = random_result['image']  # May or may not have spatter effect
References
- Benchmarking Neural Network Robustness to Common Corruptions and Perturbations: https://arxiv.org/abs/1903.12261