title: "example weather transforms" notebookName: "example_weather_transforms.ipynb"
Open in Google Colab to run this notebook interactively.
Weather augmentations in Albumentations
This notebook demonstrates weather augmentations that are supported by Albumentations.
Import the required libraries
import albumentations as A
import cv2
from matplotlib import pyplot as plt
Define a function to visualize an image
def visualize(image):
    """Display a single image without axes."""
    plt.figure(figsize=(20, 10))
    plt.axis("off")
    plt.imshow(image)
Load the image from the disk
image = cv2.imread("images/weather_example.jpg", cv2.IMREAD_COLOR_RGB)
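cv2.IMREAD_COLOR_RGB loads the image directly in RGB order, which is what matplotlib and Albumentations expect; it is only available in newer OpenCV releases. On older versions, a fallback (an assumption, not part of the original notebook) is to read in the default BGR order and convert:

# Fallback for OpenCV versions without cv2.IMREAD_COLOR_RGB:
# read in BGR order, then convert to RGB.
image = cv2.imread("images/weather_example.jpg", cv2.IMREAD_COLOR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)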
Visualize the original image
visualize(image)
RandomRain
We fix the random seed for visualization purposes so that the augmentation always produces the same result. In a real computer vision pipeline you shouldn't fix the seed before applying a transform, because the pipeline would then always output the same image; the whole point of augmentation is to apply a different transformation each time (see the unseeded sketch after this example).
transform = A.Compose(
[A.RandomRain(brightness_coefficient=0.8, drop_width=1, blur_value=5, p=1, rain_type="heavy")],
strict=True,
seed=137,
)
transformed = transform(image=image)
visualize(transformed["image"])
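In a real training pipeline you would simply drop the seed, so that every call samples new rain parameters. A minimal sketch, reusing the image loaded above:

# Unseeded pipeline: each call draws different rain parameters.
train_transform = A.Compose(
    [A.RandomRain(brightness_coefficient=0.8, drop_width=1, blur_value=5, p=1, rain_type="heavy")],
)
augmented_1 = train_transform(image=image)["image"]
augmented_2 = train_transform(image=image)["image"]
# augmented_1 and augmented_2 will generally differ because no seed was fixed.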
RandomSnow
transform = A.Compose(
[A.RandomSnow(brightness_coeff=2.5, snow_point_range=(0.3, 0.5), p=1)],
strict=True,
seed=137,
)
transformed = transform(image=image)
visualize(transformed["image"])
RandomSunFlare
transform = A.Compose(
[A.RandomSunFlare(flare_roi=(0, 0, 1, 0.5), p=1)],
strict=True,
seed=137,
)
transformed = transform(image=image)
visualize(transformed["image"])
RandomShadow
transform = A.Compose(
[A.RandomShadow(num_shadows_limit=(4, 4), shadow_dimension=5, shadow_roi=(0, 0.5, 1, 1), p=1)],
strict=True,
seed=137,
)
transformed = transform(image=image)
visualize(transformed["image"])
RandomFog
transform = A.Compose(
[A.RandomFog(fog_coef_range=(1, 1), alpha_coef=0.4, p=1)],
strict=True,
seed=137,
)
transformed = transform(image=image)
visualize(transformed["image"])
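The individual transforms above can also be combined. The sketch below is illustrative rather than part of the original notebook: A.OneOf picks exactly one weather effect per call, reusing the parameters shown earlier.

# Apply one randomly chosen weather effect per call (illustrative sketch).
weather_transform = A.Compose(
    [
        A.OneOf(
            [
                A.RandomRain(brightness_coefficient=0.8, drop_width=1, blur_value=5, rain_type="heavy", p=1),
                A.RandomSnow(brightness_coeff=2.5, snow_point_range=(0.3, 0.5), p=1),
                A.RandomSunFlare(flare_roi=(0, 0, 1, 0.5), p=1),
                A.RandomShadow(num_shadows_limit=(4, 4), shadow_dimension=5, shadow_roi=(0, 0.5, 1, 1), p=1),
                A.RandomFog(fog_coef_range=(1, 1), alpha_coef=0.4, p=1),
            ],
            p=1,
        )
    ]
)
visualize(weather_transform(image=image)["image"])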