Transforms (augmentations.transforms)

class albumentations.augmentations.transforms.ChannelShuffle [view source on GitHub]

Randomly rearrange channels of the input RGB image.

Parameters:

Name Type Description
p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.CLAHE (clip_limit=4.0, tile_grid_size=(8, 8), always_apply=False, p=0.5) [view source on GitHub]

Apply Contrast Limited Adaptive Histogram Equalization to the input image.

Parameters:

Name Type Description
clip_limit float or [float, float]

upper threshold value for contrast limiting. If clip_limit is a single float value, the range will be (1, clip_limit). Default: (1, 4).

tile_grid_size [int, int]

size of grid for histogram equalization. Default: (8, 8).

p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8
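
A minimal usage sketch for CLAHE, assuming the standard "import albumentations as A" alias and a synthetic uint8 image (CLAHE only supports uint8 inputs); parameter values are illustrative:

```python
import albumentations as A
import numpy as np

# Synthetic uint8 RGB image stands in for real data.
image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

# p=1.0 so the transform is always applied in this example.
transform = A.CLAHE(clip_limit=4.0, tile_grid_size=(8, 8), p=1.0)
equalized = transform(image=image)["image"]
```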

class albumentations.augmentations.transforms.ColorJitter (brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2, always_apply=False, p=0.5) [view source on GitHub]

Randomly changes the brightness, contrast, and saturation of an image. Compared to ColorJitter from torchvision, this transform gives slightly different results because Pillow (used in torchvision) and OpenCV (used in Albumentations) convert an image to HSV format using different formulas. Another difference: Pillow uses uint8 overflow, while Albumentations uses value saturation.

Parameters:

Name Type Description
brightness float or tuple of float (min, max)

How much to jitter brightness. brightness_factor is chosen uniformly from [max(0, 1 - brightness), 1 + brightness] or the given [min, max]. Should be non negative numbers.

contrast float or tuple of float (min, max)

How much to jitter contrast. contrast_factor is chosen uniformly from [max(0, 1 - contrast), 1 + contrast] or the given [min, max]. Should be non negative numbers.

saturation float or tuple of float (min, max)

How much to jitter saturation. saturation_factor is chosen uniformly from [max(0, 1 - saturation), 1 + saturation] or the given [min, max]. Should be non negative numbers.

hue float or tuple of float (min, max)

How much to jitter hue. hue_factor is chosen uniformly from [-hue, hue] or the given [min, max]. Should have 0 <= hue <= 0.5 or -0.5 <= min <= max <= 0.5.
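
A short sketch of typical ColorJitter use, assuming the standard albumentations alias; the jitter ranges below are illustrative, not defaults:

```python
import albumentations as A
import numpy as np

image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

# Passing tuples instead of single floats fixes the sampling ranges explicitly.
transform = A.ColorJitter(
    brightness=(0.8, 1.2),  # brightness_factor sampled from [0.8, 1.2]
    contrast=0.2,           # contrast_factor sampled from [0.8, 1.2]
    saturation=0.2,
    hue=0.1,                # hue_factor sampled from [-0.1, 0.1]
    p=1.0,
)
jittered = transform(image=image)["image"]
```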

class albumentations.augmentations.transforms.Downscale (scale_min=0.25, scale_max=0.25, interpolation=None, always_apply=False, p=0.5) [view source on GitHub]

Decreases image quality by downscaling and upscaling back.

Parameters:

Name Type Description
scale_min float

lower bound on the image scale. Should be < 1.

scale_max float

upper bound on the image scale. Should be >= scale_min and < 1.

interpolation

cv2 interpolation method. Could be: a single cv2 interpolation flag (the selected method will be used for both downscale and upscale); dict(downscale=flag, upscale=flag); or Downscale.Interpolation(downscale=flag, upscale=flag). Default: Interpolation(downscale=cv2.INTER_NEAREST, upscale=cv2.INTER_NEAREST)

Targets: image

Image types: uint8, float32
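
A minimal sketch of Downscale with a single interpolation flag (the documented single-flag form), using a synthetic image; the scale values are illustrative:

```python
import cv2
import albumentations as A
import numpy as np

image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

# A single cv2 flag is used for both the downscale and the upscale step.
transform = A.Downscale(
    scale_min=0.25, scale_max=0.5, interpolation=cv2.INTER_NEAREST, p=1.0
)
degraded = transform(image=image)["image"]
```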

class albumentations.augmentations.transforms.Emboss (alpha=(0.2, 0.5), strength=(0.2, 0.7), always_apply=False, p=0.5) [view source on GitHub]

Emboss the input image and overlay the result with the original image.

Parameters:

Name Type Description
alpha [float, float]

range to choose the visibility of the embossed image. At 0, only the original image is visible; at 1.0, only its embossed version is visible. Default: (0.2, 0.5).

strength [float, float]

strength range of the embossing. Default: (0.2, 0.7).

p float

probability of applying the transform. Default: 0.5.

Targets: image

class albumentations.augmentations.transforms.Equalize (mode='cv', by_channels=True, mask=None, mask_params=(), always_apply=False, p=0.5) [view source on GitHub]

Equalize the image histogram.

Parameters:

Name Type Description
mode str

{'cv', 'pil'}. Use OpenCV or Pillow equalization method.

by_channels bool

If True, use equalization by channels separately, else convert image to YCbCr representation and use equalization by Y channel.

mask np.ndarray, callable

If given, only the pixels selected by the mask are included in the analysis. May be a 1-channel or 3-channel array, or a callable. The function signature must include an image argument.

mask_params list of str

Params for mask function.

Targets: image

Image types: uint8

class albumentations.augmentations.transforms.FancyPCA (alpha=0.1, always_apply=False, p=0.5) [view source on GitHub]

Augment RGB image using FancyPCA from Krizhevsky's paper "ImageNet Classification with Deep Convolutional Neural Networks"

Parameters:

Name Type Description
alpha float

how much to perturb/scale the eigenvectors and eigenvalues. The scale is sampled from a Gaussian distribution (mu=0, sigma=alpha).

Targets: image

Image types: 3-channel uint8 images only

Credit: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf https://deshanadesai.github.io/notes/Fancy-PCA-with-Scikit-Image https://pixelatedbrian.github.io/2018-04-29-fancy_pca/

class albumentations.augmentations.transforms.FromFloat (dtype='uint16', max_value=None, always_apply=False, p=1.0) [view source on GitHub]

Take an input array where all values should lie in the range [0, 1.0], multiply them by max_value, and then cast the resulting values to the type specified by dtype. If max_value is None, the transform will try to infer the maximum value for the data type from the dtype argument.

This is the inverse transform for ToFloat.

Parameters:

Name Type Description
max_value float

maximum possible input value. Default: None.

dtype string or numpy data type

data type of the output. See the 'Data types' page from the NumPy docs. Default: 'uint16'.

p float

probability of applying the transform. Default: 1.0.

Targets: image

Image types: float32

'Data types' page from the NumPy docs: https://docs.scipy.org/doc/numpy/user/basics.types.html
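
A minimal sketch of FromFloat, assuming a float32 input whose values lie in [0, 1]:

```python
import albumentations as A
import numpy as np

# Float input with all values in [0.0, 1.0].
float_image = np.random.rand(128, 128, 3).astype(np.float32)

# dtype="uint16" with max_value=None infers max_value from the dtype (65535).
transform = A.FromFloat(dtype="uint16", p=1.0)
uint16_image = transform(image=float_image)["image"]
print(uint16_image.dtype)  # uint16
```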

class albumentations.augmentations.transforms.GaussNoise (var_limit=(10.0, 50.0), mean=0, per_channel=True, always_apply=False, p=0.5) [view source on GitHub]

Apply Gaussian noise to the input image.

Parameters:

Name Type Description
var_limit [float, float] or float

variance range for noise. If var_limit is a single float, the range will be (0, var_limit). Default: (10.0, 50.0).

mean float

mean of the noise. Default: 0

per_channel bool

if set to True, noise will be sampled for each channel independently. Otherwise, the noise will be sampled once for all channels. Default: True

p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8, float32
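
A minimal GaussNoise sketch with the default variance range, applied to a synthetic image:

```python
import albumentations as A
import numpy as np

image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

# var_limit as a tuple defines the range the noise variance is sampled from.
transform = A.GaussNoise(var_limit=(10.0, 50.0), mean=0, per_channel=True, p=1.0)
noisy = transform(image=image)["image"]
```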

class albumentations.augmentations.transforms.HueSaturationValue (hue_shift_limit=20, sat_shift_limit=30, val_shift_limit=20, always_apply=False, p=0.5) [view source on GitHub]

Randomly change hue, saturation and value of the input image.

Parameters:

Name Type Description
hue_shift_limit [int, int] or int

range for changing hue. If hue_shift_limit is a single int, the range will be (-hue_shift_limit, hue_shift_limit). Default: (-20, 20).

sat_shift_limit [int, int] or int

range for changing saturation. If sat_shift_limit is a single int, the range will be (-sat_shift_limit, sat_shift_limit). Default: (-30, 30).

val_shift_limit [int, int] or int

range for changing value. If val_shift_limit is a single int, the range will be (-val_shift_limit, val_shift_limit). Default: (-20, 20).

p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.ImageCompression (quality_lower=99, quality_upper=100, compression_type=<ImageCompressionType.JPEG: 0>, always_apply=False, p=0.5) [view source on GitHub]

Decreases image quality by JPEG or WebP compression of an image.

Parameters:

Name Type Description
quality_lower float

lower bound on the image quality. Should be in [0, 100] range for jpeg and [1, 100] for webp.

quality_upper float

upper bound on the image quality. Should be in [0, 100] range for jpeg and [1, 100] for webp.

compression_type ImageCompressionType

should be ImageCompressionType.JPEG or ImageCompressionType.WEBP. Default: ImageCompressionType.JPEG

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.ImageCompression.ImageCompressionType

An enumeration.
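
A minimal ImageCompression sketch with aggressive (illustrative) quality bounds so the artifacts are clearly visible; the default compression type (JPEG) is used:

```python
import albumentations as A
import numpy as np

image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

# Low quality bounds make the compression artifacts clearly visible.
transform = A.ImageCompression(quality_lower=30, quality_upper=60, p=1.0)
compressed = transform(image=image)["image"]
```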

class albumentations.augmentations.transforms.InvertImg [view source on GitHub]

Invert the input image by subtracting pixel values from 255.

Parameters:

Name Type Description
p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.ISONoise (color_shift=(0.01, 0.05), intensity=(0.1, 0.5), always_apply=False, p=0.5) [view source on GitHub]

Apply camera sensor noise.

Parameters:

Name Type Description
color_shift [float, float]

variance range for color hue change. Measured as a fraction of 360 degree Hue angle in HLS colorspace.

intensity [float, float]

Multiplicative factor that controls the strength of color and luminance noise.

p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8

class albumentations.augmentations.transforms.JpegCompression (quality_lower=99, quality_upper=100, always_apply=False, p=0.5) [view source on GitHub]

Decreases image quality by Jpeg compression of an image.

Parameters:

Name Type Description
quality_lower float

lower bound on the jpeg quality. Should be in [0, 100] range

quality_upper float

upper bound on the jpeg quality. Should be in [0, 100] range

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.Lambda (image=None, mask=None, keypoint=None, bbox=None, name=None, always_apply=False, p=1.0) [view source on GitHub]

A flexible transformation class for using user-defined transformation functions per target. Function signatures must include **kwargs to accept optional arguments like interpolation method, image size, etc.

Parameters:

Name Type Description
image callable

Image transformation function.

mask callable

Mask transformation function.

keypoint callable

Keypoint transformation function.

bbox callable

BBox transformation function.

always_apply bool

Indicates whether this transformation should be always applied.

p float

probability of applying the transform. Default: 1.0.

Targets: image, mask, bboxes, keypoints

Image types: Any
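
A sketch of Lambda with hypothetical user-defined functions (flip_channels and pass_through are example names, not library functions); note that each callable must accept **kwargs:

```python
import albumentations as A
import numpy as np

def flip_channels(image, **kwargs):
    # Hypothetical example function: reverse the channel order (RGB -> BGR).
    return image[:, :, ::-1]

def pass_through(mask, **kwargs):
    # Masks are returned unchanged in this sketch.
    return mask

transform = A.Lambda(image=flip_channels, mask=pass_through, name="flip_channels", p=1.0)

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=np.uint8)
out = transform(image=image, mask=mask)
flipped, unchanged_mask = out["image"], out["mask"]
```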

class albumentations.augmentations.transforms.MultiplicativeNoise (multiplier=(0.9, 1.1), per_channel=False, elementwise=False, always_apply=False, p=0.5) [view source on GitHub]

Multiply the image by a random number or array of numbers.

Parameters:

Name Type Description
multiplier float or tuple of floats

If a single float, the image will be multiplied by this number. If a tuple of floats, the multiplier will be sampled from the range [multiplier[0], multiplier[1]). Default: (0.9, 1.1).

per_channel bool

If False, the same value will be used for all channels. If True, a value is sampled for each channel. Default: False.

elementwise bool

If False, multiply all pixels in the image by a single random value sampled once. If True, multiply image pixels by values that are sampled pixelwise. Default: False.

Targets: image

Image types: Any

class albumentations.augmentations.transforms.Normalize (mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225), max_pixel_value=255.0, always_apply=False, p=1.0) [view source on GitHub]

Normalization is applied by the formula: img = (img - mean * max_pixel_value) / (std * max_pixel_value)

Parameters:

Name Type Description
mean float, list of float

mean values

std float, list of float

std values

max_pixel_value float

maximum possible pixel value

Targets: image

Image types: uint8, float32
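
A minimal Normalize sketch using the default ImageNet statistics, with the documented formula applied by hand for comparison (values should match up to floating-point precision):

```python
import albumentations as A
import numpy as np

image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

# Defaults are the common ImageNet mean/std; the output is float32.
normalize = A.Normalize(
    mean=(0.485, 0.456, 0.406),
    std=(0.229, 0.224, 0.225),
    max_pixel_value=255.0,
    p=1.0,
)
normalized = normalize(image=image)["image"]

# Same formula, applied manually: img = (img - mean * max_pixel_value) / (std * max_pixel_value)
mean = np.array((0.485, 0.456, 0.406))
std = np.array((0.229, 0.224, 0.225))
manual = (image - mean * 255.0) / (std * 255.0)
```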

class albumentations.augmentations.transforms.PixelDropout (dropout_prob=0.01, per_channel=False, drop_value=0, mask_drop_value=None, always_apply=False, p=0.5) [view source on GitHub]

Set pixels to 0 with some probability.

Parameters:

Name Type Description
dropout_prob float

pixel drop probability. Default: 0.01

per_channel bool

if set to True, the drop mask will be sampled for each channel, otherwise the same mask will be sampled for all channels. Default: False

drop_value number or sequence of numbers or None

Value that will be set in dropped pixels. If set to None, values will be sampled randomly from the default range for the image dtype: uint8 - [0, 255], uint16 - [0, 65535], uint32 - [0, 4294967295], float/double - [0, 1]. Default: 0

mask_drop_value number or sequence of numbers or None

Value that will be set in dropped pixels in masks. If set to None, masks will be unchanged. Default: None

p float

probability of applying the transform. Default: 0.5.

Targets: image, mask

Image types: any
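
A minimal PixelDropout sketch that drops pixels in both the image and the mask; the dropout probability below is illustrative:

```python
import albumentations as A
import numpy as np

image = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
mask = np.ones((128, 128), dtype=np.uint8)

# mask_drop_value=0 also zeroes the dropped positions in the mask.
transform = A.PixelDropout(dropout_prob=0.05, drop_value=0, mask_drop_value=0, p=1.0)
out = transform(image=image, mask=mask)
dropped_image, dropped_mask = out["image"], out["mask"]
```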

class albumentations.augmentations.transforms.Posterize (num_bits=4, always_apply=False, p=0.5) [view source on GitHub]

Reduce the number of bits for each color channel.

Parameters:

Name Type Description
num_bits [int, int] or int, or list of ints [r, g, b], or list of ints [[r1, r1], [g1, g2], [b1, b2]]

number of high bits. If num_bits is a single value, the range will be [num_bits, num_bits]. Must be in range [0, 8]. Default: 4.

p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8

class albumentations.augmentations.transforms.RandomBrightness (limit=0.2, always_apply=False, p=0.5) [view source on GitHub]

Randomly change brightness of the input image.

Parameters:

Name Type Description
limit [float, float] or float

factor range for changing brightness. If limit is a single float, the range will be (-limit, limit). Default: (-0.2, 0.2).

p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.RandomBrightnessContrast (brightness_limit=0.2, contrast_limit=0.2, brightness_by_max=True, always_apply=False, p=0.5) [view source on GitHub]

Randomly change brightness and contrast of the input image.

Parameters:

Name Type Description
brightness_limit [float, float] or float

factor range for changing brightness. If limit is a single float, the range will be (-limit, limit). Default: (-0.2, 0.2).

contrast_limit [float, float] or float

factor range for changing contrast. If limit is a single float, the range will be (-limit, limit). Default: (-0.2, 0.2).

brightness_by_max Boolean

If True adjust contrast by image dtype maximum, else adjust contrast by image mean.

p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8, float32
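
A minimal RandomBrightnessContrast sketch; the limits below are illustrative, not defaults:

```python
import albumentations as A
import numpy as np

image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

# Single floats expand to symmetric ranges: (-0.3, 0.3) for both factors here.
transform = A.RandomBrightnessContrast(
    brightness_limit=0.3, contrast_limit=0.3, brightness_by_max=True, p=1.0
)
adjusted = transform(image=image)["image"]
```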

class albumentations.augmentations.transforms.RandomContrast (limit=0.2, always_apply=False, p=0.5) [view source on GitHub]

Randomly change contrast of the input image.

Parameters:

Name Type Description
limit [float, float] or float

factor range for changing contrast. If limit is a single float, the range will be (-limit, limit). Default: (-0.2, 0.2).

p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.RandomFog (fog_coef_lower=0.3, fog_coef_upper=1, alpha_coef=0.08, always_apply=False, p=0.5) [view source on GitHub]

Simulates fog for the image

From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library

Parameters:

Name Type Description
fog_coef_lower float

lower limit for fog intensity coefficient. Should be in [0, 1] range.

fog_coef_upper float

upper limit for fog intensity coefficient. Should be in [0, 1] range.

alpha_coef float

transparency of the fog circles. Should be in [0, 1] range.

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.RandomGamma (gamma_limit=(80, 120), eps=None, always_apply=False, p=0.5) [view source on GitHub]

Parameters:

Name Type Description
gamma_limit float or [float, float]

If gamma_limit is a single float value, the range will be (-gamma_limit, gamma_limit). Default: (80, 120).

eps

Deprecated.

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.RandomGravel (gravel_roi=(0.1, 0.4, 0.9, 0.9), number_of_patches=2, always_apply=False, p=0.5) [view source on GitHub]

Add gravel.

From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library

Parameters:

Name Type Description
gravel_roi float, float, float, float

(top-left x, top-left y, bottom-right x, bottom right y). Should be in [0, 1] range

number_of_patches int

number of gravel patches required

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.RandomGridShuffle (grid=(3, 3), always_apply=False, p=0.5) [view source on GitHub]

Randomly shuffle the grid's cells on the image.

Parameters:

Name Type Description
grid [int, int]

size of grid for splitting image.

Targets: image, mask, keypoints

Image types: uint8, float32

class albumentations.augmentations.transforms.RandomRain (slant_lower=-10, slant_upper=10, drop_length=20, drop_width=1, drop_color=(200, 200, 200), blur_value=7, brightness_coefficient=0.7, rain_type=None, always_apply=False, p=0.5) [view source on GitHub]

Adds rain effects.

From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library

Parameters:

Name Type Description
slant_lower

should be in range [-20, 20].

slant_upper

should be in range [-20, 20].

drop_length

should be in range [0, 100].

drop_width

should be in range [1, 5].

drop_color list of (r, g, b)

rain lines color.

blur_value int

rainy views are blurry

brightness_coefficient float

rainy days are usually shady. Should be in range [0, 1].

rain_type

One of [None, "drizzle", "heavy", "torrential"]

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.RandomShadow (shadow_roi=(0, 0.5, 1, 1), num_shadows_lower=1, num_shadows_upper=2, shadow_dimension=5, always_apply=False, p=0.5) [view source on GitHub]

Simulates shadows for the image

From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library

Parameters:

Name Type Description
shadow_roi float, float, float, float

region of the image where shadows will appear (x_min, y_min, x_max, y_max). All values should be in range [0, 1].

num_shadows_lower int

Lower limit for the possible number of shadows. Should be in range [0, num_shadows_upper].

num_shadows_upper int

Upper limit for the possible number of shadows. Should be in range [num_shadows_lower, inf].

shadow_dimension int

number of edges in the shadow polygons

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.RandomSnow (snow_point_lower=0.1, snow_point_upper=0.3, brightness_coeff=2.5, always_apply=False, p=0.5) [view source on GitHub]

Bleach out some pixel values simulating snow.

From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library

Parameters:

Name Type Description
snow_point_lower float

lower bound of the amount of snow. Should be in [0, 1] range

snow_point_upper float

upper bound of the amount of snow. Should be in [0, 1] range

brightness_coeff float

a larger value will lead to more snow on the image. Should be >= 0

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.RandomSunFlare (flare_roi=(0, 0, 1, 0.5), angle_lower=0, angle_upper=1, num_flare_circles_lower=6, num_flare_circles_upper=10, src_radius=400, src_color=(255, 255, 255), always_apply=False, p=0.5) [view source on GitHub]

Simulates Sun Flare for the image

From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library

Parameters:

Name Type Description
flare_roi float, float, float, float

region of the image where flare will appear (x_min, y_min, x_max, y_max). All values should be in range [0, 1].

angle_lower float

should be in range [0, angle_upper].

angle_upper float

should be in range [angle_lower, 1].

num_flare_circles_lower int

lower limit for the number of flare circles. Should be in range [0, num_flare_circles_upper].

num_flare_circles_upper int

upper limit for the number of flare circles. Should be in range [num_flare_circles_lower, inf].

src_radius int
src_color int, int, int

color of the flare

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.RandomToneCurve (scale=0.1, always_apply=False, p=0.5) [view source on GitHub]

Randomly change the relationship between bright and dark areas of the image by manipulating its tone curve.

Parameters:

Name Type Description
scale float

standard deviation of the normal distribution. Used to sample random distances to move two control points that modify the image's curve. Values should be in range [0, 1]. Default: 0.1

Targets: image

Image types: uint8

class albumentations.augmentations.transforms.RGBShift (r_shift_limit=20, g_shift_limit=20, b_shift_limit=20, always_apply=False, p=0.5) [view source on GitHub]

Randomly shift values for each channel of the input RGB image.

Parameters:

Name Type Description
r_shift_limit [int, int] or int

range for changing values for the red channel. If r_shift_limit is a single int, the range will be (-r_shift_limit, r_shift_limit). Default: (-20, 20).

g_shift_limit [int, int] or int

range for changing values for the green channel. If g_shift_limit is a single int, the range will be (-g_shift_limit, g_shift_limit). Default: (-20, 20).

b_shift_limit [int, int] or int

range for changing values for the blue channel. If b_shift_limit is a single int, the range will be (-b_shift_limit, b_shift_limit). Default: (-20, 20).

p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.RingingOvershoot (blur_limit=(7, 15), cutoff=(0.7853981633974483, 1.5707963267948966), always_apply=False, p=0.5) [view source on GitHub]

Create ringing or overshoot artefacts by convolving the image with a 2D sinc filter.

Parameters:

Name Type Description
blur_limit int, [int, int]

maximum kernel size for sinc filter. Should be in range [3, inf). Default: (7, 15).

cutoff float, [float, float]

range to choose the cutoff frequency in radians. Should be in range (0, np.pi). Default: (np.pi / 4, np.pi / 2).

p float

probability of applying the transform. Default: 0.5.

Reference: dsp.stackexchange.com/questions/58301/2-d-circularly-symmetric-low-pass-filter https://arxiv.org/abs/2107.10833

Targets: image

class albumentations.augmentations.transforms.Sharpen (alpha=(0.2, 0.5), lightness=(0.5, 1.0), always_apply=False, p=0.5) [view source on GitHub]

Sharpen the input image and overlay the result with the original image.

Parameters:

Name Type Description
alpha [float, float]

range to choose the visibility of the sharpened image. At 0, only the original image is visible, at 1.0 only its sharpened version is visible. Default: (0.2, 0.5).

lightness [float, float]

range to choose the lightness of the sharpened image. Default: (0.5, 1.0).

p float

probability of applying the transform. Default: 0.5.

Targets: image

class albumentations.augmentations.transforms.Solarize (threshold=128, always_apply=False, p=0.5) [view source on GitHub]

Invert all pixel values above a threshold.

Parameters:

Name Type Description
threshold [int, int] or int, or [float, float] or float

range for solarizing threshold. If threshold is a single value, the range will be [threshold, threshold]. Default: 128.

p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: any
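
A minimal Solarize sketch using the default threshold on a uint8 image:

```python
import albumentations as A
import numpy as np

image = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)

# Every pixel value above 128 is inverted (255 - value for uint8 inputs).
transform = A.Solarize(threshold=128, p=1.0)
solarized = transform(image=image)["image"]
```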

class albumentations.augmentations.transforms.Spatter (mean=0.65, std=0.3, gauss_sigma=2, cutout_threshold=0.68, intensity=0.6, mode='rain', color=None, always_apply=False, p=0.5) [view source on GitHub]

Apply spatter transform. It simulates corruption which can occlude a lens in the form of rain or mud.

Parameters:

Name Type Description
mean float, or tuple of floats

Mean value of the normal distribution for generating the liquid layer. If a single float, it will be used as the mean. If a tuple of floats, the mean will be sampled from the range [mean[0], mean[1]). Default: 0.65.

std float, or tuple of floats

Standard deviation of the normal distribution for generating the liquid layer. If a single float, it will be used as the std. If a tuple of floats, the std will be sampled from the range [std[0], std[1]). Default: 0.3.

gauss_sigma float, or tuple of floats

Sigma value for Gaussian filtering of the liquid layer. If a single float, it will be used as gauss_sigma. If a tuple of floats, gauss_sigma will be sampled from the range [sigma[0], sigma[1]). Default: 2.

cutout_threshold float, or tuple of floats

Threshold for filtering the liquid layer (determines the number of drops). If a single float, it will be used as cutout_threshold. If a tuple of floats, cutout_threshold will be sampled from the range [cutout_threshold[0], cutout_threshold[1]). Default: 0.68.

intensity float, or tuple of floats

Intensity of the corruption. If a single float, it will be used as the intensity. If a tuple of floats, the intensity will be sampled from the range [intensity[0], intensity[1]). Default: 0.6.

mode string, or list of strings

Type of corruption. Currently supported options are 'rain' and 'mud'. If a list is provided, the type of corruption will be sampled from the list. Default: 'rain'.

color list of (r, g, b) or dict or None

Corruption elements color. If a list, the provided list is used as the color for the specified mode. If a dict, the provided colors are used for the specified modes (a color must be provided for each specified mode). If None, default colors are used (rain: (238, 238, 175), mud: (20, 42, 63)).

p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8, float32

Reference: https://arxiv.org/pdf/1903.12261.pdf https://github.com/hendrycks/robustness/blob/master/ImageNet-C/create_c/make_imagenet_c.py

class albumentations.augmentations.transforms.Superpixels (p_replace=0.1, n_segments=100, max_size=128, interpolation=1, always_apply=False, p=0.5) [view source on GitHub]

Transform images partially/completely to their superpixel representation. This implementation uses skimage's version of the SLIC algorithm.

Parameters:

Name Type Description
p_replace float or tuple of float

Defines for each segment the probability that the pixels within that segment are replaced by their average color (otherwise, the pixels are not changed). Examples: a probability of 0.0 means that the pixels of no segment are replaced by their average color (the image is not changed at all); a probability of 0.5 means that around half of all segments are replaced by their average color; a probability of 1.0 means that all segments are replaced by their average color (resulting in a Voronoi image). Behaviour based on the chosen data type for this parameter: if a float, then that float will always be used; if a tuple (a, b), then a random probability will be sampled from the interval [a, b] per image.

n_segments int, or tuple of int

Rough target number of how many superpixels to generate (the algorithm may deviate from this number). A lower value will lead to coarser superpixels. Higher values are computationally more intensive and will hence lead to a slowdown. If a single int, then that value will always be used as the number of segments. If a tuple (a, b), then a value from the discrete interval [a..b] will be sampled per image.

max_size int or None

Maximum image size at which the augmentation is performed. If the width or height of an image exceeds this value, it will be downscaled before the augmentation so that the longest side matches max_size. This is done to speed up the process. The final output image has the same size as the input image. Note that in case p_replace is below 1.0, the down-/upscaling will affect the not-replaced pixels too. Use None to apply no down-/upscaling.

interpolation OpenCV flag

flag that is used to specify the interpolation algorithm. Should be one of: cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.

p float

probability of applying the transform. Default: 0.5.

Targets: image

class albumentations.augmentations.transforms.TemplateTransform (templates, img_weight=0.5, template_weight=0.5, template_transform=None, name=None, always_apply=False, p=0.5) [view source on GitHub]

Apply blending of the input image with the specified templates.

Parameters:

Name Type Description
templates numpy array or list of numpy arrays

Images to use as templates for the transform.

img_weight [float, float] or float

If a single float, it will be used as the weight for the input image. If a tuple of floats, img_weight will be sampled from the range [img_weight[0], img_weight[1]). Default: 0.5.

template_weight [float, float] or float

If a single float, it will be used as the weight for the template. If a tuple of floats, template_weight will be sampled from the range [template_weight[0], template_weight[1]). Default: 0.5.

template_transform

transformation object which could be applied to the template; must produce a template of the same size as the input image.

name string

(Optional) Name of transform, used only for deserialization.

p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8, float32
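
A minimal TemplateTransform sketch using a hypothetical template (a constant gray image of the same size and dtype as the input):

```python
import albumentations as A
import numpy as np

image = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)

# Hypothetical template: a flat gray image matching the input shape and dtype.
template = np.full((128, 128, 3), 128, dtype=np.uint8)

transform = A.TemplateTransform(
    templates=[template], img_weight=0.5, template_weight=0.5, p=1.0
)
blended = transform(image=image)["image"]
```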

class albumentations.augmentations.transforms.ToFloat (max_value=None, always_apply=False, p=1.0) [view source on GitHub]

Divide pixel values by max_value to get a float32 output array where all values lie in the range [0, 1.0]. If max_value is None the transform will try to infer the maximum value by inspecting the data type of the input image.

See also: FromFloat.

Parameters:

Name Type Description
max_value float

maximum possible input value. Default: None.

p float

probability of applying the transform. Default: 1.0.

Targets: image

Image types: any type
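
A short round-trip sketch pairing ToFloat with FromFloat on a synthetic uint8 image:

```python
import albumentations as A
import numpy as np

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)

to_float = A.ToFloat(p=1.0)                  # max_value inferred from uint8 (255)
from_float = A.FromFloat(dtype="uint8", p=1.0)

float_image = to_float(image=image)["image"]        # float32 values in [0, 1]
restored = from_float(image=float_image)["image"]   # back to uint8
```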

class albumentations.augmentations.transforms.ToGray [view source on GitHub]

Convert the input RGB image to grayscale. If the mean pixel value for the resulting image is greater than 127, invert the resulting grayscale image.

Parameters:

Name Type Description
p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.ToRGB (always_apply=True, p=1.0) [view source on GitHub]

Convert the input grayscale image to RGB.

Parameters:

Name Type Description
p float

probability of applying the transform. Default: 1.

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.ToSepia (always_apply=False, p=0.5) [view source on GitHub]

Applies a sepia filter to the input RGB image.

Parameters:

Name Type Description
p float

probability of applying the transform. Default: 0.5.

Targets: image

Image types: uint8, float32

class albumentations.augmentations.transforms.UnsharpMask (blur_limit=(3, 7), sigma_limit=0.0, alpha=(0.2, 0.5), threshold=10, always_apply=False, p=0.5) [view source on GitHub]

Sharpen the input image using Unsharp Masking and overlay the result with the original image.

Parameters:

Name Type Description
blur_limit int, [int, int]

maximum Gaussian kernel size for blurring the input image. Must be zero or odd and in range [0, inf). If set to 0, it will be computed from sigma as round(sigma * (3 if img.dtype == np.uint8 else 4) * 2 + 1) + 1. If a single value is set, blur_limit will be in range (0, blur_limit). Default: (3, 7).

sigma_limit float, [float, float]

Gaussian kernel standard deviation. Must be in range [0, inf). If a single value is set, sigma_limit will be in range (0, sigma_limit). If set to 0, sigma will be computed as sigma = 0.3*((ksize-1)*0.5 - 1) + 0.8. Default: 0.

alpha float, [float, float]

range to choose the visibility of the sharpened image. At 0, only the original image is visible, at 1.0 only its sharpened version is visible. Default: (0.2, 0.5).

threshold int

Value to limit sharpening only to areas with a high pixel difference between the original image and its smoothed version. A higher threshold means less sharpening on flat areas. Must be in range [0, 255]. Default: 10.

p float

probability of applying the transform. Default: 0.5.

Reference: arxiv.org/pdf/2107.10833.pdf

Targets: image
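
A minimal UnsharpMask sketch using the default parameters on a synthetic image:

```python
import albumentations as A
import numpy as np

image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

transform = A.UnsharpMask(
    blur_limit=(3, 7), sigma_limit=0.0, alpha=(0.2, 0.5), threshold=10, p=1.0
)
sharpened = transform(image=image)["image"]
```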