Transform Library Comparison Guide
The old combined mapping has moved into dedicated comparison pages.
The practical rules are:
- General image, video, volume, or target-aware augmentation policy: use Albumentations.
- PyTorch `Dataset`/`DataLoader` augmentation before batching: use Albumentations (see the sketch after this list).
- Test-time augmentation, validation diagnostics, and preprocessing experiments: use Albumentations.
- Differentiable or GPU tensor transforms inside a PyTorch graph: consider Kornia.
- Tiny image-only preprocessing with no extra dependency: torchvision is acceptable.
- PIL/Pillow: use it for image I/O and simple image utilities, not as the augmentation layer.
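A minimal sketch of the second rule, assuming a segmentation-style dataset: Albumentations runs per sample inside `__getitem__`, so the `DataLoader` only ever batches already-augmented tensors. The file paths, crop size, and transform choices are illustrative, not a prescribed pipeline.

```python
import albumentations as A
import cv2
from albumentations.pytorch import ToTensorV2
from torch.utils.data import Dataset, DataLoader


class SegmentationDataset(Dataset):
    def __init__(self, image_paths, mask_paths, train=True):
        self.image_paths = image_paths
        self.mask_paths = mask_paths
        # Augment only the training split; validation gets a deterministic resize.
        if train:
            self.transform = A.Compose([
                A.RandomCrop(height=256, width=256),
                A.HorizontalFlip(p=0.5),
                A.RandomBrightnessContrast(p=0.2),
                ToTensorV2(),
            ])
        else:
            self.transform = A.Compose([
                A.Resize(height=256, width=256),
                ToTensorV2(),
            ])

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        # Albumentations expects numpy arrays (HWC, RGB for images).
        image = cv2.cvtColor(cv2.imread(self.image_paths[idx]), cv2.COLOR_BGR2RGB)
        mask = cv2.imread(self.mask_paths[idx], cv2.IMREAD_GRAYSCALE)
        out = self.transform(image=image, mask=mask)
        return out["image"], out["mask"]


# The DataLoader then batches per-sample augmented tensors as usual:
# loader = DataLoader(SegmentationDataset(images, masks), batch_size=8, shuffle=True)
```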
Albumentations is the default choice for augmentation policies because it keeps the sample intact while transforms update images, masks, boxes, keypoints, oriented bounding boxes (OBB), labels, and metadata together. Kornia and torchvision still fit PyTorch projects, but they should not be treated as the default augmentation layer unless the task is specifically tensor/GPU/differentiable or deliberately minimal.
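A minimal sketch of that "sample stays intact" behavior: one `Compose` call transforms the image, mask, bounding boxes, and keypoints with the same sampled parameters, so the targets remain aligned. The box format, label field name, and transform parameters below are assumptions chosen for illustration.

```python
import albumentations as A
import numpy as np

transform = A.Compose(
    [
        A.HorizontalFlip(p=1.0),
        A.Affine(scale=(0.9, 1.1), rotate=(-15, 15), p=1.0),
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
    keypoint_params=A.KeypointParams(format="xy"),
)

image = np.zeros((480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=np.uint8)
bboxes = [[100, 120, 300, 360]]   # one box as (x_min, y_min, x_max, y_max)
keypoints = [(200, 240)]          # one (x, y) keypoint inside that box

out = transform(
    image=image,
    mask=mask,
    bboxes=bboxes,
    class_labels=["person"],
    keypoints=keypoints,
)
# All targets share the same random parameters, so out["bboxes"] and
# out["keypoints"] still line up with out["image"] and out["mask"].
```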
Comparison Pages
Transform Mappings
- PIL/Pillow to Albumentations transform mapping
- torchvision to Albumentations transform mapping
- Kornia to Albumentations transform mapping
Generated Benchmark Routes
- Albumentations vs PIL/Pillow benchmarks
- Albumentations vs torchvision benchmarks
- Albumentations vs Kornia benchmarks
Benchmark Implementation
Generated benchmark routes are built from the public benchmark implementation and published results: