
Customize Data Pipelines

  1. Write a new transform in a file, e.g., in my_pipeline.py. It takes a dict as input and returns a dict.

    import random
    from mmcv.transforms import BaseTransform
    from mmdet.registry import TRANSFORMS
    
    
    @TRANSFORMS.register_module()
    class MyTransform(BaseTransform):
        """Add your transform
    
        Args:
            p (float): Probability of shifts. Default 0.5.
        """
    
        def __init__(self, prob=0.5):
            self.prob = prob
    
        def transform(self, results):
            # apply the change with probability `prob`
            if random.random() < self.prob:
                results['dummy'] = True
            return results
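As a quick sanity check, the transform can be exercised directly on a results dict, outside any registry or training loop. The sketch below mirrors the class above but drops the `BaseTransform` parent and the registry decorator so it runs without MMDetection installed:

```python
import random


class MyTransform:
    """Standalone copy of the transform above, without the MMDetection
    base class or registry, so it runs on its own."""

    def __init__(self, prob=0.5):
        self.prob = prob

    def transform(self, results):
        # apply the change with probability `prob`
        if random.random() < self.prob:
            results['dummy'] = True
        return results


# prob=1.0 always applies the change; prob=0.0 never does
always = MyTransform(prob=1.0).transform({'img': 'fake'})
never = MyTransform(prob=0.0).transform({'img': 'fake'})
print(always.get('dummy'), never.get('dummy'))  # True None
```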
    
  2. Import and use the new transform in your config file. Make sure the import path is valid relative to the directory from which you launch training.

    custom_imports = dict(imports=['path.to.my_pipeline'], allow_failed_imports=False)
    
    train_pipeline = [
        dict(type='LoadImageFromFile'),
        dict(type='LoadAnnotations', with_bbox=True),
        dict(type='Resize', scale=(1333, 800), keep_ratio=True),
        dict(type='RandomFlip', prob=0.5),
        dict(type='MyTransform', prob=0.2),
        dict(type='PackDetInputs')
    ]
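To see how those `dict(type=...)` entries become transform objects, here is a toy sketch of the registry-and-build pattern (an illustration only, not MMEngine's actual implementation): the `type` key selects a registered class, the remaining keys become constructor arguments, and the resulting transforms are applied in order.

```python
# Toy illustration of the registry/build pattern (not MMEngine's real code).
REGISTRY = {}


def register_module(cls):
    REGISTRY[cls.__name__] = cls
    return cls


@register_module
class MyTransform:
    def __init__(self, prob=0.5):
        self.prob = prob

    def transform(self, results):
        results['dummy'] = True  # simplified: always applies
        return results


def build_pipeline(cfgs):
    # 'type' names the registered class; the other keys are constructor kwargs
    transforms = []
    for cfg in cfgs:
        cfg = dict(cfg)  # copy so the original config is not mutated
        cls = REGISTRY[cfg.pop('type')]
        transforms.append(cls(**cfg))
    return transforms


pipeline = build_pipeline([dict(type='MyTransform', prob=0.2)])
results = {'img': 'fake'}
for t in pipeline:
    results = t.transform(results)
print(results['dummy'])  # True
```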
    
  3. Visualize the output of your transforms pipeline

    To visualize the output of your transforms pipeline, tools/misc/browse_dataset.py lets you browse a detection dataset (both images and bounding-box annotations) interactively, or save the rendered images to a designated directory. See the visualization documentation for more details.
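A typical invocation looks like the following, run from the repository root. The config path here is just an example, and the flags may differ between MMDetection versions, so check `python tools/misc/browse_dataset.py -h` first:

```shell
# Browse the dataset defined by a config, rendering each image with its
# annotations after the train pipeline has been applied.
python tools/misc/browse_dataset.py configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
    --output-dir ./pipeline_vis \
    --not-show  # save images to disk instead of opening a window
```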
