
mmdet.apis

mmdet.datasets

datasets

api_wrappers

class mmdet.datasets.api_wrappers.COCO(*args: Any, **kwargs: Any)[source]

This class is almost the same as the official pycocotools package.

It implements snake-case function aliases so that the COCO class has the same interface as the LVIS class.
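
For illustration, a minimal usage sketch; the annotation path is a placeholder and the alias names are assumed to mirror pycocotools' getCatIds/getImgIds/getAnnIds/loadAnns:

>>> from mmdet.datasets.api_wrappers import COCO
>>> # Placeholder path; any COCO-format annotation file works here.
>>> coco = COCO('data/coco/annotations/instances_val2017.json')
>>> # Snake-case aliases giving the same interface as the LVIS class.
>>> cat_ids = coco.get_cat_ids(cat_names=['person'])
>>> img_ids = coco.get_img_ids(cat_ids=cat_ids)
>>> ann_ids = coco.get_ann_ids(img_ids=img_ids[:1], cat_ids=cat_ids)
>>> anns = coco.load_anns(ann_ids)
>>> print(len(anns))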

class mmdet.datasets.api_wrappers.COCOPanoptic(*args: Any, **kwargs: Any)[source]

This wrapper is for loading the panoptic style annotation file.

The format is shown in the CocoPanopticDataset class.

Parameters

annotation_file (str, optional) – Path of annotation file. Defaults to None.

createIndex() → None[source]

Create index.

load_anns(ids: Union[List[int], int] = []) → Optional[List[dict]][source]

Load anns with the specified ids.

self.anns is a list of annotation lists instead of a list of annotations.

Parameters

ids (Union[List[int], int]) – Integer ids specifying anns.

Returns

Loaded ann objects.

Return type

anns (List[dict], optional)

samplers

class mmdet.datasets.samplers.AspectRatioBatchSampler(sampler: torch.utils.data.sampler.Sampler, batch_size: int, drop_last: bool = False)[source]

A sampler wrapper for grouping images with similar aspect ratio (< 1 or >= 1) into the same batch.

Parameters
  • sampler (Sampler) – Base sampler.

  • batch_size (int) – Size of mini-batch.

  • drop_last (bool) – If True, the sampler will drop the last batch if its size would be less than batch_size.
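
A hedged sketch of how this batch sampler is typically wired into an MMDetection 3.x dataloader config; the dataset fields below are placeholders:

>>> # DefaultSampler yields indices, while AspectRatioBatchSampler groups
>>> # them into landscape/portrait batches.
>>> train_dataloader = dict(
>>>     batch_size=2,
>>>     num_workers=2,
>>>     sampler=dict(type='DefaultSampler', shuffle=True),
>>>     batch_sampler=dict(type='AspectRatioBatchSampler', drop_last=False),
>>>     dataset=dict(
>>>         type='CocoDataset',
>>>         data_root='data/coco/',
>>>         ann_file='annotations/instances_train2017.json',
>>>         data_prefix=dict(img='train2017/')))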

class mmdet.datasets.samplers.ClassAwareSampler(dataset: mmengine.dataset.base_dataset.BaseDataset, seed: Optional[int] = None, num_sample_class: int = 1)[source]

Sampler that restricts data loading to the label of the dataset.

A class-aware sampling strategy to effectively tackle the non-uniform class distribution. The length of the training data is consistent with the source data. This is a simple improvement based on Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks.

The implementation logic follows https://github.com/Sense-X/TSD/blob/master/mmdet/datasets/samplers/distributed_classaware_sampler.py

Parameters
  • dataset – Dataset used for sampling.

  • seed (int, optional) – Random seed used to shuffle the sampler. This number should be identical across all processes in the distributed group. Defaults to None.

  • num_sample_class (int) – The number of samples taken from each per-label list. Defaults to 1.
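
A hedged config sketch that swaps the default training sampler for ClassAwareSampler; only the sampler entry differs from a standard setup, and the dataset fields are placeholders:

>>> # Draw num_sample_class samples from each per-class image list to
>>> # counter a long-tailed class distribution.
>>> train_dataloader = dict(
>>>     batch_size=2,
>>>     num_workers=2,
>>>     sampler=dict(type='ClassAwareSampler', num_sample_class=1),
>>>     dataset=dict(
>>>         type='CocoDataset',
>>>         data_root='data/coco/',
>>>         ann_file='annotations/instances_train2017.json',
>>>         data_prefix=dict(img='train2017/')))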

get_cat2imgs() → Dict[int, list][source]

Get a dict with class as key and img_ids as values.

Returns

A dict of per-label image lists; each key is a label index and each value is the list of image indices that contain that label.

Return type

dict[int, list]

set_epoch(epoch: int) → None[source]

Sets the epoch for this sampler.

When shuffle=True, this ensures all replicas use a different random ordering for each epoch. Otherwise, the next iteration of this sampler will yield the same ordering.

Parameters

epoch (int) – Epoch number.

class mmdet.datasets.samplers.GroupMultiSourceSampler(dataset: mmengine.dataset.base_dataset.BaseDataset, batch_size: int, source_ratio: List[Union[int, float]], shuffle: bool = True, seed: Optional[int] = None)[source]

Group Multi-Source Infinite Sampler.

According to the sampling ratio, sample data from different datasets but the same group to form batches.

Parameters
  • dataset (Sized) – The dataset.

  • batch_size (int) – Size of mini-batch.

  • source_ratio (list[int | float]) – The sampling ratio of different source datasets in a mini-batch.

  • shuffle (bool) – Whether to shuffle the dataset or not. Defaults to True.

  • seed (int, optional) – Random seed. If None, set a random seed. Defaults to None.

class mmdet.datasets.samplers.MultiSourceSampler(dataset: Sized, batch_size: int, source_ratio: List[Union[int, float]], shuffle: bool = True, seed: Optional[int] = None)[source]

Multi-Source Infinite Sampler.

According to the sampling ratio, sample data from different datasets to form batches.

Parameters
  • dataset (Sized) – The dataset.

  • batch_size (int) – Size of mini-batch.

  • source_ratio (list[int | float]) – The sampling ratio of different source datasets in a mini-batch.

  • shuffle (bool) – Whether to shuffle the dataset or not. Defaults to True.

  • seed (int, optional) – Random seed. If None, set a random seed. Defaults to None.

Examples

>>> dataset_type = 'ConcatDataset'
>>> sub_dataset_type = 'CocoDataset'
>>> data_root = 'data/coco/'
>>> sup_ann = '../coco_semi_annos/instances_train2017.1@10.json'
>>> unsup_ann = '../coco_semi_annos/' \
>>>             'instances_train2017.1@10-unlabeled.json'
>>> dataset = dict(type=dataset_type,
>>>     datasets=[
>>>         dict(
>>>             type=sub_dataset_type,
>>>             data_root=data_root,
>>>             ann_file=sup_ann,
>>>             data_prefix=dict(img='train2017/'),
>>>             filter_cfg=dict(filter_empty_gt=True, min_size=32),
>>>             pipeline=sup_pipeline),
>>>         dict(
>>>             type=sub_dataset_type,
>>>             data_root=data_root,
>>>             ann_file=unsup_ann,
>>>             data_prefix=dict(img='train2017/'),
>>>             filter_cfg=dict(filter_empty_gt=True, min_size=32),
>>>             pipeline=unsup_pipeline),
>>>         ])
>>> train_dataloader = dict(
>>>     batch_size=5,
>>>     num_workers=5,
>>>     persistent_workers=True,
>>>     sampler=dict(type='MultiSourceSampler',
>>>         batch_size=5, source_ratio=[1, 4]),
>>>     batch_sampler=None,
>>>     dataset=dataset)

set_epoch(epoch: int) → None[source]

Not supported in epoch-based runners.

transforms

mmdet.engine

hooks

class mmdet.engine.hooks.CheckInvalidLossHook(interval: int = 50)[source]

Check invalid loss hook.

This hook will regularly check whether the loss is valid during training.

Parameters

interval (int) – Checking interval (every k iterations). Default: 50.
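
A hedged sketch of enabling the hook from a config via custom_hooks:

>>> # Check for NaN/Inf losses every 50 training iterations.
>>> custom_hooks = [dict(type='CheckInvalidLossHook', interval=50)]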

after_train_iter(runner: mmengine.runner.runner.Runner, batch_idx: int, data_batch: Optional[dict] = None, outputs: Optional[dict] = None) → None[source]

Regularly check whether the loss is valid every n iterations.

Parameters
  • runner (Runner) – The runner of the training process.

  • batch_idx (int) – The index of the current batch in the train loop.

  • data_batch (dict, optional) – Data from dataloader. Defaults to None.

  • outputs (dict, optional) – Outputs from model. Defaults to None.

class mmdet.engine.hooks.DetVisualizationHook(draw: bool = False, interval: int = 50, score_thr: float = 0.3, show: bool = False, wait_time: float = 0.0, test_out_dir: Optional[str] = None, backend_args: Optional[dict] = None)[source]

Detection Visualization Hook. Used to visualize validation and testing process prediction results.

In the testing phase:

  1. If show is True, it means that only the prediction results are visualized without storing data, so vis_backends needs to be excluded.

  2. If test_out_dir is specified, it means that the prediction results need to be saved to test_out_dir. In order to avoid vis_backends also storing data, vis_backends needs to be excluded.

  3. vis_backends takes effect if the user does not specify show and test_out_dir. You can set vis_backends to WandbVisBackend or TensorboardVisBackend to store the prediction result in Wandb or Tensorboard.

Parameters
  • draw (bool) – Whether to draw prediction results. If it is False, it means that no drawing will be done. Defaults to False.

  • interval (int) – The interval of visualization. Defaults to 50.

  • score_thr (float) – The threshold to visualize the bboxes and masks. Defaults to 0.3.

  • show (bool) – Whether to display the drawn image. Defaults to False.

  • wait_time (float) – The interval of show (s). Defaults to 0.

  • test_out_dir (str, optional) – Directory where painted images will be saved during the testing process.

  • backend_args (dict, optional) – Arguments to instantiate the corresponding backend. Defaults to None.
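
A hedged sketch of overriding the visualization entry of default_hooks so that test predictions are drawn and saved; the output directory name is a placeholder:

>>> # Draw predictions above score_thr and save the painted images to
>>> # test_out_dir (resolved under the run's work directory) during testing.
>>> default_hooks = dict(
>>>     visualization=dict(
>>>         type='DetVisualizationHook',
>>>         draw=True,
>>>         score_thr=0.3,
>>>         test_out_dir='vis_results'))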

after_test_iter(runner: mmengine.runner.runner.Runner, batch_idx: int, data_batch: dict, outputs: Sequence[mmdet.structures.det_data_sample.DetDataSample]) → None[source]

Run after every testing iteration.

Parameters
  • runner (Runner) – The runner of the testing process.

  • batch_idx (int) – The index of the current batch in the test loop.

  • data_batch (dict) – Data from dataloader.

  • outputs (Sequence[DetDataSample]) – A batch of data samples that contain annotations and predictions.

after_val_iter(runner: mmengine.runner.runner.Runner, batch_idx: int, data_batch: dict, outputs: Sequence[mmdet.structures.det_data_sample.DetDataSample]) → None[source]

Run after every self.interval validation iterations.

Parameters
  • runner (Runner) – The runner of the validation process.

  • batch_idx (int) – The index of the current batch in the val loop.

  • data_batch (dict) – Data from dataloader.

  • outputs (Sequence[DetDataSample]) – A batch of data samples that contain annotations and predictions.

class mmdet.engine.hooks.MeanTeacherHook(momentum: float = 0.001, interval: int = 1, skip_buffer=True)[source]

Mean Teacher Hook.

Mean Teacher is an efficient semi-supervised learning method proposed in Mean Teacher. This method requires two models with exactly the same structure, as the student model and the teacher model, respectively. The student model updates the parameters through gradient descent, and the teacher model updates the parameters through an exponential moving average of the student model. Compared with the student model, the teacher model is smoother and accumulates more knowledge.

Parameters
  • momentum (float) – The momentum used for updating the teacher's parameters. The teacher's parameters are updated with the formula: teacher = (1 - momentum) * teacher + momentum * student. Defaults to 0.001.

  • interval (int) – Update the teacher's parameters every interval iterations. Defaults to 1.

  • skip_buffer (bool) – Whether to skip the model buffers, such as batchnorm running stats (running_mean, running_var); if True, the EMA operation is not performed on them. Defaults to True.
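
The update rule can be written out as a small self-contained PyTorch sketch (not the hook itself; it assumes the two models iterate their parameters in the same order):

>>> import torch
>>> def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
>>>                momentum: float = 0.001) -> None:
>>>     # teacher = (1 - momentum) * teacher + momentum * student
>>>     with torch.no_grad():
>>>         for t_p, s_p in zip(teacher.parameters(), student.parameters()):
>>>             t_p.mul_(1 - momentum).add_(s_p, alpha=momentum)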

after_train_iter(runner: mmengine.runner.runner.Runner, batch_idx: int, data_batch: Optional[dict] = None, outputs: Optional[dict] = None) → None[source]

Update the teacher's parameters every self.interval iterations.

before_train(runner: mmengine.runner.runner.Runner) → None[source]

Check that the teacher model and the student model exist.

momentum_update(model: torch.nn.modules.module.Module, momentum: float) → None[source]

Compute the moving average of the parameters using exponential moving average.

class mmdet.engine.hooks.MemoryProfilerHook(interval: int = 50)[source]

Memory profiler hook recording memory information including virtual memory, swap memory, and the memory of the current process.

Parameters

interval (int) – Checking interval (every k iterations). Default: 50.

after_test_iter(runner: mmengine.runner.runner.Runner, batch_idx: int, data_batch: Optional[dict] = None, outputs: Optional[Sequence[mmdet.structures.det_data_sample.DetDataSample]] = None) → None[source]

Regularly record memory information.

Parameters
  • runner (Runner) – The runner of the testing process.

  • batch_idx (int) – The index of the current batch in the test loop.

  • data_batch (dict, optional) – Data from dataloader. Defaults to None.

  • outputs (Sequence[DetDataSample], optional) – Outputs from model. Defaults to None.

after_train_iter(runner: mmengine.runner.runner.Runner, batch_idx: int, data_batch: Optional[dict] = None, outputs: Optional[dict] = None) → None[source]

Regularly record memory information.

Parameters
  • runner (Runner) – The runner of the training process.

  • batch_idx (int) – The index of the current batch in the train loop.

  • data_batch (dict, optional) – Data from dataloader. Defaults to None.

  • outputs (dict, optional) – Outputs from model. Defaults to None.

after_val_iter(runner: mmengine.runner.runner.Runner, batch_idx: int, data_batch: Optional[dict] = None, outputs: Optional[Sequence[mmdet.structures.det_data_sample.DetDataSample]] = None) → None[source]

Regularly record memory information.

Parameters
  • runner (Runner) – The runner of the validation process.

  • batch_idx (int) – The index of the current batch in the val loop.

  • data_batch (dict, optional) – Data from dataloader. Defaults to None.

  • outputs (Sequence[DetDataSample], optional) – Outputs from model. Defaults to None.

class mmdet.engine.hooks.NumClassCheckHook[source]

Check whether the num_classes in head matches the length of classes in dataset.metainfo.

before_train_epoch(runner: mmengine.runner.runner.Runner) → None[source]

Check whether the training dataset is compatible with head.

Parameters

runner (Runner) – The runner of the training or evaluation process.

before_val_epoch(runner: mmengine.runner.runner.Runner) → None[source]

Check whether the dataset in val epoch is compatible with head.

Parameters

runner (Runner) – The runner of the training or evaluation process.

class mmdet.engine.hooks.PipelineSwitchHook(switch_epoch, switch_pipeline)[source]

Switch data pipeline at switch_epoch.

Parameters
  • switch_epoch (int) – Switch pipeline at this epoch.

  • switch_pipeline (list[dict]) – The pipeline to switch to.
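
A hedged config sketch of switching to a lighter pipeline late in training; the epoch number and the transforms listed are placeholders following standard MMDetection 3.x pipeline names:

>>> # From switch_epoch onward, replace the training pipeline with this one.
>>> custom_hooks = [
>>>     dict(
>>>         type='PipelineSwitchHook',
>>>         switch_epoch=280,
>>>         switch_pipeline=[
>>>             dict(type='LoadImageFromFile'),
>>>             dict(type='LoadAnnotations', with_bbox=True),
>>>             dict(type='Resize', scale=(640, 640), keep_ratio=True),
>>>             dict(type='PackDetInputs')])
>>> ]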

before_train_epoch(runner)[source]

Switch pipeline.

class mmdet.engine.hooks.SetEpochInfoHook[source]

Set runner's epoch information to the model.

before_train_epoch(runner)[source]

All subclasses should override this method, if they need any operations before each training epoch.

Parameters

runner (Runner) – The runner of the training process.

class mmdet.engine.hooks.SyncNormHook[source]

Synchronize Norm states before validation, currently used in YOLOX.

before_val_epoch(runner)[source]

Synchronizing norm.

class mmdet.engine.hooks.YOLOXModeSwitchHook(num_last_epochs: int = 15, skip_type_keys: Sequence[str] = ('Mosaic', 'RandomAffine', 'MixUp'))[source]

Switch the mode of YOLOX during training.

This hook turns off the mosaic and mixup data augmentation and switches to use L1 loss in bbox_head.

Parameters

num_last_epochs – The number of epochs at the end of training during which the data augmentation is turned off and L1 loss is used. Defaults to 15.
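
A hedged sketch of registering the hook in a YOLOX-style config:

>>> # Turn off Mosaic/MixUp and switch to the L1 bbox loss for the
>>> # last 15 epochs.
>>> custom_hooks = [dict(type='YOLOXModeSwitchHook', num_last_epochs=15)]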

before_train_epoch(runner) → None[source]

Close mosaic and mixup augmentation and switch to use L1 loss.

optimizers

class mmdet.engine.optimizers.LearningRateDecayOptimizerConstructor(optim_wrapper_cfg: dict, paramwise_cfg: Optional[dict] = None)[source]

add_params(params: List[dict], module: torch.nn.modules.module.Module, **kwargs) → None[source]

Add all parameters of module to the params list.

The parameters of the given module will be added to the list of param groups, with specific rules defined by paramwise_cfg.

Parameters
  • params (list[dict]) – A list of param groups, it will be modified in place.

  • module (nn.Module) – The module to be added.

runner

class mmdet.engine.runner.TeacherStudentValLoop(runner, dataloader: Union[torch.utils.data.dataloader.DataLoader, Dict], evaluator: Union[mmengine.evaluator.evaluator.Evaluator, Dict, List], fp16: bool = False)[source]

Loop for validation of model teacher and student.

run()[source]

Launch validation for model teacher and student.

schedulers

class mmdet.engine.schedulers.QuadraticWarmupLR(optimizer, *args, **kwargs)[source]

Warm up the learning rate of each parameter group by quadratic formula.

Parameters
  • optimizer (Optimizer) – Wrapped optimizer.

  • begin (int) – Step at which to start updating the parameters. Defaults to 0.

  • end (int) – Step at which to stop updating the parameters. Defaults to INF.

  • last_step (int) – The index of last step. Used for resume without state dict. Defaults to -1.

  • by_epoch (bool) – Whether the scheduled parameters are updated by epochs. Defaults to True.

  • verbose (bool) – Whether to print the value for each update. Defaults to False.

class mmdet.engine.schedulers.QuadraticWarmupMomentum(optimizer, *args, **kwargs)[source]

Warm up the momentum value of each parameter group by quadratic formula.

Parameters
  • optimizer (Optimizer) – Wrapped optimizer.

  • begin (int) – Step at which to start updating the parameters. Defaults to 0.

  • end (int) – Step at which to stop updating the parameters. Defaults to INF.

  • last_step (int) – The index of last step. Used for resume without state dict. Defaults to -1.

  • by_epoch (bool) – Whether the scheduled parameters are updated by epochs. Defaults to True.

  • verbose (bool) – Whether to print the value for each update. Defaults to False.

class mmdet.engine.schedulers.QuadraticWarmupParamScheduler(optimizer: torch.optim.optimizer.Optimizer, param_name: str, begin: int = 0, end: int = 1000000000, last_step: int = -1, by_epoch: bool = True, verbose: bool = False)[source]

Warm up the parameter value of each parameter group by quadratic formula:

\[X_{t} = X_{t-1} + \frac{2t+1}{(end-begin)^{2}} \times X_{base}\]

Parameters
  • optimizer (Optimizer) – Wrapped optimizer.

  • param_name (str) – Name of the parameter to be adjusted, such as lr, momentum.

  • begin (int) – Step at which to start updating the parameters. Defaults to 0.

  • end (int) – Step at which to stop updating the parameters. Defaults to INF.

  • last_step (int) – The index of last step. Used for resume without state dict. Defaults to -1.

  • by_epoch (bool) – Whether the scheduled parameters are updated by epochs. Defaults to True.

  • verbose (bool) – Whether to print the value for each update. Defaults to False.
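
To make the formula concrete, a small sketch (independent of the scheduler class) of the cumulative parameter value after t warmup steps; summing the per-step increments gives base * ((t + 1) / (end - begin)) ** 2, a quadratic ramp from 0 to the base value:

>>> def quadratic_warmup_value(base, t, begin=0, end=1000):
>>>     # Cumulative value after step t: base * ((t + 1) / (end - begin)) ** 2
>>>     return base * ((t + 1) / (end - begin)) ** 2
>>> quadratic_warmup_value(0.01, 499)  # ~0.0025, a quarter of the way up
>>> quadratic_warmup_value(0.01, 999)  # 0.01, warmup finished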

classmethod build_iter_from_epoch(*args, begin=0, end=1000000000, by_epoch=True, epoch_length=None, **kwargs)[source]

Build an iter-based instance of this scheduler from an epoch-based config.

mmdet.evaluation

functional

mmdet.evaluation.functional.average_precision(recalls, precisions, mode='area')[source]

Calculate average precision (for single or multiple scales).

Parameters
  • recalls (ndarray) – shape (num_scales, num_dets) or (num_dets, )

  • precisions (ndarray) – shape (num_scales, num_dets) or (num_dets, )

  • mode (str) – 'area' or '11points', 'area' means calculating the area under the precision-recall curve, '11points' means calculating the average precision of recalls at [0, 0.1, …, 1]

Returns

calculated average precision

Return type

float or ndarray
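
A hedged usage sketch, assuming mmdet is installed, with a toy single-scale precision-recall pair:

>>> import numpy as np
>>> from mmdet.evaluation.functional import average_precision
>>> # Toy PR values of shape (num_dets,); mode='area' integrates the
>>> # stepwise precision-recall curve.
>>> recalls = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
>>> precisions = np.array([1.0, 0.9, 0.8, 0.7, 0.6])
>>> average_precision(recalls, precisions, mode='area')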

mmdet.evaluation.functional.bbox_overlaps(bboxes1, bboxes2, mode='iou', eps=1e-06, use_legacy_coordinate=False)[source]

Calculate the IoUs between each bbox of bboxes1 and bboxes2.

Parameters
  • bboxes1 (ndarray) – Shape (n, 4)

  • bboxes2 (ndarray) – Shape (k, 4)

  • mode (str) – IOU (intersection over union) or IOF (intersection over foreground)

  • use_legacy_coordinate (bool) – Whether to use the coordinate system of mmdet v1.x, in which width and height are calculated as x2 - x1 + 1 and y2 - y1 + 1 respectively. Note that when this function is used in VOCDataset, it should be True to align with the official implementation http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar Default: False.

Returns

Shape (n, k)

Return type

ious (ndarray)
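
A hedged usage sketch, assuming mmdet is installed, for two boxes against one box in (x1, y1, x2, y2) format:

>>> import numpy as np
>>> from mmdet.evaluation.functional import bbox_overlaps
>>> # Two boxes vs. one box; the result has shape (n, k) = (2, 1).
>>> bboxes1 = np.array([[0., 0., 10., 10.], [5., 5., 15., 15.]])
>>> bboxes2 = np.array([[0., 0., 10., 10.]])
>>> bbox_overlaps(bboxes1, bboxes2, mode='iou')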

mmdet.evaluation.functional.cityscapes_classes() → list[source]

Class names of Cityscapes.

mmdet.evaluation.functional.coco_classes() → list[source]

Class names of COCO.

mmdet.evaluation.functional.coco_panoptic_classes() → list[source]

Class names of COCO panoptic.

mmdet.evaluation.functional.eval_map(det_results, annotations, scale_ranges=None, iou_thr=0.5, ioa_thr=None, dataset=None, logger=None, tpfp_fn=None, nproc=4, use_legacy_coordinate=False, use_group_of=False, eval_mode='area')[source]

Evaluate mAP of a dataset.

Parameters
  • det_results (list[list]) – [[cls1_det, cls2_det, …], …]. The outer list indicates images, and the inner list indicates per-class detected bboxes.

  • annotations (list[dict]) –

    Ground truth annotations where each item of the list indicates an image. Keys of annotations are:

    • bboxes: numpy array of shape (n, 4)

    • labels: numpy array of shape (n, )

    • bboxes_ignore (optional): numpy array of shape (k, 4)

    • labels_ignore (optional): numpy array of shape (k, )

  • scale_ranges (list[tuple] | None) – Range of scales to be evaluated, in the format [(min1, max1), (min2, max2), …]. A range of (32, 64) means the area range between (32**2, 64**2). Defaults to None.

  • iou_thr (float) – IoU threshold to be considered as matched. Defaults to 0.5.

  • ioa_thr (float | None) – IoA threshold to be considered as matched, which is only used in OpenImages evaluation. Defaults to None.

  • dataset (list[str] | str | None) – Dataset name or dataset classes, there are minor differences in metrics for different datasets, e.g. “voc”, “imagenet_det”, etc. Defaults to None.

  • logger (logging.Logger | str | None) – The way to print the mAP summary. See mmengine.logging.print_log() for details. Defaults to None.

  • tpfp_fn (callable | None) – The function used to determine true/false positives. If None, tpfp_default() is used as default unless dataset is 'det' or 'vid' (tpfp_imagenet() in this case). If it is given as a function, then this function is used to evaluate tp & fp. Defaults to None.

  • nproc (int) – Processes used for computing TP and FP. Defaults to 4.

  • use_legacy_coordinate (bool) – Whether to use the coordinate system of mmdet v1.x, in which width and height are calculated as x2 - x1 + 1 and y2 - y1 + 1 respectively. Defaults to False.

  • use_group_of (bool) – Whether to use group of when calculating TP and FP, which is only used in OpenImages evaluation. Defaults to False.

  • eval_mode (str) – 'area' or '11points'; 'area' means calculating the area under the precision-recall curve, '11points' means calculating the average precision of recalls at [0, 0.1, …, 1]. PASCAL VOC2007 uses '11points' as its default evaluation mode, while others use 'area'. Defaults to 'area'.

Returns

(mAP, [dict, dict, …])

Return type

tuple

mmdet.evaluation.functional.eval_recalls(gts, proposals, proposal_nums=None, iou_thrs=0.5, logger=None, use_legacy_coordinate=False)[source]

Calculate recalls.

Parameters
  • gts (list[ndarray]) – a list of arrays of shape (n, 4)

  • proposals (list[ndarray]) – a list of arrays of shape (k, 4) or (k, 5)

  • proposal_nums (int | Sequence[int]) – Top N proposals to be evaluated.

  • iou_thrs (float | Sequence[float]) – IoU thresholds. Default: 0.5.

  • logger (logging.Logger | str | None) – The way to print the recall summary. See mmengine.logging.print_log() for details. Default: None.

  • use_legacy_coordinate (bool) – Whether to use the coordinate system of mmdet v1.x, in which "1" is added to both height and width so that w and h are computed as x2 - x1 + 1 and y2 - y1 + 1. Default: False.

Returns

recalls of different IoUs and proposal nums

Return type

ndarray
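
A hedged usage sketch, assuming mmdet is installed, for one image with a single ground-truth box and two scored proposals:

>>> import numpy as np
>>> from mmdet.evaluation.functional import eval_recalls
>>> # The optional 5th column of proposals is a score; recall is reported
>>> # for each (proposal_num, iou_thr) pair.
>>> gts = [np.array([[0., 0., 10., 10.]])]
>>> proposals = [np.array([[0., 0., 9., 9., 0.9],
>>>                        [20., 20., 30., 30., 0.8]])]
>>> eval_recalls(gts, proposals, proposal_nums=[1, 2], iou_thrs=0.5)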

mmdet.evaluation.functional.evaluateImgLists(prediction_list: list, groundtruth_list: list, args: object, backend_args: Optional[dict] = None, dump_matches: bool = False) → dict[source]

A wrapper of cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling.evaluateImgLists. Supports loading the groundtruth image from a file backend.

Parameters
  • prediction_list (list) – A list of prediction txt files.

  • groundtruth_list (list) – A list of groundtruth image files.

  • args (object) – A global object setting in cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling.

  • backend_args (dict, optional) – Arguments to instantiate the corresponding backend. Defaults to None.

  • dump_matches (bool) – Whether to dump matches.json. Defaults to False.

Returns

The computed metric.

Return type

dict

mmdet.evaluation.functional.get_classes(dataset) → list[source]

Get class names of a dataset.

mmdet.evaluation.functional.imagenet_det_classes() → list[source]

Class names of ImageNet Det.

mmdet.evaluation.functional.imagenet_vid_classes() → list[source]

Class names of ImageNet VID.

mmdet.evaluation.functional.objects365v1_classes() → list[source]

Class names of Objects365 V1.

mmdet.evaluation.functional.objects365v2_classes() → list[source]

Class names of Objects365 V2.

mmdet.evaluation.functional.oid_challenge_classes() → list[source]

Class names of Open Images Challenge.

mmdet.evaluation.functional.oid_v6_classes() → list[source]

Class names of Open Images V6.

mmdet.evaluation.functional.plot_iou_recall(recalls, iou_thrs)[source]

Plot IoU-Recalls curve.

Parameters
  • recalls (ndarray or list) – shape (k,)

  • iou_thrs (ndarray or list) – same shape as recalls

mmdet.evaluation.functional.plot_num_recall(recalls, proposal_nums)[source]

Plot Proposal_num-Recalls curve.

Parameters
  • recalls (ndarray or list) – shape (k,)

  • proposal_nums (ndarray or list) – same shape as recalls

mmdet.evaluation.functional.pq_compute_multi_core(matched_annotations_list, gt_folder, pred_folder, categories, backend_args=None, nproc=32)[source]

Evaluate the metrics of Panoptic Segmentation with multithreading.

Same as the function with the same name in panopticapi.

Parameters
  • matched_annotations_list (list) – The matched annotation list. Each element is a tuple of annotations of the same image with the format (gt_anns, pred_anns).

  • gt_folder (str) – The path of the ground truth images.

  • pred_folder (str) – The path of the prediction images.

  • categories (str) – The categories of the dataset.

  • backend_args (object) – The file client of the dataset. If None, the backend will be set to local.

  • nproc (int) – Number of processes for panoptic quality computing. Defaults to 32. When nproc exceeds the number of cpu cores, the number of cpu cores is used.

mmdet.evaluation.functional.pq_compute_single_core(proc_id, annotation_set, gt_folder, pred_folder, categories, backend_args=None, print_log=False)[source]

The single core function to evaluate the metric of Panoptic Segmentation.

Same as the function with the same name in panopticapi. Only the function to load the images is changed to use the file client.

Parameters
  • proc_id (int) – The id of the mini process.

  • gt_folder (str) – The path of the ground truth images.

  • pred_folder (str) – The path of the prediction images.

  • categories (str) – The categories of the dataset.

  • backend_args (object) – The Backend of the dataset. If None, the backend will be set to local.

  • print_log (bool) – Whether to print the log. Defaults to False.

mmdet.evaluation.functional.print_map_summary(mean_ap, results, dataset=None, scale_ranges=None, logger=None)[source]

Print mAP and results of each class.

A table will be printed to show the gts/dets/recall/AP of each class and the mAP.

Parameters
  • mean_ap (float) – Calculated from eval_map().

  • results (list[dict]) – Calculated from eval_map().

  • dataset (list[str] | str | None) – Dataset name or dataset classes.

  • scale_ranges (list[tuple] | None) – Range of scales to be evaluated.

  • logger (logging.Logger | str | None) – The way to print the mAP summary. See mmengine.logging.print_log() for details. Defaults to None.

mmdet.evaluation.functional.print_recall_summary(recalls, proposal_nums, iou_thrs, row_idxs=None, col_idxs=None, logger=None)[source]

Print recalls in a table.

Parameters
  • recalls (ndarray) – calculated from bbox_recalls

  • proposal_nums (ndarray or list) – top N proposals

  • iou_thrs (ndarray or list) – iou thresholds

  • row_idxs (ndarray) – which rows (proposal nums) to print

  • col_idxs (ndarray) – which cols (iou thresholds) to print

  • logger (logging.Logger | str | None) – The way to print the recall summary. See mmengine.logging.print_log() for details. Default: None.

mmdet.evaluation.functional.voc_classes() → list[source]

Class names of PASCAL VOC.

metrics

mmdet.models

backbones

data_preprocessors

dense_heads

detectors

layers

losses

necks

roi_heads

seg_heads

task_modules

test_time_augs

utils

mmdet.structures

structures

class mmdet.structures.DetDataSample(*, metainfo: Optional[dict] = None, **kwargs)[source]

A data structure interface of MMDetection. They are used as interfaces between different components.

The attributes in DetDataSample are divided into several parts:

  • proposals (InstanceData): Region proposals used in two-stage detectors.

  • gt_instances (InstanceData): Ground truth of instance annotations.

  • pred_instances (InstanceData): Instances of model predictions.

  • ignored_instances (InstanceData): Instances to be ignored during training/testing.

  • gt_panoptic_seg (PixelData): Ground truth of panoptic segmentation.

  • pred_panoptic_seg (PixelData): Prediction of panoptic segmentation.

  • gt_sem_seg (PixelData): Ground truth of semantic segmentation.

  • pred_sem_seg (PixelData): Prediction of semantic segmentation.

Examples

>>> import torch
>>> import numpy as np
>>> from mmengine.structures import InstanceData
>>> from mmdet.structures import DetDataSample
>>> data_sample = DetDataSample()
>>> img_meta = dict(img_shape=(800, 1196),
...                 pad_shape=(800, 1216))
>>> gt_instances = InstanceData(metainfo=img_meta)
>>> gt_instances.bboxes = torch.rand((5, 4))
>>> gt_instances.labels = torch.rand((5,))
>>> data_sample.gt_instances = gt_instances
>>> assert 'img_shape' in data_sample.gt_instances.metainfo_keys()
>>> len(data_sample.gt_instances)
5
>>> print(data_sample)

<DetDataSample(

    META INFORMATION

    DATA FIELDS
    gt_instances: <InstanceData(

        META INFORMATION
        pad_shape: (800, 1216)
        img_shape: (800, 1196)

        DATA FIELDS
        labels: tensor([0.8533, 0.1550, 0.5433, 0.7294, 0.5098])
        bboxes: tensor([[9.7725e-01, 5.8417e-01, 1.7269e-01, 6.5694e-01],
                [1.7894e-01, 5.1780e-01, 7.0590e-01, 4.8589e-01],
                [7.0392e-01, 6.6770e-01, 1.7520e-01, 1.4267e-01],
                [2.2411e-01, 5.1962e-01, 9.6953e-01, 6.6994e-01],
                [4.1338e-01, 2.1165e-01, 2.7239e-04, 6.8477e-01]])
    ) at 0x7f21fb1b9190>
) at 0x7f21fb1b9880>
>>> pred_instances = InstanceData(metainfo=img_meta)
>>> pred_instances.bboxes = torch.rand((5, 4))
>>> pred_instances.scores = torch.rand((5,))
>>> data_sample = DetDataSample(pred_instances=pred_instances)
>>> assert 'pred_instances' in data_sample
>>> data_sample = DetDataSample()
>>> gt_instances_data = dict(
...                        bboxes=torch.rand(2, 4),
...                        labels=torch.rand(2),
...                        masks=np.random.rand(2, 2, 2))
>>> gt_instances = InstanceData(**gt_instances_data)
>>> data_sample.gt_instances = gt_instances
>>> assert 'gt_instances' in data_sample
>>> assert 'masks' in data_sample.gt_instances
>>> data_sample = DetDataSample()
>>> gt_panoptic_seg_data = dict(panoptic_seg=torch.rand(2, 4))
>>> gt_panoptic_seg = PixelData(**gt_panoptic_seg_data)
>>> data_sample.gt_panoptic_seg = gt_panoptic_seg
>>> print(data_sample)

<DetDataSample(

    META INFORMATION

    DATA FIELDS
    _gt_panoptic_seg: <BaseDataElement(

        META INFORMATION

        DATA FIELDS
        panoptic_seg: tensor([[0.7586, 0.1262, 0.2892, 0.9341],
                [0.3200, 0.7448, 0.1052, 0.5371]])
    ) at 0x7f66c2bb7730>
    gt_panoptic_seg: <BaseDataElement(

        META INFORMATION

        DATA FIELDS
        panoptic_seg: tensor([[0.7586, 0.1262, 0.2892, 0.9341],
                [0.3200, 0.7448, 0.1052, 0.5371]])
    ) at 0x7f66c2bb7730>
) at 0x7f66c2bb7280>
>>> data_sample = DetDataSample()
>>> gt_segm_seg_data = dict(segm_seg=torch.rand(2, 2, 2))
>>> gt_segm_seg = PixelData(**gt_segm_seg_data)
>>> data_sample.gt_segm_seg = gt_segm_seg
>>> assert 'gt_segm_seg' in data_sample
>>> assert 'segm_seg' in data_sample.gt_segm_seg

bbox

mask

mmdet.testing

mmdet.visualization

mmdet.utils

class mmdet.utils.AvoidOOM(to_cpu=True, test=False)[source]

Try to convert inputs to FP16 and CPU if a PyTorch CUDA Out of Memory error is encountered. It will do the following steps:

  1. First retry after calling torch.cuda.empty_cache().

  2. If that still fails, it will then retry by converting inputs to FP16.

  3. If that still fails, it will try to convert inputs to CPU. In this case, it expects the function to dispatch to a CPU implementation.

Parameters
  • to_cpu (bool) – Whether to convert outputs to CPU if an OOM error is encountered. This will slow down the code significantly. Defaults to True.

  • test (bool) – Skip the _ignore_torch_cuda_oom operation so that lightweight data can be used in unit tests; only used in unit tests. Defaults to False.

Examples

>>> from mmdet.utils.memory import AvoidOOM
>>> AvoidCUDAOOM = AvoidOOM()
>>> output = AvoidOOM.retry_if_cuda_oom(
>>>     some_torch_function)(input1, input2)
>>> # To use as a decorator
>>> # from mmdet.utils import AvoidCUDAOOM
>>> @AvoidCUDAOOM.retry_if_cuda_oom
>>> def function(*args, **kwargs):
>>>     return None


Note

  1. The output may be on CPU even if inputs are on GPU. Processing on CPU will slow down the code significantly.

  2. When converting inputs to CPU, it will only look at each argument and check if it has .device and .to for conversion. Nested structures of tensors are not supported.

  3. Since the function might be called more than once, it has to be stateless.

retry_if_cuda_oom(func)[source]

Makes a function retry itself after encountering PyTorch's CUDA OOM error.

The implementation logic follows https://github.com/facebookresearch/detectron2/blob/main/detectron2/utils/memory.py

Parameters

func – a stateless callable that takes tensor-like objects as arguments.

Returns

a callable which retries func if OOM is encountered.

Return type

func

mmdet.utils.all_reduce_dict(py_dict, op='sum', group=None, to_float=True)[source]

Apply all reduce function for python dict object.

The code is modified from https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/utils/allreduce_norm.py.

NOTE: make sure that py_dict in different ranks has the same keys and the values should be in the same shape. Currently only supports nccl backend.

Parameters
  • py_dict (dict) – Dict to be applied all reduce op.

  • op (str) – Operator, could be 'sum' or 'mean'. Default: 'sum'.

  • group (torch.distributed.group, optional) – Distributed group. Default: None.

  • to_float (bool) – Whether to convert all values of dict to float. Default: True.

Returns

reduced python dict object.

Return type

OrderedDict

mmdet.utils.allreduce_grads(params, coalesce=True, bucket_size_mb=-1)[source]

Allreduce gradients.

Parameters
  • params (list[torch.Parameters]) – List of parameters of a model

  • coalesce (bool, optional) – Whether allreduce parameters as a whole. Defaults to True.

  • bucket_size_mb (int, optional) – Size of bucket, the unit is MB. Defaults to -1.

mmdet.utils.collect_env()[source]

Collect the information of the running environments.

mmdet.utils.compat_cfg(cfg)[source]

This function would modify some fields to keep the compatibility of the config.

For example, it will move some args which will be deprecated to the correct fields.

mmdet.utils.find_latest_checkpoint(path, suffix='pth')[source]

Find the latest checkpoint from the working directory.

Parameters
  • path (str) – The path to find checkpoints.

  • suffix (str) – File extension. Defaults to pth.

Returns

File path of the latest checkpoint.

Return type

latest_path (str | None)

References

1

https://github.com/microsoft/SoftTeacher/blob/main/ssod/utils/patch.py
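
A hedged usage sketch; the work directory path is a placeholder:

>>> from mmdet.utils import find_latest_checkpoint
>>> # Returns the newest *.pth under the directory, or None if none exist.
>>> latest = find_latest_checkpoint('work_dirs/faster_rcnn_r50_fpn_1x_coco')
>>> if latest is not None:
>>>     print(f'resume from {latest}')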

mmdet.utils.get_caller_name()[source]

Get name of caller method.

mmdet.utils.get_test_pipeline_cfg(cfg: Union[str, mmengine.config.config.ConfigDict]) → mmengine.config.config.ConfigDict[source]

Get the test dataset pipeline from the entire config.

Parameters

cfg (str or ConfigDict) – the entire config. Can be a config file or a ConfigDict.

Returns

the config of the test dataset.

Return type

ConfigDict

mmdet.utils.log_img_scale(img_scale, shape_order='hw', skip_square=False)[source]

Log image size.

Parameters
  • img_scale (tuple) – Image size to be logged.

  • shape_order (str, optional) – The order of image shape. 'hw' for (height, width) and 'wh' for (width, height). Defaults to 'hw'.

  • skip_square (bool, optional) – Whether to skip logging for square img_scale. Defaults to False.

Returns

Whether logging has been done.

Return type

bool

mmdet.utils.reduce_mean(tensor)[source]

Obtain the mean of tensor on different GPUs.

mmdet.utils.register_all_modules(init_default_scope: bool = True) → None[source]

Register all modules in mmdet into the registries.

Parameters

init_default_scope (bool) – Whether to initialize the mmdet default scope. When init_default_scope=True, the global default scope will be set to mmdet, and all registries will build modules from mmdet's registry node. To understand more about the registry, please refer to https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/registry.md Defaults to True.
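
A hedged usage sketch; calling this once up front is the typical way to populate mmdet's registries before building components from configs:

>>> from mmdet.utils import register_all_modules
>>> # Populate mmdet's registries and make 'mmdet' the default scope so
>>> # that components referenced by name in configs can be resolved.
>>> register_all_modules(init_default_scope=True)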

mmdet.utils.replace_cfg_vals(ori_cfg)[source]

Replace the string "${key}" with the corresponding value.

Replace the "${key}" with the value of ori_cfg.key in the config, with support for chained keys: for example, "${key0.key1}" is replaced with the value of cfg.key0.key1. Code is modified from vars.py (https://github.com/microsoft/SoftTeacher/blob/main/ssod/utils/vars.py).

Parameters

ori_cfg (mmengine.config.Config) – The origin config with "${key}" generated from a file.

Returns

The config with "${key}" replaced by the corresponding value.

Return type

updated_cfg (mmengine.config.Config)
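
A hedged usage sketch, assuming mmengine and mmdet are installed, showing the substitution on a config built from a plain dict; the field names are placeholders:

>>> from mmengine.config import Config
>>> from mmdet.utils import replace_cfg_vals
>>> # "${model.type}" is looked up as cfg.model.type and substituted.
>>> cfg = Config(dict(
>>>     model=dict(type='FasterRCNN'),
>>>     work_dir='work_dirs/${model.type}'))
>>> print(replace_cfg_vals(cfg).work_dir)  # work_dirs/FasterRCNN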

mmdet.utils.setup_cache_size_limit_of_dynamo()[source]

Setup cache size limit of dynamo.

Note: Due to the dynamic shape of the loss calculation and post-processing parts in the object detection algorithm, these functions must be compiled every time they are run. Setting a large value for torch._dynamo.config.cache_size_limit may result in repeated compilation, which can slow down training and testing speed. Therefore, we need to set the default value of cache_size_limit smaller. An empirical value is 4.

mmdet.utils.setup_multi_processes(cfg)[source]

Setup multi-processing environment variables.

mmdet.utils.split_batch(img, img_metas, kwargs)[source]

Split data_batch by tags.

Code is modified from https://github.com/microsoft/SoftTeacher/blob/main/ssod/utils/structure_utils.py

Parameters
  • img (Tensor) – of shape (N, C, H, W) encoding input images. Typically these should be mean centered and std scaled.

  • img_metas (list[dict]) – List of image info dicts where each dict has: 'img_shape', 'scale_factor', 'flip', and may also contain 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. For details on the values of these keys, see mmdet.datasets.pipelines.Collect.

  • kwargs (dict) – Specific to concrete implementation.

Returns

A dict of data_batch split by tags, such as 'sup', 'unsup_teacher', and 'unsup_student'.

Return type

data_groups (dict)

mmdet.utils.sync_random_seed(seed=None, device='cuda')[source]

Make sure different ranks share the same seed.

All workers must call this function, otherwise it will deadlock. This method is generally used in DistributedSampler, because the seed should be identical across all processes in the distributed group.

In distributed sampling, different ranks should sample non-overlapped data in the dataset. Therefore, this function is used to make sure that each rank shuffles the data indices in the same order based on the same seed. Then different ranks could use different indices to select non-overlapped data from the same data list.

Parameters
  • seed (int, optional) – The seed. Defaults to None.

  • device (str) – The device where the seed will be put on. Defaults to 'cuda'.

Returns

Seed to be used.

Return type

int

mmdet.utils.update_data_root(cfg, logger=None)[source]

Update data root according to the environment variable MMDET_DATASETS.

If MMDET_DATASETS is set, update cfg.data_root according to its value. Otherwise, cfg.data_root is used as the default.

Parameters
  • cfg (Config) – The model config to be modified.

  • logger (logging.Logger | str | None) – The way to print messages.
