API Reference

mmdet.apis

mmdet.apis.get_root_logger(log_file=None, log_level=20)[source]

Get root logger

Parameters:
  • log_file (str, optional) – File path of log. Defaults to None.
  • log_level (int, optional) – The level of logger. Defaults to logging.INFO.
Returns:

The obtained logger

Return type:

logging.Logger
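
A minimal usage sketch; with the default log_file=None the logger only writes to the console.

>>> import logging
>>> from mmdet.apis import get_root_logger
>>> logger = get_root_logger(log_level=logging.INFO)
>>> logger.info('logger ready')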

mmdet.apis.set_random_seed(seed, deterministic=False)[source]

Set random seed.

Parameters:
  • seed (int) – Seed to be used.
  • deterministic (bool) – Whether to set the deterministic option for CUDNN backend, i.e., set torch.backends.cudnn.deterministic to True and torch.backends.cudnn.benchmark to False. Default: False.
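
A short usage sketch for making runs reproducible:

>>> from mmdet.apis import set_random_seed
>>> # seed python, numpy and torch; deterministic CUDNN trades speed for reproducibility
>>> set_random_seed(0, deterministic=True)
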
mmdet.apis.init_detector(config, checkpoint=None, device='cuda:0')[source]

Initialize a detector from config file.

Parameters:
  • config (str or mmcv.Config) – Config file path or the config object.
  • checkpoint (str, optional) – Checkpoint path. If left as None, the model will not load any weights.
  • device (str, optional) – The device where the detector is put on. Defaults to ‘cuda:0’.
Returns:

The constructed detector.

Return type:

nn.Module
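
A minimal sketch; the config and checkpoint paths below are placeholders for files available locally.

>>> from mmdet.apis import init_detector
>>> config_file = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
>>> checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x_coco.pth'
>>> model = init_detector(config_file, checkpoint_file, device='cuda:0')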

mmdet.apis.async_inference_detector(model, img)[source]

Async inference image(s) with the detector.

Parameters:
  • model (nn.Module) – The loaded detector.
  • img (str/ndarray or list[str/ndarray]) – Either image files or loaded images.
Returns:

Awaitable detection results.

mmdet.apis.inference_detector(model, img)[source]

Inference image(s) with the detector.

Parameters:
  • model (nn.Module) – The loaded detector.
  • img (str/ndarray or list[str/ndarray]) – Either image files or loaded images.
Returns:

If img is a str, a generator will be returned; otherwise the detection results are returned directly.

mmdet.apis.show_result_pyplot(model, img, result, score_thr=0.3, fig_size=(15, 10))[source]

Visualize the detection results on the image.

Parameters:
  • model (nn.Module) – The loaded detector.
  • img (str or np.ndarray) – Image filename or loaded image.
  • result (tuple[list] or list) – The detection result, can be either (bbox, segm) or just bbox.
  • score_thr (float) – The threshold to visualize the bboxes and masks.
  • fig_size (tuple) – Figure size of the pyplot figure.
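
A typical single-image workflow combining the functions above; config_file, checkpoint_file and the image path are placeholders.

>>> from mmdet.apis import init_detector, inference_detector, show_result_pyplot
>>> model = init_detector(config_file, checkpoint_file, device='cuda:0')
>>> result = inference_detector(model, 'demo/demo.jpg')
>>> show_result_pyplot(model, 'demo/demo.jpg', result, score_thr=0.3)
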
mmdet.apis.multi_gpu_test(model, data_loader, tmpdir=None, gpu_collect=False)[source]

Test model with multiple gpus.

This method tests the model with multiple gpus and collects the results under two different modes: gpu and cpu modes. By setting ‘gpu_collect=True’ it encodes results to gpu tensors and uses gpu communication for results collection. In cpu mode it saves the results on different gpus to ‘tmpdir’ and collects them by the rank 0 worker.

Parameters:
  • model (nn.Module) – Model to be tested.
  • data_loader (DataLoader) – PyTorch data loader.
  • tmpdir (str) – Path of directory to save the temporary results from different gpus under cpu mode.
  • gpu_collect (bool) – Option to use either gpu or cpu to collect results.
Returns:

The prediction results.

Return type:

list
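
A rough sketch of how this is typically driven from a distributed test script; the detector, data_loader and the distributed environment are assumed to be set up elsewhere (see tools/test.py).

>>> import torch
>>> from mmcv.parallel import MMDistributedDataParallel
>>> from mmdet.apis import multi_gpu_test
>>> model = MMDistributedDataParallel(
>>>     model.cuda(), device_ids=[torch.cuda.current_device()], broadcast_buffers=False)
>>> results = multi_gpu_test(model, data_loader, tmpdir='./eval_tmp', gpu_collect=False)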

mmdet.core

anchor

class mmdet.core.anchor.AnchorGenerator(strides, ratios, scales=None, base_sizes=None, scale_major=True, octave_base_scale=None, scales_per_octave=None, centers=None, center_offset=0.0)[source]

Standard anchor generator for 2D anchor-based detectors

Parameters:
  • strides (list[int] | list[tuple[int, int]]) – Strides of anchors in multiple feature levels.
  • ratios (list[float]) – The list of ratios between the height and width of anchors in a single level.
  • scales (list[int] | None) – Anchor scales for anchors in a single level. It cannot be set when octave_base_scale and scales_per_octave are set.
  • base_sizes (list[int] | None) – The basic sizes of anchors in multiple levels. If None is given, strides will be used as base_sizes. (If strides are non-square, the shortest stride is taken.)
  • scale_major (bool) – Whether to multiply scales first when generating base anchors. If true, the anchors in the same row will have the same scales. By default it is True in V2.0
  • octave_base_scale (int) – The base scale of octave.
  • scales_per_octave (int) – Number of scales for each octave. octave_base_scale and scales_per_octave are usually used in retinanet and the scales should be None when they are set.
  • centers (list[tuple[float, float]] | None) – The centers of the anchor relative to the feature grid center in multiple feature levels. By default it is set to be None and not used. If a list of tuple of float is given, they will be used to shift the centers of anchors.
  • center_offset (float) – The offset of center in proportion to anchors’ width and height. By default it is 0 in V2.0.

Examples

>>> from mmdet.core import AnchorGenerator
>>> self = AnchorGenerator([16], [1.], [1.], [9])
>>> all_anchors = self.grid_anchors([(2, 2)], device='cpu')
>>> print(all_anchors)
[tensor([[-4.5000, -4.5000,  4.5000,  4.5000],
        [11.5000, -4.5000, 20.5000,  4.5000],
        [-4.5000, 11.5000,  4.5000, 20.5000],
        [11.5000, 11.5000, 20.5000, 20.5000]])]
>>> self = AnchorGenerator([16, 32], [1.], [1.], [9, 18])
>>> all_anchors = self.grid_anchors([(2, 2), (1, 1)], device='cpu')
>>> print(all_anchors)
[tensor([[-4.5000, -4.5000,  4.5000,  4.5000],
        [11.5000, -4.5000, 20.5000,  4.5000],
        [-4.5000, 11.5000,  4.5000, 20.5000],
        [11.5000, 11.5000, 20.5000, 20.5000]]),
 tensor([[-9., -9., 9., 9.]])]
gen_base_anchors()[source]

Generate base anchors

Returns:Base anchors of a feature grid in multiple feature levels.
Return type:list(torch.Tensor)
gen_single_level_base_anchors(base_size, scales, ratios, center=None)[source]

Generate base anchors of a single level

Parameters:
  • base_size (int | float) – Basic size of an anchor.
  • scales (torch.Tensor) – Scales of the anchor.
  • ratios (torch.Tensor) – The ratio between the height and width of anchors in a single level.
  • center (tuple[float], optional) – The center of the base anchor related to a single feature grid. Defaults to None.
Returns:

Anchors in a single-level feature map

Return type:

torch.Tensor

grid_anchors(featmap_sizes, device='cuda')[source]

Generate grid anchors in multiple feature levels

Parameters:
  • featmap_sizes (list[tuple]) – List of feature map sizes in multiple feature levels.
  • device (str) – Device where the anchors will be put on.
Returns:

Anchors in multiple feature levels.

The sizes of each tensor should be [N, 4], where N = width * height * num_base_anchors, width and height are the sizes of the corresponding feature level, and num_base_anchors is the number of anchors for that level.

Return type:

list[torch.Tensor]

num_base_anchors

total number of base anchors in a feature grid

Type:list[int]
num_levels

number of feature levels that the generator will be applied to

Type:int
single_level_grid_anchors(base_anchors, featmap_size, stride=(16, 16), device='cuda')[source]

Generate grid anchors of a single level.

Note

This function is usually called by method self.grid_anchors.

Parameters:
  • base_anchors (torch.Tensor) – The base anchors of a feature grid.
  • featmap_size (tuple[int]) – Size of the feature maps.
  • stride (tuple[int], optional) – Stride of the feature map. Defaults to (16, 16).
  • device (str, optional) – Device the tensor will be put on. Defaults to ‘cuda’.
Returns:

Anchors in the overall feature maps.

Return type:

torch.Tensor

single_level_valid_flags(featmap_size, valid_size, num_base_anchors, device='cuda')[source]

Generate the valid flags of anchors in a single feature map

Parameters:
  • featmap_size (tuple[int]) – The size of feature maps.
  • valid_size (tuple[int]) – The valid size of the feature maps.
  • num_base_anchors (int) – The number of base anchors.
  • device (str, optional) – Device where the flags will be put on. Defaults to ‘cuda’.
Returns:

The valid flags of each anchor in a single level feature map.

Return type:

torch.Tensor

valid_flags(featmap_sizes, pad_shape, device='cuda')[source]

Generate valid flags of anchors in multiple feature levels

Parameters:
  • featmap_sizes (list(tuple)) – List of feature map sizes in multiple feature levels.
  • pad_shape (tuple) – The padded shape of the image.
  • device (str) – Device where the anchors will be put on.
Returns:

Valid flags of anchors in multiple levels.

Return type:

list(torch.Tensor)
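
A small sketch building on the AnchorGenerator example above; pad_shape is assumed to be the (padded_h, padded_w) of the input image.

>>> from mmdet.core import AnchorGenerator
>>> self = AnchorGenerator([16], [1.], [1.], [9])
>>> # a 32x32 padded image gives one 2x2 feature map at stride 16
>>> flags = self.valid_flags([(2, 2)], pad_shape=(32, 32), device='cpu')
>>> assert flags[0].shape == (4,)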

class mmdet.core.anchor.LegacyAnchorGenerator(strides, ratios, scales=None, base_sizes=None, scale_major=True, octave_base_scale=None, scales_per_octave=None, centers=None, center_offset=0.0)[source]

Legacy anchor generator used in MMDetection V1.x

Difference to the V2.0 anchor generator:

  1. The center offset of V1.x anchors is set to be 0.5 rather than 0.
  2. The width/height of anchors are reduced by 1 when calculating the anchors’ centers and corners to meet the V1.x coordinate system.
  3. The anchors’ corners are quantized.
Parameters:
  • strides (list[int] | list[tuple[int]]) – Strides of anchors in multiple feature levels.
  • ratios (list[float]) – The list of ratios between the height and width of anchors in a single level.
  • scales (list[int] | None) – Anchor scales for anchors in a single level. It cannot be set at the same time if octave_base_scale and scales_per_octave are set.
  • base_sizes (list[int]) – The basic sizes of anchors in multiple levels. If None is given, strides will be used to generate base_sizes.
  • scale_major (bool) – Whether to multiply scales first when generating base anchors. If true, the anchors in the same row will have the same scales. By default it is True in V2.0
  • octave_base_scale (int) – The base scale of octave.
  • scales_per_octave (int) – Number of scales for each octave. octave_base_scale and scales_per_octave are usually used in retinanet and the scales should be None when they are set.
  • centers (list[tuple[float, float]] | None) – The centers of the anchor relative to the feature grid center in multiple feature levels. By default it is set to be None and not used. If a list of tuples of float is given, they will be used to shift the centers of anchors.
  • center_offset (float) – The offset of center in proportion to anchors’ width and height. By default it is 0 in V2.0 but it should be 0.5 in v1.x models.

Examples

>>> from mmdet.core import LegacyAnchorGenerator
>>> self = LegacyAnchorGenerator(
>>>     [16], [1.], [1.], [9], center_offset=0.5)
>>> all_anchors = self.grid_anchors(((2, 2),), device='cpu')
>>> print(all_anchors)
[tensor([[ 0.,  0.,  8.,  8.],
        [16.,  0., 24.,  8.],
        [ 0., 16.,  8., 24.],
        [16., 16., 24., 24.]])]
gen_single_level_base_anchors(base_size, scales, ratios, center=None)[source]

Generate base anchors of a single level

Note

The width/height of anchors are reduced by 1 when calculating the centers and corners to meet the V1.x coordinate system.
Parameters:
  • base_size (int | float) – Basic size of an anchor.
  • scales (torch.Tensor) – Scales of the anchor.
  • ratios (torch.Tensor) – The ratio between the height and width of anchors in a single level.
  • center (tuple[float], optional) – The center of the base anchor related to a single feature grid. Defaults to None.
Returns:

Anchors in a single-level feature map.

Return type:

torch.Tensor

mmdet.core.anchor.anchor_inside_flags(flat_anchors, valid_flags, img_shape, allowed_border=0)[source]

Check whether the anchors are inside the border

Parameters:
  • flat_anchors (torch.Tensor) – Flattened anchors, shape (n, 4).
  • valid_flags (torch.Tensor) – An existing valid flags of anchors.
  • img_shape (tuple(int)) – Shape of current image.
  • allowed_border (int, optional) – The border to allow the valid anchor. Defaults to 0.
Returns:

Flags indicating whether the anchors are inside a valid range.

Return type:

torch.Tensor

mmdet.core.anchor.images_to_levels(target, num_levels)[source]

Convert targets by image to targets by feature level.

[target_img0, target_img1] -> [target_level0, target_level1, …]
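
An illustrative sketch; the per-image targets and the per-level anchor counts are made up.

>>> import torch
>>> from mmdet.core.anchor import images_to_levels
>>> # two images with 6 anchor targets each; 4 anchors in level 0 and 2 in level 1
>>> per_image = [torch.zeros(6), torch.ones(6)]
>>> per_level = images_to_levels(per_image, [4, 2])
>>> [t.shape for t in per_level]
[torch.Size([2, 4]), torch.Size([2, 2])]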

mmdet.core.anchor.calc_region(bbox, ratio, featmap_size=None)[source]

Calculate a proportional bbox region.

The bbox center is fixed, and the new h’ and w’ are h * ratio and w * ratio.

Parameters:
  • bbox (Tensor) – Bboxes to calculate regions, shape (n, 4).
  • ratio (float) – Ratio of the output region.
  • featmap_size (tuple) – Feature map size used for clipping the boundary.
Returns:

x1, y1, x2, y2

Return type:

tuple

bbox

mmdet.core.bbox.bbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False, eps=1e-06)[source]

Calculate overlap between two sets of bboxes.

If is_aligned is False, then calculate the ious between each bbox of bboxes1 and bboxes2, otherwise the ious between each aligned pair of bboxes1 and bboxes2.

Parameters:
  • bboxes1 (Tensor) – shape (m, 4) in <x1, y1, x2, y2> format or empty.
  • bboxes2 (Tensor) – shape (n, 4) in <x1, y1, x2, y2> format or empty. If is_aligned is True, then m and n must be equal.
  • mode (str) – “iou” (intersection over union) or iof (intersection over foreground).
Returns:

shape (m, n) if is_aligned == False else shape (m, 1)

Return type:

ious(Tensor)

Example

>>> bboxes1 = torch.FloatTensor([
>>>     [0, 0, 10, 10],
>>>     [10, 10, 20, 20],
>>>     [32, 32, 38, 42],
>>> ])
>>> bboxes2 = torch.FloatTensor([
>>>     [0, 0, 10, 20],
>>>     [0, 10, 10, 19],
>>>     [10, 10, 20, 20],
>>> ])
>>> bbox_overlaps(bboxes1, bboxes2)
tensor([[0.5000, 0.0000, 0.0000],
        [0.0000, 0.0000, 1.0000],
        [0.0000, 0.0000, 0.0000]])

Example

>>> empty = torch.FloatTensor([])
>>> nonempty = torch.FloatTensor([
>>>     [0, 0, 10, 9],
>>> ])
>>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1)
>>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0)
>>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0)
class mmdet.core.bbox.BboxOverlaps2D[source]

2D IoU Calculator

class mmdet.core.bbox.BaseAssigner[source]

Base assigner that assigns boxes to ground truth boxes

assign(bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None)[source]

Assign boxes to either a ground truth box or a negative sample.

class mmdet.core.bbox.MaxIoUAssigner(pos_iou_thr, neg_iou_thr, min_pos_iou=0.0, gt_max_assign_all=True, ignore_iof_thr=-1, ignore_wrt_candidates=True, match_low_quality=True, gpu_assign_thr=-1, iou_calculator={'type': 'BboxOverlaps2D'})[source]

Assign a corresponding gt bbox or background to each bbox.

Each proposal will be assigned with -1, or a semi-positive integer indicating the ground truth index.

  • -1: negative sample, no assigned gt
  • semi-positive integer: positive sample, index (0-based) of assigned gt
Parameters:
  • pos_iou_thr (float) – IoU threshold for positive bboxes.
  • neg_iou_thr (float or tuple) – IoU threshold for negative bboxes.
  • min_pos_iou (float) – Minimum iou for a bbox to be considered as a positive bbox. Positive samples can have smaller IoU than pos_iou_thr due to the 4th step (assign max IoU sample to each gt).
  • gt_max_assign_all (bool) – Whether to assign all bboxes with the same highest overlap with some gt to that gt.
  • ignore_iof_thr (float) – IoF threshold for ignoring bboxes (if gt_bboxes_ignore is specified). Negative values mean not ignoring any bboxes.
  • ignore_wrt_candidates (bool) – Whether to compute the iof between bboxes and gt_bboxes_ignore, or the contrary.
  • match_low_quality (bool) – Whether to allow low quality matches. This is usually allowed for RPN and single stage detectors, but not allowed in the second stage. Details are demonstrated in Step 4.
  • gpu_assign_thr (int) – The upper bound of the number of GT for GPU assign. When the number of gt is above this threshold, the assignment will be done on CPU. Negative values mean not assigning on CPU.
assign(bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None)[source]

Assign gt to bboxes.

This method assigns a gt bbox to every bbox (proposal/anchor). Each bbox will be assigned with -1 or a semi-positive number: -1 means negative sample, and a semi-positive number is the index (0-based) of the assigned gt. The assignment is done in the following steps, and the order matters.

  1. assign every bbox to the background
  2. assign proposals whose iou with all gts < neg_iou_thr to 0
  3. for each bbox, if the iou with its nearest gt >= pos_iou_thr, assign it to that gt
  4. for each gt bbox, assign its nearest proposals (may be more than one) to itself
Parameters:
  • bboxes (Tensor) – Bounding boxes to be assigned, shape(n, 4).
  • gt_bboxes (Tensor) – Groundtruth boxes, shape (k, 4).
  • gt_bboxes_ignore (Tensor, optional) – Ground truth bboxes that are labelled as ignored, e.g., crowd boxes in COCO.
  • gt_labels (Tensor, optional) – Label of gt_bboxes, shape (k, ).
Returns:

The assign result.

Return type:

AssignResult

Example

>>> self = MaxIoUAssigner(0.5, 0.5)
>>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]])
>>> gt_bboxes = torch.Tensor([[0, 0, 10, 9]])
>>> assign_result = self.assign(bboxes, gt_bboxes)
>>> expected_gt_inds = torch.LongTensor([1, 0])
>>> assert torch.all(assign_result.gt_inds == expected_gt_inds)
assign_wrt_overlaps(overlaps, gt_labels=None)[source]

Assign w.r.t. the overlaps of bboxes with gts.

Parameters:
  • overlaps (Tensor) – Overlaps between k gt_bboxes and n bboxes, shape(k, n).
  • gt_labels (Tensor, optional) – Labels of k gt_bboxes, shape (k, ).
Returns:

The assign result.

Return type:

AssignResult

class mmdet.core.bbox.AssignResult(num_gts, gt_inds, max_overlaps, labels=None)[source]

Stores assignments between predicted and truth boxes.

num_gts

the number of truth boxes considered when computing this assignment

Type:int
gt_inds

for each predicted box indicates the 1-based index of the assigned truth box. 0 means unassigned and -1 means ignore.

Type:LongTensor
max_overlaps

the iou between the predicted box and its assigned truth box.

Type:FloatTensor
labels

If specified, for each predicted box indicates the category label of the assigned truth box.

Type:None | LongTensor

Example

>>> # An assign result between 4 predicted boxes and 9 true boxes
>>> # where only two boxes were assigned.
>>> num_gts = 9
>>> max_overlaps = torch.FloatTensor([0, .5, .9, 0])
>>> gt_inds = torch.LongTensor([-1, 1, 2, 0])
>>> labels = torch.LongTensor([0, 3, 4, 0])
>>> self = AssignResult(num_gts, gt_inds, max_overlaps, labels)
>>> print(str(self))  # xdoctest: +IGNORE_WANT
<AssignResult(num_gts=9, gt_inds.shape=(4,), max_overlaps.shape=(4,),
              labels.shape=(4,))>
>>> # Force addition of gt labels (when adding gt as proposals)
>>> new_labels = torch.LongTensor([3, 4, 5])
>>> self.add_gt_(new_labels)
>>> print(str(self))  # xdoctest: +IGNORE_WANT
<AssignResult(num_gts=9, gt_inds.shape=(7,), max_overlaps.shape=(7,),
              labels.shape=(7,))>
add_gt_(gt_labels)[source]

Add ground truth as assigned results

Parameters:gt_labels (torch.Tensor) – Labels of gt boxes
get_extra_property(key)[source]

Get user-defined property

info

a dictionary of info about the object

Type:dict
num_preds

the number of predictions in this assignment

Type:int
classmethod random(**kwargs)[source]

Create random AssignResult for tests or debugging.

Parameters:
  • num_preds – number of predicted boxes
  • num_gts – number of true boxes
  • p_ignore (float) – probability of a predicted box assigned to an ignored truth
  • p_assigned (float) – probability of a predicted box not being assigned
  • p_use_label (float | bool) – with labels or not
  • rng (None | int | numpy.random.RandomState) – seed or state
Returns:

Randomly generated assign results.

Return type:

AssignResult

Example

>>> from mmdet.core.bbox.assigners.assign_result import *  # NOQA
>>> self = AssignResult.random()
>>> print(self.info)
set_extra_property(key, value)[source]

Set user-defined new property

class mmdet.core.bbox.BaseSampler(num, pos_fraction, neg_pos_ub=-1, add_gt_as_proposals=True, **kwargs)[source]

Base class of samplers

sample(assign_result, bboxes, gt_bboxes, gt_labels=None, **kwargs)[source]

Sample positive and negative bboxes.

This is a simple implementation of bbox sampling given candidates, assigning results and ground truth bboxes.

Parameters:
  • assign_result (AssignResult) – Bbox assigning results.
  • bboxes (Tensor) – Boxes to be sampled from.
  • gt_bboxes (Tensor) – Ground truth bboxes.
  • gt_labels (Tensor, optional) – Class labels of ground truth bboxes.
Returns:

Sampling result.

Return type:

SamplingResult

Example

>>> from mmdet.core.bbox import RandomSampler
>>> from mmdet.core.bbox import AssignResult
>>> from mmdet.core.bbox.demodata import ensure_rng, random_boxes
>>> rng = ensure_rng(None)
>>> assign_result = AssignResult.random(rng=rng)
>>> bboxes = random_boxes(assign_result.num_preds, rng=rng)
>>> gt_bboxes = random_boxes(assign_result.num_gts, rng=rng)
>>> gt_labels = None
>>> self = RandomSampler(num=32, pos_fraction=0.5, neg_pos_ub=-1,
>>>                      add_gt_as_proposals=False)
>>> self = self.sample(assign_result, bboxes, gt_bboxes, gt_labels)
class mmdet.core.bbox.PseudoSampler(**kwargs)[source]

A pseudo sampler that does not do sampling actually.

sample(assign_result, bboxes, gt_bboxes, **kwargs)[source]

Directly returns the positive and negative indices of samples

Parameters:
  • assign_result (AssignResult) – Assigned results
  • bboxes (torch.Tensor) – Bounding boxes
  • gt_bboxes (torch.Tensor) – Ground truth boxes
Returns:

sampler results

Return type:

SamplingResult

class mmdet.core.bbox.RandomSampler(num, pos_fraction, neg_pos_ub=-1, add_gt_as_proposals=True, **kwargs)[source]

Random sampler

Parameters:
  • num (int) – Number of samples
  • pos_fraction (float) – Fraction of positive samples
  • neg_pos_ub (int, optional) – Upper bound number of negative and positive samples. Defaults to -1.
  • add_gt_as_proposals (bool, optional) – Whether to add ground truth boxes as proposals. Defaults to True.
random_choice(gallery, num)[source]

Randomly select some elements from the gallery.

If gallery is a Tensor, the returned indices will be a Tensor; If gallery is a ndarray or list, the returned indices will be a ndarray.

Parameters:
  • gallery (Tensor | ndarray | list) – indices pool.
  • num (int) – expected sample num.
Returns:

sampled indices.

Return type:

Tensor or ndarray

class mmdet.core.bbox.InstanceBalancedPosSampler(num, pos_fraction, neg_pos_ub=-1, add_gt_as_proposals=True, **kwargs)[source]

Instance balanced sampler that samples equal number of positive samples for each instance.

class mmdet.core.bbox.IoUBalancedNegSampler(num, pos_fraction, floor_thr=-1, floor_fraction=0, num_bins=3, **kwargs)[source]

IoU Balanced Sampling

arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019)

Sample proposals according to their IoU. A fraction floor_fraction of the needed RoIs is randomly sampled from proposals whose IoU is lower than floor_thr; the rest are sampled from proposals whose IoU is higher than floor_thr, drawn evenly from num_bins IoU bins.

Parameters:
  • num (int) – number of proposals.
  • pos_fraction (float) – fraction of positive proposals.
  • floor_thr (float) – threshold (minimum) IoU for IoU balanced sampling, set to -1 if all using IoU balanced sampling.
  • floor_fraction (float) – sampling fraction of proposals under floor_thr.
  • num_bins (int) – number of bins in IoU balanced sampling.
sample_via_interval(max_overlaps, full_set, num_expected)[source]

Sample according to the iou interval

Parameters:
  • max_overlaps (torch.Tensor) – IoU between bounding boxes and ground truth boxes.
  • full_set (set(int)) – A full set of indices of boxes.
  • num_expected (int) – Number of expected samples.
Returns:

Indices of samples

Return type:

np.ndarray

class mmdet.core.bbox.CombinedSampler(pos_sampler, neg_sampler, **kwargs)[source]

A sampler that combines positive sampler and negative sampler

class mmdet.core.bbox.SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, assign_result, gt_flags)[source]

Bbox sampling result.

Example

>>> # xdoctest: +IGNORE_WANT
>>> from mmdet.core.bbox.samplers.sampling_result import *  # NOQA
>>> self = SamplingResult.random(rng=10)
>>> print(f'self = {self}')
self = <SamplingResult({
    'neg_bboxes': torch.Size([12, 4]),
    'neg_inds': tensor([ 0,  1,  2,  4,  5,  6,  7,  8,  9, 10, 11, 12]),
    'num_gts': 4,
    'pos_assigned_gt_inds': tensor([], dtype=torch.int64),
    'pos_bboxes': torch.Size([0, 4]),
    'pos_inds': tensor([], dtype=torch.int64),
    'pos_is_gt': tensor([], dtype=torch.uint8)
})>
bboxes

concatenated positive and negative boxes

Type:torch.Tensor
info

Returns a dictionary of info about the object.

classmethod random(rng=None, **kwargs)[source]
Parameters:
  • rng (None | int | numpy.random.RandomState) – seed or state.
  • kwargs (keyword arguments) –
    • num_preds: number of predicted boxes
    • num_gts: number of true boxes
    • p_ignore (float): probability of a predicted box assigned to
      an ignored truth.
    • p_assigned (float): probability of a predicted box not being
      assigned.
    • p_use_label (float | bool): with labels or not.
Returns:

Randomly generated sampling result.

Return type:

SamplingResult

Example

>>> from mmdet.core.bbox.samplers.sampling_result import *  # NOQA
>>> self = SamplingResult.random()
>>> print(self.__dict__)
to(device)[source]

Change the device of the data inplace.

Example

>>> self = SamplingResult.random()
>>> print(f'self = {self.to(None)}')
>>> # xdoctest: +REQUIRES(--gpu)
>>> print(f'self = {self.to(0)}')
mmdet.core.bbox.build_assigner(cfg, **default_args)[source]

Builder of box assigner

mmdet.core.bbox.build_sampler(cfg, **default_args)[source]

Builder of box sampler

mmdet.core.bbox.bbox_flip(bboxes, img_shape, direction='horizontal')[source]

Flip bboxes horizontally or vertically.

Parameters:
  • bboxes (Tensor) – Shape (…, 4*k)
  • img_shape (tuple) – Image shape.
  • direction (str) – Flip direction, options are “horizontal” and “vertical”. Default: “horizontal”
Returns:

Flipped bboxes.

Return type:

Tensor

mmdet.core.bbox.bbox_mapping(bboxes, img_shape, scale_factor, flip, flip_direction='horizontal')[source]

Map bboxes from the original image scale to testing scale

mmdet.core.bbox.bbox_mapping_back(bboxes, img_shape, scale_factor, flip, flip_direction='horizontal')[source]

Map bboxes from testing scale to original image scale

mmdet.core.bbox.bbox2roi(bbox_list)[source]

Convert a list of bboxes to roi format.

Parameters:bbox_list (list[Tensor]) – a list of bboxes corresponding to a batch of images.
Returns:shape (n, 5), [batch_ind, x1, y1, x2, y2]
Return type:Tensor
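
A small sketch with a batch of two images, one box each:

>>> import torch
>>> from mmdet.core.bbox import bbox2roi
>>> bbox_list = [torch.Tensor([[0., 0., 10., 10.]]),
>>>              torch.Tensor([[5., 5., 20., 20.]])]
>>> rois = bbox2roi(bbox_list)
>>> rois[:, 0]  # the first column holds the batch indices
tensor([0., 1.])
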
mmdet.core.bbox.roi2bbox(rois)[source]

Convert rois to bounding box format

Parameters:rois (torch.Tensor) – RoIs with the shape (n, 5) where the first column indicates batch id of each RoI.
Returns:Converted boxes of corresponding rois.
Return type:list[torch.Tensor]
mmdet.core.bbox.bbox2result(bboxes, labels, num_classes)[source]

Convert detection results to a list of numpy arrays.

Parameters:
  • bboxes (Tensor) – shape (n, 5)
  • labels (Tensor) – shape (n, )
  • num_classes (int) – class number, including background class
Returns:

bbox results of each class

Return type:

list(ndarray)
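
A minimal sketch; shapes follow the parameter description above.

>>> import torch
>>> from mmdet.core.bbox import bbox2result
>>> bboxes = torch.Tensor([[0., 0., 10., 10., 0.9]])
>>> labels = torch.LongTensor([0])
>>> result = bbox2result(bboxes, labels, num_classes=3)
>>> len(result)  # one ndarray of shape (k, 5) per class
3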

mmdet.core.bbox.distance2bbox(points, distance, max_shape=None)[source]

Decode distance prediction to bounding box.

Parameters:
  • points (Tensor) – Shape (n, 2), [x, y].
  • distance (Tensor) – Distance from the given point to 4 boundaries (left, top, right, bottom).
  • max_shape (tuple) – Shape of the image.
Returns:

Decoded bboxes.

Return type:

Tensor
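
A small worked sketch of the decoding:

>>> import torch
>>> from mmdet.core.bbox import distance2bbox
>>> points = torch.Tensor([[5., 5.]])
>>> distance = torch.Tensor([[2., 3., 2., 3.]])  # left, top, right, bottom
>>> distance2bbox(points, distance)
tensor([[3., 2., 7., 8.]])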

mmdet.core.bbox.bbox2distance(points, bbox, max_dis=None, eps=0.1)[source]

Compute the distances from points to the four boundaries of the corresponding bboxes (the inverse of distance2bbox).

Parameters:
  • points (Tensor) – Shape (n, 2), [x, y].
  • bbox (Tensor) – Shape (n, 4), “xyxy” format
  • max_dis (float) – Upper bound of the distance.
  • eps (float) – a small value to ensure target < max_dis, instead <=
Returns:

Decoded distances.

Return type:

Tensor

mmdet.core.bbox.build_bbox_coder(cfg, **default_args)[source]

Builder of box coder

class mmdet.core.bbox.BaseBBoxCoder(**kwargs)[source]

Base bounding box coder

decode(bboxes, bboxes_pred)[source]

Decode the predicted bboxes according to prediction and base boxes

encode(bboxes, gt_bboxes)[source]

Encode deltas between bboxes and ground truth boxes

class mmdet.core.bbox.PseudoBBoxCoder(**kwargs)[source]

Pseudo bounding box coder

decode(bboxes, pred_bboxes)[source]

torch.Tensor: return the given pred_bboxes

encode(bboxes, gt_bboxes)[source]

torch.Tensor: return the given bboxes

class mmdet.core.bbox.DeltaXYWHBBoxCoder(target_means=(0.0, 0.0, 0.0, 0.0), target_stds=(1.0, 1.0, 1.0, 1.0))[source]

Delta XYWH BBox coder

Following the practice in R-CNN, this coder encodes bbox (x1, y1, x2, y2) into delta (dx, dy, dw, dh) and decodes delta (dx, dy, dw, dh) back to original bbox (x1, y1, x2, y2).

Parameters:
  • target_means (Sequence[float]) – Denormalizing means of target for delta coordinates
  • target_stds (Sequence[float]) – Denormalizing standard deviation of target for delta coordinates
decode(bboxes, pred_bboxes, max_shape=None, wh_ratio_clip=0.016)[source]

Apply transformation pred_bboxes to boxes.

Parameters:
  • bboxes (torch.Tensor) – Basic boxes.
  • pred_bboxes (torch.Tensor) – Encoded boxes with shape
  • max_shape (tuple[int], optional) – Maximum shape of boxes. Defaults to None.
  • wh_ratio_clip (float, optional) – The allowed ratio between width and height.
Returns:

Decoded boxes.

Return type:

torch.Tensor

encode(bboxes, gt_bboxes)[source]

Get box regression transformation deltas that can be used to transform the bboxes into the gt_bboxes.

Parameters:
  • bboxes (torch.Tensor) – Source boxes, e.g., object proposals.
  • gt_bboxes (torch.Tensor) – Target of the transformation, e.g., ground-truth boxes.
Returns:

Box transformation deltas

Return type:

torch.Tensor
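
A quick round-trip sketch: encoding proposals against ground truth and decoding the deltas back should recover the ground truth up to floating point error.

>>> import torch
>>> from mmdet.core.bbox import DeltaXYWHBBoxCoder
>>> coder = DeltaXYWHBBoxCoder()
>>> bboxes = torch.Tensor([[0., 0., 10., 10.]])
>>> gt_bboxes = torch.Tensor([[1., 1., 11., 12.]])
>>> deltas = coder.encode(bboxes, gt_bboxes)
>>> decoded = coder.decode(bboxes, deltas)
>>> assert torch.allclose(decoded, gt_bboxes, atol=1e-4)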

class mmdet.core.bbox.TBLRBBoxCoder(normalizer=4.0)[source]

TBLR BBox coder

Following the practice in FSAF, this coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left, right) and decodes them back to the original.

Parameters:normalizer (list | float) – Normalization factor to be divided with when coding the coordinates. If it is a list, it should have length of 4 indicating normalization factor in tblr dims. Otherwise it is a unified float factor for all dims. Default: 4.0
decode(bboxes, pred_bboxes, max_shape=None)[source]

Apply transformation pred_bboxes to boxes.

Parameters:
  • bboxes (torch.Tensor) – Basic boxes.
  • pred_bboxes (torch.Tensor) – Encoded boxes with shape
  • max_shape (tuple[int], optional) – Maximum shape of boxes. Defaults to None.
Returns:

Decoded boxes.

Return type:

torch.Tensor

encode(bboxes, gt_bboxes)[source]

Get box regression transformation deltas that can be used to transform the bboxes into the gt_bboxes in the top, bottom, left, right order.

Parameters:
  • bboxes (torch.Tensor) – source boxes, e.g., object proposals.
  • gt_bboxes (torch.Tensor) – target of the transformation, e.g., ground truth boxes.
Returns:

Box transformation deltas

Return type:

torch.Tensor

class mmdet.core.bbox.CenterRegionAssigner(pos_scale, neg_scale, min_pos_iof=0.01, ignore_gt_scale=0.5, iou_calculator={'type': 'BboxOverlaps2D'})[source]

Assign pixels at the center region of a bbox as positive.

Each proposal will be assigned with -1, 0, or a positive integer indicating the ground truth index.

  • -1: negative sample
  • semi-positive numbers: positive sample, index (0-based) of assigned gt

Parameters:
  • pos_scale (float) – Threshold within which pixels are labelled as positive.
  • neg_scale (float) – Threshold above which pixels are labelled as positive.
  • min_pos_iof (float) – Minimum iof of a pixel with a gt to be labelled as positive. Default: 1e-2
  • ignore_gt_scale (float) – Threshold within which the pixels are ignored when the gt is labelled as shadowed. Default: 0.5
assign(bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None)[source]

Assign gt to bboxes.

This method assigns gts to every bbox (proposal/anchor). Each bbox will be assigned with -1 or a semi-positive number: -1 means negative sample, and a semi-positive number is the index (0-based) of the assigned gt.
Parameters:
  • bboxes (Tensor) – Bounding boxes to be assigned, shape(n, 4).
  • gt_bboxes (Tensor) – Groundtruth boxes, shape (k, 4).
  • gt_bboxes_ignore (tensor, optional) – Ground truth bboxes that are labelled as ignored, e.g., crowd boxes in COCO.
  • gt_labels (tensor, optional) – Label of gt_bboxes, shape (num_gts,).
Returns:

The assigned result. Note that shadowed_labels of shape (N, 2) is also added as an assign_result attribute. shadowed_labels is a tensor composed of N pairs of [anchor_ind, class_label], where N is the number of anchors that lie in the outer region of a gt, anchor_ind is the shadowed anchor index and class_label is the shadowed class label.

Return type:

AssignResult

Example

>>> self = CenterRegionAssigner(0.2, 0.2)
>>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]])
>>> gt_bboxes = torch.Tensor([[0, 0, 10, 10]])
>>> assign_result = self.assign(bboxes, gt_bboxes)
>>> expected_gt_inds = torch.LongTensor([1, 0])
>>> assert torch.all(assign_result.gt_inds == expected_gt_inds)
assign_one_hot_gt_indices(is_bbox_in_gt_core, is_bbox_in_gt_shadow, gt_priority=None)[source]

Assign only one gt index to each prior box

Gts with large gt_priority are more likely to be assigned.

Parameters:
  • is_bbox_in_gt_core (Tensor) – Bool tensor indicating the bbox center is in the core area of a gt (e.g. 0-0.2). Shape: (num_prior, num_gt).
  • is_bbox_in_gt_shadow (Tensor) – Bool tensor indicating the bbox center is in the shadowed area of a gt (e.g. 0.2-0.5). Shape: (num_prior, num_gt).
  • gt_priority (Tensor) – Priorities of gts. The gt with a higher priority is more likely to be assigned to the bbox when the bbox match with multiple gts. Shape: (num_gt, ).
Returns:

The assigned gt index of each prior bbox (i.e. index from 1 to num_gts). Shape: (num_prior, ).

shadowed_gt_inds: shadowed gt indices. It is a tensor of shape (num_ignore, 2) with the first column being the shadowed prior bbox indices and the second column the shadowed gt indices (1-based).

Return type:

assigned_gt_inds

get_gt_priorities(gt_bboxes)[source]

Get gt priorities according to their areas.

Smaller gt has higher priority.

Parameters:gt_bboxes (Tensor) – Ground truth boxes, shape (k, 4).
Returns:The priority of gts so that gts with larger priority are more likely to be assigned. Shape (k, ).
Return type:Tensor

mask

mmdet.core.mask.split_combined_polys(polys, poly_lens, polys_per_mask)[source]

Split the combined 1-D polys into masks.

A mask is represented as a list of polys, and a poly is represented as a 1-D array. In the dataset, all masks are concatenated into a single 1-D tensor. Here we need to split the tensor into the original representations.

Parameters:
  • polys (list) – a list (length = image num) of 1-D tensors
  • poly_lens (list) – a list (length = image num) of poly length
  • polys_per_mask (list) – a list (length = image num) of poly number of each mask
Returns:

a list (length = image num) of list (length = mask num) of

list (length = poly num) of numpy array

Return type:

list

mmdet.core.mask.mask_target(pos_proposals_list, pos_assigned_gt_inds_list, gt_masks_list, cfg)[source]

Compute mask target for positive proposals in multiple images.

Parameters:
  • pos_proposals_list (list[Tensor]) – Positive proposals in multiple images.
  • pos_assigned_gt_inds_list (list[Tensor]) – Assigned GT indices for each positive proposals.
  • gt_masks_list (list[BaseInstanceMasks]) – Ground truth masks of each image.
  • cfg (dict) – Config dict that specifies the mask size.
Returns:

Mask target of each image.

Return type:

list[Tensor]

class mmdet.core.mask.BitmapMasks(masks, height, width)[source]

This class represents masks in the form of bitmaps.

Parameters:
  • masks (ndarray) – ndarray of masks in shape (N, H, W), where N is the number of objects.
  • height (int) – height of masks
  • width (int) – width of masks
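
A small construction sketch with two empty 16x16 masks:

>>> import numpy as np
>>> from mmdet.core.mask import BitmapMasks
>>> raw = np.zeros((2, 16, 16), dtype=np.uint8)
>>> masks = BitmapMasks(raw, height=16, width=16)
>>> len(masks)
2
>>> masks.to_ndarray().shape
(2, 16, 16)
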
areas

See BaseInstanceMasks.areas().

crop(bbox)[source]

See BaseInstanceMasks.crop().

crop_and_resize(bboxes, out_shape, inds, device='cpu', interpolation='bilinear')[source]

See BaseInstanceMasks.crop_and_resize().

expand(expanded_h, expanded_w, top, left)[source]

See BaseInstanceMasks.expand().

flip(flip_direction='horizontal')[source]

See BaseInstanceMasks.flip().

pad(out_shape, pad_val=0)[source]

See BaseInstanceMasks.pad().

rescale(scale, interpolation='nearest')[source]

See BaseInstanceMasks.rescale().

resize(out_shape, interpolation='nearest')[source]

See BaseInstanceMasks.resize().

to_ndarray()[source]

See BaseInstanceMasks.to_ndarray().

to_tensor(dtype, device)[source]

See BaseInstanceMasks.to_tensor().

class mmdet.core.mask.PolygonMasks(masks, height, width)[source]

This class represents masks in the form of polygons.

Polygons are represented as a list with three levels: the first level of the list corresponds to objects, the second level to the polys that compose each object, and the third level to the poly coordinates.

Parameters:
  • masks (list[list[ndarray]]) – The first level of the list corresponds to objects, the second level to the polys that compose the object, the third level to the poly coordinates
  • height (int) – height of masks
  • width (int) – width of masks
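
A small construction sketch; a single object described by one square polygon.

>>> import numpy as np
>>> from mmdet.core.mask import PolygonMasks
>>> polys = [[np.array([0., 0., 8., 0., 8., 8., 0., 8.])]]
>>> masks = PolygonMasks(polys, height=16, width=16)
>>> len(masks)
1
>>> assert float(masks.areas[0]) == 64.0
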
areas

Compute areas of masks.

This function is modified from https://github.com/facebookresearch/detectron2/blob/ffff8acc35ea88ad1cb1806ab0f00b4c1c5dbfd9/detectron2/structures/masks.py#L387. It only works with polygons, using the shoelace formula.

Returns:areas of each instance
Return type:ndarray
crop(bbox)[source]

see BaseInstanceMasks.crop()

crop_and_resize(bboxes, out_shape, inds, device='cpu', interpolation='bilinear')[source]

see BaseInstanceMasks.crop_and_resize()

expand(*args, **kwargs)[source]

TODO: Add expand for polygon

flip(flip_direction='horizontal')[source]

see BaseInstanceMasks.flip()

pad(out_shape, pad_val=0)[source]

Padding has no effect on polygons.

rescale(scale, interpolation=None)[source]

see BaseInstanceMasks.rescale()

resize(out_shape, interpolation=None)[source]

see BaseInstanceMasks.resize()

to_bitmap()[source]

convert polygon masks to bitmap masks

to_ndarray()[source]

Convert masks to the format of ndarray.

to_tensor(dtype, device)[source]

See BaseInstanceMasks.to_tensor().

mmdet.core.mask.encode_mask_results(mask_results)[source]

Encode bitmap mask to RLE code.

Parameters:mask_results (list | tuple[list]) – bitmap mask results. In mask scoring rcnn, mask_results is a tuple of (segm_results, segm_cls_score).
Returns:RLE encoded mask.
Return type:list | tuple

evaluation

mmdet.core.evaluation.get_classes(dataset)[source]

Get class names of a dataset.

class mmdet.core.evaluation.DistEvalHook(dataloader, interval=1, gpu_collect=False, **eval_kwargs)[source]

Distributed evaluation hook.

dataloader

A PyTorch dataloader.

Type:DataLoader
interval

Evaluation interval (by epochs). Default: 1.

Type:int
tmpdir

Temporary directory to save the results of all processes. Default: None.

Type:str | None
gpu_collect

Whether to use gpu or cpu to collect results. Default: False.

Type:bool
class mmdet.core.evaluation.EvalHook(dataloader, interval=1, **eval_kwargs)[source]

Evaluation hook.

dataloader

A PyTorch dataloader.

Type:DataLoader
interval

Evaluation interval (by epochs). Default: 1.

Type:int
mmdet.core.evaluation.average_precision(recalls, precisions, mode='area')[source]

Calculate average precision (for single or multiple scales).

Parameters:
  • recalls (ndarray) – shape (num_scales, num_dets) or (num_dets, )
  • precisions (ndarray) – shape (num_scales, num_dets) or (num_dets, )
  • mode (str) – ‘area’ or ‘11points’, ‘area’ means calculating the area under precision-recall curve, ‘11points’ means calculating the average precision of recalls at [0, 0.1, …, 1]
Returns:

calculated average precision

Return type:

float or ndarray
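
A tiny worked sketch: precision 1.0 at every recall level gives an AP of 1.0.

>>> import numpy as np
>>> from mmdet.core.evaluation import average_precision
>>> recalls = np.array([0.25, 0.5, 0.75, 1.0])
>>> precisions = np.array([1.0, 1.0, 1.0, 1.0])
>>> float(average_precision(recalls, precisions, mode='area'))
1.0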

mmdet.core.evaluation.eval_map(det_results, annotations, scale_ranges=None, iou_thr=0.5, dataset=None, logger=None, nproc=4)[source]

Evaluate mAP of a dataset.

Parameters:
  • det_results (list[list]) – [[cls1_det, cls2_det, …], …]. The outer list indicates images, and the inner list indicates per-class detected bboxes.
  • annotations (list[dict]) –

    Ground truth annotations where each item of the list indicates an image. Keys of annotations are:

    • bboxes: numpy array of shape (n, 4)
    • labels: numpy array of shape (n, )
    • bboxes_ignore (optional): numpy array of shape (k, 4)
    • labels_ignore (optional): numpy array of shape (k, )
  • scale_ranges (list[tuple] | None) – Range of scales to be evaluated, in the format [(min1, max1), (min2, max2), …]. A range of (32, 64) means the area range between (32**2, 64**2). Default: None.
  • iou_thr (float) – IoU threshold to be considered as matched. Default: 0.5.
  • dataset (list[str] | str | None) – Dataset name or dataset classes, there are minor differences in metrics for different datasets, e.g. “voc07”, “imagenet_det”, etc. Default: None.
  • logger (logging.Logger | str | None) – The way to print the mAP summary. See mmdet.utils.print_log() for details. Default: None.
  • nproc (int) – Processes used for computing TP and FP. Default: 4.
Returns:

(mAP, [dict, dict, …])

Return type:

tuple

mmdet.core.evaluation.print_map_summary(mean_ap, results, dataset=None, scale_ranges=None, logger=None)[source]

Print mAP and results of each class.

A table will be printed to show the gts/dets/recall/AP of each class and the mAP.

Parameters:
  • mean_ap (float) – Calculated from eval_map().
  • results (list[dict]) – Calculated from eval_map().
  • dataset (list[str] | str | None) – Dataset name or dataset classes.
  • scale_ranges (list[tuple] | None) – Range of scales to be evaluated.
  • logger (logging.Logger | str | None) – The way to print the mAP summary. See mmdet.utils.print_log() for details. Default: None.
mmdet.core.evaluation.eval_recalls(gts, proposals, proposal_nums=None, iou_thrs=0.5, logger=None)[source]

Calculate recalls.

Parameters:
  • gts (list[ndarray]) – a list of arrays of shape (n, 4)
  • proposals (list[ndarray]) – a list of arrays of shape (k, 4) or (k, 5)
  • proposal_nums (int | Sequence[int]) – Top N proposals to be evaluated.
  • iou_thrs (float | Sequence[float]) – IoU thresholds. Default: 0.5.
  • logger (logging.Logger | str | None) – The way to print the recall summary. See mmdet.utils.print_log() for details. Default: None.
Returns:

recalls of different ious and proposal nums

Return type:

ndarray

mmdet.core.evaluation.print_recall_summary(recalls, proposal_nums, iou_thrs, row_idxs=None, col_idxs=None, logger=None)[source]

Print recalls in a table.

Parameters:
  • recalls (ndarray) – calculated from bbox_recalls
  • proposal_nums (ndarray or list) – top N proposals
  • iou_thrs (ndarray or list) – iou thresholds
  • row_idxs (ndarray) – which rows(proposal nums) to print
  • col_idxs (ndarray) – which cols(iou thresholds) to print
  • logger (logging.Logger | str | None) – The way to print the recall summary. See mmdet.utils.print_log() for details. Default: None.
mmdet.core.evaluation.plot_num_recall(recalls, proposal_nums)[source]

Plot Proposal_num-Recalls curve.

Parameters:
  • recalls (ndarray or list) – shape (k,)
  • proposal_nums (ndarray or list) – same shape as recalls
mmdet.core.evaluation.plot_iou_recall(recalls, iou_thrs)[source]

Plot IoU-Recalls curve.

Parameters:
  • recalls (ndarray or list) – shape (k,)
  • iou_thrs (ndarray or list) – same shape as recalls

post_processing

mmdet.core.post_processing.multiclass_nms(multi_bboxes, multi_scores, score_thr, nms_cfg, max_num=-1, score_factors=None)[source]

NMS for multi-class bboxes.

Parameters:
  • multi_bboxes (Tensor) – shape (n, #class*4) or (n, 4)
  • multi_scores (Tensor) – shape (n, #class), where the last column contains scores of the background class, but this will be ignored.
  • score_thr (float) – bbox threshold, bboxes with scores lower than it will not be considered.
  • nms_cfg (dict) – NMS config, e.g., specifying the NMS type and IoU threshold.
  • max_num (int) – if there are more than max_num bboxes after NMS, only top max_num will be kept.
  • score_factors (Tensor) – The factors multiplied to scores before applying NMS
Returns:

(bboxes, labels), tensors of shape (k, 5) and (k, 1). Labels are 0-based.

Return type:

tuple
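
A rough sketch; the keys accepted in nms_cfg depend on the release, and the dict below assumes the common dict(type='nms', iou_thr=0.5) form.

>>> import torch
>>> from mmdet.core.post_processing import multiclass_nms
>>> xy = torch.rand(10, 2) * 50
>>> multi_bboxes = torch.cat([xy, xy + 10], dim=1)  # valid (x1, y1, x2, y2) boxes
>>> multi_scores = torch.rand(10, 3)  # 2 foreground classes + background
>>> dets, labels = multiclass_nms(multi_bboxes, multi_scores, score_thr=0.05,
>>>                               nms_cfg=dict(type='nms', iou_thr=0.5), max_num=5)
>>> assert dets.shape[1] == 5 and dets.shape[0] <= 5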

mmdet.core.post_processing.merge_aug_proposals(aug_proposals, img_metas, rpn_test_cfg)[source]

Merge augmented proposals (multiscale, flip, etc.)

Parameters:
  • aug_proposals (list[Tensor]) – proposals from different testing schemes, shape (n, 5). Note that they are not rescaled to the original image size.
  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
  • rpn_test_cfg (dict) – rpn test config.
Returns:

shape (n, 4), proposals corresponding to original image scale.

Return type:

Tensor

mmdet.core.post_processing.merge_aug_bboxes(aug_bboxes, aug_scores, img_metas, rcnn_test_cfg)[source]

Merge augmented detection bboxes and scores.

Parameters:
  • aug_bboxes (list[Tensor]) – shape (n, 4*#class)
  • aug_scores (list[Tensor] or None) – shape (n, #class)
  • img_shapes (list[Tensor]) – shape (3, ).
  • rcnn_test_cfg (dict) – rcnn test config.
Returns:

(bboxes, scores)

Return type:

tuple

mmdet.core.post_processing.merge_aug_scores(aug_scores)[source]

Merge augmented bbox scores.

mmdet.core.post_processing.merge_aug_masks(aug_masks, img_metas, rcnn_test_cfg, weights=None)[source]

Merge augmented mask prediction.

Parameters:
  • aug_masks (list[ndarray]) – shape (n, #class, h, w)
  • img_shapes (list[ndarray]) – shape (3, ).
  • rcnn_test_cfg (dict) – rcnn test config.
Returns:

(bboxes, scores)

Return type:

tuple

fp16

mmdet.core.fp16.auto_fp16(apply_to=None, out_fp32=False)[source]

Decorator to enable fp16 training automatically.

This decorator is useful when you write custom modules and want to support mixed precision training. If input arguments are fp32 tensors, they will be converted to fp16 automatically. Arguments other than fp32 tensors are ignored.

Parameters:
  • apply_to (Iterable, optional) – The argument names to be converted. None indicates all arguments.
  • out_fp32 (bool) – Whether to convert the output back to fp32.

Example

>>> import torch.nn as nn
>>> class MyModule1(nn.Module):
>>>
>>>     # Convert x and y to fp16
>>>     @auto_fp16()
>>>     def forward(self, x, y):
>>>         pass
>>> import torch.nn as nn
>>> class MyModule2(nn.Module):
>>>
>>>     # convert pred to fp16
>>>     @auto_fp16(apply_to=('pred', ))
>>>     def do_something(self, pred, others):
>>>         pass
mmdet.core.fp16.force_fp32(apply_to=None, out_fp16=False)[source]

Decorator to convert input arguments to fp32 in force.

This decorator is useful when you write custom modules and want to support mixed precision training. If there are some inputs that must be processed in fp32 mode, then this decorator can handle it. If input arguments are fp16 tensors, they will be converted to fp32 automatically. Arguments other than fp16 tensors are ignored.

Parameters:
  • apply_to (Iterable, optional) – The argument names to be converted. None indicates all arguments.
  • out_fp16 (bool) – Whether to convert the output back to fp16.

Example

>>> import torch.nn as nn
>>> class MyModule1(nn.Module):
>>>
>>>     # Convert x and y to fp32
>>>     @force_fp32()
>>>     def loss(self, x, y):
>>>         pass
>>> import torch.nn as nn
>>> class MyModule2(nn.Module):
>>>
>>>     # convert pred to fp32
>>>     @force_fp32(apply_to=('pred', ))
>>>     def post_process(self, pred, others):
>>>         pass
class mmdet.core.fp16.Fp16OptimizerHook(grad_clip=None, coalesce=True, bucket_size_mb=-1, loss_scale=512.0, distributed=True)[source]

FP16 optimizer hook.

The steps of the fp16 optimizer are as follows.

  1. Scale the loss value.
  2. BP in the fp16 model.
  3. Copy gradients from the fp16 model to the fp32 weights.
  4. Update the fp32 weights.
  5. Copy the updated parameters from the fp32 weights to the fp16 model.

Refer to https://arxiv.org/abs/1710.03740 for more details.

Parameters:loss_scale (float) – Scale factor multiplied with loss.
after_train_iter(runner)[source]

Backward optimization steps for Mixed Precision Training.

  1. Scale the loss by a scale factor.
  2. Backward the loss to obtain the gradients (fp16).
  3. Copy gradients from the model to the fp32 weight copy.
  4. Scale the gradients back and update the fp32 weight copy.
  5. Copy back the params from fp32 weight copy to the fp16 model.
before_run(runner)[source]

Preparing steps before Mixed Precision Training.

  1. Make a master copy of fp32 weights for optimization.
  2. Convert the main model from fp32 to fp16.
copy_grads_to_fp32(fp16_net, fp32_weights)[source]

Copy gradients from fp16 model to fp32 weight copy.

copy_params_to_fp16(fp16_net, fp32_weights)[source]

Copy updated params from fp32 weight copy to fp16 model.

mmdet.core.fp16.wrap_fp16_model(model)[source]

Wrap the FP32 model to FP16.

  1. Convert FP32 model to FP16.
  2. Keep some necessary layers in FP32, e.g., normalization layers.
Parameters:model (nn.Module) – Model in FP32.
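
A one-line usage sketch on an already constructed detector (e.g. from init_detector above):

>>> from mmdet.core.fp16 import wrap_fp16_model
>>> wrap_fp16_model(model)  # converts the model to fp16 in place, keeping norm layers in fp32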

optimizer

utils

mmdet.core.utils.allreduce_grads(params, coalesce=True, bucket_size_mb=-1)[source]

Allreduce gradients

Parameters:
  • params (list[torch.Parameters]) – List of parameters of a model
  • coalesce (bool, optional) – Whether allreduce parameters as a whole. Defaults to True.
  • bucket_size_mb (int, optional) – Size of bucket, the unit is MB. Defaults to -1.
class mmdet.core.utils.DistOptimizerHook(*args, **kwargs)[source]

Deprecated optimizer hook for distributed training

mmdet.core.utils.tensor2imgs(tensor, mean=(0, 0, 0), std=(1, 1, 1), to_rgb=True)[source]

Convert tensor to images

Parameters:
  • tensor (torch.Tensor) – Tensor that contains multiple images
  • mean (tuple[float], optional) – Mean of images. Defaults to (0, 0, 0).
  • std (tuple[float], optional) – Standard deviation of images. Defaults to (1, 1, 1).
  • to_rgb (bool, optional) – Whether convert the images to RGB format. Defaults to True.
Returns:

A list that contains multiple images.

Return type:

list[np.ndarray]

mmdet.core.utils.multi_apply(func, *args, **kwargs)[source]

Apply function to a list of arguments

Note

This function applies the func to multiple inputs and maps the multiple outputs of the func into different lists. Each list contains the same type of outputs corresponding to different inputs.
Parameters:func (Function) – A function that will be applied to a list of arguments
Returns:
A tuple containing multiple lists, where each list contains a kind of result returned by the function
Return type:tuple(list)
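
A self-contained sketch with a toy function (add_and_mul is made up for illustration):

>>> from mmdet.core.utils import multi_apply
>>> def add_and_mul(a, b):
>>>     return a + b, a * b
>>> multi_apply(add_and_mul, [1, 2], [3, 4])
([4, 6], [3, 8])
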
mmdet.core.utils.unmap(data, count, inds, fill=0)[source]

Unmap a subset of items (data) back to the original set of items (of size count).

mmdet.datasets

datasets

class mmdet.datasets.CustomDataset(ann_file, pipeline, classes=None, data_root=None, img_prefix='', seg_prefix=None, proposal_file=None, test_mode=False, filter_empty_gt=True)[source]

Custom dataset for detection.

The annotation format is shown as follows. The ann field is optional for testing.

[
    {
        'filename': 'a.jpg',
        'width': 1280,
        'height': 720,
        'ann': {
            'bboxes': <np.ndarray> (n, 4),
            'labels': <np.ndarray> (n, ),
            'bboxes_ignore': <np.ndarray> (k, 4), (optional field)
            'labels_ignore': <np.ndarray> (k, ) (optional field)
        }
    },
    ...
]
Parameters:
  • ann_file (str) – Annotation file path.
  • pipeline (list[dict]) – Processing pipeline.
  • classes (str | Sequence[str], optional) – Specify classes to load. If None, cls.CLASSES will be used. Default: None.
  • data_root (str, optional) – Data root for ann_file, img_prefix, seg_prefix, proposal_file if specified.
  • test_mode (bool, optional) – If set True, annotation will not be loaded.
  • filter_empty_gt (bool, optional) – If set true, images without bounding boxes will be filtered out.
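
A hypothetical construction sketch; the annotation file, image prefix and single-step pipeline below are placeholders.

>>> from mmdet.datasets import CustomDataset
>>> pipeline = [dict(type='LoadImageFromFile')]
>>> dataset = CustomDataset(ann_file='annotations.pkl',
>>>                         pipeline=pipeline,
>>>                         img_prefix='images/',
>>>                         test_mode=True)
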
evaluate(results, metric='mAP', logger=None, proposal_nums=(100, 300, 1000), iou_thr=0.5, scale_ranges=None)[source]

Evaluate the dataset.

Parameters:
  • results (list) – Testing results of the dataset.
  • metric (str | list[str]) – Metrics to be evaluated.
  • logger (logging.Logger | None | str) – Logger used for printing related information during evaluation. Default: None.
  • proposal_nums (Sequence[int]) – Proposal number used for evaluating recalls, such as recall@100, recall@1000. Default: (100, 300, 1000).
  • iou_thr (float | list[float]) – IoU threshold. It must be a float when evaluating mAP, and can be a list when evaluating recall. Default: 0.5.
  • scale_ranges (list[tuple] | None) – Scale ranges for evaluating mAP. Default: None.
format_results(results, **kwargs)[source]

Placeholder to format results to dataset-specific output

get_ann_info(idx)[source]

Get annotation by index

Parameters:idx (int) – Index of data.
Returns:Annotation info of specified index.
Return type:dict
get_cat_ids(idx)[source]

Get category ids by index

Parameters:idx (int) – Index of data.
Returns:All categories in the image of specified index.
Return type:list[int]
classmethod get_classes(classes=None)[source]

Get class names of current dataset

Parameters:classes (Sequence[str] | str | None) – If classes is None, use default CLASSES defined by builtin dataset. If classes is a string, take it as a file name. The file contains the name of classes where each line contains one class name. If classes is a tuple or list, override the CLASSES defined by the dataset.
load_annotations(ann_file)[source]

Load annotation from annotation file

load_proposals(proposal_file)[source]

Load proposal from proposal file

pre_pipeline(results)[source]

Prepare results dict for pipeline

prepare_test_img(idx)[source]

Get testing data after pipeline.

Parameters:idx (int) – Index of data.
Returns:
Testing data after pipeline with new keys introduced by pipeline.
Return type:dict
prepare_train_img(idx)[source]

Get training data and annotations after pipeline.

Parameters:idx (int) – Index of data.
Returns:
Training data and annotation after pipeline with new keys
introduced by pipeline.
Return type:dict
class mmdet.datasets.XMLDataset(min_size=None, **kwargs)[source]

XML dataset for detection.

Parameters:min_size (int | float, optional) – The minimum size of bounding boxes in the images. If the size of a bounding box is less than min_size, it will be added to the ignored field.
get_ann_info(idx)[source]

Get annotation from XML file by index.

Parameters:idx (int) – Index of data.
Returns:Annotation info of specified index.
Return type:dict
get_cat_ids(idx)[source]

Get category ids in XML file by index.

Parameters:idx (int) – Index of data.
Returns:All categories in the image of specified index.
Return type:list[int]
get_subset_by_classes()[source]

Filter imgs by user-defined categories

load_annotations(ann_file)[source]

Load annotation from XML style ann_file.

Parameters:ann_file (str) – Path of XML file.
Returns:Annotation info from XML file.
Return type:list[dict]
class mmdet.datasets.CocoDataset(ann_file, pipeline, classes=None, data_root=None, img_prefix='', seg_prefix=None, proposal_file=None, test_mode=False, filter_empty_gt=True)[source]
evaluate(results, metric='bbox', logger=None, jsonfile_prefix=None, classwise=False, proposal_nums=(100, 300, 1000), iou_thrs=array([0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]))[source]

Evaluation in COCO protocol.

Parameters:
  • results (list[list | tuple]) – Testing results of the dataset.
  • metric (str | list[str]) – Metrics to be evaluated. Options are ‘bbox’, ‘segm’, ‘proposal’, ‘proposal_fast’.
  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.
  • jsonfile_prefix (str | None) – The prefix of json files. It includes the file path and the prefix of filename, e.g., “a/b/prefix”. If not specified, a temp file will be created. Default: None.
  • classwise (bool) – Whether to evaluate the AP for each class.
  • proposal_nums (Sequence[int]) – Proposal number used for evaluating recalls, such as recall@100, recall@1000. Default: (100, 300, 1000).
  • iou_thrs (Sequence[float]) – IoU threshold used for evaluating recalls. If set to a list, the average recall of all IoUs will also be computed. Default: 0.5.
Returns:

COCO style evaluation metric.

Return type:

dict[str, float]
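A minimal usage sketch, assuming cfg is a loaded config and results were produced by a test loop such as single_gpu_test; the metric keys shown are typical but depend on the chosen metrics:

from mmdet.datasets import build_dataset

dataset = build_dataset(cfg.data.test)
eval_results = dataset.evaluate(results, metric=['bbox', 'segm'], classwise=True)
print(eval_results.get('bbox_mAP'), eval_results.get('segm_mAP'))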

format_results(results, jsonfile_prefix=None, **kwargs)[source]

Format the results to json (standard format for COCO evaluation).

Parameters:
  • results (list[tuple | numpy.ndarray]) – Testing results of the dataset.
  • jsonfile_prefix (str | None) – The prefix of json files. It includes the file path and the prefix of filename, e.g., “a/b/prefix”. If not specified, a temp file will be created. Default: None.
Returns:

(result_files, tmp_dir), where result_files is a dict containing the json filepaths, and tmp_dir is the temporary directory created for saving json files when jsonfile_prefix is not specified.

Return type:

tuple

get_ann_info(idx)[source]

Get COCO annotation by index.

Parameters:idx (int) – Index of data.
Returns:Annotation info of specified index.
Return type:dict
get_cat_ids(idx)[source]

Get COCO category ids by index.

Parameters:idx (int) – Index of data.
Returns:All categories in the image of specified index.
Return type:list[int]
get_subset_by_classes()[source]

Get img ids that contain any category in class_ids.

Different from coco.getImgIds(), this function returns an image id if the image contains at least one of the given categories, rather than all of them.

Parameters:class_ids (list[int]) – list of category ids
Returns:integer list of img ids
Return type:ids (list[int])
load_annotations(ann_file)[source]

Load annotation from COCO style annotation file.

Parameters:ann_file (str) – Path of annotation file.
Returns:Annotation info from COCO api.
Return type:list[dict]
results2json(results, outfile_prefix)[source]

Dump the detection results to a COCO style json file.

There are 3 types of results: proposals, bbox predictions, mask predictions, and they have different data types. This method will automatically recognize the type, and dump them to json files.

Parameters:
  • results (list[list | tuple | ndarray]) – Testing results of the dataset.
  • outfile_prefix (str) – The filename prefix of the json files. If the prefix is “somepath/xxx”, the json files will be named “somepath/xxx.bbox.json”, “somepath/xxx.segm.json”, “somepath/xxx.proposal.json”.
Returns:

Possible keys are “bbox”, “segm”, “proposal”, and values are the corresponding filenames.

Return type:

dict[str, str]

xyxy2xywh(bbox)[source]

Convert xyxy style bounding boxes to xywh style for COCO evaluation.

Parameters:bbox (numpy.ndarray) – The bounding boxes, shape (4, ), in xyxy order.
Returns:The converted bounding boxes, in xywh order.
Return type:list[float]
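An illustrative conversion (the numbers are made up): given the definition above, a box [x1, y1, x2, y2] becomes [x, y, w, h].

import numpy as np

bbox_xyxy = np.array([10., 20., 50., 80.])
# dataset.xyxy2xywh(bbox_xyxy) would yield [10.0, 20.0, 40.0, 60.0],
# i.e. the top-left corner plus width and height.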
class mmdet.datasets.DeepFashionDataset(ann_file, pipeline, classes=None, data_root=None, img_prefix='', seg_prefix=None, proposal_file=None, test_mode=False, filter_empty_gt=True)[source]
class mmdet.datasets.VOCDataset(**kwargs)[source]
evaluate(results, metric='mAP', logger=None, proposal_nums=(100, 300, 1000), iou_thr=0.5, scale_ranges=None)[source]

Evaluate in VOC protocol.

Parameters:
  • results (list[list | tuple]) – Testing results of the dataset.
  • metric (str | list[str]) – Metrics to be evaluated. Options are ‘mAP’, ‘recall’.
  • logger (logging.Logger | str, optional) – Logger used for printing related information during evaluation. Default: None.
  • proposal_nums (Sequence[int]) – Proposal number used for evaluating recalls, such as recall@100, recall@1000. Default: (100, 300, 1000).
  • iou_thr (float | list[float]) – IoU threshold. It must be a float when evaluating mAP, and can be a list when evaluating recall. Default: 0.5.
  • scale_ranges (list[tuple], optional) – Scale ranges for evaluating mAP. If not specified, all bounding boxes would be included in evaluation. Default: None.
Returns:

AP/recall metrics.

Return type:

dict[str, float]

class mmdet.datasets.CityscapesDataset(ann_file, pipeline, classes=None, data_root=None, img_prefix='', seg_prefix=None, proposal_file=None, test_mode=False, filter_empty_gt=True)[source]
evaluate(results, metric='bbox', logger=None, outfile_prefix=None, classwise=False, proposal_nums=(100, 300, 1000), iou_thrs=array([0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]))[source]

Evaluation in Cityscapes/COCO protocol.

Parameters:
  • results (list[list | tuple]) – Testing results of the dataset.
  • metric (str | list[str]) – Metrics to be evaluated. Options are ‘bbox’, ‘segm’, ‘proposal’, ‘proposal_fast’.
  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.
  • outfile_prefix (str | None) – The prefix of output file. It includes the file path and the prefix of filename, e.g., “a/b/prefix”. If results are evaluated with COCO protocol, it would be the prefix of output json file. For example, the metric is ‘bbox’ and ‘segm’, then json files would be “a/b/prefix.bbox.json” and “a/b/prefix.segm.json”. If results are evaluated with cityscapes protocol, it would be the prefix of output txt/png files. The output files would be png images under folder “a/b/prefix/xxx/” and the file name of images would be written into a txt file “a/b/prefix/xxx_pred.txt”, where “xxx” is the video name of cityscapes. If not specified, a temp file will be created. Default: None.
  • classwise (bool) – Whether to evaluate the AP for each class.
  • proposal_nums (Sequence[int]) – Proposal number used for evaluating recalls, such as recall@100, recall@1000. Default: (100, 300, 1000).
  • iou_thrs (Sequence[float]) – IoU thresholds used for evaluation. If set to a list, the average over all IoUs will also be computed. Default: 0.5:0.95 with a step of 0.05, as shown in the signature.
Returns:

COCO style evaluation metric or cityscapes mAP and AP@50.

Return type:

dict[str, float]

format_results(results, txtfile_prefix=None)[source]

Format the results to txt (standard format for Cityscapes evaluation).

Parameters:
  • results (list) – Testing results of the dataset.
  • txtfile_prefix (str | None) – The prefix of txt files. It includes the file path and the prefix of filename, e.g., “a/b/prefix”. If not specified, a temp file will be created. Default: None.
Returns:

(result_files, tmp_dir), where result_files is a dict containing the result filepaths, and tmp_dir is the temporary directory created for saving txt/png files when txtfile_prefix is not specified.

Return type:

tuple

results2txt(results, outfile_prefix)[source]

Dump the detection results to a txt file.

Parameters:
  • results (list[list | tuple]) – Testing results of the dataset.
  • outfile_prefix (str) – The filename prefix of the txt files. If the prefix is “somepath/xxx”, the txt files will be named “somepath/xxx.txt”.
Returns:

Result txt files which contain the corresponding instance segmentation images.

Return type:

list[str]

class mmdet.datasets.LVISDataset(ann_file, pipeline, classes=None, data_root=None, img_prefix='', seg_prefix=None, proposal_file=None, test_mode=False, filter_empty_gt=True)[source]
evaluate(results, metric='bbox', logger=None, jsonfile_prefix=None, classwise=False, proposal_nums=(100, 300, 1000), iou_thrs=array([0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]))[source]

Evaluation in LVIS protocol.

Parameters:
  • results (list[list | tuple]) – Testing results of the dataset.
  • metric (str | list[str]) – Metrics to be evaluated. Options are ‘bbox’, ‘segm’, ‘proposal’, ‘proposal_fast’.
  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.
  • jsonfile_prefix (str | None) – The prefix of json files. It includes the file path and the prefix of filename, e.g., “a/b/prefix”. If not specified, a temp file will be created. Default: None.
  • classwise (bool) – Whether to evaluate the AP for each class.
  • proposal_nums (Sequence[int]) – Proposal number used for evaluating recalls, such as recall@100, recall@1000. Default: (100, 300, 1000).
  • iou_thrs (Sequence[float]) – IoU thresholds used for evaluation. If set to a list, the average over all IoUs will also be computed. Default: 0.5:0.95 with a step of 0.05, as shown in the signature.
Returns:

LVIS style metrics.

Return type:

dict[str, float]

load_annotations(ann_file)[source]

Load annotation from LVIS style annotation file.

Parameters:ann_file (str) – Path of annotation file.
Returns:Annotation info from LVIS api.
Return type:list[dict]
class mmdet.datasets.GroupSampler(dataset, samples_per_gpu=1)[source]
class mmdet.datasets.DistributedGroupSampler(dataset, samples_per_gpu=1, num_replicas=None, rank=None)[source]

Sampler that restricts data loading to a subset of the dataset.

It is especially useful in conjunction with torch.nn.parallel.DistributedDataParallel. In such a case, each process can pass a DistributedSampler instance as a DataLoader sampler, and load a subset of the original dataset that is exclusive to it.

Note

Dataset is assumed to be of constant size.

Parameters:
  • dataset – Dataset used for sampling.
  • num_replicas (optional) – Number of processes participating in distributed training.
  • rank (optional) – Rank of the current process within num_replicas.
class mmdet.datasets.DistributedSampler(dataset, num_replicas=None, rank=None, shuffle=True)[source]
mmdet.datasets.build_dataloader(dataset, samples_per_gpu, workers_per_gpu, num_gpus=1, dist=True, shuffle=True, seed=None, **kwargs)[source]

Build PyTorch DataLoader.

In distributed training, each GPU/process has a dataloader. In non-distributed training, there is only one dataloader for all GPUs.

Parameters:
  • dataset (Dataset) – A PyTorch dataset.
  • samples_per_gpu (int) – Number of training samples on each GPU, i.e., batch size of each GPU.
  • workers_per_gpu (int) – How many subprocesses to use for data loading for each GPU.
  • num_gpus (int) – Number of GPUs. Only used in non-distributed training.
  • dist (bool) – Distributed training/test or not. Default: True.
  • shuffle (bool) – Whether to shuffle the data at every epoch. Default: True.
  • kwargs – any keyword argument to be used to initialize DataLoader
Returns:

A PyTorch dataloader.

Return type:

DataLoader
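A minimal sketch of non-distributed usage; the dataset is assumed to have been built beforehand with build_dataset:

from mmdet.datasets import build_dataloader

data_loader = build_dataloader(
    dataset,
    samples_per_gpu=2,
    workers_per_gpu=2,
    num_gpus=1,
    dist=False,
    shuffle=True,
    seed=0)
for data_batch in data_loader:
    pass  # each batch is a dict of DataContainers, e.g. 'img', 'img_metas', 'gt_bboxes'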

class mmdet.datasets.ConcatDataset(datasets)[source]

A wrapper of concatenated dataset.

Same as torch.utils.data.dataset.ConcatDataset, but it also concatenates the group flag for image aspect ratio.

Parameters:datasets (list[Dataset]) – A list of datasets.
get_cat_ids(idx)[source]

Get category ids of concatenated dataset by index

Parameters:idx (int) – Index of data.
Returns:All categories in the image of specified index.
Return type:list[int]
class mmdet.datasets.RepeatDataset(dataset, times)[source]

A wrapper of repeated dataset.

The length of the repeated dataset will be times larger than the original dataset. This is useful when the data loading time is long but the dataset is small. Using RepeatDataset can reduce the data loading time between epochs. A config sketch follows the parameter list.

Parameters:
  • dataset (Dataset) – The dataset to be repeated.
  • times (int) – Repeat times.
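A hypothetical config snippet that repeats a small training set 10 times per epoch; the paths and the train_pipeline variable are placeholders:

data = dict(
    train=dict(
        type='RepeatDataset',
        times=10,
        dataset=dict(
            type='CocoDataset',
            ann_file='data/annotations/train.json',
            img_prefix='data/images/',
            pipeline=train_pipeline)))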
get_cat_ids(idx)[source]

Get category ids of repeat dataset by index

Parameters:idx (int) – Index of data.
Returns:All categories in the image of specified index.
Return type:list[int]
class mmdet.datasets.ClassBalancedDataset(dataset, oversample_thr)[source]

A wrapper of repeated dataset with repeat factor.

Suitable for training on class imbalanced datasets like LVIS. Following the sampling strategy in [1], in each epoch an image may appear multiple times based on its “repeat factor”. The repeat factor for an image is a function of the frequency of the rarest category labeled in that image. The “frequency of category c” in [0, 1] is defined as the fraction of images in the training set (without repeats) in which category c appears. The dataset needs to implement self.get_cat_ids(idx) to support ClassBalancedDataset. The repeat factor is computed as follows (an illustrative computation is given after the parameter list):

  1. For each category c, compute the fraction of images that contain it: f(c).
  2. For each category c, compute the category-level repeat factor: r(c) = max(1, sqrt(t/f(c))).
  3. For each image I, compute the image-level repeat factor: r(I) = max_{c in I} r(c).

References

[1] https://arxiv.org/pdf/1903.00621v2.pdf
Parameters:
  • dataset (CustomDataset) – The dataset to be repeated.
  • oversample_thr (float) – frequency threshold below which data is repeated. For categories with f_c >= oversample_thr, there is no oversampling. For categories with f_c < oversample_thr, the degree of oversampling follows the square-root inverse frequency heuristic above.
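An illustrative computation of the category-level repeat factor described above, with made-up frequencies and t = oversample_thr:

import math

t = 0.001  # oversample_thr
category_freq = {'person': 0.6, 'hair_drier': 0.0004}  # f(c), illustrative values
repeat_factor = {c: max(1.0, math.sqrt(t / f)) for c, f in category_freq.items()}
# person -> 1.0 (frequent, not oversampled); hair_drier -> ~1.58 (oversampled)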
class mmdet.datasets.WIDERFaceDataset(**kwargs)[source]

Reader for the WIDER Face dataset in PASCAL VOC format. Conversion scripts can be found in https://github.com/sovrasov/wider-face-pascal-voc-annotations

load_annotations(ann_file)[source]

Load annotation from WIDERFace XML style annotation file.

Parameters:ann_file (str) – Path of XML file.
Returns:Annotation info from XML file.
Return type:list[dict]

pipelines

class mmdet.datasets.pipelines.Compose(transforms)[source]

Compose multiple transforms sequentially.

Parameters:transforms (Sequence[dict | callable]) – Sequence of transform object or config dict to be composed.
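A minimal sketch of composing transforms from config dicts; the image path is a placeholder and the file is assumed to exist:

from mmdet.datasets.pipelines import Compose

pipeline = Compose([
    dict(type='LoadImageFromFile'),
    dict(type='ImageToTensor', keys=['img']),
])
results = dict(img_prefix='data/images/', img_info=dict(filename='a.jpg'))
results = pipeline(results)  # each transform updates the results dict in turn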
mmdet.datasets.pipelines.to_tensor(data)[source]

Convert objects of various python types to torch.Tensor.

Supported types are: numpy.ndarray, torch.Tensor, Sequence, int and float.

Parameters:data (torch.Tensor | numpy.ndarray | Sequence | int | float) – Data to be converted.
class mmdet.datasets.pipelines.ToTensor(keys)[source]

Convert some results to torch.Tensor by given keys.

Parameters:keys (Sequence[str]) – Keys that need to be converted to Tensor.
class mmdet.datasets.pipelines.ImageToTensor(keys)[source]

Convert image to torch.Tensor by given keys.

The dimension order of the input image is (H, W, C). The pipeline will convert it to (C, H, W). If only 2 dimensions (H, W) are given, the output will be (1, H, W).

Parameters:keys (Sequence[str]) – Key of images to be converted to Tensor.
class mmdet.datasets.pipelines.ToDataContainer(fields=({'key': 'img', 'stack': True}, {'key': 'gt_bboxes'}, {'key': 'gt_labels'}))[source]

Convert results to mmcv.DataContainer by given fields.

Parameters:fields (Sequence[dict]) – Each field is a dict like dict(key='xxx', **kwargs). The key in result will be converted to mmcv.DataContainer with **kwargs. Default: (dict(key='img', stack=True), dict(key='gt_bboxes'), dict(key='gt_labels')).
class mmdet.datasets.pipelines.Transpose(keys, order)[source]

Transpose some results by given keys.

Parameters:
  • keys (Sequence[str]) – Keys of results to be transposed.
  • order (Sequence[int]) – Order of transpose.
class mmdet.datasets.pipelines.Collect(keys, meta_keys=('filename', 'ori_filename', 'ori_shape', 'img_shape', 'pad_shape', 'scale_factor', 'flip', 'flip_direction', 'img_norm_cfg'))[source]

Collect data from the loader relevant to the specific task.

This is usually the last stage of the data loader pipeline. Typically keys is set to some subset of “img”, “proposals”, “gt_bboxes”, “gt_bboxes_ignore”, “gt_labels”, and/or “gt_masks”.

The “img_meta” item is always populated. The contents of the “img_meta” dictionary depend on “meta_keys”. By default this includes:

  • “img_shape”: shape of the image input to the network as a tuple
    (h, w, c). Note that images may be zero padded on the bottom/right if the batch tensor is larger than this shape.
  • “scale_factor”: a float indicating the preprocessing scale
  • “flip”: a boolean indicating if image flip transform was used
  • “filename”: path to the image file
  • “ori_shape”: original shape of the image as a tuple (h, w, c)
  • “pad_shape”: image shape after padding
  • “img_norm_cfg”: a dict of normalization information:
    • mean - per channel mean subtraction
    • std - per channel std divisor
    • to_rgb - bool indicating if bgr was converted to rgb
Parameters:
  • keys (Sequence[str]) – Keys of results to be collected in data.
  • meta_keys (Sequence[str], optional) – Meta keys to be converted to mmcv.DataContainer and collected in data[img_metas]. Default: ('filename', 'ori_filename', 'ori_shape', 'img_shape', 'pad_shape', 'scale_factor', 'flip', 'flip_direction', 'img_norm_cfg')
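As a sketch, Collect is typically the last step of a training pipeline; the keys below are the usual choice but are only illustrative:

dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])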
class mmdet.datasets.pipelines.LoadAnnotations(with_bbox=True, with_label=True, with_mask=False, with_seg=False, poly2mask=True, file_client_args={'backend': 'disk'})[source]

Load multiple types of annotations.

Parameters:
  • with_bbox (bool) – Whether to parse and load the bbox annotation. Default: True.
  • with_label (bool) – Whether to parse and load the label annotation. Default: True.
  • with_mask (bool) – Whether to parse and load the mask annotation. Default: False.
  • with_seg (bool) – Whether to parse and load the semantic segmentation annotation. Default: False.
  • poly2mask (bool) – Whether to convert the instance masks from polygons to bitmaps. Default: True.
  • file_client_args (dict) – Arguments to instantiate a FileClient. See mmcv.fileio.FileClient for details. Defaults to dict(backend='disk').
process_polygons(polygons)[source]

Convert polygons to list of ndarray and filter invalid polygons.

Parameters:polygons (list[list]) – Polygons of one instance.
Returns:Processed polygons.
Return type:list[numpy.ndarray]
class mmdet.datasets.pipelines.LoadImageFromFile(to_float32=False, color_type='color', file_client_args={'backend': 'disk'})[source]

Load an image from file.

Required keys are “img_prefix” and “img_info” (a dict that must contain the key “filename”). Added or updated keys are “filename”, “img”, “img_shape”, “ori_shape” (same as img_shape), “pad_shape” (same as img_shape), “scale_factor” (1.0) and “img_norm_cfg” (means=0 and stds=1).

Parameters:
  • to_float32 (bool) – Whether to convert the loaded image to a float32 numpy array. If set to False, the loaded image is a uint8 array. Defaults to False.
  • color_type (str) – The flag argument for mmcv.imfrombytes(). Defaults to ‘color’.
  • file_client_args (dict) – Arguments to instantiate a FileClient. See mmcv.fileio.FileClient for details. Defaults to dict(backend='disk').
class mmdet.datasets.pipelines.LoadMultiChannelImageFromFiles(to_float32=False, color_type='unchanged', file_client_args={'backend': 'disk'})[source]

Load multi-channel images from a list of separate channel files.

Required keys are “img_prefix” and “img_info” (a dict that must contain the key “filename”, which is expected to be a list of filenames). Added or updated keys are “filename”, “img”, “img_shape”, “ori_shape” (same as img_shape), “pad_shape” (same as img_shape), “scale_factor” (1.0) and “img_norm_cfg” (means=0 and stds=1).

Parameters:
  • to_float32 (bool) – Whether to convert the loaded image to a float32 numpy array. If set to False, the loaded image is a uint8 array. Defaults to False.
  • color_type (str) – The flag argument for mmcv.imfrombytes(). Defaults to ‘color’.
  • file_client_args (dict) – Arguments to instantiate a FileClient. See mmcv.fileio.FileClient for details. Defaults to dict(backend='disk').
class mmdet.datasets.pipelines.LoadProposals(num_max_proposals=None)[source]

Load proposal pipeline.

Required key is “proposals”. Updated keys are “proposals”, “bbox_fields”.

Parameters:num_max_proposals (int, optional) – Maximum number of proposals to load. If not specified, all proposals will be loaded.
class mmdet.datasets.pipelines.MultiScaleFlipAug(transforms, img_scale=None, scale_factor=None, flip=False, flip_direction='horizontal')[source]

Test-time augmentation with multiple scales and flipping

An example configuration is as follows:
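(A sketch; the scales are illustrative and img_norm_cfg is assumed to be defined elsewhere in the config.)

dict(
    type='MultiScaleFlipAug',
    img_scale=[(1333, 400), (1333, 800)],
    flip=True,
    transforms=[
        dict(type='Resize', keep_ratio=True),
        dict(type='RandomFlip'),
        dict(type='Normalize', **img_norm_cfg),
        dict(type='Pad', size_divisor=32),
        dict(type='ImageToTensor', keys=['img']),
        dict(type='Collect', keys=['img']),
    ])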

After MultiScaleFlipAug with the above configuration, the results are wrapped into lists of the same length, as follows:
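(A structural sketch of the wrapped results, not the exact field contents.)

dict(
    img=[...],          # one entry per scale/flip combination (4 here)
    img_shape=[...],
    scale=[(1333, 400), (1333, 400), (1333, 800), (1333, 800)],
    flip=[False, True, False, True],
    ...
)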

Parameters:
  • transforms (list[dict]) – Transforms to apply in each augmentation.
  • img_scale (tuple | list[tuple] | None) – Images scales for resizing.
  • scale_factor (float | list[float] | None) – Scale factors for resizing.
  • flip (bool) – Whether apply flip augmentation. Default: False.
  • flip_direction (str | list[str]) – Flip augmentation directions, options are “horizontal” and “vertical”. If flip_direction is list, multiple flip augmentations will be applied. It has no effect when flip == False. Default: “horizontal”.
class mmdet.datasets.pipelines.Resize(img_scale=None, multiscale_mode='range', ratio_range=None, keep_ratio=True)[source]

Resize images & bbox & mask.

This transform resizes the input image to some scale. Bboxes and masks are then resized with the same scale factor. If the input dict contains the key “scale”, then the scale in the input dict is used, otherwise the specified scale in the init method is used. If the input dict contains the key “scale_factor” (if MultiScaleFlipAug does not give img_scale but scale_factor), the actual scale will be computed by image shape and scale_factor.

img_scale can either be a tuple (single-scale) or a list of tuples (multi-scale). There are 3 multiscale modes (config sketches follow the list below):

  • ratio_range is not None: randomly sample a ratio from the ratio range and multiply it with the image scale.
  • ratio_range is None and multiscale_mode == "range": randomly sample a scale from the multiscale range.
  • ratio_range is None and multiscale_mode == "value": randomly sample a scale from multiple scales.
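Two hedged config sketches, one per multiscale mode; the scales are illustrative:

# multiscale_mode='range': sample a scale between the two endpoints
dict(type='Resize', img_scale=[(1333, 640), (1333, 800)],
     multiscale_mode='range', keep_ratio=True)

# multiscale_mode='value': pick one of the listed scales
dict(type='Resize', img_scale=[(1333, 640), (1333, 672), (1333, 704)],
     multiscale_mode='value', keep_ratio=True)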
Parameters:
  • img_scale (tuple or list[tuple]) – Images scales for resizing.
  • multiscale_mode (str) – Either “range” or “value”.
  • ratio_range (tuple[float]) – (min_ratio, max_ratio)
  • keep_ratio (bool) – Whether to keep the aspect ratio when resizing the image.
static random_sample(img_scales)[source]

Randomly sample an img_scale when multiscale_mode=='range'.

Parameters:img_scales (list[tuple]) – Images scale range for sampling. There must be two tuples in img_scales, which specify the lower and upper bound of image scales.
Returns:
Returns a tuple (img_scale, None), where img_scale is the sampled scale and None is just a placeholder to be consistent with random_select().
Return type:(tuple, None)
static random_sample_ratio(img_scale, ratio_range)[source]

Randomly sample an img_scale when ratio_range is specified.

A ratio will be randomly sampled from the range specified by ratio_range. Then it would be multiplied with img_scale to generate sampled scale.

Parameters:
  • img_scale (tuple) – Images scale base to multiply with ratio.
  • ratio_range (tuple[float]) – The minimum and maximum ratio to scale the img_scale.
Returns:

Returns a tuple (scale, None), where scale is the sampled ratio multiplied with img_scale and None is just a placeholder to be consistent with random_select().

Return type:

(tuple, None)

static random_select(img_scales)[source]

Randomly select an img_scale from given candidates.

Parameters:img_scales (list[tuple]) – Images scales for selection.
Returns:
Returns a tuple (img_scale, scale_idx), where img_scale is the selected image scale and scale_idx is the selected index in the given candidates.
Return type:(tuple, int)
class mmdet.datasets.pipelines.RandomFlip(flip_ratio=None, direction='horizontal')[source]

Flip the image & bbox & mask.

If the input dict contains the key “flip”, then the flag will be used, otherwise it will be randomly decided by a ratio specified in the init method.

Parameters:
  • flip_ratio (float, optional) – The flipping probability. Default: None.
  • direction (str, optional) – The flipping direction. Options are ‘horizontal’ and ‘vertical’. Default: ‘horizontal’.
bbox_flip(bboxes, img_shape, direction)[source]

Flip bboxes horizontally.

Parameters:
  • bboxes (numpy.ndarray) – Bounding boxes, shape (…, 4*k)
  • img_shape (tuple[int]) – Image shape (height, width)
  • direction (str) – Flip direction. Options are ‘horizontal’, ‘vertical’.
Returns:

Flipped bounding boxes.

Return type:

numpy.ndarray

class mmdet.datasets.pipelines.Pad(size=None, size_divisor=None, pad_val=0)[source]

Pad the image & mask.

There are two padding modes: (1) pad to a fixed size and (2) pad to the minimum size that is divisible by some number. Added keys are “pad_shape”, “pad_fixed_size” and “pad_size_divisor”.

Parameters:
  • size (tuple, optional) – Fixed padding size.
  • size_divisor (int, optional) – The divisor of padded size.
  • pad_val (float, optional) – Padding value, 0 by default.
class mmdet.datasets.pipelines.RandomCrop(crop_size)[source]

Random crop the image & bboxes & masks.

Parameters:crop_size (tuple) – Expected size after cropping, (h, w).

Notes

  • If the image is smaller than the crop size, return the original image
  • The keys for bboxes, labels and masks must be aligned. That is, gt_bboxes corresponds to gt_labels and gt_masks, and gt_bboxes_ignore corresponds to gt_labels_ignore and gt_masks_ignore.
  • If there are gt bboxes in an image and the cropping area does not have intersection with any gt bbox, this image is skipped.
class mmdet.datasets.pipelines.Normalize(mean, std, to_rgb=True)[source]

Normalize the image.

Added key is “img_norm_cfg”.

Parameters:
  • mean (sequence) – Mean values of 3 channels.
  • std (sequence) – Std values of 3 channels.
  • to_rgb (bool) – Whether to convert the image from BGR to RGB, default is true.
class mmdet.datasets.pipelines.SegRescale(scale_factor=1)[source]

Rescale semantic segmentation maps.

Parameters:scale_factor (float) – The scale factor of the final output.
class mmdet.datasets.pipelines.MinIoURandomCrop(min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), min_crop_size=0.3)[source]

Random crop the image & bboxes; the cropped patches have a minimum IoU requirement with the original image & bboxes, and the IoU threshold is randomly selected from min_ious.

Parameters:
  • min_ious (tuple) – minimum IoU threshold for all intersections with bounding boxes.
  • min_crop_size (float) – minimum crop size (i.e. h,w := a*h, a*w, where a >= min_crop_size).

Notes

The keys for bboxes, labels and masks should be paired. That is, gt_bboxes corresponds to gt_labels and gt_masks, and gt_bboxes_ignore to gt_labels_ignore and gt_masks_ignore.

class mmdet.datasets.pipelines.Expand(mean=(0, 0, 0), to_rgb=True, ratio_range=(1, 4), seg_ignore_label=None, prob=0.5)[source]

Random expand the image & bboxes.

Randomly place the original image on a canvas of ‘ratio’ x original image size filled with mean values. The ratio is in the range of ratio_range.

Parameters:
  • mean (tuple) – mean value of dataset.
  • to_rgb (bool) – whether to convert the order of mean values to align with RGB.
  • ratio_range (tuple) – range of expand ratio.
  • prob (float) – probability of applying this transformation
class mmdet.datasets.pipelines.PhotoMetricDistortion(brightness_delta=32, contrast_range=(0.5, 1.5), saturation_range=(0.5, 1.5), hue_delta=18)[source]

Apply photometric distortion to an image sequentially; every transformation is applied with a probability of 0.5. Random contrast is applied either second or second to last.

  1. random brightness
  2. random contrast (mode 0)
  3. convert color from BGR to HSV
  4. random saturation
  5. random hue
  6. convert color from HSV to BGR
  7. random contrast (mode 1)
  8. randomly swap channels
Parameters:
  • brightness_delta (int) – delta of brightness.
  • contrast_range (tuple) – range of contrast.
  • saturation_range (tuple) – range of saturation.
  • hue_delta (int) – delta of hue.
class mmdet.datasets.pipelines.Albu(transforms, bbox_params=None, keymap=None, update_pad_shape=False, skip_img_without_anno=False)[source]

Albumentation augmentation.

Adds custom transformations from the Albumentations library. Please visit https://albumentations.readthedocs.io for more information.

An example of transforms is as follows:
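(A sketch using a few common Albumentations transforms; the parameters are illustrative.)

[
    dict(type='ShiftScaleRotate', shift_limit=0.0625, scale_limit=0.0,
         rotate_limit=0, interpolation=1, p=0.5),
    dict(type='RandomBrightnessContrast', brightness_limit=[0.1, 0.3],
         contrast_limit=[0.1, 0.3], p=0.2),
    dict(type='ChannelShuffle', p=0.1),
    dict(type='OneOf',
         transforms=[
             dict(type='Blur', blur_limit=3, p=1.0),
             dict(type='MedianBlur', blur_limit=3, p=1.0)
         ],
         p=0.1),
]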

Parameters:
  • transforms (list[dict]) – A list of albu transformations
  • bbox_params (dict) – Bbox_params for albumentation Compose
  • keymap (dict) – Contains {‘input key’:’albumentation-style key’}
  • skip_img_without_anno (bool) – Whether to skip the image if no ann left after aug
albu_builder(cfg)[source]

Import a module from albumentations. Inherits some of build_from_cfg logic.

Parameters:cfg (dict) – Config dict. It should at least contain the key “type”.
Returns:The constructed object.
Return type:obj
static mapper(d, keymap)[source]

Dictionary mapper. Renames keys according to keymap provided.

Parameters:
  • d (dict) – old dict
  • keymap (dict) – {‘old_key’:’new_key’}
Returns:

new dict.

Return type:

dict

class mmdet.datasets.pipelines.InstaBoost(action_candidate=('normal', 'horizontal', 'skip'), action_prob=(1, 0, 0), scale=(0.8, 1.2), dx=15, dy=15, theta=(-1, 1), color_prob=0.5, hflag=False, aug_ratio=0.5)[source]

Data augmentation method from the paper “InstaBoost: Boosting Instance Segmentation Via Probability Map Guided Copy-Pasting”. Implementation details can be found at https://github.com/GothicAi/Instaboost.

class mmdet.datasets.pipelines.RandomCenterCropPad(crop_size=None, ratios=(0.9, 1.0, 1.1), border=128, mean=None, std=None, to_rgb=None, test_mode=False, test_pad_mode=('logical_or', 127))[source]

Random center crop and random around padding for CornerNet.

This operation generates a randomly cropped image from the original image and pads it simultaneously. Different from RandomCrop, the output shape may not exactly equal crop_size: a random value is chosen from ratios, so the output shape could be larger or smaller than crop_size. The padding in this operation is also different from Pad, since we use around padding instead of right-bottom padding.

The relation between output image (padding image) and original image:

(An ASCII diagram in the source docstring illustrates the output (padding) image, the original image, the cropped area, the padded area and the center range; the five areas are described below.)
There are 5 main areas in the figure:
  • output image: output image of this operation, also called padding image in the following instructions.
  • original image: input image of this operation.
  • padded area: non-intersecting area of the output image and the original image.
  • cropped area: the overlap of the output image and the original image.
  • center range: a smaller area from which the random center is chosen. The center range is computed from border and the original image's shape, so that the random center is not too close to the original image's border.

This operation also acts differently in train and test mode; the pipelines are summarized below.

Train pipeline:
  1. Choose a random_ratio from ratios; the shape of the padding image will be random_ratio * crop_size.
  2. Choose a random_center in the center range.
  3. Generate a padding image whose center matches the random_center.
  4. Initialize the padding image with pixel values equal to mean.
  5. Copy the cropped area to the padding image.
  6. Refine annotations.
Test pipeline:
  1. Compute the output shape according to test_pad_mode.
  2. Generate a padding image whose center matches the original image center.
  3. Initialize the padding image with pixel values equal to mean.
  4. Copy the cropped area to the padding image.
Parameters:
  • crop_size (tuple | None) – expected size after crop; the final size will be computed according to the ratio. Requires (h, w) in train mode, and None in test mode.
  • ratios (tuple) – randomly select a ratio from the tuple and crop the image to (crop_size[0] * ratio) * (crop_size[1] * ratio). Only available in train mode.
  • border (int) – max distance from the center-select area to the image border. Only available in train mode.
  • mean (sequence) – Mean values of 3 channels.
  • std (sequence) – Std values of 3 channels.
  • to_rgb (bool) – Whether to convert the image from BGR to RGB.
  • test_mode (bool) – whether to involve random variables in the transform. In train mode, crop_size is fixed, and the center coords and ratio are randomly selected from predefined lists. In test mode, crop_size is the image's original shape, and the center coords and ratio are fixed.
  • test_pad_mode (tuple) –

    padding method and padding shape value, only available in test mode. Default is using ‘logical_or’ with 127 as padding shape value.

    • ’logical_or’: final_shape = input_shape | padding_shape_value
    • ’size_divisor’: final_shape = int(ceil(input_shape / padding_shape_value) * padding_shape_value)
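An illustrative computation for the default ('logical_or', 127); the input side lengths are made up:

# 'logical_or' padding: final side = input side bitwise-OR padding value
final_h = 500 | 127   # = 511
final_w = 640 | 127   # = 767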
class mmdet.datasets.pipelines.AutoAugment(policies)[source]

Auto augmentation.

This data augmentation is proposed in Learning Data Augmentation Strategies for Object Detection.

Parameters:policies (list[list[dict]]) – The policies of auto augmentation. Each policy in policies is a specific augmentation policy, and is composed by several augmentations (dict). When AutoAugment is called, a random policy in policies will be selected to augment images.

Examples

>>> replace = (104, 116, 124)
>>> policies = [
>>>     [
>>>         dict(type='Sharpness', prob=0.0, level=8),
>>>         dict(
>>>             type='Shear',
>>>             prob=0.4,
>>>             level=0,
>>>             replace=replace,
>>>             axis='x')
>>>     ],
>>>     [
>>>         dict(
>>>             type='Rotate',
>>>             prob=0.6,
>>>             level=10,
>>>             replace=replace),
>>>         dict(type='Color', prob=1.0, level=6)
>>>     ]
>>> ]
>>> augmentation = AutoAugment(policies)
>>> import numpy as np
>>> img = np.ones((100, 100, 3))
>>> gt_bboxes = np.ones((10, 4))
>>> results = dict(img=img, gt_bboxes=gt_bboxes)
>>> results = augmentation(results)

mmdet.models

detectors

class mmdet.models.detectors.ATSS(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]
class mmdet.models.detectors.BaseDetector[source]

Base class for detectors

aug_test(imgs, img_metas, **kwargs)[source]

Test function with test time augmentation

extract_feat(imgs)[source]

Extract features from images

extract_feats(imgs)[source]

Extract features from multiple images

Parameters:imgs (list[torch.Tensor]) – A list of images. The images are augmented from the same image but in different ways.
Returns:Features of different images
Return type:list[torch.Tensor]
forward(img, img_metas, return_loss=True, **kwargs)[source]

Calls either forward_train or forward_test depending on whether return_loss=True. Note this setting will change the expected inputs. When return_loss=True, img and img_meta are single-nested (i.e. Tensor and List[dict]), and when return_loss=False, img and img_meta should be double nested (i.e. List[Tensor], List[List[dict]]), with the outer list indicating test time augmentations.

forward_test(imgs, img_metas, **kwargs)[source]
Parameters:
  • imgs (List[Tensor]) – the outer list indicates test-time augmentations and inner Tensor should have a shape NxCxHxW, which contains all images in the batch.
  • img_metas (List[List[dict]]) – the outer list indicates test-time augs (multiscale, flip, etc.) and the inner list indicates images in a batch.
forward_train(imgs, img_metas, **kwargs)[source]
Parameters:
  • img (list[Tensor]) – List of tensors of shape (1, C, H, W). Typically these should be mean centered and std scaled.
  • img_metas (list[dict]) – List of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys, see mmdet.datasets.pipelines.Collect.
  • kwargs (keyword arguments) – Specific to concrete implementation.
init_weights(pretrained=None)[source]

Initialize the weights in detector

Parameters:pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
show_result(img, result, score_thr=0.3, bbox_color='green', text_color='green', thickness=1, font_scale=0.5, win_name='', show=False, wait_time=0, out_file=None)[source]

Draw result over img.

Parameters:
  • img (str or Tensor) – The image to be displayed.
  • result (Tensor or tuple) – The results to draw over img bbox_result or (bbox_result, segm_result).
  • score_thr (float, optional) – Minimum score of bboxes to be shown. Default: 0.3.
  • bbox_color (str or tuple or Color) – Color of bbox lines.
  • text_color (str or tuple or Color) – Color of texts.
  • thickness (int) – Thickness of lines.
  • font_scale (float) – Font scales of texts.
  • win_name (str) – The window name.
  • wait_time (int) – Value of waitKey param. Default: 0.
  • show (bool) – Whether to show the image. Default: False.
  • out_file (str or None) – The filename to write the image. Default: None.
Returns:

The image with results drawn on it. Only returned when show is False and out_file is not specified.

Return type:

img (Tensor)

train_step(data, optimizer)[source]

The iteration step during training.

This method defines an iteration step during training, except for the back propagation and optimizer updating, which are done in an optimizer hook. Note that in some complicated cases or models, the whole process including back propagation and optimizer updating is also defined in this method, such as GAN.

Parameters:
  • data (dict) – The output of dataloader.
  • optimizer (torch.optim.Optimizer | dict) – The optimizer of runner is passed to train_step(). This argument is unused and reserved.
Returns:

It should contain at least 3 keys: loss, log_vars, num_samples. loss is a tensor for back propagation, which can be a weighted sum of multiple losses. log_vars contains all the variables to be sent to the logger. num_samples indicates the batch size (when the model is DDP, it means the batch size on each GPU), which is used for averaging the logs.

Return type:

dict
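A self-contained sketch of the kind of dict a detector is expected to return; the loss names and values are illustrative, not real model outputs:

import torch

loss_cls, loss_bbox = torch.tensor(0.31), torch.tensor(0.18)   # illustrative loss values
outputs = dict(
    loss=loss_cls + loss_bbox,                                  # scalar tensor used for backprop
    log_vars=dict(loss_cls=0.31, loss_bbox=0.18, loss=0.49),    # values sent to the logger
    num_samples=2)                                              # per-GPU batch size for log averaging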

val_step(data, optimizer)[source]

The iteration step during validation.

This method shares the same signature as train_step(), but is used during val epochs. Note that the evaluation after training epochs is not implemented with this method, but with an evaluation hook.

with_bbox

whether the detector has a bbox head

Type:bool
with_mask

whether the detector has a mask head

Type:bool
with_neck

whether the detector has a neck

Type:bool
with_shared_head

whether the detector has a shared head in the RoI Head

Type:bool
class mmdet.models.detectors.SingleStageDetector(backbone, neck=None, bbox_head=None, train_cfg=None, test_cfg=None, pretrained=None)[source]

Base class for single-stage detectors.

Single-stage detectors directly and densely predict bounding boxes on the output features of the backbone+neck.

aug_test(imgs, img_metas, rescale=False)[source]

Test function with test time augmentation

extract_feat(img)[source]

Directly extract features from the backbone+neck

forward_dummy(img)[source]

Used for computing network flops.

See mmdetection/tools/get_flops.py

forward_train(img, img_metas, gt_bboxes, gt_labels, gt_bboxes_ignore=None)[source]
Parameters:
  • img (Tensor) – Input images of shape (N, C, H, W). Typically these should be mean centered and std scaled.
  • img_metas (list[dict]) – A List of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet.datasets.pipelines.Collect.
  • gt_bboxes (list[Tensor]) – Each item is the ground-truth boxes of one image, in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – Class indices corresponding to each box
  • gt_bboxes_ignore (None | list[Tensor]) – Specify which bounding boxes can be ignored when computing the loss.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

init_weights(pretrained=None)[source]

Initialize the weights in detector

Parameters:pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
simple_test(img, img_metas, rescale=False)[source]

Test function without test time augmentation

Parameters:
  • imgs (list[torch.Tensor]) – List of multiple images
  • img_metas (list[dict]) – List of image information.
  • rescale (bool, optional) – Whether to rescale the results. Defaults to False.
Returns:

proposals

Return type:

np.ndarray

class mmdet.models.detectors.TwoStageDetector(backbone, neck=None, rpn_head=None, roi_head=None, train_cfg=None, test_cfg=None, pretrained=None)[source]

Base class for two-stage detectors.

Two-stage detectors typically consist of a region proposal network and a task-specific regression head.

async_simple_test(img, img_meta, proposals=None, rescale=False)[source]

Async test without augmentation.

aug_test(imgs, img_metas, rescale=False)[source]

Test with augmentations.

If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].

extract_feat(img)[source]

Directly extract features from the backbone+neck

forward_dummy(img)[source]

Used for computing network flops.

See mmdetection/tools/get_flops.py

forward_train(img, img_metas, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None, proposals=None, **kwargs)[source]
Parameters:
  • img (Tensor) – of shape (N, C, H, W) encoding input images. Typically these should be mean centered and std scaled.
  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
  • gt_masks (None | Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task.
  • proposals – override rpn proposals with custom proposals. Use when with_rpn is False.
Returns:

a dictionary of loss components

Return type:

dict[str, Tensor]

init_weights(pretrained=None)[source]

Initialize the weights in detector

Parameters:pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
simple_test(img, img_metas, proposals=None, rescale=False)[source]

Test without augmentation.

with_roi_head

whether the detector has a RoI head

Type:bool
with_rpn

whether the detector has RPN

Type:bool
class mmdet.models.detectors.RPN(backbone, neck, rpn_head, train_cfg, test_cfg, pretrained=None)[source]

Implementation of Region Proposal Network

aug_test(imgs, img_metas, rescale=False)[source]

Test function with test time augmentation

Parameters:
  • imgs (list[torch.Tensor]) – List of multiple images
  • img_metas (list[dict]) – List of image information.
  • rescale (bool, optional) – Whether to rescale the results. Defaults to False.
Returns:

proposals

Return type:

np.ndarray

extract_feat(img)[source]

Extract features

Parameters:img (torch.Tensor) – Image tensor with shape (n, c, h, w).
Returns:Multi-level features that may have different resolutions.
Return type:list[torch.Tensor]
forward_dummy(img)[source]

Dummy forward function

forward_train(img, img_metas, gt_bboxes=None, gt_bboxes_ignore=None)[source]
Parameters:
  • img (Tensor) – Input images of shape (N, C, H, W). Typically these should be mean centered and std scaled.
  • img_metas (list[dict]) – A List of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet.datasets.pipelines.Collect.
  • gt_bboxes (list[Tensor]) – Each item is the ground-truth boxes of one image, in [tl_x, tl_y, br_x, br_y] format.
  • gt_bboxes_ignore (None | list[Tensor]) – Specify which bounding boxes can be ignored when computing the loss.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

init_weights(pretrained=None)[source]

Initialize the weights in detector

Parameters:pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
show_result(data, result, dataset=None, top_k=20)[source]

Show RPN proposals on the image.

Although we assume batch size is 1, this method supports arbitrary batch size.

simple_test(img, img_metas, rescale=False)[source]

Test function without test time augmentation

Parameters:
  • imgs (list[torch.Tensor]) – List of multiple images
  • img_metas (list[dict]) – List of image information.
  • rescale (bool, optional) – Whether to rescale the results. Defaults to False.
Returns:

proposals

Return type:

np.ndarray

class mmdet.models.detectors.FastRCNN(backbone, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]

Implementation of Fast R-CNN

forward_test(imgs, img_metas, proposals, **kwargs)[source]
Parameters:
  • imgs (List[Tensor]) – the outer list indicates test-time augmentations and inner Tensor should have a shape NxCxHxW, which contains all images in the batch.
  • img_metas (List[List[dict]]) – the outer list indicates test-time augs (multiscale, flip, etc.) and the inner list indicates images in a batch.
  • proposals (List[List[Tensor]]) – the outer list indicates test-time augs (multiscale, flip, etc.) and the inner list indicates images in a batch. The Tensor should have a shape Px4, where P is the number of proposals.
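
Example

A sketch of how imgs, img_metas and proposals nest for a single image without test-time augmentation; the shapes and dict keys are illustrative, and the actual call is left as a comment since it requires a built FastRCNN instance:

>>> import torch
>>> imgs = [torch.rand(1, 3, 256, 256)]             # outer list: one test-time augmentation
>>> img_metas = [[dict(img_shape=(256, 256, 3),
...                    ori_shape=(256, 256, 3),
...                    scale_factor=1.0,
...                    flip=False)]]                 # outer list: augs, inner list: images
>>> proposals = [[torch.tensor([[20., 20., 100., 120.]])]]  # one (P, 4) tensor per image
>>> # results = fast_rcnn.forward_test(imgs, img_metas, proposals)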
class mmdet.models.detectors.FasterRCNN(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]

Implementation of Faster R-CNN

class mmdet.models.detectors.MaskRCNN(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]

Implementation of Mask R-CNN

class mmdet.models.detectors.CascadeRCNN(backbone, neck=None, rpn_head=None, roi_head=None, train_cfg=None, test_cfg=None, pretrained=None)[source]

Implementation of Cascade R-CNN and Cascade Mask R-CNN

show_result(data, result, **kwargs)[source]

Show prediction results of the detector

class mmdet.models.detectors.HybridTaskCascade(**kwargs)[source]

Implementation of HTC

with_semantic

whether the detector has a semantic head

Type:bool
class mmdet.models.detectors.RetinaNet(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]

Implementation of RetinaNet

class mmdet.models.detectors.FCOS(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]

Implementation of FCOS

class mmdet.models.detectors.GridRCNN(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]

Grid R-CNN.

This detector is the implementation of: - Grid R-CNN (https://arxiv.org/abs/1811.12030) - Grid R-CNN Plus: Faster and Better (https://arxiv.org/abs/1906.05688)

class mmdet.models.detectors.MaskScoringRCNN(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]

Mask Scoring RCNN.

https://arxiv.org/abs/1903.00241

class mmdet.models.detectors.RepPointsDetector(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]

RepPoints: Point Set Representation for Object Detection.

This detector is the implementation of: - RepPoints detector (https://arxiv.org/pdf/1904.11490)

aug_test(imgs, img_metas, rescale=False)[source]

Test function with test time augmentation

Parameters:
  • imgs (list[torch.Tensor]) – List of multiple images
  • img_metas (list[dict]) – List of image information.
  • rescale (bool, optional) – Whether to rescale the results. Defaults to False.
Returns:

bbox results of each class

Return type:

list[ndarray]

merge_aug_results(aug_bboxes, aug_scores, img_metas)[source]

Merge augmented detection bboxes and scores.

Parameters:
  • aug_bboxes (list[Tensor]) – shape (n, 4*#class)
  • aug_scores (list[Tensor] or None) – shape (n, #class)
  • img_metas (list[dict]) – Meta information of each image.
Returns:

(bboxes, scores)

Return type:

tuple

class mmdet.models.detectors.FOVEA(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]

Implementation of FoveaBox

class mmdet.models.detectors.FSAF(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]

Implementation of FSAF

class mmdet.models.detectors.NASFCOS(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]

NAS-FCOS: Fast Neural Architecture Search for Object Detection.

https://arxiv.org/abs/1906.0442

class mmdet.models.detectors.PointRend(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]

PointRend: Image Segmentation as Rendering

This detector is the implementation of PointRend.

class mmdet.models.detectors.GFL(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]

backbones

class mmdet.models.backbones.RegNet(arch, in_channels=3, stem_channels=32, base_channels=32, strides=(2, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(0, 1, 2, 3), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=-1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=True, dcn=None, stage_with_dcn=(False, False, False, False), plugins=None, with_cp=False, zero_init_residual=True)[source]

RegNet backbone.

More details can be found in the paper.

Parameters:
  • arch (dict) – The parameters of the RegNet architecture:
    • w0 (int): initial width
    • wa (float): slope of width
    • wm (float): quantization parameter to quantize the width
    • depth (int): depth of the backbone
    • group_w (int): width of group
    • bot_mul (float): bottleneck ratio, i.e. expansion of bottleneck
  • strides (Sequence[int]) – Strides of the first block of each stage.
  • base_channels (int) – Base channels after stem layer.
  • in_channels (int) – Number of input image channels. Default: 3.
  • dilations (Sequence[int]) – Dilation of each stage.
  • out_indices (Sequence[int]) – Output from which stages.
  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
  • frozen_stages (int) – Stages to be frozen (all param fixed). -1 means not freezing any parameters.
  • norm_cfg (dict) – dictionary to construct and config norm layer.
  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
  • zero_init_residual (bool) – whether to use zero init for last norm layer in resblocks to let them behave as identity.

Example

>>> from mmdet.models import RegNet
>>> import torch
>>> self = RegNet(
...     arch=dict(
...         w0=88,
...         wa=26.31,
...         wm=2.25,
...         group_w=48,
...         depth=25,
...         bot_mul=1.0))
>>> self.eval()
>>> inputs = torch.rand(1, 3, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 96, 8, 8)
(1, 192, 4, 4)
(1, 432, 2, 2)
(1, 1008, 1, 1)
adjust_width_group(widths, bottleneck_ratio, groups)[source]

Adjusts the compatibility of widths and groups.

Parameters:
  • widths (list[int]) – Width of each stage.
  • bottleneck_ratio (float) – Bottleneck ratio.
  • groups (int) – number of groups in each stage
Returns:

The adjusted widths and groups of each stage.

Return type:

tuple(list)

forward(x)[source]

Forward function

generate_regnet(initial_width, width_slope, width_parameter, depth, divisor=8)[source]

Generates per block width from RegNet parameters.

Parameters:
  • initial_width ([int]) – Initial width of the backbone
  • width_slope ([float]) – Slope of the quantized linear function
  • width_parameter ([int]) – Parameter used to quantize the width.
  • depth ([int]) – Depth of the backbone.
  • divisor (int, optional) – The divisor of channels. Defaults to 8.
Returns:

A list of widths of each stage and the number of stages.

Return type:

list, int
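
Example

A standalone sketch of the quantized linear width rule that generate_regnet implements, following the RegNet paper's parameterization (u_j = w0 + wa * j, quantized to powers of wm and snapped to a multiple of divisor); group-width adjustment is handled separately by adjust_width_group, so these widths are the pre-adjustment values:

>>> import numpy as np
>>> def regnet_widths(initial_width, width_slope, width_parameter, depth, divisor=8):
...     # continuous widths from the linear rule u_j = w0 + wa * j
...     widths_cont = initial_width + width_slope * np.arange(depth)
...     # quantize each width to the nearest power of width_parameter (wm)
...     ks = np.round(np.log(widths_cont / initial_width) / np.log(width_parameter))
...     widths = initial_width * np.power(width_parameter, ks)
...     # snap every width to a multiple of `divisor`
...     widths = (np.round(widths / divisor) * divisor).astype(int)
...     return widths.tolist(), len(np.unique(widths))
>>> widths, num_stages = regnet_widths(88, 26.31, 2.25, 25)
>>> num_stages
4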

get_stages_from_blocks(widths)[source]

Gets widths/stage_blocks of network at each stage

Parameters:widths (list[int]) – Width in each stage.
Returns:width and depth of each stage
Return type:tuple(list)
static quantize_float(number, divisor)[source]

Converts a float to the closest non-zero int divisible by divisor.

Parameters:
  • number (int) – Original number to be quantized.
  • divisor (int) – Divisor used to quantize the number.
Returns:

quantized number that is divisible by divisor.

Return type:

int

class mmdet.models.backbones.ResNet(depth, in_channels=3, stem_channels=64, base_channels=64, num_stages=4, strides=(1, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(0, 1, 2, 3), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=-1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=True, dcn=None, stage_with_dcn=(False, False, False, False), plugins=None, with_cp=False, zero_init_residual=True)[source]

ResNet backbone.

Parameters:
  • depth (int) – Depth of resnet, from {18, 34, 50, 101, 152}.
  • stem_channels (int) – Number of stem channels. Default: 64.
  • base_channels (int) – Number of base channels of res layer. Default: 64.
  • in_channels (int) – Number of input image channels. Default: 3.
  • num_stages (int) – Resnet stages. Default: 4.
  • strides (Sequence[int]) – Strides of the first block of each stage.
  • dilations (Sequence[int]) – Dilation of each stage.
  • out_indices (Sequence[int]) – Output from which stages.
  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
  • deep_stem (bool) – Replace the 7x7 conv in the input stem with three 3x3 convs.
  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck.
  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters.
  • norm_cfg (dict) – Dictionary to construct and config norm layer.
  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
  • plugins (list[dict]) –

    List of plugins for stages, each dict contains:

    • cfg (dict, required): Cfg dict to build plugin.
    • position (str, required): Position inside block to insert plugin, options are ‘after_conv1’, ‘after_conv2’, ‘after_conv3’.
    • stages (tuple[bool], optional): Stages to apply plugin, length should be same as ‘num_stages’.
  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
  • zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity.

Example

>>> from mmdet.models import ResNet
>>> import torch
>>> self = ResNet(depth=18)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 64, 8, 8)
(1, 128, 4, 4)
(1, 256, 2, 2)
(1, 512, 1, 1)
forward(x)[source]

Forward function

init_weights(pretrained=None)[source]

Initialize the weights in backbone

Parameters:pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
make_res_layer(**kwargs)[source]

Pack all blocks in a stage into a ResLayer

make_stage_plugins(plugins, stage_idx)[source]

Make plugins for the stage_idx-th stage of ResNet.

Currently we support inserting ‘context_block’, ‘empirical_attention_block’ and ‘nonlocal_block’ into backbones like ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of Bottleneck. An example of the plugins format could be:

>>> plugins=[
...     dict(cfg=dict(type='xxx', arg1='xxx'),
...          stages=(False, True, True, True),
...          position='after_conv2'),
...     dict(cfg=dict(type='yyy'),
...          stages=(True, True, True, True),
...          position='after_conv3'),
...     dict(cfg=dict(type='zzz', postfix='1'),
...          stages=(True, True, True, True),
...          position='after_conv3'),
...     dict(cfg=dict(type='zzz', postfix='2'),
...          stages=(True, True, True, True),
...          position='after_conv3')
... ]
>>> self = ResNet(depth=18)
>>> stage_plugins = self.make_stage_plugins(plugins, 0)
>>> assert len(stage_plugins) == 3

Suppose ‘stage_idx=0’, the structure of blocks in the stage would be:

conv1-> conv2->conv3->yyy->zzz1->zzz2

Suppose ‘stage_idx=1’, the structure of blocks in the stage would be:

conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2

If stages is missing, the plugin would be applied to all stages.

Parameters:
  • plugins (list[dict]) – List of plugins cfg to build. The postfix is required if multiple same type plugins are inserted.
  • stage_idx (int) – Index of stage to build
Returns:

Plugins for current stage

Return type:

list[dict]

norm1

the normalization layer named “norm1”

Type:nn.Module
train(mode=True)[source]

Convert the model into training mode while keeping the normalization layers frozen

class mmdet.models.backbones.ResNetV1d(**kwargs)[source]

ResNetV1d variant described in Bag of Tricks.

Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in the input stem with three 3x3 convs. And in the downsampling block, a 2x2 avg_pool with stride 2 is added before conv, whose stride is changed to 1.

class mmdet.models.backbones.ResNeXt(groups=1, base_width=4, **kwargs)[source]

ResNeXt backbone.

Parameters:
  • depth (int) – Depth of resnet, from {18, 34, 50, 101, 152}.
  • in_channels (int) – Number of input image channels. Default: 3.
  • num_stages (int) – Resnet stages. Default: 4.
  • groups (int) – Group of resnext.
  • base_width (int) – Base width of resnext.
  • strides (Sequence[int]) – Strides of the first block of each stage.
  • dilations (Sequence[int]) – Dilation of each stage.
  • out_indices (Sequence[int]) – Output from which stages.
  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
  • frozen_stages (int) – Stages to be frozen (all param fixed). -1 means not freezing any parameters.
  • norm_cfg (dict) – dictionary to construct and config norm layer.
  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
  • zero_init_residual (bool) – whether to use zero init for last norm layer in resblocks to let them behave as identity.
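
Example

A sketch analogous to the ResNet example above; the printed shapes assume the standard bottleneck expansion of 4, so a depth-50 ResNeXt yields the same per-stage channels as ResNet-50:

>>> from mmdet.models import ResNeXt
>>> import torch
>>> self = ResNeXt(depth=50, groups=32, base_width=4)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 256, 8, 8)
(1, 512, 4, 4)
(1, 1024, 2, 2)
(1, 2048, 1, 1)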
make_res_layer(**kwargs)[source]

Pack all blocks in a stage into a ResLayer

class mmdet.models.backbones.SSDVGG(input_size, depth, with_last_pool=False, ceil_mode=True, out_indices=(3, 4), out_feature_indices=(22, 34), l2_norm_scale=20.0)[source]

VGG Backbone network for single-shot-detection

Parameters:
  • input_size (int) – width and height of input, from {300, 512}.
  • depth (int) – Depth of vgg, from {11, 13, 16, 19}.
  • out_indices (Sequence[int]) – Output from which stages.

Example

>>> from mmdet.models import SSDVGG
>>> import torch
>>> self = SSDVGG(input_size=300, depth=11)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 300, 300)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 1024, 19, 19)
(1, 512, 10, 10)
(1, 256, 5, 5)
(1, 256, 3, 3)
(1, 256, 1, 1)
forward(x)[source]

Forward function

init_weights(pretrained=None)[source]

Initialize the weights in backbone

Parameters:pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
class mmdet.models.backbones.HRNet(extra, in_channels=3, conv_cfg=None, norm_cfg={'type': 'BN'}, norm_eval=True, with_cp=False, zero_init_residual=False)[source]

HRNet backbone.

High-Resolution Representations for Labeling Pixels and Regions arXiv: https://arxiv.org/abs/1904.04514

Parameters:
  • extra (dict) – detailed configuration for each stage of HRNet.
  • in_channels (int) – Number of input image channels. Default: 3.
  • conv_cfg (dict) – dictionary to construct and config conv layer.
  • norm_cfg (dict) – dictionary to construct and config norm layer.
  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
  • zero_init_residual (bool) – whether to use zero init for last norm layer in resblocks to let them behave as identity.

Example

>>> from mmdet.models import HRNet
>>> import torch
>>> extra = dict(
>>>     stage1=dict(
>>>         num_modules=1,
>>>         num_branches=1,
>>>         block='BOTTLENECK',
>>>         num_blocks=(4, ),
>>>         num_channels=(64, )),
>>>     stage2=dict(
>>>         num_modules=1,
>>>         num_branches=2,
>>>         block='BASIC',
>>>         num_blocks=(4, 4),
>>>         num_channels=(32, 64)),
>>>     stage3=dict(
>>>         num_modules=4,
>>>         num_branches=3,
>>>         block='BASIC',
>>>         num_blocks=(4, 4, 4),
>>>         num_channels=(32, 64, 128)),
>>>     stage4=dict(
>>>         num_modules=3,
>>>         num_branches=4,
>>>         block='BASIC',
>>>         num_blocks=(4, 4, 4, 4),
>>>         num_channels=(32, 64, 128, 256)))
>>> self = HRNet(extra, in_channels=1)
>>> self.eval()
>>> inputs = torch.rand(1, 1, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 32, 8, 8)
(1, 64, 4, 4)
(1, 128, 2, 2)
(1, 256, 1, 1)
forward(x)[source]

Forward function

init_weights(pretrained=None)[source]

Initialize the weights in backbone

Parameters:pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
norm1

the normalization layer named “norm1”

Type:nn.Module
norm2

the normalization layer named “norm2”

Type:nn.Module
train(mode=True)[source]

Convert the model into training mode while keeping the normalization layers frozen

class mmdet.models.backbones.Res2Net(scales=4, base_width=26, style='pytorch', deep_stem=True, avg_down=True, **kwargs)[source]

Res2Net backbone.

Parameters:
  • scales (int) – Scales used in Res2Net. Default: 4
  • base_width (int) – Basic width of each scale. Default: 26
  • depth (int) – Depth of res2net, from {50, 101, 152}.
  • in_channels (int) – Number of input image channels. Default: 3.
  • num_stages (int) – Res2net stages. Default: 4.
  • strides (Sequence[int]) – Strides of the first block of each stage.
  • dilations (Sequence[int]) – Dilation of each stage.
  • out_indices (Sequence[int]) – Output from which stages.
  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
  • deep_stem (bool) – Replace the 7x7 conv in the input stem with three 3x3 convs.
  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottle2neck.
  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters.
  • norm_cfg (dict) – Dictionary to construct and config norm layer.
  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
  • plugins (list[dict]) –

    List of plugins for stages, each dict contains:

    • cfg (dict, required): Cfg dict to build plugin.
    • position (str, required): Position inside block to insert plugin, options are ‘after_conv1’, ‘after_conv2’, ‘after_conv3’.
    • stages (tuple[bool], optional): Stages to apply plugin, length should be same as ‘num_stages’.
  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
  • zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity.

Example

>>> from mmdet.models import Res2Net
>>> import torch
>>> self = Res2Net(depth=50, scales=4, base_width=26)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 256, 8, 8)
(1, 512, 4, 4)
(1, 1024, 2, 2)
(1, 2048, 1, 1)
make_res_layer(**kwargs)[source]

Pack all blocks in a stage into a ResLayer

class mmdet.models.backbones.HourglassNet(downsample_times=5, num_stacks=2, stage_channels=(256, 256, 384, 384, 384, 512), stage_blocks=(2, 2, 2, 2, 2, 4), feat_channel=256, norm_cfg={'requires_grad': True, 'type': 'BN'})[source]

HourglassNet backbone.

Stacked Hourglass Networks for Human Pose Estimation. More details can be found in the paper.

Parameters:
  • downsample_times (int) – Downsample times in a HourglassModule.
  • num_stacks (int) – Number of HourglassModule modules stacked, 1 for Hourglass-52, 2 for Hourglass-104.
  • stage_channels (list[int]) – Feature channel of each sub-module in a HourglassModule.
  • stage_blocks (list[int]) – Number of sub-modules stacked in a HourglassModule.
  • feat_channel (int) – Feature channel of conv after a HourglassModule.
  • norm_cfg (dict) – Dictionary to construct and config norm layer.

Example

>>> from mmdet.models import HourglassNet
>>> import torch
>>> self = HourglassNet()
>>> self.eval()
>>> inputs = torch.rand(1, 3, 511, 511)
>>> level_outputs = self.forward(inputs)
>>> for level_output in level_outputs:
...     print(tuple(level_output.shape))
(1, 256, 128, 128)
(1, 256, 128, 128)
forward(x)[source]

Forward function

init_weights(pretrained=None)[source]

We do nothing in this function because all modules we use (ConvModule, BasicBlock, etc.) have default initialization, and currently we don’t provide a pretrained model of HourglassNet. Detector’s __init__() will call the backbone’s init_weights() with pretrained as the input, so we keep this function.

class mmdet.models.backbones.DetectoRS_ResNet(sac=None, stage_with_sac=(False, False, False, False), rfp_inplanes=None, output_img=False, pretrained=None, **kwargs)[source]

ResNet backbone for DetectoRS.

Parameters:
  • sac (dict, optional) – Dictionary to construct SAC (Switchable Atrous Convolution). Default: None.
  • stage_with_sac (list) – Which stage to use sac. Default: (False, False, False, False).
  • rfp_inplanes (int, optional) – The number of channels from RFP. Default: None. If specified, an additional conv layer will be added for rfp_feat. Otherwise, the structure is the same as base class.
  • output_img (bool) – If True, the input image will be inserted into the starting position of output. Default: False.
  • pretrained (str, optional) – The pretrained model to load.
forward(x)[source]

Forward function

make_res_layer(**kwargs)[source]

Pack all blocks in a stage into a ResLayer for DetectoRS

rfp_forward(x, rfp_feats)[source]

Forward function for RFP

class mmdet.models.backbones.DetectoRS_ResNeXt(groups=1, base_width=4, **kwargs)[source]

ResNeXt backbone for DetectoRS.

Parameters:
  • groups (int) – The number of groups in ResNeXt.
  • base_width (int) – The base width of ResNeXt.
make_res_layer(**kwargs)[source]

Pack all blocks in a stage into a ResLayer for DetectoRS

necks

class mmdet.models.necks.FPN(in_channels, out_channels, num_outs, start_level=0, end_level=-1, add_extra_convs=False, extra_convs_on_inputs=True, relu_before_extra_convs=False, no_norm_on_lateral=False, conv_cfg=None, norm_cfg=None, act_cfg=None, upsample_cfg={'mode': 'nearest'})[source]

Feature Pyramid Network.

This is an implementation of - Feature Pyramid Networks for Object Detection (https://arxiv.org/abs/1612.03144)

Parameters:
  • in_channels (List[int]) – Number of input channels per scale.
  • out_channels (int) – Number of output channels (used at each scale)
  • num_outs (int) – Number of output scales.
  • start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 0.
  • end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
  • add_extra_convs (bool | str) –

    If bool, it decides whether to add conv layers on top of the original feature maps. Default to False. If True, its actual mode is specified by extra_convs_on_inputs. If str, it specifies the source feature map of the extra convs. Only the following options are allowed

    • ’on_input’: Last feat map of neck inputs (i.e. backbone feature).
    • ’on_lateral’: Last feature map after lateral convs.
    • ’on_output’: The last output feature map after fpn convs.
  • extra_convs_on_inputs (bool, deprecated) – Whether to apply extra convs on the original feature from the backbone. If True, it is equivalent to add_extra_convs=’on_input’. If False, it is equivalent to set add_extra_convs=’on_output’. Default to True.
  • relu_before_extra_convs (bool) – Whether to apply relu before the extra conv. Default: False.
  • no_norm_on_lateral (bool) – Whether to apply norm on lateral. Default: False.
  • conv_cfg (dict) – Config dict for convolution layer. Default: None.
  • norm_cfg (dict) – Config dict for normalization layer. Default: None.
  • act_cfg (str) – Config dict for activation layer in ConvModule. Default: None.
  • upsample_cfg (dict) – Config dict for interpolate layer. Default: dict(mode=’nearest’)

Example

>>> from mmdet.models.necks import FPN
>>> import torch
>>> in_channels = [2, 3, 5, 7]
>>> scales = [340, 170, 84, 43]
>>> inputs = [torch.rand(1, c, s, s)
...           for c, s in zip(in_channels, scales)]
>>> self = FPN(in_channels, 11, len(in_channels)).eval()
>>> outputs = self.forward(inputs)
>>> for i in range(len(outputs)):
...     print(f'outputs[{i}].shape = {outputs[i].shape}')
outputs[0].shape = torch.Size([1, 11, 340, 340])
outputs[1].shape = torch.Size([1, 11, 170, 170])
outputs[2].shape = torch.Size([1, 11, 84, 84])
outputs[3].shape = torch.Size([1, 11, 43, 43])
forward(inputs)[source]

Forward function

init_weights()[source]

Initialize the weights of FPN module

class mmdet.models.necks.BFP(Balanced Feature Pyramids)[source]

BFP takes multi-level features as inputs and gather them into a single one, then refine the gathered feature and scatter the refined results to multi-level features. This module is used in Libra R-CNN (CVPR 2019), see https://arxiv.org/pdf/1904.02701.pdf for details.

Parameters:
  • in_channels (int) – Number of input channels (feature maps of all levels should have the same channels).
  • num_levels (int) – Number of input feature levels.
  • conv_cfg (dict) – The config dict for convolution layers.
  • norm_cfg (dict) – The config dict for normalization layers.
  • refine_level (int) – Index of integration and refine level of BSF in multi-level features from bottom to top.
  • refine_type (str) – Type of the refine op, currently support [None, ‘conv’, ‘non_local’].
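
Example

A hedged sketch of BFP usage; the refine_level and refine_type keyword names follow the parameter list above, all levels share the same channel number, and the outputs are assumed to keep the input resolutions:

>>> from mmdet.models.necks import BFP
>>> import torch
>>> inputs = [torch.rand(1, 256, s, s) for s in (64, 32, 16, 8, 4)]
>>> self = BFP(in_channels=256, num_levels=5, refine_level=2, refine_type='conv').eval()
>>> outputs = self.forward(inputs)
>>> for out in outputs:
...     print(tuple(out.shape))
(1, 256, 64, 64)
(1, 256, 32, 32)
(1, 256, 16, 16)
(1, 256, 8, 8)
(1, 256, 4, 4)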
forward(inputs)[source]

Forward function

init_weights()[source]

Initialize the weights of FPN module

class mmdet.models.necks.HRFPN(High Resolution Feature Pyramids)[source]

arXiv: https://arxiv.org/abs/1904.04514

Parameters:
  • in_channels (list) – number of channels for each branch.
  • out_channels (int) – output channels of feature pyramids.
  • num_outs (int) – number of output stages.
  • pooling_type (str) – pooling for generating feature pyramids from {MAX, AVG}.
  • conv_cfg (dict) – dictionary to construct and config conv layer.
  • norm_cfg (dict) – dictionary to construct and config norm layer.
  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
  • stride (int) – stride of 3x3 convolutional layers
forward(inputs)[source]

Forward function

init_weights()[source]

Initialize the weights of module

class mmdet.models.necks.NASFPN(in_channels, out_channels, num_outs, stack_times, start_level=0, end_level=-1, add_extra_convs=False, norm_cfg=None)[source]

NAS-FPN.

Implementation of NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection

Parameters:
  • in_channels (List[int]) – Number of input channels per scale.
  • out_channels (int) – Number of output channels (used at each scale)
  • num_outs (int) – Number of output scales.
  • stack_times (int) – The number of times the pyramid architecture will be stacked.
  • start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 0.
  • end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
  • add_extra_convs (bool) – It decides whether to add conv layers on top of the original feature maps. Default to False. If True, its actual mode is specified by extra_convs_on_inputs.
forward(inputs)[source]

Forward function

init_weights()[source]

Initialize the weights of module

class mmdet.models.necks.FPN_CARAFE(in_channels, out_channels, num_outs, start_level=0, end_level=-1, norm_cfg=None, act_cfg=None, order=('conv', 'norm', 'act'), upsample_cfg={'encoder_dilation': 1, 'encoder_kernel': 3, 'type': 'carafe', 'up_group': 1, 'up_kernel': 5})[source]

FPN_CARAFE is a more flexible implementation of FPN. It allows more choices of upsample methods in the top-down pathway.

It can reproduce the performance of the ICCV 2019 paper CARAFE: Content-Aware ReAssembly of FEatures. Please refer to https://arxiv.org/abs/1905.02188 for more details.

Parameters:
  • in_channels (list[int]) – Number of channels for each input feature map.
  • out_channels (int) – Output channels of feature pyramids.
  • num_outs (int) – Number of output stages.
  • start_level (int) – Start level of feature pyramids. (Default: 0)
  • end_level (int) – End level of feature pyramids. (Default: -1 indicates the last level).
  • norm_cfg (dict) – Dictionary to construct and config norm layer.
  • activate (str) – Type of activation function in ConvModule (Default: None indicates w/o activation).
  • order (dict) – Order of components in ConvModule.
  • upsample (str) – Type of upsample layer.
  • upsample_cfg (dict) – Dictionary to construct and config upsample layer.
forward(inputs)[source]

Forward function

init_weights()[source]

Initialize the weights of module

slice_as(src, dst)[source]

Slice src as dst

Note

src should have the same or larger size than dst.

Parameters:
  • src (torch.Tensor) – Tensors to be sliced.
  • dst (torch.Tensor) – src will be sliced to have the same size as dst.
Returns:

Sliced tensor.

Return type:

torch.Tensor

tensor_add(a, b)[source]

Add tensors a and b that might have different sizes
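
A small sketch of the intended semantics of slice_as and tensor_add (not the library code itself): src is cropped to dst’s spatial size so the two tensors can be added:

>>> import torch
>>> src = torch.rand(1, 8, 16, 16)
>>> dst = torch.rand(1, 8, 15, 15)
>>> sliced = src[:, :, :dst.size(2), :dst.size(3)]  # slice src to dst's spatial size
>>> summed = sliced + dst                           # add once the sizes agree
>>> tuple(summed.shape)
(1, 8, 15, 15)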

class mmdet.models.necks.PAFPN(in_channels, out_channels, num_outs, start_level=0, end_level=-1, add_extra_convs=False, extra_convs_on_inputs=True, relu_before_extra_convs=False, no_norm_on_lateral=False, conv_cfg=None, norm_cfg=None, act_cfg=None)[source]

Path Aggregation Network for Instance Segmentation.

This is an implementation of the PAFPN in Path Aggregation Network.

Parameters:
  • in_channels (List[int]) – Number of input channels per scale.
  • out_channels (int) – Number of output channels (used at each scale)
  • num_outs (int) – Number of output scales.
  • start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 0.
  • end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
  • add_extra_convs (bool) – Whether to add conv layers on top of the original feature maps. Default: False.
  • extra_convs_on_inputs (bool) – Whether to apply extra conv on the original feature from the backbone. Default: False.
  • relu_before_extra_convs (bool) – Whether to apply relu before the extra conv. Default: False.
  • no_norm_on_lateral (bool) – Whether to apply norm on lateral. Default: False.
  • conv_cfg (dict) – Config dict for convolution layer. Default: None.
  • norm_cfg (dict) – Config dict for normalization layer. Default: None.
  • act_cfg (str) – Config dict for activation layer in ConvModule. Default: None.
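
Example

A sketch mirroring the FPN example above; the input scales are exact powers of two so that the stride-2 convs in PAFPN's bottom-up path line up with the next level:

>>> from mmdet.models.necks import PAFPN
>>> import torch
>>> in_channels = [2, 3, 5, 7]
>>> scales = [64, 32, 16, 8]
>>> inputs = [torch.rand(1, c, s, s)
...           for c, s in zip(in_channels, scales)]
>>> self = PAFPN(in_channels, 11, len(in_channels)).eval()
>>> outputs = self.forward(inputs)
>>> for i in range(len(outputs)):
...     print(f'outputs[{i}].shape = {outputs[i].shape}')
outputs[0].shape = torch.Size([1, 11, 64, 64])
outputs[1].shape = torch.Size([1, 11, 32, 32])
outputs[2].shape = torch.Size([1, 11, 16, 16])
outputs[3].shape = torch.Size([1, 11, 8, 8])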
forward(inputs)[source]

Forward function

class mmdet.models.necks.NASFCOS_FPN(in_channels, out_channels, num_outs, start_level=1, end_level=-1, add_extra_convs=False, conv_cfg=None, norm_cfg=None)[source]

FPN structure in NASFPN

Implementation of paper NAS-FCOS: Fast Neural Architecture Search for Object Detection

Parameters:
  • in_channels (List[int]) – Number of input channels per scale.
  • out_channels (int) – Number of output channels (used at each scale)
  • num_outs (int) – Number of output scales.
  • start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 0.
  • end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
  • add_extra_convs (bool) – It decides whether to add conv layers on top of the original feature maps. Default to False. If True, its actual mode is specified by extra_convs_on_inputs.
  • conv_cfg (dict) – dictionary to construct and config conv layer.
  • norm_cfg (dict) – dictionary to construct and config norm layer.
forward(inputs)[source]

Forward function

init_weights()[source]

Initialize the weights of module

class mmdet.models.necks.RFP(Recursive Feature Pyramid)[source]

This is an implementation of RFP in DetectoRS. Different from the standard FPN, the input of RFP should be multi-level features along with the original input image of the backbone.

Parameters:
  • rfp_steps (int) – Number of unrolled steps of RFP.
  • rfp_backbone (dict) – Configuration of the backbone for RFP.
  • aspp_out_channels (int) – Number of output channels of ASPP module.
  • aspp_dilations (tuple[int]) – Dilation rates of four branches. Default: (1, 3, 6, 1)
forward(inputs)[source]

Forward function

init_weights()[source]

Initialize the weights of FPN module

dense_heads

class mmdet.models.dense_heads.AnchorFreeHead(num_classes, in_channels, feat_channels=256, stacked_convs=4, strides=(4, 8, 16, 32, 64), dcn_on_last_conv=False, conv_bias='auto', background_label=None, loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox={'loss_weight': 1.0, 'type': 'IoULoss'}, conv_cfg=None, norm_cfg=None, train_cfg=None, test_cfg=None)[source]

Anchor-free head (FCOS, Fovea, RepPoints, etc.).

Parameters:
  • num_classes (int) – Number of categories excluding the background category.
  • in_channels (int) – Number of channels in the input feature map.
  • feat_channels (int) – Number of hidden channels. Used in child classes.
  • stacked_convs (int) – Number of stacking convs of the head.
  • strides (tuple) – Downsample factor of each feature map.
  • dcn_on_last_conv (bool) – If true, use dcn in the last layer of towers. Default: False.
  • conv_bias (bool | str) – If specified as auto, it will be decided by the norm_cfg. Bias of conv will be set as True if norm_cfg is None, otherwise False. Default: “auto”.
  • background_label (int | None) – Label ID of background, set as 0 for RPN and num_classes for other heads. It will automatically set as num_classes if None is given.
  • loss_cls (dict) – Config of classification loss.
  • loss_bbox (dict) – Config of localization loss.
  • conv_cfg (dict) – Config dict for convolution layer. Default: None.
  • norm_cfg (dict) – Config dict for normalization layer. Default: None.
  • train_cfg (dict) – Training config of anchor head.
  • test_cfg (dict) – Testing config of anchor head.
forward(feats)[source]

Forward features from the upstream network.

Parameters:feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor.
Returns:
Usually contain classification scores and bbox predictions.
cls_scores (list[Tensor]): Box scores for each scale level,
each is a 4D-tensor, the channel number is num_points * num_classes.
bbox_preds (list[Tensor]): Box energies / deltas for each scale
level, each is a 4D-tensor, the channel number is num_points * 4.
Return type:tuple
forward_single(x)[source]

Forward features of a single scale level.

Parameters:x (Tensor) – FPN feature maps of the specified stride.
Returns:
Scores for each class, bbox predictions, and features after the classification and regression conv layers; some models, e.g. FCOS, need these features.
Return type:tuple
get_bboxes(cls_scores, bbox_preds, img_metas, cfg=None, rescale=None)[source]

Transform network output for a batch into bbox predictions.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_points * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_points * 4, H, W)
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used
  • rescale (bool) – If True, return boxes in original image space
get_points(featmap_sizes, dtype, device, flatten=False)[source]

Get points according to feature map sizes.

Parameters:
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.
  • dtype (torch.dtype) – Type of points.
  • device (torch.device) – Device of points.
Returns:

points of each image.

Return type:

tuple
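
A standalone sketch of the usual point construction for one feature level, assuming the common anchor-free convention that each feature cell maps to image coordinates index * stride + stride // 2; the library’s exact offset handling may differ slightly:

>>> import torch
>>> def points_single_level(featmap_size, stride):
...     h, w = featmap_size
...     y, x = torch.meshgrid(torch.arange(h, dtype=torch.float32),
...                           torch.arange(w, dtype=torch.float32))
...     # each feature cell maps to image coords (x * stride + stride // 2, y * stride + stride // 2)
...     return torch.stack((x.reshape(-1) * stride + stride // 2,
...                         y.reshape(-1) * stride + stride // 2), dim=-1)
>>> pts = points_single_level((2, 3), stride=8)
>>> pts.shape
torch.Size([6, 2])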

get_targets(points, gt_bboxes_list, gt_labels_list)[source]
Compute regression, classification and centerness targets for points
in multiple images.
Parameters:
  • points (list[Tensor]) – Points of each fpn level, each has shape (num_points, 2).
  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image, each has shape (num_gt, 4).
  • gt_labels_list (list[Tensor]) – Ground truth labels of each box, each has shape (num_gt,).
init_weights()[source]

Initialize weights of the head.

loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute loss of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level, each is a 4D-tensor, the channel number is num_points * num_classes.
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level, each is a 4D-tensor, the channel number is num_points * 4.
  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
class mmdet.models.dense_heads.AnchorHead(num_classes, in_channels, feat_channels=256, anchor_generator={'ratios': [0.5, 1.0, 2.0], 'scales': [8, 16, 32], 'strides': [4, 8, 16, 32, 64], 'type': 'AnchorGenerator'}, bbox_coder={'target_means': (0.0, 0.0, 0.0, 0.0), 'target_stds': (1.0, 1.0, 1.0, 1.0), 'type': 'DeltaXYWHBBoxCoder'}, reg_decoded_bbox=False, background_label=None, loss_cls={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, loss_bbox={'beta': 0.1111111111111111, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'}, train_cfg=None, test_cfg=None)[source]

Anchor-based head (RPN, RetinaNet, SSD, etc.).

Parameters:
  • num_classes (int) – Number of categories excluding the background category.
  • in_channels (int) – Number of channels in the input feature map.
  • feat_channels (int) – Number of hidden channels. Used in child classes.
  • anchor_generator (dict) – Config dict for anchor generator
  • bbox_coder (dict) – Config of bounding box coder.
  • reg_decoded_bbox (bool) – If true, the regression loss would be applied on decoded bounding boxes. Default: False
  • background_label (int | None) – Label ID of background, set as 0 for RPN and num_classes for other heads. It will automatically set as num_classes if None is given.
  • loss_cls (dict) – Config of classification loss.
  • loss_bbox (dict) – Config of localization loss.
  • train_cfg (dict) – Training config of anchor head.
  • test_cfg (dict) – Testing config of anchor head.
forward(feats)[source]

Forward features from the upstream network.

Parameters:feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor.
Returns:
Usually a tuple of classification scores and bbox prediction
cls_scores (list[Tensor]): Classification scores for all scale
levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.
bbox_preds (list[Tensor]): Box energies / deltas for all scale
levels, each is a 4D-tensor, the channels number is num_anchors * 4.
Return type:tuple
forward_single(x)[source]

Forward feature of a single scale level.

Parameters:x (Tensor) – Features of a single scale level.
Returns:
cls_score (Tensor): Cls scores for a single scale level
the channels number is num_anchors * num_classes.
bbox_pred (Tensor): Box energies / deltas for a single scale
level, the channels number is num_anchors * 4.
Return type:tuple
get_anchors(featmap_sizes, img_metas, device='cuda')[source]

Get anchors according to feature map sizes.

Parameters:
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.
  • img_metas (list[dict]) – Image meta info.
  • device (torch.device | str) – Device for returned tensors
Returns:

anchor_list (list[Tensor]): Anchors of each image.
valid_flag_list (list[Tensor]): Valid flags of each image.

Return type:

tuple

get_bboxes(cls_scores, bbox_preds, img_metas, cfg=None, rescale=False)[source]

Transform network output for a batch into bbox predictions.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • cfg (mmcv.Config | None) – Test / postprocessing configuration, if None, test_cfg would be used
  • rescale (bool) – If True, return boxes in original image space. Default: False.
Returns:

Each item in result_list is a 2-tuple.

The first item is an (n, 5) tensor, where the first 4 columns are bounding box positions (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between 0 and 1. The second item is an (n,) tensor where each item is the predicted class label of the corresponding box.

Return type:

list[tuple[Tensor, Tensor]]

Example

>>> import mmcv
>>> import torch
>>> from mmdet.models.dense_heads import AnchorHead
>>> self = AnchorHead(
>>>     num_classes=9,
>>>     in_channels=1,
>>>     anchor_generator=dict(
>>>         type='AnchorGenerator',
>>>         scales=[8],
>>>         ratios=[0.5, 1.0, 2.0],
>>>         strides=[4,]))
>>> img_metas = [{'img_shape': (32, 32, 3), 'scale_factor': 1}]
>>> cfg = mmcv.Config(dict(
>>>     score_thr=0.00,
>>>     nms=dict(type='nms', iou_thr=1.0),
>>>     max_per_img=10))
>>> feat = torch.rand(1, 1, 3, 3)
>>> cls_score, bbox_pred = self.forward_single(feat)
>>> # note the input lists are over different levels, not images
>>> cls_scores, bbox_preds = [cls_score], [bbox_pred]
>>> result_list = self.get_bboxes(cls_scores, bbox_preds,
>>>                               img_metas, cfg)
>>> det_bboxes, det_labels = result_list[0]
>>> assert len(result_list) == 1
>>> assert det_bboxes.shape[1] == 5
>>> assert len(det_bboxes) == len(det_labels) == cfg.max_per_img
get_targets(anchor_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, label_channels=1, unmap_outputs=True, return_sampling_results=False)[source]
Compute regression and classification targets for anchors in
multiple images.
Parameters:
  • anchor_list (list[list[Tensor]]) – Multi level anchors of each image. The outer list indicates images, and the inner list corresponds to feature levels of the image. Each element of the inner list is a tensor of shape (num_anchors, 4).
  • valid_flag_list (list[list[Tensor]]) – Multi level valid flags of each image. The outer list indicates images, and the inner list corresponds to feature levels of the image. Each element of the inner list is a tensor of shape (num_anchors, )
  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.
  • img_metas (list[dict]) – Meta info of each image.
  • gt_bboxes_ignore_list (list[Tensor]) – Ground truth bboxes to be ignored.
  • gt_labels_list (list[Tensor]) – Ground truth labels of each box.
  • label_channels (int) – Channel of label.
  • unmap_outputs (bool) – Whether to map outputs back to the original set of anchors.
Returns:

labels_list (list[Tensor]): Labels of each level.
label_weights_list (list[Tensor]): Label weights of each level.
bbox_targets_list (list[Tensor]): BBox targets of each level.
bbox_weights_list (list[Tensor]): BBox weights of each level.
num_total_pos (int): Number of positive samples in all images.
num_total_neg (int): Number of negative samples in all images.

additional_returns: This function enables user-defined returns from self._get_targets_single. These returns are currently refined to properties at each feature map (i.e. having HxW dimension). The results will be concatenated at the end.

Return type:

tuple

init_weights()[source]

Initialize weights of the head.

loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute losses of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss. Default: None
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

loss_single(cls_score, bbox_pred, anchors, labels, label_weights, bbox_targets, bbox_weights, num_total_samples)[source]

Compute loss of a single scale level.

Parameters:
  • cls_score (Tensor) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W).
  • bbox_pred (Tensor) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W).
  • anchors (Tensor) – Box reference for each scale level with shape (N, num_total_anchors, 4).
  • labels (Tensor) – Labels of each anchors with shape (N, num_total_anchors).
  • label_weights (Tensor) – Label weights of each anchor with shape (N, num_total_anchors)
  • bbox_targets (Tensor) – BBox regression targets of each anchor wight shape (N, num_total_anchors, 4).
  • bbox_weights (Tensor) – BBox regression loss weights of each anchor with shape (N, num_total_anchors, 4).
  • num_total_samples (int) – If sampling, num total samples equal to the number of total anchors; Otherwise, it is the number of positive anchors.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

class mmdet.models.dense_heads.GuidedAnchorHead(num_classes, in_channels, feat_channels=256, approx_anchor_generator={'octave_base_scale': 8, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [4, 8, 16, 32, 64], 'type': 'AnchorGenerator'}, square_anchor_generator={'ratios': [1.0], 'scales': [8], 'strides': [4, 8, 16, 32, 64], 'type': 'AnchorGenerator'}, anchor_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'type': 'DeltaXYWHBBoxCoder'}, bbox_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'type': 'DeltaXYWHBBoxCoder'}, reg_decoded_bbox=False, deformable_groups=4, loc_filter_thr=0.01, background_label=None, train_cfg=None, test_cfg=None, loss_loc={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_shape={'beta': 0.2, 'loss_weight': 1.0, 'type': 'BoundedIoULoss'}, loss_cls={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, loss_bbox={'beta': 1.0, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'})[source]

Guided-Anchor-based head (GA-RPN, GA-RetinaNet, etc.).

This GuidedAnchorHead will predict high-quality feature guided anchors and locations where anchors will be kept in inference. There are mainly 3 categories of bounding-boxes.

  • Sampled 9 pairs for target assignment (approxes).
  • The square boxes on which the predicted anchors are based (squares).
  • Guided anchors.

Please refer to https://arxiv.org/abs/1901.03278 for more details.

Parameters:
  • num_classes (int) – Number of classes.
  • in_channels (int) – Number of channels in the input feature map.
  • feat_channels (int) – Number of hidden channels.
  • approx_anchor_generator (dict) – Config dict for approx generator
  • square_anchor_generator (dict) – Config dict for square generator
  • anchor_coder (dict) – Config dict for anchor coder
  • bbox_coder (dict) – Config dict for bbox coder
  • deformable_groups (int) – Group number of DCN in the FeatureAdaption module.
  • loc_filter_thr (float) – Threshold to filter out unconcerned regions.
  • background_label (int | None) – Label ID of background, set as 0 for RPN and num_classes for other heads. It will automatically set as num_classes if None is given.
  • loss_loc (dict) – Config of location loss.
  • loss_shape (dict) – Config of anchor shape loss.
  • loss_cls (dict) – Config of classification loss.
  • loss_bbox (dict) – Config of bbox regression loss.
forward(feats)[source]

Forward features from the upstream network.

Parameters:feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor.
Returns:
Usually a tuple of classification scores and bbox prediction
cls_scores (list[Tensor]): Classification scores for all scale
levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.
bbox_preds (list[Tensor]): Box energies / deltas for all scale
levels, each is a 4D-tensor, the channels number is num_anchors * 4.
Return type:tuple
forward_single(x)[source]

Forward feature of a single scale level.

Parameters:x (Tensor) – Features of a single scale level.
Returns:
cls_score (Tensor): Cls scores for a single scale level
the channels number is num_anchors * num_classes.
bbox_pred (Tensor): Box energies / deltas for a single scale
level, the channels number is num_anchors * 4.
Return type:tuple
ga_loc_targets(gt_bboxes_list, featmap_sizes)[source]

Compute location targets for guided anchoring.

Each feature map is divided into positive, negative and ignore regions.

  • positive regions: target 1, weight 1
  • ignore regions: target 0, weight 0
  • negative regions: target 0, weight 0.1

Parameters:
  • gt_bboxes_list (list[Tensor]) – Gt bboxes of each image.
  • featmap_sizes (list[tuple]) – Multi level sizes of each feature maps.
Returns:

tuple

ga_shape_targets(approx_list, inside_flag_list, square_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, unmap_outputs=True)[source]

Compute guided anchoring targets.

Parameters:
  • approx_list (list[list]) – Multi level approxs of each image.
  • inside_flag_list (list[list]) – Multi level inside flags of each image.
  • square_list (list[list]) – Multi level squares of each image.
  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.
  • img_metas (list[dict]) – Meta info of each image.
  • gt_bboxes_ignore_list (list[Tensor]) – ignore list of gt bboxes.
  • unmap_outputs (bool) – unmap outputs or not.
Returns:

tuple

get_anchors(featmap_sizes, shape_preds, loc_preds, img_metas, use_loc_filter=False, device='cuda')[source]

Get squares according to feature map sizes and guided anchors.

Parameters:
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.
  • shape_preds (list[tensor]) – Multi-level shape predictions.
  • loc_preds (list[tensor]) – Multi-level location predictions.
  • img_metas (list[dict]) – Image meta info.
  • use_loc_filter (bool) – Use loc filter or not.
  • device (torch.device | str) – device for returned tensors
Returns:

Square approxs of each image, guided anchors of each image, and loc masks of each image.

Return type:

tuple

get_bboxes(cls_scores, bbox_preds, shape_preds, loc_preds, img_metas, cfg=None, rescale=False)[source]

Transform network output for a batch into bbox predictions.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • cfg (mmcv.Config | None) – Test / postprocessing configuration, if None, test_cfg would be used
  • rescale (bool) – If True, return boxes in original image space. Default: False.
Returns:

Each item in result_list is a 2-tuple.

The first item is an (n, 5) tensor, where the first 4 columns are bounding box positions (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between 0 and 1. The second item is an (n,) tensor where each item is the predicted class label of the corresponding box.

Return type:

list[tuple[Tensor, Tensor]]

Example

>>> import mmcv
>>> import torch
>>> from mmdet.models.dense_heads import AnchorHead
>>> self = AnchorHead(
>>>     num_classes=9,
>>>     in_channels=1,
>>>     anchor_generator=dict(
>>>         type='AnchorGenerator',
>>>         scales=[8],
>>>         ratios=[0.5, 1.0, 2.0],
>>>         strides=[4,]))
>>> img_metas = [{'img_shape': (32, 32, 3), 'scale_factor': 1}]
>>> cfg = mmcv.Config(dict(
>>>     score_thr=0.00,
>>>     nms=dict(type='nms', iou_thr=1.0),
>>>     max_per_img=10))
>>> feat = torch.rand(1, 1, 3, 3)
>>> cls_score, bbox_pred = self.forward_single(feat)
>>> # note the input lists are over different levels, not images
>>> cls_scores, bbox_preds = [cls_score], [bbox_pred]
>>> result_list = self.get_bboxes(cls_scores, bbox_preds,
>>>                               img_metas, cfg)
>>> det_bboxes, det_labels = result_list[0]
>>> assert len(result_list) == 1
>>> assert det_bboxes.shape[1] == 5
>>> assert len(det_bboxes) == len(det_labels) == cfg.max_per_img
get_sampled_approxs(featmap_sizes, img_metas, device='cuda')[source]

Get sampled approxs and inside flags according to feature map sizes.

Parameters:
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.
  • img_metas (list[dict]) – Image meta info.
  • device (torch.device | str) – device for returned tensors
Returns:

approxes of each image, inside flags of each image

Return type:

tuple

init_weights()[source]

Initialize weights of the head.

loss(cls_scores, bbox_preds, shape_preds, loc_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute losses of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss. Default: None
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

class mmdet.models.dense_heads.FeatureAdaption(in_channels, out_channels, kernel_size=3, deformable_groups=4)[source]

Feature Adaption Module.

Feature Adaption Module is implemented based on DCN v1. It uses anchor shape prediction rather than feature map to predict offsets of deformable conv layer.

Parameters:
  • in_channels (int) – Number of channels in the input feature map.
  • out_channels (int) – Number of channels in the output feature map.
  • kernel_size (int) – Deformable conv kernel size.
  • deformable_groups (int) – Deformable conv group size.
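
Example

A hedged sketch of FeatureAdaption usage; it assumes an mmcv build with the deformable convolution op available (typically a CUDA build) and that the shape prediction has 2 channels (dw, dh) per location, as in guided anchoring:

>>> import torch
>>> from mmdet.models.dense_heads import FeatureAdaption
>>> fa = FeatureAdaption(in_channels=256, out_channels=256,
...                      kernel_size=3, deformable_groups=4).cuda()
>>> x = torch.rand(1, 256, 16, 16).cuda()          # input feature map
>>> shape_pred = torch.rand(1, 2, 16, 16).cuda()   # predicted (dw, dh) per location
>>> out = fa(x, shape_pred)
>>> tuple(out.shape)
(1, 256, 16, 16)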
forward(x, shape)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.dense_heads.RPNHead(in_channels, **kwargs)[source]

RPN head.

Parameters:in_channels (int) – Number of channels in the input feature map.
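
A minimal forward sketch (not part of the official docs; it assumes the default anchor generator inherited from AnchorHead and a toy single-level feature map):

>>> import torch
>>> self = RPNHead(in_channels=1)
>>> x = torch.rand(1, 1, 16, 16)
>>> rpn_cls_score, rpn_bbox_pred = self.forward_single(x)
>>> # One objectness score and 4 box deltas per anchor at every location
>>> assert rpn_cls_score.shape[1] == self.num_anchors * self.cls_out_channels
>>> assert rpn_bbox_pred.shape[1] == self.num_anchors * 4
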
forward_single(x)[source]

Forward feature map of a single scale level.

init_weights()[source]

Initialize weights of the head.

loss(cls_scores, bbox_preds, gt_bboxes, img_metas, gt_bboxes_ignore=None)[source]

Compute losses of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

class mmdet.models.dense_heads.GARPNHead(in_channels, **kwargs)[source]

Guided-Anchor-based RPN head.

forward_single(x)[source]

Forward feature of a single scale level.

init_weights()[source]

Initialize weights of the head.

loss(cls_scores, bbox_preds, shape_preds, loc_preds, gt_bboxes, img_metas, gt_bboxes_ignore=None)[source]

Compute losses of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss. Default: None
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

class mmdet.models.dense_heads.RetinaHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, anchor_generator={'octave_base_scale': 4, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [8, 16, 32, 64, 128], 'type': 'AnchorGenerator'}, **kwargs)[source]

An anchor-based head used in RetinaNet.

The head contains two subnetworks. The first classifies anchor boxes and the second regresses deltas for the anchors.

Example

>>> import torch
>>> self = RetinaHead(11, 7)
>>> x = torch.rand(1, 7, 32, 32)
>>> cls_score, bbox_pred = self.forward_single(x)
>>> # Each anchor predicts a score for each class except background
>>> cls_per_anchor = cls_score.shape[1] / self.num_anchors
>>> box_per_anchor = bbox_pred.shape[1] / self.num_anchors
>>> assert cls_per_anchor == (self.num_classes)
>>> assert box_per_anchor == 4
forward_single(x)[source]

Forward feature of a single scale level.

Parameters:x (Tensor) – Features of a single scale level.
Returns:
cls_score (Tensor): Cls scores for a single scale level
the channels number is num_anchors * num_classes.
bbox_pred (Tensor): Box energies / deltas for a single scale
level, the channels number is num_anchors * 4.
Return type:tuple
init_weights()[source]

Initialize weights of the head.

class mmdet.models.dense_heads.RetinaSepBNHead(num_classes, num_ins, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, **kwargs)[source]

RetinaHead with separate BN.

In RetinaHead, conv/norm layers are shared across different FPN levels, while in RetinaSepBNHead, conv layers are shared across different FPN levels, but BN layers are separated.

forward(feats)[source]

Forward features from the upstream network.

Parameters:feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor.
Returns:
Usually a tuple of classification scores and bbox prediction
cls_scores (list[Tensor]): Classification scores for all scale
levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.
bbox_preds (list[Tensor]): Box energies / deltas for all scale
levels, each is a 4D-tensor, the channels number is num_anchors * 4.
Return type:tuple
init_weights()[source]

Initialize weights of the head.

class mmdet.models.dense_heads.GARetinaHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, **kwargs)[source]

Guided-Anchor-based RetinaNet head.

forward_single(x)[source]

Forward feature map of a single scale level.

init_weights()[source]

Initialize weights of the layer.

class mmdet.models.dense_heads.SSDHead(num_classes=80, in_channels=(512, 1024, 512, 256, 256, 256), anchor_generator={'basesize_ratio_range': (0.1, 0.9), 'input_size': 300, 'ratios': ([2], [2, 3], [2, 3], [2, 3], [2], [2]), 'scale_major': False, 'strides': [8, 16, 32, 64, 100, 300], 'type': 'SSDAnchorGenerator'}, background_label=None, bbox_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'type': 'DeltaXYWHBBoxCoder'}, reg_decoded_bbox=False, train_cfg=None, test_cfg=None)[source]

SSD head used in https://arxiv.org/abs/1512.02325.

Parameters:
  • num_classes (int) – Number of categories excluding the background category.
  • in_channels (int) – Number of channels in the input feature map.
  • anchor_generator (dict) – Config dict for anchor generator
  • background_label (int | None) – Label ID of background, set as 0 for RPN and num_classes for other heads. It will automatically be set to num_classes if None is given.
  • bbox_coder (dict) – Config of bounding box coder.
  • reg_decoded_bbox (bool) – If true, the regression loss would be applied on decoded bounding boxes. Default: False
  • train_cfg (dict) – Training config of anchor head.
  • test_cfg (dict) – Testing config of anchor head.
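
A minimal forward sketch under the default SSD300 configuration above (illustrative feature-map sizes; not part of the official docs):

>>> import torch
>>> self = SSDHead(num_classes=80)
>>> # Six feature levels with the default channel layout; spatial sizes are arbitrary here
>>> feats = [torch.rand(1, c, s, s)
...          for c, s in zip((512, 1024, 512, 256, 256, 256), (38, 19, 10, 5, 3, 1))]
>>> cls_scores, bbox_preds = self.forward(feats)
>>> assert len(cls_scores) == len(bbox_preds) == 6
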
forward(feats)[source]

Forward features from the upstream network.

Parameters:feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor.
Returns:
cls_scores (list[Tensor]): Classification scores for all scale
levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.
bbox_preds (list[Tensor]): Box energies / deltas for all scale
levels, each is a 4D-tensor, the channels number is num_anchors * 4.
Return type:tuple
init_weights()[source]

Initialize weights of the head.

loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute losses of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • gt_bboxes (list[Tensor]) – each item are the truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

loss_single(cls_score, bbox_pred, anchor, labels, label_weights, bbox_targets, bbox_weights, num_total_samples)[source]

Compute loss of a single image.

Parameters:
  • cls_score (Tensor) – Box scores for each image with shape (num_total_anchors, num_classes).
  • bbox_pred (Tensor) – Box energies / deltas for each image level with shape (num_total_anchors, 4).
  • anchors (Tensor) – Box reference for each scale level with shape (num_total_anchors, 4).
  • labels (Tensor) – Labels of each anchors with shape (num_total_anchors,).
  • label_weights (Tensor) – Label weights of each anchor with shape (num_total_anchors,)
  • bbox_targets (Tensor) – BBox regression targets of each anchor with shape (num_total_anchors, 4).
  • bbox_weights (Tensor) – BBox regression loss weights of each anchor with shape (num_total_anchors, 4).
  • num_total_samples (int) – If sampling, num total samples equal to the number of total anchors; Otherwise, it is the number of positive anchors.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

class mmdet.models.dense_heads.FCOSHead(num_classes, in_channels, regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512), (512, 100000000.0)), center_sampling=False, center_sample_radius=1.5, norm_on_bbox=False, centerness_on_reg=False, loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox={'loss_weight': 1.0, 'type': 'IoULoss'}, loss_centerness={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, norm_cfg={'num_groups': 32, 'requires_grad': True, 'type': 'GN'}, **kwargs)[source]

Anchor-free head used in FCOS.

The FCOS head does not use anchor boxes. Instead, bounding boxes are predicted at each pixel and a centerness measure is used to suppress low-quality predictions. Here norm_on_bbox, centerness_on_reg, dcn_on_last_conv are training tricks used in the official repo, which bring remarkable mAP gains of up to 4.9. Please see https://github.com/tianzhi0549/FCOS for more detail.

Parameters:
  • num_classes (int) – Number of categories excluding the background category.
  • in_channels (int) – Number of channels in the input feature map.
  • strides (list[int] | list[tuple[int, int]]) – Strides of points in multiple feature levels. Default: (4, 8, 16, 32, 64).
  • regress_ranges (tuple[tuple[int, int]]) – Regress range of multiple level points.
  • center_sampling (bool) – If true, use center sampling. Default: False.
  • center_sample_radius (float) – Radius of center sampling. Default: 1.5.
  • norm_on_bbox (bool) – If true, normalize the regression targets with FPN strides. Default: False.
  • centerness_on_reg (bool) – If true, position centerness on the regress branch. Please refer to https://github.com/tianzhi0549/FCOS/issues/89#issuecomment-516877042. Default: False.
  • conv_bias (bool | str) – If specified as auto, it will be decided by the norm_cfg. Bias of conv will be set as True if norm_cfg is None, otherwise False. Default: “auto”.
  • loss_cls (dict) – Config of classification loss.
  • loss_bbox (dict) – Config of localization loss.
  • loss_centerness (dict) – Config of centerness loss.
  • norm_cfg (dict) – dictionary to construct and config norm layer. Default: norm_cfg=dict(type=’GN’, num_groups=32, requires_grad=True).

Example

>>> import torch
>>> self = FCOSHead(11, 7)
>>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]]
>>> cls_score, bbox_pred, centerness = self.forward(feats)
>>> assert len(cls_score) == len(self.scales)
centerness_target(pos_bbox_targets)[source]

Compute centerness targets.

Parameters:pos_bbox_targets (Tensor) – BBox targets of positive bboxes in shape (num_pos, 4)
Returns:Centerness target.
Return type:Tensor
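
An illustrative standalone computation of the centerness definition from the FCOS paper, assuming pos_bbox_targets stores (left, top, right, bottom) distances (a sketch, not the method's exact code):

>>> import torch
>>> pos_bbox_targets = torch.tensor([[4., 4., 4., 4.],   # perfectly centered point
...                                  [1., 8., 9., 2.]])  # off-center point
>>> lr = pos_bbox_targets[:, [0, 2]]
>>> tb = pos_bbox_targets[:, [1, 3]]
>>> centerness = torch.sqrt((lr.min(-1)[0] / lr.max(-1)[0]) *
...                         (tb.min(-1)[0] / tb.max(-1)[0]))
>>> assert centerness[0] == 1.0 and centerness[1] < 0.2
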
forward(feats)[source]

Forward features from the upstream network.

Parameters:feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor.
Returns:
cls_scores (list[Tensor]): Box scores for each scale level,
each is a 4D-tensor, the channel number is num_points * num_classes.
bbox_preds (list[Tensor]): Box energies / deltas for each scale
level, each is a 4D-tensor, the channel number is num_points * 4.
centernesses (list[Tensor]): Centerness for each scale level,
each is a 4D-tensor, the channel number is num_points * 1.
Return type:tuple
forward_single(x, scale, stride)[source]

Forward features of a single scale level.

Parameters:
  • x (Tensor) – FPN feature maps of the specified stride.
  • scale (mmcv.cnn.Scale) – Learnable scale module to resize the bbox prediction.
  • stride (int) – The corresponding stride for feature maps, only used to normalize the bbox prediction when self.norm_on_bbox is True.
Returns:

scores for each class, bbox predictions and centerness predictions of input feature maps.

Return type:

tuple

get_bboxes(cls_scores, bbox_preds, centernesses, img_metas, cfg=None, rescale=None)[source]

Transform network output for a batch into bbox predictions.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_points * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_points * 4, H, W)
  • centernesses (list[Tensor]) – Centerness for each scale level with shape (N, num_points * 1, H, W)
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used
  • rescale (bool) – If True, return boxes in original image space
Returns:

Each item in result_list is a 2-tuple.

The first item is an (n, 5) tensor, where the first 4 columns are bounding box positions (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the predicted class label of the corresponding box.

Return type:

list[tuple[Tensor, Tensor]]

get_targets(points, gt_bboxes_list, gt_labels_list)[source]
Compute regression, classification and centerness targets for points
in multiple images.
Parameters:
  • points (list[Tensor]) – Points of each fpn level, each has shape (num_points, 2).
  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image, each has shape (num_gt, 4).
  • gt_labels_list (list[Tensor]) – Ground truth labels of each box, each has shape (num_gt,).
Returns:

concat_lvl_labels (list[Tensor]): Labels of each level.
concat_lvl_bbox_targets (list[Tensor]): BBox targets of each level.

Return type:

tuple

init_weights()[source]

Initialize weights of the head.

loss(cls_scores, bbox_preds, centernesses, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute loss of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level, each is a 4D-tensor, the channel number is num_points * num_classes.
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level, each is a 4D-tensor, the channel number is num_points * 4.
  • centernesses (list[Tensor]) – Centerness for each scale level, each is a 4D-tensor, the channel number is num_points * 1.
  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

class mmdet.models.dense_heads.RepPointsHead(num_classes, in_channels, point_feat_channels=256, num_points=9, gradient_mul=0.1, point_strides=[8, 16, 32, 64, 128], point_base_scale=4, loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox_init={'beta': 0.1111111111111111, 'loss_weight': 0.5, 'type': 'SmoothL1Loss'}, loss_bbox_refine={'beta': 0.1111111111111111, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'}, use_grid_points=False, center_init=True, transform_method='moment', moment_mul=0.01, **kwargs)[source]

RepPoint head.

Parameters:
  • point_feat_channels (int) – Number of channels of points features.
  • gradient_mul (float) – The multiplier to gradients from points refinement and recognition.
  • point_strides (Iterable) – points strides.
  • point_base_scale (int) – bbox scale for assigning labels.
  • loss_cls (dict) – Config of classification loss.
  • loss_bbox_init (dict) – Config of initial points loss.
  • loss_bbox_refine (dict) – Config of points loss in refinement.
  • use_grid_points (bool) – If we use bounding box representation, the reppoints are represented as grid points on the bounding box.
  • center_init (bool) – Whether to use center point assignment.
  • transform_method (str) – The methods to transform RepPoints to bbox.
centers_to_bboxes(point_list)[source]

Get bboxes according to center points. Only used in MaxIOUAssigner.

forward(feats)[source]

Forward features from the upstream network.

Parameters:feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor.
Returns:
Usually contain classification scores and bbox predictions.
cls_scores (list[Tensor]): Box scores for each scale level,
each is a 4D-tensor, the channel number is num_points * num_classes.
bbox_preds (list[Tensor]): Box energies / deltas for each scale
level, each is a 4D-tensor, the channel number is num_points * 4.
Return type:tuple
forward_single(x)[source]

Forward feature map of a single FPN level.

gen_grid_from_reg(reg, previous_boxes)[source]

Based on the previous bboxes and regression values, we compute the regressed bboxes and generate the grids on the bboxes.

Parameters:
  • reg – the regression value to previous bboxes.
  • previous_boxes – previous bboxes.
Returns:

The grids generated on the regressed bboxes.

get_bboxes(cls_scores, pts_preds_init, pts_preds_refine, img_metas, cfg=None, rescale=False, nms=True)[source]

Transform network output for a batch into bbox predictions.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_points * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_points * 4, H, W)
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used
  • rescale (bool) – If True, return boxes in original image space
get_points(featmap_sizes, img_metas)[source]

Get points according to feature map sizes.

Parameters:
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.
  • img_metas (list[dict]) – Image meta info.
Returns:

points of each image, valid flags of each image

Return type:

tuple

get_targets(proposals_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, stage='init', label_channels=1, unmap_outputs=True)[source]

Compute corresponding GT box and classification targets for proposals.

Parameters:
  • proposals_list (list[list]) – Multi level points/bboxes of each image.
  • valid_flag_list (list[list]) – Multi level valid flags of each image.
  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.
  • img_metas (list[dict]) – Meta info of each image.
  • gt_bboxes_ignore_list (list[Tensor]) – Ground truth bboxes to be ignored.
  • gt_labels_list (list[Tensor]) – Ground truth labels of each box.
  • stage (str) – init or refine. Generate target for init stage or refine stage
  • label_channels (int) – Channel of label.
  • unmap_outputs (bool) – Whether to map outputs back to the original set of anchors.
Returns:

  • labels_list (list[Tensor]): Labels of each level.
  • label_weights_list (list[Tensor]): Label weights of each level. # noqa: E501
  • bbox_gt_list (list[Tensor]): Ground truth bbox of each level.
  • proposal_list (list[Tensor]): Proposals(points/bboxes) of each level. # noqa: E501
  • proposal_weights_list (list[Tensor]): Proposal weights of each level. # noqa: E501
  • num_total_pos (int): Number of positive samples in all images. # noqa: E501
  • num_total_neg (int): Number of negative samples in all images. # noqa: E501

Return type:

tuple

init_weights()[source]

Initialize weights of the head.

loss(cls_scores, pts_preds_init, pts_preds_refine, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute loss of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level, each is a 4D-tensor, the channel number is num_points * num_classes.
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level, each is a 4D-tensor, the channel number is num_points * 4.
  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
offset_to_pts(center_list, pred_list)[source]

Change from point offset to point coordinate.

points2bbox(pts, y_first=True)[source]

Convert the point set into a bounding box.

Parameters:
  • pts – the input points sets (fields), each points set (fields) is represented as 2n scalar.
  • y_first – if y_first=True, the point set is represented as [y1, x1, y2, x2 … yn, xn], otherwise the point set is represented as [x1, y1, x2, y2 … xn, yn].
Returns:

each point set is converted to a bbox [x1, y1, x2, y2].
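
For intuition, a standalone min-max sketch of reducing a point set to its enclosing box (note that the default transform_method here is 'moment' and y_first defaults to True; this toy example uses an x-first layout for readability):

>>> import torch
>>> pts = torch.tensor([[2., 3., 6., 1., 4., 7.]])  # one set of three (x, y) points
>>> xs, ys = pts[:, 0::2], pts[:, 1::2]
>>> bbox = torch.stack([xs.min(1)[0], ys.min(1)[0],
...                     xs.max(1)[0], ys.max(1)[0]], dim=1)
>>> assert bbox.tolist() == [[2., 1., 6., 7.]]
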

class mmdet.models.dense_heads.FoveaHead(num_classes, in_channels, base_edge_list=(16, 32, 64, 128, 256), scale_ranges=((8, 32), (16, 64), (32, 128), (64, 256), (128, 512)), sigma=0.4, with_deform=False, deformable_groups=4, **kwargs)[source]

FoveaBox: Beyond Anchor-based Object Detector https://arxiv.org/abs/1904.03797

forward_single(x)[source]

Forward features of a single scale level.

Parameters:x (Tensor) – FPN feature maps of the specified stride.
Returns:
Scores for each class, bbox predictions, and features
after the classification and regression conv layers; some models, like FCOS, need these features.
Return type:tuple
get_bboxes(cls_scores, bbox_preds, img_metas, cfg=None, rescale=None)[source]

Transform network output for a batch into bbox predictions.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_points * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_points * 4, H, W)
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used
  • rescale (bool) – If True, return boxes in original image space
get_targets(gt_bbox_list, gt_label_list, featmap_sizes, points)[source]
Compute regression, classification and centerness targets for points
in multiple images.
Parameters:
  • points (list[Tensor]) – Points of each fpn level, each has shape (num_points, 2).
  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image, each has shape (num_gt, 4).
  • gt_labels_list (list[Tensor]) – Ground truth labels of each box, each has shape (num_gt,).
init_weights()[source]

Initialize weights of the head.

loss(cls_scores, bbox_preds, gt_bbox_list, gt_label_list, img_metas, gt_bboxes_ignore=None)[source]

Compute loss of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level, each is a 4D-tensor, the channel number is num_points * num_classes.
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level, each is a 4D-tensor, the channel number is num_points * 4.
  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
class mmdet.models.dense_heads.FreeAnchorRetinaHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, pre_anchor_topk=50, bbox_thr=0.6, gamma=2.0, alpha=0.5, **kwargs)[source]

FreeAnchor RetinaHead used in https://arxiv.org/abs/1909.02466.

Parameters:
  • num_classes (int) – Number of categories excluding the background category.
  • in_channels (int) – Number of channels in the input feature map.
  • stacked_convs (int) – Number of conv layers in cls and reg tower. Default: 4.
  • conv_cfg (dict) – dictionary to construct and config conv layer. Default: None.
  • norm_cfg (dict) – dictionary to construct and config norm layer. Default: norm_cfg=dict(type=’GN’, num_groups=32, requires_grad=True).
  • pre_anchor_topk (int) – Number of boxes that are taken in each bag.
  • bbox_thr (float) – The threshold of the saturated linear function. It is usually the same with the IoU threshold used in NMS.
  • gamma (float) – Gamma parameter in focal loss.
  • alpha (float) – Alpha parameter in focal loss.
loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute losses of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • gt_bboxes (list[Tensor]) – each item are the truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

negative_bag_loss(cls_prob, box_prob)[source]

Compute negative bag loss.

\(FL((1 - P_{a_{j} \in A_{+}}) * (1 - P_{j}^{bg}))\).

\(P_{a_{j} \in A_{+}}\): Box_probability of matched samples.

\(P_{j}^{bg}\): Classification probability of negative samples.

Parameters:
  • cls_prob (Tensor) – Classification probability, in shape (num_img, num_anchors, num_classes).
  • box_prob (Tensor) – Box probability, in shape (num_img, num_anchors, num_classes).
Returns:

Negative bag loss in shape (num_img, num_anchors, num_classes).

Return type:

Tensor

positive_bag_loss(matched_cls_prob, matched_box_prob)[source]

Compute positive bag loss.

\(-log( Mean-max(P_{ij}^{cls} * P_{ij}^{loc}) )\).

\(P_{ij}^{cls}\): matched_cls_prob, classification probability of matched samples.

\(P_{ij}^{loc}\): matched_box_prob, box probability of matched samples.

Parameters:
  • matched_cls_prob (Tensor) – Classification probability of matched samples in shape (num_gt, pre_anchor_topk).
  • matched_box_prob (Tensor) – BBox probability of matched samples, in shape (num_gt, pre_anchor_topk).
Returns:

Positive bag loss in shape (num_gt,).

Return type:

Tensor

class mmdet.models.dense_heads.ATSSHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg={'num_groups': 32, 'requires_grad': True, 'type': 'GN'}, loss_centerness={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, **kwargs)[source]

Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection

The ATSS head structure is similar to FCOS; however, ATSS uses anchor boxes and assigns labels by Adaptive Training Sample Selection instead of max IoU.

https://arxiv.org/abs/1912.02424
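
A minimal forward sketch (not part of the official docs; it assumes the default anchor generator inherited from AnchorHead, which defines five strides):

>>> import torch
>>> self = ATSSHead(11, 7)
>>> feats = [torch.rand(1, 7, s, s) for s in [32, 16, 8, 4, 2]]
>>> cls_scores, bbox_preds, centernesses = self.forward(feats)
>>> assert len(cls_scores) == len(self.scales)
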

forward(feats)[source]

Forward features from the upstream network.

Parameters:feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor.
Returns:
Usually a tuple of classification scores and bbox prediction
cls_scores (list[Tensor]): Classification scores for all scale
levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.
bbox_preds (list[Tensor]): Box energies / deltas for all scale
levels, each is a 4D-tensor, the channels number is num_anchors * 4.
Return type:tuple
forward_single(x, scale)[source]

Forward feature of a single scale level.

Parameters:
  • x (Tensor) – Features of a single scale level.
  • scale (mmcv.cnn.Scale) – Learnable scale module to resize the bbox prediction.
Returns:

cls_score (Tensor): Cls scores for a single scale level, the channel number is num_anchors * num_classes.

bbox_pred (Tensor): Box energies / deltas for a single scale level, the channel number is num_anchors * 4.

centerness (Tensor): Centerness for a single scale level, the channel number is num_anchors * 1.

Return type:

tuple

get_bboxes(cls_scores, bbox_preds, centernesses, img_metas, cfg=None, rescale=False)[source]

Transform network output for a batch into bbox predictions.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • centernesses (list[Tensor]) – Centerness for each scale level with shape (N, num_anchors * 1, H, W)
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used. Default: None.
  • rescale (bool) – If True, return boxes in original image space. Default: False.
Returns:

Each item in result_list is a 2-tuple.

The first item is an (n, 5) tensor, where the first 4 columns are bounding box positions (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the predicted class label of the corresponding box.

Return type:

list[tuple[Tensor, Tensor]]

get_targets(anchor_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, label_channels=1, unmap_outputs=True)[source]

Get targets for ATSS head.

This method is almost the same as AnchorHead.get_targets(). Besides returning the targets as the parent method does, it also returns the anchors as the first element of the returned tuple.

init_weights()[source]

Initialize weights of the head.

loss(cls_scores, bbox_preds, centernesses, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute losses of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • centernesses (list[Tensor]) – Centerness for each scale level with shape (N, num_anchors * 1, H, W)
  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (list[Tensor] | None) – specify which bounding boxes can be ignored when computing the loss.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

loss_single(anchors, cls_score, bbox_pred, centerness, labels, label_weights, bbox_targets, num_total_samples)[source]

Compute loss of a single scale level.

Parameters:
  • cls_score (Tensor) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W).
  • bbox_pred (Tensor) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W).
  • anchors (Tensor) – Box reference for each scale level with shape (N, num_total_anchors, 4).
  • labels (Tensor) – Labels of each anchors with shape (N, num_total_anchors).
  • label_weights (Tensor) – Label weights of each anchor with shape (N, num_total_anchors)
  • bbox_targets (Tensor) – BBox regression targets of each anchor with shape (N, num_total_anchors, 4).
  • num_total_samples (int) – Number of positive samples that is reduced over all GPUs.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

class mmdet.models.dense_heads.FSAFHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, anchor_generator={'octave_base_scale': 4, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [8, 16, 32, 64, 128], 'type': 'AnchorGenerator'}, **kwargs)[source]

Anchor-free head used in FSAF.

The head contains two subnetworks. The first classifies anchor boxes and the second regresses deltas for the anchors (num_anchors is 1 for anchor-free methods).

Example

>>> import torch
>>> self = FSAFHead(11, 7)
>>> x = torch.rand(1, 7, 32, 32)
>>> cls_score, bbox_pred = self.forward_single(x)
>>> # Each anchor predicts a score for each class except background
>>> cls_per_anchor = cls_score.shape[1] / self.num_anchors
>>> box_per_anchor = bbox_pred.shape[1] / self.num_anchors
>>> assert cls_per_anchor == self.num_classes
>>> assert box_per_anchor == 4
calculate_accuracy(cls_scores, labels_list, pos_inds)[source]

Calculate accuracy of the classification prediction.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • labels_list (list[Tensor]) – Labels for each scale level.
  • pos_inds (list[Tensor]) – Positive inds for each scale level.
Returns:

Accuracy.

Return type:

Tensor

collect_loss_level_single(cls_loss, reg_loss, assigned_gt_inds, labels_seq)[source]

Get the average loss in each FPN level w.r.t. each gt label

Parameters:
  • cls_loss (Tensor) – Classification loss of each feature map pixel, shape (num_anchor, num_class)
  • reg_loss (Tensor) – Regression loss of each feature map pixel, shape (num_anchor, 4)
  • assigned_gt_inds (Tensor) – It indicates which gt the prior is assigned to (0-based, -1: no assignment). shape (num_anchor),
  • labels_seq – The rank of labels. shape (num_gt)
Returns:

Average loss of each gt in this level, with shape (num_gt,).

Return type:

Tensor

forward_single(x)[source]

Forward feature map of a single scale level.

Parameters:x (Tensor) – Feature map of a single scale level.
Returns:
cls_score (Tensor): Box scores for each scale level
Has shape (N, num_points * num_classes, H, W).
bbox_pred (Tensor): Box energies / deltas for each scale
level with shape (N, num_points * 4, H, W).
Return type:tuple (Tensor)
init_weights()[source]

Initialize weights of the head.

loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute loss of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_points * num_classes, H, W).
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_points * 4, H, W).
  • gt_bboxes (list[Tensor]) – each item are the truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

reweight_loss_single(cls_loss, reg_loss, assigned_gt_inds, labels, level, min_levels)[source]

Reweight loss values at each level.

Reassign loss values at each level by masking those where the pre-calculated loss is too large. Then return the reduced losses.

Parameters:
  • cls_loss (Tensor) – Element-wise classification loss. Shape: (num_anchors, num_classes)
  • reg_loss (Tensor) – Element-wise regression loss. Shape: (num_anchors, 4)
  • assigned_gt_inds (Tensor) – The gt indices that each anchor bbox is assigned to. -1 denotes a negative anchor, otherwise it is the gt index (0-based). Shape: (num_anchors, ),
  • labels (Tensor) – Label assigned to anchors. Shape: (num_anchors, ).
  • level (int) – The current level index in the pyramid (0-4 for RetinaNet)
  • min_levels (Tensor) – The best-matching level for each gt. Shape: (num_gts, ),
Returns:

  • cls_loss: Reduced corrected classification loss. Scalar.
  • reg_loss: Reduced corrected regression loss. Scalar.
  • pos_flags (Tensor): Corrected bool tensor indicating the final positive anchors. Shape: (num_anchors, ).

Return type:

tuple

class mmdet.models.dense_heads.NASFCOSHead(num_classes, in_channels, regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512), (512, 100000000.0)), center_sampling=False, center_sample_radius=1.5, norm_on_bbox=False, centerness_on_reg=False, loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox={'loss_weight': 1.0, 'type': 'IoULoss'}, loss_centerness={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, norm_cfg={'num_groups': 32, 'requires_grad': True, 'type': 'GN'}, **kwargs)[source]

Anchor-free head used in NASFCOS.

It is quite similar to the FCOS head, except for the searched structure of the classification branch and bbox regression branch, where a structure of “dconv3x3, conv3x3, dconv3x3, conv1x1” is utilized instead.

init_weights()[source]

Initialize weights of the head.

class mmdet.models.dense_heads.PISARetinaHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, anchor_generator={'octave_base_scale': 4, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [8, 16, 32, 64, 128], 'type': 'AnchorGenerator'}, **kwargs)[source]

PISA Retinanet Head.

The head owns the same structure as the RetinaNet head, but differs in two aspects:
  1. Importance-based Sample Reweighting Positive (ISR-P) is applied to change the positive loss weights.
  2. Classification-aware regression loss is adopted as a third loss.
loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute losses of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • gt_bboxes (list[Tensor]) – Ground truth bboxes of each image with shape (num_obj, 4).
  • gt_labels (list[Tensor]) – Ground truth labels of each image with shape (num_obj,).
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (list[Tensor]) – Ignored gt bboxes of each image. Default: None.
Returns:

Loss dict, comprising classification loss, regression loss and CARL loss.

Return type:

dict

class mmdet.models.dense_heads.PISASSDHead(num_classes=80, in_channels=(512, 1024, 512, 256, 256, 256), anchor_generator={'basesize_ratio_range': (0.1, 0.9), 'input_size': 300, 'ratios': ([2], [2, 3], [2, 3], [2, 3], [2], [2]), 'scale_major': False, 'strides': [8, 16, 32, 64, 100, 300], 'type': 'SSDAnchorGenerator'}, background_label=None, bbox_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'type': 'DeltaXYWHBBoxCoder'}, reg_decoded_bbox=False, train_cfg=None, test_cfg=None)[source]
loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute losses of the head.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • gt_bboxes (list[Tensor]) – Ground truth bboxes of each image with shape (num_obj, 4).
  • gt_labels (list[Tensor]) – Ground truth labels of each image with shape (num_obj,).
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (list[Tensor]) – Ignored gt bboxes of each image. Default: None.
Returns:

Loss dict, comprising classification loss, regression loss and CARL loss.

Return type:

dict

class mmdet.models.dense_heads.GFLHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg={'num_groups': 32, 'requires_grad': True, 'type': 'GN'}, loss_dfl={'loss_weight': 0.25, 'type': 'DistributionFocalLoss'}, reg_max=16, **kwargs)[source]

Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection

The GFL head structure is similar to ATSS; however, GFL uses 1) a joint representation for classification and localization quality, and 2) a flexible General distribution for bounding box locations, which are supervised by Quality Focal Loss (QFL) and Distribution Focal Loss (DFL), respectively.

https://arxiv.org/abs/2006.04388

Parameters:
  • num_classes (int) – Number of categories excluding the background category.
  • in_channels (int) – Number of channels in the input feature map.
  • stacked_convs (int) – Number of conv layers in cls and reg tower. Default: 4.
  • conv_cfg (dict) – dictionary to construct and config conv layer. Default: None.
  • norm_cfg (dict) – dictionary to construct and config norm layer. Default: dict(type=’GN’, num_groups=32, requires_grad=True).
  • loss_qfl (dict) – Config of Quality Focal Loss (QFL).
  • reg_max (int) – Max value of the integral set {0, ..., reg_max} in QFL setting. Default: 16.

Example

>>> import torch
>>> self = GFLHead(11, 7)
>>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]]
>>> cls_quality_score, bbox_pred = self.forward(feats)
>>> assert len(cls_quality_score) == len(self.scales)
anchor_center(anchors)[source]

Get anchor centers from anchors.

Parameters:anchors (Tensor) – Anchor list with shape (N, 4), “xyxy” format.
Returns:Anchor centers with shape (N, 2), “xy” format.
Return type:Tensor
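
An illustrative standalone computation of the same center transform on “xyxy” anchors (a sketch, not the method's exact code):

>>> import torch
>>> anchors = torch.tensor([[0., 0., 8., 8.], [4., 4., 12., 20.]])
>>> centers = torch.stack([(anchors[:, 0] + anchors[:, 2]) / 2,
...                        (anchors[:, 1] + anchors[:, 3]) / 2], dim=-1)
>>> assert centers.tolist() == [[4., 4.], [8., 12.]]
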
forward(feats)[source]

Forward features from the upstream network.

Parameters:feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor.
Returns:
Usually a tuple of classification scores and bbox prediction
cls_scores (list[Tensor]): Classification and quality (IoU)
joint scores for all scale levels, each is a 4D-tensor, the channel number is num_classes.
bbox_preds (list[Tensor]): Box distribution logits for all
scale levels, each is a 4D-tensor, the channel number is 4*(n+1), n is max value of integral set.
Return type:tuple
forward_single(x, scale)[source]

Forward feature of a single scale level.

Parameters:
  • x (Tensor) – Features of a single scale level.
  • scale (mmcv.cnn.Scale) – Learnable scale module to resize the bbox prediction.
Returns:

cls_score (Tensor): Cls and quality joint scores for a single scale level, the channel number is num_classes.

bbox_pred (Tensor): Box distribution logits for a single scale level, the channel number is 4*(n+1), n is max value of integral set.

Return type:

tuple

get_targets(anchor_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, label_channels=1, unmap_outputs=True)[source]

Get targets for GFL head.

This method is almost the same as AnchorHead.get_targets(). Besides returning the targets as the parent method does, it also returns the anchors as the first element of the returned tuple.

init_weights()[source]

Initialize weights of the head.

loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute losses of the head.

Parameters:
  • cls_scores (list[Tensor]) – Cls and quality scores for each scale level has shape (N, num_classes, H, W).
  • bbox_preds (list[Tensor]) – Box distribution logits for each scale level with shape (N, 4*(n+1), H, W), n is max value of integral set.
  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
  • gt_bboxes_ignore (list[Tensor] | None) – specify which bounding boxes can be ignored when computing the loss.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

loss_single(anchors, cls_score, bbox_pred, labels, label_weights, bbox_targets, stride, num_total_samples)[source]

Compute loss of a single scale level.

Parameters:
  • anchors (Tensor) – Box reference for each scale level with shape (N, num_total_anchors, 4).
  • cls_score (Tensor) – Cls and quality joint scores for each scale level has shape (N, num_classes, H, W).
  • bbox_pred (Tensor) – Box distribution logits for each scale level with shape (N, 4*(n+1), H, W), n is max value of integral set.
  • labels (Tensor) – Labels of each anchors with shape (N, num_total_anchors).
  • label_weights (Tensor) – Label weights of each anchor with shape (N, num_total_anchors)
  • bbox_targets (Tensor) – BBox regression targets of each anchor with shape (N, num_total_anchors, 4).
  • stride (tuple) – Stride in this scale level.
  • num_total_samples (int) – Number of positive samples that is reduced over all GPUs.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

roi_heads

class mmdet.models.roi_heads.BaseRoIHead(bbox_roi_extractor=None, bbox_head=None, mask_roi_extractor=None, mask_head=None, shared_head=None, train_cfg=None, test_cfg=None)[source]

Base class for RoIHeads

async_simple_test(x, img_meta, **kwargs)[source]

Asynchronous test function

aug_test(x, proposal_list, img_metas, rescale=False, **kwargs)[source]

Test with augmentations.

If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].

forward_train(x, img_meta, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None, **kwargs)[source]

Forward function during training

init_assigner_sampler()[source]

Initialize assigner and sampler

init_bbox_head()[source]

Initialize bbox_head

init_mask_head()[source]

Initialize mask_head

init_weights(pretrained)[source]

Initialize the weights in head

Parameters:pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
simple_test(x, proposal_list, img_meta, proposals=None, rescale=False, **kwargs)[source]

Test without augmentation.

with_bbox

whether the RoI head contains a bbox_head

Type:bool
with_mask

whether the RoI head contains a mask_head

Type:bool
with_shared_head

whether the RoI head contains a shared_head

Type:bool
class mmdet.models.roi_heads.CascadeRoIHead(num_stages, stage_loss_weights, bbox_roi_extractor=None, bbox_head=None, mask_roi_extractor=None, mask_head=None, shared_head=None, train_cfg=None, test_cfg=None)[source]

Cascade roi head including one bbox head and one mask head.

https://arxiv.org/abs/1712.00726

aug_test(features, proposal_list, img_metas, rescale=False)[source]

Test with augmentations.

If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].

forward_dummy(x, proposals)[source]

Dummy forward function

forward_train(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None)[source]
Parameters:
  • x (list[Tensor]) – list of multi-level img features.
  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
  • proposals (list[Tensors]) – list of region proposals.
  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
  • gt_masks (None | Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task.
Returns:

a dictionary of loss components

Return type:

dict[str, Tensor]

init_assigner_sampler()[source]

Initialize assigner and sampler for each stage

init_bbox_head(bbox_roi_extractor, bbox_head)[source]

Initialize box head and box roi extractor

Parameters:
  • bbox_roi_extractor (dict) – Config of box roi extractor.
  • bbox_head (dict) – Config of box in box head.
init_mask_head(mask_roi_extractor, mask_head)[source]

Initialize mask head and mask roi extractor

Parameters:
  • mask_roi_extractor (dict) – Config of mask roi extractor.
  • mask_head (dict) – Config of mask in mask head.
init_weights(pretrained)[source]

Initialize the weights in head

Parameters:pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
simple_test(x, proposal_list, img_metas, rescale=False)[source]

Test without augmentation.

class mmdet.models.roi_heads.DoubleHeadRoIHead(reg_roi_scale_factor, **kwargs)[source]

RoI head for Double Head RCNN

https://arxiv.org/abs/1904.06493

class mmdet.models.roi_heads.MaskScoringRoIHead(mask_iou_head, **kwargs)[source]

Mask Scoring RoIHead for Mask Scoring RCNN.

https://arxiv.org/abs/1903.00241

init_weights(pretrained)[source]

Initialize the weights in head

Parameters:pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
simple_test_mask(x, img_metas, det_bboxes, det_labels, rescale=False)[source]

Obtain mask prediction without augmentation

class mmdet.models.roi_heads.HybridTaskCascadeRoIHead(num_stages, stage_loss_weights, semantic_roi_extractor=None, semantic_head=None, semantic_fusion=('bbox', 'mask'), interleaved=True, mask_info_flow=True, **kwargs)[source]

Hybrid task cascade roi head including one bbox head and one mask head.

https://arxiv.org/abs/1901.07518

aug_test(img_feats, proposal_list, img_metas, rescale=False)[source]

Test with augmentations.

If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].

forward_dummy(x, proposals)[source]

Dummy forward function

forward_train(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None, gt_semantic_seg=None)[source]
Parameters:
  • x (list[Tensor]) – list of multi-level img features.
  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
  • proposal_list (list[Tensors]) – list of region proposals.
  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • gt_bboxes_ignore (None, list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
  • gt_masks (None, Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task.
  • gt_semantic_seg (None, list[Tensor]) – semantic segmentation masks used if the architecture supports semantic segmentation task.
Returns:

a dictionary of loss components

Return type:

dict[str, Tensor]

init_weights(pretrained)[source]

Initialize the weights in head

Parameters:pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
simple_test(x, proposal_list, img_metas, rescale=False)[source]

Test without augmentation.

with_semantic

whether the head has semantic head

Type:bool
class mmdet.models.roi_heads.GridRoIHead(grid_roi_extractor, grid_head, **kwargs)[source]

Grid roi head for Grid R-CNN.

https://arxiv.org/abs/1811.12030

forward_dummy(x, proposals)[source]

Dummy forward function

init_weights(pretrained)[source]

Initialize the weights in head

Parameters:pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
simple_test(x, proposal_list, img_metas, proposals=None, rescale=False)[source]

Test without augmentation.

class mmdet.models.roi_heads.ResLayer(depth, stage=3, stride=2, dilation=1, style='pytorch', norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=True, with_cp=False, dcn=None)[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

init_weights(pretrained=None)[source]

Initialize the weights in the module

Parameters:pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
train(mode=True)[source]

Sets the module in training mode.

This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters:mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.
Returns:self
Return type:Module
class mmdet.models.roi_heads.BBoxHead(with_avg_pool=False, with_cls=True, with_reg=True, roi_feat_size=7, in_channels=256, num_classes=80, bbox_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [0.1, 0.1, 0.2, 0.2], 'type': 'DeltaXYWHBBoxCoder'}, reg_class_agnostic=False, reg_decoded_bbox=False, loss_cls={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': False}, loss_bbox={'beta': 1.0, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'})[source]

Simplest RoI head, with only two fc layers for classification and regression respectively

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

refine_bboxes(rois, labels, bbox_preds, pos_is_gts, img_metas)[source]

Refine bboxes during training.

Parameters:
  • rois (Tensor) – Shape (n*bs, 5), where n is image number per GPU, and bs is the sampled RoIs per image. The first column is the image id and the next 4 columns are x1, y1, x2, y2.
  • labels (Tensor) – Shape (n*bs, ).
  • bbox_preds (Tensor) – Shape (n*bs, 4) or (n*bs, 4*#class).
  • pos_is_gts (list[Tensor]) – Flags indicating if each positive bbox is a gt bbox.
  • img_metas (list[dict]) – Meta info of each image.
Returns:

Refined bboxes of each image in a mini-batch.

Return type:

list[Tensor]

Example

>>> # xdoctest: +REQUIRES(module:kwarray)
>>> import kwarray
>>> import numpy as np
>>> from mmdet.core.bbox.demodata import random_boxes
>>> self = BBoxHead(reg_class_agnostic=True)
>>> n_roi = 2
>>> n_img = 4
>>> scale = 512
>>> rng = np.random.RandomState(0)
>>> img_metas = [{'img_shape': (scale, scale)}
...              for _ in range(n_img)]
>>> # Create rois in the expected format
>>> roi_boxes = random_boxes(n_roi, scale=scale, rng=rng)
>>> img_ids = torch.randint(0, n_img, (n_roi,))
>>> img_ids = img_ids.float()
>>> rois = torch.cat([img_ids[:, None], roi_boxes], dim=1)
>>> # Create other args
>>> labels = torch.randint(0, 2, (n_roi,)).long()
>>> bbox_preds = random_boxes(n_roi, scale=scale, rng=rng)
>>> # For each image, pretend random positive boxes are gts
>>> is_label_pos = (labels.numpy() > 0).astype(int)
>>> lbl_per_img = kwarray.group_items(is_label_pos,
...                                   img_ids.numpy())
>>> pos_per_img = [sum(lbl_per_img.get(gid, []))
...                for gid in range(n_img)]
>>> pos_is_gts = [
>>>     torch.randint(0, 2, (npos,)).byte().sort(
>>>         descending=True)[0]
>>>     for npos in pos_per_img
>>> ]
>>> bboxes_list = self.refine_bboxes(rois, labels, bbox_preds,
>>>                    pos_is_gts, img_metas)
>>> print(bboxes_list)
regress_by_class(rois, label, bbox_pred, img_meta)[source]

Regress the bbox for the predicted class. Used in Cascade R-CNN.

Parameters:
  • rois (Tensor) – shape (n, 4) or (n, 5)
  • label (Tensor) – shape (n, )
  • bbox_pred (Tensor) – shape (n, 4*(#class)) or (n, 4)
  • img_meta (dict) – Image meta info.
Returns:

Regressed bboxes, the same shape as input rois.

Return type:

Tensor

class mmdet.models.roi_heads.ConvFCBBoxHead(num_shared_convs=0, num_shared_fcs=0, num_cls_convs=0, num_cls_fcs=0, num_reg_convs=0, num_reg_fcs=0, conv_out_channels=256, fc_out_channels=1024, conv_cfg=None, norm_cfg=None, *args, **kwargs)[source]

More general bbox head, with shared conv and fc layers and two optional separated branches.

                            /-> cls convs -> cls fcs -> cls
shared convs -> shared fcs
                            \-> reg convs -> reg fcs -> reg
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.roi_heads.Shared2FCBBoxHead(fc_out_channels=1024, *args, **kwargs)[source]
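
A minimal forward sketch for this two-fc variant (it assumes the BBoxHead defaults above: roi_feat_size=7, in_channels=256, num_classes=80 with a softmax classifier; not part of the official docs):

>>> import torch
>>> self = Shared2FCBBoxHead(in_channels=256)
>>> roi_feats = torch.rand(4, 256, 7, 7)  # features of 4 sampled RoIs
>>> cls_score, bbox_pred = self.forward(roi_feats)
>>> assert cls_score.shape == (4, 81)       # 80 classes + background
>>> assert bbox_pred.shape == (4, 80 * 4)   # class-specific box deltas
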
class mmdet.models.roi_heads.Shared4Conv1FCBBoxHead(fc_out_channels=1024, *args, **kwargs)[source]
class mmdet.models.roi_heads.DoubleConvFCBBoxHead(num_convs=0, num_fcs=0, conv_out_channels=1024, fc_out_channels=1024, conv_cfg=None, norm_cfg={'type': 'BN'}, **kwargs)[source]

Bbox head used in Double-Head R-CNN

                                  /-> cls
              /-> shared convs ->
                                  \-> reg
roi features
                                  /-> cls
              \-> shared fc    ->
                                  \-> reg
forward(x_cls, x_reg)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.roi_heads.FCNMaskHead(num_convs=4, roi_feat_size=14, in_channels=256, conv_kernel_size=3, conv_out_channels=256, num_classes=80, class_agnostic=False, upsample_cfg={'scale_factor': 2, 'type': 'deconv'}, conv_cfg=None, norm_cfg=None, loss_mask={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_mask': True})[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_seg_masks(mask_pred, det_bboxes, det_labels, rcnn_test_cfg, ori_shape, scale_factor, rescale)[source]

Get segmentation masks from mask_pred and bboxes.

Parameters:
  • mask_pred (Tensor or ndarray) – shape (n, #class, h, w). For single-scale testing, mask_pred is the direct output of model, whose type is Tensor, while for multi-scale testing, it will be converted to numpy array outside of this method.
  • det_bboxes (Tensor) – shape (n, 4/5)
  • det_labels (Tensor) – shape (n, )
  • img_shape (Tensor) – shape (3, )
  • rcnn_test_cfg (dict) – rcnn testing config
  • ori_shape – original image size
Returns:

encoded masks

Return type:

list[list]

class mmdet.models.roi_heads.HTCMaskHead(with_conv_res=True, *args, **kwargs)[source]
forward(x, res_feat=None, return_logits=True, return_feat=True)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.roi_heads.FusedSemanticHead(num_ins, fusion_level, num_convs=4, in_channels=256, conv_out_channels=256, num_classes=183, ignore_label=255, loss_weight=0.2, conv_cfg=None, norm_cfg=None)[source]

Multi-level fused semantic segmentation head.

in_1 -> 1x1 conv ---
                    |
in_2 -> 1x1 conv -- |
                   ||
in_3 -> 1x1 conv - ||
                  |||                  /-> 1x1 conv (mask prediction)
in_4 -> 1x1 conv -----> 3x3 convs (*4)
                    |                  \-> 1x1 conv (feature)
in_5 -> 1x1 conv ---
forward(feats)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.roi_heads.GridHead(grid_points=9, num_convs=8, roi_feat_size=14, in_channels=256, conv_kernel_size=3, point_feat_channels=64, deconv_kernel_size=4, class_agnostic=False, loss_grid={'loss_weight': 15, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, conv_cfg=None, norm_cfg={'num_groups': 36, 'type': 'GN'})[source]
calc_sub_regions()[source]

Compute point specific representation regions.

See Grid R-CNN Plus (https://arxiv.org/abs/1906.05688) for details.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.roi_heads.MaskIoUHead(num_convs=4, num_fcs=2, roi_feat_size=14, in_channels=256, conv_out_channels=256, fc_out_channels=1024, num_classes=80, loss_iou={'loss_weight': 0.5, 'type': 'MSELoss'})[source]

Mask IoU Head.

This head predicts the IoU of predicted masks and corresponding gt masks.

forward(mask_feat, mask_pred)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_mask_scores(mask_iou_pred, det_bboxes, det_labels)[source]

Get the mask scores.

mask_score = bbox_score * mask_iou
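
The following is a minimal illustrative sketch of this combination (not the library implementation; shapes are assumptions, and the names mask_iou_pred, det_bboxes and det_labels follow the documented signature):

>>> import torch
>>> num_dets, num_classes = 4, 80
>>> mask_iou_pred = torch.rand(num_dets, num_classes)  # predicted mask IoU per class
>>> det_bboxes = torch.rand(num_dets, 5)               # (x1, y1, x2, y2, score)
>>> det_labels = torch.randint(0, num_classes, (num_dets,))
>>> bbox_scores = det_bboxes[:, -1]
>>> mask_ious = mask_iou_pred[torch.arange(num_dets), det_labels]
>>> mask_scores = bbox_scores * mask_ious              # mask_score = bbox_score * mask_iou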

get_targets(sampling_results, gt_masks, mask_pred, mask_targets, rcnn_train_cfg)[source]

Compute target of mask IoU.

Mask IoU target is the IoU of the predicted mask (inside a bbox) and the gt mask of the corresponding gt instance (the whole instance). The intersection area is computed inside the bbox, and the gt mask area is computed in two steps: first we compute the gt area inside the bbox, then divide it by the ratio of the gt area inside the bbox to the gt area of the whole instance.

Parameters:
  • sampling_results (list[SamplingResult]) – sampling results.
  • gt_masks (BitmapMask | PolygonMask) – Gt masks (the whole instance) of each image, with the same shape as the input image.
  • mask_pred (Tensor) – Predicted masks of each positive proposal, shape (num_pos, h, w).
  • mask_targets (Tensor) – Gt mask of each positive proposal, binary map of the shape (num_pos, h, w).
  • rcnn_train_cfg (dict) – Training config for R-CNN part.
Returns:

mask iou target (length == num positive).

Return type:

Tensor
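
A minimal sketch of the computation described above (illustrative only; all shapes and the area_ratios tensor are assumptions for demonstration):

>>> import torch
>>> num_pos, h, w = 2, 28, 28
>>> mask_pred = (torch.rand(num_pos, h, w) > 0.5).float()     # predicted masks inside the bbox
>>> mask_targets = (torch.rand(num_pos, h, w) > 0.5).float()  # gt masks cropped to the bbox
>>> # area_ratios: (gt area inside the bbox) / (gt area of the whole instance)
>>> area_ratios = torch.rand(num_pos).clamp(min=0.1)
>>> overlap = (mask_pred * mask_targets).sum((-1, -2))
>>> pred_areas = mask_pred.sum((-1, -2))
>>> gt_full_areas = mask_targets.sum((-1, -2)) / area_ratios  # step 2: recover whole-instance area
>>> mask_iou_targets = overlap / (pred_areas + gt_full_areas - overlap + 1e-6)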

class mmdet.models.roi_heads.SingleRoIExtractor(roi_layer, out_channels, featmap_strides, finest_scale=56)[source]

Extract RoI features from a single level feature map.

If there are multiple input feature levels, each RoI is mapped to a level according to its scale. The mapping rule is proposed in FPN.

Parameters:
  • roi_layer (dict) – Specify RoI layer type and arguments.
  • out_channels (int) – Output channels of RoI layers.
  • featmap_strides (list[int]) – Strides of input feature maps.
  • finest_scale (int) – Scale threshold of mapping to level 0. Default: 56.
forward(feats, rois, roi_scale_factor=None)[source]

Forward function

map_roi_levels(rois, num_levels)[source]

Map rois to corresponding feature levels by scales.

  • scale < finest_scale * 2: level 0
  • finest_scale * 2 <= scale < finest_scale * 4: level 1
  • finest_scale * 4 <= scale < finest_scale * 8: level 2
  • scale >= finest_scale * 8: level 3
Parameters:
  • rois (Tensor) – Input RoIs, shape (k, 5).
  • num_levels (int) – Total level number.
Returns:

Level index (0-based) of each RoI, shape (k, )

Return type:

Tensor
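
A minimal sketch of this scale-to-level mapping (illustrative only; an equivalent logarithmic form of the thresholds above, assuming finest_scale=56 and 4 feature levels):

>>> import torch
>>> finest_scale, num_levels = 56, 4
>>> rois = torch.tensor([[0., 10., 10., 60., 90.],
...                      [0., 0., 0., 500., 400.]])
>>> scale = torch.sqrt((rois[:, 3] - rois[:, 1]) * (rois[:, 4] - rois[:, 2]))
>>> target_lvls = torch.floor(torch.log2(scale / finest_scale + 1e-6))
>>> target_lvls = target_lvls.clamp(min=0, max=num_levels - 1).long()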

class mmdet.models.roi_heads.PISARoIHead(bbox_roi_extractor=None, bbox_head=None, mask_roi_extractor=None, mask_head=None, shared_head=None, train_cfg=None, test_cfg=None)[source]

StandardRoIHead with PrIme Sample Attention (PISA), described in PISA.

forward_train(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None)[source]

Parameters:
  • x (list[Tensor]) – List of multi-level img features.
  • img_metas (list[dict]) – List of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
  • proposals (list[Tensors]) – List of region proposals.
  • gt_bboxes (list[Tensor]) – Each item are the truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – Class indices corresponding to each box
  • gt_bboxes_ignore (list[Tensor], optional) – Specify which bounding boxes can be ignored when computing the loss.
  • gt_masks (None | Tensor) – True segmentation masks for each box used if the architecture supports a segmentation task.
Returns:

a dictionary of loss components

Return type:

dict[str, Tensor]

class mmdet.models.roi_heads.PointRendRoIHead(point_head, *args, **kwargs)[source]

PointRend.

aug_test_mask(feats, img_metas, det_bboxes, det_labels)[source]

Test for mask head with test time augmentation.

init_point_head(point_head)[source]

Initialize point_head

init_weights(pretrained)[source]

Initialize the weights in head

Parameters: pretrained (str, optional) – Path to pre-trained weights.
simple_test_mask(x, img_metas, det_bboxes, det_labels, rescale=False)[source]

Obtain mask prediction without augmentation

class mmdet.models.roi_heads.MaskPointHead(num_classes, num_fcs=3, in_channels=256, fc_channels=256, class_agnostic=False, coarse_pred_each_layer=True, conv_cfg={'type': 'Conv1d'}, norm_cfg=None, act_cfg={'type': 'ReLU'}, loss_point={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_mask': True})[source]

A mask point head use in PointRend.

MaskPointHead uses a shared multi-layer perceptron (equivalent to nn.Conv1d) to predict the logit of input points. The fine-grained feature and coarse feature will be concatenated together for prediction.

Parameters:
  • num_fcs (int) – Number of fc layers in the head. Default: 3.
  • in_channels (int) – Number of input channels. Default: 256.
  • fc_channels (int) – Number of fc channels. Default: 256.
  • num_classes (int) – Number of classes for logits. Default: 80.
  • class_agnostic (bool) – Whether use class agnostic classification. If so, the output channels of logits will be 1. Default: False.
  • coarse_pred_each_layer (bool) – Whether concatenate coarse feature with the output of each fc layer. Default: True.
  • conv_cfg (dict | None) – Dictionary to construct and config conv layer. Default: dict(type=’Conv1d’))
  • norm_cfg (dict | None) – Dictionary to construct and config norm layer. Default: None.
  • loss_point (dict) – Dictionary to construct and config loss layer of point head. Default: dict(type=’CrossEntropyLoss’, use_mask=True, loss_weight=1.0).
forward(fine_grained_feats, coarse_feats)[source]

Classify each point based on fine-grained and coarse features.

Parameters:
  • fine_grained_feats (Tensor) – Fine grained feature sampled from FPN, shape (num_rois, in_channels, num_points).
  • coarse_feats (Tensor) – Coarse feature sampled from CoarseMaskHead, shape (num_rois, num_classes, num_points).
Returns:

Point classification results, shape (num_rois, num_class, num_points).

Return type:

Tensor
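
A minimal sketch of the concatenation described above (illustrative only; the channel sizes are assumptions and the 1x1 Conv1d stands in for the shared MLP):

>>> import torch
>>> num_rois, in_channels, num_classes, num_points = 2, 256, 80, 14
>>> fine_grained_feats = torch.rand(num_rois, in_channels, num_points)
>>> coarse_feats = torch.rand(num_rois, num_classes, num_points)
>>> x = torch.cat([fine_grained_feats, coarse_feats], dim=1)
>>> fc = torch.nn.Conv1d(in_channels + num_classes, 256, kernel_size=1)
>>> out = fc(x)  # shape (num_rois, 256, num_points)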

get_roi_rel_points_test(mask_pred, pred_label, cfg)[source]

Get num_points most uncertain points during test.

Parameters:
  • mask_pred (Tensor) – A tensor of shape (num_rois, num_classes, mask_height, mask_width) for class-specific or class-agnostic prediction.
  • pred_label (list) – The predicted class for each instance.
  • cfg (dict) – Testing config of point head.
Returns:

point_indices (Tensor): A tensor of shape (num_rois, num_points) that contains indices from [0, mask_height x mask_width) of the most uncertain points.

point_coords (Tensor): A tensor of shape (num_rois, num_points, 2) that contains [0, 1] x [0, 1] normalized coordinates of the most uncertain points from the [mask_height, mask_width] grid.

get_roi_rel_points_train(mask_pred, labels, cfg)[source]

Get num_points most uncertain points with random points during train.

Sample points in [0, 1] x [0, 1] coordinate space based on their uncertainty. The uncertainties are calculated for each point using ‘_get_uncertainty()’ function that takes point’s logit prediction as input.

Parameters:
  • mask_pred (Tensor) – A tensor of shape (num_rois, num_classes, mask_height, mask_width) for class-specific or class-agnostic prediction.
  • labels (list) – The ground truth class for each instance.
  • cfg (dict) – Training config of point head.
Returns:

A tensor of shape (num_rois, num_points, 2) that contains the coordinates of the sampled points.

Return type:

point_coords (Tensor)

get_targets(rois, rel_roi_points, sampling_results, gt_masks, cfg)[source]

Get training targets of MaskPointHead for all images.

Parameters:
  • rois (Tensor) – Region of Interest, shape (num_rois, 5).
  • rel_roi_points – Points coordinates relative to RoI, shape (num_rois, num_points, 2).
  • sampling_results (SamplingResult) – Sampling result after sampling and assignment.
  • gt_masks (Tensor) – Ground truth segmentation masks of corresponding boxes, shape (num_rois, height, width).
  • cfg (dict) – Training cfg.
Returns:

Point target, shape (num_rois, num_points).

Return type:

Tensor

init_weights()[source]

Initialize the last classification layer of MaskPointHead; conv layers are already initialized by ConvModule.

loss(point_pred, point_targets, labels)[source]

Calculate loss for MaskPointHead

Parameters:
  • point_pred (Tensor) – Point prediction result, shape (num_rois, num_classes, num_points).
  • point_targets (Tensor) – Point targets, shape (num_roi, num_points).
  • labels (Tensor) – Class label of corresponding boxes, shape (num_rois, )
Returns:

a dictionary of point loss components

Return type:

dict[str, Tensor]

class mmdet.models.roi_heads.CoarseMaskHead(num_convs=0, num_fcs=2, fc_out_channels=1024, downsample_factor=2, *arg, **kwarg)[source]

Coarse mask head used in PointRend.

Compared with the standard FCNMaskHead, CoarseMaskHead will downsample the input feature map instead of upsampling it.

Parameters:
  • num_convs (int) – Number of conv layers in the head. Default: 0.
  • num_fcs (int) – Number of fc layers in the head. Default: 2.
  • fc_out_channels (int) – Number of output channels of fc layer. Default: 1024.
  • downsample_factor (int) – The factor that feature map is downsampled by. Default: 2.
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.roi_heads.DynamicRoIHead(**kwargs)[source]

RoI head for Dynamic R-CNN.

forward_train(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None)[source]
Parameters:
  • x (list[Tensor]) – list of multi-level img features.
  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
  • proposals (list[Tensors]) – list of region proposals.
  • gt_bboxes (list[Tensor]) – each item are the truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
  • gt_masks (None | Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task.
Returns:

a dictionary of loss components

Return type:

dict[str, Tensor]

update_hyperparameters()[source]

Update hyperparameters like iou_thr and SmoothL1 beta based on the training statistics.

Returns: the updated iou_thr and SmoothL1 beta
Return type: tuple[float]

losses

mmdet.models.losses.accuracy(pred, target, topk=1)[source]

Calculate accuracy according to the prediction and target

Parameters:
  • pred (torch.Tensor) – The model prediction.
  • target (torch.Tensor) – The target of each prediction
  • topk (int | tuple[int], optional) – If the predictions in topk matches the target, the predictions will be regarded as correct ones. Defaults to 1.
Returns:

If the input topk is a single integer, the function will return a single float as accuracy. If topk is a tuple containing multiple integers, the function will return a tuple containing accuracies of each topk number.

Return type:

float | tuple[float]
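
A minimal sketch of top-k accuracy on toy data (illustrative only; it shows the counting rule, not the exact return format of this function):

>>> import torch
>>> pred = torch.tensor([[0.1, 0.7, 0.2],
...                      [0.6, 0.3, 0.1]])
>>> target = torch.tensor([1, 2])
>>> topk = 2
>>> _, top_idx = pred.topk(topk, dim=1)
>>> correct = top_idx.eq(target[:, None]).any(dim=1)
>>> acc = correct.float().mean() * 100.0  # 50.0: only the first sample hits its top-2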

class mmdet.models.losses.Accuracy(topk=(1, ))[source]
forward(pred, target)[source]

Forward function to calculate accuracy

Parameters:
  • pred (torch.Tensor) – Prediction of models.
  • target (torch.Tensor) – Target for each prediction.
Returns:

The accuracies under different topk criterions.

Return type:

tuple[float]

mmdet.models.losses.cross_entropy(pred, label, weight=None, reduction='mean', avg_factor=None, class_weight=None)[source]

Calculate the CrossEntropy loss.

Parameters:
  • pred (torch.Tensor) – The prediction with shape (N, C), C is the number of classes.
  • label (torch.Tensor) – The learning label of the prediction.
  • weight (torch.Tensor, optional) – Sample-wise loss weight.
  • reduction (str, optional) – The method used to reduce the loss.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • class_weight (list[float], optional) – The weight for each class.
Returns:

The calculated loss

Return type:

torch.Tensor

mmdet.models.losses.binary_cross_entropy(pred, label, weight=None, reduction='mean', avg_factor=None, class_weight=None)[source]

Calculate the binary CrossEntropy loss.

Parameters:
  • pred (torch.Tensor) – The prediction with shape (N, 1).
  • label (torch.Tensor) – The learning label of the prediction.
  • weight (torch.Tensor, optional) – Sample-wise loss weight.
  • reduction (str, optional) – The method used to reduce the loss. Options are “none”, “mean” and “sum”.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • class_weight (list[float], optional) – The weight for each class.
Returns:

The calculated loss

Return type:

torch.Tensor

mmdet.models.losses.mask_cross_entropy(pred, target, label, reduction='mean', avg_factor=None, class_weight=None)[source]

Calculate the CrossEntropy loss for masks.

Parameters:
  • pred (torch.Tensor) – The prediction with shape (N, C), C is the number of classes.
  • target (torch.Tensor) – The learning label of the prediction.
  • label (torch.Tensor) – label indicates the class label of the mask's corresponding object. This will be used to select the mask of the class to which the object belongs when the mask prediction is not class-agnostic.
  • reduction (str, optional) – The method used to reduce the loss. Options are “none”, “mean” and “sum”.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • class_weight (list[float], optional) – The weight for each class.
Returns:

The calculated loss

Return type:

torch.Tensor
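
A minimal sketch of the class-selection step described for label (illustrative only; it assumes per-class mask logits of shape (N, C, h, w) for demonstration, and the BCE call stands in for the actual loss computation):

>>> import torch
>>> import torch.nn.functional as F
>>> num_rois, num_classes, h, w = 2, 80, 28, 28
>>> pred = torch.randn(num_rois, num_classes, h, w)      # per-class mask logits
>>> target = (torch.rand(num_rois, h, w) > 0.5).float()  # binary gt masks
>>> label = torch.randint(0, num_classes, (num_rois,))
>>> pred_slice = pred[torch.arange(num_rois), label]     # pick the mask of each object's class
>>> loss = F.binary_cross_entropy_with_logits(pred_slice, target, reduction='mean')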

class mmdet.models.losses.CrossEntropyLoss(use_sigmoid=False, use_mask=False, reduction='mean', class_weight=None, loss_weight=1.0)[source]
forward(cls_score, label, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function.

Parameters:
  • cls_score (torch.Tensor) – The prediction.
  • label (torch.Tensor) – The learning label of the prediction.
  • weight (torch.Tensor, optional) – Sample-wise loss weight.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • reduction (str, optional) – The method used to reduce the loss. Options are “none”, “mean” and “sum”.
Returns:

The calculated loss

Return type:

torch.Tensor
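
A minimal usage sketch (assuming mmdet is installed; the random inputs are placeholders):

>>> import torch
>>> from mmdet.models.losses import CrossEntropyLoss
>>> loss_cls = CrossEntropyLoss(use_sigmoid=False, loss_weight=1.0)
>>> cls_score = torch.randn(4, 80)
>>> label = torch.randint(0, 80, (4,))
>>> loss = loss_cls(cls_score, label)  # scalar tensor, mean-reduced by default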

mmdet.models.losses.sigmoid_focal_loss(pred, target, weight=None, gamma=2.0, alpha=0.25, reduction='mean', avg_factor=None)[source]

A wrapper of the CUDA version of Focal Loss.

Parameters:
  • pred (torch.Tensor) – The prediction with shape (N, C), C is the number of classes.
  • target (torch.Tensor) – The learning label of the prediction.
  • weight (torch.Tensor, optional) – Sample-wise loss weight.
  • gamma (float, optional) – The gamma for calculating the modulating factor. Defaults to 2.0.
  • alpha (float, optional) – A balanced form for Focal Loss. Defaults to 0.25.
  • reduction (str, optional) – The method used to reduce the loss into a scalar. Defaults to ‘mean’. Options are “none”, “mean” and “sum”.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
class mmdet.models.losses.FocalLoss(use_sigmoid=True, gamma=2.0, alpha=0.25, reduction='mean', loss_weight=1.0)[source]
forward(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]

Forward function.

Parameters:
  • pred (torch.Tensor) – The prediction.
  • target (torch.Tensor) – The learning label of the prediction.
  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Options are “none”, “mean” and “sum”.
Returns:

The calculated loss

Return type:

torch.Tensor

mmdet.models.losses.smooth_l1_loss(pred, target, beta=1.0)[source]

Smooth L1 loss

Parameters:
  • pred (torch.Tensor) – The prediction.
  • target (torch.Tensor) – The learning target of the prediction.
  • beta (float, optional) – The threshold in the piecewise function. Defaults to 1.0.
Returns:

Calculated loss

Return type:

torch.Tensor
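
A minimal sketch of the piecewise function controlled by beta (illustrative only, element-wise with no reduction):

>>> import torch
>>> def smooth_l1(pred, target, beta=1.0):
...     # 0.5 * diff**2 / beta where |diff| < beta, otherwise |diff| - 0.5 * beta
...     diff = (pred - target).abs()
...     return torch.where(diff < beta, 0.5 * diff * diff / beta, diff - 0.5 * beta)
>>> smooth_l1(torch.tensor([0., 2.]), torch.tensor([0.5, 0.]))
tensor([0.1250, 1.5000])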

class mmdet.models.losses.SmoothL1Loss(beta=1.0, reduction='mean', loss_weight=1.0)[source]

Smooth L1 loss

Parameters:
  • beta (float, optional) – The threshold in the piecewise function. Defaults to 1.0.
  • reduction (str, optional) – The method to reduce the loss. Options are “none”, “mean” and “sum”. Defaults to “mean”.
  • loss_weight (float, optional) – The weight of loss.
forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function

Parameters:
  • pred (torch.Tensor) – The prediction.
  • target (torch.Tensor) – The learning target of the prediction.
  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.
mmdet.models.losses.balanced_l1_loss(pred, target, beta=1.0, alpha=0.5, gamma=1.5, reduction='mean')[source]

Calculate balanced L1 loss

Please see Libra R-CNN for details.

Parameters:
  • pred (torch.Tensor) – The prediction with shape (N, 4).
  • target (torch.Tensor) – The learning target of the prediction with shape (N, 4).
  • beta (float) – The loss is a piecewise function of prediction and target and beta serves as a threshold for the difference between the prediction and target. Defaults to 1.0.
  • alpha (float) – The denominator alpha in the balanced L1 loss. Defaults to 0.5.
  • gamma (float) – The gamma in the balanced L1 loss. Defaults to 1.5.
  • reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.
Returns:

The calculated loss

Return type:

torch.Tensor

class mmdet.models.losses.BalancedL1Loss(alpha=0.5, gamma=1.5, beta=1.0, reduction='mean', loss_weight=1.0)[source]

Balanced L1 Loss

arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019)

Parameters:
  • alpha (float) – The denominator alpha in the balanced L1 loss. Defaults to 0.5.
  • gamma (float) – The gamma in the balanced L1 loss. Defaults to 1.5.
  • beta (float, optional) – The loss is a piecewise function of prediction and target. beta serves as a threshold for the difference between the prediction and target. Defaults to 1.0.
  • reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.
  • loss_weight (float, optional) – The weight of the loss. Defaults to 1.0
forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function of loss

Parameters:
  • pred (torch.Tensor) – The prediction with shape (N, 4).
  • target (torch.Tensor) – The learning target of the prediction with shape (N, 4).
  • weight (torch.Tensor, optional) – Sample-wise loss weight with shape (N, ).
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Options are “none”, “mean” and “sum”.
Returns:

The calculated loss

Return type:

torch.Tensor

mmdet.models.losses.mse_loss(pred, target)[source]

Wrapper of MSE loss.

class mmdet.models.losses.MSELoss(reduction='mean', loss_weight=1.0)[source]
Parameters:
  • reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.
  • loss_weight (float, optional) – The weight of the loss. Defaults to 1.0
forward(pred, target, weight=None, avg_factor=None)[source]

Forward function of loss

Parameters:
  • pred (torch.Tensor) – The prediction.
  • target (torch.Tensor) – The learning target of the prediction.
  • weight (torch.Tensor, optional) – Weight of the loss for each prediction. Defaults to None.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
Returns:

The calculated loss

Return type:

torch.Tensor

mmdet.models.losses.iou_loss(pred, target, eps=1e-06)[source]

IoU loss.

Computing the IoU loss between a set of predicted bboxes and target bboxes. The loss is calculated as negative log of IoU.

Parameters:
  • pred (torch.Tensor) – Predicted bboxes of format (x1, y1, x2, y2), shape (n, 4).
  • target (torch.Tensor) – Corresponding gt bboxes, shape (n, 4).
  • eps (float) – Eps to avoid log(0).
Returns:

Loss tensor.

Return type:

torch.Tensor
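
A minimal sketch of the negative-log-IoU computation for aligned box pairs (illustrative only):

>>> import torch
>>> def neg_log_iou(pred, target, eps=1e-6):
...     # element-wise IoU between corresponding (x1, y1, x2, y2) boxes
...     lt = torch.max(pred[:, :2], target[:, :2])
...     rb = torch.min(pred[:, 2:], target[:, 2:])
...     wh = (rb - lt).clamp(min=0)
...     overlap = wh[:, 0] * wh[:, 1]
...     area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
...     area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
...     ious = overlap / (area_p + area_t - overlap + eps)
...     return -ious.clamp(min=eps).log()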

mmdet.models.losses.bounded_iou_loss(pred, target, beta=0.2, eps=0.001)[source]

Improving Object Localization with Fitness NMS and Bounded IoU Loss, https://arxiv.org/abs/1711.00164.

Parameters:
  • pred (torch.Tensor) – Predicted bboxes.
  • target (torch.Tensor) – Target bboxes.
  • beta (float) – beta parameter in the smooth L1 loss.
  • eps (float) – eps to avoid NaN.
class mmdet.models.losses.IoULoss(eps=1e-06, reduction='mean', loss_weight=1.0)[source]

Computing the IoU loss between a set of predicted bboxes and target bboxes.

Parameters:
  • eps (float) – Eps to avoid log(0).
  • reduction (str) – Options are “none”, “mean” and “sum”.
  • loss_weight (float) – Weight of loss.
forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function

Parameters:
  • pred (torch.Tensor) – The prediction.
  • target (torch.Tensor) – The learning target of the prediction.
  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None. Options are “none”, “mean” and “sum”.
class mmdet.models.losses.BoundedIoULoss(beta=0.2, eps=0.001, reduction='mean', loss_weight=1.0)[source]
forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.losses.GIoULoss(eps=1e-06, reduction='mean', loss_weight=1.0)[source]
forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.losses.GHMC(bins=10, momentum=0, use_sigmoid=True, loss_weight=1.0)[source]

GHM Classification Loss.

Details of the theorem can be viewed in the paper “Gradient Harmonized Single-stage Detector”. https://arxiv.org/abs/1811.05181

Parameters:
  • bins (int) – Number of the unit regions for distribution calculation.
  • momentum (float) – The parameter for moving average.
  • use_sigmoid (bool) – Can only be true for BCE based loss now.
  • loss_weight (float) – The weight of the total GHM-C loss.
forward(pred, target, label_weight, *args, **kwargs)[source]

Calculate the GHM-C loss.

Parameters:
  • pred (float tensor of size [batch_num, class_num]) – The direct prediction of classification fc layer.
  • target (float tensor of size [batch_num, class_num]) – Binary class target for each sample.
  • label_weight (float tensor of size [batch_num, class_num]) – the value is 1 if the sample is valid and 0 if ignored.
Returns:

The gradient harmonized loss.

class mmdet.models.losses.GHMR(mu=0.02, bins=10, momentum=0, loss_weight=1.0)[source]

GHM Regression Loss.

Details of the theorem can be viewed in the paper “Gradient Harmonized Single-stage Detector” https://arxiv.org/abs/1811.05181

Parameters:
  • mu (float) – The parameter for the Authentic Smooth L1 loss.
  • bins (int) – Number of the unit regions for distribution calculation.
  • momentum (float) – The parameter for moving average.
  • loss_weight (float) – The weight of the total GHM-R loss.
forward(pred, target, label_weight, avg_factor=None)[source]

Calculate the GHM-R loss.

Parameters:
  • pred (float tensor of size [batch_num, 4 (* class_num)]) – The prediction of box regression layer. Channel number can be 4 or 4 * class_num depending on whether it is class-agnostic.
  • target (float tensor of size [batch_num, 4 (* class_num)]) – The target regression values with the same size of pred.
  • label_weight (float tensor of size [batch_num, 4 (* class_num)]) – The weight of each sample, 0 if ignored.
Returns:

The gradient harmonized loss.

mmdet.models.losses.reduce_loss(loss, reduction)[source]

Reduce loss as specified.

Parameters:
  • loss (Tensor) – Elementwise loss tensor.
  • reduction (str) – Options are “none”, “mean” and “sum”.
Returns:

Reduced loss tensor.

Return type:

Tensor

mmdet.models.losses.weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None)[source]

Apply element-wise weight and reduce loss.

Parameters:
  • loss (Tensor) – Element-wise loss.
  • weight (Tensor) – Element-wise weights.
  • reduction (str) – Same as built-in losses of PyTorch.
  • avg_factor (float) – Average factor when computing the mean of losses.
Returns:

Processed loss values.

Return type:

Tensor

mmdet.models.losses.weighted_loss(loss_func)[source]

Create a weighted version of a given loss function.

To use this decorator, the loss function must have the signature like loss_func(pred, target, **kwargs). The function only needs to compute element-wise loss without any reduction. This decorator will add weight and reduction arguments to the function. The decorated function will have the signature like loss_func(pred, target, weight=None, reduction=’mean’, avg_factor=None, **kwargs).

Example:
>>> import torch
>>> @weighted_loss
>>> def l1_loss(pred, target):
>>>     return (pred - target).abs()
>>> pred = torch.Tensor([0, 2, 3])
>>> target = torch.Tensor([1, 1, 1])
>>> weight = torch.Tensor([1, 0, 1])
>>> l1_loss(pred, target)
tensor(1.3333)
>>> l1_loss(pred, target, weight)
tensor(1.)
>>> l1_loss(pred, target, reduction='none')
tensor([1., 1., 2.])
>>> l1_loss(pred, target, weight, avg_factor=2)
tensor(1.5000)
class mmdet.models.losses.L1Loss(reduction='mean', loss_weight=1.0)[source]

L1 loss

Parameters:
  • reduction (str, optional) – The method to reduce the loss. Options are “none”, “mean” and “sum”.
  • loss_weight (float, optional) – The weight of loss.
forward(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]

Forward function

Parameters:
  • pred (torch.Tensor) – The prediction.
  • target (torch.Tensor) – The learning target of the prediction.
  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.
mmdet.models.losses.l1_loss(pred, target)[source]

L1 loss

Parameters:
  • pred (torch.Tensor) – The prediction.
  • target (torch.Tensor) – The learning target of the prediction.
Returns:

Calculated loss

Return type:

torch.Tensor

mmdet.models.losses.isr_p(cls_score, bbox_pred, bbox_targets, rois, sampling_results, loss_cls, bbox_coder, k=2, bias=0, num_class=80)[source]

Importance-based Sample Reweighting (ISR_P), positive part.

Parameters:
  • cls_score (Tensor) – Predicted classification scores.
  • bbox_pred (Tensor) – Predicted bbox deltas.
  • bbox_targets (tuple[Tensor]) – A tuple of bbox targets, i.e., labels, label_weights, bbox_targets, bbox_weights, respectively.
  • rois (Tensor) – Anchors (single_stage) in shape (n, 4) or RoIs (two_stage) in shape (n, 5).
  • sampling_results (obj) – Sampling results.
  • loss_cls (func) – Classification loss func of the head.
  • bbox_coder (obj) – BBox coder of the head.
  • k (float) – Power of the non-linear mapping.
  • bias (float) – Shift of the non-linear mapping.
  • num_class (int) – Number of classes, default: 80.
Returns:

labels, imp_based_label_weights, bbox_targets, bbox_target_weights

Return type:

tuple([Tensor])

mmdet.models.losses.carl_loss(cls_score, labels, bbox_pred, bbox_targets, loss_bbox, k=1, bias=0.2, avg_factor=None, sigmoid=False, num_class=80)[source]

Classification-Aware Regression Loss (CARL).

Parameters:
  • cls_score (Tensor) – Predicted classification scores.
  • labels (Tensor) – Targets of classification.
  • bbox_pred (Tensor) – Predicted bbox deltas.
  • bbox_targets (Tensor) – Target of bbox regression.
  • loss_bbox (func) – Regression loss func of the head.
  • bbox_coder (obj) – BBox coder of the head.
  • k (float) – Power of the non-linear mapping.
  • bias (float) – Shift of the non-linear mapping.
  • avg_factor (int) – Average factor used in regression loss.
  • sigmoid (bool) – Activation of the classification score.
  • num_class (int) – Number of classes, default: 80.
Returns:

CARL loss dict.

Return type:

dict

class mmdet.models.losses.AssociativeEmbeddingLoss(pull_weight=0.25, push_weight=0.25)[source]

Associative Embedding Loss.

More details can be found in Associative Embedding and CornerNet. Code is modified from kp_utils.py.

Parameters:
  • pull_weight (float) – Loss weight for corners from same object.
  • push_weight (float) – Loss weight for corners from different object.
forward(pred, target, match)[source]

Forward function

class mmdet.models.losses.GaussianFocalLoss(alpha=2.0, gamma=4.0, reduction='mean', loss_weight=1.0)[source]

GaussianFocalLoss is a variant of focal loss.

More details can be found in the paper. Code is modified from kp_utils.py. Please notice that the target in GaussianFocalLoss is a gaussian heatmap, not a 0/1 binary target.

Parameters:
  • alpha (float) – Power of prediction.
  • gamma (float) – Power of target for negative samples.
  • reduction (str) – Options are “none”, “mean” and “sum”.
  • loss_weight (float) – Loss weight of current loss.
forward(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]

Forward function

Parameters:
  • pred (torch.Tensor) – The prediction.
  • target (torch.Tensor) – The learning target of the prediction in gaussian distribution.
  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.
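
A minimal sketch of a CornerNet-style gaussian focal loss consistent with the description above (illustrative only; it assumes pred already holds sigmoid probabilities and target is the gaussian heatmap):

>>> import torch
>>> def gaussian_focal_loss(pred, target, alpha=2.0, gamma=4.0, eps=1e-12):
...     pos_weights = target.eq(1).float()     # peaks of the gaussian heatmap are positives
...     neg_weights = (1 - target).pow(gamma)  # soft down-weighting near the peaks
...     pos_loss = -(pred + eps).log() * (1 - pred).pow(alpha) * pos_weights
...     neg_loss = -(1 - pred + eps).log() * pred.pow(alpha) * neg_weights
...     return pos_loss + neg_loss
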
class mmdet.models.losses.QualityFocalLoss(use_sigmoid=True, beta=2.0, reduction='mean', loss_weight=1.0)[source]

Quality Focal Loss (QFL) is a variant of Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection https://arxiv.org/abs/2006.04388

Parameters:
  • use_sigmoid (bool) – Whether sigmoid operation is conducted in QFL. Defaults to True.
  • beta (float) – The beta parameter for calculating the modulating factor. Defaults to 2.0.
  • reduction (str) – Options are “none”, “mean” and “sum”.
  • loss_weight (float) – Loss weight of current loss.
forward(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]

Forward function

Parameters:
  • pred (torch.Tensor) – Predicted joint representation of classification and quality (IoU) estimation with shape (N, C), C is the number of classes.
  • target (tuple([torch.Tensor])) – Target category label with shape (N,) and target quality label with shape (N,).
  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.
class mmdet.models.losses.DistributionFocalLoss(reduction='mean', loss_weight=1.0)[source]

Distribution Focal Loss (DFL) is a variant of Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection https://arxiv.org/abs/2006.04388

Parameters:
  • reduction (str) – Options are “none”, “mean” and “sum”.
  • loss_weight (float) – Loss weight of current loss.
forward(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]

Forward function

Parameters:
  • pred (torch.Tensor) – Predicted general distribution of bounding boxes (before softmax) with shape (N, n+1), n is the max value of the integral set {0, …, n} in paper.
  • target (torch.Tensor) – Target distance label for bounding boxes with shape (N,).
  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.
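
A minimal sketch of the distribution focal loss described above (illustrative only; it assumes each continuous target lies in [0, n) so its two nearest integer bins are valid indices):

>>> import torch
>>> import torch.nn.functional as F
>>> def distribution_focal_loss(pred, label):
...     # spread each continuous target over its two neighbouring integer bins
...     dis_left = label.long()
...     dis_right = dis_left + 1
...     weight_left = dis_right.float() - label
...     weight_right = label - dis_left.float()
...     return (F.cross_entropy(pred, dis_left, reduction='none') * weight_left
...             + F.cross_entropy(pred, dis_right, reduction='none') * weight_right)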