API Documentation

mmdet.apis

mmdet.apis.set_random_seed(seed, deterministic=False)[source]

Set random seed.

Parameters:
  • seed (int) – Seed to be used.
  • deterministic (bool) – Whether to set the deterministic option for CUDNN backend, i.e., set torch.backends.cudnn.deterministic to True and torch.backends.cudnn.benchmark to False. Default: False.
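
Example

A minimal usage sketch:

>>> from mmdet.apis import set_random_seed
>>> set_random_seed(0, deterministic=True)
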
mmdet.apis.init_detector(config, checkpoint=None, device='cuda:0')[source]

Initialize a detector from config file.

Parameters:
  • config (str or mmcv.Config) – Config file path or the config object.
  • checkpoint (str, optional) – Checkpoint path. If left as None, the model will not load any weights.
Returns:

The constructed detector.

Return type:

nn.Module

mmdet.apis.async_inference_detector(model, img)[source]

Async inference image(s) with the detector.

Parameters:
  • model (nn.Module) – The loaded detector.
  • imgs (str/ndarray or list[str/ndarray]) – Either image files or loaded images.
Returns:

Awaitable detection results.

mmdet.apis.inference_detector(model, img)[source]

Inference image(s) with the detector.

Parameters:
  • model (nn.Module) – The loaded detector.
  • imgs (str/ndarray or list[str/ndarray]) – Either image files or loaded images.
Returns:

If imgs is a str, a generator will be returned, otherwise return the detection results directly.

mmdet.apis.show_result_pyplot(model, img, result, score_thr=0.3, fig_size=(15, 10))[source]

Visualize the detection results on the image.

Parameters:
  • model (nn.Module) – The loaded detector.
  • img (str or np.ndarray) – Image filename or loaded image.
  • result (tuple[list] or list) – The detection result, can be either (bbox, segm) or just bbox.
  • score_thr (float) – The threshold to visualize the bboxes and masks.
  • fig_size (tuple) – Figure size of the pyplot figure.
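
Example

An end-to-end inference sketch tying init_detector, inference_detector and show_result_pyplot together. The config, checkpoint, and image paths below are placeholders and must point to real files:

>>> from mmdet.apis import init_detector, inference_detector, show_result_pyplot
>>> config_file = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'  # placeholder path
>>> checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x_coco.pth'     # placeholder path
>>> model = init_detector(config_file, checkpoint_file, device='cuda:0')
>>> result = inference_detector(model, 'demo/demo.jpg')                 # placeholder image
>>> show_result_pyplot(model, 'demo/demo.jpg', result, score_thr=0.3)
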
mmdet.apis.multi_gpu_test(model, data_loader, tmpdir=None, gpu_collect=False)[source]

Test model with multiple gpus.

This method tests model with multiple gpus and collects the results under two different modes: gpu and cpu modes. By setting ‘gpu_collect=True’ it encodes results to gpu tensors and use gpu communication for results collection. On cpu mode it saves the results on different gpus to ‘tmpdir’ and collects them by the rank 0 worker.

Parameters:
  • model (nn.Module) – Model to be tested.
  • data_loader (nn.Dataloader) – Pytorch data loader.
  • tmpdir (str) – Path of directory to save the temporary results from different gpus under cpu mode.
  • gpu_collect (bool) – Option to use either gpu or cpu to collect results.
Returns:

The prediction results.

Return type:

list

mmdet.core

anchor

class mmdet.core.anchor.AnchorGenerator(strides, ratios, scales=None, base_sizes=None, scale_major=True, octave_base_scale=None, scales_per_octave=None, centers=None, center_offset=0.0)[source]

Standard anchor generator for 2D anchor-based detectors

Parameters:
  • strides (list[int] | list[tuple[int, int]]) – Strides of anchors in multiple feature levels.
  • ratios (list[float]) – The list of ratios between the height and width of anchors in a single level.
  • scales (list[int] | None) – Anchor scales for anchors in a single level. It cannot be set at the same time as octave_base_scale and scales_per_octave.
  • base_sizes (list[int] | None) – The basic sizes of anchors in multiple levels. If None is given, strides will be used as base_sizes. (If strides are non-square, the shortest stride is taken.)
  • scale_major (bool) – Whether to multiply scales first when generating base anchors. If true, the anchors in the same row will have the same scales. By default it is True in V2.0
  • octave_base_scale (int) – The base scale of octave.
  • scales_per_octave (int) – Number of scales for each octave. octave_base_scale and scales_per_octave are usually used in retinanet and the scales should be None when they are set.
  • centers (list[tuple[float, float]] | None) – The centers of the anchor relative to the feature grid center in multiple feature levels. By default it is set to be None and not used. If a list of tuple of float is given, they will be used to shift the centers of anchors.
  • center_offset (float) – The offset of center in proportion to anchors’ width and height. By default it is 0 in V2.0.

Examples

>>> from mmdet.core import AnchorGenerator
>>> self = AnchorGenerator([16], [1.], [1.], [9])
>>> all_anchors = self.grid_anchors([(2, 2)], device='cpu')
>>> print(all_anchors)
[tensor([[-4.5000, -4.5000,  4.5000,  4.5000],
        [11.5000, -4.5000, 20.5000,  4.5000],
        [-4.5000, 11.5000,  4.5000, 20.5000],
        [11.5000, 11.5000, 20.5000, 20.5000]])]
>>> self = AnchorGenerator([16, 32], [1.], [1.], [9, 18])
>>> all_anchors = self.grid_anchors([(2, 2), (1, 1)], device='cpu')
>>> print(all_anchors)
[tensor([[-4.5000, -4.5000,  4.5000,  4.5000],
        [11.5000, -4.5000, 20.5000,  4.5000],
        [-4.5000, 11.5000,  4.5000, 20.5000],
        [11.5000, 11.5000, 20.5000, 20.5000]]),         tensor([[-9., -9., 9., 9.]])]
grid_anchors(featmap_sizes, device='cuda')[source]

Generate grid anchors in multiple feature levels

Parameters:
  • featmap_sizes (list[tuple]) – List of feature map sizes in multiple feature levels.
  • device (str) – Device where the anchors will be put on.
Returns:

Anchors in multiple feature levels.

The sizes of each tensor should be [N, 4], where N = width * height * num_base_anchors, width and height are the sizes of the corresponding feature level, num_base_anchors is the number of anchors for that level.

Return type:

list[torch.Tensor]

valid_flags(featmap_sizes, pad_shape, device='cuda')[source]

Generate valid flags of anchors in multiple feature levels

Parameters:
  • featmap_sizes (list(tuple)) – List of feature map sizes in multiple feature levels.
  • pad_shape (tuple) – The padded shape of the image.
  • device (str) – Device where the anchors will be put on.
Returns:

Valid flags of anchors in multiple levels.

Return type:

list(torch.Tensor)
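
Example

A minimal sketch, assuming the single-level generator from the example above; only the shape of the returned flags is asserted:

>>> from mmdet.core import AnchorGenerator
>>> self = AnchorGenerator([16], [1.], [1.], [9])
>>> flags = self.valid_flags([(2, 2)], pad_shape=(32, 32), device='cpu')
>>> assert len(flags) == 1 and flags[0].shape == (4,)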

class mmdet.core.anchor.LegacyAnchorGenerator(strides, ratios, scales=None, base_sizes=None, scale_major=True, octave_base_scale=None, scales_per_octave=None, centers=None, center_offset=0.0)[source]

Legacy anchor generator used in MMDetection V1.x

Difference to the V2.0 anchor generator:

  1. The center offset of V1.x anchors is set to 0.5 rather than 0.
  2. The width/height are decreased by 1 when calculating the anchors’ centers and corners to meet the V1.x coordinate system.
  3. The anchors’ corners are quantized.
Parameters:
  • strides (list[int] | list[tuple[int]]) – Strides of anchors in multiple feature levels.
  • ratios (list[float]) – The list of ratios between the height and width of anchors in a single level.
  • scales (list[int] | None) – Anchor scales for anchors in a single level. It cannot be set at the same time as octave_base_scale and scales_per_octave.
  • base_sizes (list[int]) – The basic sizes of anchors in multiple levels. If None is given, strides will be used to generate base_sizes.
  • scale_major (bool) – Whether to multiply scales first when generating base anchors. If true, the anchors in the same row will have the same scales. By default it is True in V2.0
  • octave_base_scale (int) – The base scale of octave.
  • scales_per_octave (int) – Number of scales for each octave. octave_base_scale and scales_per_octave are usually used in retinanet and the scales should be None when they are set.
  • centers (list[tuple[float, float]] | None) – The centers of the anchor relative to the feature grid center in multiple feature levels. By default it is set to be None and not used. If a list of tuples of float is given, it will be used to shift the centers of anchors.
  • center_offset (float) – The offset of center in proportion to anchors’ width and height. By default it is 0 in V2.0, but it should be set to 0.5 to reproduce v1.x models.

Examples

>>> from mmdet.core import LegacyAnchorGenerator
>>> self = LegacyAnchorGenerator(
>>>     [16], [1.], [1.], [9], center_offset=0.5)
>>> all_anchors = self.grid_anchors(((2, 2),), device='cpu')
>>> print(all_anchors)
[tensor([[ 0.,  0.,  8.,  8.],
        [16.,  0., 24.,  8.],
        [ 0., 16.,  8., 24.],
        [16., 16., 24., 24.]])]
mmdet.core.anchor.images_to_levels(target, num_levels)[source]

Convert targets by image to targets by feature level.

[target_img0, target_img1] -> [target_level0, target_level1, …]

mmdet.core.anchor.calc_region(bbox, ratio, featmap_size=None)[source]

Calculate a proportional bbox region.

The bbox center is fixed and the new h’ and w’ are h * ratio and w * ratio.

Parameters:
  • bbox (Tensor) – Bboxes to calculate regions, shape (n, 4)
  • ratio (float) – Ratio of the output region.
  • featmap_size (tuple) – Feature map size used for clipping the boundary.
Returns:

x1, y1, x2, y2

Return type:

tuple

bbox

mmdet.core.bbox.bbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False)[source]

Calculate overlap between two sets of bboxes.

If is_aligned is False, then calculate the ious between each bbox of bboxes1 and bboxes2, otherwise the ious between each aligned pair of bboxes1 and bboxes2.

Parameters:
  • bboxes1 (Tensor) – shape (m, 4) in <x1, y1, x2, y2> format or empty.
  • bboxes2 (Tensor) – shape (n, 4) in <x1, y1, x2, y2> format or empty. If is_aligned is True, then m and n must be equal.
  • mode (str) – “iou” (intersection over union) or iof (intersection over foreground).
Returns:

shape (m, n) if is_aligned == False else shape (m, 1)

Return type:

ious(Tensor)

Example

>>> bboxes1 = torch.FloatTensor([
>>>     [0, 0, 10, 10],
>>>     [10, 10, 20, 20],
>>>     [32, 32, 38, 42],
>>> ])
>>> bboxes2 = torch.FloatTensor([
>>>     [0, 0, 10, 20],
>>>     [0, 10, 10, 19],
>>>     [10, 10, 20, 20],
>>> ])
>>> bbox_overlaps(bboxes1, bboxes2)
tensor([[0.5000, 0.0000, 0.0000],
        [0.0000, 0.0000, 1.0000],
        [0.0000, 0.0000, 0.0000]])

Example

>>> empty = torch.FloatTensor([])
>>> nonempty = torch.FloatTensor([
>>>     [0, 0, 10, 9],
>>> ])
>>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1)
>>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0)
>>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0)
class mmdet.core.bbox.BboxOverlaps2D[source]

2D IoU Calculator

class mmdet.core.bbox.MaxIoUAssigner(pos_iou_thr, neg_iou_thr, min_pos_iou=0.0, gt_max_assign_all=True, ignore_iof_thr=-1, ignore_wrt_candidates=True, match_low_quality=True, gpu_assign_thr=-1, iou_calculator={'type': 'BboxOverlaps2D'})[source]

Assign a corresponding gt bbox or background to each bbox.

Each proposal will be assigned with -1, or a semi-positive integer indicating the ground truth index.

  • -1: negative sample, no assigned gt
  • semi-positive integer: positive sample, index (0-based) of assigned gt
Parameters:
  • pos_iou_thr (float) – IoU threshold for positive bboxes.
  • neg_iou_thr (float or tuple) – IoU threshold for negative bboxes.
  • min_pos_iou (float) – Minimum iou for a bbox to be considered as a positive bbox. Positive samples can have smaller IoU than pos_iou_thr due to the 4th step (assign max IoU sample to each gt).
  • gt_max_assign_all (bool) – Whether to assign all bboxes with the same highest overlap with some gt to that gt.
  • ignore_iof_thr (float) – IoF threshold for ignoring bboxes (if gt_bboxes_ignore is specified). Negative values mean not ignoring any bboxes.
  • ignore_wrt_candidates (bool) – Whether to compute the iof between bboxes and gt_bboxes_ignore, or the contrary.
  • match_low_quality (bool) – Whether to allow low quality matches. This is usually allowed for RPN and single stage detectors, but not allowed in the second stage. Details are demonstrated in Step 4.
  • gpu_assign_thr (int) – The upper bound of the number of GT for GPU assign. When the number of gt is above this threshold, assignment will be done on the CPU device. Negative values mean not assigning on CPU.
assign(bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None)[source]

Assign gt to bboxes.

This method assigns a gt bbox to every bbox (proposal/anchor); each bbox will be assigned -1 or a semi-positive number. -1 means negative sample; a semi-positive number is the index (0-based) of the assigned gt. The assignment is done in the following steps, and the order matters.

  1. assign every bbox to the background
  2. assign proposals whose iou with all gts < neg_iou_thr to 0
  3. for each bbox, if the iou with its nearest gt >= pos_iou_thr, assign it to that gt
  4. for each gt bbox, assign its nearest proposals (may be more than one) to itself
Parameters:
  • bboxes (Tensor) – Bounding boxes to be assigned, shape(n, 4).
  • gt_bboxes (Tensor) – Groundtruth boxes, shape (k, 4).
  • gt_bboxes_ignore (Tensor, optional) – Ground truth bboxes that are labelled as ignored, e.g., crowd boxes in COCO.
  • gt_labels (Tensor, optional) – Label of gt_bboxes, shape (k, ).
Returns:

The assign result.

Return type:

AssignResult

Example

>>> self = MaxIoUAssigner(0.5, 0.5)
>>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]])
>>> gt_bboxes = torch.Tensor([[0, 0, 10, 9]])
>>> assign_result = self.assign(bboxes, gt_bboxes)
>>> expected_gt_inds = torch.LongTensor([1, 0])
>>> assert torch.all(assign_result.gt_inds == expected_gt_inds)
assign_wrt_overlaps(overlaps, gt_labels=None)[source]

Assign w.r.t. the overlaps of bboxes with gts.

Parameters:
  • overlaps (Tensor) – Overlaps between k gt_bboxes and n bboxes, shape(k, n).
  • gt_labels (Tensor, optional) – Labels of k gt_bboxes, shape (k, ).
Returns:

The assign result.

Return type:

AssignResult
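
Example

A minimal sketch with a hand-built (k, n) overlap matrix for 2 gts and 2 bboxes; both bboxes exceed pos_iou_thr, so each is assigned to its best-overlapping gt:

>>> import torch
>>> from mmdet.core.bbox import MaxIoUAssigner
>>> self = MaxIoUAssigner(pos_iou_thr=0.5, neg_iou_thr=0.5)
>>> overlaps = torch.FloatTensor([[0.6, 0.2], [0.3, 0.8]])
>>> assign_result = self.assign_wrt_overlaps(overlaps)
>>> expected_gt_inds = torch.LongTensor([1, 2])
>>> assert torch.all(assign_result.gt_inds == expected_gt_inds)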

class mmdet.core.bbox.AssignResult(num_gts, gt_inds, max_overlaps, labels=None)[source]

Stores assignments between predicted and truth boxes.

num_gts

the number of truth boxes considered when computing this assignment

Type:int
gt_inds

for each predicted box indicates the 1-based index of the assigned truth box. 0 means unassigned and -1 means ignore.

Type:LongTensor
max_overlaps

the iou between the predicted box and its assigned truth box.

Type:FloatTensor
labels

If specified, for each predicted box indicates the category label of the assigned truth box.

Type:None | LongTensor

Example

>>> # An assign result between 4 predicted boxes and 9 true boxes
>>> # where only two boxes were assigned.
>>> num_gts = 9
>>> max_overlaps = torch.FloatTensor([0, .5, .9, 0])
>>> gt_inds = torch.LongTensor([-1, 1, 2, 0])
>>> labels = torch.LongTensor([0, 3, 4, 0])
>>> self = AssignResult(num_gts, gt_inds, max_overlaps, labels)
>>> print(str(self))  # xdoctest: +IGNORE_WANT
<AssignResult(num_gts=9, gt_inds.shape=(4,), max_overlaps.shape=(4,),
              labels.shape=(4,))>
>>> # Force addition of gt labels (when adding gt as proposals)
>>> new_labels = torch.LongTensor([3, 4, 5])
>>> self.add_gt_(new_labels)
>>> print(str(self))  # xdoctest: +IGNORE_WANT
<AssignResult(num_gts=9, gt_inds.shape=(7,), max_overlaps.shape=(7,),
              labels.shape=(7,))>
get_extra_property(key)[source]

Get user-defined property

info

Returns a dictionary of info about the object

num_preds

Return the number of predictions in this assignment

classmethod random(**kwargs)[source]

Create random AssignResult for tests or debugging.

Parameters:
  • num_preds – number of predicted boxes
  • num_gts – number of true boxes
  • p_ignore (float) – probability of a predicted box assigned to an ignored truth
  • p_assigned (float) – probability of a predicted box not being assigned
  • p_use_label (float | bool) – with labels or not
  • rng (None | int | numpy.random.RandomState) – seed or state
Returns:

Randomly generated assign results.

Return type:

AssignResult

Example

>>> from mmdet.core.bbox.assigners.assign_result import *  # NOQA
>>> self = AssignResult.random()
>>> print(self.info)
set_extra_property(key, value)[source]

Set user-defined new property

class mmdet.core.bbox.IoUBalancedNegSampler(num, pos_fraction, floor_thr=-1, floor_fraction=0, num_bins=3, **kwargs)[source]

IoU Balanced Sampling

arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019)

Sampling proposals according to their IoU. floor_fraction of the needed RoIs are randomly sampled from proposals whose IoU is lower than floor_thr. The rest are sampled from proposals whose IoU is higher than floor_thr; these proposals are drawn evenly from num_bins IoU bins.

Parameters:
  • num (int) – number of proposals.
  • pos_fraction (float) – fraction of positive proposals.
  • floor_thr (float) – threshold (minimum) IoU for IoU balanced sampling; set to -1 to apply IoU balanced sampling to all proposals.
  • floor_fraction (float) – sampling fraction of proposals under floor_thr.
  • num_bins (int) – number of bins in IoU balanced sampling.
class mmdet.core.bbox.SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, assign_result, gt_flags)[source]

Bbox sampling result.

Example

>>> # xdoctest: +IGNORE_WANT
>>> from mmdet.core.bbox.samplers.sampling_result import *  # NOQA
>>> self = SamplingResult.random(rng=10)
>>> print(f'self = {self}')
self = <SamplingResult({
    'neg_bboxes': torch.Size([12, 4]),
    'neg_inds': tensor([ 0,  1,  2,  4,  5,  6,  7,  8,  9, 10, 11, 12]),
    'num_gts': 4,
    'pos_assigned_gt_inds': tensor([], dtype=torch.int64),
    'pos_bboxes': torch.Size([0, 4]),
    'pos_inds': tensor([], dtype=torch.int64),
    'pos_is_gt': tensor([], dtype=torch.uint8)
})>
info

Returns a dictionary of info about the object.

classmethod random(rng=None, **kwargs)[source]
Parameters:
  • rng (None | int | numpy.random.RandomState) – seed or state.
  • kwargs (keyword arguments) –
    • num_preds: number of predicted boxes
    • num_gts: number of true boxes
    • p_ignore (float): probability of a predicted box assigned to
      an ignored truth.
    • p_assigned (float): probability of a predicted box not being
      assigned.
    • p_use_label (float | bool): with labels or not.
Returns:

Randomly generated sampling result.

Return type:

SamplingResult

Example

>>> from mmdet.core.bbox.samplers.sampling_result import *  # NOQA
>>> self = SamplingResult.random()
>>> print(self.__dict__)
to(device)[source]

Change the device of the data inplace.

Example

>>> self = SamplingResult.random()
>>> print(f'self = {self.to(None)}')
>>> # xdoctest: +REQUIRES(--gpu)
>>> print(f'self = {self.to(0)}')
mmdet.core.bbox.bbox_flip(bboxes, img_shape, direction='horizontal')[source]

Flip bboxes horizontally or vertically.

Parameters:
  • bboxes (Tensor) – Shape (…, 4*k)
  • img_shape (tuple) – Image shape.
  • direction (str) – Flip direction, options are “horizontal” and “vertical”. Default: “horizontal”
Returns:

Flipped bboxes.

Return type:

Tensor
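
Example

A minimal sketch; only the shape of the flipped boxes is asserted, and the image shape is an arbitrary (h, w) tuple:

>>> import torch
>>> from mmdet.core.bbox import bbox_flip
>>> bboxes = torch.Tensor([[2., 2., 10., 10.]])
>>> flipped = bbox_flip(bboxes, img_shape=(32, 32), direction='horizontal')
>>> assert flipped.shape == bboxes.shape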

mmdet.core.bbox.bbox_mapping(bboxes, img_shape, scale_factor, flip, flip_direction='horizontal')[source]

Map bboxes from the original image scale to testing scale

mmdet.core.bbox.bbox_mapping_back(bboxes, img_shape, scale_factor, flip, flip_direction='horizontal')[source]

Map bboxes from testing scale to original image scale

mmdet.core.bbox.bbox2roi(bbox_list)[source]

Convert a list of bboxes to roi format.

Parameters:bbox_list (list[Tensor]) – a list of bboxes corresponding to a batch of images.
Returns:shape (n, 5), [batch_ind, x1, y1, x2, y2]
Return type:Tensor
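
Example

A minimal sketch with a batch of two images, one bbox each; the first column of the returned rois is the batch index:

>>> import torch
>>> from mmdet.core.bbox import bbox2roi
>>> bbox_list = [torch.Tensor([[0., 0., 4., 4.]]), torch.Tensor([[2., 2., 6., 6.]])]
>>> rois = bbox2roi(bbox_list)
>>> assert tuple(rois.shape) == (2, 5)
>>> assert rois[0, 0].item() == 0 and rois[1, 0].item() == 1
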
mmdet.core.bbox.bbox2result(bboxes, labels, num_classes)[source]

Convert detection results to a list of numpy arrays.

Parameters:
  • bboxes (Tensor) – shape (n, 5)
  • labels (Tensor) – shape (n, )
  • num_classes (int) – class number, including background class
Returns:

bbox results of each class

Return type:

list(ndarray)

mmdet.core.bbox.distance2bbox(points, distance, max_shape=None)[source]

Decode distance prediction to bounding box.

Parameters:
  • points (Tensor) – Shape (n, 2), [x, y].
  • distance (Tensor) – Distance from the given point to 4 boundaries (left, top, right, bottom).
  • max_shape (tuple) – Shape of the image.
Returns:

Decoded bboxes.

Return type:

Tensor
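
Example

A minimal sketch decoding one point with distances (left, top, right, bottom) = (1, 2, 3, 4):

>>> import torch
>>> from mmdet.core.bbox import distance2bbox
>>> points = torch.Tensor([[5., 5.]])
>>> distance = torch.Tensor([[1., 2., 3., 4.]])
>>> decoded = distance2bbox(points, distance)
>>> assert torch.allclose(decoded, torch.Tensor([[4., 3., 8., 9.]]))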

class mmdet.core.bbox.DeltaXYWHBBoxCoder(target_means=(0.0, 0.0, 0.0, 0.0), target_stds=(1.0, 1.0, 1.0, 1.0))[source]

Delta XYWH BBox coder

Following the practice in R-CNN, this coder encodes bbox (x1, y1, x2, y2) into delta (dx, dy, dw, dh) and decodes delta (dx, dy, dw, dh) back to original bbox (x1, y1, x2, y2).

Parameters:
  • target_means (Sequence[float]) – denormalizing means of target for delta coordinates
  • target_stds (Sequence[float]) – denormalizing standard deviation of target for delta coordinates
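
Example

A minimal encode/decode round-trip sketch with the default means and stds; decoding the encoded deltas should recover the ground-truth box:

>>> import torch
>>> from mmdet.core.bbox import DeltaXYWHBBoxCoder
>>> coder = DeltaXYWHBBoxCoder()
>>> proposals = torch.Tensor([[0., 0., 10., 10.]])
>>> gt_bboxes = torch.Tensor([[1., 2., 9., 12.]])
>>> deltas = coder.encode(proposals, gt_bboxes)
>>> decoded = coder.decode(proposals, deltas)
>>> assert torch.allclose(decoded, gt_bboxes, atol=1e-4)
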
class mmdet.core.bbox.TBLRBBoxCoder(normalizer=4.0)[source]

TBLR BBox coder

Following the practice in FSAF, this coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left, right) and decode it back to the original.

Parameters:normalizer (list | float) – Normalization factor to be divided with when coding the coordinates. If it is a list, it should have length of 4 indicating normalization factor in tblr dims. Otherwise it is a unified float factor for all dims. Default: 4.0
class mmdet.core.bbox.CenterRegionAssigner(pos_scale, neg_scale, min_pos_iof=0.01, ignore_gt_scale=0.5, iou_calculator={'type': 'BboxOverlaps2D'})[source]

Assign pixels at the center region of a bbox as positive.

Each proposal will be assigned with -1, 0, or a positive integer indicating the ground truth index.

  • -1: negative sample
  • semi-positive numbers: positive sample, index (0-based) of assigned gt

Parameters:
  • pos_scale (float) – Threshold within which pixels are labelled as positive.
  • neg_scale (float) – Threshold above which pixels are labelled as negative.
  • min_pos_iof (float) – Minimum iof of a pixel with a gt to be labelled as positive. Default: 1e-2
  • ignore_gt_scale (float) – Threshold within which the pixels are ignored when the gt is labelled as shadowed. Default: 0.5
assign(bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None)[source]

Assign gt to bboxes.

This method assigns gts to every bbox (proposal/anchor); each bbox will be assigned -1 or a semi-positive number. -1 means negative sample; a semi-positive number is the index (0-based) of the assigned gt.
Parameters:
  • bboxes (Tensor) – Bounding boxes to be assigned, shape(n, 4).
  • gt_bboxes (Tensor) – Groundtruth boxes, shape (k, 4).
  • gt_bboxes_ignore (tensor, optional) – Ground truth bboxes that are labelled as ignored, e.g., crowd boxes in COCO.
  • gt_labels (tensor, optional) – Label of gt_bboxes, shape (num_gts,).
Returns:

The assigned result. Note that shadowed_labels of shape (N, 2) is also added as an assign_result attribute. shadowed_labels is a tensor composed of N pairs of [anchor_ind, class_label], where N is the number of anchors that lie in the outer region of a gt, anchor_ind is the shadowed anchor index and class_label is the shadowed class label.

Return type:

AssignResult

Example

>>> self = CenterRegionAssigner(0.2, 0.2)
>>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]])
>>> gt_bboxes = torch.Tensor([[0, 0, 10, 10]])
>>> assign_result = self.assign(bboxes, gt_bboxes)
>>> expected_gt_inds = torch.LongTensor([1, 0])
>>> assert torch.all(assign_result.gt_inds == expected_gt_inds)
assign_one_hot_gt_indices(is_bbox_in_gt_core, is_bbox_in_gt_shadow, gt_priority=None)[source]

Assign only one gt index to each prior box

Gts with large gt_priority are more likely to be assigned.

Parameters:
  • is_bbox_in_gt_core (Tensor) – Bool tensor indicating the bbox center is in the core area of a gt (e.g. 0-0.2). Shape: (num_prior, num_gt).
  • is_bbox_in_gt_shadow (Tensor) – Bool tensor indicating the bbox center is in the shadowed area of a gt (e.g. 0.2-0.5). Shape: (num_prior, num_gt).
  • gt_priority (Tensor) – Priorities of gts. The gt with a higher priority is more likely to be assigned to the bbox when the bbox match with multiple gts. Shape: (num_gt, ).
Returns:

assigned_gt_inds – The assigned gt index of each prior bbox (i.e. index from 1 to num_gts). Shape: (num_prior, ).

shadowed_gt_inds – Shadowed gt indices, a tensor of shape (num_ignore, 2) with the first column being the shadowed prior bbox indices and the second column the shadowed gt indices (1-based).

Return type:

tuple

get_gt_priorities(gt_bboxes)[source]

Get gt priorities according to their areas.

Smaller gt has higher priority.

Parameters:gt_bboxes (Tensor) – Ground truth boxes, shape (k, 4).
Returns:
The priority of gts so that gts with larger priority are more likely to be assigned. Shape (k, ).
Return type:Tensor

mask

mmdet.core.mask.split_combined_polys(polys, poly_lens, polys_per_mask)[source]

Split the combined 1-D polys into masks.

A mask is represented as a list of polys, and a poly is represented as a 1-D array. In dataset, all masks are concatenated into a single 1-D tensor. Here we need to split the tensor into original representations.

Parameters:
  • polys (list) – a list (length = image num) of 1-D tensors
  • poly_lens (list) – a list (length = image num) of poly length
  • polys_per_mask (list) – a list (length = image num) of poly number of each mask
Returns:

a list (length = image num) of list (length = mask num) of list (length = poly num) of numpy array

Return type:

list

class mmdet.core.mask.BitmapMasks(masks, height, width)[source]

This class represents masks in the form of bitmaps.

Parameters:
  • masks (ndarray) – ndarray of masks in shape (N, H, W), where N is the number of objects.
  • height (int) – height of masks
  • width (int) – width of masks
areas

Compute area of each instance

Returns:areas of each instance
Return type:ndarray
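
Example

A minimal sketch constructing a single 4x4 mask with a 2x2 foreground patch; the area is the number of foreground pixels:

>>> import numpy as np
>>> from mmdet.core.mask import BitmapMasks
>>> masks = np.zeros((1, 4, 4), dtype=np.uint8)
>>> masks[0, :2, :2] = 1
>>> self = BitmapMasks(masks, height=4, width=4)
>>> assert int(self.areas[0]) == 4
>>> flipped = self.flip('horizontal')
>>> assert flipped.masks.shape == (1, 4, 4)
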
crop(bbox)[source]

Crop each mask by the given bbox.

Parameters:bbox (ndarray) – bbox in format [x1, y1, x2, y2], shape (4, )
Returns:the cropped masks.
Return type:BitmapMasks
crop_and_resize(bboxes, out_shape, inds, device='cpu', interpolation='bilinear')[source]

Crop and resize masks by the given bboxes.

This function is mainly used in mask target computation. It first aligns masks to bboxes by assigned_inds, then crops the masks by the assigned bboxes and resizes them to the size of (mask_h, mask_w).

Parameters:
  • bboxes (Tensor) – bboxes in format [x1, y1, x2, y2], shape (N, 4)
  • out_shape (tuple[int]) – target (h, w) of resized mask
  • inds (ndarray) – indexes to assign masks to each bbox
  • device (str) – device of bboxes
  • interpolation (str) – see mmcv.imresize
Returns:

the cropped and resized masks.

Return type:

ndarray

expand(expanded_h, expanded_w, top, left)[source]

see transforms.Expand.

flip(flip_direction='horizontal')[source]

Flip masks along the given direction.

Parameters:flip_direction (str) – either ‘horizontal’ or ‘vertical’
Returns:the flipped masks
Return type:BitmapMasks
pad(out_shape, pad_val=0)[source]

Pad masks to the given size of (h, w).

Parameters:
  • out_shape (tuple[int]) – target (h, w) of padded mask
  • pad_val (int) – the padded value
Returns:

the padded masks

Return type:

BitmapMasks

rescale(scale, interpolation='nearest')[source]

Rescale masks as large as possible while keeping the aspect ratio. For details, refer to mmcv.imrescale.

Parameters:
  • scale (tuple[int]) – the maximum size (h, w) of rescaled mask
  • interpolation (str) – same as mmcv.imrescale()
Returns:

the rescaled masks

Return type:

BitmapMasks

resize(out_shape, interpolation='nearest')[source]

Resize masks to the given out_shape.

Parameters:
  • out_shape – target (h, w) of resized mask
  • interpolation (str) – see mmcv.imresize
Returns:

the resized masks

Return type:

BitmapMasks

class mmdet.core.mask.PolygonMasks(masks, height, width)[source]

This class represents masks in the form of polygons.

Polygons are organized as a list of three levels. The first level of the list corresponds to objects, the second level to the polys that compose the object, and the third level to the poly coordinates.

Parameters:
  • masks (list[list[ndarray]]) – The first level of the list corresponds to objects, the second level to the polys that compose the object, the third level to the poly coordinates
  • height (int) – height of masks
  • width (int) – width of masks
areas

Compute areas of masks.

This function is modified from https://github.com/facebookresearch/detectron2/blob/ffff8acc35ea88ad1cb1806ab0f00b4c1c5dbfd9/detectron2/structures/masks.py#L387. It only works with polygons, using the shoelace formula.

Returns:areas of each instance
Return type:ndarray
crop(bbox)[source]

see BitmapMasks.crop

crop_and_resize(bboxes, out_shape, inds, device='cpu', interpolation='bilinear')[source]

see BitmapMasks.crop_and_resize

flip(flip_direction='horizontal')[source]

see BitmapMasks.flip

pad(out_shape, pad_val=0)[source]

padding has no effect on polygons

rescale(scale, interpolation=None)[source]

see BitmapMasks.rescale

resize(out_shape, interpolation=None)[source]

see BitmapMasks.resize

to_bitmap()[source]

convert polygon masks to bitmap masks

mmdet.core.mask.encode_mask_results(mask_results)[source]

Encode bitmap mask to RLE code.

Parameters:mask_results (list | tuple[list]) – bitmap mask results. In mask scoring rcnn, mask_results is a tuple of (segm_results, segm_cls_score).
Returns:RLE encoded mask.
Return type:list | tuple

evaluation

mmdet.core.evaluation.get_classes(dataset)[source]

Get class names of a dataset.

class mmdet.core.evaluation.DistEvalHook(dataloader, interval=1, gpu_collect=False, **eval_kwargs)[source]

Distributed evaluation hook.

dataloader

A PyTorch dataloader.

Type:DataLoader
interval

Evaluation interval (by epochs). Default: 1.

Type:int
tmpdir

Temporary directory to save the results of all processes. Default: None.

Type:str | None
gpu_collect

Whether to use gpu or cpu to collect results. Default: False.

Type:bool
class mmdet.core.evaluation.EvalHook(dataloader, interval=1, **eval_kwargs)[source]

Evaluation hook.

dataloader

A PyTorch dataloader.

Type:DataLoader
interval

Evaluation interval (by epochs). Default: 1.

Type:int
mmdet.core.evaluation.average_precision(recalls, precisions, mode='area')[source]

Calculate average precision (for single or multiple scales).

Parameters:
  • recalls (ndarray) – shape (num_scales, num_dets) or (num_dets, )
  • precisions (ndarray) – shape (num_scales, num_dets) or (num_dets, )
  • mode (str) – ‘area’ or ‘11points’, ‘area’ means calculating the area under precision-recall curve, ‘11points’ means calculating the average precision of recalls at [0, 0.1, …, 1]
Returns:

calculated average precision

Return type:

float or ndarray
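
Example

A minimal sketch on a hand-built precision-recall curve; only the valid range of the result is asserted:

>>> import numpy as np
>>> from mmdet.core.evaluation import average_precision
>>> recalls = np.array([0.25, 0.5, 0.75, 1.0])
>>> precisions = np.array([1.0, 0.8, 0.66, 0.5])
>>> ap = average_precision(recalls, precisions, mode='area')
>>> assert 0.0 <= ap <= 1.0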

mmdet.core.evaluation.eval_map(det_results, annotations, scale_ranges=None, iou_thr=0.5, dataset=None, logger=None, nproc=4)[source]

Evaluate mAP of a dataset.

Parameters:
  • det_results (list[list]) – [[cls1_det, cls2_det, …], …]. The outer list indicates images, and the inner list indicates per-class detected bboxes.
  • annotations (list[dict]) –

    Ground truth annotations where each item of the list indicates an image. Keys of annotations are:

    • bboxes: numpy array of shape (n, 4)
    • labels: numpy array of shape (n, )
    • bboxes_ignore (optional): numpy array of shape (k, 4)
    • labels_ignore (optional): numpy array of shape (k, )
  • scale_ranges (list[tuple] | None) – Range of scales to be evaluated, in the format [(min1, max1), (min2, max2), …]. A range of (32, 64) means the area range between (32**2, 64**2). Default: None.
  • iou_thr (float) – IoU threshold to be considered as matched. Default: 0.5.
  • dataset (list[str] | str | None) – Dataset name or dataset classes, there are minor differences in metrics for different datasets, e.g. “voc07”, “imagenet_det”, etc. Default: None.
  • logger (logging.Logger | str | None) – The way to print the mAP summary. See mmdet.utils.print_log() for details. Default: None.
  • nproc (int) – Processes used for computing TP and FP. Default: 4.
Returns:

(mAP, [dict, dict, …])

Return type:

tuple
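
Example

A minimal sketch with one image and one foreground class, where the single detection exactly matches the single ground truth; only the valid range of the mAP is asserted:

>>> import numpy as np
>>> from mmdet.core.evaluation import eval_map
>>> det_results = [[np.array([[0., 0., 10., 10., 0.9]])]]
>>> annotations = [dict(bboxes=np.array([[0., 0., 10., 10.]]), labels=np.array([0]))]
>>> mean_ap, per_class_results = eval_map(det_results, annotations, iou_thr=0.5)
>>> assert 0.0 <= mean_ap <= 1.0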

mmdet.core.evaluation.print_map_summary(mean_ap, results, dataset=None, scale_ranges=None, logger=None)[source]

Print mAP and results of each class.

A table will be printed to show the gts/dets/recall/AP of each class and the mAP.

Parameters:
  • mean_ap (float) – Calculated from eval_map().
  • results (list[dict]) – Calculated from eval_map().
  • dataset (list[str] | str | None) – Dataset name or dataset classes.
  • scale_ranges (list[tuple] | None) – Range of scales to be evaluated.
  • logger (logging.Logger | str | None) – The way to print the mAP summary. See mmdet.utils.print_log() for details. Default: None.
mmdet.core.evaluation.eval_recalls(gts, proposals, proposal_nums=None, iou_thrs=0.5, logger=None)[source]

Calculate recalls.

Parameters:
  • gts (list[ndarray]) – a list of arrays of shape (n, 4)
  • proposals (list[ndarray]) – a list of arrays of shape (k, 4) or (k, 5)
  • proposal_nums (int | Sequence[int]) – Top N proposals to be evaluated.
  • iou_thrs (float | Sequence[float]) – IoU thresholds. Default: 0.5.
  • logger (logging.Logger | str | None) – The way to print the recall summary. See mmdet.utils.print_log() for details. Default: None.
Returns:

recalls of different ious and proposal nums

Return type:

ndarray

mmdet.core.evaluation.print_recall_summary(recalls, proposal_nums, iou_thrs, row_idxs=None, col_idxs=None, logger=None)[source]

Print recalls in a table.

Parameters:
  • recalls (ndarray) – calculated from bbox_recalls
  • proposal_nums (ndarray or list) – top N proposals
  • iou_thrs (ndarray or list) – iou thresholds
  • row_idxs (ndarray) – which rows(proposal nums) to print
  • col_idxs (ndarray) – which cols(iou thresholds) to print
  • logger (logging.Logger | str | None) – The way to print the recall summary. See mmdet.utils.print_log() for details. Default: None.
mmdet.core.evaluation.plot_num_recall(recalls, proposal_nums)[source]

Plot Proposal_num-Recalls curve.

Parameters:
  • recalls (ndarray or list) – shape (k,)
  • proposal_nums (ndarray or list) – same shape as recalls
mmdet.core.evaluation.plot_iou_recall(recalls, iou_thrs)[source]

Plot IoU-Recalls curve.

Parameters:
  • recalls (ndarray or list) – shape (k,)
  • iou_thrs (ndarray or list) – same shape as recalls

post_processing

mmdet.core.post_processing.multiclass_nms(multi_bboxes, multi_scores, score_thr, nms_cfg, max_num=-1, score_factors=None)[source]

NMS for multi-class bboxes.

Parameters:
  • multi_bboxes (Tensor) – shape (n, #class*4) or (n, 4)
  • multi_scores (Tensor) – shape (n, #class), where the last column contains scores of the background class, but this will be ignored.
  • score_thr (float) – bbox threshold, bboxes with scores lower than it will not be considered.
  • nms_cfg (dict) – NMS config, e.g. specifying the NMS type and IoU threshold.
  • max_num (int) – if there are more than max_num bboxes after NMS, only top max_num will be kept.
  • score_factors (Tensor) – The factors multiplied to scores before applying NMS
Returns:

(bboxes, labels), tensors of shape (k, 5) and (k, 1). Labels are 0-based.

Return type:

tuple

mmdet.core.post_processing.merge_aug_proposals(aug_proposals, img_metas, rpn_test_cfg)[source]

Merge augmented proposals (multiscale, flip, etc.)

Parameters:
  • aug_proposals (list[Tensor]) – proposals from different testing schemes, shape (n, 5). Note that they are not rescaled to the original image size.
  • img_metas (list[dict]) – list of image info dicts where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
  • rpn_test_cfg (dict) – rpn test config.
Returns:

shape (n, 4), proposals corresponding to original image scale.

Return type:

Tensor

mmdet.core.post_processing.merge_aug_bboxes(aug_bboxes, aug_scores, img_metas, rcnn_test_cfg)[source]

Merge augmented detection bboxes and scores.

Parameters:
  • aug_bboxes (list[Tensor]) – shape (n, 4*#class)
  • aug_scores (list[Tensor] or None) – shape (n, #class)
  • img_shapes (list[Tensor]) – shape (3, ).
  • rcnn_test_cfg (dict) – rcnn test config.
Returns:

(bboxes, scores)

Return type:

tuple

mmdet.core.post_processing.merge_aug_scores(aug_scores)[source]

Merge augmented bbox scores.

mmdet.core.post_processing.merge_aug_masks(aug_masks, img_metas, rcnn_test_cfg, weights=None)[source]

Merge augmented mask prediction.

Parameters:
  • aug_masks (list[ndarray]) – shape (n, #class, h, w)
  • img_shapes (list[ndarray]) – shape (3, ).
  • rcnn_test_cfg (dict) – rcnn test config.
Returns:

(bboxes, scores)

Return type:

tuple

fp16

mmdet.core.fp16.auto_fp16(apply_to=None, out_fp32=False)[source]

Decorator to enable fp16 training automatically.

This decorator is useful when you write custom modules and want to support mixed precision training. If input arguments are fp32 tensors, they will be converted to fp16 automatically. Arguments other than fp32 tensors are ignored.

Parameters:
  • apply_to (Iterable, optional) – The argument names to be converted. None indicates all arguments.
  • out_fp32 (bool) – Whether to convert the output back to fp32.

Example

>>> import torch.nn as nn
>>> class MyModule1(nn.Module):
>>>
>>>     # Convert x and y to fp16
>>>     @auto_fp16()
>>>     def forward(self, x, y):
>>>         pass
>>> import torch.nn as nn
>>> class MyModule2(nn.Module):
>>>
>>>     # convert pred to fp16
>>>     @auto_fp16(apply_to=('pred', ))
>>>     def do_something(self, pred, others):
>>>         pass
mmdet.core.fp16.force_fp32(apply_to=None, out_fp16=False)[source]

Decorator to convert input arguments to fp32 in force.

This decorator is useful when you write custom modules and want to support mixed precision training. If there are some inputs that must be processed in fp32 mode, this decorator can handle it. If input arguments are fp16 tensors, they will be converted to fp32 automatically. Arguments other than fp16 tensors are ignored.

Parameters:
  • apply_to (Iterable, optional) – The argument names to be converted. None indicates all arguments.
  • out_fp16 (bool) – Whether to convert the output back to fp16.

Example

>>> import torch.nn as nn
>>> class MyModule1(nn.Module):
>>>
>>>     # Convert x and y to fp32
>>>     @force_fp32()
>>>     def loss(self, x, y):
>>>         pass
>>> import torch.nn as nn
>>> class MyModule2(nn.Module):
>>>
>>>     # convert pred to fp32
>>>     @force_fp32(apply_to=('pred', ))
>>>     def post_process(self, pred, others):
>>>         pass
class mmdet.core.fp16.Fp16OptimizerHook(grad_clip=None, coalesce=True, bucket_size_mb=-1, loss_scale=512.0, distributed=True)[source]

FP16 optimizer hook.

The steps of the fp16 optimizer are as follows:

  1. Scale the loss value.
  2. BP in the fp16 model.
  3. Copy gradients from the fp16 model to the fp32 weights.
  4. Update the fp32 weights.
  5. Copy the updated parameters from the fp32 weights back to the fp16 model.

Refer to https://arxiv.org/abs/1710.03740 for more details.

Parameters:loss_scale (float) – Scale factor multiplied with loss.
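
Example

In MMDetection configs, this hook is typically enabled by adding an fp16 field rather than constructing it directly; a minimal config sketch:

fp16 = dict(loss_scale=512.)
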
copy_grads_to_fp32(fp16_net, fp32_weights)[source]

Copy gradients from fp16 model to fp32 weight copy.

copy_params_to_fp16(fp16_net, fp32_weights)[source]

Copy updated params from fp32 weight copy to fp16 model.

optimizer

utils

class mmdet.core.utils.DistOptimizerHook(*args, **kwargs)[source]

Deprecated optimizer hook for distributed training

mmdet.core.utils.unmap(data, count, inds, fill=0)[source]

Unmap a subset of items (data) back to the original set of items (of size count).

mmdet.datasets

datasets

class mmdet.datasets.CustomDataset(ann_file, pipeline, classes=None, data_root=None, img_prefix='', seg_prefix=None, proposal_file=None, test_mode=False, filter_empty_gt=True)[source]

Custom dataset for detection.

The annotation format is shown as follows. The ann field is optional for testing.

[
    {
        'filename': 'a.jpg',
        'width': 1280,
        'height': 720,
        'ann': {
            'bboxes': <np.ndarray> (n, 4),
            'labels': <np.ndarray> (n, ),
            'bboxes_ignore': <np.ndarray> (k, 4), (optional field)
            'labels_ignore': <np.ndarray> (k, ) (optional field)
        }
    },
    ...
]
evaluate(results, metric='mAP', logger=None, proposal_nums=(100, 300, 1000), iou_thr=0.5, scale_ranges=None)[source]

Evaluate the dataset.

Parameters:
  • results (list) – Testing results of the dataset.
  • metric (str | list[str]) – Metrics to be evaluated.
  • logger (logging.Logger | None | str) – Logger used for printing related information during evaluation. Default: None.
  • proposal_nums (Sequence[int]) – Proposal number used for evaluating recalls, such as recall@100, recall@1000. Default: (100, 300, 1000).
  • iou_thr (float | list[float]) – IoU threshold. It must be a float when evaluating mAP, and can be a list when evaluating recall. Default: 0.5.
  • scale_ranges (list[tuple] | None) – Scale ranges for evaluating mAP. Default: None.
classmethod get_classes(classes=None)[source]

Get class names of current dataset

Parameters:classes (Sequence[str] | str | None) – If classes is None, use default CLASSES defined by builtin dataset. If classes is a string, take it as a file name. The file contains the name of classes where each line contains one class name. If classes is a tuple or list, override the CLASSES defined by the dataset.
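
Example

A minimal sketch; passing a tuple or list returns it unchanged as the class names:

>>> from mmdet.datasets import CustomDataset
>>> assert CustomDataset.get_classes(('cat', 'dog')) == ('cat', 'dog')
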
class mmdet.datasets.XMLDataset(min_size=None, **kwargs)[source]
get_subset_by_classes()[source]

Filter imgs by user-defined categories

class mmdet.datasets.CocoDataset(ann_file, pipeline, classes=None, data_root=None, img_prefix='', seg_prefix=None, proposal_file=None, test_mode=False, filter_empty_gt=True)[source]
evaluate(results, metric='bbox', logger=None, jsonfile_prefix=None, classwise=False, proposal_nums=(100, 300, 1000), iou_thrs=array([0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]))[source]

Evaluation in COCO protocol.

Parameters:
  • results (list) – Testing results of the dataset.
  • metric (str | list[str]) – Metrics to be evaluated.
  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.
  • jsonfile_prefix (str | None) – The prefix of json files. It includes the file path and the prefix of filename, e.g., “a/b/prefix”. If not specified, a temp file will be created. Default: None.
  • classwise (bool) – Whether to evaluate the AP for each class.
  • proposal_nums (Sequence[int]) – Proposal number used for evaluating recalls, such as recall@100, recall@1000. Default: (100, 300, 1000).
  • iou_thrs (Sequence[float]) – IoU threshold used for evaluating recalls. If set to a list, the average recall of all IoUs will also be computed. Default: 0.5.
Returns:

COCO-style evaluation metrics.

Return type:

dict[str, float]

format_results(results, jsonfile_prefix=None, **kwargs)[source]

Format the results to json (standard format for COCO evaluation).

Parameters:
  • results (list) – Testing results of the dataset.
  • jsonfile_prefix (str | None) – The prefix of json files. It includes the file path and the prefix of filename, e.g., “a/b/prefix”. If not specified, a temp file will be created. Default: None.
Returns:

(result_files, tmp_dir), result_files is a dict containing the json filepaths, tmp_dir is the temporary directory created for saving json files when jsonfile_prefix is not specified.

Return type:

tuple

get_subset_by_classes()[source]

Get img ids that contain any category in class_ids.

Different from the coco.getImgIds(), this function returns the id if the img contains one of the categories rather than all.

Parameters:class_ids (list[int]) – list of category ids
Returns:integer list of img ids
Return type:ids (list[int])
results2json(results, outfile_prefix)[source]

Dump the detection results to a json file.

There are 3 types of results: proposals, bbox predictions, mask predictions, and they have different data types. This method will automatically recognize the type, and dump them to json files.

Parameters:
  • results (list[list | tuple | ndarray]) – Testing results of the dataset.
  • outfile_prefix (str) – The filename prefix of the json files. If the prefix is “somepath/xxx”, the json files will be named “somepath/xxx.bbox.json”, “somepath/xxx.segm.json”, “somepath/xxx.proposal.json”.
Returns:

Possible keys are “bbox”, “segm”, “proposal”, and values are corresponding filenames.

Return type:

dict[str, str]

class mmdet.datasets.VOCDataset(**kwargs)[source]
evaluate(results, metric='mAP', logger=None, proposal_nums=(100, 300, 1000), iou_thr=0.5, scale_ranges=None)[source]

Evaluate the dataset.

Parameters:
  • results (list) – Testing results of the dataset.
  • metric (str | list[str]) – Metrics to be evaluated.
  • logger (logging.Logger | None | str) – Logger used for printing related information during evaluation. Default: None.
  • proposal_nums (Sequence[int]) – Proposal number used for evaluating recalls, such as recall@100, recall@1000. Default: (100, 300, 1000).
  • iou_thr (float | list[float]) – IoU threshold. It must be a float when evaluating mAP, and can be a list when evaluating recall. Default: 0.5.
  • scale_ranges (list[tuple] | None) – Scale ranges for evaluating mAP. Default: None.
class mmdet.datasets.CityscapesDataset(ann_file, pipeline, classes=None, data_root=None, img_prefix='', seg_prefix=None, proposal_file=None, test_mode=False, filter_empty_gt=True)[source]
evaluate(results, metric='bbox', logger=None, outfile_prefix=None, classwise=False, proposal_nums=(100, 300, 1000), iou_thrs=array([0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]))[source]

Evaluation in Cityscapes protocol.

Parameters:
  • results (list) – Testing results of the dataset.
  • metric (str | list[str]) – Metrics to be evaluated.
  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.
  • outfile_prefix (str | None) –
  • classwise (bool) – Whether to evaluate the AP for each class.
  • proposal_nums (Sequence[int]) – Proposal number used for evaluating recalls, such as recall@100, recall@1000. Default: (100, 300, 1000).
  • iou_thrs (Sequence[float]) – IoU threshold used for evaluating recalls. If set to a list, the average recall of all IoUs will also be computed. Default: 0.5.
Returns:

Evaluation results.

Return type:

dict[str, float]

format_results(results, txtfile_prefix=None)[source]

Format the results to txt (standard format for Cityscapes evaluation).

Parameters:
  • results (list) – Testing results of the dataset.
  • txtfile_prefix (str | None) – The prefix of txt files. It includes the file path and the prefix of filename, e.g., “a/b/prefix”. If not specified, a temp file will be created. Default: None.
Returns:

(result_files, tmp_dir), result_files is a dict containing the txt filepaths, tmp_dir is the temporary directory created for saving txt/png files when txtfile_prefix is not specified.

Return type:

tuple

results2txt(results, outfile_prefix)[source]

Dump the detection results to a txt file.

Parameters:
  • results (list[list | tuple | ndarray]) – Testing results of the dataset.
  • outfile_prefix (str) – The filename prefix of the json files. If the prefix is “somepath/xxx”, the txt files will be named “somepath/xxx.txt”.
Returns:

Result txt files which contain the corresponding instance segmentation images.

Return type:

list[str]

class mmdet.datasets.LVISDataset(ann_file, pipeline, classes=None, data_root=None, img_prefix='', seg_prefix=None, proposal_file=None, test_mode=False, filter_empty_gt=True)[source]
evaluate(results, metric='bbox', logger=None, jsonfile_prefix=None, classwise=False, proposal_nums=(100, 300, 1000), iou_thrs=array([0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]))[source]

Evaluation in LVIS protocol.

Parameters:
  • results (list) – Testing results of the dataset.
  • metric (str | list[str]) – Metrics to be evaluated.
  • logger (logging.Logger | str | None) – Logger used for printing related information during evaluation. Default: None.
  • jsonfile_prefix (str | None) –
  • classwise (bool) – Whether to evaluate the AP for each class.
  • proposal_nums (Sequence[int]) – Proposal number used for evaluating recalls, such as recall@100, recall@1000. Default: (100, 300, 1000).
  • iou_thrs (Sequence[float]) – IoU threshold used for evaluating recalls. If set to a list, the average recall of all IoUs will also be computed. Default: 0.5.
Returns:

Evaluation results.

Return type:

dict[str, float]

class mmdet.datasets.GroupSampler(dataset, samples_per_gpu=1)[source]
class mmdet.datasets.DistributedGroupSampler(dataset, samples_per_gpu=1, num_replicas=None, rank=None)[source]

Sampler that restricts data loading to a subset of the dataset.

It is especially useful in conjunction with torch.nn.parallel.DistributedDataParallel. In such case, each process can pass a DistributedSampler instance as a DataLoader sampler, and load a subset of the original dataset that is exclusive to it.

Note

Dataset is assumed to be of constant size.

Parameters:
  • dataset – Dataset used for sampling.
  • num_replicas (optional) – Number of processes participating in distributed training.
  • rank (optional) – Rank of the current process within num_replicas.
class mmdet.datasets.DistributedSampler(dataset, num_replicas=None, rank=None, shuffle=True)[source]
mmdet.datasets.build_dataloader(dataset, samples_per_gpu, workers_per_gpu, num_gpus=1, dist=True, shuffle=True, seed=None, **kwargs)[source]

Build PyTorch DataLoader.

In distributed training, each GPU/process has a dataloader. In non-distributed training, there is only one dataloader for all GPUs.

Parameters:
  • dataset (Dataset) – A PyTorch dataset.
  • samples_per_gpu (int) – Number of training samples on each GPU, i.e., batch size of each GPU.
  • workers_per_gpu (int) – How many subprocesses to use for data loading for each GPU.
  • num_gpus (int) – Number of GPUs. Only used in non-distributed training.
  • dist (bool) – Distributed training/test or not. Default: True.
  • shuffle (bool) – Whether to shuffle the data at every epoch. Default: True.
  • kwargs – any keyword argument to be used to initialize DataLoader
Returns:

A PyTorch dataloader.

Return type:

DataLoader
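
Example

A minimal non-distributed sketch; a plain TensorDataset stands in for a real detection dataset (real mmdet datasets also carry a group flag used when shuffle=True):

>>> import torch
>>> from torch.utils.data import TensorDataset
>>> from mmdet.datasets import build_dataloader
>>> dataset = TensorDataset(torch.arange(8).float())
>>> loader = build_dataloader(dataset, samples_per_gpu=2, workers_per_gpu=0, dist=False, shuffle=False)
>>> assert len(loader) == 4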

class mmdet.datasets.ConcatDataset(datasets)[source]

A wrapper of concatenated dataset.

Same as torch.utils.data.dataset.ConcatDataset, but it also concatenates the group flag for image aspect ratio.

Parameters:datasets (list[Dataset]) – A list of datasets.
class mmdet.datasets.RepeatDataset(dataset, times)[source]

A wrapper of repeated dataset.

The length of repeated dataset will be times larger than the original dataset. This is useful when the data loading time is long but the dataset is small. Using RepeatDataset can reduce the data loading time between epochs.

Parameters:
  • dataset (Dataset) – The dataset to be repeated.
  • times (int) – Repeat times.
class mmdet.datasets.ClassBalancedDataset(dataset, oversample_thr)[source]

A wrapper of repeated dataset with repeat factor.

Suitable for training on class imbalanced datasets like LVIS. Following the sampling strategy in [1], in each epoch, an image may appear multiple times based on its “repeat factor”. The repeat factor for an image is a function of the frequency of the rarest category labeled in that image. The “frequency of category c” in [0, 1] is defined as the fraction of images in the training set (without repeats) in which category c appears. The dataset needs to implement self.get_cat_ids(idx) to support ClassBalancedDataset. The repeat factor is computed as follows (a numeric illustration is given after the parameter list below):

  1. For each category c, compute the fraction of images that contain it: f(c).
  2. For each category c, compute the category-level repeat factor: r(c) = max(1, sqrt(t/f(c))).
  3. For each image I, compute the image-level repeat factor: r(I) = max_{c in I} r(c).

References

[1]https://arxiv.org/pdf/1903.00621v2.pdf
Parameters:
  • dataset (CustomDataset) – The dataset to be repeated.
  • oversample_thr (float) – frequency threshold below which data is repeated. For categories with f_c >= oversample_thr, there is no oversampling. For categories with f_c < oversample_thr, the degree of oversampling follows the square-root inverse frequency heuristic above.
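
Example

A numeric illustration of the repeat-factor formula above (plain arithmetic, not the dataset API); t stands for oversample_thr and f_c for the frequency of a rare category c:

>>> import math
>>> t, f_c = 1e-3, 1e-4
>>> r_c = max(1.0, math.sqrt(t / f_c))
>>> round(r_c, 4)
3.1623
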
class mmdet.datasets.WIDERFaceDataset(**kwargs)[source]

Reader for the WIDER Face dataset in PASCAL VOC format. Conversion scripts can be found in https://github.com/sovrasov/wider-face-pascal-voc-annotations

pipelines

mmdet.datasets.pipelines.to_tensor(data)[source]

Convert objects of various python types to torch.Tensor.

Supported types are: numpy.ndarray, torch.Tensor, Sequence, int and float.
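
Example

A minimal sketch covering an ndarray, a Python sequence and a float:

>>> import numpy as np
>>> import torch
>>> from mmdet.datasets.pipelines import to_tensor
>>> assert isinstance(to_tensor(np.zeros((2, 2))), torch.Tensor)
>>> assert isinstance(to_tensor([1, 2, 3]), torch.Tensor)
>>> assert isinstance(to_tensor(1.5), torch.Tensor)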

class mmdet.datasets.pipelines.Collect(keys, meta_keys=('filename', 'ori_filename', 'ori_shape', 'img_shape', 'pad_shape', 'scale_factor', 'flip', 'flip_direction', 'img_norm_cfg'))[source]

Collect data from the loader relevant to the specific task.

This is usually the last stage of the data loader pipeline. Typically keys is set to some subset of “img”, “proposals”, “gt_bboxes”, “gt_bboxes_ignore”, “gt_labels”, and/or “gt_masks”.

The “img_meta” item is always populated. The contents of the “img_meta” dictionary depends on “meta_keys”. By default this includes:

  • “img_shape”: shape of the image input to the network as a tuple
    (h, w, c). Note that images may be zero padded on the bottom/right if the batch tensor is larger than this shape.
  • “scale_factor”: a float indicating the preprocessing scale
  • “flip”: a boolean indicating if image flip transform was used
  • “filename”: path to the image file
  • “ori_shape”: original shape of the image as a tuple (h, w, c)
  • “pad_shape”: image shape after padding
  • “img_norm_cfg”: a dict of normalization information:
    • mean - per channel mean subtraction
    • std - per channel std divisor
    • to_rgb - bool indicating if bgr was converted to rgb
class mmdet.datasets.pipelines.LoadAnnotations(with_bbox=True, with_label=True, with_mask=False, with_seg=False, poly2mask=True, file_client_args={'backend': 'disk'})[source]

Load annotations.

Parameters:
  • with_bbox (bool) – Whether to parse and load the bbox annotation. Default: True.
  • with_label (bool) – Whether to parse and load the label annotation. Default: True.
  • with_mask (bool) – Whether to parse and load the mask annotation. Default: False.
  • with_seg (bool) – Whether to parse and load the semantic segmentation annotation. Default: False.
  • poly2mask (bool) – Whether to convert the instance masks from polygons to bitmaps. Default: True.
  • file_client_args (dict) – Arguments to instantiate a FileClient. See mmcv.fileio.FileClient for details. Defaults to dict(backend='disk').
process_polygons(polygons)[source]

Convert polygons to list of ndarray and filter invalid polygons.

Parameters:polygons (list[list]) – polygons of one instance.
Returns:processed polygons.
Return type:list[ndarray]
class mmdet.datasets.pipelines.LoadImageFromFile(to_float32=False, color_type='color', file_client_args={'backend': 'disk'})[source]

Load an image from file.

Required keys are “img_prefix” and “img_info” (a dict that must contain the key “filename”). Added or updated keys are “filename”, “img”, “img_shape”, “ori_shape” (same as img_shape), “pad_shape” (same as img_shape), “scale_factor” (1.0) and “img_norm_cfg” (means=0 and stds=1).

Parameters:
  • to_float32 (bool) – Whether to convert the loaded image to a float32 numpy array. If set to False, the loaded image is an uint8 array. Defaults to False.
  • color_type (str) – The flag argument for mmcv.imfrombytes(). Defaults to ‘color’.
  • file_client_args (dict) – Arguments to instantiate a FileClient. See mmcv.fileio.FileClient for details. Defaults to dict(backend='disk').
class mmdet.datasets.pipelines.LoadMultiChannelImageFromFiles(to_float32=False, color_type='unchanged', file_client_args={'backend': 'disk'})[source]

Load multi-channel images from a list of separate channel files.

Required keys are “img_prefix” and “img_info” (a dict that must contain the key “filename”, which is expected to be a list of filenames). Added or updated keys are “filename”, “img”, “img_shape”, “ori_shape” (same as img_shape), “pad_shape” (same as img_shape), “scale_factor” (1.0) and “img_norm_cfg” (means=0 and stds=1).

Parameters:
  • to_float32 (bool) – Whether to convert the loaded image to a float32 numpy array. If set to False, the loaded image is an uint8 array. Defaults to False.
  • color_type (str) – The flag argument for mmcv.imfrombytes(). Defaults to ‘color’.
  • file_client_args (dict) – Arguments to instantiate a FileClient. See mmcv.fileio.FileClient for details. Defaults to dict(backend='disk').
class mmdet.datasets.pipelines.MultiScaleFlipAug(transforms, img_scale, flip=False, flip_direction='horizontal')[source]

Test-time augmentation with multiple scales and flipping

Parameters:
  • transforms (list[dict]) – Transforms to apply in each augmentation.
  • img_scale (tuple | list[tuple]) – Image scales for resizing.
  • flip (bool) – Whether apply flip augmentation. Default: False.
  • flip_direction (str | list[str]) – Flip augmentation directions, options are “horizontal” and “vertical”. If flip_direction is list, multiple flip augmentations will be applied. It has no effect when flip == False. Default: “horizontal”.
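
Example

An illustrative test-pipeline entry using MultiScaleFlipAug; the scale and the inner transforms follow the common COCO configs and are placeholders rather than requirements.

>>> test_time_aug = dict(
...     type='MultiScaleFlipAug',
...     img_scale=(1333, 800),
...     flip=False,
...     transforms=[
...         dict(type='Resize', keep_ratio=True),
...         dict(type='RandomFlip'),
...         dict(type='Normalize', mean=[123.675, 116.28, 103.53],
...              std=[58.395, 57.12, 57.375], to_rgb=True),
...         dict(type='Pad', size_divisor=32),
...         dict(type='ImageToTensor', keys=['img']),
...         dict(type='Collect', keys=['img']),
...     ])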
class mmdet.datasets.pipelines.Resize(img_scale=None, multiscale_mode='range', ratio_range=None, keep_ratio=True)[source]

Resize images & bbox & mask.

This transform resizes the input image to some scale. Bboxes and masks are then resized with the same scale factor. If the input dict contains the key “scale”, then the scale in the input dict is used, otherwise the specified scale in the init method is used.

img_scale can either be a tuple (single-scale) or a list of tuple (multi-scale). There are 3 multiscale modes:

  • ratio_range is not None: randomly sample a ratio from the ratio range and multiply it with the image scale.
  • ratio_range is None and multiscale_mode == "range": randomly sample a scale from the multiscale range.
  • ratio_range is None and multiscale_mode == "value": randomly sample a scale from multiple scales.
Parameters:
  • img_scale (tuple or list[tuple]) – Image scales for resizing.
  • multiscale_mode (str) – Either “range” or “value”.
  • ratio_range (tuple[float]) – (min_ratio, max_ratio)
  • keep_ratio (bool) – Whether to keep the aspect ratio when resizing the image.
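
Example

Illustrative configs for the three multi-scale modes described above; the concrete scales are placeholders.

>>> # ratio_range is not None: sample a ratio and multiply it with img_scale
>>> resize_ratio = dict(type='Resize', img_scale=(1333, 800),
...                     ratio_range=(0.8, 1.2), keep_ratio=True)
>>> # multiscale_mode='range': sample a scale from the range spanned by two scales
>>> resize_range = dict(type='Resize', img_scale=[(1333, 640), (1333, 800)],
...                     multiscale_mode='range', keep_ratio=True)
>>> # multiscale_mode='value': sample one of the listed scales
>>> resize_value = dict(type='Resize', img_scale=[(1333, 672), (1333, 800)],
...                     multiscale_mode='value', keep_ratio=True)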
class mmdet.datasets.pipelines.RandomFlip(flip_ratio=None, direction='horizontal')[source]

Flip the image & bbox & mask.

If the input dict contains the key “flip”, then the flag will be used, otherwise it will be randomly decided by a ratio specified in the init method.

Parameters:flip_ratio (float, optional) – The flipping probability.
bbox_flip(bboxes, img_shape, direction)[source]

Flip bboxes horizontally.

Parameters:
  • bboxes (ndarray) – shape (…, 4*k)
  • img_shape (tuple) – (height, width)
class mmdet.datasets.pipelines.Pad(size=None, size_divisor=None, pad_val=0)[source]

Pad the image & mask.

There are two padding modes: (1) pad to a fixed size and (2) pad to the minimum size that is divisible by some number.

Parameters:
  • size (tuple, optional) – Fixed padding size.
  • size_divisor (int, optional) – The divisor of padded size.
  • pad_val (float, optional) – Padding value, 0 by default.
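
Example

Illustrative configs for the two padding modes described above; the concrete values are placeholders.

>>> pad_fixed = dict(type='Pad', size=(1024, 1024))    # pad to a fixed (h, w)
>>> pad_divisible = dict(type='Pad', size_divisor=32)  # pad so h and w are divisible by 32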
class mmdet.datasets.pipelines.RandomCrop(crop_size)[source]

Randomly crop the image & bboxes & masks.

Parameters:crop_size (tuple) – Expected size after cropping, (h, w).

Notes

  • If the image is smaller than the crop size, return the original image
  • The keys for bboxes, labels and masks must be aligned. That is, gt_bboxes corresponds to gt_labels and gt_masks, and gt_bboxes_ignore corresponds to gt_labels_ignore and gt_masks_ignore.
  • If there are gt bboxes in an image and the cropping area does not have intersection with any gt bbox, this image is skipped.
class mmdet.datasets.pipelines.Normalize(mean, std, to_rgb=True)[source]

Normalize the image.

Parameters:
  • mean (sequence) – Mean values of 3 channels.
  • std (sequence) – Std values of 3 channels.
  • to_rgb (bool) – Whether to convert the image from BGR to RGB. Default: True.
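
Example

An illustrative Normalize config using the ImageNet statistics found in many of the default configs; the values are conventional, not required.

>>> img_norm_cfg = dict(
...     mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
>>> normalize = dict(type='Normalize', **img_norm_cfg)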
class mmdet.datasets.pipelines.SegRescale(scale_factor=1)[source]

Rescale semantic segmentation maps.

Parameters:scale_factor (float) – The scale factor of the final output.
class mmdet.datasets.pipelines.MinIoURandomCrop(min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), min_crop_size=0.3)[source]

Randomly crop the image & bboxes; the cropped patches must satisfy a minimum IoU requirement with the original bboxes, and the IoU threshold is randomly selected from min_ious.

Parameters:
  • min_ious (tuple) – Minimum IoU thresholds for all intersections with bounding boxes.
  • min_crop_size (float) – Minimum crop size (i.e. h,w := a*h, a*w, where a >= min_crop_size).

Notes

The keys for bboxes, labels and masks should be paired. That is, gt_bboxes corresponds to gt_labels and gt_masks, and gt_bboxes_ignore to gt_labels_ignore and gt_masks_ignore.

class mmdet.datasets.pipelines.Expand(mean=(0, 0, 0), to_rgb=True, ratio_range=(1, 4), seg_ignore_label=None, prob=0.5)[source]

Randomly expand the image & bboxes.

Randomly place the original image on a canvas of ‘ratio’ x original image size filled with mean values. The ratio is in the range of ratio_range.

Parameters:
  • mean (tuple) – mean value of dataset.
  • to_rgb (bool) – Whether to convert the channel order of mean to align with RGB.
  • ratio_range (tuple) – Range of expand ratio.
  • prob (float) – Probability of applying this transformation.
class mmdet.datasets.pipelines.PhotoMetricDistortion(brightness_delta=32, contrast_range=(0.5, 1.5), saturation_range=(0.5, 1.5), hue_delta=18)[source]

Apply photometric distortions to the image sequentially; every transformation is applied with a probability of 0.5. Random contrast is applied either second or second to last.

  1. random brightness
  2. random contrast (mode 0)
  3. convert color from BGR to HSV
  4. random saturation
  5. random hue
  6. convert color from HSV to BGR
  7. random contrast (mode 1)
  8. randomly swap channels
Parameters:
  • brightness_delta (int) – delta of brightness.
  • contrast_range (tuple) – range of contrast.
  • saturation_range (tuple) – range of saturation.
  • hue_delta (int) – delta of hue.
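
Example

An illustrative SSD-style augmentation block combining PhotoMetricDistortion with the Expand and MinIoURandomCrop transforms documented above; the values mirror the common SSD configs and are placeholders rather than requirements.

>>> ssd_augmentations = [
...     dict(type='PhotoMetricDistortion',
...          brightness_delta=32, contrast_range=(0.5, 1.5),
...          saturation_range=(0.5, 1.5), hue_delta=18),
...     dict(type='Expand', mean=[123.675, 116.28, 103.53],
...          to_rgb=True, ratio_range=(1, 4)),
...     dict(type='MinIoURandomCrop',
...          min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), min_crop_size=0.3),
... ]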
class mmdet.datasets.pipelines.InstaBoost(action_candidate=('normal', 'horizontal', 'skip'), action_prob=(1, 0, 0), scale=(0.8, 1.2), dx=15, dy=15, theta=(-1, 1), color_prob=0.5, hflag=False, aug_ratio=0.5)[source]

Data augmentation method from the paper “InstaBoost: Boosting Instance Segmentation Via Probability Map Guided Copy-Pasting”. Implementation details can be found at https://github.com/GothicAi/Instaboost.

mmdet.models

detectors

class mmdet.models.detectors.ATSS(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]
class mmdet.models.detectors.BaseDetector[source]

Base class for detectors.

forward(img, img_metas, return_loss=True, **kwargs)[source]

Calls either forward_train or forward_test depending on whether return_loss=True. Note this setting will change the expected inputs. When return_loss=True, img and img_meta are single-nested (i.e. Tensor and List[dict]), and when return_loss=False, img and img_meta should be double nested (i.e. List[Tensor], List[List[dict]]), with the outer list indicating test-time augmentations.
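
Example

A shape-only sketch of the two input layouts described above; model is a hypothetical loaded detector, so the actual calls are shown as comments.

>>> import torch
>>> img = torch.rand(1, 3, 800, 1216)   # single-nested: a plain Tensor
>>> img_metas = [dict(img_shape=(800, 1216, 3), scale_factor=1.0, flip=False)]
>>> # training-style call, single-nested inputs:
>>> # losses = model(img, img_metas, return_loss=True, gt_bboxes=gt_bboxes, gt_labels=gt_labels)
>>> # test-style call, double-nested inputs (outer lists index test-time augmentations):
>>> # results = model([img], [img_metas], return_loss=False)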

forward_test(imgs, img_metas, **kwargs)[source]
Parameters:
  • imgs (List[Tensor]) – the outer list indicates test-time augmentations and inner Tensor should have a shape NxCxHxW, which contains all images in the batch.
  • img_metas (List[List[dict]]) – the outer list indicates test-time augs (multiscale, flip, etc.) and the inner list indicates images in a batch.
forward_train(imgs, img_metas, **kwargs)[source]
Parameters:
  • img (list[Tensor]) – List of tensors of shape (1, C, H, W). Typically these should be mean centered and std scaled.
  • img_metas (list[dict]) – List of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys, see mmdet.datasets.pipelines.Collect.
  • kwargs (keyword arguments) – Specific to concrete implementation.
show_result(img, result, score_thr=0.3, bbox_color='green', text_color='green', thickness=1, font_scale=0.5, win_name='', show=False, wait_time=0, out_file=None)[source]

Draw result over img.

Parameters:
  • img (str or Tensor) – The image to be displayed.
  • result (Tensor or tuple) – The results to draw over img bbox_result or (bbox_result, segm_result).
  • score_thr (float, optional) – Minimum score of bboxes to be shown. Default: 0.3.
  • bbox_color (str or tuple or Color) – Color of bbox lines.
  • text_color (str or tuple or Color) – Color of texts.
  • thickness (int) – Thickness of lines.
  • font_scale (float) – Font scales of texts.
  • win_name (str) – The window name.
  • wait_time (int) – Value of waitKey param. Default: 0.
  • show (bool) – Whether to show the image. Default: False.
  • out_file (str or None) – The filename to write the image. Default: None.
Returns:

The image with the detection results drawn on it; only returned when neither show nor out_file is specified.

Return type:

img (Tensor)
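
Example

A minimal usage sketch, assuming result was produced by mmdet.apis.inference_detector on the same image and 'demo.jpg' is a placeholder path.

>>> # write the visualization to disk
>>> model.show_result('demo.jpg', result, score_thr=0.5, out_file='demo_result.jpg')
>>> # or get the drawn image back (returned only when neither show nor out_file is set)
>>> drawn = model.show_result('demo.jpg', result)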

class mmdet.models.detectors.SingleStageDetector(backbone, neck=None, bbox_head=None, train_cfg=None, test_cfg=None, pretrained=None)[source]

Base class for single-stage detectors.

Single-stage detectors directly and densely predict bounding boxes on the output features of the backbone+neck.

extract_feat(img)[source]

Directly extract features from the backbone+neck

forward_dummy(img)[source]

Used for computing network flops.

See mmdetection/tools/get_flops.py

forward_train(img, img_metas, gt_bboxes, gt_labels, gt_bboxes_ignore=None)[source]
Parameters:
  • img (Tensor) – Input images of shape (N, C, H, W). Typically these should be mean centered and std scaled.
  • img_metas (list[dict]) – A List of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet.datasets.pipelines.Collect.
  • gt_bboxes (list[Tensor]) – Each item is the ground-truth boxes for one image, in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – Class indices corresponding to each box.
  • gt_bboxes_ignore (None | list[Tensor]) – Specify which bounding boxes can be ignored when computing the loss.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

class mmdet.models.detectors.TwoStageDetector(backbone, neck=None, rpn_head=None, roi_head=None, train_cfg=None, test_cfg=None, pretrained=None)[source]

Base class for two-stage detectors.

Two-stage detectors typically consist of a region proposal network and a task-specific regression head.

async_simple_test(img, img_meta, proposals=None, rescale=False)[source]

Async test without augmentation.

aug_test(imgs, img_metas, rescale=False)[source]

Test with augmentations.

If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].

extract_feat(img)[source]

Directly extract features from the backbone+neck

forward_dummy(img)[source]

Used for computing network flops.

See mmdetection/tools/get_flops.py

forward_train(img, img_metas, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None, proposals=None, **kwargs)[source]
Parameters:
  • img (Tensor) – of shape (N, C, H, W) encoding input images. Typically these should be mean centered and std scaled.
  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
  • gt_bboxes (list[Tensor]) – Each item is the ground-truth boxes for one image, in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – Class indices corresponding to each box.
  • gt_bboxes_ignore (None | list[Tensor]) – Specify which bounding boxes can be ignored when computing the loss.
  • gt_masks (None | Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task.
  • proposals – override rpn proposals with custom proposals. Use when with_rpn is False.
Returns:

a dictionary of loss components

Return type:

dict[str, Tensor]

simple_test(img, img_metas, proposals=None, rescale=False)[source]

Test without augmentation.

class mmdet.models.detectors.RPN(backbone, neck, rpn_head, train_cfg, test_cfg, pretrained=None)[source]
forward_train(img, img_metas, gt_bboxes=None, gt_bboxes_ignore=None)[source]
Parameters:
  • img (Tensor) – Input images of shape (N, C, H, W). Typically these should be mean centered and std scaled.
  • img_metas (list[dict]) – A List of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet.datasets.pipelines.Collect.
  • gt_bboxes (list[Tensor]) – Each item is the ground-truth boxes for one image, in [tl_x, tl_y, br_x, br_y] format.
  • gt_bboxes_ignore (None | list[Tensor]) – Specify which bounding boxes can be ignored when computing the loss.
Returns:

A dictionary of loss components.

Return type:

dict[str, Tensor]

show_result(data, result, dataset=None, top_k=20)[source]

Show RPN proposals on the image.

Although we assume batch size is 1, this method supports arbitrary batch size.

class mmdet.models.detectors.FastRCNN(backbone, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]
forward_test(imgs, img_metas, proposals, **kwargs)[source]
Parameters:
  • imgs (List[Tensor]) – the outer list indicates test-time augmentations and inner Tensor should have a shape NxCxHxW, which contains all images in the batch.
  • img_metas (List[List[dict]]) – the outer list indicates test-time augs (multiscale, flip, etc.) and the inner list indicates images in a batch.
  • proposals (List[List[Tensor]]) – the outer list indicates test-time augs (multiscale, flip, etc.) and the inner list indicates images in a batch. The Tensor should have a shape Px4, where P is the number of proposals.
class mmdet.models.detectors.FasterRCNN(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]
class mmdet.models.detectors.MaskRCNN(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]
class mmdet.models.detectors.CascadeRCNN(backbone, neck=None, rpn_head=None, roi_head=None, train_cfg=None, test_cfg=None, pretrained=None)[source]
show_result(data, result, **kwargs)[source]

Draw result over img.

Parameters:
  • img (str or Tensor) – The image to be displayed.
  • result (Tensor or tuple) – The results to draw over img bbox_result or (bbox_result, segm_result).
  • score_thr (float, optional) – Minimum score of bboxes to be shown. Default: 0.3.
  • bbox_color (str or tuple or Color) – Color of bbox lines.
  • text_color (str or tuple or Color) – Color of texts.
  • thickness (int) – Thickness of lines.
  • font_scale (float) – Font scales of texts.
  • win_name (str) – The window name.
  • wait_time (int) – Value of waitKey param. Default: 0.
  • show (bool) – Whether to show the image. Default: False.
  • out_file (str or None) – The filename to write the image. Default: None.
Returns:

The image with the detection results drawn on it; only returned when neither show nor out_file is specified.

Return type:

img (Tensor)

class mmdet.models.detectors.HybridTaskCascade(**kwargs)[source]
class mmdet.models.detectors.RetinaNet(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]
class mmdet.models.detectors.FCOS(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]
class mmdet.models.detectors.GridRCNN(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]

Grid R-CNN.

This detector is the implementation of:

  • Grid R-CNN (https://arxiv.org/abs/1811.12030)
  • Grid R-CNN Plus: Faster and Better (https://arxiv.org/abs/1906.05688)

class mmdet.models.detectors.MaskScoringRCNN(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]

Mask Scoring RCNN.

https://arxiv.org/abs/1903.00241

class mmdet.models.detectors.RepPointsDetector(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]

RepPoints: Point Set Representation for Object Detection.

This detector is the implementation of the RepPoints detector (https://arxiv.org/pdf/1904.11490).

merge_aug_results(aug_bboxes, aug_scores, img_metas)[source]

Merge augmented detection bboxes and scores.

Parameters:
  • aug_bboxes (list[Tensor]) – shape (n, 4*#class)
  • aug_scores (list[Tensor] or None) – shape (n, #class)
  • img_shapes (list[Tensor]) – shape (3, ).
Returns:

(bboxes, scores)

Return type:

tuple

class mmdet.models.detectors.FOVEA(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]
class mmdet.models.detectors.FSAF(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]
class mmdet.models.detectors.NASFCOS(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]

NAS-FCOS: Fast Neural Architecture Search for Object Detection.

https://arxiv.org/abs/1906.04423

backbones

class mmdet.models.backbones.RegNet(arch, in_channels=3, base_channels=32, strides=(2, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(0, 1, 2, 3), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=-1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=True, dcn=None, stage_with_dcn=(False, False, False, False), plugins=None, with_cp=False, zero_init_residual=True)[source]

RegNet backbone.

More details can be found in the paper Designing Network Design Spaces.

Parameters:
  • arch (dict) – The parameters of RegNet:
    • w0 (int): initial width
    • wa (float): slope of width
    • wm (float): quantization parameter to quantize the width
    • depth (int): depth of the backbone
    • group_w (int): width of group
    • bot_mul (float): bottleneck ratio, i.e. expansion of bottleneck.
  • strides (Sequence[int]) – Strides of the first block of each stage.
  • base_channels (int) – Base channels after stem layer.
  • in_channels (int) – Number of input image channels. Default: 3.
  • dilations (Sequence[int]) – Dilation of each stage.
  • out_indices (Sequence[int]) – Output from which stages.
  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
  • frozen_stages (int) – Stages to be frozen (all param fixed). -1 means not freezing any parameters.
  • norm_cfg (dict) – dictionary to construct and config norm layer.
  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
  • zero_init_residual (bool) – whether to use zero init for last norm layer in resblocks to let them behave as identity.

Example

>>> from mmdet.models import RegNet
>>> import torch
>>> self = RegNet(
...     arch=dict(
...         w0=88,
...         wa=26.31,
...         wm=2.25,
...         group_w=48,
...         depth=25,
...         bot_mul=1.0))
>>> self.eval()
>>> inputs = torch.rand(1, 3, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 96, 8, 8)
(1, 192, 4, 4)
(1, 432, 2, 2)
(1, 1008, 1, 1)
adjust_width_group(widths, bottleneck_ratio, groups)[source]

Adjusts the compatibility of widths and groups.

Parameters:
  • widths (list[int]) – Width of each stage.
  • bottleneck_ratio (float) – Bottleneck ratio.
  • groups (int) – number of groups in each stage
Returns:

The adjusted widths and groups of each stage.

Return type:

tuple(list)

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

generate_regnet(initial_width, width_slope, width_parameter, depth, divisor=8)[source]

Generates per block width from RegNet parameters.

Parameters:
  • initial_width ([int]) – Initial width of the backbone
  • width_slope ([float]) – Slope of the quantized linear function
  • width_parameter ([int]) – Parameter used to quantize the width.
  • depth ([int]) – Depth of the backbone.
  • divisor (int, optional) – The divisor of channels. Defaults to 8.
Returns:

A list of widths of each stage and the number of stages.

Return type:

list, int

get_stages_from_blocks(widths)[source]

Gets widths/stage_blocks of network at each stage

Parameters:widths (list[int]) – Width in each stage.
Returns:width and depth of each stage
Return type:tuple(list)
static quantize_float(number, divisor)[source]

Converts a float to the closest non-zero int divisible by divisor.

Parameters:
  • number (int) – Original number to be quantized.
  • divisor (int) – Divisor used to quantize the number.
Returns:

The quantized number, which is divisible by divisor.

Return type:

int
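
Example

A minimal sketch of the described behaviour (an illustrative reimplementation, not the library source):

>>> def quantize_float(number, divisor):
...     # round to the nearest multiple of divisor, but never below divisor itself
...     return max(divisor, int(round(number / divisor) * divisor))
>>> quantize_float(49.6, 8)
48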

class mmdet.models.backbones.ResNet(depth, in_channels=3, base_channels=64, num_stages=4, strides=(1, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(0, 1, 2, 3), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=-1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=True, dcn=None, stage_with_dcn=(False, False, False, False), plugins=None, with_cp=False, zero_init_residual=True)[source]

ResNet backbone.

Parameters:
  • depth (int) – Depth of resnet, from {18, 34, 50, 101, 152}.
  • in_channels (int) – Number of input image channels. Default: 3.
  • num_stages (int) – Resnet stages. Default: 4.
  • strides (Sequence[int]) – Strides of the first block of each stage.
  • dilations (Sequence[int]) – Dilation of each stage.
  • out_indices (Sequence[int]) – Output from which stages.
  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
  • deep_stem (bool) – Replace the 7x7 conv in the input stem with three 3x3 convs.
  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck.
  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters.
  • norm_cfg (dict) – Dictionary to construct and config norm layer.
  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
  • plugins (list[dict]) –

    List of plugins for stages, each dict contains:

    • cfg (dict, required): Cfg dict to build plugin.
    • position (str, required): Position inside block to insert plugin, options are ‘after_conv1’, ‘after_conv2’, ‘after_conv3’.
    • stages (tuple[bool], optional): Stages to apply plugin, length should be same as ‘num_stages’.
  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
  • zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity.

Example

>>> from mmdet.models import ResNet
>>> import torch
>>> self = ResNet(depth=18)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 64, 8, 8)
(1, 128, 4, 4)
(1, 256, 2, 2)
(1, 512, 1, 1)
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

make_stage_plugins(plugins, stage_idx)[source]

Make plugins for ResNet’s stage_idx-th stage.

Currently we support inserting ‘context_block’, ‘empirical_attention_block’, and ‘nonlocal_block’ into backbones like ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of Bottleneck. An example of the plugins format could be:

>>> plugins=[
...     dict(cfg=dict(type='xxx', arg1='xxx'),
...          stages=(False, True, True, True),
...          position='after_conv2'),
...     dict(cfg=dict(type='yyy'),
...          stages=(True, True, True, True),
...          position='after_conv3'),
...     dict(cfg=dict(type='zzz', postfix='1'),
...          stages=(True, True, True, True),
...          position='after_conv3'),
...     dict(cfg=dict(type='zzz', postfix='2'),
...          stages=(True, True, True, True),
...          position='after_conv3')
... ]
>>> self = ResNet(depth=18)
>>> stage_plugins = self.make_stage_plugins(plugins, 0)
>>> assert len(stage_plugins) == 3

Suppose ‘stage_idx=0’, the structure of blocks in the stage would be:

conv1 -> conv2 -> conv3 -> yyy -> zzz1 -> zzz2

Suppose ‘stage_idx=1’, the structure of blocks in the stage would be:

conv1 -> conv2 -> xxx -> conv3 -> yyy -> zzz1 -> zzz2

If stages is missing, the plugin would be applied to all stages.

Parameters:
  • plugins (list[dict]) – List of plugins cfg to build. The postfix is required if multiple same type plugins are inserted.
  • stage_idx (int) – Index of stage to build
Returns:

Plugins for current stage

Return type:

list[dict]

train(mode=True)[source]

Sets the module in training mode.

This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters:mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.
Returns:self
Return type:Module
class mmdet.models.backbones.ResNetV1d(**kwargs)[source]

ResNetV1d variant described in Bag of Tricks.

Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in the input stem with three 3x3 convs. And in the downsampling block, a 2x2 avg_pool with stride 2 is added before conv, whose stride is changed to 1.

class mmdet.models.backbones.ResNeXt(groups=1, base_width=4, **kwargs)[source]

ResNeXt backbone.

Parameters:
  • depth (int) – Depth of resnet, from {18, 34, 50, 101, 152}.
  • in_channels (int) – Number of input image channels. Default: 3.
  • num_stages (int) – Resnet stages. Default: 4.
  • groups (int) – Group of resnext.
  • base_width (int) – Base width of resnext.
  • strides (Sequence[int]) – Strides of the first block of each stage.
  • dilations (Sequence[int]) – Dilation of each stage.
  • out_indices (Sequence[int]) – Output from which stages.
  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
  • frozen_stages (int) – Stages to be frozen (all param fixed). -1 means not freezing any parameters.
  • norm_cfg (dict) – dictionary to construct and config norm layer.
  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
  • zero_init_residual (bool) – whether to use zero init for last norm layer in resblocks to let them behave as identity.
class mmdet.models.backbones.SSDVGG(input_size, depth, with_last_pool=False, ceil_mode=True, out_indices=(3, 4), out_feature_indices=(22, 34), l2_norm_scale=20.0)[source]

VGG backbone network for single-shot detection.

Parameters:
  • input_size (int) – width and height of input, from {300, 512}.
  • depth (int) – Depth of vgg, from {11, 13, 16, 19}.
  • out_indices (Sequence[int]) – Output from which stages.

Example

>>> import torch
>>> from mmdet.models import SSDVGG
>>> self = SSDVGG(input_size=300, depth=11)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 300, 300)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 1024, 19, 19)
(1, 512, 10, 10)
(1, 256, 5, 5)
(1, 256, 3, 3)
(1, 256, 1, 1)
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.backbones.HRNet(extra, in_channels=3, conv_cfg=None, norm_cfg={'type': 'BN'}, norm_eval=True, with_cp=False, zero_init_residual=False)[source]

HRNet backbone.

High-Resolution Representations for Labeling Pixels and Regions (arXiv: https://arxiv.org/abs/1904.04514).

Parameters:
  • extra (dict) – detailed configuration for each stage of HRNet.
  • in_channels (int) – Number of input image channels. Default: 3.
  • conv_cfg (dict) – dictionary to construct and config conv layer.
  • norm_cfg (dict) – dictionary to construct and config norm layer.
  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
  • zero_init_residual (bool) – whether to use zero init for last norm layer in resblocks to let them behave as identity.

Example

>>> from mmdet.models import HRNet
>>> import torch
>>> extra = dict(
>>>     stage1=dict(
>>>         num_modules=1,
>>>         num_branches=1,
>>>         block='BOTTLENECK',
>>>         num_blocks=(4, ),
>>>         num_channels=(64, )),
>>>     stage2=dict(
>>>         num_modules=1,
>>>         num_branches=2,
>>>         block='BASIC',
>>>         num_blocks=(4, 4),
>>>         num_channels=(32, 64)),
>>>     stage3=dict(
>>>         num_modules=4,
>>>         num_branches=3,
>>>         block='BASIC',
>>>         num_blocks=(4, 4, 4),
>>>         num_channels=(32, 64, 128)),
>>>     stage4=dict(
>>>         num_modules=3,
>>>         num_branches=4,
>>>         block='BASIC',
>>>         num_blocks=(4, 4, 4, 4),
>>>         num_channels=(32, 64, 128, 256)))
>>> self = HRNet(extra, in_channels=1)
>>> self.eval()
>>> inputs = torch.rand(1, 1, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 32, 8, 8)
(1, 64, 4, 4)
(1, 128, 2, 2)
(1, 256, 1, 1)
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

train(mode=True)[source]

Sets the module in training mode.

This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters:mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.
Returns:self
Return type:Module
class mmdet.models.backbones.Res2Net(scales=4, base_width=26, style='pytorch', deep_stem=True, avg_down=True, **kwargs)[source]

Res2Net backbone.

Parameters:
  • scales (int) – Scales used in Res2Net. Default: 4
  • base_width (int) – Basic width of each scale. Default: 26
  • depth (int) – Depth of res2net, from {50, 101, 152}.
  • in_channels (int) – Number of input image channels. Default: 3.
  • num_stages (int) – Res2net stages. Default: 4.
  • strides (Sequence[int]) – Strides of the first block of each stage.
  • dilations (Sequence[int]) – Dilation of each stage.
  • out_indices (Sequence[int]) – Output from which stages.
  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
  • deep_stem (bool) – Replace the 7x7 conv in the input stem with three 3x3 convs.
  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottle2neck.
  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters.
  • norm_cfg (dict) – Dictionary to construct and config norm layer.
  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
  • plugins (list[dict]) –

    List of plugins for stages, each dict contains:

    • cfg (dict, required): Cfg dict to build plugin.
    • position (str, required): Position inside block to insert plugin, options are ‘after_conv1’, ‘after_conv2’, ‘after_conv3’.
    • stages (tuple[bool], optional): Stages to apply plugin, length should be same as ‘num_stages’.
  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
  • zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity.

Example

>>> from mmdet.models import Res2Net
>>> import torch
>>> self = Res2Net(depth=50, scales=4, base_width=26)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 256, 8, 8)
(1, 512, 4, 4)
(1, 1024, 2, 2)
(1, 2048, 1, 1)

necks

class mmdet.models.necks.FPN(in_channels, out_channels, num_outs, start_level=0, end_level=-1, add_extra_convs=False, extra_convs_on_inputs=True, relu_before_extra_convs=False, no_norm_on_lateral=False, conv_cfg=None, norm_cfg=None, act_cfg=None, upsample_cfg={'mode': 'nearest'})[source]

Feature Pyramid Network.

This is an implementation of - Feature Pyramid Networks for Object Detection (https://arxiv.org/abs/1612.03144)

Parameters:
  • in_channels (List[int]) – Number of input channels per scale.
  • out_channels (int) – Number of output channels (used at each scale)
  • num_outs (int) – Number of output scales.
  • start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 0.
  • end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
  • add_extra_convs (bool | str) –

    If bool, it decides whether to add conv layers on top of the original feature maps. Defaults to False. If True, its actual mode is specified by extra_convs_on_inputs. If str, it specifies the source feature map of the extra convs. Only the following options are allowed:

    • ’on_input’: Last feat map of neck inputs (i.e. backbone feature).
    • ’on_lateral’: Last feature map after lateral convs.
    • ’on_output’: The last output feature map after fpn convs.
  • extra_convs_on_inputs (bool, deprecated) – Whether to apply extra convs on the original feature from the backbone. If True, it is equivalent to add_extra_convs=’on_input’. If False, it is equivalent to set add_extra_convs=’on_output’. Default to True.
  • relu_before_extra_convs (bool) – Whether to apply relu before the extra conv. Default: False.
  • no_norm_on_lateral (bool) – Whether to apply norm on lateral. Default: False.
  • conv_cfg (dict) – Config dict for convolution layer. Default: None.
  • norm_cfg (dict) – Config dict for normalization layer. Default: None.
  • act_cfg (str) – Config dict for activation layer in ConvModule. Default: None.
  • upsample_cfg (dict) – Config dict for interpolate layer. Default: dict(mode=’nearest’)

Example

>>> import torch
>>> from mmdet.models.necks import FPN
>>> in_channels = [2, 3, 5, 7]
>>> scales = [340, 170, 84, 43]
>>> inputs = [torch.rand(1, c, s, s)
...           for c, s in zip(in_channels, scales)]
>>> self = FPN(in_channels, 11, len(in_channels)).eval()
>>> outputs = self.forward(inputs)
>>> for i in range(len(outputs)):
...     print(f'outputs[{i}].shape = {outputs[i].shape}')
outputs[0].shape = torch.Size([1, 11, 340, 340])
outputs[1].shape = torch.Size([1, 11, 170, 170])
outputs[2].shape = torch.Size([1, 11, 84, 84])
outputs[3].shape = torch.Size([1, 11, 43, 43])
forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.necks.BFP(Balanced Feature Pyramids)[source]

BFP takes multi-level features as inputs and gather them into a single one, then refine the gathered feature and scatter the refined results to multi-level features. This module is used in Libra R-CNN (CVPR 2019), see https://arxiv.org/pdf/1904.02701.pdf for details.

Parameters:
  • in_channels (int) – Number of input channels (feature maps of all levels should have the same channels).
  • num_levels (int) – Number of input feature levels.
  • conv_cfg (dict) – The config dict for convolution layers.
  • norm_cfg (dict) – The config dict for normalization layers.
  • refine_level (int) – Index of integration and refine level of BSF in multi-level features from bottom to top.
  • refine_type (str) – Type of the refine op, currently support [None, ‘conv’, ‘non_local’].
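
Example

An illustrative neck configuration in the Libra R-CNN style, chaining BFP after a regular FPN; the channel numbers assume a ResNet-50 backbone and are placeholders.

>>> neck = [
...     dict(type='FPN', in_channels=[256, 512, 1024, 2048],
...          out_channels=256, num_outs=5),
...     dict(type='BFP', in_channels=256, num_levels=5,
...          refine_level=2, refine_type='non_local'),
... ]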
forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.necks.HRFPN(High Resolution Feature Pyramids)[source]

arXiv: https://arxiv.org/abs/1904.04514

Parameters:
  • in_channels (list) – number of channels for each branch.
  • out_channels (int) – output channels of feature pyramids.
  • num_outs (int) – number of output stages.
  • pooling_type (str) – pooling for generating feature pyramids from {MAX, AVG}.
  • conv_cfg (dict) – dictionary to construct and config conv layer.
  • norm_cfg (dict) – dictionary to construct and config norm layer.
  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
  • stride (int) – stride of 3x3 convolutional layers
forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.necks.NASFPN(in_channels, out_channels, num_outs, stack_times, start_level=0, end_level=-1, add_extra_convs=False, norm_cfg=None)[source]

NAS-FPN.

NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection. (https://arxiv.org/abs/1904.07392)

forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.necks.FPN_CARAFE(in_channels, out_channels, num_outs, start_level=0, end_level=-1, norm_cfg=None, act_cfg=None, order=('conv', 'norm', 'act'), upsample_cfg={'encoder_dilation': 1, 'encoder_kernel': 3, 'type': 'carafe', 'up_group': 1, 'up_kernel': 5})[source]

FPN_CARAFE is a more flexible implementation of FPN. It allows more choice for upsample methods during the top-down pathway.

It can reproduce the performance of the ICCV 2019 paper CARAFE: Content-Aware ReAssembly of FEatures. Please refer to https://arxiv.org/abs/1905.02188 for more details.

Parameters:
  • in_channels (list[int]) – Number of channels for each input feature map.
  • out_channels (int) – Output channels of feature pyramids.
  • num_outs (int) – Number of output stages.
  • start_level (int) – Start level of feature pyramids. (Default: 0)
  • end_level (int) – End level of feature pyramids. (Default: -1 indicates the last level).
  • norm_cfg (dict) – Dictionary to construct and config norm layer.
  • activate (str) – Type of activation function in ConvModule (Default: None indicates w/o activation).
  • order (dict) – Order of components in ConvModule.
  • upsample (str) – Type of upsample layer.
  • upsample_cfg (dict) – Dictionary to construct and config upsample layer.
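
Example

An illustrative FPN_CARAFE neck config; the upsample_cfg values mirror the defaults shown in the signature, and the channel numbers assume a ResNet-50 backbone.

>>> neck = dict(
...     type='FPN_CARAFE',
...     in_channels=[256, 512, 1024, 2048],
...     out_channels=256,
...     num_outs=5,
...     upsample_cfg=dict(
...         type='carafe', up_kernel=5, up_group=1,
...         encoder_kernel=3, encoder_dilation=1))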
forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.necks.PAFPN(in_channels, out_channels, num_outs, start_level=0, end_level=-1, add_extra_convs=False, extra_convs_on_inputs=True, relu_before_extra_convs=False, no_norm_on_lateral=False, conv_cfg=None, norm_cfg=None, act_cfg=None)[source]

Path Aggregation Network for Instance Segmentation.

This is an implementation of the PAFPN in Path Aggregation Network (https://arxiv.org/abs/1803.01534).

Parameters:
  • in_channels (List[int]) – Number of input channels per scale.
  • out_channels (int) – Number of output channels (used at each scale)
  • num_outs (int) – Number of output scales.
  • start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 0.
  • end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
  • add_extra_convs (bool) – Whether to add conv layers on top of the original feature maps. Default: False.
  • extra_convs_on_inputs (bool) – Whether to apply extra conv on the original feature from the backbone. Default: False.
  • relu_before_extra_convs (bool) – Whether to apply relu before the extra conv. Default: False.
  • no_norm_on_lateral (bool) – Whether to apply norm on lateral. Default: False.
  • conv_cfg (dict) – Config dict for convolution layer. Default: None.
  • norm_cfg (dict) – Config dict for normalization layer. Default: None.
  • act_cfg (str) – Config dict for activation layer in ConvModule. Default: None.
forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.necks.NASFCOS_FPN(in_channels, out_channels, num_outs, start_level=1, end_level=-1, add_extra_convs=False, conv_cfg=None, norm_cfg=None)[source]

FPN structure in NASFPN

NAS-FCOS: Fast Neural Architecture Search for Object Detection (https://arxiv.org/abs/1906.04423)

forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

dense_heads

class mmdet.models.dense_heads.AnchorHead(num_classes, in_channels, feat_channels=256, anchor_generator={'ratios': [0.5, 1.0, 2.0], 'scales': [8, 16, 32], 'strides': [4, 8, 16, 32, 64], 'type': 'AnchorGenerator'}, bbox_coder={'target_means': (0.0, 0.0, 0.0, 0.0), 'target_stds': (1.0, 1.0, 1.0, 1.0), 'type': 'DeltaXYWHBBoxCoder'}, reg_decoded_bbox=False, background_label=None, loss_cls={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, loss_bbox={'beta': 0.1111111111111111, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'}, train_cfg=None, test_cfg=None)[source]

Anchor-based head (RPN, RetinaNet, SSD, etc.).

Parameters:
  • num_classes (int) – Number of categories excluding the background category.
  • in_channels (int) – Number of channels in the input feature map.
  • feat_channels (int) – Number of hidden channels. Used in child classes.
  • anchor_generator (dict) – Config dict for anchor generator
  • bbox_coder (dict) – Config of bounding box coder.
  • reg_decoded_bbox (bool) – If true, the regression loss would be applied on decoded bounding boxes. Default: False
  • background_label (int | None) – Label ID of background, set as 0 for RPN and num_classes for other heads. It will automatically be set to num_classes if None is given.
  • loss_cls (dict) – Config of classification loss.
  • loss_bbox (dict) – Config of localization loss.
  • train_cfg (dict) – Training config of anchor head.
  • test_cfg (dict) – Testing config of anchor head.
forward(feats)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_anchors(featmap_sizes, img_metas, device='cuda')[source]

Get anchors according to feature map sizes.

Parameters:
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.
  • img_metas (list[dict]) – Image meta info.
  • device (torch.device | str) – Device for returned tensors
Returns:

anchor_list (list[Tensor]): Anchors of each image.
valid_flag_list (list[Tensor]): Valid flags of each image.

Return type:

tuple

get_bboxes(cls_scores, bbox_preds, img_metas, cfg=None, rescale=False)[source]

Transform network output for a batch into labeled boxes.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • img_metas (list[dict]) – Size / scale info for each image
  • cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used
  • rescale (bool) – If True, return boxes in original image space
Returns:

Each item in result_list is 2-tuple.

The first item is an (n, 5) tensor, where the first 4 columns are bounding box positions (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the class index of the corresponding box.

Return type:

list[tuple[Tensor, Tensor]]

Example

>>> import mmcv
>>> import torch
>>> from mmdet.models.dense_heads import AnchorHead
>>> self = AnchorHead(
>>>     num_classes=9,
>>>     in_channels=1,
>>>     anchor_generator=dict(
>>>         type='AnchorGenerator',
>>>         scales=[8],
>>>         ratios=[0.5, 1.0, 2.0],
>>>         strides=[4,]))
>>> img_metas = [{'img_shape': (32, 32, 3), 'scale_factor': 1}]
>>> cfg = mmcv.Config(dict(
>>>     score_thr=0.00,
>>>     nms=dict(type='nms', iou_thr=1.0),
>>>     max_per_img=10))
>>> feat = torch.rand(1, 1, 3, 3)
>>> cls_score, bbox_pred = self.forward_single(feat)
>>> # note the input lists are over different levels, not images
>>> cls_scores, bbox_preds = [cls_score], [bbox_pred]
>>> result_list = self.get_bboxes(cls_scores, bbox_preds,
>>>                               img_metas, cfg)
>>> det_bboxes, det_labels = result_list[0]
>>> assert len(result_list) == 1
>>> assert det_bboxes.shape[1] == 5
>>> assert len(det_bboxes) == len(det_labels) == cfg.max_per_img
get_targets(anchor_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, label_channels=1, unmap_outputs=True, return_sampling_results=False)[source]
Compute regression and classification targets for anchors in multiple images.
Parameters:
  • anchor_list (list[list[Tensor]]) – Multi level anchors of each image. The outer list indicates images, and the inner list corresponds to feature levels of the image. Each element of the inner list is a tensor of shape (num_anchors, 4).
  • valid_flag_list (list[list[Tensor]]) – Multi level valid flags of each image. The outer list indicates images, and the inner list corresponds to feature levels of the image. Each element of the inner list is a tensor of shape (num_anchors, )
  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.
  • img_metas (list[dict]) – Meta info of each image.
  • gt_bboxes_ignore_list (list[Tensor]) – Ground truth bboxes to be ignored.
  • gt_labels_list (list[Tensor]) – Ground truth labels of each box.
  • label_channels (int) – Channel of label.
  • unmap_outputs (bool) – Whether to map outputs back to the original set of anchors.
Returns:

labels_list (list[Tensor]): Labels of each level.
label_weights_list (list[Tensor]): Label weights of each level.
bbox_targets_list (list[Tensor]): BBox targets of each level.
bbox_weights_list (list[Tensor]): BBox weights of each level.
num_total_pos (int): Number of positive samples in all images.
num_total_neg (int): Number of negative samples in all images.

additional_returns: This function enables user-defined returns from self._get_targets_single. These returns are currently refined to properties at each feature map (i.e. having HxW dimension). The results will be concatenated after the end.

Return type:

tuple

class mmdet.models.dense_heads.GuidedAnchorHead(num_classes, in_channels, feat_channels=256, approx_anchor_generator={'octave_base_scale': 8, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [4, 8, 16, 32, 64], 'type': 'AnchorGenerator'}, square_anchor_generator={'ratios': [1.0], 'scales': [8], 'strides': [4, 8, 16, 32, 64], 'type': 'AnchorGenerator'}, anchor_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'type': 'DeltaXYWHBBoxCoder'}, bbox_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'type': 'DeltaXYWHBBoxCoder'}, reg_decoded_bbox=False, deformable_groups=4, loc_filter_thr=0.01, background_label=None, train_cfg=None, test_cfg=None, loss_loc={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_shape={'beta': 0.2, 'loss_weight': 1.0, 'type': 'BoundedIoULoss'}, loss_cls={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, loss_bbox={'beta': 1.0, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'})[source]

Guided-Anchor-based head (GA-RPN, GA-RetinaNet, etc.).

This GuidedAnchorHead will predict high-quality feature guided anchors and locations where anchors will be kept in inference. There are mainly 3 categories of bounding-boxes.

  • Sampled 9 pairs for target assignment (approxes).
  • The square boxes where the predicted anchors are based on. (squares)
  • Guided anchors.

Please refer to https://arxiv.org/abs/1901.03278 for more details.

Parameters:
  • num_classes (int) – Number of classes.
  • in_channels (int) – Number of channels in the input feature map.
  • feat_channels (int) – Number of hidden channels.
  • approx_anchor_generator (dict) – Config dict for approx generator
  • square_anchor_generator (dict) – Config dict for square generator
  • anchor_coder (dict) – Config dict for anchor coder
  • bbox_coder (dict) – Config dict for bbox coder
  • deformable_groups (int) – Group number of DCN in FeatureAdaption module.
  • loc_filter_thr (float) – Threshold to filter out unconcerned regions.
  • background_label (int | None) – Label ID of background, set as 0 for RPN and num_classes for other heads. It will automatically be set to num_classes if None is given.
  • loss_loc (dict) – Config of location loss.
  • loss_shape (dict) – Config of anchor shape loss.
  • loss_cls (dict) – Config of classification loss.
  • loss_bbox (dict) – Config of bbox regression loss.
forward(feats)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

ga_loc_targets(gt_bboxes_list, featmap_sizes)[source]

Compute location targets for guided anchoring.

Each feature map is divided into positive, negative and ignore regions:

  • positive regions: target 1, weight 1
  • ignore regions: target 0, weight 0
  • negative regions: target 0, weight 0.1

Parameters:
  • gt_bboxes_list (list[Tensor]) – Gt bboxes of each image.
  • featmap_sizes (list[tuple]) – Multi level sizes of each feature maps.
Returns:

tuple

ga_shape_targets(approx_list, inside_flag_list, square_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, unmap_outputs=True)[source]

Compute guided anchoring targets.

Parameters:
  • approx_list (list[list]) – Multi level approxs of each image.
  • inside_flag_list (list[list]) – Multi level inside flags of each image.
  • square_list (list[list]) – Multi level squares of each image.
  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.
  • img_metas (list[dict]) – Meta info of each image.
  • gt_bboxes_ignore_list (list[Tensor]) – ignore list of gt bboxes.
  • unmap_outputs (bool) – unmap outputs or not.
Returns:

tuple

get_anchors(featmap_sizes, shape_preds, loc_preds, img_metas, use_loc_filter=False, device='cuda')[source]

Get squares according to feature map sizes and guided anchors.

Parameters:
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.
  • shape_preds (list[tensor]) – Multi-level shape predictions.
  • loc_preds (list[tensor]) – Multi-level location predictions.
  • img_metas (list[dict]) – Image meta info.
  • use_loc_filter (bool) – Use loc filter or not.
  • device (torch.device | str) – device for returned tensors
Returns:

Square approxs of each image, guided anchors of each image, and loc masks of each image.

Return type:

tuple

get_bboxes(cls_scores, bbox_preds, shape_preds, loc_preds, img_metas, cfg=None, rescale=False)[source]

Transform network output for a batch into labeled boxes.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • img_metas (list[dict]) – Size / scale info for each image
  • cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used
  • rescale (bool) – If True, return boxes in original image space
Returns:

Each item in result_list is 2-tuple.

The first item is an (n, 5) tensor, where the first 4 columns are bounding box positions (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the class index of the corresponding box.

Return type:

list[tuple[Tensor, Tensor]]

Example

>>> import mmcv
>>> import torch
>>> from mmdet.models.dense_heads import AnchorHead
>>> self = AnchorHead(
>>>     num_classes=9,
>>>     in_channels=1,
>>>     anchor_generator=dict(
>>>         type='AnchorGenerator',
>>>         scales=[8],
>>>         ratios=[0.5, 1.0, 2.0],
>>>         strides=[4,]))
>>> img_metas = [{'img_shape': (32, 32, 3), 'scale_factor': 1}]
>>> cfg = mmcv.Config(dict(
>>>     score_thr=0.00,
>>>     nms=dict(type='nms', iou_thr=1.0),
>>>     max_per_img=10))
>>> feat = torch.rand(1, 1, 3, 3)
>>> cls_score, bbox_pred = self.forward_single(feat)
>>> # note the input lists are over different levels, not images
>>> cls_scores, bbox_preds = [cls_score], [bbox_pred]
>>> result_list = self.get_bboxes(cls_scores, bbox_preds,
>>>                               img_metas, cfg)
>>> det_bboxes, det_labels = result_list[0]
>>> assert len(result_list) == 1
>>> assert det_bboxes.shape[1] == 5
>>> assert len(det_bboxes) == len(det_labels) == cfg.max_per_img
get_sampled_approxs(featmap_sizes, img_metas, device='cuda')[source]

Get sampled approxs and inside flags according to feature map sizes.

Parameters:
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.
  • img_metas (list[dict]) – Image meta info.
  • device (torch.device | str) – device for returned tensors
Returns:

approxes of each image, inside flags of each image

Return type:

tuple

class mmdet.models.dense_heads.FeatureAdaption(in_channels, out_channels, kernel_size=3, deformable_groups=4)[source]

Feature Adaption Module.

Feature Adaption Module is implemented based on DCN v1. It uses anchor shape prediction rather than feature map to predict offsets of deformable conv layer.

Parameters:
  • in_channels (int) – Number of channels in the input feature map.
  • out_channels (int) – Number of channels in the output feature map.
  • kernel_size (int) – Deformable conv kernel size.
  • deformable_groups (int) – Deformable conv group size.
forward(x, shape)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.dense_heads.RPNHead(in_channels, **kwargs)[source]
class mmdet.models.dense_heads.GARPNHead(in_channels, **kwargs)[source]

Guided-Anchor-based RPN head.

class mmdet.models.dense_heads.RetinaHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, anchor_generator={'octave_base_scale': 4, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [8, 16, 32, 64, 128], 'type': 'AnchorGenerator'}, **kwargs)[source]

An anchor-based head used in RetinaNet.

The head contains two subnetworks. The first classifies anchor boxes and the second regresses deltas for the anchors.

Example

>>> import torch
>>> self = RetinaHead(11, 7)
>>> x = torch.rand(1, 7, 32, 32)
>>> cls_score, bbox_pred = self.forward_single(x)
>>> # Each anchor predicts a score for each class except background
>>> cls_per_anchor = cls_score.shape[1] / self.num_anchors
>>> box_per_anchor = bbox_pred.shape[1] / self.num_anchors
>>> assert cls_per_anchor == (self.num_classes)
>>> assert box_per_anchor == 4
class mmdet.models.dense_heads.RetinaSepBNHead(num_classes, num_ins, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, **kwargs)[source]

RetinaHead with separate BN.

In RetinaHead, conv/norm layers are shared across FPN levels. In RetinaSepBNHead, the conv layers are still shared across FPN levels, but the BN layers are separate for each level.

forward(feats)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.dense_heads.GARetinaHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, **kwargs)[source]

Guided-Anchor-based RetinaNet head.

class mmdet.models.dense_heads.SSDHead(num_classes=80, in_channels=(512, 1024, 512, 256, 256, 256), anchor_generator={'basesize_ratio_range': (0.1, 0.9), 'input_size': 300, 'ratios': ([2], [2, 3], [2, 3], [2, 3], [2], [2]), 'scale_major': False, 'strides': [8, 16, 32, 64, 100, 300], 'type': 'SSDAnchorGenerator'}, background_label=None, bbox_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'type': 'DeltaXYWHBBoxCoder'}, reg_decoded_bbox=False, train_cfg=None, test_cfg=None)[source]
forward(feats)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.dense_heads.FCOSHead(num_classes, in_channels, feat_channels=256, stacked_convs=4, strides=(4, 8, 16, 32, 64), regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512), (512, 100000000.0)), center_sampling=False, center_sample_radius=1.5, background_label=None, loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox={'loss_weight': 1.0, 'type': 'IoULoss'}, loss_centerness={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, conv_cfg=None, norm_cfg={'num_groups': 32, 'requires_grad': True, 'type': 'GN'}, train_cfg=None, test_cfg=None)[source]

Anchor-free head used in FCOS.

The FCOS head does not use anchor boxes. Instead, bounding boxes are predicted at each pixel and a centerness measure is used to suppress low-quality predictions.

Example

>>> import torch
>>> self = FCOSHead(11, 7)
>>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]]
>>> cls_score, bbox_pred, centerness = self.forward(feats)
>>> assert len(cls_score) == len(self.scales)
forward(feats)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_points(featmap_sizes, dtype, device)[source]

Get points according to feature map sizes.

Parameters:
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.
  • dtype (torch.dtype) – Type of points.
  • device (torch.device) – Device of points.
Returns:

points of each image.

Return type:

tuple

class mmdet.models.dense_heads.RepPointsHead(num_classes, in_channels, feat_channels=256, point_feat_channels=256, stacked_convs=3, num_points=9, gradient_mul=0.1, point_strides=[8, 16, 32, 64, 128], point_base_scale=4, conv_cfg=None, norm_cfg=None, background_label=None, loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox_init={'beta': 0.1111111111111111, 'loss_weight': 0.5, 'type': 'SmoothL1Loss'}, loss_bbox_refine={'beta': 0.1111111111111111, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'}, use_grid_points=False, center_init=True, transform_method='moment', moment_mul=0.01, train_cfg=None, test_cfg=None)[source]

RepPoint head.

Parameters:
  • in_channels (int) – Number of channels in the input feature map.
  • feat_channels (int) – Number of channels of the feature map.
  • point_feat_channels (int) – Number of channels of points features.
  • stacked_convs (int) – How many conv layers are used.
  • gradient_mul (float) – The multiplier to gradients from points refinement and recognition.
  • point_strides (Iterable) – points strides.
  • point_base_scale (int) – bbox scale for assigning labels.
  • background_label (int | None) – Label ID of background, set as 0 for RPN and num_classes for other heads. It will automatically set as num_classes if None is given.
  • loss_cls (dict) – Config of classification loss.
  • loss_bbox_init (dict) – Config of initial points loss.
  • loss_bbox_refine (dict) – Config of points loss in refinement.
  • use_grid_points (bool) – If we use the bounding box representation, the reppoints are represented as grid points on the bounding box.
  • center_init (bool) – Whether to use center point assignment.
  • transform_method (str) – The methods to transform RepPoints to bbox.
centers_to_bboxes(point_list)[source]

Get bboxes according to center points. Only used in MaxIOUAssigner.

forward(feats)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

gen_grid_from_reg(reg, previous_boxes)[source]

Based on the previous bboxes and regression values, we compute the regressed bboxes and generate the grids on the bboxes.

Parameters:
  • reg – the regression value to previous bboxes.
  • previous_boxes – previous bboxes.
Returns:

generate grids on the regressed bboxes.

get_points(featmap_sizes, img_metas)[source]

Get points according to feature map sizes.

Parameters:
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.
  • img_metas (list[dict]) – Image meta info.
Returns:

points of each image, valid flags of each image

Return type:

tuple

get_targets(proposals_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, stage='init', label_channels=1, unmap_outputs=True)[source]

Compute corresponding GT box and classification targets for proposals.

Parameters:
  • proposals_list (list[list]) – Multi level points/bboxes of each image.
  • valid_flag_list (list[list]) – Multi level valid flags of each image.
  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.
  • img_metas (list[dict]) – Meta info of each image.
  • gt_bboxes_ignore_list (list[Tensor]) – Ground truth bboxes to be ignored.
  • gt_labels_list (list[Tensor]) – Ground truth labels of each box.
  • stage (str) – 'init' or 'refine'. Generate targets for the init stage or the refine stage.
  • label_channels (int) – Channel of label.
  • unmap_outputs (bool) – Whether to map outputs back to the original set of anchors.
Returns:

  • labels_list (list[Tensor]): Labels of each level.
  • label_weights_list (list[Tensor]): Label weights of each level.
  • bbox_gt_list (list[Tensor]): Ground truth bbox of each level.
  • proposal_list (list[Tensor]): Proposals (points/bboxes) of each level.
  • proposal_weights_list (list[Tensor]): Proposal weights of each level.
  • num_total_pos (int): Number of positive samples in all images.
  • num_total_neg (int): Number of negative samples in all images.

Return type:

tuple

offset_to_pts(center_list, pred_list)[source]

Change from point offset to point coordinate.

points2bbox(pts, y_first=True)[source]

Convert the point set into a bounding box.

Parameters:
  • pts – The input point sets (fields); each point set is represented by 2n scalars.
  • y_first – If y_first=True, the point set is represented as [y1, x1, y2, x2, …, yn, xn]; otherwise it is represented as [x1, y1, x2, y2, …, xn, yn].
Returns:

Each point set converted to a bbox [x1, y1, x2, y2].
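
For illustration, here is a minimal plain-PyTorch sketch of a min/max style point-to-bbox conversion, assuming the [x1, y1, x2, y2, …] layout (y_first=False). This is a simplification for readability, not the library implementation; the default transform_method in the signature above is 'moment'.

>>> import torch
>>> pts = torch.tensor([[1., 2., 5., 6., 3., 0.]])  # one set of 3 (x, y) points
>>> xs, ys = pts[:, 0::2], pts[:, 1::2]
>>> bbox = torch.stack(
...     [xs.min(1)[0], ys.min(1)[0], xs.max(1)[0], ys.max(1)[0]], dim=1)
>>> bbox  # tightest box enclosing the point set
tensor([[1., 0., 5., 6.]])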

class mmdet.models.dense_heads.FoveaHead(num_classes, in_channels, feat_channels=256, stacked_convs=4, strides=(4, 8, 16, 32, 64), base_edge_list=(16, 32, 64, 128, 256), scale_ranges=((8, 32), (16, 64), (32, 128), (64, 256), (128, 512)), sigma=0.4, with_deform=False, deformable_groups=4, background_label=None, loss_cls=None, loss_bbox=None, conv_cfg=None, norm_cfg=None, train_cfg=None, test_cfg=None)[source]

FoveaBox: Beyond Anchor-based Object Detector https://arxiv.org/abs/1904.03797

forward(feats)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.dense_heads.FreeAnchorRetinaHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, pre_anchor_topk=50, bbox_thr=0.6, gamma=2.0, alpha=0.5, **kwargs)[source]
class mmdet.models.dense_heads.ATSSHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg={'num_groups': 32, 'requires_grad': True, 'type': 'GN'}, loss_centerness={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, **kwargs)[source]

Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection

The ATSS head structure is similar to that of FCOS, but ATSS uses anchor boxes and assigns labels by Adaptive Training Sample Selection instead of the max-IoU assigner.

https://arxiv.org/abs/1912.02424

forward(feats)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_bboxes(cls_scores, bbox_preds, centernesses, img_metas, cfg=None, rescale=False)[source]

Transform network output for a batch into labeled boxes.

Parameters:
  • cls_scores (list[Tensor]) – Box scores for each scale level. Has shape (N, num_anchors * num_classes, H, W).
  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
  • img_metas (list[dict]) – Size / scale info for each image
  • cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used
  • rescale (bool) – If True, return boxes in original image space
Returns:

Each item in result_list is a 2-tuple.

The first item is an (n, 5) tensor, where the first 4 columns are bounding box positions (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the class index of the corresponding box.

Return type:

list[tuple[Tensor, Tensor]]

Example

>>> import mmcv
>>> import torch
>>> self = AnchorHead(
>>>     num_classes=9,
>>>     in_channels=1,
>>>     anchor_generator=dict(
>>>         type='AnchorGenerator',
>>>         scales=[8],
>>>         ratios=[0.5, 1.0, 2.0],
>>>         strides=[4,]))
>>> img_metas = [{'img_shape': (32, 32, 3), 'scale_factor': 1}]
>>> cfg = mmcv.Config(dict(
>>>     score_thr=0.00,
>>>     nms=dict(type='nms', iou_thr=1.0),
>>>     max_per_img=10))
>>> feat = torch.rand(1, 1, 3, 3)
>>> cls_score, bbox_pred = self.forward_single(feat)
>>> # note the input lists are over different levels, not images
>>> cls_scores, bbox_preds = [cls_score], [bbox_pred]
>>> result_list = self.get_bboxes(cls_scores, bbox_preds,
>>>                               img_metas, cfg)
>>> det_bboxes, det_labels = result_list[0]
>>> assert len(result_list) == 1
>>> assert det_bboxes.shape[1] == 5
>>> assert len(det_bboxes) == len(det_labels) == cfg.max_per_img
get_targets(anchor_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, label_channels=1, unmap_outputs=True)[source]

Get targets for ATSS head.

This method is almost the same as AnchorHead.get_targets(). Besides returning the targets as the parent method does, it also returns the anchors as the first element of the returned tuple.

class mmdet.models.dense_heads.FSAFHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, anchor_generator={'octave_base_scale': 4, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [8, 16, 32, 64, 128], 'type': 'AnchorGenerator'}, **kwargs)[source]

Anchor-free head used in FSAF.

The head contains two subnetworks. The first classifies anchor boxes and the second regresses deltas for the anchors (num_anchors is 1 for anchor-free methods).

Example

>>> import torch
>>> self = FSAFHead(11, 7)
>>> x = torch.rand(1, 7, 32, 32)
>>> cls_score, bbox_pred = self.forward_single(x)
>>> # Each anchor predicts a score for each class except background
>>> cls_per_anchor = cls_score.shape[1] / self.num_anchors
>>> box_per_anchor = bbox_pred.shape[1] / self.num_anchors
>>> assert cls_per_anchor == self.num_classes
>>> assert box_per_anchor == 4
collect_loss_level_single(cls_loss, reg_loss, assigned_gt_inds, labels_seq)[source]

Get the average loss in each FPN level w.r.t. each gt label

Parameters:
  • cls_loss (Tensor) – Classification loss of each feature map pixel, shape (num_anchor, num_class)
  • reg_loss (Tensor) – Regression loss of each feature map pixel, shape (num_anchor, 4)
  • assigned_gt_inds (Tensor) – Indicates which gt the prior is assigned to (0-based; -1 means no assignment). Shape (num_anchor,).
  • labels_seq – The rank of labels. Shape (num_gt,).
Returns:

Average loss of each gt in this level, shape (num_gt,).

reweight_loss_single(cls_loss, reg_loss, assigned_gt_inds, labels, level, min_levels)[source]

Reweight loss values at each level.

Reassign loss values at each level by masking those where the pre-calculated loss is too large. Then return the reduced losses.

Parameters:
  • cls_loss (Tensor) – Element-wise classification loss. Shape: (num_anchors, num_classes)
  • reg_loss (Tensor) – Element-wise regression loss. Shape: (num_anchors, 4)
  • assigned_gt_inds (Tensor) – The gt indices that each anchor bbox is assigned to. -1 denotes a negative anchor, otherwise it is the gt index (0-based). Shape: (num_anchors, ),
  • labels (Tensor) – Label assigned to anchors. Shape: (num_anchors, ).
  • level (int) – The current level index in the pyramid (0-4 for RetinaNet)
  • min_levels (Tensor) – The best-matching level for each gt. Shape: (num_gts, ),
Returns:

  • cls_loss: Reduced corrected classification loss. Scalar.
  • reg_loss: Reduced corrected regression loss. Scalar.
  • pos_flags (Tensor): Corrected bool tensor indicating the final positive anchors. Shape: (num_anchors, ).

Return type:

tuple

class mmdet.models.dense_heads.NASFCOSHead(num_classes, in_channels, feat_channels=256, stacked_convs=4, strides=(4, 8, 16, 32, 64), regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512), (512, 100000000.0)), center_sampling=False, center_sample_radius=1.5, background_label=None, loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox={'loss_weight': 1.0, 'type': 'IoULoss'}, loss_centerness={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, conv_cfg=None, norm_cfg={'num_groups': 32, 'requires_grad': True, 'type': 'GN'}, train_cfg=None, test_cfg=None)[source]

Anchor-free head used in NASFCOS.

It is quite similar to the FCOS head, except that the classification and bbox regression branches use a searched structure of "dconv3x3, conv3x3, dconv3x3, conv1x1" instead.

class mmdet.models.dense_heads.PISARetinaHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, anchor_generator={'octave_base_scale': 4, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [8, 16, 32, 64, 128], 'type': 'AnchorGenerator'}, **kwargs)[source]

PISA Retinanet Head.

The head has the same structure as the RetinaNet head, but differs in two aspects:

  1. Importance-based Sample Reweighting Positive (ISR-P) is applied to change the positive loss weights.
  2. Classification-aware regression loss is adopted as a third loss.
class mmdet.models.dense_heads.PISASSDHead(num_classes=80, in_channels=(512, 1024, 512, 256, 256, 256), anchor_generator={'basesize_ratio_range': (0.1, 0.9), 'input_size': 300, 'ratios': ([2], [2, 3], [2, 3], [2, 3], [2], [2]), 'scale_major': False, 'strides': [8, 16, 32, 64, 100, 300], 'type': 'SSDAnchorGenerator'}, background_label=None, bbox_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'type': 'DeltaXYWHBBoxCoder'}, reg_decoded_bbox=False, train_cfg=None, test_cfg=None)[source]

roi_heads

class mmdet.models.roi_heads.BaseRoIHead(bbox_roi_extractor=None, bbox_head=None, mask_roi_extractor=None, mask_head=None, shared_head=None, train_cfg=None, test_cfg=None)[source]

Base class for RoIHeads

aug_test(x, proposal_list, img_metas, rescale=False, **kwargs)[source]

Test with augmentations.

If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].

forward_train(x, img_meta, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None, **kwargs)[source]

Forward function during training

simple_test(x, proposal_list, img_meta, proposals=None, rescale=False, **kwargs)[source]

Test without augmentation.

class mmdet.models.roi_heads.CascadeRoIHead(num_stages, stage_loss_weights, bbox_roi_extractor=None, bbox_head=None, mask_roi_extractor=None, mask_head=None, shared_head=None, train_cfg=None, test_cfg=None)[source]

Cascade roi head including one bbox head and one mask head.

https://arxiv.org/abs/1712.00726

aug_test(features, proposal_list, img_metas, rescale=False)[source]

Test with augmentations.

If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].

forward_train(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None)[source]
Parameters:
  • x (list[Tensor]) – list of multi-level img features.
  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
  • proposals (list[Tensors]) – list of region proposals.
  • gt_bboxes (list[Tensor]) – each item are the truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
  • gt_masks (None | Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task.
Returns:

a dictionary of loss components

Return type:

dict[str, Tensor]

simple_test(x, proposal_list, img_metas, rescale=False)[source]

Test without augmentation.

class mmdet.models.roi_heads.DoubleHeadRoIHead(reg_roi_scale_factor, **kwargs)[source]

RoI head for Double Head RCNN

https://arxiv.org/abs/1904.06493

class mmdet.models.roi_heads.MaskScoringRoIHead(mask_iou_head, **kwargs)[source]

Mask Scoring RoIHead for Mask Scoring RCNN.

https://arxiv.org/abs/1903.00241

class mmdet.models.roi_heads.HybridTaskCascadeRoIHead(num_stages, stage_loss_weights, semantic_roi_extractor=None, semantic_head=None, semantic_fusion=('bbox', 'mask'), interleaved=True, mask_info_flow=True, **kwargs)[source]

Hybrid task cascade roi head including one bbox head and one mask head.

https://arxiv.org/abs/1901.07518

aug_test(img_feats, proposal_list, img_metas, rescale=False)[source]

Test with augmentations.

If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].

forward_train(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None, gt_semantic_seg=None)[source]
Parameters:
  • x (list[Tensor]) – list of multi-level img features.
  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
  • proposals (list[Tensors]) – list of region proposals.
  • gt_bboxes (list[Tensor]) – each item are the truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – class indices corresponding to each box
  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
  • gt_masks (None | Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task.
Returns:

a dictionary of loss components

Return type:

dict[str, Tensor]

simple_test(x, proposal_list, img_metas, rescale=False)[source]

Test without augmentation.

class mmdet.models.roi_heads.GridRoIHead(grid_roi_extractor, grid_head, **kwargs)[source]

Grid roi head for Grid R-CNN.

https://arxiv.org/abs/1811.12030

simple_test(x, proposal_list, img_metas, proposals=None, rescale=False)[source]

Test without augmentation.

class mmdet.models.roi_heads.ResLayer(depth, stage=3, stride=2, dilation=1, style='pytorch', norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=True, with_cp=False, dcn=None)[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

train(mode=True)[source]

Sets the module in training mode.

This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters:mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.
Returns:self
Return type:Module
class mmdet.models.roi_heads.BBoxHead(with_avg_pool=False, with_cls=True, with_reg=True, roi_feat_size=7, in_channels=256, num_classes=80, bbox_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [0.1, 0.1, 0.2, 0.2], 'type': 'DeltaXYWHBBoxCoder'}, reg_class_agnostic=False, reg_decoded_bbox=False, loss_cls={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': False}, loss_bbox={'beta': 1.0, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'})[source]

Simplest RoI head, with only two fc layers for classification and regression respectively

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

refine_bboxes(rois, labels, bbox_preds, pos_is_gts, img_metas)[source]

Refine bboxes during training.

Parameters:
  • rois (Tensor) – Shape (n*bs, 5), where n is image number per GPU, and bs is the sampled RoIs per image. The first column is the image id and the next 4 columns are x1, y1, x2, y2.
  • labels (Tensor) – Shape (n*bs, ).
  • bbox_preds (Tensor) – Shape (n*bs, 4) or (n*bs, 4*#class).
  • pos_is_gts (list[Tensor]) – Flags indicating if each positive bbox is a gt bbox.
  • img_metas (list[dict]) – Meta info of each image.
Returns:

Refined bboxes of each image in a mini-batch.

Return type:

list[Tensor]

Example

>>> # xdoctest: +REQUIRES(module:kwarray)
>>> import kwarray
>>> import numpy as np
>>> import torch
>>> from mmdet.core.bbox.demodata import random_boxes
>>> self = BBoxHead(reg_class_agnostic=True)
>>> n_roi = 2
>>> n_img = 4
>>> scale = 512
>>> rng = np.random.RandomState(0)
>>> img_metas = [{'img_shape': (scale, scale)}
...              for _ in range(n_img)]
>>> # Create rois in the expected format
>>> roi_boxes = random_boxes(n_roi, scale=scale, rng=rng)
>>> img_ids = torch.randint(0, n_img, (n_roi,))
>>> img_ids = img_ids.float()
>>> rois = torch.cat([img_ids[:, None], roi_boxes], dim=1)
>>> # Create other args
>>> labels = torch.randint(0, 2, (n_roi,)).long()
>>> bbox_preds = random_boxes(n_roi, scale=scale, rng=rng)
>>> # For each image, pretend random positive boxes are gts
>>> is_label_pos = (labels.numpy() > 0).astype(int)
>>> lbl_per_img = kwarray.group_items(is_label_pos,
...                                   img_ids.numpy())
>>> pos_per_img = [sum(lbl_per_img.get(gid, []))
...                for gid in range(n_img)]
>>> pos_is_gts = [
>>>     torch.randint(0, 2, (npos,)).byte().sort(
>>>         descending=True)[0]
>>>     for npos in pos_per_img
>>> ]
>>> bboxes_list = self.refine_bboxes(rois, labels, bbox_preds,
>>>                    pos_is_gts, img_metas)
>>> print(bboxes_list)
regress_by_class(rois, label, bbox_pred, img_meta)[source]

Regress the bbox for the predicted class. Used in Cascade R-CNN.

Parameters:
  • rois (Tensor) – shape (n, 4) or (n, 5)
  • label (Tensor) – shape (n, )
  • bbox_pred (Tensor) – shape (n, 4*(#class)) or (n, 4)
  • img_meta (dict) – Image meta info.
Returns:

Regressed bboxes, the same shape as input rois.

Return type:

Tensor

class mmdet.models.roi_heads.ConvFCBBoxHead(num_shared_convs=0, num_shared_fcs=0, num_cls_convs=0, num_cls_fcs=0, num_reg_convs=0, num_reg_fcs=0, conv_out_channels=256, fc_out_channels=1024, conv_cfg=None, norm_cfg=None, *args, **kwargs)[source]

More general bbox head, with shared conv and fc layers and two optional separated branches.

                            /-> cls convs -> cls fcs -> cls
shared convs -> shared fcs
                            \-> reg convs -> reg fcs -> reg
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.roi_heads.Shared2FCBBoxHead(fc_out_channels=1024, *args, **kwargs)[source]
class mmdet.models.roi_heads.Shared4Conv1FCBBoxHead(fc_out_channels=1024, *args, **kwargs)[source]
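
Shared2FCBBoxHead and Shared4Conv1FCBBoxHead are convenience subclasses that preset the shared branch of ConvFCBBoxHead (two shared fc layers, and four shared convs plus one shared fc, respectively, as the names suggest). A hypothetical usage sketch based on the signatures above; the shapes assume the defaults roi_feat_size=7 and num_classes=80 with class-specific regression.

>>> import torch
>>> self = Shared2FCBBoxHead(in_channels=256, fc_out_channels=1024, num_classes=80)
>>> roi_feats = torch.rand(8, 256, 7, 7)  # pooled features for 8 RoIs
>>> cls_score, bbox_pred = self(roi_feats)
>>> # expected in this version: cls_score (8, num_classes + 1),
>>> # bbox_pred (8, 4 * num_classes) when reg_class_agnostic is False
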
class mmdet.models.roi_heads.DoubleConvFCBBoxHead(num_convs=0, num_fcs=0, conv_out_channels=1024, fc_out_channels=1024, conv_cfg=None, norm_cfg={'type': 'BN'}, **kwargs)[source]

Bbox head used in Double-Head R-CNN

                                  /-> cls
              /-> shared convs ->
                                  \-> reg
roi features
                                  /-> cls
              \-> shared fc    ->
                                  \-> reg
forward(x_cls, x_reg)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.roi_heads.FCNMaskHead(num_convs=4, roi_feat_size=14, in_channels=256, conv_kernel_size=3, conv_out_channels=256, num_classes=80, class_agnostic=False, upsample_cfg={'scale_factor': 2, 'type': 'deconv'}, conv_cfg=None, norm_cfg=None, loss_mask={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_mask': True})[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_seg_masks(mask_pred, det_bboxes, det_labels, rcnn_test_cfg, ori_shape, scale_factor, rescale)[source]

Get segmentation masks from mask_pred and bboxes.

Parameters:
  • mask_pred (Tensor or ndarray) – shape (n, #class, h, w). For single-scale testing, mask_pred is the direct output of model, whose type is Tensor, while for multi-scale testing, it will be converted to numpy array outside of this method.
  • det_bboxes (Tensor) – shape (n, 4/5)
  • det_labels (Tensor) – shape (n, )
  • img_shape (Tensor) – shape (3, )
  • rcnn_test_cfg (dict) – rcnn testing config
  • ori_shape – original image size
Returns:

encoded masks

Return type:

list[list]

class mmdet.models.roi_heads.HTCMaskHead(with_conv_res=True, *args, **kwargs)[source]
forward(x, res_feat=None, return_logits=True, return_feat=True)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.roi_heads.FusedSemanticHead(num_ins, fusion_level, num_convs=4, in_channels=256, conv_out_channels=256, num_classes=183, ignore_label=255, loss_weight=0.2, conv_cfg=None, norm_cfg=None)[source]

Multi-level fused semantic segmentation head.

in_1 -> 1x1 conv ---
                    |
in_2 -> 1x1 conv -- |
                   ||
in_3 -> 1x1 conv - ||
                  |||                  /-> 1x1 conv (mask prediction)
in_4 -> 1x1 conv -----> 3x3 convs (*4)
                    |                  \-> 1x1 conv (feature)
in_5 -> 1x1 conv ---
forward(feats)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.roi_heads.GridHead(grid_points=9, num_convs=8, roi_feat_size=14, in_channels=256, conv_kernel_size=3, point_feat_channels=64, deconv_kernel_size=4, class_agnostic=False, loss_grid={'loss_weight': 15, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, conv_cfg=None, norm_cfg={'num_groups': 36, 'type': 'GN'})[source]
calc_sub_regions()[source]

Compute point specific representation regions.

See Grid R-CNN Plus (https://arxiv.org/abs/1906.05688) for details.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.roi_heads.MaskIoUHead(num_convs=4, num_fcs=2, roi_feat_size=14, in_channels=256, conv_out_channels=256, fc_out_channels=1024, num_classes=80, loss_iou={'loss_weight': 0.5, 'type': 'MSELoss'})[source]

Mask IoU Head.

This head predicts the IoU of predicted masks and corresponding gt masks.

forward(mask_feat, mask_pred)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_mask_scores(mask_iou_pred, det_bboxes, det_labels)[source]

Get the mask scores.

mask_score = bbox_score * mask_iou

get_targets(sampling_results, gt_masks, mask_pred, mask_targets, rcnn_train_cfg)[source]

Compute target of mask IoU.

Mask IoU target is the IoU between the predicted mask (inside a bbox) and the gt mask of the corresponding instance (the whole instance). The intersection area is computed inside the bbox, and the gt mask area is computed in two steps: first compute the gt area inside the bbox, then divide it by the ratio of the gt area inside the bbox to the gt area of the whole instance.

Parameters:
  • sampling_results (list[SamplingResult]) – sampling results.
  • gt_masks (BitmapMask | PolygonMask) – Gt masks (the whole instance) of each image, with the same shape of the input image.
  • mask_pred (Tensor) – Predicted masks of each positive proposal, shape (num_pos, h, w).
  • mask_targets (Tensor) – Gt mask of each positive proposal, binary map of the shape (num_pos, h, w).
  • rcnn_train_cfg (dict) – Training config for R-CNN part.
Returns:

mask iou target (length == num positive).

Return type:

Tensor
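
A minimal plain-PyTorch sketch of the target computation described above; the shapes, the 0.5 binarization threshold, and the area_ratios values are illustrative assumptions, not the library code.

>>> import torch
>>> mask_pred = (torch.rand(2, 28, 28) > 0.5).float()     # binarized predictions inside the bbox
>>> mask_targets = (torch.rand(2, 28, 28) > 0.5).float()  # gt masks cropped to the bbox
>>> area_ratios = torch.tensor([0.8, 0.6])  # gt area inside bbox / gt area of whole instance
>>> overlap = (mask_pred * mask_targets).sum(dim=(-1, -2))
>>> full_gt_areas = mask_targets.sum(dim=(-1, -2)) / area_ratios
>>> union = mask_pred.sum(dim=(-1, -2)) + full_gt_areas - overlap
>>> mask_iou_targets = overlap / union  # one IoU target per positive proposal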

class mmdet.models.roi_heads.SingleRoIExtractor(roi_layer, out_channels, featmap_strides, finest_scale=56)[source]

Extract RoI features from a single level feature map.

If there are multiple input feature levels, each RoI is mapped to a level according to its scale.

Parameters:
  • roi_layer (dict) – Specify RoI layer type and arguments.
  • out_channels (int) – Output channels of RoI layers.
  • featmap_strides (int) – Strides of input feature maps.
  • finest_scale (int) – Scale threshold of mapping to level 0.
forward(feats, rois, roi_scale_factor=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

map_roi_levels(rois, num_levels)[source]

Map rois to corresponding feature levels by scales.

  • scale < finest_scale * 2: level 0
  • finest_scale * 2 <= scale < finest_scale * 4: level 1
  • finest_scale * 4 <= scale < finest_scale * 8: level 2
  • scale >= finest_scale * 8: level 3
Parameters:
  • rois (Tensor) – Input RoIs, shape (k, 5).
  • num_levels (int) – Total level number.
Returns:

Level index (0-based) of each RoI, shape (k, )

Return type:

Tensor
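
A minimal sketch of the mapping rule above, expressed as floor(log2(scale / finest_scale)) and clamped to the valid level range. This is an equivalent formulation for illustration, not necessarily the exact library code; the RoIs follow the documented (k, 5) layout with the image id in the first column.

>>> import torch
>>> finest_scale = 56
>>> rois = torch.tensor([[0., 0., 0., 32., 32.],     # scale  32 -> level 0
...                      [0., 0., 0., 160., 160.],   # scale 160 -> level 1
...                      [0., 0., 0., 320., 320.],   # scale 320 -> level 2
...                      [0., 0., 0., 640., 640.]])  # scale 640 -> level 3
>>> scale = torch.sqrt((rois[:, 3] - rois[:, 1]) * (rois[:, 4] - rois[:, 2]))
>>> target_lvls = torch.floor(torch.log2(scale / finest_scale + 1e-6))
>>> target_lvls.clamp(min=0, max=3).long()
tensor([0, 1, 2, 3])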

num_inputs

Input feature map levels.

Type: int
class mmdet.models.roi_heads.PISARoIHead(bbox_roi_extractor=None, bbox_head=None, mask_roi_extractor=None, mask_head=None, shared_head=None, train_cfg=None, test_cfg=None)[source]
forward_train(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None)[source]

StandardRoIHead with PrIme Sample Attention (PISA), as described in the paper Prime Sample Attention in Object Detection.

Parameters:
  • x (list[Tensor]) – List of multi-level img features.
  • img_metas (list[dict]) – List of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
  • proposals (list[Tensors]) – List of region proposals.
  • gt_bboxes (list[Tensor]) – Each item are the truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.
  • gt_labels (list[Tensor]) – Class indices corresponding to each box
  • gt_bboxes_ignore (list[Tensor], optional) – Specify which bounding boxes can be ignored when computing the loss.
  • gt_masks (None | Tensor) – True segmentation masks for each box used if the architecture supports a segmentation task.
Returns:

a dictionary of loss components

Return type:

dict[str, Tensor]

losses

class mmdet.models.losses.Accuracy(topk=(1, ))[source]
forward(pred, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.losses.CrossEntropyLoss(use_sigmoid=False, use_mask=False, reduction='mean', class_weight=None, loss_weight=1.0)[source]
forward(cls_score, label, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
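
A hypothetical usage sketch for this loss module, based on the documented constructor and forward signatures; shapes and values are illustrative.

>>> import torch
>>> loss_cls = CrossEntropyLoss(use_sigmoid=False, loss_weight=1.0)
>>> cls_score = torch.randn(4, 10)      # raw logits for 4 samples over 10 classes
>>> label = torch.randint(0, 10, (4,))  # integer class targets
>>> loss = loss_cls(cls_score, label)   # scalar, reduced with 'mean' by default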

class mmdet.models.losses.FocalLoss(use_sigmoid=True, gamma=2.0, alpha=0.25, reduction='mean', loss_weight=1.0)[source]
forward(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.losses.SmoothL1Loss(beta=1.0, reduction='mean', loss_weight=1.0)[source]
forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.losses.BalancedL1Loss(alpha=0.5, gamma=1.5, beta=1.0, reduction='mean', loss_weight=1.0)[source]

Balanced L1 Loss

arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019)

forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.losses.MSELoss(reduction='mean', loss_weight=1.0)[source]
forward(pred, target, weight=None, avg_factor=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

mmdet.models.losses.iou_loss(pred, target, eps=1e-06)[source]

IoU loss.

Compute the IoU loss between a set of predicted bboxes and target bboxes. The loss is calculated as the negative log of the IoU.

Parameters:
  • pred (Tensor) – Predicted bboxes of format (x1, y1, x2, y2), shape (n, 4).
  • target (Tensor) – Corresponding gt bboxes, shape (n, 4).
  • eps (float) – Eps to avoid log(0).
Returns:

Loss tensor.

Return type:

Tensor
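
A minimal plain-PyTorch sketch of the negative-log-IoU computation for a single pair of boxes (for illustration only; the documented function operates on batched (n, 4) tensors).

>>> import torch
>>> pred = torch.tensor([[0., 0., 4., 4.]])
>>> target = torch.tensor([[2., 2., 6., 6.]])
>>> lt = torch.max(pred[:, :2], target[:, :2])
>>> rb = torch.min(pred[:, 2:], target[:, 2:])
>>> wh = (rb - lt).clamp(min=0)
>>> overlap = wh[:, 0] * wh[:, 1]  # 2 * 2 = 4
>>> area_pred = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
>>> area_target = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
>>> ious = overlap / (area_pred + area_target - overlap)  # 4 / 28
>>> loss = -torch.log(ious + 1e-06)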

mmdet.models.losses.bounded_iou_loss(pred, target, beta=0.2, eps=0.001)[source]

Improving Object Localization with Fitness NMS and Bounded IoU Loss, https://arxiv.org/abs/1711.00164.

Parameters:
  • pred (tensor) – Predicted bboxes.
  • target (tensor) – Target bboxes.
  • beta (float) – beta parameter in smoothl1.
  • eps (float) – eps to avoid NaN.
class mmdet.models.losses.IoULoss(eps=1e-06, reduction='mean', loss_weight=1.0)[source]
forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.losses.BoundedIoULoss(beta=0.2, eps=0.001, reduction='mean', loss_weight=1.0)[source]
forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.losses.GIoULoss(eps=1e-06, reduction='mean', loss_weight=1.0)[source]
forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmdet.models.losses.GHMC(bins=10, momentum=0, use_sigmoid=True, loss_weight=1.0)[source]

GHM Classification Loss.

Details of the theorem can be viewed in the paper “Gradient Harmonized Single-stage Detector”. https://arxiv.org/abs/1811.05181

Parameters:
  • bins (int) – Number of the unit regions for distribution calculation.
  • momentum (float) – The parameter for moving average.
  • use_sigmoid (bool) – Can only be true for BCE based loss now.
  • loss_weight (float) – The weight of the total GHM-C loss.
forward(pred, target, label_weight, *args, **kwargs)[source]

Calculate the GHM-C loss.

Parameters:
  • pred (float tensor of size [batch_num, class_num]) – The direct prediction of classification fc layer.
  • target (float tensor of size [batch_num, class_num]) – Binary class target for each sample.
  • label_weight (float tensor of size [batch_num, class_num]) – the value is 1 if the sample is valid and 0 if ignored.
Returns:

The gradient harmonized loss.
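
A hypothetical usage sketch based on the documented forward signature; the shapes and hyper-parameter values are illustrative.

>>> import torch
>>> ghmc = GHMC(bins=10, momentum=0.75, use_sigmoid=True, loss_weight=1.0)
>>> pred = torch.randn(8, 20)                      # classification logits
>>> target = torch.randint(0, 2, (8, 20)).float()  # binary class targets
>>> label_weight = torch.ones(8, 20)               # 1 = valid, 0 = ignored
>>> loss = ghmc(pred, target, label_weight)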

class mmdet.models.losses.GHMR(mu=0.02, bins=10, momentum=0, loss_weight=1.0)[source]

GHM Regression Loss.

Details of the theorem can be viewed in the paper “Gradient Harmonized Single-stage Detector” https://arxiv.org/abs/1811.05181

Parameters:
  • mu (float) – The parameter for the Authentic Smooth L1 loss.
  • bins (int) – Number of the unit regions for distribution calculation.
  • momentum (float) – The parameter for moving average.
  • loss_weight (float) – The weight of the total GHM-R loss.
forward(pred, target, label_weight, avg_factor=None)[source]

Calculate the GHM-R loss.

Parameters:
  • pred (float tensor of size [batch_num, 4 (* class_num)]) – The prediction of box regression layer. Channel number can be 4 or 4 * class_num depending on whether it is class-agnostic.
  • target (float tensor of size [batch_num, 4 (* class_num)]) – The target regression values with the same size of pred.
  • label_weight (float tensor of size [batch_num, 4 (* class_num)]) – The weight of each sample, 0 if ignored.
Returns:

The gradient harmonized loss.

mmdet.models.losses.reduce_loss(loss, reduction)[source]

Reduce loss as specified.

Parameters:
  • loss (Tensor) – Elementwise loss tensor.
  • reduction (str) – Options are “none”, “mean” and “sum”.
Returns:

Reduced loss tensor.

Return type:

Tensor
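
A short example of the three documented reduction options:

>>> import torch
>>> loss = torch.tensor([1., 2., 3.])
>>> reduce_loss(loss, 'none')
tensor([1., 2., 3.])
>>> reduce_loss(loss, 'mean')
tensor(2.)
>>> reduce_loss(loss, 'sum')
tensor(6.)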

mmdet.models.losses.weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None)[source]

Apply element-wise weight and reduce loss.

Parameters:
  • loss (Tensor) – Element-wise loss.
  • weight (Tensor) – Element-wise weights.
  • reduction (str) – Same as built-in losses of PyTorch.
  • avg_factor (float) – Average factor when computing the mean of losses.
Returns:

Processed loss values.

Return type:

Tensor
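
A short example, consistent with the weighted_loss example further below: the element-wise weight is applied first, and with reduction='mean' the weighted sum is divided by avg_factor when one is given.

>>> import torch
>>> loss = torch.tensor([1., 2., 3., 4.])
>>> weight = torch.tensor([1., 1., 0., 0.])
>>> weight_reduce_loss(loss, weight, reduction='mean', avg_factor=2)
tensor(1.5000)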

mmdet.models.losses.weighted_loss(loss_func)[source]

Create a weighted version of a given loss function.

To use this decorator, the loss function must have a signature like loss_func(pred, target, **kwargs). The function only needs to compute the element-wise loss without any reduction. This decorator adds weight and reduction arguments to the function. The decorated function will have a signature like loss_func(pred, target, weight=None, reduction='mean', avg_factor=None, **kwargs).

Example:
>>> import torch
>>> @weighted_loss
>>> def l1_loss(pred, target):
>>>     return (pred - target).abs()
>>> pred = torch.Tensor([0, 2, 3])
>>> target = torch.Tensor([1, 1, 1])
>>> weight = torch.Tensor([1, 0, 1])
>>> l1_loss(pred, target)
tensor(1.3333)
>>> l1_loss(pred, target, weight)
tensor(1.)
>>> l1_loss(pred, target, reduction='none')
tensor([1., 1., 2.])
>>> l1_loss(pred, target, weight, avg_factor=2)
tensor(1.5000)
class mmdet.models.losses.L1Loss(reduction='mean', loss_weight=1.0)[source]
forward(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
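
Module-style losses such as this one share the same calling pattern; a hypothetical usage sketch based on the documented forward signature.

>>> import torch
>>> loss_bbox = L1Loss(reduction='mean', loss_weight=1.0)
>>> pred = torch.Tensor([[0., 2., 3., 1.]])
>>> target = torch.Tensor([[1., 1., 1., 1.]])
>>> loss = loss_bbox(pred, target)  # mean absolute error, scaled by loss_weight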

mmdet.models.losses.isr_p(cls_score, bbox_pred, bbox_targets, rois, sampling_results, loss_cls, bbox_coder, k=2, bias=0, num_class=80)[source]

Importance-based Sample Reweighting (ISR_P), positive part.

Parameters:
  • cls_score (Tensor) – Predicted classification scores.
  • bbox_pred (Tensor) – Predicted bbox deltas.
  • bbox_targets (tuple[Tensor]) – A tuple of bbox targets; they are labels, label_weights, bbox_targets, and bbox_weights, respectively.
  • rois (Tensor) – Anchors (single_stage) in shape (n, 4) or RoIs (two_stage) in shape (n, 5).
  • sampling_results (obj) – Sampling results.
  • loss_cls (func) – Classification loss func of the head.
  • bbox_coder (obj) – BBox coder of the head.
  • k (float) – Power of the non-linear mapping.
  • bias (float) – Shift of the non-linear mapping.
  • num_class (int) – Number of classes, default: 80.
Returns:

labels, imp_based_label_weights, bbox_targets, bbox_target_weights

Return type:

tuple([Tensor])

mmdet.models.losses.carl_loss(cls_score, labels, bbox_pred, bbox_targets, loss_bbox, k=1, bias=0.2, avg_factor=None, sigmoid=False, num_class=80)[source]

Classification-Aware Regression Loss (CARL).

Parameters:
  • cls_score (Tensor) – Predicted classification scores.
  • labels (Tensor) – Targets of classification.
  • bbox_pred (Tensor) – Predicted bbox deltas.
  • bbox_targets (Tensor) – Target of bbox regression.
  • loss_bbox (func) – Regression loss func of the head.
  • bbox_coder (obj) – BBox coder of the head.
  • k (float) – Power of the non-linear mapping.
  • bias (float) – Shift of the non-linear mapping.
  • avg_factor (int) – Average factor used in regression loss.
  • sigmoid (bool) – Activation of the classification score.
  • num_class (int) – Number of classes, default: 80.
Returns:

CARL loss dict.

Return type:

dict