API Reference¶
mmdet.apis¶
mmdet.core¶
anchor¶
class mmdet.core.anchor.AnchorGenerator(strides, ratios, scales=None, base_sizes=None, scale_major=True, octave_base_scale=None, scales_per_octave=None, centers=None, center_offset=0.0)[source]¶
Standard anchor generator for 2D anchor-based detectors.
Parameters: - strides (list[int] | list[tuple[int, int]]) – Strides of anchors in multiple feature levels in order (w, h).
- ratios (list[float]) – The list of ratios between the height and width of anchors in a single level.
- scales (list[int] | None) – Anchor scales for anchors in a single level. It cannot be set at the same time as octave_base_scale and scales_per_octave.
- base_sizes (list[int] | None) – The basic sizes of anchors in multiple levels. If None is given, strides will be used as base_sizes. (If strides are non-square, the shortest stride is taken.)
- scale_major (bool) – Whether to multiply scales first when generating base anchors. If true, the anchors in the same row will have the same scales. By default it is True in V2.0.
- octave_base_scale (int) – The base scale of octave.
- scales_per_octave (int) – Number of scales for each octave. octave_base_scale and scales_per_octave are usually used in RetinaNet and the scales should be None when they are set.
- centers (list[tuple[float, float]] | None) – The centers of the anchor relative to the feature grid center in multiple feature levels. By default it is set to be None and not used. If a list of tuple of float is given, they will be used to shift the centers of anchors.
- center_offset (float) – The offset of center in proportion to anchors’ width and height. By default it is 0 in V2.0.
Examples
>>> from mmdet.core import AnchorGenerator
>>> self = AnchorGenerator([16], [1.], [1.], [9])
>>> all_anchors = self.grid_anchors([(2, 2)], device='cpu')
>>> print(all_anchors)
[tensor([[-4.5000, -4.5000,  4.5000,  4.5000],
        [11.5000, -4.5000, 20.5000,  4.5000],
        [-4.5000, 11.5000,  4.5000, 20.5000],
        [11.5000, 11.5000, 20.5000, 20.5000]])]
>>> self = AnchorGenerator([16, 32], [1.], [1.], [9, 18])
>>> all_anchors = self.grid_anchors([(2, 2), (1, 1)], device='cpu')
>>> print(all_anchors)
[tensor([[-4.5000, -4.5000,  4.5000,  4.5000],
        [11.5000, -4.5000, 20.5000,  4.5000],
        [-4.5000, 11.5000,  4.5000, 20.5000],
        [11.5000, 11.5000, 20.5000, 20.5000]]),
tensor([[-9., -9., 9., 9.]])]
gen_base_anchors()[source]¶
Generate base anchors.
Returns: Base anchors of a feature grid in multiple feature levels.
Return type: list[torch.Tensor]
gen_single_level_base_anchors(base_size, scales, ratios, center=None)[source]¶
Generate base anchors of a single level.
Parameters: - base_size (int | float) – Basic size of an anchor.
- scales (torch.Tensor) – Scales of the anchor.
- ratios (torch.Tensor) – The ratio between the height and width of anchors in a single level.
- center (tuple[float], optional) – The center of the base anchor related to a single feature grid. Defaults to None.
Returns: Anchors in a single-level feature map.
Return type: torch.Tensor
grid_anchors(featmap_sizes, device='cuda')[source]¶
Generate grid anchors in multiple feature levels.
Parameters: - featmap_sizes (list[tuple]) – List of feature map sizes in multiple feature levels.
- device (str) – Device where the anchors will be put.
Returns: Anchors in multiple feature levels. The sizes of each tensor should be [N, 4], where N = width * height * num_base_anchors, width and height are the sizes of the corresponding feature level, num_base_anchors is the number of anchors for that level.
Return type: list[torch.Tensor]
num_base_anchors¶
Total number of base anchors in a feature grid.
Type: list[int]
num_levels¶
Number of feature levels that the generator will be applied to.
Type: int
single_level_grid_anchors(base_anchors, featmap_size, stride=(16, 16), device='cuda')[source]¶
Generate grid anchors of a single level.
Note
This function is usually called by method self.grid_anchors.
Parameters: - base_anchors (torch.Tensor) – The base anchors of a feature grid.
- featmap_size (tuple[int]) – Size of the feature maps.
- stride (tuple[int], optional) – Stride of the feature map in order (w, h). Defaults to (16, 16).
- device (str, optional) – Device the tensor will be put on. Defaults to ‘cuda’.
Returns: Anchors in the overall feature maps.
Return type: torch.Tensor
single_level_valid_flags(featmap_size, valid_size, num_base_anchors, device='cuda')[source]¶
Generate the valid flags of anchors in a single feature map.
Parameters: - featmap_size (tuple[int]) – The size of feature maps.
- valid_size (tuple[int]) – The valid size of the feature maps.
- num_base_anchors (int) – The number of base anchors.
- device (str, optional) – Device where the flags will be put. Defaults to ‘cuda’.
Returns: The valid flags of each anchor in a single level feature map.
Return type: torch.Tensor
valid_flags(featmap_sizes, pad_shape, device='cuda')[source]¶
Generate valid flags of anchors in multiple feature levels.
Parameters: - featmap_sizes (list(tuple)) – List of feature map sizes in multiple feature levels.
- pad_shape (tuple) – The padded shape of the image.
- device (str) – Device where the anchors will be put.
Returns: Valid flags of anchors in multiple levels.
Return type: list(torch.Tensor)
class mmdet.core.anchor.LegacyAnchorGenerator(strides, ratios, scales=None, base_sizes=None, scale_major=True, octave_base_scale=None, scales_per_octave=None, centers=None, center_offset=0.0)[source]¶
Legacy anchor generator used in MMDetection V1.x.
Note
Difference to the V2.0 anchor generator:
- The center offset of V1.x anchors is set to 0.5 rather than 0.
- The width/height are reduced by 1 when calculating the anchors’ centers and corners to meet the V1.x coordinate system.
- The anchors’ corners are quantized.
Parameters: - strides (list[int] | list[tuple[int]]) – Strides of anchors in multiple feature levels.
- ratios (list[float]) – The list of ratios between the height and width of anchors in a single level.
- scales (list[int] | None) – Anchor scales for anchors in a single level. It cannot be set at the same time as octave_base_scale and scales_per_octave.
- base_sizes (list[int]) – The basic sizes of anchors in multiple levels. If None is given, strides will be used to generate base_sizes.
- scale_major (bool) – Whether to multiply scales first when generating base anchors. If true, the anchors in the same row will have the same scales. By default it is True in V2.0.
- octave_base_scale (int) – The base scale of octave.
- scales_per_octave (int) – Number of scales for each octave. octave_base_scale and scales_per_octave are usually used in RetinaNet and the scales should be None when they are set.
- centers (list[tuple[float, float]] | None) – The centers of the anchor relative to the feature grid center in multiple feature levels. By default it is set to be None and not used. If a list of tuples of float is given, this list will be used to shift the centers of anchors.
- center_offset (float) – The offset of center in proportion to anchors’ width and height. By default it is 0 in V2.0 but it should be 0.5 in V1.x models.
Examples
>>> from mmdet.core import LegacyAnchorGenerator
>>> self = LegacyAnchorGenerator(
>>>     [16], [1.], [1.], [9], center_offset=0.5)
>>> all_anchors = self.grid_anchors(((2, 2),), device='cpu')
>>> print(all_anchors)
[tensor([[ 0.,  0.,  8.,  8.],
        [16.,  0., 24.,  8.],
        [ 0., 16.,  8., 24.],
        [16., 16., 24., 24.]])]
gen_single_level_base_anchors(base_size, scales, ratios, center=None)[source]¶
Generate base anchors of a single level.
Note
The width/height of anchors are reduced by 1 when calculating the centers and corners to meet the V1.x coordinate system.
Parameters: - base_size (int | float) – Basic size of an anchor.
- scales (torch.Tensor) – Scales of the anchor.
- ratios (torch.Tensor) – The ratio between the height and width of anchors in a single level.
- center (tuple[float], optional) – The center of the base anchor related to a single feature grid. Defaults to None.
Returns: Anchors in a single-level feature map.
Return type: torch.Tensor
mmdet.core.anchor.anchor_inside_flags(flat_anchors, valid_flags, img_shape, allowed_border=0)[source]¶
Check whether the anchors are inside the border.
Parameters: - flat_anchors (torch.Tensor) – Flattened anchors, shape (n, 4).
- valid_flags (torch.Tensor) – An existing valid flags of anchors.
- img_shape (tuple(int)) – Shape of current image.
- allowed_border (int, optional) – The border to allow the valid anchor. Defaults to 0.
Returns: Flags indicating whether the anchors are inside a valid range.
Return type: torch.Tensor
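Example (a minimal sketch; the anchors, flags, and image shape below are made-up inputs, not taken from this page):
>>> import torch
>>> from mmdet.core import anchor_inside_flags
>>> flat_anchors = torch.Tensor([[0, 0, 10, 10], [-5, 0, 10, 10]])
>>> valid_flags = torch.ones(2, dtype=torch.bool)
>>> # img_shape is (h, w); the second anchor crosses the left border
>>> flags = anchor_inside_flags(flat_anchors, valid_flags, img_shape=(20, 20))
>>> assert flags.tolist() == [True, False]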
mmdet.core.anchor.images_to_levels(target, num_levels)[source]¶
Convert targets by image to targets by feature level.
[target_img0, target_img1] -> [target_level0, target_level1, …]
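Example (a sketch assuming target is a list of equal-length per-image tensors and num_levels holds the anchor count of each level, which is how the anchor-target code uses this helper):
>>> import torch
>>> from mmdet.core import images_to_levels
>>> # two images, 6 anchor targets each; the two levels hold 4 and 2 anchors
>>> target = [torch.zeros(6), torch.ones(6)]
>>> levels = images_to_levels(target, [4, 2])
>>> assert [t.shape for t in levels] == [torch.Size([2, 4]), torch.Size([2, 2])]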
mmdet.core.anchor.calc_region(bbox, ratio, featmap_size=None)[source]¶
Calculate a proportional bbox region.
The bbox center is fixed and the new h’ and w’ are h * ratio and w * ratio.
Parameters: - bbox (Tensor) – Bboxes to calculate regions, shape (n, 4).
- ratio (float) – Ratio of the output region.
- featmap_size (tuple) – Feature map size used for clipping the boundary.
Returns: x1, y1, x2, y2
Return type: tuple
class mmdet.core.anchor.YOLOAnchorGenerator(strides, base_sizes)[source]¶
Anchor generator for YOLO.
Parameters: - strides (list[int] | list[tuple[int, int]]) – Strides of anchors in multiple feature levels.
- base_sizes (list[list[tuple[int, int]]]) – The basic sizes of anchors in multiple levels.
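Example (the strides and base sizes below mirror a typical YOLOv3 configuration; they are illustrative values, not mandated by the class):
>>> from mmdet.core import YOLOAnchorGenerator
>>> self = YOLOAnchorGenerator(
>>>     strides=[32, 16, 8],
>>>     base_sizes=[[(116, 90), (156, 198), (373, 326)],
>>>                 [(30, 61), (62, 45), (59, 119)],
>>>                 [(10, 13), (16, 30), (33, 23)]])
>>> assert self.num_levels == 3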
gen_base_anchors()[source]¶
Generate base anchors.
Returns: Base anchors of a feature grid in multiple feature levels.
Return type: list[torch.Tensor]
gen_single_level_base_anchors(base_sizes_per_level, center=None)[source]¶
Generate base anchors of a single level.
Parameters: - base_sizes_per_level (list[tuple[int, int]]) – Basic sizes of anchors.
- center (tuple[float], optional) – The center of the base anchor related to a single feature grid. Defaults to None.
Returns: Anchors in a single-level feature map.
Return type: torch.Tensor
num_levels¶
Number of feature levels that the generator will be applied to.
Type: int
responsible_flags(featmap_sizes, gt_bboxes, device='cuda')[source]¶
Generate responsible anchor flags of grid cells in multiple scales.
Parameters: - featmap_sizes (list(tuple)) – List of feature map sizes in multiple feature levels.
- gt_bboxes (Tensor) – Ground truth boxes, shape (n, 4).
- device (str) – Device where the anchors will be put.
Returns: Responsible flags of anchors in multiple levels.
Return type: list(torch.Tensor)
single_level_responsible_flags(featmap_size, gt_bboxes, stride, num_base_anchors, device='cuda')[source]¶
Generate the responsible flags of anchors in a single feature map.
Parameters: - featmap_size (tuple[int]) – The size of feature maps.
- gt_bboxes (Tensor) – Ground truth boxes, shape (n, 4).
- stride (tuple(int)) – Stride of the current level.
- num_base_anchors (int) – The number of base anchors.
- device (str, optional) – Device where the flags will be put. Defaults to ‘cuda’.
Returns: The valid flags of each anchor in a single level feature map.
Return type: torch.Tensor
bbox¶
mmdet.core.bbox.bbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False, eps=1e-06)[source]¶
Calculate overlap between two sets of bboxes.
If is_aligned is False, then calculate the ious between each bbox of bboxes1 and bboxes2, otherwise the ious between each aligned pair of bboxes1 and bboxes2.
Parameters: - bboxes1 (Tensor) – shape (m, 4) in <x1, y1, x2, y2> format or empty.
- bboxes2 (Tensor) – shape (n, 4) in <x1, y1, x2, y2> format or empty. If is_aligned is True, then m and n must be equal.
- mode (str) – “iou” (intersection over union) or “iof” (intersection over foreground).
Returns: shape (m, n) if is_aligned == False else shape (m, 1)
Return type: ious(Tensor)
Example
>>> bboxes1 = torch.FloatTensor([
>>>     [0, 0, 10, 10],
>>>     [10, 10, 20, 20],
>>>     [32, 32, 38, 42],
>>> ])
>>> bboxes2 = torch.FloatTensor([
>>>     [0, 0, 10, 20],
>>>     [0, 10, 10, 19],
>>>     [10, 10, 20, 20],
>>> ])
>>> bbox_overlaps(bboxes1, bboxes2)
tensor([[0.5000, 0.0000, 0.0000],
        [0.0000, 0.0000, 1.0000],
        [0.0000, 0.0000, 0.0000]])
Example
>>> empty = torch.FloatTensor([])
>>> nonempty = torch.FloatTensor([
>>>     [0, 0, 10, 9],
>>> ])
>>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1)
>>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0)
>>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0)
class mmdet.core.bbox.MaxIoUAssigner(pos_iou_thr, neg_iou_thr, min_pos_iou=0.0, gt_max_assign_all=True, ignore_iof_thr=-1, ignore_wrt_candidates=True, match_low_quality=True, gpu_assign_thr=-1, iou_calculator={'type': 'BboxOverlaps2D'})[source]¶
Assign a corresponding gt bbox or background to each bbox.
Each proposal will be assigned with -1 or a semi-positive integer indicating the ground truth index.
- -1: negative sample, no assigned gt
- semi-positive integer: positive sample, index (0-based) of assigned gt
Parameters: - pos_iou_thr (float) – IoU threshold for positive bboxes.
- neg_iou_thr (float or tuple) – IoU threshold for negative bboxes.
- min_pos_iou (float) – Minimum iou for a bbox to be considered as a positive bbox. Positive samples can have smaller IoU than pos_iou_thr due to the 4th step (assign max IoU sample to each gt).
- gt_max_assign_all (bool) – Whether to assign all bboxes with the same highest overlap with some gt to that gt.
- ignore_iof_thr (float) – IoF threshold for ignoring bboxes (if gt_bboxes_ignore is specified). Negative values mean not ignoring any bboxes.
- ignore_wrt_candidates (bool) – Whether to compute the iof between bboxes and gt_bboxes_ignore, or the contrary.
- match_low_quality (bool) – Whether to allow low quality matches. This is usually allowed for RPN and single stage detectors, but not allowed in the second stage. Details are demonstrated in Step 4.
- gpu_assign_thr (int) – The upper bound of the number of GT for GPU assign. When the number of gt is above this threshold, assignment will be done on CPU. Negative values mean never assigning on CPU.
assign(bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None)[source]¶
Assign gt to bboxes.
This method assigns a gt bbox to every bbox (proposal/anchor); each bbox will be assigned with -1 or a semi-positive number. -1 means negative sample, a semi-positive number is the index (0-based) of the assigned gt. The assignment is done in the following steps; the order matters.
- assign every bbox to the background
- assign proposals whose iou with all gts < neg_iou_thr to 0
- for each bbox, if the iou with its nearest gt >= pos_iou_thr, assign it to that gt
- for each gt bbox, assign its nearest proposals (may be more than one) to itself
Parameters: - bboxes (Tensor) – Bounding boxes to be assigned, shape(n, 4).
- gt_bboxes (Tensor) – Groundtruth boxes, shape (k, 4).
- gt_bboxes_ignore (Tensor, optional) – Ground truth bboxes that are labelled as ignored, e.g., crowd boxes in COCO.
- gt_labels (Tensor, optional) – Label of gt_bboxes, shape (k, ).
Returns: The assign result.
Return type: AssignResult
Example
>>> self = MaxIoUAssigner(0.5, 0.5)
>>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]])
>>> gt_bboxes = torch.Tensor([[0, 0, 10, 9]])
>>> assign_result = self.assign(bboxes, gt_bboxes)
>>> expected_gt_inds = torch.LongTensor([1, 0])
>>> assert torch.all(assign_result.gt_inds == expected_gt_inds)
class mmdet.core.bbox.AssignResult(num_gts, gt_inds, max_overlaps, labels=None)[source]¶
Stores assignments between predicted and truth boxes.
num_gts¶
The number of truth boxes considered when computing this assignment.
Type: int
gt_inds¶
For each predicted box, indicates the 1-based index of the assigned truth box. 0 means unassigned and -1 means ignore.
Type: LongTensor
max_overlaps¶
The IoU between the predicted box and its assigned truth box.
Type: FloatTensor
labels¶
If specified, for each predicted box indicates the category label of the assigned truth box.
Type: None | LongTensor
Example
>>> # An assign result between 4 predicted boxes and 9 true boxes
>>> # where only two boxes were assigned.
>>> num_gts = 9
>>> max_overlaps = torch.LongTensor([0, .5, .9, 0])
>>> gt_inds = torch.LongTensor([-1, 1, 2, 0])
>>> labels = torch.LongTensor([0, 3, 4, 0])
>>> self = AssignResult(num_gts, gt_inds, max_overlaps, labels)
>>> print(str(self))  # xdoctest: +IGNORE_WANT
<AssignResult(num_gts=9, gt_inds.shape=(4,),
              max_overlaps.shape=(4,), labels.shape=(4,))>
>>> # Force addition of gt labels (when adding gt as proposals)
>>> new_labels = torch.LongTensor([3, 4, 5])
>>> self.add_gt_(new_labels)
>>> print(str(self))  # xdoctest: +IGNORE_WANT
<AssignResult(num_gts=9, gt_inds.shape=(7,),
              max_overlaps.shape=(7,), labels.shape=(7,))>
add_gt_(gt_labels)[source]¶
Add ground truth as assigned results.
Parameters: gt_labels (torch.Tensor) – Labels of gt boxes
info¶
A dictionary of info about the object.
Type: dict
num_preds¶
The number of predictions in this assignment.
Type: int
classmethod random(**kwargs)[source]¶
Create random AssignResult for tests or debugging.
Parameters: - num_preds – number of predicted boxes
- num_gts – number of true boxes
- p_ignore (float) – probability of a predicted box assigned to an ignored truth
- p_assigned (float) – probability of a predicted box not being assigned
- p_use_label (float | bool) – with labels or not
- rng (None | int | numpy.random.RandomState) – seed or state
Returns: Randomly generated assign results.
Return type: AssignResult
Example
>>> from mmdet.core.bbox.assigners.assign_result import *  # NOQA
>>> self = AssignResult.random()
>>> print(self.info)
class mmdet.core.bbox.BaseSampler(num, pos_fraction, neg_pos_ub=-1, add_gt_as_proposals=True, **kwargs)[source]¶
Base class of samplers.
sample(assign_result, bboxes, gt_bboxes, gt_labels=None, **kwargs)[source]¶
Sample positive and negative bboxes.
This is a simple implementation of bbox sampling given candidates, assigning results and ground truth bboxes.
Parameters: - assign_result (AssignResult) – Bbox assigning results.
- bboxes (Tensor) – Boxes to be sampled from.
- gt_bboxes (Tensor) – Ground truth bboxes.
- gt_labels (Tensor, optional) – Class labels of ground truth bboxes.
Returns: Sampling result.
Return type: SamplingResult
Example
>>> from mmdet.core.bbox import RandomSampler
>>> from mmdet.core.bbox import AssignResult
>>> from mmdet.core.bbox.demodata import ensure_rng, random_boxes
>>> rng = ensure_rng(None)
>>> assign_result = AssignResult.random(rng=rng)
>>> bboxes = random_boxes(assign_result.num_preds, rng=rng)
>>> gt_bboxes = random_boxes(assign_result.num_gts, rng=rng)
>>> gt_labels = None
>>> self = RandomSampler(num=32, pos_fraction=0.5, neg_pos_ub=-1,
>>>                      add_gt_as_proposals=False)
>>> self = self.sample(assign_result, bboxes, gt_bboxes, gt_labels)
class mmdet.core.bbox.PseudoSampler(**kwargs)[source]¶
A pseudo sampler that does not actually do sampling.
sample(assign_result, bboxes, gt_bboxes, **kwargs)[source]¶
Directly returns the positive and negative indices of samples.
Parameters: - assign_result (AssignResult) – Assigned results.
- bboxes (torch.Tensor) – Bounding boxes.
- gt_bboxes (torch.Tensor) – Ground truth boxes.
Returns: Sampler results.
Return type: SamplingResult
class mmdet.core.bbox.RandomSampler(num, pos_fraction, neg_pos_ub=-1, add_gt_as_proposals=True, **kwargs)[source]¶
Random sampler.
Parameters: - num (int) – Number of samples
- pos_fraction (float) – Fraction of positive samples
- neg_pos_ub (int, optional) – Upper bound number of negative and positive samples. Defaults to -1.
- add_gt_as_proposals (bool, optional) – Whether to add ground truth boxes as proposals. Defaults to True.
random_choice(gallery, num)[source]¶
Randomly select some elements from the gallery.
If gallery is a Tensor, the returned indices will be a Tensor; If gallery is a ndarray or list, the returned indices will be a ndarray.
Parameters: - gallery (Tensor | ndarray | list) – indices pool.
- num (int) – expected sample num.
Returns: sampled indices.
Return type: Tensor or ndarray
class mmdet.core.bbox.InstanceBalancedPosSampler(num, pos_fraction, neg_pos_ub=-1, add_gt_as_proposals=True, **kwargs)[source]¶
Instance balanced sampler that samples equal number of positive samples for each instance.
class mmdet.core.bbox.IoUBalancedNegSampler(num, pos_fraction, floor_thr=-1, floor_fraction=0, num_bins=3, **kwargs)[source]¶
IoU Balanced Sampling.
arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019)
Sampling proposals according to their IoU. floor_fraction of the needed RoIs are randomly sampled from proposals whose IoU is lower than floor_thr. The others are sampled from proposals whose IoU is higher than floor_thr; these proposals are sampled evenly from num_bins bins split evenly by IoU.
Parameters: - num (int) – number of proposals.
- pos_fraction (float) – fraction of positive proposals.
- floor_thr (float) – threshold (minimum) IoU for IoU balanced sampling; set to -1 to use IoU balanced sampling for all proposals.
- floor_fraction (float) – sampling fraction of proposals under floor_thr.
- num_bins (int) – number of bins in IoU balanced sampling.
sample_via_interval(max_overlaps, full_set, num_expected)[source]¶
Sample according to the IoU interval.
Parameters: - max_overlaps (torch.Tensor) – IoU between bounding boxes and ground truth boxes.
- full_set (set(int)) – A full set of indices of boxes.
- num_expected (int) – Number of expected samples.
Returns: Indices of samples
Return type: np.ndarray
class mmdet.core.bbox.CombinedSampler(pos_sampler, neg_sampler, **kwargs)[source]¶
A sampler that combines positive sampler and negative sampler.
class mmdet.core.bbox.OHEMSampler(num, pos_fraction, context, neg_pos_ub=-1, add_gt_as_proposals=True, **kwargs)[source]¶
Online Hard Example Mining Sampler described in Training Region-based Object Detectors with Online Hard Example Mining.
class mmdet.core.bbox.SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, assign_result, gt_flags)[source]¶
Bbox sampling result.
Example
>>> # xdoctest: +IGNORE_WANT
>>> from mmdet.core.bbox.samplers.sampling_result import *  # NOQA
>>> self = SamplingResult.random(rng=10)
>>> print(f'self = {self}')
self = <SamplingResult({
    'neg_bboxes': torch.Size([12, 4]),
    'neg_inds': tensor([ 0,  1,  2,  4,  5,  6,  7,  8,  9, 10, 11, 12]),
    'num_gts': 4,
    'pos_assigned_gt_inds': tensor([], dtype=torch.int64),
    'pos_bboxes': torch.Size([0, 4]),
    'pos_inds': tensor([], dtype=torch.int64),
    'pos_is_gt': tensor([], dtype=torch.uint8)
})>
bboxes¶
Concatenated positive and negative boxes.
Type: torch.Tensor
info¶
Returns a dictionary of info about the object.
classmethod random(rng=None, **kwargs)[source]¶
Parameters: - rng (None | int | numpy.random.RandomState) – seed or state.
- kwargs (keyword arguments) –
- num_preds: number of predicted boxes
- num_gts: number of true boxes
- p_ignore (float): probability of a predicted box assigned to an ignored truth.
- p_assigned (float): probability of a predicted box not being assigned.
- p_use_label (float | bool): with labels or not.
Returns: Randomly generated sampling result.
Return type: SamplingResult
Example
>>> from mmdet.core.bbox.samplers.sampling_result import *  # NOQA
>>> self = SamplingResult.random()
>>> print(self.__dict__)
class mmdet.core.bbox.ScoreHLRSampler(num, pos_fraction, context, neg_pos_ub=-1, add_gt_as_proposals=True, k=0.5, bias=0, score_thr=0.05, iou_thr=0.5, **kwargs)[source]¶
Importance-based Sample Reweighting (ISR_N), described in Prime Sample Attention in Object Detection.
Score hierarchical local rank (HLR) differs from RandomSampler in the negative part. It first computes Score-HLR in a two-step way, then linearly maps the score to the loss weights.
Parameters: - num (int) – Total number of sampled RoIs.
- pos_fraction (float) – Fraction of positive samples.
- context (BaseRoIHead) – RoI head that the sampler belongs to.
- neg_pos_ub (int) – Upper bound of the ratio of num negative to num positive; -1 means no upper bound.
- add_gt_as_proposals (bool) – Whether to add ground truth as proposals.
- k (float) – Power of the non-linear mapping.
- bias (float) – Shift of the non-linear mapping.
- score_thr (float) – Minimum score that a negative sample is to be considered as valid bbox.
static random_choice(gallery, num)[source]¶
Randomly select some elements from the gallery.
If gallery is a Tensor, the returned indices will be a Tensor; If gallery is a ndarray or list, the returned indices will be a ndarray.
Parameters: - gallery (Tensor | ndarray | list) – indices pool.
- num (int) – expected sample num.
Returns: sampled indices.
Return type: Tensor or ndarray
sample(assign_result, bboxes, gt_bboxes, gt_labels=None, img_meta=None, **kwargs)[source]¶
Sample positive and negative bboxes.
This is a simple implementation of bbox sampling given candidates, assigning results and ground truth bboxes.
Parameters: - assign_result (AssignResult) – Bbox assigning results.
- bboxes (Tensor) – Boxes to be sampled from.
- gt_bboxes (Tensor) – Ground truth bboxes.
- gt_labels (Tensor, optional) – Class labels of ground truth bboxes.
Returns: Sampling result and negative label weights.
Return type: tuple[SamplingResult, Tensor]
mmdet.core.bbox.bbox_flip(bboxes, img_shape, direction='horizontal')[source]¶
Flip bboxes horizontally or vertically.
Parameters: - bboxes (Tensor) – Shape (…, 4*k)
- img_shape (tuple) – Image shape.
- direction (str) – Flip direction, options are “horizontal”, “vertical”, “diagonal”. Default: “horizontal”
Returns: Flipped bboxes.
Return type: Tensor
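Example (a small sketch; it assumes the V2.0 convention where a horizontal flip maps x to img_w - x without a -1 offset):
>>> import torch
>>> from mmdet.core.bbox import bbox_flip
>>> bboxes = torch.Tensor([[2, 3, 10, 15]])
>>> # img_shape is (h, w); x1' = 32 - 10 = 22, x2' = 32 - 2 = 30
>>> flipped = bbox_flip(bboxes, img_shape=(32, 32))
>>> assert flipped.tolist() == [[22., 3., 30., 15.]]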
mmdet.core.bbox.bbox_mapping(bboxes, img_shape, scale_factor, flip, flip_direction='horizontal')[source]¶
Map bboxes from the original image scale to testing scale.
mmdet.core.bbox.bbox_mapping_back(bboxes, img_shape, scale_factor, flip, flip_direction='horizontal')[source]¶
Map bboxes from testing scale to original image scale.
mmdet.core.bbox.bbox2roi(bbox_list)[source]¶
Convert a list of bboxes to roi format.
Parameters: bbox_list (list[Tensor]) – a list of bboxes corresponding to a batch of images.
Returns: shape (n, 5), [batch_ind, x1, y1, x2, y2]
Return type: Tensor
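Example (a minimal sketch with two single-box images; the boxes are made up):
>>> import torch
>>> from mmdet.core.bbox import bbox2roi
>>> bbox_list = [torch.Tensor([[0, 0, 10, 10]]),
>>>              torch.Tensor([[5, 5, 20, 20]])]
>>> rois = bbox2roi(bbox_list)
>>> # the first column is the index of the source image in the batch
>>> assert rois.shape == (2, 5)
>>> assert rois[:, 0].tolist() == [0., 1.]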
mmdet.core.bbox.roi2bbox(rois)[source]¶
Convert rois to bounding box format.
Parameters: rois (torch.Tensor) – RoIs with the shape (n, 5) where the first column indicates batch id of each RoI.
Returns: Converted boxes of corresponding rois.
Return type: list[torch.Tensor]
mmdet.core.bbox.bbox2result(bboxes, labels, num_classes)[source]¶
Convert detection results to a list of numpy arrays.
Parameters: - bboxes (torch.Tensor | np.ndarray) – shape (n, 5)
- labels (torch.Tensor | np.ndarray) – shape (n, )
- num_classes (int) – class number, including background class
Returns: bbox results of each class
Return type: list(ndarray)
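Example (a sketch with two detections over three classes; bboxes carry a trailing score column, matching the shape (n, 5) above):
>>> import torch
>>> from mmdet.core.bbox import bbox2result
>>> bboxes = torch.Tensor([[0, 0, 10, 10, 0.9], [5, 5, 20, 20, 0.8]])
>>> labels = torch.LongTensor([0, 2])
>>> result = bbox2result(bboxes, labels, num_classes=3)
>>> # one (k, 5) array per class; class 1 has no detections
>>> assert [r.shape for r in result] == [(1, 5), (0, 5), (1, 5)]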
mmdet.core.bbox.distance2bbox(points, distance, max_shape=None)[source]¶
Decode distance prediction to bounding box.
Parameters: - points (Tensor) – Shape (n, 2), [x, y].
- distance (Tensor) – Distance from the given point to 4 boundaries (left, top, right, bottom).
- max_shape (tuple) – Shape of the image.
Returns: Decoded bboxes.
Return type: Tensor
mmdet.core.bbox.bbox2distance(points, bbox, max_dis=None, eps=0.1)[source]¶
Decode bounding box based on distances.
Parameters: - points (Tensor) – Shape (n, 2), [x, y].
- bbox (Tensor) – Shape (n, 4), “xyxy” format
- max_dis (float) – Upper bound of the distance.
- eps (float) – a small value to ensure target < max_dis instead of <=
Returns: Decoded distances.
Return type: Tensor
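Example (a round-trip sketch showing that distance2bbox and bbox2distance invert each other when no clamping applies; the point and distances are made up):
>>> import torch
>>> from mmdet.core.bbox import distance2bbox, bbox2distance
>>> points = torch.Tensor([[10., 10.]])
>>> distance = torch.Tensor([[2., 3., 4., 5.]])  # left, top, right, bottom
>>> bbox = distance2bbox(points, distance)
>>> assert bbox.tolist() == [[8., 7., 14., 15.]]
>>> assert bbox2distance(points, bbox).tolist() == [[2., 3., 4., 5.]]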
class mmdet.core.bbox.BaseBBoxCoder(**kwargs)[source]¶
Base bounding box coder.
class mmdet.core.bbox.DeltaXYWHBBoxCoder(target_means=(0.0, 0.0, 0.0, 0.0), target_stds=(1.0, 1.0, 1.0, 1.0))[source]¶
Delta XYWH BBox coder.
Following the practice in R-CNN, this coder encodes bbox (x1, y1, x2, y2) into delta (dx, dy, dw, dh) and decodes delta (dx, dy, dw, dh) back to original bbox (x1, y1, x2, y2).
Parameters: - target_means (Sequence[float]) – Denormalizing means of target for delta coordinates
- target_stds (Sequence[float]) – Denormalizing standard deviation of target for delta coordinates
decode(bboxes, pred_bboxes, max_shape=None, wh_ratio_clip=0.016)[source]¶
Apply transformation pred_bboxes to boxes.
Parameters: - bboxes (torch.Tensor) – Basic boxes.
- pred_bboxes (torch.Tensor) – Encoded boxes with shape
- max_shape (tuple[int], optional) – Maximum shape of boxes. Defaults to None.
- wh_ratio_clip (float, optional) – The allowed ratio between width and height.
Returns: Decoded boxes.
Return type: torch.Tensor
encode(bboxes, gt_bboxes)[source]¶
Get box regression transformation deltas that can be used to transform the bboxes into the gt_bboxes.
Parameters: - bboxes (torch.Tensor) – Source boxes, e.g., object proposals.
- gt_bboxes (torch.Tensor) – Target of the transformation, e.g., ground-truth boxes.
Returns: Box transformation deltas
Return type: torch.Tensor
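Example (an encode/decode round trip with default means and stds; the boxes are made up, and decode recovers the targets up to floating point error):
>>> import torch
>>> from mmdet.core.bbox import DeltaXYWHBBoxCoder
>>> coder = DeltaXYWHBBoxCoder()
>>> proposals = torch.Tensor([[0., 0., 10., 10.]])
>>> gts = torch.Tensor([[1., 1., 9., 9.]])
>>> deltas = coder.encode(proposals, gts)
>>> decoded = coder.decode(proposals, deltas)
>>> assert torch.allclose(decoded, gts, atol=1e-4)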
class mmdet.core.bbox.TBLRBBoxCoder(normalizer=4.0)[source]¶
TBLR BBox coder.
Following the practice in FSAF, this coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left, right) and decodes them back to the original.
Parameters: normalizer (list | float) – Normalization factor to be divided with when coding the coordinates. If it is a list, it should have length of 4 indicating normalization factor in tblr dims. Otherwise it is a unified float factor for all dims. Default: 4.0
decode(bboxes, pred_bboxes, max_shape=None)[source]¶
Apply transformation pred_bboxes to boxes.
Parameters: - bboxes (torch.Tensor) – Basic boxes.
- pred_bboxes (torch.Tensor) – Encoded boxes with shape
- max_shape (tuple[int], optional) – Maximum shape of boxes. Defaults to None.
Returns: Decoded boxes.
Return type: torch.Tensor
encode(bboxes, gt_bboxes)[source]¶
Get box regression transformation deltas that can be used to transform the bboxes into the gt_bboxes in the (top, left, bottom, right) order.
Parameters: - bboxes (torch.Tensor) – source boxes, e.g., object proposals.
- gt_bboxes (torch.Tensor) – target of the transformation, e.g., ground truth boxes.
Returns: Box transformation deltas
Return type: torch.Tensor
class mmdet.core.bbox.CenterRegionAssigner(pos_scale, neg_scale, min_pos_iof=0.01, ignore_gt_scale=0.5, foreground_dominate=False, iou_calculator={'type': 'BboxOverlaps2D'})[source]¶
Assign pixels at the center region of a bbox as positive.
Each proposal will be assigned with -1, 0, or a positive integer indicating the ground truth index.
- -1: negative samples
- semi-positive numbers: positive sample, index (0-based) of assigned gt
Parameters: - pos_scale (float) – Threshold within which pixels are labelled as positive.
- neg_scale (float) – Threshold above which pixels are labelled as negative.
- min_pos_iof (float) – Minimum iof of a pixel with a gt to be labelled as positive. Default: 1e-2
- ignore_gt_scale (float) – Threshold within which the pixels are ignored when the gt is labelled as shadowed. Default: 0.5
- foreground_dominate (bool) – If True, the bbox will be assigned as positive when a gt’s kernel region overlaps with another’s shadowed (ignored) region, otherwise it is set as ignored. Default to False.
assign(bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None)[source]¶
Assign gt to bboxes.
This method assigns gts to every bbox (proposal/anchor); each bbox will be assigned with -1 or a semi-positive number. -1 means negative sample, a semi-positive number is the index (0-based) of the assigned gt.
Parameters: - bboxes (Tensor) – Bounding boxes to be assigned, shape(n, 4).
- gt_bboxes (Tensor) – Groundtruth boxes, shape (k, 4).
- gt_bboxes_ignore (tensor, optional) – Ground truth bboxes that are labelled as ignored, e.g., crowd boxes in COCO.
- gt_labels (tensor, optional) – Label of gt_bboxes, shape (num_gts,).
Returns: The assigned result. Note that shadowed_labels of shape (N, 2) is also added as an assign_result attribute. shadowed_labels is a tensor composed of N pairs of [anchor_ind, class_label], where N is the number of anchors that lie in the outer region of a gt, anchor_ind is the shadowed anchor index and class_label is the shadowed class label.
Return type: AssignResult
Example
>>> self = CenterRegionAssigner(0.2, 0.2)
>>> bboxes = torch.Tensor([[0, 0, 10, 10], [10, 10, 20, 20]])
>>> gt_bboxes = torch.Tensor([[0, 0, 10, 10]])
>>> assign_result = self.assign(bboxes, gt_bboxes)
>>> expected_gt_inds = torch.LongTensor([1, 0])
>>> assert torch.all(assign_result.gt_inds == expected_gt_inds)
assign_one_hot_gt_indices(is_bbox_in_gt_core, is_bbox_in_gt_shadow, gt_priority=None)[source]¶
Assign only one gt index to each prior box.
Gts with large gt_priority are more likely to be assigned.
Parameters: - is_bbox_in_gt_core (Tensor) – Bool tensor indicating the bbox center is in the core area of a gt (e.g. 0-0.2). Shape: (num_prior, num_gt).
- is_bbox_in_gt_shadow (Tensor) – Bool tensor indicating the bbox center is in the shadowed area of a gt (e.g. 0.2-0.5). Shape: (num_prior, num_gt).
- gt_priority (Tensor) – Priorities of gts. The gt with a higher priority is more likely to be assigned to the bbox when the bbox matches multiple gts. Shape: (num_gt, ).
Returns: Returns (assigned_gt_inds, shadowed_gt_inds).
- assigned_gt_inds: The assigned gt index of each prior bbox (i.e. index from 1 to num_gts). Shape: (num_prior, ).
- shadowed_gt_inds: shadowed gt indices. It is a tensor of shape (num_ignore, 2) with first column being the shadowed prior bbox indices and the second column the shadowed gt indices (1-based).
Return type: tuple
get_gt_priorities(gt_bboxes)[source]¶
Get gt priorities according to their areas.
Smaller gt has higher priority.
Parameters: gt_bboxes (Tensor) – Ground truth boxes, shape (k, 4).
Returns: The priority of gts so that gts with larger priority are more likely to be assigned. Shape (k, )
Return type: Tensor
mask¶
mmdet.core.mask.split_combined_polys(polys, poly_lens, polys_per_mask)[source]¶
Split the combined 1-D polys into masks.
A mask is represented as a list of polys, and a poly is represented as a 1-D array. In the dataset, all masks are concatenated into a single 1-D tensor. Here we need to split the tensor into original representations.
Parameters: - polys (list) – a list (length = image num) of 1-D tensors
- poly_lens (list) – a list (length = image num) of poly length
- polys_per_mask (list) – a list (length = image num) of poly number of each mask
Returns: a list (length = image num) of list (length = mask num) of list (length = poly num) of numpy array.
Return type: list
mmdet.core.mask.mask_target(pos_proposals_list, pos_assigned_gt_inds_list, gt_masks_list, cfg)[source]¶
Compute mask target for positive proposals in multiple images.
Parameters: - pos_proposals_list (list[Tensor]) – Positive proposals in multiple images.
- pos_assigned_gt_inds_list (list[Tensor]) – Assigned GT indices for each positive proposal.
- gt_masks_list (list[BaseInstanceMasks]) – Ground truth masks of each image.
- cfg (dict) – Config dict that specifies the mask size.
Returns: Mask target of each image.
Return type: list[Tensor]
class mmdet.core.mask.BaseInstanceMasks[source]¶
Base class for instance masks.
areas¶
Areas of each instance.
Type: ndarray
crop(bbox)[source]¶
Crop each mask by the given bbox.
Parameters: bbox (ndarray) – Bbox in format [x1, y1, x2, y2], shape (4, ).
Returns: The cropped masks.
Return type: BaseInstanceMasks
crop_and_resize(bboxes, out_shape, inds, device, interpolation='bilinear')[source]¶
Crop and resize masks by the given bboxes.
This function is mainly used in mask targets computation. It first aligns masks to bboxes by assigned_inds, then crops the masks by the assigned bboxes and resizes them to the size of (mask_h, mask_w).
Parameters: - bboxes (Tensor) – Bboxes in format [x1, y1, x2, y2], shape (N, 4)
- out_shape (tuple[int]) – Target (h, w) of resized mask
- inds (ndarray) – Indexes to assign masks to each bbox
- device (str) – Device of bboxes
- interpolation (str) – See mmcv.imresize
Returns: The cropped and resized masks.
Return type: BaseInstanceMasks
flip(flip_direction='horizontal')[source]¶
Flip masks along the given direction.
Parameters: flip_direction (str) – Either ‘horizontal’ or ‘vertical’.
Returns: The flipped masks.
Return type: BaseInstanceMasks
pad(out_shape, pad_val)[source]¶
Pad masks to the given size of (h, w).
Parameters: - out_shape (tuple[int]) – Target (h, w) of padded mask.
- pad_val (int) – The padded value.
Returns: The padded masks.
Return type: BaseInstanceMasks
rescale(scale, interpolation='nearest')[source]¶
Rescale masks as large as possible while keeping the aspect ratio. For details, refer to mmcv.imrescale.
Parameters: - scale (tuple[int]) – The maximum size (h, w) of rescaled mask.
- interpolation (str) – Same as mmcv.imrescale().
Returns: The rescaled masks.
Return type: BaseInstanceMasks
resize(out_shape, interpolation='nearest')[source]¶
Resize masks to the given out_shape.
Parameters: - out_shape – Target (h, w) of resized mask.
- interpolation (str) – See mmcv.imresize().
Returns: The resized masks.
Return type: BaseInstanceMasks
class mmdet.core.mask.BitmapMasks(masks, height, width)[source]¶
This class represents masks in the form of bitmaps.
Parameters: - masks (ndarray) – ndarray of masks in shape (N, H, W), where N is the number of objects.
- height (int) – height of masks
- width (int) – width of masks
areas¶
See BaseInstanceMasks.areas.
class mmdet.core.mask.PolygonMasks(masks, height, width)[source]¶
This class represents masks in the form of polygons.
Polygons is a list of three levels. The first level of the list corresponds to objects, the second level to the polys that compose the object, the third level to the poly coordinates
Parameters: - masks (list[list[ndarray]]) – The first level of the list corresponds to objects, the second level to the polys that compose the object, the third level to the poly coordinates
- height (int) – height of masks
- width (int) – width of masks
areas¶
Compute areas of masks.
This function is modified from detectron2. It only works with polygons, whose areas are computed using the shoelace formula.
Returns: areas of each instance
Return type: ndarray
evaluation¶
class mmdet.core.evaluation.DistEvalHook(dataloader, start=None, interval=1, tmpdir=None, gpu_collect=False, **eval_kwargs)[source]¶
Distributed evaluation hook.
Notes
If new arguments are added, tools/test.py may be affected.
dataloader¶
A PyTorch dataloader.
Type: DataLoader
start¶
Evaluation starting epoch. It enables evaluation before the training starts if start <= the resuming epoch. If None, whether to evaluate is merely decided by interval. Default: None.
Type: int, optional
interval¶
Evaluation interval (by epochs). Default: 1.
Type: int
tmpdir¶
Temporary directory to save the results of all processes. Default: None.
Type: str | None
gpu_collect¶
Whether to use gpu or cpu to collect results. Default: False.
Type: bool
**eval_kwargs
Evaluation arguments fed into the evaluate function of the dataset.
class mmdet.core.evaluation.EvalHook(dataloader, start=None, interval=1, **eval_kwargs)[source]¶
Evaluation hook.
Notes
If new arguments are added for EvalHook, tools/test.py may be affected.
dataloader¶
A PyTorch dataloader.
Type: DataLoader
start¶
Evaluation starting epoch. It enables evaluation before the training starts if start <= the resuming epoch. If None, whether to evaluate is merely decided by interval. Default: None.
Type: int, optional
interval¶
Evaluation interval (by epochs). Default: 1.
Type: int
**eval_kwargs
Evaluation arguments fed into the evaluate function of the dataset.
mmdet.core.evaluation.average_precision(recalls, precisions, mode='area')[source]¶
Calculate average precision (for single or multiple scales).
Parameters: - recalls (ndarray) – shape (num_scales, num_dets) or (num_dets, )
- precisions (ndarray) – shape (num_scales, num_dets) or (num_dets, )
- mode (str) – ‘area’ or ‘11points’, ‘area’ means calculating the area under precision-recall curve, ‘11points’ means calculating the average precision of recalls at [0, 0.1, …, 1]
Returns: calculated average precision
Return type: float or ndarray
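Example (a worked single-scale sketch in ‘area’ mode with made-up recall/precision points; the AP is the area under the interpolated precision-recall curve):
>>> import numpy as np
>>> from mmdet.core.evaluation import average_precision
>>> recalls = np.array([0.2, 0.5, 1.0])
>>> precisions = np.array([1.0, 0.8, 0.6])
>>> ap = average_precision(recalls, precisions, mode='area')
>>> # 0.2 * 1.0 + (0.5 - 0.2) * 0.8 + (1.0 - 0.5) * 0.6 = 0.74
>>> assert np.isclose(ap, 0.74)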
mmdet.core.evaluation.eval_map(det_results, annotations, scale_ranges=None, iou_thr=0.5, dataset=None, logger=None, nproc=4)[source]¶
Evaluate mAP of a dataset.
Parameters: - det_results (list[list]) – [[cls1_det, cls2_det, …], …]. The outer list indicates images, and the inner list indicates per-class detected bboxes.
- annotations (list[dict]) – Ground truth annotations where each item of the list indicates an image. Keys of annotations are:
- bboxes: numpy array of shape (n, 4)
- labels: numpy array of shape (n, )
- bboxes_ignore (optional): numpy array of shape (k, 4)
- labels_ignore (optional): numpy array of shape (k, )
- scale_ranges (list[tuple] | None) – Range of scales to be evaluated, in the format [(min1, max1), (min2, max2), …]. A range of (32, 64) means the area range between (32**2, 64**2). Default: None.
- iou_thr (float) – IoU threshold to be considered as matched. Default: 0.5.
- dataset (list[str] | str | None) – Dataset name or dataset classes; there are minor differences in metrics for different datasets, e.g. “voc07”, “imagenet_det”, etc. Default: None.
- logger (logging.Logger | str | None) – The way to print the mAP summary. See mmdet.utils.print_log() for details. Default: None.
- nproc (int) – Processes used for computing TP and FP. Default: 4.
Returns: (mAP, [dict, dict, …])
Return type: tuple
mmdet.core.evaluation.print_map_summary(mean_ap, results, dataset=None, scale_ranges=None, logger=None)[source]¶
Print mAP and results of each class.
A table will be printed to show the gts/dets/recall/AP of each class and the mAP.
Parameters: - mean_ap (float) – Calculated from eval_map().
- results (list[dict]) – Calculated from eval_map().
- dataset (list[str] | str | None) – Dataset name or dataset classes.
- scale_ranges (list[tuple] | None) – Range of scales to be evaluated.
- logger (logging.Logger | str | None) – The way to print the mAP summary. See mmdet.utils.print_log() for details. Default: None.
mmdet.core.evaluation.eval_recalls(gts, proposals, proposal_nums=None, iou_thrs=0.5, logger=None)[source]¶
Calculate recalls.
Parameters: - gts (list[ndarray]) – a list of arrays of shape (n, 4)
- proposals (list[ndarray]) – a list of arrays of shape (k, 4) or (k, 5)
- proposal_nums (int | Sequence[int]) – Top N proposals to be evaluated.
- iou_thrs (float | Sequence[float]) – IoU thresholds. Default: 0.5.
- logger (logging.Logger | str | None) – The way to print the recall summary. See mmdet.utils.print_log() for details. Default: None.
Returns: recalls of different ious and proposal nums
Return type: ndarray
mmdet.core.evaluation.print_recall_summary(recalls, proposal_nums, iou_thrs, row_idxs=None, col_idxs=None, logger=None)[source]¶
Print recalls in a table.
Parameters: - recalls (ndarray) – calculated from bbox_recalls
- proposal_nums (ndarray or list) – top N proposals
- iou_thrs (ndarray or list) – iou thresholds
- row_idxs (ndarray) – which rows (proposal nums) to print
- col_idxs (ndarray) – which cols (iou thresholds) to print
- logger (logging.Logger | str | None) – The way to print the recall summary. See mmdet.utils.print_log() for details. Default: None.
post_processing¶
mmdet.core.post_processing.multiclass_nms(multi_bboxes, multi_scores, score_thr, nms_cfg, max_num=-1, score_factors=None)[source]¶
NMS for multi-class bboxes.
Parameters: - multi_bboxes (Tensor) – shape (n, #class*4) or (n, 4)
- multi_scores (Tensor) – shape (n, #class), where the last column contains scores of the background class, but this will be ignored.
- score_thr (float) – bbox threshold, bboxes with scores lower than it will not be considered.
- nms_cfg (dict) – NMS config, including the IoU threshold.
- max_num (int) – if there are more than max_num bboxes after NMS, only top max_num will be kept.
- score_factors (Tensor) – The factors multiplied to scores before applying NMS
Returns: (bboxes, labels), tensors of shape (k, 5) and (k, 1). Labels are 0-based.
Return type: tuple
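Example (a sketch; the nms_cfg dict follows the usual config convention, and the iou_threshold key name is assumed to match the installed mmcv version):
>>> import torch
>>> from mmdet.core.post_processing import multiclass_nms
>>> multi_bboxes = torch.Tensor([[10., 10., 20., 20.],
>>>                              [10., 10., 20., 20.]])
>>> # two classes plus a trailing background column, which is ignored
>>> multi_scores = torch.Tensor([[0.9, 0.1, 0.0],
>>>                              [0.2, 0.8, 0.0]])
>>> dets, labels = multiclass_nms(multi_bboxes, multi_scores,
>>>                               score_thr=0.3,
>>>                               nms_cfg=dict(type='nms', iou_threshold=0.5))
>>> # dets is (k, 5) with scores appended; labels is (k, ) and 0-based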
mmdet.core.post_processing.merge_aug_proposals(aug_proposals, img_metas, rpn_test_cfg)[source]¶
Merge augmented proposals (multiscale, flip, etc.)
Parameters: - aug_proposals (list[Tensor]) – proposals from different testing schemes, shape (n, 5). Note that they are not rescaled to the original image size.
- img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
- rpn_test_cfg (dict) – rpn test config.
Returns: shape (n, 4), proposals corresponding to original image scale.
Return type: Tensor
mmdet.core.post_processing.merge_aug_bboxes(aug_bboxes, aug_scores, img_metas, rcnn_test_cfg)[source]¶
Merge augmented detection bboxes and scores.
Parameters: - aug_bboxes (list[Tensor]) – shape (n, 4*#class)
- aug_scores (list[Tensor] or None) – shape (n, #class)
- img_shapes (list[Tensor]) – shape (3, ).
- rcnn_test_cfg (dict) – rcnn test config.
Returns: (bboxes, scores)
Return type: tuple
mmdet.core.post_processing.merge_aug_masks(aug_masks, img_metas, rcnn_test_cfg, weights=None)[source]¶
Merge augmented mask prediction.
Parameters: - aug_masks (list[ndarray]) – shape (n, #class, h, w)
- img_shapes (list[ndarray]) – shape (3, ).
- rcnn_test_cfg (dict) – rcnn test config.
Returns: (bboxes, scores)
Return type: tuple
fp16¶
mmdet.core.fp16.auto_fp16(apply_to=None, out_fp32=False)[source]¶
Decorator to enable fp16 training automatically.
This decorator is useful when you write custom modules and want to support mixed precision training. If input arguments are fp32 tensors, they will be converted to fp16 automatically. Arguments other than fp32 tensors are ignored.
Parameters: - apply_to (Iterable, optional) – The argument names to be converted. None indicates all arguments.
- out_fp32 (bool) – Whether to convert the output back to fp32.
Example
>>> import torch.nn as nn
>>> class MyModule1(nn.Module):
>>>
>>>     # Convert x and y to fp16
>>>     @auto_fp16()
>>>     def forward(self, x, y):
>>>         pass
>>> import torch.nn as nn
>>> class MyModule2(nn.Module):
>>>
>>>     # convert pred to fp16
>>>     @auto_fp16(apply_to=('pred', ))
>>>     def do_something(self, pred, others):
>>>         pass
mmdet.core.fp16.force_fp32(apply_to=None, out_fp16=False)[source]¶
Decorator to forcibly convert input arguments to fp32.
This decorator is useful when you write custom modules and want to support mixed precision training. If there are some inputs that must be processed in fp32 mode, then this decorator can handle it. If input arguments are fp16 tensors, they will be converted to fp32 automatically. Arguments other than fp16 tensors are ignored.
Parameters: - apply_to (Iterable, optional) – The argument names to be converted. None indicates all arguments.
- out_fp16 (bool) – Whether to convert the output back to fp16.
Example
>>> import torch.nn as nn
>>> class MyModule1(nn.Module):
>>>
>>>     # Convert x and y to fp32
>>>     @force_fp32()
>>>     def loss(self, x, y):
>>>         pass
>>> import torch.nn as nn
>>> class MyModule2(nn.Module):
>>>
>>>     # convert pred to fp32
>>>     @force_fp32(apply_to=('pred', ))
>>>     def post_process(self, pred, others):
>>>         pass
class mmdet.core.fp16.Fp16OptimizerHook(grad_clip=None, coalesce=True, bucket_size_mb=-1, loss_scale=512.0, distributed=True)[source]¶
FP16 optimizer hook.
The steps of the fp16 optimizer are as follows.
1. Scale the loss value.
2. BP in the fp16 model.
3. Copy gradients from the fp16 model to the fp32 weights.
4. Update the fp32 weights.
5. Copy updated parameters from the fp32 weights to the fp16 model.
Refer to https://arxiv.org/abs/1710.03740 for more details.
Parameters: loss_scale (float) – Scale factor multiplied with loss.
after_train_iter(runner)[source]¶
Backward optimization steps for Mixed Precision Training.
- Scale the loss by a scale factor.
- Backward the loss to obtain the gradients (fp16).
- Copy gradients from the model to the fp32 weight copy.
- Scale the gradients back and update the fp32 weight copy.
- Copy back the params from fp32 weight copy to the fp16 model.
before_run(runner)[source]¶
Preparing steps before Mixed Precision Training.
- Make a master copy of fp32 weights for optimization.
- Convert the main model from fp32 to fp16.
optimizer¶
utils¶
mmdet.core.utils.allreduce_grads(params, coalesce=True, bucket_size_mb=-1)[source]¶
Allreduce gradients.
Parameters: - params (list[torch.Parameters]) – List of parameters of a model
- coalesce (bool, optional) – Whether allreduce parameters as a whole. Defaults to True.
- bucket_size_mb (int, optional) – Size of bucket, the unit is MB. Defaults to -1.
class mmdet.core.utils.DistOptimizerHook(*args, **kwargs)[source]¶
Deprecated optimizer hook for distributed training.
mmdet.core.utils.tensor2imgs(tensor, mean=(0, 0, 0), std=(1, 1, 1), to_rgb=True)[source]¶
Convert tensor to images.
Parameters: - tensor (torch.Tensor) – Tensor that contains multiple images
- mean (tuple[float], optional) – Mean of images. Defaults to (0, 0, 0).
- std (tuple[float], optional) – Standard deviation of images. Defaults to (1, 1, 1).
- to_rgb (bool, optional) – Whether to convert the images to RGB format. Defaults to True.
Returns: A list that contains multiple images.
Return type: list[np.ndarray]
mmdet.core.utils.multi_apply(func, *args, **kwargs)[source]¶
Apply function to a list of arguments.
Note
This function applies the func to multiple inputs and maps the multiple outputs of the func into different lists. Each list contains the same type of outputs corresponding to different inputs.
Parameters: func (Function) – A function that will be applied to a list of arguments
Returns: A tuple containing multiple lists, each of which contains a kind of returned results by the function
Return type: tuple(list)
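Example (a minimal sketch of the map-and-transpose behavior with a made-up function):
>>> from mmdet.core import multi_apply
>>> def square_and_cube(x):
>>>     return x ** 2, x ** 3
>>> squares, cubes = multi_apply(square_and_cube, [1, 2, 3])
>>> assert squares == [1, 4, 9] and cubes == [1, 8, 27]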
mmdet.datasets¶
datasets¶
pipelines¶
class mmdet.datasets.pipelines.Compose(transforms)[source]¶
Compose multiple transforms sequentially.
Parameters: transforms (Sequence[dict | callable]) – Sequence of transform object or config dict to be composed.
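Example (a minimal two-transform pipeline built from config dicts; the data path and filename are placeholders):
>>> from mmdet.datasets.pipelines import Compose
>>> pipeline = Compose([
>>>     dict(type='LoadImageFromFile'),
>>>     dict(type='LoadAnnotations', with_bbox=True),
>>> ])
>>> results = dict(img_prefix='data/', img_info=dict(filename='demo.jpg'))
>>> # results = pipeline(results)  # runs each transform in order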
mmdet.datasets.pipelines.to_tensor(data)[source]¶
Convert objects of various python types to torch.Tensor.
Supported types are: numpy.ndarray, torch.Tensor, Sequence, int and float.
Parameters: data (torch.Tensor | numpy.ndarray | Sequence | int | float) – Data to be converted.
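Example (a small sketch of the supported conversions):
>>> import numpy as np
>>> from mmdet.datasets.pipelines import to_tensor
>>> to_tensor(np.arange(3))
tensor([0, 1, 2])
>>> to_tensor(2.5)
tensor([2.5000])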
class mmdet.datasets.pipelines.ToTensor(keys)[source]¶
Convert some results to torch.Tensor by given keys.
Parameters: keys (Sequence[str]) – Keys that need to be converted to Tensor.
class mmdet.datasets.pipelines.ImageToTensor(keys)[source]¶
Convert image to torch.Tensor by given keys.
The dimension order of input image is (H, W, C). The pipeline will convert it to (C, H, W). If only 2 dimensions (H, W) are given, the output would be (1, H, W).
Parameters: keys (Sequence[str]) – Key of images to be converted to Tensor.
class mmdet.datasets.pipelines.ToDataContainer(fields=({'key': 'img', 'stack': True}, {'key': 'gt_bboxes'}, {'key': 'gt_labels'}))[source]¶
Convert results to mmcv.DataContainer by given fields.
Parameters: fields (Sequence[dict]) – Each field is a dict like dict(key='xxx', **kwargs). The key in result will be converted to mmcv.DataContainer with **kwargs. Default: (dict(key='img', stack=True), dict(key='gt_bboxes'), dict(key='gt_labels')).
class mmdet.datasets.pipelines.Transpose(keys, order)[source]¶
Transpose some results by given keys.
Parameters: - keys (Sequence[str]) – Keys of results to be transposed.
- order (Sequence[int]) – Order of transpose.
class mmdet.datasets.pipelines.Collect(keys, meta_keys=('filename', 'ori_filename', 'ori_shape', 'img_shape', 'pad_shape', 'scale_factor', 'flip', 'flip_direction', 'img_norm_cfg'))[source]¶
Collect data from the loader relevant to the specific task.
This is usually the last stage of the data loader pipeline. Typically keys is set to some subset of “img”, “proposals”, “gt_bboxes”, “gt_bboxes_ignore”, “gt_labels”, and/or “gt_masks”.
The “img_meta” item is always populated. The contents of the “img_meta” dictionary depend on “meta_keys”. By default this includes:
- “img_shape”: shape of the image input to the network as a tuple (h, w, c). Note that images may be zero padded on the bottom/right if the batch tensor is larger than this shape.
- “scale_factor”: a float indicating the preprocessing scale
- “flip”: a boolean indicating if image flip transform was used
- “filename”: path to the image file
- “ori_shape”: original shape of the image as a tuple (h, w, c)
- “pad_shape”: image shape after padding
- “img_norm_cfg”: a dict of normalization information:
- mean - per channel mean subtraction
- std - per channel std divisor
- to_rgb - bool indicating if bgr was converted to rgb
Parameters: - keys (Sequence[str]) – Keys of results to be collected in data.
- meta_keys (Sequence[str], optional) – Meta keys to be converted to mmcv.DataContainer and collected in data[img_metas]. Default: ('filename', 'ori_filename', 'ori_shape', 'img_shape', 'pad_shape', 'scale_factor', 'flip', 'flip_direction', 'img_norm_cfg')
class mmdet.datasets.pipelines.DefaultFormatBundle[source]¶
Default formatting bundle.
It simplifies the pipeline of formatting common fields, including “img”, “proposals”, “gt_bboxes”, “gt_labels”, “gt_masks” and “gt_semantic_seg”. These fields are formatted as follows.
- img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True)
- proposals: (1)to tensor, (2)to DataContainer
- gt_bboxes: (1)to tensor, (2)to DataContainer
- gt_bboxes_ignore: (1)to tensor, (2)to DataContainer
- gt_labels: (1)to tensor, (2)to DataContainer
- gt_masks: (1)to tensor, (2)to DataContainer (cpu_only=True)
- gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, (3)to DataContainer (stack=True)
class mmdet.datasets.pipelines.LoadAnnotations(with_bbox=True, with_label=True, with_mask=False, with_seg=False, poly2mask=True, file_client_args={'backend': 'disk'})[source]¶
Load multiple types of annotations.
Parameters: - with_bbox (bool) – Whether to parse and load the bbox annotation. Default: True.
- with_label (bool) – Whether to parse and load the label annotation. Default: True.
- with_mask (bool) – Whether to parse and load the mask annotation. Default: False.
- with_seg (bool) – Whether to parse and load the semantic segmentation annotation. Default: False.
- poly2mask (bool) – Whether to convert the instance masks from polygons to bitmaps. Default: True.
- file_client_args (dict) – Arguments to instantiate a FileClient. See mmcv.fileio.FileClient for details. Defaults to dict(backend='disk').
class mmdet.datasets.pipelines.LoadImageFromFile(to_float32=False, color_type='color', file_client_args={'backend': 'disk'})[source]¶
Load an image from file.
Required keys are “img_prefix” and “img_info” (a dict that must contain the key “filename”). Added or updated keys are “filename”, “img”, “img_shape”, “ori_shape” (same as img_shape), “pad_shape” (same as img_shape), “scale_factor” (1.0) and “img_norm_cfg” (means=0 and stds=1).
Parameters: - to_float32 (bool) – Whether to convert the loaded image to a float32 numpy array. If set to False, the loaded image is an uint8 array. Defaults to False.
- color_type (str) – The flag argument for mmcv.imfrombytes(). Defaults to ‘color’.
- file_client_args (dict) – Arguments to instantiate a FileClient. See mmcv.fileio.FileClient for details. Defaults to dict(backend='disk').
class mmdet.datasets.pipelines.LoadImageFromWebcam(to_float32=False, color_type='color', file_client_args={'backend': 'disk'})[source]¶
Load an image from webcam.
Similar to LoadImageFromFile, but the image read from webcam is in results['img'].
class mmdet.datasets.pipelines.LoadMultiChannelImageFromFiles(to_float32=False, color_type='unchanged', file_client_args={'backend': 'disk'})[source]¶
Load multi-channel images from a list of separate channel files.
Required keys are “img_prefix” and “img_info” (a dict that must contain the key “filename”, which is expected to be a list of filenames). Added or updated keys are “filename”, “img”, “img_shape”, “ori_shape” (same as img_shape), “pad_shape” (same as img_shape), “scale_factor” (1.0) and “img_norm_cfg” (means=0 and stds=1).
Parameters: - to_float32 (bool) – Whether to convert the loaded image to a float32 numpy array. If set to False, the loaded image is an uint8 array. Defaults to False.
- color_type (str) – The flag argument for
mmcv.imfrombytes()
. Defaults to ‘color’. - file_client_args (dict) – Arguments to instantiate a FileClient.
See
mmcv.fileio.FileClient
for details. Defaults todict(backend='disk')
.
-
class
mmdet.datasets.pipelines.
LoadProposals
(num_max_proposals=None)[source]¶ Load proposal pipeline.
Required key is “proposals”. Updated keys are “proposals”, “bbox_fields”.
Parameters: num_max_proposals (int, optional) – Maximum number of proposals to load. If not specified, all proposals will be loaded.
-
class
mmdet.datasets.pipelines.
MultiScaleFlipAug
(transforms, img_scale=None, scale_factor=None, flip=False, flip_direction='horizontal')[source]¶ Test-time augmentation with multiple scales and flipping.
An example configuration is shown below, after the parameter list. After MultiScaleFlipAug with such a configuration, the results are wrapped into lists of the same length, one entry per augmentation.
Parameters: - transforms (list[dict]) – Transforms to apply in each augmentation.
- img_scale (tuple | list[tuple] | None) – Images scales for resizing.
- scale_factor (float | list[float] | None) – Scale factors for resizing.
- flip (bool) – Whether to apply flip augmentation. Default: False.
- flip_direction (str | list[str]) – Flip augmentation directions, options are “horizontal” and “vertical”. If flip_direction is list, multiple flip augmentations will be applied. It has no effect when flip == False. Default: “horizontal”.
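A configuration sketch consistent with the parameters above; the inner transforms are illustrative, and ImageToTensor is assumed to come from the same pipelines module:
>>> tta_pipeline = dict(
...     type='MultiScaleFlipAug',
...     img_scale=(1333, 800),
...     flip=True,
...     transforms=[
...         dict(type='Resize', keep_ratio=True),
...         dict(type='RandomFlip'),
...         dict(type='Pad', size_divisor=32),
...         dict(type='ImageToTensor', keys=['img']),
...         dict(type='Collect', keys=['img'])
...     ])
>>> # with one scale and flip=True, each collected key is wrapped into a
>>> # list of length 2 (original + horizontally flipped)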
-
class
mmdet.datasets.pipelines.
Resize
(img_scale=None, multiscale_mode='range', ratio_range=None, keep_ratio=True, backend='cv2')[source]¶ Resize images & bbox & mask.
This transform resizes the input image to some scale. Bboxes and masks are then resized with the same scale factor. If the input dict contains the key “scale”, then the scale in the input dict is used, otherwise the specified scale in the init method is used. If the input dict contains the key “scale_factor” (if MultiScaleFlipAug does not give img_scale but scale_factor), the actual scale will be computed by image shape and scale_factor.
img_scale can either be a tuple (single-scale) or a list of tuples (multi-scale). There are 3 multiscale modes:
- ratio_range is not None: randomly sample a ratio from the ratio range and multiply it with the image scale.
- ratio_range is None and multiscale_mode == "range": randomly sample a scale from the multiscale range.
- ratio_range is None and multiscale_mode == "value": randomly sample a scale from multiple scales.
Parameters: - img_scale (tuple or list[tuple]) – Images scales for resizing.
- multiscale_mode (str) – Either “range” or “value”.
- ratio_range (tuple[float]) – (min_ratio, max_ratio)
- keep_ratio (bool) – Whether to keep the aspect ratio when resizing the image.
- backend (str) – Image resize backend, choices are ‘cv2’ and ‘pillow’. These two backends generate slightly different results. Defaults to ‘cv2’.
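Two config sketches covering the multiscale modes described above (scale values are illustrative):
>>> # ratio_range is None, multiscale_mode='range': sample between extremes
>>> resize_range = dict(
...     type='Resize',
...     img_scale=[(1333, 640), (1333, 800)],
...     multiscale_mode='range',
...     keep_ratio=True)
>>> # ratio_range given: multiply the base scale by a sampled ratio
>>> resize_ratio = dict(
...     type='Resize',
...     img_scale=(1333, 800),
...     ratio_range=(0.9, 1.1),
...     keep_ratio=True)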
-
static
random_sample
(img_scales)[source]¶ Randomly sample an img_scale when multiscale_mode == 'range'.
Parameters: img_scales (list[tuple]) – Image scale range for sampling. There must be two tuples in img_scales, which specify the lower and upper bound of image scales.
Returns: Returns a tuple (img_scale, None), where img_scale is the sampled scale and None is just a placeholder to be consistent with random_select().
Return type: (tuple, None)
-
static
random_sample_ratio
(img_scale, ratio_range)[source]¶ Randomly sample an img_scale when ratio_range is specified.
A ratio will be randomly sampled from the range specified by ratio_range, then multiplied with img_scale to generate the sampled scale.
Parameters: - img_scale (tuple) – Image scale base to multiply with the ratio.
- ratio_range (tuple[float]) – The minimum and maximum ratio to scale the img_scale.
Returns: Returns a tuple (scale, None), where scale is the sampled ratio multiplied with img_scale and None is just a placeholder to be consistent with random_select().
Return type: (tuple, None)
-
static
random_select
(img_scales)[source]¶ Randomly select an img_scale from given candidates.
Parameters: img_scales (list[tuple]) – Image scales for selection.
Returns: Returns a tuple (img_scale, scale_idx), where img_scale is the selected image scale and scale_idx is the selected index in the given candidates.
Return type: (tuple, int)
-
class
mmdet.datasets.pipelines.
RandomFlip
(flip_ratio=None, direction='horizontal')[source]¶ Flip the image & bbox & mask.
If the input dict contains the key “flip”, then the flag will be used, otherwise it will be randomly decided by a ratio specified in the init method.
When random flip is enabled, flip_ratio/direction can either be a float/string or a tuple of float/string. There are 3 flip modes:
- flip_ratio is float, direction is string: the image will be flipped in the given direction with probability flip_ratio. E.g., flip_ratio=0.5, direction='horizontal', then the image will be horizontally flipped with probability 0.5.
- flip_ratio is float, direction is list of string: the image will be flipped in direction[i] with probability flip_ratio/len(direction). E.g., flip_ratio=0.5, direction=['horizontal', 'vertical'], then the image will be horizontally flipped with probability 0.25 and vertically flipped with probability 0.25.
- flip_ratio is list of float, direction is list of string: given len(flip_ratio) == len(direction), the image will be flipped in direction[i] with probability flip_ratio[i]. E.g., flip_ratio=[0.3, 0.5], direction=['horizontal', 'vertical'], then the image will be horizontally flipped with probability 0.3 and vertically flipped with probability 0.5.
Parameters: - flip_ratio (float | list[float], optional) – The flipping probability. Default: None.
- direction (str | list[str], optional) – The flipping direction. Options are ‘horizontal’, ‘vertical’, ‘diagonal’. Default: ‘horizontal’. If input is a list, the length must equal flip_ratio. Each element in flip_ratio indicates the flip probability of the corresponding direction.
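A config sketch reusing the example values from the third flip mode above:
>>> flip = dict(
...     type='RandomFlip',
...     flip_ratio=[0.3, 0.5],
...     direction=['horizontal', 'vertical'])
>>> # horizontally flipped w.p. 0.3, vertically w.p. 0.5,
>>> # left unchanged w.p. 0.2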
-
bbox_flip
(bboxes, img_shape, direction)[source]¶ Flip bboxes in the given direction.
Parameters: - bboxes (numpy.ndarray) – Bounding boxes, shape (…, 4*k)
- img_shape (tuple[int]) – Image shape (height, width)
- direction (str) – Flip direction. Options are ‘horizontal’, ‘vertical’.
Returns: Flipped bounding boxes.
Return type: numpy.ndarray
-
class
mmdet.datasets.pipelines.
Pad
(size=None, size_divisor=None, pad_val=0)[source]¶ Pad the image & mask.
There are two padding modes: (1) pad to a fixed size and (2) pad to the minimum size that is divisible by some number. Added keys are “pad_shape”, “pad_fixed_size” and “pad_size_divisor”.
Parameters: - size (tuple, optional) – Fixed padding size.
- size_divisor (int, optional) – The divisor of padded size.
- pad_val (float, optional) – Padding value, 0 by default.
-
class
mmdet.datasets.pipelines.
RandomCrop
(crop_size, allow_negative_crop=False)[source]¶ Random crop the image & bboxes & masks.
Parameters: - crop_size (tuple) – Expected size after cropping, (h, w).
- allow_negative_crop (bool) – Whether to allow a crop that does not contain any bbox area. Defaults to False.
Note
- If the image is smaller than the crop size, return the original image
- The keys for bboxes, labels and masks must be aligned. That is, gt_bboxes corresponds to gt_labels and gt_masks, and gt_bboxes_ignore corresponds to gt_labels_ignore and gt_masks_ignore.
- If the crop does not contain any gt-bbox region and allow_negative_crop is set to False, skip this image.
-
class
mmdet.datasets.pipelines.
Normalize
(mean, std, to_rgb=True)[source]¶ Normalize the image.
Added key is “img_norm_cfg”.
Parameters: - mean (sequence) – Mean values of 3 channels.
- std (sequence) – Std values of 3 channels.
- to_rgb (bool) – Whether to convert the image from BGR to RGB. Defaults to True.
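A config sketch; the mean/std values are the ImageNet statistics commonly used in detection configs, not a requirement of this class:
>>> img_norm_cfg = dict(
...     mean=[123.675, 116.28, 103.53],
...     std=[58.395, 57.12, 57.375],
...     to_rgb=True)
>>> normalize = dict(type='Normalize', **img_norm_cfg)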
-
class
mmdet.datasets.pipelines.
SegRescale
(scale_factor=1, backend='cv2')[source]¶ Rescale semantic segmentation maps.
Parameters: - scale_factor (float) – The scale factor of the final output.
- backend (str) – Image rescale backend, choices are ‘cv2’ and ‘pillow’. These two backends generate slightly different results. Defaults to ‘cv2’.
-
class
mmdet.datasets.pipelines.
MinIoURandomCrop
(min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), min_crop_size=0.3)[source]¶ Random crop the image & bboxes; the cropped patches must have a minimum IoU with the original image & bboxes, and the IoU threshold is randomly selected from min_ious.
Parameters: - min_ious (tuple) – minimum IoU threshold for all intersections with bounding boxes.
- min_crop_size (float) – minimum crop size (i.e. h,w := a*h, a*w, where a >= min_crop_size).
Note
The keys for bboxes, labels and masks should be paired. That is, gt_bboxes corresponds to gt_labels and gt_masks, and gt_bboxes_ignore to gt_labels_ignore and gt_masks_ignore.
-
class
mmdet.datasets.pipelines.
Expand
(mean=(0, 0, 0), to_rgb=True, ratio_range=(1, 4), seg_ignore_label=None, prob=0.5)[source]¶ Random expand the image & bboxes.
Randomly place the original image on a canvas of ‘ratio’ x original image size filled with mean values. The ratio is in the range of ratio_range.
Parameters: - mean (tuple) – mean value of dataset.
- to_rgb (bool) – if need to convert the order of mean to align with RGB.
- ratio_range (tuple) – range of expand ratio.
- prob (float) – probability of applying this transformation
-
class
mmdet.datasets.pipelines.
PhotoMetricDistortion
(brightness_delta=32, contrast_range=(0.5, 1.5), saturation_range=(0.5, 1.5), hue_delta=18)[source]¶ Apply photometric distortion to an image sequentially; every transformation is applied with a probability of 0.5. Random contrast is applied either second or second to last.
- random brightness
- random contrast (mode 0)
- convert color from BGR to HSV
- random saturation
- random hue
- convert color from HSV to BGR
- random contrast (mode 1)
- randomly swap channels
Parameters: - brightness_delta (int) – delta of brightness.
- contrast_range (tuple) – range of contrast.
- saturation_range (tuple) – range of saturation.
- hue_delta (int) – delta of hue.
-
class
mmdet.datasets.pipelines.
Albu
(transforms, bbox_params=None, keymap=None, update_pad_shape=False, skip_img_without_anno=False)[source]¶ Albumentation augmentation.
Adds custom transformations from the Albumentations library. Please visit https://albumentations.readthedocs.io for more information.
An example of transforms is shown below, after the parameter list.
Parameters: - transforms (list[dict]) – A list of albu transformations.
- bbox_params (dict) – Bbox_params for albumentation Compose
- keymap (dict) – Contains {‘input key’:’albumentation-style key’}
- skip_img_without_anno (bool) – Whether to skip the image if no annotations remain after augmentation.
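A sketch of transforms; the two transform names are taken from the Albumentations library and their arguments are illustrative:
>>> albu_transforms = [
...     dict(type='ShiftScaleRotate', shift_limit=0.0625,
...          scale_limit=0.0, rotate_limit=0, p=0.5),
...     dict(type='RandomBrightnessContrast',
...          brightness_limit=[0.1, 0.3],
...          contrast_limit=[0.1, 0.3], p=0.2)
... ]
>>> albu = dict(type='Albu', transforms=albu_transforms)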
-
class
mmdet.datasets.pipelines.
InstaBoost
(action_candidate=('normal', 'horizontal', 'skip'), action_prob=(1, 0, 0), scale=(0.8, 1.2), dx=15, dy=15, theta=(-1, 1), color_prob=0.5, hflag=False, aug_ratio=0.5)[source]¶ Data augmentation method in InstaBoost: Boosting Instance Segmentation Via Probability Map Guided Copy-Pasting.
Refer to https://github.com/GothicAi/Instaboost for implementation details.
-
class
mmdet.datasets.pipelines.
RandomCenterCropPad
(crop_size=None, ratios=(0.9, 1.0, 1.1), border=128, mean=None, std=None, to_rgb=None, test_mode=False, test_pad_mode=('logical_or', 127))[source]¶ Random center crop and random around padding for CornerNet.
This operation generates a randomly cropped image from the original image and pads it simultaneously. Different from RandomCrop, the output shape may not equal crop_size strictly: we choose a random value from ratios, so the output shape can be larger or smaller than crop_size. The padding operation also differs from Pad; here we use around padding instead of right-bottom padding.
The relation between the output image (padding image) and the original image:
               output image
       +----------------------------+
       |        padded area         |
+------|----------------------------|----------+
|      |        cropped area        |          |
|      |      +---------------+     |          |
|      |      |   .  center   |     |          | original image
|      |      |      range    |     |          |
|      |      +---------------+     |          |
+------|----------------------------|----------+
       |        padded area         |
       +----------------------------+
There are 5 main areas in the figure:
- output image: the output image of this operation, also called the padding image in the following description.
- original image: input image of this operation.
- padded area: non-intersect area of output image and original image.
- cropped area: the overlap of output image and original image.
- center range: a smaller area from which the random center is chosen.
The center range is computed from border and the original image’s shape, so that the random center is not too close to the original image’s border.
This operation also acts differently in train and test modes; the summary pipelines are listed below.
Train pipeline:
- Choose a random_ratio from ratios; the shape of the padding image will be random_ratio * crop_size.
- Choose a random_center in the center range.
- Generate a padding image whose center matches random_center.
- Initialize the padding image with pixel values equal to mean.
- Copy the cropped area to the padding image.
- Refine annotations.
Test pipeline:
- Compute the output shape according to test_pad_mode.
- Generate a padding image whose center matches the original image center.
- Initialize the padding image with pixel values equal to mean.
- Copy the cropped area to the padding image.
Parameters: - crop_size (tuple | None) – expected size after crop; the final size will be computed according to the ratio. Requires (h, w) in train mode, and None in test mode.
- ratios (tuple) – random select a ratio from tuple and crop image to (crop_size[0] * ratio) * (crop_size[1] * ratio). Only available in train mode.
- border (int) – max distance from center select area to image border. Only available in train mode.
- mean (sequence) – Mean values of 3 channels.
- std (sequence) – Std values of 3 channels.
- to_rgb (bool) – Whether to convert the image from BGR to RGB.
- test_mode (bool) – whether to involve random variables in the transform. In train mode, crop_size is fixed, and the center coords and ratio are randomly selected from the predefined lists. In test mode, crop_size is the image’s original shape, and the center coords and ratio are fixed.
- test_pad_mode (tuple) –
padding method and padding shape value, only available in test mode. Default is using ‘logical_or’ with 127 as padding shape value.
- ’logical_or’: final_shape = input_shape | padding_shape_value
- ’size_divisor’: final_shape = int( ceil(input_shape / padding_shape_value) * padding_shape_value)
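Config sketches for both modes, using only the parameters documented above (the train-mode crop size is a hypothetical CornerNet-style value):
>>> train_crop = dict(
...     type='RandomCenterCropPad',
...     crop_size=(511, 511),          # fixed in train mode
...     ratios=(0.9, 1.0, 1.1),
...     border=128,
...     mean=[0, 0, 0], std=[1, 1, 1], to_rgb=True,
...     test_mode=False)
>>> test_crop = dict(
...     type='RandomCenterCropPad',
...     crop_size=None,                # shape comes from test_pad_mode
...     test_mode=True,
...     test_pad_mode=('logical_or', 127),
...     mean=[0, 0, 0], std=[1, 1, 1], to_rgb=True)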
-
class
mmdet.datasets.pipelines.
AutoAugment
(policies)[source]¶ Auto augmentation.
This data augmentation is proposed in Learning Data Augmentation Strategies for Object Detection.
TODO: Implement ‘Shear’, ‘Sharpness’ and ‘Rotate’ transforms
Parameters: policies (list[list[dict]]) – The policies of auto augmentation. Each policy in policies
is a specific augmentation policy, and is composed by several augmentations (dict). When AutoAugment is called, a random policy inpolicies
will be selected to augment images.Examples
>>> import numpy as np >>> from mmdet.datasets.pipelines import AutoAugment >>> replace = (104, 116, 124) >>> policies = [ >>> [ >>> dict(type='Sharpness', prob=0.0, level=8), >>> dict( >>> type='Shear', >>> prob=0.4, >>> level=0, >>> replace=replace, >>> axis='x') >>> ], >>> [ >>> dict( >>> type='Rotate', >>> prob=0.6, >>> level=10, >>> replace=replace), >>> dict(type='Color', prob=1.0, level=6) >>> ] >>> ] >>> augmentation = AutoAugment(policies) >>> img = np.ones((100, 100, 3)) >>> gt_bboxes = np.ones((10, 4)) >>> results = dict(img=img, gt_bboxes=gt_bboxes) >>> results = augmentation(results)
-
class
mmdet.datasets.pipelines.
CutOut
(n_holes, cutout_shape=None, cutout_ratio=None, fill_in=(0, 0, 0))[source]¶ CutOut operation.
Randomly drop some regions of the image, as used in Cutout.
Parameters: - n_holes (int | tuple[int, int]) – Number of regions to be dropped. If it is given as a list, number of holes will be randomly selected from the closed interval [n_holes[0], n_holes[1]].
- cutout_shape (tuple[int, int] | list[tuple[int, int]]) – The candidate shape of dropped regions. It can be tuple[int, int] to use a fixed cutout shape, or list[tuple[int, int]] to randomly choose shape from the list.
- cutout_ratio (tuple[float, float] | list[tuple[float, float]]) – The candidate ratio of dropped regions. It can be tuple[float, float] to use a fixed ratio or list[tuple[float, float]] to randomly choose ratio from the list. Please note that cutout_shape and cutout_ratio cannot be both given at the same time.
- fill_in (tuple[float, float, float] | tuple[int, int, int]) – The value of pixel to fill in the dropped regions. Default: (0, 0, 0).
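A config sketch with a fixed hole shape (values illustrative; cutout_shape and cutout_ratio are mutually exclusive, per the note above):
>>> cutout = dict(
...     type='CutOut',
...     n_holes=(1, 3),          # 1 to 3 holes sampled per image
...     cutout_shape=(16, 16),   # fixed (h, w) of each dropped region
...     fill_in=(0, 0, 0))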
mmdet.models¶
detectors¶
-
class
mmdet.models.detectors.
ATSS
(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]¶ Implementation of ATSS.
-
class
mmdet.models.detectors.
BaseDetector
[source]¶ Base class for detectors.
-
extract_feats
(imgs)[source]¶ Extract features from multiple images.
Parameters: imgs (list[torch.Tensor]) – A list of images. The images are augmented from the same image but in different ways. Returns: Features of different images Return type: list[torch.Tensor]
-
forward
(img, img_metas, return_loss=True, **kwargs)[source]¶ Calls either forward_train() or forward_test() depending on whether return_loss is True.
Note this setting will change the expected inputs. When return_loss=True, img and img_meta are single-nested (i.e. Tensor and list[dict]), and when return_loss=False, img and img_meta should be double nested (i.e. list[Tensor], list[list[dict]]), with the outer list indicating test time augmentations.
-
forward_test
(imgs, img_metas, **kwargs)[source]¶ Parameters: - imgs (List[Tensor]) – the outer list indicates test-time augmentations and inner Tensor should have a shape NxCxHxW, which contains all images in the batch.
- img_metas (List[List[dict]]) – the outer list indicates test-time augs (multiscale, flip, etc.) and the inner list indicates images in a batch.
-
forward_train
(imgs, img_metas, **kwargs)[source]¶ Parameters: - img (list[Tensor]) – List of tensors of shape (1, C, H, W). Typically these should be mean centered and std scaled.
- img_metas (list[dict]) – List of image info dicts where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys, see mmdet.datasets.pipelines.Collect.
- kwargs (keyword arguments) – Specific to the concrete implementation.
-
init_weights
(pretrained=None)[source]¶ Initialize the weights in detector.
Parameters: pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
-
show_result
(img, result, score_thr=0.3, bbox_color='green', text_color='green', thickness=1, font_scale=0.5, win_name='', show=False, wait_time=0, out_file=None)[source]¶ Draw result over img.
Parameters: - img (str or Tensor) – The image to be displayed.
- result (Tensor or tuple) – The results to draw over img bbox_result or (bbox_result, segm_result).
- score_thr (float, optional) – Minimum score of bboxes to be shown. Default: 0.3.
- bbox_color (str or tuple or Color) – Color of bbox lines.
- text_color (str or tuple or Color) – Color of texts.
- thickness (int) – Thickness of lines.
- font_scale (float) – Font scales of texts.
- win_name (str) – The window name.
- wait_time (int) – Value of waitKey param. Default: 0.
- show (bool) – Whether to show the image. Default: False.
- out_file (str or None) – The filename to write the image. Default: None.
Returns: The image with results drawn, only if show is False and out_file is None.
Return type: img (Tensor)
-
train_step
(data, optimizer)[source]¶ The iteration step during training.
This method defines an iteration step during training, except for the back propagation and optimizer updating, which are done in an optimizer hook. Note that in some complicated cases or models, the whole process including back propagation and optimizer updating is also defined in this method, such as GAN.
Parameters: - data (dict) – The output of dataloader.
- optimizer (
torch.optim.Optimizer
| dict) – The optimizer of runner is passed totrain_step()
. This argument is unused and reserved.
Returns: It should contain at least 3 keys: loss, log_vars and num_samples.
- loss is a tensor for back propagation, which can be a weighted sum of multiple losses.
- log_vars contains all the variables to be sent to the logger.
- num_samples indicates the batch size (when the model is DDP, it means the batch size on each GPU), which is used for averaging the logs.
Return type: dict
-
val_step
(data, optimizer)[source]¶ The iteration step during validation.
This method shares the same signature as train_step(), but is used during val epochs. Note that the evaluation after training epochs is not implemented with this method, but with an evaluation hook.
-
with_bbox
¶ whether the detector has a bbox head
Type: bool
-
with_mask
¶ whether the detector has a mask head
Type: bool
-
with_neck
¶ whether the detector has a neck
Type: bool
-
with_shared_head
¶ whether the detector has a shared head in the RoI Head
Type: bool
-
-
class
mmdet.models.detectors.
SingleStageDetector
(backbone, neck=None, bbox_head=None, train_cfg=None, test_cfg=None, pretrained=None)[source]¶ Base class for single-stage detectors.
Single-stage detectors directly and densely predict bounding boxes on the output features of the backbone+neck.
-
forward_train
(img, img_metas, gt_bboxes, gt_labels, gt_bboxes_ignore=None)[source]¶ Parameters: - img (Tensor) – Input images of shape (N, C, H, W). Typically these should be mean centered and std scaled.
- img_metas (list[dict]) – A list of image info dicts where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet.datasets.pipelines.Collect.
- gt_bboxes (list[Tensor]) – Each item is the ground-truth boxes for one image in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – Class indices corresponding to each box
- gt_bboxes_ignore (None | list[Tensor]) – Specify which bounding boxes can be ignored when computing the loss.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
init_weights
(pretrained=None)[source]¶ Initialize the weights in detector.
Parameters: pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
-
simple_test
(img, img_metas, rescale=False)[source]¶ Test function without test time augmentation.
Parameters: - imgs (list[torch.Tensor]) – List of multiple images
- img_metas (list[dict]) – List of image information.
- rescale (bool, optional) – Whether to rescale the results. Defaults to False.
Returns: BBox results of each image and classes. The outer list corresponds to each image; the inner list corresponds to each class.
Return type: list[list[np.ndarray]]
-
-
class
mmdet.models.detectors.
TwoStageDetector
(backbone, neck=None, rpn_head=None, roi_head=None, train_cfg=None, test_cfg=None, pretrained=None)[source]¶ Base class for two-stage detectors.
Two-stage detectors typically consist of a region proposal network and a task-specific regression head.
-
async_simple_test
(img, img_meta, proposals=None, rescale=False)[source]¶ Async test without augmentation.
-
aug_test
(imgs, img_metas, rescale=False)[source]¶ Test with augmentations.
If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].
-
forward_train
(img, img_metas, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None, proposals=None, **kwargs)[source]¶ Parameters: - img (Tensor) – of shape (N, C, H, W) encoding input images. Typically these should be mean centered and std scaled.
- img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
- gt_masks (None | Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task.
- proposals – override rpn proposals with custom proposals. Use when with_rpn is False.
Returns: a dictionary of loss components
Return type: dict[str, Tensor]
-
init_weights
(pretrained=None)[source]¶ Initialize the weights in detector.
Parameters: pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
-
with_roi_head
¶ whether the detector has a RoI head
Type: bool
-
with_rpn
¶ whether the detector has RPN
Type: bool
-
-
class
mmdet.models.detectors.
RPN
(backbone, neck, rpn_head, train_cfg, test_cfg, pretrained=None)[source]¶ Implementation of Region Proposal Network.
-
aug_test
(imgs, img_metas, rescale=False)[source]¶ Test function with test time augmentation.
Parameters: - imgs (list[torch.Tensor]) – List of multiple images
- img_metas (list[dict]) – List of image information.
- rescale (bool, optional) – Whether to rescale the results. Defaults to False.
Returns: proposals
Return type: list[np.ndarray]
-
extract_feat
(img)[source]¶ Extract features.
Parameters: img (torch.Tensor) – Image tensor with shape (n, c, h, w). Returns: Multi-level features that may have different resolutions.
Return type: list[torch.Tensor]
-
forward_train
(img, img_metas, gt_bboxes=None, gt_bboxes_ignore=None)[source]¶ Parameters: - img (Tensor) – Input images of shape (N, C, H, W). Typically these should be mean centered and std scaled.
- img_metas (list[dict]) – A list of image info dicts where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet.datasets.pipelines.Collect.
- gt_bboxes (list[Tensor]) – Each item is the ground-truth boxes for one image in [tl_x, tl_y, br_x, br_y] format.
- gt_bboxes_ignore (None | list[Tensor]) – Specify which bounding boxes can be ignored when computing the loss.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
init_weights
(pretrained=None)[source]¶ Initialize the weights in detector.
Parameters: pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
-
show_result
(data, result, dataset=None, top_k=20)[source]¶ Show RPN proposals on the image.
Although we assume batch size is 1, this method supports arbitrary batch size.
-
simple_test
(img, img_metas, rescale=False)[source]¶ Test function without test time augmentation.
Parameters: - imgs (list[torch.Tensor]) – List of multiple images
- img_metas (list[dict]) – List of image information.
- rescale (bool, optional) – Whether to rescale the results. Defaults to False.
Returns: proposals
Return type: list[np.ndarray]
-
-
class
mmdet.models.detectors.
FastRCNN
(backbone, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]¶ Implementation of Fast R-CNN
-
forward_test
(imgs, img_metas, proposals, **kwargs)[source]¶ Parameters: - imgs (List[Tensor]) – the outer list indicates test-time augmentations and inner Tensor should have a shape NxCxHxW, which contains all images in the batch.
- img_metas (List[List[dict]]) – the outer list indicates test-time augs (multiscale, flip, etc.) and the inner list indicates images in a batch.
- proposals (List[List[Tensor]]) – the outer list indicates test-time augs (multiscale, flip, etc.) and the inner list indicates images in a batch. The Tensor should have a shape Px4, where P is the number of proposals.
-
-
class
mmdet.models.detectors.
FasterRCNN
(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]¶ Implementation of Faster R-CNN
-
class
mmdet.models.detectors.
MaskRCNN
(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]¶ Implementation of Mask R-CNN
-
class
mmdet.models.detectors.
CascadeRCNN
(backbone, neck=None, rpn_head=None, roi_head=None, train_cfg=None, test_cfg=None, pretrained=None)[source]¶ Implementation of Cascade R-CNN: Delving into High Quality Object Detection
-
class
mmdet.models.detectors.
HybridTaskCascade
(**kwargs)[source]¶ Implementation of HTC
-
with_semantic
¶ whether the detector has a semantic head
Type: bool
-
-
class
mmdet.models.detectors.
RetinaNet
(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]¶ Implementation of RetinaNet
-
class
mmdet.models.detectors.
FCOS
(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]¶ Implementation of FCOS
-
class
mmdet.models.detectors.
GridRCNN
(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]¶ Grid R-CNN.
This detector is the implementation of:
- Grid R-CNN (https://arxiv.org/abs/1811.12030)
- Grid R-CNN Plus: Faster and Better (https://arxiv.org/abs/1906.05688)
-
class
mmdet.models.detectors.
MaskScoringRCNN
(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]¶ Mask Scoring RCNN.
-
class
mmdet.models.detectors.
RepPointsDetector
(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]¶ RepPoints: Point Set Representation for Object Detection.
This detector is the implementation of the RepPoints detector (https://arxiv.org/pdf/1904.11490).
-
aug_test
(imgs, img_metas, rescale=False)[source]¶ Test function with test time augmentation.
Parameters: - imgs (list[Tensor]) – the outer list indicates test-time augmentations and inner Tensor should have a shape NxCxHxW, which contains all images in the batch.
- img_metas (list[list[dict]]) – the outer list indicates test-time augs (multiscale, flip, etc.) and the inner list indicates images in a batch. each dict has image information.
- rescale (bool, optional) – Whether to rescale the results. Defaults to False.
Returns: bbox results of each class
Return type: list[ndarray]
-
merge_aug_results
(aug_bboxes, aug_scores, img_metas)[source]¶ Merge augmented detection bboxes and scores.
Parameters: - aug_bboxes (list[Tensor]) – shape (n, 4*#class)
- aug_scores (list[Tensor] or None) – shape (n, #class)
- img_metas (list[dict]) – Meta information of each image, e.g., image shape (3, ).
Returns: (bboxes, scores)
Return type: tuple
-
-
class
mmdet.models.detectors.
FOVEA
(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]¶ Implementation of FoveaBox
-
class
mmdet.models.detectors.
FSAF
(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]¶ Implementation of FSAF
-
class
mmdet.models.detectors.
NASFCOS
(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]¶ NAS-FCOS: Fast Neural Architecture Search for Object Detection.
-
class
mmdet.models.detectors.
PointRend
(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None)[source]¶ PointRend: Image Segmentation as Rendering
This detector is the implementation of PointRend.
-
class
mmdet.models.detectors.
GFL
(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]¶
-
class
mmdet.models.detectors.
CornerNet
(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]¶ CornerNet.
This detector is the implementation of the paper CornerNet: Detecting Objects as Paired Keypoints.
-
aug_test
(imgs, img_metas, rescale=False)[source]¶ Augment testing of CornerNet.
Parameters: - imgs (list[Tensor]) – Augmented images.
- img_metas (list[list[dict]]) – Meta information of each image, e.g., image size, scaling factor, etc.
- rescale (bool) – If True, return boxes in original image space. Default: False.
Note
imgs must include flipped image pairs.
Returns: BBox results of each image and classes. The outer list corresponds to each image; the inner list corresponds to each class.
Return type: list[list[np.ndarray]]
-
merge_aug_results
(aug_results, img_metas)[source]¶ Merge augmented detection bboxes and score.
Parameters: - aug_results (list[list[Tensor]]) – Det_bboxes and det_labels of each image.
- img_metas (list[list[dict]]) – Meta information of each image, e.g., image size, scaling factor, etc.
Returns: (bboxes, labels)
Return type: tuple
-
backbones¶
-
class
mmdet.models.backbones.
RegNet
(arch, in_channels=3, stem_channels=32, base_channels=32, strides=(2, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(0, 1, 2, 3), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=-1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=True, dcn=None, stage_with_dcn=(False, False, False, False), plugins=None, with_cp=False, zero_init_residual=True)[source]¶ RegNet backbone.
More details can be found in the paper.
Parameters: - arch (dict) –
The parameter of RegNets.
- w0 (int): initial width
- wa (float): slope of width
- wm (float): quantization parameter to quantize the width
- depth (int): depth of the backbone
- group_w (int): width of group
- bot_mul (float): bottleneck ratio, i.e. expansion of bottleneck.
- strides (Sequence[int]) – Strides of the first block of each stage.
- base_channels (int) – Base channels after stem layer.
- in_channels (int) – Number of input image channels. Default: 3.
- dilations (Sequence[int]) – Dilation of each stage.
- out_indices (Sequence[int]) – Output from which stages.
- style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
- frozen_stages (int) – Stages to be frozen (all param fixed). -1 means not freezing any parameters.
- norm_cfg (dict) – dictionary to construct and config norm layer.
- norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
- with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
- zero_init_residual (bool) – whether to use zero init for last norm layer in resblocks to let them behave as identity.
Example
>>> from mmdet.models import RegNet >>> import torch >>> self = RegNet( arch=dict( w0=88, wa=26.31, wm=2.25, group_w=48, depth=25, bot_mul=1.0)) >>> self.eval() >>> inputs = torch.rand(1, 3, 32, 32) >>> level_outputs = self.forward(inputs) >>> for level_out in level_outputs: ... print(tuple(level_out.shape)) (1, 96, 8, 8) (1, 192, 4, 4) (1, 432, 2, 2) (1, 1008, 1, 1)
-
adjust_width_group
(widths, bottleneck_ratio, groups)[source]¶ Adjusts the compatibility of widths and groups.
Parameters: - widths (list[int]) – Width of each stage.
- bottleneck_ratio (float) – Bottleneck ratio.
- groups (int) – number of groups in each stage
Returns: The adjusted widths and groups of each stage.
Return type: tuple(list)
-
generate_regnet
(initial_width, width_slope, width_parameter, depth, divisor=8)[source]¶ Generates per block width from RegNet parameters.
Parameters: - initial_width ([int]) – Initial width of the backbone
- width_slope ([float]) – Slope of the quantized linear function
- width_parameter ([int]) – Parameter used to quantize the width.
- depth ([int]) – Depth of the backbone.
- divisor (int, optional) – The divisor of channels. Defaults to 8.
Returns: return a list of widths of each stage and the number of stages
Return type: list, int
-
class
mmdet.models.backbones.
ResNet
(depth, in_channels=3, stem_channels=None, base_channels=64, num_stages=4, strides=(1, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(0, 1, 2, 3), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=-1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=True, dcn=None, stage_with_dcn=(False, False, False, False), plugins=None, with_cp=False, zero_init_residual=True)[source]¶ ResNet backbone.
Parameters: - depth (int) – Depth of resnet, from {18, 34, 50, 101, 152}.
- stem_channels (int | None) – Number of stem channels. If not specified, it will be the same as base_channels. Default: None.
- base_channels (int) – Number of base channels of res layer. Default: 64.
- in_channels (int) – Number of input image channels. Default: 3.
- num_stages (int) – Resnet stages. Default: 4.
- strides (Sequence[int]) – Strides of the first block of each stage.
- dilations (Sequence[int]) – Dilation of each stage.
- out_indices (Sequence[int]) – Output from which stages.
- style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
- deep_stem (bool) – Replace the 7x7 conv in the input stem with three 3x3 convs.
- avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck.
- frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters.
- norm_cfg (dict) – Dictionary to construct and config norm layer.
- norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
- plugins (list[dict]) –
List of plugins for stages, each dict contains:
- cfg (dict, required): Cfg dict to build plugin.
- position (str, required): Position inside block to insert plugin, options are ‘after_conv1’, ‘after_conv2’, ‘after_conv3’.
- stages (tuple[bool], optional): Stages to apply plugin, length should be same as ‘num_stages’.
- with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
- zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity.
Example
>>> from mmdet.models import ResNet >>> import torch >>> self = ResNet(depth=18) >>> self.eval() >>> inputs = torch.rand(1, 3, 32, 32) >>> level_outputs = self.forward(inputs) >>> for level_out in level_outputs: ... print(tuple(level_out.shape)) (1, 64, 8, 8) (1, 128, 4, 4) (1, 256, 2, 2) (1, 512, 1, 1)
-
init_weights
(pretrained=None)[source]¶ Initialize the weights in backbone.
Parameters: pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
-
make_stage_plugins
(plugins, stage_idx)[source]¶ Make plugins for the ResNet stage_idx-th stage.
Currently we support inserting context_block, empirical_attention_block and nonlocal_block into backbones like ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of Bottleneck.
An example of the plugins format could be:
Examples
>>> plugins=[ ... dict(cfg=dict(type='xxx', arg1='xxx'), ... stages=(False, True, True, True), ... position='after_conv2'), ... dict(cfg=dict(type='yyy'), ... stages=(True, True, True, True), ... position='after_conv3'), ... dict(cfg=dict(type='zzz', postfix='1'), ... stages=(True, True, True, True), ... position='after_conv3'), ... dict(cfg=dict(type='zzz', postfix='2'), ... stages=(True, True, True, True), ... position='after_conv3') ... ] >>> self = ResNet(depth=18) >>> stage_plugins = self.make_stage_plugins(plugins, 0) >>> assert len(stage_plugins) == 3
Suppose stage_idx=0, the structure of blocks in the stage would be:
conv1 -> conv2 -> conv3 -> yyy -> zzz1 -> zzz2
Suppose stage_idx=1, the structure of blocks in the stage would be:
conv1 -> conv2 -> xxx -> conv3 -> yyy -> zzz1 -> zzz2
If stages is missing, the plugin would be applied to all stages.
Parameters: - plugins (list[dict]) – List of plugins cfg to build. The postfix is required if multiple same type plugins are inserted.
- stage_idx (int) – Index of stage to build
Returns: Plugins for current stage
Return type: list[dict]
-
norm1
¶ the normalization layer named “norm1”
Type: nn.Module
-
class
mmdet.models.backbones.
ResNetV1d
(**kwargs)[source]¶ ResNetV1d variant described in Bag of Tricks.
Compared with the default ResNet (ResNetV1b), ResNetV1d replaces the 7x7 conv in the input stem with three 3x3 convs, and in the downsampling block a 2x2 avg_pool with stride 2 is added before the conv, whose stride is changed to 1.
-
class
mmdet.models.backbones.
ResNeXt
(groups=1, base_width=4, **kwargs)[source]¶ ResNeXt backbone.
Parameters: - depth (int) – Depth of resnet, from {18, 34, 50, 101, 152}.
- in_channels (int) – Number of input image channels. Default: 3.
- num_stages (int) – Resnet stages. Default: 4.
- groups (int) – Group of resnext.
- base_width (int) – Base width of resnext.
- strides (Sequence[int]) – Strides of the first block of each stage.
- dilations (Sequence[int]) – Dilation of each stage.
- out_indices (Sequence[int]) – Output from which stages.
- style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
- frozen_stages (int) – Stages to be frozen (all param fixed). -1 means not freezing any parameters.
- norm_cfg (dict) – dictionary to construct and config norm layer.
- norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
- with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
- zero_init_residual (bool) – whether to use zero init for last norm layer in resblocks to let them behave as identity.
-
class
mmdet.models.backbones.
SSDVGG
(input_size, depth, with_last_pool=False, ceil_mode=True, out_indices=(3, 4), out_feature_indices=(22, 34), l2_norm_scale=20.0)[source]¶ VGG Backbone network for single-shot-detection.
Parameters: - input_size (int) – width and height of input, from {300, 512}.
- depth (int) – Depth of vgg, from {11, 13, 16, 19}.
- out_indices (Sequence[int]) – Output from which stages.
Example
>>> self = SSDVGG(input_size=300, depth=11) >>> self.eval() >>> inputs = torch.rand(1, 3, 300, 300) >>> level_outputs = self.forward(inputs) >>> for level_out in level_outputs: ... print(tuple(level_out.shape)) (1, 1024, 19, 19) (1, 512, 10, 10) (1, 256, 5, 5) (1, 256, 3, 3) (1, 256, 1, 1)
-
class
mmdet.models.backbones.
HRNet
(extra, in_channels=3, conv_cfg=None, norm_cfg={'type': 'BN'}, norm_eval=True, with_cp=False, zero_init_residual=False)[source]¶ HRNet backbone.
High-Resolution Representations for Labeling Pixels and Regions. arXiv: https://arxiv.org/abs/1904.04514
Parameters: - extra (dict) – detailed configuration for each stage of HRNet.
- in_channels (int) – Number of input image channels. Default: 3.
- conv_cfg (dict) – dictionary to construct and config conv layer.
- norm_cfg (dict) – dictionary to construct and config norm layer.
- norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
- with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
- zero_init_residual (bool) – whether to use zero init for last norm layer in resblocks to let them behave as identity.
Example
>>> from mmdet.models import HRNet >>> import torch >>> extra = dict( >>> stage1=dict( >>> num_modules=1, >>> num_branches=1, >>> block='BOTTLENECK', >>> num_blocks=(4, ), >>> num_channels=(64, )), >>> stage2=dict( >>> num_modules=1, >>> num_branches=2, >>> block='BASIC', >>> num_blocks=(4, 4), >>> num_channels=(32, 64)), >>> stage3=dict( >>> num_modules=4, >>> num_branches=3, >>> block='BASIC', >>> num_blocks=(4, 4, 4), >>> num_channels=(32, 64, 128)), >>> stage4=dict( >>> num_modules=3, >>> num_branches=4, >>> block='BASIC', >>> num_blocks=(4, 4, 4, 4), >>> num_channels=(32, 64, 128, 256))) >>> self = HRNet(extra, in_channels=1) >>> self.eval() >>> inputs = torch.rand(1, 1, 32, 32) >>> level_outputs = self.forward(inputs) >>> for level_out in level_outputs: ... print(tuple(level_out.shape)) (1, 32, 8, 8) (1, 64, 4, 4) (1, 128, 2, 2) (1, 256, 1, 1)
-
init_weights
(pretrained=None)[source]¶ Initialize the weights in backbone.
Parameters: pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
-
norm1
¶ the normalization layer named “norm1”
Type: nn.Module
-
norm2
¶ the normalization layer named “norm2”
Type: nn.Module
-
class
mmdet.models.backbones.
Res2Net
(scales=4, base_width=26, style='pytorch', deep_stem=True, avg_down=True, **kwargs)[source]¶ Res2Net backbone.
Parameters: - scales (int) – Scales used in Res2Net. Default: 4
- base_width (int) – Basic width of each scale. Default: 26
- depth (int) – Depth of res2net, from {50, 101, 152}.
- in_channels (int) – Number of input image channels. Default: 3.
- num_stages (int) – Res2net stages. Default: 4.
- strides (Sequence[int]) – Strides of the first block of each stage.
- dilations (Sequence[int]) – Dilation of each stage.
- out_indices (Sequence[int]) – Output from which stages.
- style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.
- deep_stem (bool) – Replace the 7x7 conv in the input stem with three 3x3 convs.
- avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottle2neck.
- frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters.
- norm_cfg (dict) – Dictionary to construct and config norm layer.
- norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
- plugins (list[dict]) –
List of plugins for stages, each dict contains:
- cfg (dict, required): Cfg dict to build plugin.
- position (str, required): Position inside block to insert plugin, options are ‘after_conv1’, ‘after_conv2’, ‘after_conv3’.
- stages (tuple[bool], optional): Stages to apply plugin, length should be same as ‘num_stages’.
- with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
- zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity.
Example
>>> from mmdet.models import Res2Net >>> import torch >>> self = Res2Net(depth=50, scales=4, base_width=26) >>> self.eval() >>> inputs = torch.rand(1, 3, 32, 32) >>> level_outputs = self.forward(inputs) >>> for level_out in level_outputs: ... print(tuple(level_out.shape)) (1, 256, 8, 8) (1, 512, 4, 4) (1, 1024, 2, 2) (1, 2048, 1, 1)
-
class
mmdet.models.backbones.
HourglassNet
(downsample_times=5, num_stacks=2, stage_channels=(256, 256, 384, 384, 384, 512), stage_blocks=(2, 2, 2, 2, 2, 4), feat_channel=256, norm_cfg={'requires_grad': True, 'type': 'BN'})[source]¶ HourglassNet backbone.
Stacked Hourglass Networks for Human Pose Estimation. More details can be found in the paper .
Parameters: - downsample_times (int) – Downsample times in a HourglassModule.
- num_stacks (int) – Number of HourglassModule modules stacked, 1 for Hourglass-52, 2 for Hourglass-104.
- stage_channels (list[int]) – Feature channel of each sub-module in a HourglassModule.
- stage_blocks (list[int]) – Number of sub-modules stacked in a HourglassModule.
- feat_channel (int) – Feature channel of conv after a HourglassModule.
- norm_cfg (dict) – Dictionary to construct and config norm layer.
Example
>>> from mmdet.models import HourglassNet >>> import torch >>> self = HourglassNet() >>> self.eval() >>> inputs = torch.rand(1, 3, 511, 511) >>> level_outputs = self.forward(inputs) >>> for level_output in level_outputs: ... print(tuple(level_output.shape)) (1, 256, 128, 128) (1, 256, 128, 128)
-
init_weights
(pretrained=None)[source]¶ Init module weights.
We do nothing in this function because all modules we use (ConvModule, BasicBlock, etc.) have default initialization, and currently we don’t provide a pretrained model of HourglassNet.
The detector’s __init__() will call the backbone’s init_weights() with pretrained as input, so we keep this function.
-
class
mmdet.models.backbones.
DetectoRS_ResNet
(sac=None, stage_with_sac=(False, False, False, False), rfp_inplanes=None, output_img=False, pretrained=None, **kwargs)[source]¶ ResNet backbone for DetectoRS.
Parameters: - sac (dict, optional) – Dictionary to construct SAC (Switchable Atrous Convolution). Default: None.
- stage_with_sac (list) – Which stage to use sac. Default: (False, False, False, False).
- rfp_inplanes (int, optional) – The number of channels from RFP. Default: None. If specified, an additional conv layer will be added for rfp_feat. Otherwise, the structure is the same as the base class.
- output_img (bool) – If True, the input image will be inserted into the starting position of the output. Default: False.
- pretrained (str, optional) – The pretrained model to load.
-
class
mmdet.models.backbones.
DetectoRS_ResNeXt
(groups=1, base_width=4, **kwargs)[source]¶ ResNeXt backbone for DetectoRS.
Parameters: - groups (int) – The number of groups in ResNeXt.
- base_width (int) – The base width of ResNeXt.
-
class
mmdet.models.backbones.
Darknet
(depth=53, out_indices=(3, 4, 5), frozen_stages=-1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, act_cfg={'negative_slope': 0.1, 'type': 'LeakyReLU'}, norm_eval=True)[source]¶ Darknet backbone.
Parameters: - depth (int) – Depth of Darknet. Currently only support 53.
- out_indices (Sequence[int]) – Output from which stages.
- frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1.
- conv_cfg (dict) – Config dict for convolution layer. Default: None.
- norm_cfg (dict) – Dictionary to construct and config norm layer. Default: dict(type=’BN’, requires_grad=True)
- act_cfg (dict) – Config dict for activation layer. Default: dict(type=’LeakyReLU’, negative_slope=0.1).
- norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.
Example
>>> from mmdet.models import Darknet >>> import torch >>> self = Darknet(depth=53) >>> self.eval() >>> inputs = torch.rand(1, 3, 416, 416) >>> level_outputs = self.forward(inputs) >>> for level_out in level_outputs: ... print(tuple(level_out.shape)) ... (1, 256, 52, 52) (1, 512, 26, 26) (1, 1024, 13, 13)
-
static
make_conv_res_block
(in_channels, out_channels, res_repeat, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, act_cfg={'negative_slope': 0.1, 'type': 'LeakyReLU'})[source]¶ In the Darknet backbone, a ConvLayer is usually followed by a ResBlock; this function makes that pair. The Conv layers always have 3x3 filters with stride=2. The number of filters in the Conv layer is the same as the number of output channels of the ResBlock.
Parameters: - in_channels (int) – The number of input channels.
- out_channels (int) – The number of output channels.
- res_repeat (int) – The number of ResBlocks.
- conv_cfg (dict) – Config dict for convolution layer. Default: None.
- norm_cfg (dict) – Dictionary to construct and config norm layer. Default: dict(type=’BN’, requires_grad=True)
- act_cfg (dict) – Config dict for activation layer. Default: dict(type=’LeakyReLU’, negative_slope=0.1).
-
train
(mode=True)[source]¶ Sets the module in training mode.
This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
Parameters: mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.
Returns: self
Return type: Module
necks¶
-
class
mmdet.models.necks.
FPN
(in_channels, out_channels, num_outs, start_level=0, end_level=-1, add_extra_convs=False, extra_convs_on_inputs=True, relu_before_extra_convs=False, no_norm_on_lateral=False, conv_cfg=None, norm_cfg=None, act_cfg=None, upsample_cfg={'mode': 'nearest'})[source]¶ Feature Pyramid Network.
This is an implementation of paper Feature Pyramid Networks for Object Detection.
Parameters: - in_channels (List[int]) – Number of input channels per scale.
- out_channels (int) – Number of output channels (used at each scale)
- num_outs (int) – Number of output scales.
- start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 0.
- end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
- add_extra_convs (bool | str) –
If bool, it decides whether to add conv layers on top of the original feature maps. Default to False. If True, its actual mode is specified by extra_convs_on_inputs. If str, it specifies the source feature map of the extra convs. Only the following options are allowed
- ’on_input’: Last feat map of neck inputs (i.e. backbone feature).
- ’on_lateral’: Last feature map after lateral convs.
- ’on_output’: The last output feature map after fpn convs.
- extra_convs_on_inputs (bool, deprecated) – Whether to apply extra convs on the original feature from the backbone. If True, it is equivalent to add_extra_convs=’on_input’. If False, it is equivalent to set add_extra_convs=’on_output’. Default to True.
- relu_before_extra_convs (bool) – Whether to apply relu before the extra conv. Default: False.
- no_norm_on_lateral (bool) – Whether to apply norm on lateral. Default: False.
- conv_cfg (dict) – Config dict for convolution layer. Default: None.
- norm_cfg (dict) – Config dict for normalization layer. Default: None.
- act_cfg (str) – Config dict for activation layer in ConvModule. Default: None.
- upsample_cfg (dict) – Config dict for interpolate layer. Default: dict(mode=’nearest’)
Example
>>> import torch >>> in_channels = [2, 3, 5, 7] >>> scales = [340, 170, 84, 43] >>> inputs = [torch.rand(1, c, s, s) ... for c, s in zip(in_channels, scales)] >>> self = FPN(in_channels, 11, len(in_channels)).eval() >>> outputs = self.forward(inputs) >>> for i in range(len(outputs)): ... print(f'outputs[{i}].shape = {outputs[i].shape}') outputs[0].shape = torch.Size([1, 11, 340, 340]) outputs[1].shape = torch.Size([1, 11, 170, 170]) outputs[2].shape = torch.Size([1, 11, 84, 84]) outputs[3].shape = torch.Size([1, 11, 43, 43])
-
class
mmdet.models.necks.
BFP
(Balanced Feature Pyramids)[source]¶ BFP takes multi-level features as inputs and gathers them into a single one, then refines the gathered feature and scatters the refined results back to the multi-level features. This module is used in Libra R-CNN (CVPR 2019); see the paper Libra R-CNN: Towards Balanced Learning for Object Detection for details.
Parameters: - in_channels (int) – Number of input channels (feature maps of all levels should have the same channels).
- num_levels (int) – Number of input feature levels.
- conv_cfg (dict) – The config dict for convolution layers.
- norm_cfg (dict) – The config dict for normalization layers.
- refine_level (int) – Index of integration and refine level of BSF in multi-level features from bottom to top.
- refine_type (str) – Type of the refine op, currently support [None, ‘conv’, ‘non_local’].
-
class
mmdet.models.necks.
HRFPN
(High Resolution Feature Pyramids)[source]¶ Paper: High-Resolution Representations for Labeling Pixels and Regions.
Parameters: - in_channels (list) – number of channels for each branch.
- out_channels (int) – output channels of feature pyramids.
- num_outs (int) – number of output stages.
- pooling_type (str) – pooling for generating feature pyramids from {MAX, AVG}.
- conv_cfg (dict) – dictionary to construct and config conv layer.
- norm_cfg (dict) – dictionary to construct and config norm layer.
- with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.
- stride (int) – stride of 3x3 convolutional layers
-
class
mmdet.models.necks.
NASFPN
(in_channels, out_channels, num_outs, stack_times, start_level=0, end_level=-1, add_extra_convs=False, norm_cfg=None)[source]¶ NAS-FPN.
Implementation of NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection
Parameters: - in_channels (List[int]) – Number of input channels per scale.
- out_channels (int) – Number of output channels (used at each scale)
- num_outs (int) – Number of output scales.
- stack_times (int) – The number of times the pyramid architecture will be stacked.
- start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 0.
- end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
- add_extra_convs (bool) – It decides whether to add conv layers on top of the original feature maps. Default to False. If True, its actual mode is specified by extra_convs_on_inputs.
-
class
mmdet.models.necks.
FPN_CARAFE
(in_channels, out_channels, num_outs, start_level=0, end_level=-1, norm_cfg=None, act_cfg=None, order=('conv', 'norm', 'act'), upsample_cfg={'encoder_dilation': 1, 'encoder_kernel': 3, 'type': 'carafe', 'up_group': 1, 'up_kernel': 5})[source]¶ FPN_CARAFE is a more flexible implementation of FPN. It allows more choice for upsample methods during the top-down pathway.
It can reproduce the performance of the ICCV 2019 paper CARAFE: Content-Aware ReAssembly of FEatures. Please refer to https://arxiv.org/abs/1905.02188 for more details.
Parameters: - in_channels (list[int]) – Number of channels for each input feature map.
- out_channels (int) – Output channels of feature pyramids.
- num_outs (int) – Number of output stages.
- start_level (int) – Start level of feature pyramids. (Default: 0)
- end_level (int) – End level of feature pyramids. (Default: -1 indicates the last level).
- norm_cfg (dict) – Dictionary to construct and config norm layer.
- activate (str) – Type of activation function in ConvModule (Default: None indicates w/o activation).
- order (dict) – Order of components in ConvModule.
- upsample (str) – Type of upsample layer.
- upsample_cfg (dict) – Dictionary to construct and config upsample layer.
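A hypothetical config snippet showing how the CARAFE upsampler is selected through upsample_cfg; the upsample values simply restate the defaults from the signature above, while the channel values are illustrative assumptions.
>>> neck = dict(
...     type='FPN_CARAFE',
...     in_channels=[256, 512, 1024, 2048],
...     out_channels=256,
...     num_outs=5,
...     upsample_cfg=dict(
...         type='carafe', up_kernel=5, up_group=1,
...         encoder_kernel=3, encoder_dilation=1))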
-
class
mmdet.models.necks.
PAFPN
(in_channels, out_channels, num_outs, start_level=0, end_level=-1, add_extra_convs=False, extra_convs_on_inputs=True, relu_before_extra_convs=False, no_norm_on_lateral=False, conv_cfg=None, norm_cfg=None, act_cfg=None)[source]¶ Path Aggregation Network for Instance Segmentation.
This is an implementation of the PAFPN in Path Aggregation Network.
Parameters: - in_channels (List[int]) – Number of input channels per scale.
- out_channels (int) – Number of output channels (used at each scale)
- num_outs (int) – Number of output scales.
- start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 0.
- end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
- add_extra_convs (bool) – Whether to add conv layers on top of the original feature maps. Default: False.
- extra_convs_on_inputs (bool) – Whether to apply extra conv on the original feature from the backbone. Default: True.
- relu_before_extra_convs (bool) – Whether to apply relu before the extra conv. Default: False.
- no_norm_on_lateral (bool) – Whether to apply norm on lateral. Default: False.
- conv_cfg (dict) – Config dict for convolution layer. Default: None.
- norm_cfg (dict) – Config dict for normalization layer. Default: None.
- act_cfg (str) – Config dict for activation layer in ConvModule. Default: None.
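Since PAFPN shares the FPN interface, it can be exercised the same way as the FPN example above; a minimal sketch with illustrative channel counts (power-of-two sizes are used so the extra bottom-up path downsamples cleanly):
>>> import torch
>>> from mmdet.models.necks import PAFPN
>>> in_channels = [2, 3, 5, 7]
>>> scales = [64, 32, 16, 8]
>>> inputs = [torch.rand(1, c, s, s) for c, s in zip(in_channels, scales)]
>>> self = PAFPN(in_channels, 11, len(in_channels)).eval()
>>> outputs = self.forward(inputs)
>>> # one output per level, each with 11 channels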
-
class
mmdet.models.necks.
NASFCOS_FPN
(in_channels, out_channels, num_outs, start_level=1, end_level=-1, add_extra_convs=False, conv_cfg=None, norm_cfg=None)[source]¶ FPN structure used in NAS-FCOS.
Implementation of paper NAS-FCOS: Fast Neural Architecture Search for Object Detection
Parameters: - in_channels (List[int]) – Number of input channels per scale.
- out_channels (int) – Number of output channels (used at each scale)
- num_outs (int) – Number of output scales.
- start_level (int) – Index of the start input backbone level used to build the feature pyramid. Default: 1.
- end_level (int) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.
- add_extra_convs (bool) – It decides whether to add conv layers on top of the original feature maps. Default to False. If True, its actual mode is specified by extra_convs_on_inputs.
- conv_cfg (dict) – dictionary to construct and config conv layer.
- norm_cfg (dict) – dictionary to construct and config norm layer.
-
class
mmdet.models.necks.
RFP
(Recursive Feature Pyramid)[source]¶ This is an implementation of RFP in DetectoRS. Different from a standard FPN, the input of RFP should be the multi-level features along with the original input image of the backbone.
Parameters: - rfp_steps (int) – Number of unrolled steps of RFP.
- rfp_backbone (dict) – Configuration of the backbone for RFP.
- aspp_out_channels (int) – Number of output channels of ASPP module.
- aspp_dilations (tuple[int]) – Dilation rates of four branches. Default: (1, 3, 6, 1)
-
class
mmdet.models.necks.
YOLOV3Neck
(num_scales, in_channels, out_channels, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, act_cfg={'negative_slope': 0.1, 'type': 'LeakyReLU'})[source]¶ The neck of YOLOV3.
It can be treated as a simplified version of FPN. It takes the features from the Darknet backbone, applies some upsampling and concatenation, and outputs the fused features used for detection.
Note
- The input feats should be from top to bottom, i.e., from high-lvl to low-lvl.
- But YOLOV3Neck will process them in reversed order, i.e., from bottom (high-lvl) to top (low-lvl).
Parameters: - num_scales (int) – The number of scales / stages.
- in_channels (int) – The number of input channels.
- out_channels (int) – The number of output channels.
- conv_cfg (dict) – Config dict for convolution layer. Default: None.
- norm_cfg (dict) – Dictionary to construct and config norm layer. Default: dict(type=’BN’, requires_grad=True)
- act_cfg (dict) – Config dict for activation layer. Default: dict(type=’LeakyReLU’, negative_slope=0.1).
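A hypothetical config snippet in the style of the YOLOv3 detector configs (the channel values are illustrative assumptions):
>>> neck = dict(
...     type='YOLOV3Neck',
...     num_scales=3,
...     in_channels=[1024, 512, 256],
...     out_channels=[512, 256, 128])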
dense_heads¶
-
class
mmdet.models.dense_heads.
AnchorFreeHead
(num_classes, in_channels, feat_channels=256, stacked_convs=4, strides=(4, 8, 16, 32, 64), dcn_on_last_conv=False, conv_bias='auto', background_label=None, loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox={'loss_weight': 1.0, 'type': 'IoULoss'}, conv_cfg=None, norm_cfg=None, train_cfg=None, test_cfg=None)[source]¶ Anchor-free head (FCOS, Fovea, RepPoints, etc.).
Parameters: - num_classes (int) – Number of categories excluding the background category.
- in_channels (int) – Number of channels in the input feature map.
- feat_channels (int) – Number of hidden channels. Used in child classes.
- stacked_convs (int) – Number of stacking convs of the head.
- strides (tuple) – Downsample factor of each feature map.
- dcn_on_last_conv (bool) – If true, use dcn in the last layer of towers. Default: False.
- conv_bias (bool | str) – If specified as auto, it will be decided by the norm_cfg. Bias of conv will be set as True if norm_cfg is None, otherwise False. Default: “auto”.
- background_label (int | None) – Label ID of background, set as 0 for RPN and num_classes for other heads. It will automatically set as num_classes if None is given.
- loss_cls (dict) – Config of classification loss.
- loss_bbox (dict) – Config of localization loss.
- conv_cfg (dict) – Config dict for convolution layer. Default: None.
- norm_cfg (dict) – Config dict for normalization layer. Default: None.
- train_cfg (dict) – Training config of anchor head.
- test_cfg (dict) – Testing config of anchor head.
-
forward
(feats)[source]¶ Forward features from the upstream network.
Parameters: feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor. Returns: Usually contains classification scores and bbox predictions.
- cls_scores (list[Tensor]): Box scores for each scale level, each is a 4D-tensor, the channel number is num_points * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for each scale level, each is a 4D-tensor, the channel number is num_points * 4.
Return type: tuple
-
forward_single
(x)[source]¶ Forward features of a single scale level.
Parameters: x (Tensor) – FPN feature maps of the specified stride. Returns: Scores for each class, bbox predictions, and features after the classification and regression conv layers; some models (e.g., FCOS) need these features.
Return type: tuple
-
get_bboxes
(cls_scores, bbox_preds, img_metas, cfg=None, rescale=None)[source]¶ Transform network output for a batch into bbox predictions.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_points * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_points * 4, H, W)
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used
- rescale (bool) – If True, return boxes in original image space
-
get_points
(featmap_sizes, dtype, device, flatten=False)[source]¶ Get points according to feature map sizes.
Parameters: - featmap_sizes (list[tuple]) – Multi-level feature map sizes.
- dtype (torch.dtype) – Type of points.
- device (torch.device) – Device of points.
Returns: points of each image.
Return type: tuple
-
get_targets
(points, gt_bboxes_list, gt_labels_list)[source]¶ Compute regression, classification and centerness targets for points in multiple images.
Parameters: - points (list[Tensor]) – Points of each fpn level, each has shape (num_points, 2).
- gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image, each has shape (num_gt, 4).
- gt_labels_list (list[Tensor]) – Ground truth labels of each box, each has shape (num_gt,).
-
loss
(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute loss of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level, each is a 4D-tensor, the channel number is num_points * num_classes.
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level, each is a 4D-tensor, the channel number is num_points * 4.
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
-
class
mmdet.models.dense_heads.
AnchorHead
(num_classes, in_channels, feat_channels=256, anchor_generator={'ratios': [0.5, 1.0, 2.0], 'scales': [8, 16, 32], 'strides': [4, 8, 16, 32, 64], 'type': 'AnchorGenerator'}, bbox_coder={'target_means': (0.0, 0.0, 0.0, 0.0), 'target_stds': (1.0, 1.0, 1.0, 1.0), 'type': 'DeltaXYWHBBoxCoder'}, reg_decoded_bbox=False, background_label=None, loss_cls={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, loss_bbox={'beta': 0.1111111111111111, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'}, train_cfg=None, test_cfg=None)[source]¶ Anchor-based head (RPN, RetinaNet, SSD, etc.).
Parameters: - num_classes (int) – Number of categories excluding the background category.
- in_channels (int) – Number of channels in the input feature map.
- feat_channels (int) – Number of hidden channels. Used in child classes.
- anchor_generator (dict) – Config dict for anchor generator
- bbox_coder (dict) – Config of bounding box coder.
- reg_decoded_bbox (bool) – If true, the regression loss would be applied on decoded bounding boxes. Default: False
- background_label (int | None) – Label ID of background, set as 0 for RPN and num_classes for other heads. It will automatically set as num_classes if None is given.
- loss_cls (dict) – Config of classification loss.
- loss_bbox (dict) – Config of localization loss.
- train_cfg (dict) – Training config of anchor head.
- test_cfg (dict) – Testing config of anchor head.
-
forward
(feats)[source]¶ Forward features from the upstream network.
Parameters: feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor. Returns: A tuple of classification scores and bbox prediction. - cls_scores (list[Tensor]): Classification scores for all scale levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for all scale levels, each is a 4D-tensor, the channels number is num_anchors * 4.
Return type: tuple
-
forward_single
(x)[source]¶ Forward feature of a single scale level.
Parameters: x (Tensor) – Features of a single scale level. Returns:
- cls_score (Tensor): Cls scores for a single scale level, the channels number is num_anchors * num_classes.
- bbox_pred (Tensor): Box energies / deltas for a single scale level, the channels number is num_anchors * 4.
Return type: tuple
-
get_anchors
(featmap_sizes, img_metas, device='cuda')[source]¶ Get anchors according to feature map sizes.
Parameters: - featmap_sizes (list[tuple]) – Multi-level feature map sizes.
- img_metas (list[dict]) – Image meta info.
- device (torch.device | str) – Device for returned tensors
Returns: anchor_list (list[Tensor]): Anchors of each image. valid_flag_list (list[Tensor]): Valid flags of each image.
Return type: tuple
-
get_bboxes
(cls_scores, bbox_preds, img_metas, cfg=None, rescale=False)[source]¶ Transform network output for a batch into bbox predictions.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- cfg (mmcv.Config | None) – Test / postprocessing configuration, if None, test_cfg would be used
- rescale (bool) – If True, return boxes in original image space. Default: False.
Returns: Each item in result_list is a 2-tuple. The first item is an (n, 5) tensor, where the first 4 columns are bounding box positions (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the predicted class label of the corresponding box.
Return type: list[tuple[Tensor, Tensor]]
Example
>>> import mmcv
>>> self = AnchorHead(
>>>     num_classes=9,
>>>     in_channels=1,
>>>     anchor_generator=dict(
>>>         type='AnchorGenerator',
>>>         scales=[8],
>>>         ratios=[0.5, 1.0, 2.0],
>>>         strides=[4,]))
>>> img_metas = [{'img_shape': (32, 32, 3), 'scale_factor': 1}]
>>> cfg = mmcv.Config(dict(
>>>     score_thr=0.00,
>>>     nms=dict(type='nms', iou_thr=1.0),
>>>     max_per_img=10))
>>> feat = torch.rand(1, 1, 3, 3)
>>> cls_score, bbox_pred = self.forward_single(feat)
>>> # note the input lists are over different levels, not images
>>> cls_scores, bbox_preds = [cls_score], [bbox_pred]
>>> result_list = self.get_bboxes(cls_scores, bbox_preds,
>>>                               img_metas, cfg)
>>> det_bboxes, det_labels = result_list[0]
>>> assert len(result_list) == 1
>>> assert det_bboxes.shape[1] == 5
>>> assert len(det_bboxes) == len(det_labels) == cfg.max_per_img
-
get_targets
(anchor_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, label_channels=1, unmap_outputs=True, return_sampling_results=False)[source]¶ Compute regression and classification targets for anchors in multiple images.
Parameters: - anchor_list (list[list[Tensor]]) – Multi level anchors of each image. The outer list indicates images, and the inner list corresponds to feature levels of the image. Each element of the inner list is a tensor of shape (num_anchors, 4).
- valid_flag_list (list[list[Tensor]]) – Multi level valid flags of each image. The outer list indicates images, and the inner list corresponds to feature levels of the image. Each element of the inner list is a tensor of shape (num_anchors, )
- gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.
- img_metas (list[dict]) – Meta info of each image.
- gt_bboxes_ignore_list (list[Tensor]) – Ground truth bboxes to be ignored.
- gt_labels_list (list[Tensor]) – Ground truth labels of each box.
- label_channels (int) – Channel of label.
- unmap_outputs (bool) – Whether to map outputs back to the original set of anchors.
Returns: Usually returns a tuple containing learning targets.
- labels_list (list[Tensor]): Labels of each level.
- label_weights_list (list[Tensor]): Label weights of each level.
- bbox_targets_list (list[Tensor]): BBox targets of each level.
- bbox_weights_list (list[Tensor]): BBox weights of each level.
- num_total_pos (int): Number of positive samples in all images.
- num_total_neg (int): Number of negative samples in all images.
- additional_returns: This function enables user-defined returns from self._get_targets_single. These returns are currently refined to properties at each feature map (i.e. having HxW dimension). The results will be concatenated at the end.
Return type: tuple
-
loss
(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute losses of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss. Default: None
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
loss_single
(cls_score, bbox_pred, anchors, labels, label_weights, bbox_targets, bbox_weights, num_total_samples)[source]¶ Compute loss of a single scale level.
Parameters: - cls_score (Tensor) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W).
- bbox_pred (Tensor) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W).
- anchors (Tensor) – Box reference for each scale level with shape (N, num_total_anchors, 4).
- labels (Tensor) – Labels of each anchors with shape (N, num_total_anchors).
- label_weights (Tensor) – Label weights of each anchor with shape (N, num_total_anchors)
- bbox_targets (Tensor) – BBox regression targets of each anchor with shape (N, num_total_anchors, 4).
- bbox_weights (Tensor) – BBox regression loss weights of each anchor with shape (N, num_total_anchors, 4).
- num_total_samples (int) – If sampling, num total samples equal to the number of total anchors; Otherwise, it is the number of positive anchors.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
class
mmdet.models.dense_heads.
GuidedAnchorHead
(num_classes, in_channels, feat_channels=256, approx_anchor_generator={'octave_base_scale': 8, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [4, 8, 16, 32, 64], 'type': 'AnchorGenerator'}, square_anchor_generator={'ratios': [1.0], 'scales': [8], 'strides': [4, 8, 16, 32, 64], 'type': 'AnchorGenerator'}, anchor_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'type': 'DeltaXYWHBBoxCoder'}, bbox_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'type': 'DeltaXYWHBBoxCoder'}, reg_decoded_bbox=False, deform_groups=4, loc_filter_thr=0.01, background_label=None, train_cfg=None, test_cfg=None, loss_loc={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_shape={'beta': 0.2, 'loss_weight': 1.0, 'type': 'BoundedIoULoss'}, loss_cls={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, loss_bbox={'beta': 1.0, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'})[source]¶ Guided-Anchor-based head (GA-RPN, GA-RetinaNet, etc.).
This GuidedAnchorHead will predict high-quality feature guided anchors and locations where anchors will be kept in inference. There are mainly 3 categories of bounding-boxes.
- Sampled 9 pairs for target assignment (approxes).
- The square boxes on which the predicted anchors are based (squares).
- Guided anchors.
Please refer to https://arxiv.org/abs/1901.03278 for more details.
Parameters: - num_classes (int) – Number of classes.
- in_channels (int) – Number of channels in the input feature map.
- feat_channels (int) – Number of hidden channels.
- approx_anchor_generator (dict) – Config dict for approx generator
- square_anchor_generator (dict) – Config dict for square generator
- anchor_coder (dict) – Config dict for anchor coder
- bbox_coder (dict) – Config dict for bbox coder
- deform_groups (int) – Group number of DCN in FeatureAdaption module.
- loc_filter_thr (float) – Threshold to filter out unconcerned regions.
- background_label (int | None) – Label ID of background, set as 0 for RPN and num_classes for other heads. It will automatically set as num_classes if None is given.
- loss_loc (dict) – Config of location loss.
- loss_shape (dict) – Config of anchor shape loss.
- loss_cls (dict) – Config of classification loss.
- loss_bbox (dict) – Config of bbox regression loss.
-
forward
(feats)[source]¶ Forward features from the upstream network.
Parameters: feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor. Returns: A tuple of classification scores and bbox prediction. - cls_scores (list[Tensor]): Classification scores for all scale levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for all scale levels, each is a 4D-tensor, the channels number is num_anchors * 4.
Return type: tuple
-
forward_single
(x)[source]¶ Forward feature of a single scale level.
Parameters: x (Tensor) – Features of a single scale level. Returns:
- cls_score (Tensor): Cls scores for a single scale level, the channels number is num_anchors * num_classes.
- bbox_pred (Tensor): Box energies / deltas for a single scale level, the channels number is num_anchors * 4.
Return type: tuple
-
ga_loc_targets
(gt_bboxes_list, featmap_sizes)[source]¶ Compute location targets for guided anchoring.
Each feature map is divided into positive, negative and ignore regions.
- positive regions: target 1, weight 1
- ignore regions: target 0, weight 0
- negative regions: target 0, weight 0.1
Parameters: - gt_bboxes_list (list[Tensor]) – Gt bboxes of each image.
- featmap_sizes (list[tuple]) – Multi level sizes of each feature maps.
Returns: tuple
-
ga_shape_targets
(approx_list, inside_flag_list, square_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, unmap_outputs=True)[source]¶ Compute guided anchoring targets.
Parameters: - approx_list (list[list]) – Multi level approxs of each image.
- inside_flag_list (list[list]) – Multi level inside flags of each image.
- square_list (list[list]) – Multi level squares of each image.
- gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.
- img_metas (list[dict]) – Meta info of each image.
- gt_bboxes_ignore_list (list[Tensor]) – ignore list of gt bboxes.
- unmap_outputs (bool) – unmap outputs or not.
Returns: tuple
-
get_anchors
(featmap_sizes, shape_preds, loc_preds, img_metas, use_loc_filter=False, device='cuda')[source]¶ Get squares according to feature map sizes and guided anchors.
Parameters: - featmap_sizes (list[tuple]) – Multi-level feature map sizes.
- shape_preds (list[tensor]) – Multi-level shape predictions.
- loc_preds (list[tensor]) – Multi-level location predictions.
- img_metas (list[dict]) – Image meta info.
- use_loc_filter (bool) – Use loc filter or not.
- device (torch.device | str) – device for returned tensors
Returns: Square approxs of each image, guided anchors of each image, and loc masks of each image.
Return type: tuple
-
get_bboxes
(cls_scores, bbox_preds, shape_preds, loc_preds, img_metas, cfg=None, rescale=False)[source]¶ Transform network output for a batch into bbox predictions.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- cfg (mmcv.Config | None) – Test / postprocessing configuration, if None, test_cfg would be used
- rescale (bool) – If True, return boxes in original image space. Default: False.
Returns: Each item in result_list is a 2-tuple. The first item is an (n, 5) tensor, where the first 4 columns are bounding box positions (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the predicted class label of the corresponding box.
Return type: list[tuple[Tensor, Tensor]]
-
get_sampled_approxs
(featmap_sizes, img_metas, device='cuda')[source]¶ Get sampled approxs and inside flags according to feature map sizes.
Parameters: - featmap_sizes (list[tuple]) – Multi-level feature map sizes.
- img_metas (list[dict]) – Image meta info.
- device (torch.device | str) – device for returned tensors
Returns: approxes of each image, inside flags of each image
Return type: tuple
-
loss
(cls_scores, bbox_preds, shape_preds, loc_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute losses of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss. Default: None
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
class
mmdet.models.dense_heads.
FeatureAdaption
(in_channels, out_channels, kernel_size=3, deform_groups=4)[source]¶ Feature Adaption Module.
The Feature Adaption Module is implemented based on DCN v1. It uses the anchor shape prediction rather than the feature map to predict the offsets of the deform conv layer.
Parameters: - in_channels (int) – Number of channels in the input feature map.
- out_channels (int) – Number of channels in the output feature map.
- kernel_size (int) – Deformable conv kernel size.
- deform_groups (int) – Deformable conv group size.
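A minimal sketch with illustrative shapes; note that the module wraps a deformable conv, so running it may require an mmcv build with deform-conv ops (typically CUDA).
>>> import torch
>>> from mmdet.models.dense_heads import FeatureAdaption
>>> self = FeatureAdaption(in_channels=256, out_channels=256)
>>> x = torch.rand(1, 256, 32, 32)           # input feature map
>>> anchor_shape = torch.rand(1, 2, 32, 32)  # predicted (w, h) per location
>>> out = self(x, anchor_shape)              # deform-conv offsets come from the shapes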
-
class
mmdet.models.dense_heads.
RPNHead
(in_channels, **kwargs)[source]¶ RPN head.
Parameters: in_channels (int) – Number of channels in the input feature map. -
loss
(cls_scores, bbox_preds, gt_bboxes, img_metas, gt_bboxes_ignore=None)[source]¶ Compute losses of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
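A minimal sketch of running the head on multi-level features (the channel count and feature sizes are illustrative assumptions):
>>> import torch
>>> from mmdet.models.dense_heads import RPNHead
>>> self = RPNHead(in_channels=1)
>>> feats = [torch.rand(1, 1, s, s) for s in [32, 16, 8]]
>>> cls_scores, bbox_preds = self(feats)
>>> # one objectness map and one delta map per level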
-
-
class
mmdet.models.dense_heads.
GARPNHead
(in_channels, **kwargs)[source]¶ Guided-Anchor-based RPN head.
-
loss
(cls_scores, bbox_preds, shape_preds, loc_preds, gt_bboxes, img_metas, gt_bboxes_ignore=None)[source]¶ Compute losses of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss. Default: None
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
-
class
mmdet.models.dense_heads.
RetinaHead
(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, anchor_generator={'octave_base_scale': 4, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [8, 16, 32, 64, 128], 'type': 'AnchorGenerator'}, **kwargs)[source]¶ An anchor-based head used in RetinaNet.
The head contains two subnetworks. The first classifies anchor boxes and the second regresses deltas for the anchors.
Example
>>> import torch >>> self = RetinaHead(11, 7) >>> x = torch.rand(1, 7, 32, 32) >>> cls_score, bbox_pred = self.forward_single(x) >>> # Each anchor predicts a score for each class except background >>> cls_per_anchor = cls_score.shape[1] / self.num_anchors >>> box_per_anchor = bbox_pred.shape[1] / self.num_anchors >>> assert cls_per_anchor == (self.num_classes) >>> assert box_per_anchor == 4
-
forward_single
(x)[source]¶ Forward feature of a single scale level.
Parameters: x (Tensor) – Features of a single scale level. Returns:
- cls_score (Tensor): Cls scores for a single scale level, the channels number is num_anchors * num_classes.
- bbox_pred (Tensor): Box energies / deltas for a single scale level, the channels number is num_anchors * 4.
Return type: tuple
-
-
class
mmdet.models.dense_heads.
RetinaSepBNHead
(num_classes, num_ins, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, **kwargs)[source]¶ RetinaHead with separate BN.
In RetinaHead, conv/norm layers are shared across different FPN levels, while in RetinaSepBNHead, conv layers are shared across different FPN levels, but BN layers are separated.
-
forward
(feats)[source]¶ Forward features from the upstream network.
Parameters: feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor. Returns: Usually a tuple of classification scores and bbox predictions.
- cls_scores (list[Tensor]): Classification scores for all scale levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for all scale levels, each is a 4D-tensor, the channels number is num_anchors * 4.
Return type: tuple
-
-
class
mmdet.models.dense_heads.
GARetinaHead
(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, **kwargs)[source]¶ Guided-Anchor-based RetinaNet head.
-
class
mmdet.models.dense_heads.
SSDHead
(num_classes=80, in_channels=(512, 1024, 512, 256, 256, 256), anchor_generator={'basesize_ratio_range': (0.1, 0.9), 'input_size': 300, 'ratios': ([2], [2, 3], [2, 3], [2, 3], [2], [2]), 'scale_major': False, 'strides': [8, 16, 32, 64, 100, 300], 'type': 'SSDAnchorGenerator'}, background_label=None, bbox_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'type': 'DeltaXYWHBBoxCoder'}, reg_decoded_bbox=False, train_cfg=None, test_cfg=None)[source]¶ SSD head used in https://arxiv.org/abs/1512.02325.
Parameters: - num_classes (int) – Number of categories excluding the background category.
- in_channels (int) – Number of channels in the input feature map.
- anchor_generator (dict) – Config dict for anchor generator
- background_label (int | None) – Label ID of background, set as 0 for RPN and num_classes for other heads. It will automatically set as num_classes if None is given.
- bbox_coder (dict) – Config of bounding box coder.
- reg_decoded_bbox (bool) – If true, the regression loss would be applied on decoded bounding boxes. Default: False
- train_cfg (dict) – Training config of anchor head.
- test_cfg (dict) – Testing config of anchor head.
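A minimal sketch using the default SSD300 settings; the spatial sizes follow the classic SSD300 feature grids and are illustrative assumptions.
>>> import torch
>>> from mmdet.models.dense_heads import SSDHead
>>> self = SSDHead()
>>> sizes = [38, 19, 10, 5, 3, 1]
>>> feats = [torch.rand(1, c, s, s) for c, s in zip(self.in_channels, sizes)]
>>> cls_scores, bbox_preds = self(feats)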
-
forward
(feats)[source]¶ Forward features from the upstream network.
Parameters: feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor. Returns:
- cls_scores (list[Tensor]): Classification scores for all scale levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for all scale levels, each is a 4D-tensor, the channels number is num_anchors * 4.
Return type: tuple
-
loss
(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute losses of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
- gt_bboxes (list[Tensor]) – each item are the truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
loss_single
(cls_score, bbox_pred, anchor, labels, label_weights, bbox_targets, bbox_weights, num_total_samples)[source]¶ Compute loss of a single image.
Parameters: - cls_score (Tensor) – Box scores for each image, has shape (num_total_anchors, num_classes).
- bbox_pred (Tensor) – Box energies / deltas for each image with shape (num_total_anchors, 4).
- anchors (Tensor) – Box reference for each scale level with shape (num_total_anchors, 4).
- labels (Tensor) – Labels of each anchors with shape (num_total_anchors,).
- label_weights (Tensor) – Label weights of each anchor with shape (num_total_anchors,)
- bbox_targets (Tensor) – BBox regression targets of each anchor with shape (num_total_anchors, 4).
- bbox_weights (Tensor) – BBox regression loss weights of each anchor with shape (num_total_anchors, 4).
- num_total_samples (int) – If sampling, num total samples equal to the number of total anchors; Otherwise, it is the number of positive anchors.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
class
mmdet.models.dense_heads.
FCOSHead
(num_classes, in_channels, regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512), (512, 100000000.0)), center_sampling=False, center_sample_radius=1.5, norm_on_bbox=False, centerness_on_reg=False, loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox={'loss_weight': 1.0, 'type': 'IoULoss'}, loss_centerness={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, norm_cfg={'num_groups': 32, 'requires_grad': True, 'type': 'GN'}, **kwargs)[source]¶ Anchor-free head used in FCOS.
The FCOS head does not use anchor boxes. Instead, bounding boxes are predicted at each pixel and a centerness measure is used to suppress low-quality predictions. Here norm_on_bbox, centerness_on_reg and dcn_on_last_conv are training tricks used in the official repo, which can bring remarkable mAP gains of up to 4.9. Please see https://github.com/tianzhi0549/FCOS for more detail.
Parameters: - num_classes (int) – Number of categories excluding the background category.
- in_channels (int) – Number of channels in the input feature map.
- strides (list[int] | list[tuple[int, int]]) – Strides of points in multiple feature levels. Default: (4, 8, 16, 32, 64).
- regress_ranges (tuple[tuple[int, int]]) – Regress range of multiple level points.
- center_sampling (bool) – If true, use center sampling. Default: False.
- center_sample_radius (float) – Radius of center sampling. Default: 1.5.
- norm_on_bbox (bool) – If true, normalize the regression targets with FPN strides. Default: False.
- centerness_on_reg (bool) – If true, position centerness on the regress branch. Please refer to https://github.com/tianzhi0549/FCOS/issues/89#issuecomment-516877042. Default: False.
- conv_bias (bool | str) – If specified as auto, it will be decided by the norm_cfg. Bias of conv will be set as True if norm_cfg is None, otherwise False. Default: “auto”.
- loss_cls (dict) – Config of classification loss.
- loss_bbox (dict) – Config of localization loss.
- loss_centerness (dict) – Config of centerness loss.
- norm_cfg (dict) – dictionary to construct and config norm layer. Default: norm_cfg=dict(type=’GN’, num_groups=32, requires_grad=True).
Example
>>> self = FCOSHead(11, 7)
>>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]]
>>> cls_score, bbox_pred, centerness = self.forward(feats)
>>> assert len(cls_score) == len(self.scales)
-
centerness_target
(pos_bbox_targets)[source]¶ Compute centerness targets.
Parameters: pos_bbox_targets (Tensor) – BBox targets of positive bboxes in shape (num_pos, 4) Returns: Centerness target. Return type: Tensor
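For intuition, the centerness definition from the FCOS paper written out directly, with illustrative distances (l, t, r, b) to the four box sides:
>>> import torch
>>> l, t, r, b = [torch.tensor([v]) for v in (3.0, 4.0, 5.0, 2.0)]
>>> centerness = torch.sqrt(
...     (torch.min(l, r) / torch.max(l, r)) *
...     (torch.min(t, b) / torch.max(t, b)))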
-
forward
(feats)[source]¶ Forward features from the upstream network.
Parameters: feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor. Returns:
- cls_scores (list[Tensor]): Box scores for each scale level, each is a 4D-tensor, the channel number is num_points * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for each scale level, each is a 4D-tensor, the channel number is num_points * 4.
- centernesses (list[Tensor]): Centerness for each scale level, each is a 4D-tensor, the channel number is num_points * 1.
Return type: tuple
-
forward_single
(x, scale, stride)[source]¶ Forward features of a single scale level.
Parameters: - x (Tensor) – FPN feature maps of the specified stride.
- scale (mmcv.cnn.Scale) – Learnable scale module to resize the bbox prediction.
- stride (int) – The corresponding stride for feature maps, only used to normalize the bbox prediction when self.norm_on_bbox is True.
Returns: scores for each class, bbox predictions and centerness predictions of input feature maps.
Return type: tuple
-
get_bboxes
(cls_scores, bbox_preds, centernesses, img_metas, cfg=None, rescale=None)[source]¶ Transform network output for a batch into bbox predictions.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_points * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_points * 4, H, W)
- centernesses (list[Tensor]) – Centerness for each scale level with shape (N, num_points * 1, H, W)
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used
- rescale (bool) – If True, return boxes in original image space
Returns: Each item in result_list is 2-tuple. The first item is an (n, 5) tensor, where the first 4 columns are bounding box positions (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the predicted class label of the corresponding box.
Return type: list[tuple[Tensor, Tensor]]
-
get_targets
(points, gt_bboxes_list, gt_labels_list)[source]¶ Compute regression, classification and centerness targets for points in multiple images.
Parameters: - points (list[Tensor]) – Points of each fpn level, each has shape (num_points, 2).
- gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image, each has shape (num_gt, 4).
- gt_labels_list (list[Tensor]) – Ground truth labels of each box, each has shape (num_gt,).
Returns:
- concat_lvl_labels (list[Tensor]): Labels of each level.
- concat_lvl_bbox_targets (list[Tensor]): BBox targets of each level.
Return type: tuple
-
loss
(cls_scores, bbox_preds, centernesses, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute loss of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level, each is a 4D-tensor, the channel number is num_points * num_classes.
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level, each is a 4D-tensor, the channel number is num_points * 4.
- centernesses (list[Tensor]) – Centerness for each scale level, each is a 4D-tensor, the channel number is num_points * 1.
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
class
mmdet.models.dense_heads.
RepPointsHead
(num_classes, in_channels, point_feat_channels=256, num_points=9, gradient_mul=0.1, point_strides=[8, 16, 32, 64, 128], point_base_scale=4, loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox_init={'beta': 0.1111111111111111, 'loss_weight': 0.5, 'type': 'SmoothL1Loss'}, loss_bbox_refine={'beta': 0.1111111111111111, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'}, use_grid_points=False, center_init=True, transform_method='moment', moment_mul=0.01, **kwargs)[source]¶ RepPoint head.
Parameters: - point_feat_channels (int) – Number of channels of points features.
- gradient_mul (float) – The multiplier to gradients from points refinement and recognition.
- point_strides (Iterable) – points strides.
- point_base_scale (int) – bbox scale for assigning labels.
- loss_cls (dict) – Config of classification loss.
- loss_bbox_init (dict) – Config of initial points loss.
- loss_bbox_refine (dict) – Config of points loss in refinement.
- use_grid_points (bool) – If True, the bounding box representation is used and the RepPoints are represented as grid points on the bounding box.
- center_init (bool) – Whether to use center point assignment.
- transform_method (str) – The methods to transform RepPoints to bbox.
-
centers_to_bboxes
(point_list)[source]¶ Get bboxes according to center points.
Only used in MaxIoUAssigner.
-
forward
(feats)[source]¶ Forward features from the upstream network.
Parameters: feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor. Returns: Usually contains classification scores and bbox predictions.
- cls_scores (list[Tensor]): Box scores for each scale level, each is a 4D-tensor, the channel number is num_points * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for each scale level, each is a 4D-tensor, the channel number is num_points * 4.
Return type: tuple
-
gen_grid_from_reg
(reg, previous_boxes)[source]¶ Based on the previous bboxes and regression values, compute the regressed bboxes and generate the grids on the bboxes.
Parameters: - reg – the regression value to previous bboxes.
- previous_boxes – previous bboxes.
Returns: the grids generated on the regressed bboxes.
-
get_bboxes
(cls_scores, pts_preds_init, pts_preds_refine, img_metas, cfg=None, rescale=False, nms=True)[source]¶ Transform network output for a batch into bbox predictions.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_points * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_points * 4, H, W)
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used
- rescale (bool) – If True, return boxes in original image space
-
get_points
(featmap_sizes, img_metas)[source]¶ Get points according to feature map sizes.
Parameters: - featmap_sizes (list[tuple]) – Multi-level feature map sizes.
- img_metas (list[dict]) – Image meta info.
Returns: points of each image, valid flags of each image
Return type: tuple
-
get_targets
(proposals_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, stage='init', label_channels=1, unmap_outputs=True)[source]¶ Compute corresponding GT box and classification targets for proposals.
Parameters: - proposals_list (list[list]) – Multi level points/bboxes of each image.
- valid_flag_list (list[list]) – Multi level valid flags of each image.
- gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.
- img_metas (list[dict]) – Meta info of each image.
- gt_bboxes_ignore_list (list[Tensor]) – Ground truth bboxes to be ignored.
- gt_labels_list (list[Tensor]) – Ground truth labels of each box.
- stage (str) – 'init' or 'refine'. Generate targets for the init stage or the refine stage.
- label_channels (int) – Channel of label.
- unmap_outputs (bool) – Whether to map outputs back to the original set of anchors.
Returns: - labels_list (list[Tensor]): Labels of each level.
- label_weights_list (list[Tensor]): Label weights of each level.
- bbox_gt_list (list[Tensor]): Ground truth bbox of each level.
- proposal_list (list[Tensor]): Proposals (points/bboxes) of each level.
- proposal_weights_list (list[Tensor]): Proposal weights of each level.
- num_total_pos (int): Number of positive samples in all images.
- num_total_neg (int): Number of negative samples in all images.
Return type: tuple
-
loss
(cls_scores, pts_preds_init, pts_preds_refine, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute loss of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level, each is a 4D-tensor, the channel number is num_points * num_classes.
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level, each is a 4D-tensor, the channel number is num_points * 4.
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
-
points2bbox
(pts, y_first=True)[source]¶ Convert a point set into a bounding box.
Parameters: - pts – the input point sets (fields); each point set (fields) is represented as 2n scalars.
- y_first – if y_first=True, the point set is represented as [y1, x1, y2, x2 … yn, xn], otherwise the point set is represented as [x1, y1, x2, y2 … xn, yn].
Returns: each point set converted to a bbox [x1, y1, x2, y2].
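A minimal sketch (shapes are illustrative; the 'minmax' transform is chosen for clarity over the 'moment' default):
>>> import torch
>>> from mmdet.models.dense_heads import RepPointsHead
>>> self = RepPointsHead(num_classes=4, in_channels=1, transform_method='minmax')
>>> pts = torch.rand(2, 18, 8, 8)  # 9 points x 2 coords per location
>>> bboxes = self.points2bbox(pts)
>>> assert bboxes.shape[1] == 4    # one (x1, y1, x2, y2) box per location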
-
class
mmdet.models.dense_heads.
FoveaHead
(num_classes, in_channels, base_edge_list=(16, 32, 64, 128, 256), scale_ranges=((8, 32), (16, 64), (32, 128), (64, 256), (128, 512)), sigma=0.4, with_deform=False, deform_groups=4, **kwargs)[source]¶ FoveaBox: Beyond Anchor-based Object Detector https://arxiv.org/abs/1904.03797
-
forward_single
(x)[source]¶ Forward features of a single scale level.
Parameters: x (Tensor) – FPN feature maps of the specified stride. Returns: Scores for each class, bbox predictions, and features after the classification and regression conv layers; some models (e.g., FCOS) need these features.
Return type: tuple
-
get_bboxes
(cls_scores, bbox_preds, img_metas, cfg=None, rescale=None)[source]¶ Transform network output for a batch into bbox predictions.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_points * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_points * 4, H, W)
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used
- rescale (bool) – If True, return boxes in original image space
-
get_targets
(gt_bbox_list, gt_label_list, featmap_sizes, points)[source]¶ Compute regression, classification and centerness targets for points in multiple images.
Parameters: - points (list[Tensor]) – Points of each fpn level, each has shape (num_points, 2).
- gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image, each has shape (num_gt, 4).
- gt_labels_list (list[Tensor]) – Ground truth labels of each box, each has shape (num_gt,).
-
loss
(cls_scores, bbox_preds, gt_bbox_list, gt_label_list, img_metas, gt_bboxes_ignore=None)[source]¶ Compute loss of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level, each is a 4D-tensor, the channel number is num_points * num_classes.
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level, each is a 4D-tensor, the channel number is num_points * 4.
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
-
-
class
mmdet.models.dense_heads.
FreeAnchorRetinaHead
(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, pre_anchor_topk=50, bbox_thr=0.6, gamma=2.0, alpha=0.5, **kwargs)[source]¶ FreeAnchor RetinaHead used in https://arxiv.org/abs/1909.02466.
Parameters: - num_classes (int) – Number of categories excluding the background category.
- in_channels (int) – Number of channels in the input feature map.
- stacked_convs (int) – Number of conv layers in cls and reg tower. Default: 4.
- conv_cfg (dict) – dictionary to construct and config conv layer. Default: None.
- norm_cfg (dict) – dictionary to construct and config norm layer. Default: None.
- pre_anchor_topk (int) – Number of boxes taken in each bag.
- bbox_thr (float) – The threshold of the saturated linear function. It is usually the same as the IoU threshold used in NMS.
- gamma (float) – Gamma parameter in focal loss.
- alpha (float) – Alpha parameter in focal loss.
-
loss
(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute losses of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
- gt_bboxes (list[Tensor]) – each item are the truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
negative_bag_loss
(cls_prob, box_prob)[source]¶ Compute negative bag loss.
\(FL((1 - P_{a_{j} \in A_{+}}) * (1 - P_{j}^{bg}))\).
\(P_{a_{j} \in A_{+}}\): Box_probability of matched samples.
\(P_{j}^{bg}\): Classification probability of negative samples.
Parameters: - cls_prob (Tensor) – Classification probability, in shape (num_img, num_anchors, num_classes).
- box_prob (Tensor) – Box probability, in shape (num_img, num_anchors, num_classes).
Returns: Negative bag loss in shape (num_img, num_anchors, num_classes).
Return type: Tensor
-
positive_bag_loss
(matched_cls_prob, matched_box_prob)[source]¶ Compute positive bag loss.
\(-log( Mean-max(P_{ij}^{cls} * P_{ij}^{loc}) )\).
\(P_{ij}^{cls}\): matched_cls_prob, classification probability of matched samples.
\(P_{ij}^{loc}\): matched_box_prob, box probability of matched samples.
Parameters: - matched_cls_prob (Tensor) – Classification probabilty of matched samples in shape (num_gt, pre_anchor_topk).
- matched_box_prob (Tensor) – BBox probability of matched samples, in shape (num_gt, pre_anchor_topk).
Returns: Positive bag loss in shape (num_gt,).
Return type: Tensor
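For intuition, a standalone sketch of the Mean-max function from the FreeAnchor paper that positive_bag_loss is built around (not the exact mmdet code):
>>> import torch
>>> x = torch.rand(3, 50)             # (num_gt, pre_anchor_topk) matched probabilities
>>> w = 1 / (1 - x).clamp(min=1e-12)  # weights grow as a probability approaches 1
>>> w = w / w.sum(dim=1, keepdim=True)
>>> mean_max = (w * x).sum(dim=1)     # ~mean for low probabilities, ~max for saturated ones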
-
class
mmdet.models.dense_heads.
ATSSHead
(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg={'num_groups': 32, 'requires_grad': True, 'type': 'GN'}, loss_centerness={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, **kwargs)[source]¶ Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection.
The ATSS head structure is similar to FCOS; however, ATSS uses anchor boxes and assigns labels by Adaptive Training Sample Selection instead of max-IoU.
https://arxiv.org/abs/1912.02424
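A hypothetical config snippet in the style of the ATSS detector configs (the values are illustrative assumptions):
>>> bbox_head = dict(
...     type='ATSSHead',
...     num_classes=80,
...     in_channels=256,
...     stacked_convs=4,
...     anchor_generator=dict(
...         type='AnchorGenerator',
...         ratios=[1.0],
...         octave_base_scale=8,
...         scales_per_octave=1,
...         strides=[8, 16, 32, 64, 128]))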
-
forward
(feats)[source]¶ Forward features from the upstream network.
Parameters: feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor. Returns: Usually a tuple of classification scores and bbox predictions.
- cls_scores (list[Tensor]): Classification scores for all scale levels, each is a 4D-tensor, the channels number is num_anchors * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for all scale levels, each is a 4D-tensor, the channels number is num_anchors * 4.
Return type: tuple
-
forward_single
(x, scale)[source]¶ Forward feature of a single scale level.
Parameters: - x (Tensor) – Features of a single scale level.
- scale (mmcv.cnn.Scale) – Learnable scale module to resize the bbox prediction.
Returns:
- cls_score (Tensor): Cls scores for a single scale level, the channels number is num_anchors * num_classes.
- bbox_pred (Tensor): Box energies / deltas for a single scale level, the channels number is num_anchors * 4.
- centerness (Tensor): Centerness for a single scale level, the channels number is num_anchors * 1.
Return type: tuple
-
get_bboxes
(cls_scores, bbox_preds, centernesses, img_metas, cfg=None, rescale=False)[source]¶ Transform network output for a batch into bbox predictions.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
- centernesses (list[Tensor]) – Centerness for each scale level with shape (N, num_anchors * 1, H, W)
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used. Default: None.
- rescale (bool) – If True, return boxes in original image space. Default: False.
Returns: Each item in result_list is a 2-tuple. The first item is an (n, 5) tensor, where the first 4 columns are bounding box positions (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the predicted class label of the corresponding box.
Return type: list[tuple[Tensor, Tensor]]
-
get_targets
(anchor_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, label_channels=1, unmap_outputs=True)[source]¶ Get targets for ATSS head.
This method is almost the same as AnchorHead.get_targets(). Besides returning the targets as the parent method does, it also returns the anchors as the first element of the returned tuple.
-
loss
(cls_scores, bbox_preds, centernesses, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute losses of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
- centernesses (list[Tensor]) – Centerness for each scale level with shape (N, num_anchors * 1, H, W)
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (list[Tensor] | None) – specify which bounding boxes can be ignored when computing the loss.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
loss_single
(anchors, cls_score, bbox_pred, centerness, labels, label_weights, bbox_targets, num_total_samples)[source]¶ Compute loss of a single scale level.
Parameters: - cls_score (Tensor) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W).
- bbox_pred (Tensor) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W).
- anchors (Tensor) – Box reference for each scale level with shape (N, num_total_anchors, 4).
- labels (Tensor) – Labels of each anchors with shape (N, num_total_anchors).
- label_weights (Tensor) – Label weights of each anchor with shape (N, num_total_anchors)
- bbox_targets (Tensor) – BBox regression targets of each anchor with shape (N, num_total_anchors, 4).
- num_total_samples (int) – Number of positive samples that is reduced over all GPUs.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
-
class
mmdet.models.dense_heads.
FSAFHead
(*args, score_threshold=None, **kwargs)[source]¶ Anchor-free head used in FSAF.
The head contains two subnetworks. The first classifies anchor boxes and the second regresses deltas for the anchors (num_anchors is 1 for anchor-free methods)
Parameters: - *args – Same as its base class in
RetinaHead
- score_threshold (float, optional) – The score_threshold to calculate positive recall. If given, prediction scores lower than this value are counted as incorrect predictions. Defaults to None.
- **kwargs – Same as its base class in
RetinaHead
Example
>>> import torch
>>> self = FSAFHead(11, 7)
>>> x = torch.rand(1, 7, 32, 32)
>>> cls_score, bbox_pred = self.forward_single(x)
>>> # Each anchor predicts a score for each class except background
>>> cls_per_anchor = cls_score.shape[1] / self.num_anchors
>>> box_per_anchor = bbox_pred.shape[1] / self.num_anchors
>>> assert cls_per_anchor == self.num_classes
>>> assert box_per_anchor == 4
-
calculate_pos_recall
(cls_scores, labels_list, pos_inds)[source]¶ Calculate positive recall with score threshold.
Parameters: - cls_scores (list[Tensor]) – Classification scores at all fpn levels. Each tensor is in shape (N, num_classes * num_anchors, H, W)
- labels_list (list[Tensor]) – The label that each anchor is assigned to. Shape (N * H * W * num_anchors, )
- pos_inds (list[Tensor]) – List of bool tensors indicating whether the anchor is assigned to a positive label. Shape (N * H * W * num_anchors, )
Returns: A single float number indicating the positive recall.
Return type: Tensor
-
collect_loss_level_single
(cls_loss, reg_loss, assigned_gt_inds, labels_seq)[source]¶ Get the average loss in each FPN level w.r.t. each gt label.
Parameters: - cls_loss (Tensor) – Classification loss of each feature map pixel, shape (num_anchor, num_class)
- reg_loss (Tensor) – Regression loss of each feature map pixel, shape (num_anchor, 4)
- assigned_gt_inds (Tensor) – It indicates which gt the prior is assigned to (0-based; -1 means no assignment). Shape (num_anchor,).
- labels_seq (Tensor) – The rank of labels. Shape (num_gt,).
Returns: Average loss of each gt in this level, with shape (num_gt,).
Return type: Tensor
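A minimal sketch of this per-gt averaging under the shapes listed above; the helper name and the large placeholder loss for unassigned gts are assumptions, not the exact implementation:
import torch

def average_loss_per_gt(cls_loss, reg_loss, assigned_gt_inds, labels_seq,
                        large_default=1000000.0):
    # Sum per-anchor cls + reg loss, then average over the anchors assigned
    # to each gt in labels_seq; a gt with no anchor at this level keeps a
    # large placeholder loss so this level is never chosen as its best.
    loss = cls_loss.sum(dim=-1) + reg_loss.sum(dim=-1)   # (num_anchor,)
    out = loss.new_full((labels_seq.numel(),), large_default)
    for i, gt in enumerate(labels_seq):
        match = assigned_gt_inds == gt
        if match.any():
            out[i] = loss[match].mean()
    return out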
-
forward_single
(x)[source]¶ Forward feature map of a single scale level.
Parameters: x (Tensor) – Feature map of a single scale level. Returns: - cls_score (Tensor): Box scores for each scale level with shape (N, num_points * num_classes, H, W).
- bbox_pred (Tensor): Box energies / deltas for each scale level with shape (N, num_points * 4, H, W).
Return type: tuple (Tensor)
-
loss
(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute loss of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_points * num_classes, H, W).
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_points * 4, H, W).
- gt_bboxes (list[Tensor]) – Each item is the ground truth boxes for one image in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
reweight_loss_single
(cls_loss, reg_loss, assigned_gt_inds, labels, level, min_levels)[source]¶ Reweight loss values at each level.
Reassign loss values at each level by masking those where the pre-calculated loss is too large. Then return the reduced losses.
Parameters: - cls_loss (Tensor) – Element-wise classification loss. Shape: (num_anchors, num_classes)
- reg_loss (Tensor) – Element-wise regression loss. Shape: (num_anchors, 4)
- assigned_gt_inds (Tensor) – The gt indices that each anchor bbox is assigned to. -1 denotes a negative anchor, otherwise it is the gt index (0-based). Shape: (num_anchors, ),
- labels (Tensor) – Label assigned to anchors. Shape: (num_anchors, ).
- level (int) – The current level index in the pyramid (0-4 for RetinaNet)
- min_levels (Tensor) – The best-matching level for each gt. Shape: (num_gts, ),
Returns: - cls_loss: Reduced corrected classification loss. Scalar.
- reg_loss: Reduced corrected regression loss. Scalar.
- pos_flags (Tensor): Corrected bool tensor indicating the final positive anchors. Shape: (num_anchors, ).
Return type: tuple
-
class
mmdet.models.dense_heads.
NASFCOSHead
(num_classes, in_channels, regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512), (512, 100000000.0)), center_sampling=False, center_sample_radius=1.5, norm_on_bbox=False, centerness_on_reg=False, loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox={'loss_weight': 1.0, 'type': 'IoULoss'}, loss_centerness={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, norm_cfg={'num_groups': 32, 'requires_grad': True, 'type': 'GN'}, **kwargs)[source]¶ Anchor-free head used in NASFCOS.
It is quite similar to the FCOS head, except for the searched structure of the classification and bbox regression branches, where a structure of “dconv3x3, conv3x3, dconv3x3, conv1x1” is utilized instead.
-
class
mmdet.models.dense_heads.
PISARetinaHead
(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, anchor_generator={'octave_base_scale': 4, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [8, 16, 32, 64, 128], 'type': 'AnchorGenerator'}, **kwargs)[source]¶ PISA Retinanet Head.
The head has the same structure as the RetinaNet head, but differs in two aspects:
1. Importance-based Sample Reweighting (ISR-P) is applied to change the positive loss weights.
2. Classification-aware regression loss (CARL) is adopted as a third loss.
-
loss
(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute losses of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
- gt_bboxes (list[Tensor]) – Ground truth bboxes of each image with shape (num_obj, 4).
- gt_labels (list[Tensor]) – Ground truth labels of each image with shape (num_obj,).
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (list[Tensor]) – Ignored gt bboxes of each image. Default: None.
Returns: Loss dict comprising the classification loss, regression loss and CARL loss.
Return type: dict
-
class
mmdet.models.dense_heads.
PISASSDHead
(num_classes=80, in_channels=(512, 1024, 512, 256, 256, 256), anchor_generator={'basesize_ratio_range': (0.1, 0.9), 'input_size': 300, 'ratios': ([2], [2, 3], [2, 3], [2, 3], [2], [2]), 'scale_major': False, 'strides': [8, 16, 32, 64, 100, 300], 'type': 'SSDAnchorGenerator'}, background_label=None, bbox_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'type': 'DeltaXYWHBBoxCoder'}, reg_decoded_bbox=False, train_cfg=None, test_cfg=None)[source]¶ -
loss
(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute losses of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
- gt_bboxes (list[Tensor]) – Ground truth bboxes of each image with shape (num_obj, 4).
- gt_labels (list[Tensor]) – Ground truth labels of each image with shape (num_obj,).
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (list[Tensor]) – Ignored gt bboxes of each image. Default: None.
Returns: Loss dict comprising the classification loss, regression loss and CARL loss.
Return type: dict
-
-
class
mmdet.models.dense_heads.
GFLHead
(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg={'num_groups': 32, 'requires_grad': True, 'type': 'GN'}, loss_dfl={'loss_weight': 0.25, 'type': 'DistributionFocalLoss'}, reg_max=16, **kwargs)[source]¶ Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection.
The GFL head structure is similar to ATSS; however, GFL uses 1) a joint representation for classification and localization quality, and 2) a flexible General distribution for bounding box locations, which are supervised by Quality Focal Loss (QFL) and Distribution Focal Loss (DFL), respectively.
https://arxiv.org/abs/2006.04388
Parameters: - num_classes (int) – Number of categories excluding the background category.
- in_channels (int) – Number of channels in the input feature map.
- stacked_convs (int) – Number of conv layers in cls and reg tower. Default: 4.
- conv_cfg (dict) – dictionary to construct and config conv layer. Default: None.
- norm_cfg (dict) – dictionary to construct and config norm layer. Default: dict(type=’GN’, num_groups=32, requires_grad=True).
- loss_qfl (dict) – Config of Quality Focal Loss (QFL).
- reg_max (int) – Max value of the integral set {0, ..., reg_max} in QFL setting. Default: 16.
Example
>>> import torch
>>> self = GFLHead(11, 7)
>>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]]
>>> cls_quality_score, bbox_pred = self.forward(feats)
>>> assert len(cls_quality_score) == len(self.scales)
-
anchor_center
(anchors)[source]¶ Get anchor centers from anchors.
Parameters: anchors (Tensor) – Anchor list with shape (N, 4), “xyxy” format. Returns: Anchor centers with shape (N, 2), “xy” format. Return type: Tensor
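This is simply the midpoint of each “xyxy” box; an equivalent computation:
import torch

def anchor_center(anchors):
    # anchors: (N, 4) in "xyxy" format -> centers: (N, 2) in "xy" format.
    cx = (anchors[:, 0] + anchors[:, 2]) / 2
    cy = (anchors[:, 1] + anchors[:, 3]) / 2
    return torch.stack([cx, cy], dim=-1)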
-
forward
(feats)[source]¶ Forward features from the upstream network.
Parameters: feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor. Returns: Usually a tuple of classification scores and bbox prediction.
- cls_scores (list[Tensor]): Classification and quality (IoU) joint scores for all scale levels, each is a 4D-tensor, the channel number is num_classes.
- bbox_preds (list[Tensor]): Box distribution logits for all scale levels, each is a 4D-tensor, the channel number is 4*(n+1), where n is the max value of the integral set.
Return type: tuple
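The 4*(n+1) channels of bbox_preds hold, for each of the four box sides, a discrete distribution over the integral set; a minimal sketch of reducing such logits to expected distances, in the spirit of GFL’s Integral module (the function name is illustrative):
import torch
import torch.nn.functional as F

def integral_decode(bbox_pred, reg_max=16):
    # bbox_pred: (N, 4 * (reg_max + 1), H, W) distribution logits.
    N, _, H, W = bbox_pred.shape
    logits = bbox_pred.permute(0, 2, 3, 1).reshape(-1, reg_max + 1)
    prob = F.softmax(logits, dim=1)
    # Expectation over the integral set {0, ..., reg_max}.
    project = torch.linspace(0, reg_max, reg_max + 1)
    dist = (prob * project).sum(dim=1)
    return dist.reshape(N, H, W, 4).permute(0, 3, 1, 2)   # (N, 4, H, W)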
-
forward_single
(x, scale)[source]¶ Forward feature of a single scale level.
Parameters: - x (Tensor) – Features of a single scale level.
- scale (mmcv.cnn.Scale) – Learnable scale module to resize the bbox prediction.
Returns: - cls_score (Tensor): Cls and quality joint scores for a single scale level, the channel number is num_classes.
- bbox_pred (Tensor): Box distribution logits for a single scale
level, the channel number is 4*(n+1), n is max value of integral set.
Return type: tuple
-
get_targets
(anchor_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, label_channels=1, unmap_outputs=True)[source]¶ Get targets for GFL head.
This method is almost the same as AnchorHead.get_targets(). Besides returning the targets as the parent method does, it also returns the anchors as the first element of the returned tuple.
-
loss
(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute losses of the head.
Parameters: - cls_scores (list[Tensor]) – Cls and quality scores for each scale level has shape (N, num_classes, H, W).
- bbox_preds (list[Tensor]) – Box distribution logits for each scale level with shape (N, 4*(n+1), H, W), n is max value of integral set.
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (list[Tensor] | None) – specify which bounding boxes can be ignored when computing the loss.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
loss_single
(anchors, cls_score, bbox_pred, labels, label_weights, bbox_targets, stride, num_total_samples)[source]¶ Compute loss of a single scale level.
Parameters: - anchors (Tensor) – Box reference for each scale level with shape (N, num_total_anchors, 4).
- cls_score (Tensor) – Cls and quality joint scores for each scale level has shape (N, num_classes, H, W).
- bbox_pred (Tensor) – Box distribution logits for each scale level with shape (N, 4*(n+1), H, W), n is max value of integral set.
- labels (Tensor) – Labels of each anchor with shape (N, num_total_anchors).
- label_weights (Tensor) – Label weights of each anchor with shape (N, num_total_anchors)
- bbox_targets (Tensor) – BBox regression targets of each anchor with shape (N, num_total_anchors, 4).
- stride (tuple) – Stride in this scale level.
- num_total_samples (int) – Number of positive samples that is reduced over all GPUs.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
class
mmdet.models.dense_heads.
CornerHead
(num_classes, in_channels, num_feat_levels=2, corner_emb_channels=1, train_cfg=None, test_cfg=None, loss_heatmap={'alpha': 2.0, 'gamma': 4.0, 'loss_weight': 1, 'type': 'GaussianFocalLoss'}, loss_embedding={'pull_weight': 0.25, 'push_weight': 0.25, 'type': 'AssociativeEmbeddingLoss'}, loss_offset={'beta': 1.0, 'loss_weight': 1, 'type': 'SmoothL1Loss'})[source]¶ Head of CornerNet: Detecting Objects as Paired Keypoints.
Code is modified from the official github repo.
More details can be found in the paper.
Parameters: - num_classes (int) – Number of categories excluding the background category.
- in_channels (int) – Number of channels in the input feature map.
- num_feat_levels (int) – Levels of feature from the previous module. 2 for HourglassNet-104 and 1 for HourglassNet-52. Because HourglassNet-104 outputs the final feature and intermediate supervision feature and HourglassNet-52 only outputs the final feature. Default: 2.
- corner_emb_channels (int) – Channel of embedding vector. Default: 1.
- train_cfg (dict | None) – Training config. Useless in CornerHead, but we keep this variable for SingleStageDetector. Default: None.
- test_cfg (dict | None) – Testing config of CornerHead. Default: None.
- loss_heatmap (dict | None) – Config of corner heatmap loss. Default: GaussianFocalLoss.
- loss_embedding (dict | None) – Config of corner embedding loss. Default: AssociativeEmbeddingLoss.
- loss_offset (dict | None) – Config of corner offset loss. Default: SmoothL1Loss.
-
decode_heatmap
(tl_heat, br_heat, tl_off, br_off, tl_emb=None, br_emb=None, tl_centripetal_shift=None, br_centripetal_shift=None, img_meta=None, k=100, kernel=3, distance_threshold=0.5, num_dets=1000)[source]¶ Transform outputs for a single batch item into raw bbox predictions.
Parameters: - tl_heat (Tensor) – Top-left corner heatmap for current level with shape (N, num_classes, H, W).
- br_heat (Tensor) – Bottom-right corner heatmap for current level with shape (N, num_classes, H, W).
- tl_off (Tensor) – Top-left corner offset for current level with shape (N, corner_offset_channels, H, W).
- br_off (Tensor) – Bottom-right corner offset for current level with shape (N, corner_offset_channels, H, W).
- tl_emb (Tensor | None) – Top-left corner embedding for current level with shape (N, corner_emb_channels, H, W).
- br_emb (Tensor | None) – Bottom-right corner embedding for current level with shape (N, corner_emb_channels, H, W).
- tl_centripetal_shift (Tensor | None) – Top-left centripetal shift for current level with shape (N, 2, H, W).
- br_centripetal_shift (Tensor | None) – Bottom-right centripetal shift for current level with shape (N, 2, H, W).
- img_meta (dict) – Meta information of current image, e.g., image size, scaling factor, etc.
- k (int) – Get top k corner keypoints from heatmap.
- kernel (int) – Max pooling kernel used to extract local maximum pixels.
- distance_threshold (float) – Distance threshold. Top-left and bottom-right corner keypoints with feature distance less than the threshold will be regarded as keypoints from the same object.
- num_dets (int) – Number of raw boxes before doing NMS.
Returns: Decoded output of CornerHead, containing the following Tensors:
- bboxes (Tensor): Coords of each box.
- scores (Tensor): Scores of each box.
- clses (Tensor): Categories of each box.
Return type: tuple[torch.Tensor]
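A minimal sketch of the CornerNet-style keypoint extraction behind the k and kernel arguments: max pooling keeps only local maxima, then the top k keypoints are gathered (the helper name is illustrative):
import torch
import torch.nn.functional as F

def topk_corners(heat, k=100, kernel=3):
    # heat: (N, num_classes, H, W) corner heatmap.
    pad = (kernel - 1) // 2
    hmax = F.max_pool2d(heat, kernel, stride=1, padding=pad)
    heat = heat * (hmax == heat).float()          # suppress non-maxima
    n, c, h, w = heat.shape
    scores, inds = torch.topk(heat.view(n, -1), k)
    clses = torch.div(inds, h * w, rounding_mode='floor')
    ys = torch.div(inds % (h * w), w, rounding_mode='floor')
    xs = inds % w
    return scores, clses, ys, xs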
-
forward
(feats)[source]¶ Forward features from the upstream network.
Parameters: feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor. Returns: Usually a tuple of corner heatmaps, offset heatmaps and embedding heatmaps. - tl_heats (list[Tensor]): Top-left corner heatmaps for all levels, each is a 4D-tensor, the channels number is num_classes.
- br_heats (list[Tensor]): Bottom-right corner heatmaps for all levels, each is a 4D-tensor, the channels number is num_classes.
- tl_embs (list[Tensor] | list[None]): Top-left embedding heatmaps for all levels, each is a 4D-tensor or None. If not None, the channels number is corner_emb_channels.
- br_embs (list[Tensor] | list[None]): Bottom-right embedding heatmaps for all levels, each is a 4D-tensor or None. If not None, the channels number is corner_emb_channels.
- tl_offs (list[Tensor]): Top-left offset heatmaps for all levels, each is a 4D-tensor. The channels number is corner_offset_channels.
- br_offs (list[Tensor]): Bottom-right offset heatmaps for all levels, each is a 4D-tensor. The channels number is corner_offset_channels.
Return type: tuple
-
forward_single
(x, lvl_ind, return_pool=False)[source]¶ Forward feature of a single level.
Parameters: - x (Tensor) – Feature of a single level.
- lvl_ind (int) – Level index of current feature.
- return_pool (bool) – Return corner pool feature or not.
Returns: A tuple of CornerHead’s output for current feature level. Containing the following Tensors:
- tl_heat (Tensor): Predicted top-left corner heatmap.
- br_heat (Tensor): Predicted bottom-right corner heatmap.
- tl_emb (Tensor | None): Predicted top-left embedding heatmap. None for self.with_corner_emb == False.
- br_emb (Tensor | None): Predicted bottom-right embedding heatmap. None for self.with_corner_emb == False.
- tl_off (Tensor): Predicted top-left offset heatmap.
- br_off (Tensor): Predicted bottom-right offset heatmap.
- tl_pool (Tensor): Top-left corner pool feature. Optional.
- br_pool (Tensor): Bottom-right corner pool feature. Optional.
Return type: tuple[Tensor]
-
get_bboxes
(tl_heats, br_heats, tl_embs, br_embs, tl_offs, br_offs, img_metas, rescale=False, with_nms=True)[source]¶ Transform network output for a batch into bbox predictions.
Parameters: - tl_heats (list[Tensor]) – Top-left corner heatmaps for each level with shape (N, num_classes, H, W).
- br_heats (list[Tensor]) – Bottom-right corner heatmaps for each level with shape (N, num_classes, H, W).
- tl_embs (list[Tensor]) – Top-left corner embeddings for each level with shape (N, corner_emb_channels, H, W).
- br_embs (list[Tensor]) – Bottom-right corner embeddings for each level with shape (N, corner_emb_channels, H, W).
- tl_offs (list[Tensor]) – Top-left corner offsets for each level with shape (N, corner_offset_channels, H, W).
- br_offs (list[Tensor]) – Bottom-right corner offsets for each level with shape (N, corner_offset_channels, H, W).
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- rescale (bool) – If True, return boxes in original image space. Default: False.
- with_nms (bool) – If True, do nms before return boxes. Default: True.
-
get_targets
(gt_bboxes, gt_labels, feat_shape, img_shape, with_corner_emb=False, with_guiding_shift=False, with_centripetal_shift=False)[source]¶ Generate corner targets.
Including corner heatmap, corner offset.
Optional: corner embedding, corner guiding shift, centripetal shift.
For CornerNet, we generate corner heatmap, corner offset and corner embedding from this function.
For CentripetalNet, we generate corner heatmap, corner offset, guiding shift and centripetal shift from this function.
Parameters: - gt_bboxes (list[Tensor]) – Ground truth bboxes of each image, each has shape (num_gt, 4).
- gt_labels (list[Tensor]) – Ground truth labels of each box, each has shape (num_gt,).
- feat_shape (list[int]) – Shape of output feature, [batch, channel, height, width].
- img_shape (list[int]) – Shape of input image, [height, width, channel].
- with_corner_emb (bool) – Generate corner embedding target or not. Default: False.
- with_guiding_shift (bool) – Generate guiding shift target or not. Default: False.
- with_centripetal_shift (bool) – Generate centripetal shift target or not. Default: False.
Returns: Ground truth of corner heatmap, corner offset, corner embedding, guiding shift and centripetal shift. Containing the following keys:
- topleft_heatmap (Tensor): Ground truth top-left corner heatmap.
- bottomright_heatmap (Tensor): Ground truth bottom-right corner heatmap.
- topleft_offset (Tensor): Ground truth top-left corner offset.
- bottomright_offset (Tensor): Ground truth bottom-right corner offset.
- corner_embedding (list[list[list[int]]]): Ground truth corner embedding. Optional.
- topleft_guiding_shift (Tensor): Ground truth top-left corner guiding shift. Optional.
- bottomright_guiding_shift (Tensor): Ground truth bottom-right corner guiding shift. Optional.
- topleft_centripetal_shift (Tensor): Ground truth top-left corner centripetal shift. Optional.
- bottomright_centripetal_shift (Tensor): Ground truth bottom-right corner centripetal shift. Optional.
Return type: dict
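A minimal sketch of stamping a Gaussian peak onto such a corner heatmap target; the helper and its in-bounds assumption are mine, not the exact implementation:
import torch

def draw_gaussian(heatmap, cx, cy, radius):
    # Stamp an unnormalized 2-D Gaussian centered at (cx, cy); assumes the
    # whole (2*radius+1)^2 window lies inside the map, for brevity.
    sigma = (2 * radius + 1) / 6
    ys = torch.arange(-radius, radius + 1, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(-radius, radius + 1, dtype=torch.float32).view(1, -1)
    gauss = torch.exp(-(xs ** 2 + ys ** 2) / (2 * sigma ** 2))
    region = heatmap[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    heatmap[cy - radius:cy + radius + 1,
            cx - radius:cx + radius + 1] = torch.max(region, gauss)
    return heatmap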
-
loss
(tl_heats, br_heats, tl_embs, br_embs, tl_offs, br_offs, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute losses of the head.
Parameters: - tl_heats (list[Tensor]) – Top-left corner heatmaps for each level with shape (N, num_classes, H, W).
- br_heats (list[Tensor]) – Bottom-right corner heatmaps for each level with shape (N, num_classes, H, W).
- tl_embs (list[Tensor]) – Top-left corner embeddings for each level with shape (N, corner_emb_channels, H, W).
- br_embs (list[Tensor]) – Bottom-right corner embeddings for each level with shape (N, corner_emb_channels, H, W).
- tl_offs (list[Tensor]) – Top-left corner offsets for each level with shape (N, corner_offset_channels, H, W).
- br_offs (list[Tensor]) – Bottom-right corner offsets for each level with shape (N, corner_offset_channels, H, W).
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [left, top, right, bottom] format.
- gt_labels (list[Tensor]) – Class indices corresponding to each box.
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (list[Tensor] | None) – Specify which bounding boxes can be ignored when computing the loss.
Returns: A dictionary of loss components. Containing the following losses:
- det_loss (list[Tensor]): Corner keypoint losses of all feature levels.
- pull_loss (list[Tensor]): Part one of AssociativeEmbedding losses of all feature levels.
- push_loss (list[Tensor]): Part two of AssociativeEmbedding losses of all feature levels.
- off_loss (list[Tensor]): Corner offset losses of all feature levels.
Return type: dict[str, Tensor]
-
loss_single
(tl_hmp, br_hmp, tl_emb, br_emb, tl_off, br_off, targets)[source]¶ Compute losses for single level.
Parameters: - tl_hmp (Tensor) – Top-left corner heatmap for current level with shape (N, num_classes, H, W).
- br_hmp (Tensor) – Bottom-right corner heatmap for current level with shape (N, num_classes, H, W).
- tl_emb (Tensor) – Top-left corner embedding for current level with shape (N, corner_emb_channels, H, W).
- br_emb (Tensor) – Bottom-right corner embedding for current level with shape (N, corner_emb_channels, H, W).
- tl_off (Tensor) – Top-left corner offset for current level with shape (N, corner_offset_channels, H, W).
- br_off (Tensor) – Bottom-right corner offset for current level with shape (N, corner_offset_channels, H, W).
- targets (dict) – Corner target generated by get_targets.
Returns: Losses of the head’s different branches containing the following losses:
- det_loss (Tensor): Corner keypoint loss.
- pull_loss (Tensor): Part one of AssociativeEmbedding loss.
- push_loss (Tensor): Part two of AssociativeEmbedding loss.
- off_loss (Tensor): Corner offset loss.
Return type: tuple[torch.Tensor]
-
class
mmdet.models.dense_heads.
PAAHead
(*args, topk=9, score_voting=True, **kwargs)[source]¶ Head of PAA: Probabilistic Anchor Assignment with IoU Prediction for Object Detection.
Code is modified from the official github repo.
More details can be found in the paper.
Parameters: - topk (int) – Select topk samples with smallest loss in each level.
- score_voting (bool) – Whether to use score voting in post-process.
-
get_pos_loss
(anchors, cls_score, bbox_pred, label, label_weight, bbox_target, bbox_weight, pos_inds)[source]¶ Calculate loss of all potential positive samples obtained from the first matching process.
Parameters: - anchors (list[Tensor]) – Anchors of each scale.
- cls_score (Tensor) – Box scores of single image with shape (num_anchors, num_classes)
- bbox_pred (Tensor) – Box energies / deltas of single image with shape (num_anchors, 4)
- label (Tensor) – classification target of each anchor with shape (num_anchors,)
- label_weight (Tensor) – Classification loss weight of each anchor with shape (num_anchors).
- bbox_target (Tensor) – Regression target of each anchor with shape (num_anchors, 4).
- bbox_weight (Tensor) – Bbox weight of each anchor with shape (num_anchors, 4).
- pos_inds (Tensor) – Index of all positive samples got from first assign process.
Returns: Losses of all positive samples in single image.
Return type: Tensor
-
get_targets
(anchor_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, label_channels=1, unmap_outputs=True)[source]¶ Get targets for PAA head.
This method is almost the same as AnchorHead.get_targets(). We directly return the results from _get_targets_single instead of mapping them to feature levels by the images_to_levels function.
Parameters: - anchor_list (list[list[Tensor]]) – Multi level anchors of each image. The outer list indicates images, and the inner list corresponds to feature levels of the image. Each element of the inner list is a tensor of shape (num_anchors, 4).
- valid_flag_list (list[list[Tensor]]) – Multi level valid flags of each image. The outer list indicates images, and the inner list corresponds to feature levels of the image. Each element of the inner list is a tensor of shape (num_anchors, )
- gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.
- img_metas (list[dict]) – Meta info of each image.
- gt_bboxes_ignore_list (list[Tensor]) – Ground truth bboxes to be ignored.
- gt_labels_list (list[Tensor]) – Ground truth labels of each box.
- label_channels (int) – Channel of label.
- unmap_outputs (bool) – Whether to map outputs back to the original set of anchors.
Returns: Usually returns a tuple containing learning targets.
- labels (list[Tensor]): Labels of all anchors, each with shape (num_anchors,).
- label_weights (list[Tensor]): Label weights of all anchors, each with shape (num_anchors,).
- bbox_targets (list[Tensor]): BBox targets of all anchors, each with shape (num_anchors, 4).
- bbox_weights (list[Tensor]): BBox weights of all anchors, each with shape (num_anchors, 4).
- pos_inds (list[Tensor]): Indices of all positive samples in all anchors.
- gt_inds (list[Tensor]): Gt indices of all positive samples in all anchors.
Return type: tuple
-
gmm_separation_scheme
(gmm_assignment, scores, pos_inds_gmm)[source]¶ A general separation scheme for gmm model.
It separates a GMM distribution of candidate samples into three parts, 0, 1 and uncertain areas, and you can implement other separation schemes by rewriting this function.
Parameters: - gmm_assignment (Tensor) – The prediction of GMM which is of shape (num_samples,). The 0/1 value indicates the distribution that each sample comes from.
- scores (Tensor) – The probability of sample coming from the fit GMM distribution. The tensor is of shape (num_samples,).
- pos_inds_gmm (Tensor) – All the indexes of samples which are used to fit GMM model. The tensor is of shape (num_samples,)
Returns: The indices of positive and ignored samples.
- pos_inds_temp (Tensor): Indices of positive samples.
- ignore_inds_temp (Tensor): Indices of ignore samples.
Return type: tuple[Tensor]
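A rough sketch of the underlying idea, fitting a two-component 1-D GMM to candidate losses with scikit-learn and keeping the low-loss component; this is a simplification, not the exact scheme:
import numpy as np
from sklearn.mixture import GaussianMixture

def separate_by_gmm(pos_losses):
    # pos_losses: 1-D array of per-sample losses; True marks positives.
    losses = np.sort(pos_losses).reshape(-1, 1)
    means_init = np.array([[losses.min()], [losses.max()]])
    gmm = GaussianMixture(n_components=2, means_init=means_init).fit(losses)
    assignment = gmm.predict(losses)
    low_loss_component = int(np.argmin(gmm.means_))
    return assignment == low_loss_component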
-
loss
(cls_scores, bbox_preds, iou_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute losses of the head.
Parameters: - cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W)
- iou_preds (list[Tensor]) – iou_preds for each scale level with shape (N, num_anchors * 1, H, W)
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (list[Tensor] | None) – Specify which bounding boxes can be ignored when computing the loss.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
paa_reassign
(pos_losses, label, label_weight, bbox_weight, pos_inds, pos_gt_inds, anchors)[source]¶ Fit losses to a GMM distribution and separate positive, ignored and negative samples again with the GMM model.
Parameters: - pos_losses (Tensor) – Losses of all positive samples in single image.
- label (Tensor) – classification target of each anchor with shape (num_anchors,)
- label_weight (Tensor) – Classification loss weight of each anchor with shape (num_anchors).
- bbox_weight (Tensor) – Bbox weight of each anchor with shape (num_anchors, 4).
- pos_inds (Tensor) – Index of all positive samples got from first assign process.
- pos_gt_inds (Tensor) – Gt_index of all positive samples got from first assign process.
- anchors (list[Tensor]) – Anchors of each scale.
Returns: Usually returns a tuple containing learning targets.
- label (Tensor): classification target of each anchor after paa assign, with shape (num_anchors,)
- label_weight (Tensor): Classification loss weight of each anchor after paa assign, with shape (num_anchors).
- bbox_weight (Tensor): Bbox weight of each anchor with shape (num_anchors, 4).
- num_pos (int): The number of positive samples after paa assign.
Return type: tuple
-
score_voting
(det_bboxes, det_labels, mlvl_bboxes, mlvl_nms_scores, score_thr)[source]¶ Implementation of the score voting method, which works on each remaining box after the NMS procedure.
Parameters: - det_bboxes (Tensor) – Remaining boxes after NMS procedure, with shape (k, 5), each dimension means (x1, y1, x2, y2, score).
- det_labels (Tensor) – The labels of remaining boxes, with shape (k, 1). Labels are 0-based.
- mlvl_bboxes (Tensor) – All boxes before the NMS procedure, with shape (num_anchors, 4).
- mlvl_nms_scores (Tensor) – The scores of all boxes used in the NMS procedure, with shape (num_anchors, num_class)
- mlvl_iou_preds (Tensor) – The IoU predictions of all boxes before the NMS procedure, with shape (num_anchors, 1)
- score_thr (float) – The score threshold of bboxes.
Returns: Usually returns a tuple containing voting results.
- det_bboxes_voted (Tensor): Remaining boxes after the score voting procedure, with shape (k, 5), where each row is (x1, y1, x2, y2, score).
- det_labels_voted (Tensor): Labels of remaining bboxes after voting, with shape (num_anchors,).
Return type: tuple
-
class
mmdet.models.dense_heads.
YOLOV3Head
(num_classes, in_channels, out_channels=(1024, 512, 256), anchor_generator={'base_sizes': [[(116, 90), (156, 198), (373, 326)], [(30, 61), (62, 45), (59, 119)], [(10, 13), (16, 30), (33, 23)]], 'strides': [32, 16, 8], 'type': 'YOLOAnchorGenerator'}, bbox_coder={'type': 'YOLOBBoxCoder'}, featmap_strides=[32, 16, 8], one_hot_smoother=0.0, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, act_cfg={'negative_slope': 0.1, 'type': 'LeakyReLU'}, loss_cls={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, loss_conf={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, loss_xy={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, loss_wh={'loss_weight': 1.0, 'type': 'MSELoss'}, train_cfg=None, test_cfg=None)[source]¶ YOLOV3Head Paper link: https://arxiv.org/abs/1804.02767.
Parameters: - num_classes (int) – The number of object classes (w/o background)
- in_channels (List[int]) – Number of input channels per scale.
- out_channels (List[int]) – The number of output channels per scale before the final 1x1 layer. Default: (1024, 512, 256).
- anchor_generator (dict) – Config dict for anchor generator
- bbox_coder (dict) – Config of bounding box coder.
- featmap_strides (List[int]) – The stride of each scale. Should be in descending order. Default: (32, 16, 8).
- one_hot_smoother (float) – Set a non-zero value to enable label smoothing. Default: 0.
- conv_cfg (dict) – Config dict for convolution layer. Default: None.
- norm_cfg (dict) – Dictionary to construct and config norm layer. Default: dict(type=’BN’, requires_grad=True)
- act_cfg (dict) – Config dict for activation layer. Default: dict(type=’LeakyReLU’, negative_slope=0.1).
- loss_cls (dict) – Config of classification loss.
- loss_conf (dict) – Config of confidence loss.
- loss_xy (dict) – Config of xy coordinate loss.
- loss_wh (dict) – Config of wh coordinate loss.
- train_cfg (dict) – Training config of YOLOV3 head. Default: None.
- test_cfg (dict) – Testing config of YOLOV3 head. Default: None.
-
forward
(feats)[source]¶ Forward features from the upstream network.
Parameters: feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor. Returns: A tuple of multi-level prediction maps, each is a 4D-tensor of shape (batch_size, 5+num_classes, height, width).
Return type: tuple[Tensor]
-
get_bboxes
(pred_maps, img_metas, cfg=None, rescale=False)[source]¶ Transform network output for a batch into bbox predictions.
Parameters: - pred_maps (list[Tensor]) – Raw predictions for a batch of images.
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- cfg (mmcv.Config) – Test / postprocessing configuration, if None, test_cfg would be used
- rescale (bool) – If True, return boxes in original image space
Returns: Each item in result_list is a 2-tuple.
The first item is an (n, 5) tensor, where the first 4 columns are bounding box positions (tl_x, tl_y, br_x, br_y) and the 5th column is a score between 0 and 1. The second item is an (n,) tensor where each item is the predicted class label of the corresponding box.
Return type: list[tuple[Tensor, Tensor]]
-
get_targets
(anchor_list, responsible_flag_list, gt_bboxes_list, gt_labels_list)[source]¶ Compute target maps for anchors in multiple images.
Parameters: - anchor_list (list[list[Tensor]]) – Multi level anchors of each image. The outer list indicates images, and the inner list corresponds to feature levels of the image. Each element of the inner list is a tensor of shape (num_total_anchors, 4).
- responsible_flag_list (list[list[Tensor]]) – Multi level responsible flags of each image. Each element is a tensor of shape (num_total_anchors, )
- gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.
- gt_labels_list (list[Tensor]) – Ground truth labels of each box.
Returns: Usually returns a tuple containing learning targets.
- target_map_list (list[Tensor]): Target map of each level.
- neg_map_list (list[Tensor]): Negative map of each level.
Return type: tuple
-
loss
(pred_maps, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]¶ Compute loss of the head.
Parameters: - pred_maps (list[Tensor]) – Prediction map for each scale level, shape (N, num_anchors * num_attrib, H, W)
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
Returns: A dictionary of loss components.
Return type: dict[str, Tensor]
-
loss_single
(pred_map, target_map, neg_map)[source]¶ Compute loss of a single image from a batch.
Parameters: - pred_map (Tensor) – Raw predictions for a single level.
- target_map (Tensor) – The Ground-Truth target for a single level.
- neg_map (Tensor) – The negative masks for a single level.
Returns: - loss_cls (Tensor): Classification loss.
- loss_conf (Tensor): Confidence loss.
- loss_xy (Tensor): Regression loss of x, y coordinate.
- loss_wh (Tensor): Regression loss of w, h coordinate.
Return type: tuple
-
num_attrib
¶ number of attributes in pred_map: bboxes (4) + objectness (1) + num_classes
Type: int
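For example, with a COCO-style configuration of 80 classes and the default 3 anchors per location:
num_classes = 80
num_attrib = 4 + 1 + num_classes       # bboxes (4) + objectness (1) + class scores
num_anchors = 3                        # anchors per location per scale
channels = num_anchors * num_attrib    # channel dim of each pred_map: 255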
-
class
mmdet.models.dense_heads.
SABLRetinaHead
(num_classes, in_channels, stacked_convs=4, feat_channels=256, approx_anchor_generator={'octave_base_scale': 4, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [8, 16, 32, 64, 128], 'type': 'AnchorGenerator'}, square_anchor_generator={'ratios': [1.0], 'scales': [4], 'strides': [8, 16, 32, 64, 128], 'type': 'AnchorGenerator'}, conv_cfg=None, norm_cfg=None, bbox_coder={'num_buckets': 14, 'scale_factor': 3.0, 'type': 'BucketingBBoxCoder'}, reg_decoded_bbox=False, background_label=None, train_cfg=None, test_cfg=None, loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox_cls={'loss_weight': 1.5, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, loss_bbox_reg={'beta': 0.1111111111111111, 'loss_weight': 1.5, 'type': 'SmoothL1Loss'})[source]¶ Side-Aware Boundary Localization (SABL) for RetinaNet.
The anchor generation, assigning and sampling in SABLRetinaHead are the same as GuidedAnchorHead for guided anchoring.
Please refer to https://arxiv.org/abs/1912.04260 for more details.
Parameters: - num_classes (int) – Number of classes.
- in_channels (int) – Number of channels in the input feature map.
- stacked_convs (int) – Number of Convs for classification and regression branches. Defaults to 4.
- feat_channels (int) – Number of hidden channels. Defaults to 256.
- approx_anchor_generator (dict) – Config dict for approx generator.
- square_anchor_generator (dict) – Config dict for square generator.
- conv_cfg (dict) – Config dict for ConvModule. Defaults to None.
- norm_cfg (dict) – Config dict for Norm Layer. Defaults to None.
- bbox_coder (dict) – Config dict for bbox coder.
- reg_decoded_bbox (bool) – Whether to regress decoded bbox. Defaults to False.
- background_label (int) – Background label. Defaults to None.
- train_cfg (dict) – Training config of SABLRetinaHead.
- test_cfg (dict) – Testing config of SABLRetinaHead.
- loss_cls (dict) – Config of classification loss.
- loss_bbox_cls (dict) – Config of classification loss for bbox branch.
- loss_bbox_reg (dict) – Config of regression loss for bbox branch.
-
get_anchors
(featmap_sizes, img_metas, device='cuda')[source]¶ Get squares according to feature map sizes and guided anchors.
Parameters: - featmap_sizes (list[tuple]) – Multi-level feature map sizes.
- img_metas (list[dict]) – Image meta info.
- device (torch.device | str) – device for returned tensors
Returns: square approxs of each image
Return type: tuple
-
get_bboxes
(cls_scores, bbox_preds, img_metas, cfg=None, rescale=False)[source]¶ Transform network output for a batch into bbox predictions.
-
get_target
(approx_list, inside_flag_list, square_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, label_channels=None, sampling=True, unmap_outputs=True)[source]¶ Compute bucketing targets.
Parameters: - approx_list (list[list]) – Multi level approxs of each image.
- inside_flag_list (list[list]) – Multi level inside flags of each image.
- square_list (list[list]) – Multi level squares of each image.
- gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.
- img_metas (list[dict]) – Meta info of each image.
- gt_bboxes_ignore_list (list[Tensor]) – ignore list of gt bboxes.
- gt_labels_list (list[Tensor]) – Gt labels of each image.
- label_channels (int) – Channel of label.
- sampling (bool) – Sample Anchors or not.
- unmap_outputs (bool) – unmap outputs or not.
Returns: Returns a tuple containing learning targets.
- labels_list (list[Tensor]): Labels of each level.
- label_weights_list (list[Tensor]): Label weights of each level.
- bbox_cls_targets_list (list[Tensor]): BBox cls targets of each level.
- bbox_cls_weights_list (list[Tensor]): BBox cls weights of each level.
- bbox_reg_targets_list (list[Tensor]): BBox reg targets of each level.
- bbox_reg_weights_list (list[Tensor]): BBox reg weights of each level.
- num_total_pos (int): Number of positive samples in all images.
- num_total_neg (int): Number of negative samples in all images.
Return type: tuple
roi_heads¶
-
class
mmdet.models.roi_heads.
BaseRoIHead
(bbox_roi_extractor=None, bbox_head=None, mask_roi_extractor=None, mask_head=None, shared_head=None, train_cfg=None, test_cfg=None)[source]¶ Base class for RoIHeads.
-
aug_test
(x, proposal_list, img_metas, rescale=False, **kwargs)[source]¶ Test with augmentations.
If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].
-
forward_train
(x, img_meta, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None, **kwargs)[source]¶ Forward function during training.
-
init_weights
(pretrained)[source]¶ Initialize the weights in head.
Parameters: pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
-
simple_test
(x, proposal_list, img_meta, proposals=None, rescale=False, **kwargs)[source]¶ Test without augmentation.
-
with_bbox
¶ whether the RoI head contains a bbox_head
Type: bool
-
with_mask
¶ whether the RoI head contains a mask_head
Type: bool
-
with_shared_head
¶ whether the RoI head contains a shared_head
Type: bool
-
-
class
mmdet.models.roi_heads.
CascadeRoIHead
(num_stages, stage_loss_weights, bbox_roi_extractor=None, bbox_head=None, mask_roi_extractor=None, mask_head=None, shared_head=None, train_cfg=None, test_cfg=None)[source]¶ Cascade roi head including one bbox head and one mask head.
https://arxiv.org/abs/1712.00726
-
aug_test
(features, proposal_list, img_metas, rescale=False)[source]¶ Test with augmentations.
If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].
-
forward_train
(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None)[source]¶ Parameters: - x (list[Tensor]) – list of multi-level img features.
- img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
- proposal_list (list[Tensor]) – list of region proposals.
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
- gt_masks (None | Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task.
Returns: a dictionary of loss components
Return type: dict[str, Tensor]
-
init_bbox_head
(bbox_roi_extractor, bbox_head)[source]¶ Initialize box head and box roi extractor.
Parameters: - bbox_roi_extractor (dict) – Config of box roi extractor.
- bbox_head (dict) – Config of box in box head.
-
init_mask_head
(mask_roi_extractor, mask_head)[source]¶ Initialize mask head and mask roi extractor.
Parameters: - mask_roi_extractor (dict) – Config of mask roi extractor.
- mask_head (dict) – Config of mask in mask head.
-
-
class
mmdet.models.roi_heads.
DoubleHeadRoIHead
(reg_roi_scale_factor, **kwargs)[source]¶ RoI head for Double Head RCNN.
-
class
mmdet.models.roi_heads.
MaskScoringRoIHead
(mask_iou_head, **kwargs)[source]¶ Mask Scoring RoIHead for Mask Scoring RCNN.
-
class
mmdet.models.roi_heads.
HybridTaskCascadeRoIHead
(num_stages, stage_loss_weights, semantic_roi_extractor=None, semantic_head=None, semantic_fusion=('bbox', 'mask'), interleaved=True, mask_info_flow=True, **kwargs)[source]¶ Hybrid task cascade roi head including one bbox head and one mask head.
https://arxiv.org/abs/1901.07518
-
aug_test
(img_feats, proposal_list, img_metas, rescale=False)[source]¶ Test with augmentations.
If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].
-
forward_train
(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None, gt_semantic_seg=None)[source]¶ Parameters: - x (list[Tensor]) – list of multi-level img features.
- img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
- proposal_list (list[Tensors]) – list of region proposals.
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- gt_bboxes_ignore (None, list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
- gt_masks (None, Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task.
- gt_semantic_seg (None, list[Tensor]) – semantic segmentation masks used if the architecture supports semantic segmentation task.
Returns: a dictionary of loss components
Return type: dict[str, Tensor]
-
init_weights
(pretrained)[source]¶ Initialize the weights in head.
Parameters: pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
-
with_semantic
¶ whether the head has semantic head
Type: bool
-
-
class
mmdet.models.roi_heads.
GridRoIHead
(grid_roi_extractor, grid_head, **kwargs)[source]¶ Grid roi head for Grid R-CNN.
-
class
mmdet.models.roi_heads.
ResLayer
(depth, stage=3, stride=2, dilation=1, style='pytorch', norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=True, with_cp=False, dcn=None)[source]¶ -
init_weights
(pretrained=None)[source]¶ Initialize the weights in the module.
Parameters: pretrained (str, optional) – Path to pre-trained weights. Defaults to None.
-
train
(mode=True)[source]¶ Sets the module in training mode.
This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
Parameters: mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.
Returns: self
Return type: Module
-
-
class
mmdet.models.roi_heads.
BBoxHead
(with_avg_pool=False, with_cls=True, with_reg=True, roi_feat_size=7, in_channels=256, num_classes=80, bbox_coder={'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [0.1, 0.1, 0.2, 0.2], 'type': 'DeltaXYWHBBoxCoder'}, reg_class_agnostic=False, reg_decoded_bbox=False, loss_cls={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': False}, loss_bbox={'beta': 1.0, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'})[source]¶ Simplest RoI head, with only two fc layers for classification and regression respectively.
-
refine_bboxes
(rois, labels, bbox_preds, pos_is_gts, img_metas)[source]¶ Refine bboxes during training.
Parameters: - rois (Tensor) – Shape (n*bs, 5), where n is the number of images per GPU and bs is the number of sampled RoIs per image. The first column is the image id and the next 4 columns are x1, y1, x2, y2.
- labels (Tensor) – Shape (n*bs, ).
- bbox_preds (Tensor) – Shape (n*bs, 4) or (n*bs, 4*#class).
- pos_is_gts (list[Tensor]) – Flags indicating if each positive bbox is a gt bbox.
- img_metas (list[dict]) – Meta info of each image.
Returns: Refined bboxes of each image in a mini-batch.
Return type: list[Tensor]
Example
>>> # xdoctest: +REQUIRES(module:kwarray)
>>> import kwarray
>>> import numpy as np
>>> import torch
>>> from mmdet.core.bbox.demodata import random_boxes
>>> self = BBoxHead(reg_class_agnostic=True)
>>> n_roi = 2
>>> n_img = 4
>>> scale = 512
>>> rng = np.random.RandomState(0)
>>> img_metas = [{'img_shape': (scale, scale)}
...              for _ in range(n_img)]
>>> # Create rois in the expected format
>>> roi_boxes = random_boxes(n_roi, scale=scale, rng=rng)
>>> img_ids = torch.randint(0, n_img, (n_roi,))
>>> img_ids = img_ids.float()
>>> rois = torch.cat([img_ids[:, None], roi_boxes], dim=1)
>>> # Create other args
>>> labels = torch.randint(0, 2, (n_roi,)).long()
>>> bbox_preds = random_boxes(n_roi, scale=scale, rng=rng)
>>> # For each image, pretend random positive boxes are gts
>>> is_label_pos = (labels.numpy() > 0).astype(int)
>>> lbl_per_img = kwarray.group_items(is_label_pos,
...                                   img_ids.numpy())
>>> pos_per_img = [sum(lbl_per_img.get(gid, []))
...                for gid in range(n_img)]
>>> pos_is_gts = [
>>>     torch.randint(0, 2, (npos,)).byte().sort(
>>>         descending=True)[0]
>>>     for npos in pos_per_img
>>> ]
>>> bboxes_list = self.refine_bboxes(rois, labels, bbox_preds,
>>>                                  pos_is_gts, img_metas)
>>> print(bboxes_list)
-
regress_by_class
(rois, label, bbox_pred, img_meta)[source]¶ Regress the bbox for the predicted class. Used in Cascade R-CNN.
Parameters: - rois (Tensor) – shape (n, 4) or (n, 5)
- label (Tensor) – shape (n, )
- bbox_pred (Tensor) – shape (n, 4*(#class)) or (n, 4)
- img_meta (dict) – Image meta info.
Returns: Regressed bboxes, the same shape as input rois.
Return type: Tensor
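When bbox_pred is class-specific, i.e. of shape (n, 4*(#class)), the four deltas of the predicted class are gathered before decoding; a minimal sketch (the helper name is illustrative):
import torch

def select_class_deltas(bbox_pred, label):
    # bbox_pred: (n, 4 * num_classes); label: (n,) predicted class per RoI.
    inds = torch.stack([label * 4 + i for i in range(4)], dim=1)   # (n, 4)
    return torch.gather(bbox_pred, 1, inds)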
-
-
class
mmdet.models.roi_heads.
ConvFCBBoxHead
(num_shared_convs=0, num_shared_fcs=0, num_cls_convs=0, num_cls_fcs=0, num_reg_convs=0, num_reg_fcs=0, conv_out_channels=256, fc_out_channels=1024, conv_cfg=None, norm_cfg=None, *args, **kwargs)[source]¶ More general bbox head, with shared conv and fc layers and two optional separated branches.
                           /-> cls convs -> cls fcs -> cls
shared convs -> shared fcs
                           \-> reg convs -> reg fcs -> reg
-
class
mmdet.models.roi_heads.
StandardRoIHead
(bbox_roi_extractor=None, bbox_head=None, mask_roi_extractor=None, mask_head=None, shared_head=None, train_cfg=None, test_cfg=None)[source]¶ Simplest base roi head including one bbox head and one mask head.
-
async_simple_test
(x, proposal_list, img_metas, proposals=None, rescale=False)[source]¶ Async test without augmentation.
-
aug_test
(x, proposal_list, img_metas, rescale=False)[source]¶ Test with augmentations.
If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].
-
forward_train
(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None)[source]¶ Parameters: - x (list[Tensor]) – list of multi-level img features.
- img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
- proposal_list (list[Tensor]) – list of region proposals.
- gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
- gt_masks (None | Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task.
Returns: a dictionary of loss components
Return type: dict[str, Tensor]
-
-
class
mmdet.models.roi_heads.
DoubleConvFCBBoxHead
(num_convs=0, num_fcs=0, conv_out_channels=1024, fc_out_channels=1024, conv_cfg=None, norm_cfg={'type': 'BN'}, **kwargs)[source]¶ Bbox head used in Double-Head R-CNN
                                  /-> cls
              /-> shared convs ->
                                  \-> reg
roi features
                                  /-> cls
              \-> shared fc ->
                                  \-> reg
-
class
mmdet.models.roi_heads.
FCNMaskHead
(num_convs=4, roi_feat_size=14, in_channels=256, conv_kernel_size=3, conv_out_channels=256, num_classes=80, class_agnostic=False, upsample_cfg={'scale_factor': 2, 'type': 'deconv'}, conv_cfg=None, norm_cfg=None, loss_mask={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_mask': True})[source]¶ -
get_seg_masks
(mask_pred, det_bboxes, det_labels, rcnn_test_cfg, ori_shape, scale_factor, rescale)[source]¶ Get segmentation masks from mask_pred and bboxes.
Parameters: - mask_pred (Tensor or ndarray) – shape (n, #class, h, w). For single-scale testing, mask_pred is the direct output of model, whose type is Tensor, while for multi-scale testing, it will be converted to numpy array outside of this method.
- det_bboxes (Tensor) – shape (n, 4/5)
- det_labels (Tensor) – shape (n, )
- img_shape (Tensor) – shape (3, )
- rcnn_test_cfg (dict) – rcnn testing config
- ori_shape – original image size
Returns: encoded masks
Return type: list[list]
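A minimal sketch of the pasting step behind this method: resize one predicted mask to its box and threshold it into an image-sized canvas (assumes the box lies fully inside the image; not the exact implementation):
import torch
import torch.nn.functional as F

def paste_mask(mask, box, img_h, img_w, thr=0.5):
    # mask: (h, w) probability map for one detection; box: (4,) image coords.
    x1, y1, x2, y2 = box.round().long().tolist()
    w, h = max(x2 - x1 + 1, 1), max(y2 - y1 + 1, 1)
    m = F.interpolate(mask[None, None], size=(h, w), mode='bilinear',
                      align_corners=False)[0, 0]
    canvas = torch.zeros(img_h, img_w, dtype=torch.uint8)
    canvas[y1:y1 + h, x1:x1 + w] = (m > thr).to(torch.uint8)
    return canvas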
-
-
class
mmdet.models.roi_heads.
FusedSemanticHead
(num_ins, fusion_level, num_convs=4, in_channels=256, conv_out_channels=256, num_classes=183, ignore_label=255, loss_weight=0.2, conv_cfg=None, norm_cfg=None)[source]¶ Multi-level fused semantic segmentation head.
in_1 -> 1x1 conv ---
                    |
in_2 -> 1x1 conv -- |
                   ||
in_3 -> 1x1 conv - ||
                  |||                  /-> 1x1 conv (mask prediction)
in_4 -> 1x1 conv -----> 3x3 convs (*4)
                    |                  \-> 1x1 conv (feature)
in_5 -> 1x1 conv ---
-
class
mmdet.models.roi_heads.
GridHead
(grid_points=9, num_convs=8, roi_feat_size=14, in_channels=256, conv_kernel_size=3, point_feat_channels=64, deconv_kernel_size=4, class_agnostic=False, loss_grid={'loss_weight': 15, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, conv_cfg=None, norm_cfg={'num_groups': 36, 'type': 'GN'})[source]¶ -
calc_sub_regions
()[source]¶ Compute point specific representation regions.
See Grid R-CNN Plus (https://arxiv.org/abs/1906.05688) for details.
-
-
class
mmdet.models.roi_heads.
MaskIoUHead
(num_convs=4, num_fcs=2, roi_feat_size=14, in_channels=256, conv_out_channels=256, fc_out_channels=1024, num_classes=80, loss_iou={'loss_weight': 0.5, 'type': 'MSELoss'})[source]¶ Mask IoU Head.
This head predicts the IoU of predicted masks and corresponding gt masks.
-
get_mask_scores
(mask_iou_pred, det_bboxes, det_labels)[source]¶ Get the mask scores.
mask_score = bbox_score * mask_iou
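A minimal sketch of that product, assuming det_bboxes[:, 4] holds the box scores and mask_iou_pred has shape (n, num_classes):
import torch

def get_mask_scores(mask_iou_pred, det_bboxes, det_labels):
    # Gather the IoU prediction at each detection's class, then rescale
    # the box score by it.
    inds = torch.arange(det_labels.numel())
    return mask_iou_pred[inds, det_labels] * det_bboxes[:, 4]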
-
get_targets
(sampling_results, gt_masks, mask_pred, mask_targets, rcnn_train_cfg)[source]¶ Compute target of mask IoU.
Mask IoU target is the IoU of the predicted mask (inside a bbox) and the gt mask of the corresponding gt instance (the whole instance). The intersection area is computed inside the bbox, and the gt mask area is computed in two steps: first compute the gt area inside the bbox, then divide it by the ratio of the gt area inside the bbox to the gt area of the whole instance.
Parameters: - sampling_results (list[
SamplingResult
]) – sampling results. - gt_masks (BitmapMask | PolygonMask) – Gt masks (the whole instance) of each image, with the same shape of the input image.
- mask_pred (Tensor) – Predicted masks of each positive proposal, shape (num_pos, h, w).
- mask_targets (Tensor) – Gt mask of each positive proposal, binary map of the shape (num_pos, h, w).
- rcnn_train_cfg (dict) – Training config for R-CNN part.
Returns: mask iou target (length == num positive).
Return type: Tensor
- sampling_results (list[
-
-
class
mmdet.models.roi_heads.
SingleRoIExtractor
(roi_layer, out_channels, featmap_strides, finest_scale=56)[source]¶ Extract RoI features from a single level feature map.
If there are multiple input feature levels, each RoI is mapped to a level according to its scale. The mapping rule is proposed in FPN.
Parameters: - roi_layer (dict) – Specify RoI layer type and arguments.
- out_channels (int) – Output channels of RoI layers.
- featmap_strides (list[int]) – Strides of input feature maps.
- finest_scale (int) – Scale threshold of mapping to level 0. Default: 56.
-
map_roi_levels
(rois, num_levels)[source]¶ Map rois to corresponding feature levels by scales.
- scale < finest_scale * 2: level 0
- finest_scale * 2 <= scale < finest_scale * 4: level 1
- finest_scale * 4 <= scale < finest_scale * 8: level 2
- scale >= finest_scale * 8: level 3
Parameters: - rois (Tensor) – Input RoIs, shape (k, 5).
- num_levels (int) – Total level number.
Returns: Level index (0-based) of each RoI, shape (k, )
Return type: Tensor
-
class
mmdet.models.roi_heads.
PISARoIHead
(bbox_roi_extractor=None, bbox_head=None, mask_roi_extractor=None, mask_head=None, shared_head=None, train_cfg=None, test_cfg=None)[source]¶ The RoI head for Prime Sample Attention in Object Detection.
-
forward_train
(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None)[source]¶ Forward function for training.
Parameters: - x (list[Tensor]) – List of multi-level img features.
- img_metas (list[dict]) – List of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
- proposal_list (list[Tensor]) – List of region proposals.
- gt_bboxes (list[Tensor]) – Each item is the ground-truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – Class indices corresponding to each box.
- gt_bboxes_ignore (list[Tensor], optional) – Specify which bounding boxes can be ignored when computing the loss.
- gt_masks (None | Tensor) – Ground-truth segmentation masks for each box, used if the architecture supports a segmentation task.
Returns: a dictionary of loss components
Return type: dict[str, Tensor]
-
-
class
mmdet.models.roi_heads.
PointRendRoIHead
(point_head, *args, **kwargs)[source]¶ -
-
aug_test_mask
(feats, img_metas, det_bboxes, det_labels)[source]¶ Test for mask head with test time augmentation.
-
-
class
mmdet.models.roi_heads.
MaskPointHead
(num_classes, num_fcs=3, in_channels=256, fc_channels=256, class_agnostic=False, coarse_pred_each_layer=True, conv_cfg={'type': 'Conv1d'}, norm_cfg=None, act_cfg={'type': 'ReLU'}, loss_point={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_mask': True})[source]¶ A mask point head used in PointRend.
MaskPointHead
uses a shared multi-layer perceptron (equivalent to nn.Conv1d) to predict the logits of input points. The fine-grained feature and coarse feature will be concatenated together for prediction.Parameters: - num_fcs (int) – Number of fc layers in the head. Default: 3.
- in_channels (int) – Number of input channels. Default: 256.
- fc_channels (int) – Number of fc channels. Default: 256.
- num_classes (int) – Number of classes for logits. Default: 80.
- class_agnostic (bool) – Whether to use class-agnostic classification. If so, the output channels of logits will be 1. Default: False.
- coarse_pred_each_layer (bool) – Whether to concatenate the coarse feature with the output of each fc layer. Default: True.
- conv_cfg (dict | None) – Dictionary to construct and config conv layer. Default: dict(type=’Conv1d’).
- norm_cfg (dict | None) – Dictionary to construct and config norm layer. Default: None.
- loss_point (dict) – Dictionary to construct and config loss layer of point head. Default: dict(type=’CrossEntropyLoss’, use_mask=True, loss_weight=1.0).
-
forward
(fine_grained_feats, coarse_feats)[source]¶ Classify each point based on fine-grained and coarse features.
Parameters: - fine_grained_feats (Tensor) – Fine grained feature sampled from FPN, shape (num_rois, in_channels, num_points).
- coarse_feats (Tensor) – Coarse feature sampled from CoarseMaskHead, shape (num_rois, num_classes, num_points).
Returns: Point classification results, shape (num_rois, num_class, num_points).
Return type: Tensor
-
get_roi_rel_points_test
(mask_pred, pred_label, cfg)[source]¶ Get
num_points
most uncertain points during test.Parameters: - mask_pred (Tensor) – A tensor of shape (num_rois, num_classes, mask_height, mask_width) for class-specific or class-agnostic prediction.
- pred_label (list) – The predicted class for each instance.
- cfg (dict) – Testing config of point head.
Returns: - point_indices (Tensor): A tensor of shape (num_rois, num_points) that contains indices from [0, mask_height x mask_width) of the most uncertain points.
- point_coords (Tensor): A tensor of shape (num_rois, num_points, 2) that contains [0, 1] x [0, 1] normalized coordinates of the most uncertain points from the [mask_height, mask_width] grid.
Return type: tuple(Tensor, Tensor)
-
get_roi_rel_points_train
(mask_pred, labels, cfg)[source]¶ Get
num_points
most uncertain points with random points during train. Sample points in [0, 1] x [0, 1] coordinate space based on their uncertainty. The uncertainties are calculated for each point using the ‘_get_uncertainty()’ function, which takes the point’s logit prediction as input.
Parameters: - mask_pred (Tensor) – A tensor of shape (num_rois, num_classes, mask_height, mask_width) for class-specific or class-agnostic prediction.
- labels (list) – The ground truth class for each instance.
- cfg (dict) – Training config of point head.
Returns: A tensor of shape (num_rois, num_points, 2) that contains the coordinates of the sampled points.
Return type: point_coords (Tensor)
-
get_targets
(rois, rel_roi_points, sampling_results, gt_masks, cfg)[source]¶ Get training targets of MaskPointHead for all images.
Parameters: - rois (Tensor) – Region of Interest, shape (num_rois, 5).
- rel_roi_points (Tensor) – Point coordinates relative to RoI, shape (num_rois, num_points, 2).
- sampling_results (
SamplingResult
) – Sampling result after sampling and assignment. - gt_masks (Tensor) – Ground truth segmentation masks of corresponding boxes, shape (num_rois, height, width).
- cfg (dict) – Training cfg.
Returns: Point target, shape (num_rois, num_points).
Return type: Tensor
-
init_weights
()[source]¶ Initialize the last classification layer of MaskPointHead; conv layers are already initialized by ConvModule.
-
loss
(point_pred, point_targets, labels)[source]¶ Calculate loss for MaskPointHead.
Parameters: - point_pred (Tensor) – Point prediction result, shape (num_rois, num_classes, num_points).
- point_targets (Tensor) – Point targets, shape (num_roi, num_points).
- labels (Tensor) – Class label of corresponding boxes, shape (num_rois, )
Returns: a dictionary of point loss components
Return type: dict[str, Tensor]
-
class
mmdet.models.roi_heads.
CoarseMaskHead
(num_convs=0, num_fcs=2, fc_out_channels=1024, downsample_factor=2, *arg, **kwarg)[source]¶ Coarse mask head used in PointRend.
Compared with the standard
FCNMaskHead
,
CoarseMaskHead
will downsample the input feature map instead of upsampling it.Parameters: - num_convs (int) – Number of conv layers in the head. Default: 0.
- num_fcs (int) – Number of fc layers in the head. Default: 2.
- fc_out_channels (int) – Number of output channels of fc layer. Default: 1024.
- downsample_factor (int) – The factor that feature map is downsampled by. Default: 2.
-
class
mmdet.models.roi_heads.
DynamicRoIHead
(**kwargs)[source]¶ RoI head for Dynamic R-CNN.
-
forward_train
(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None)[source]¶ Forward function for training.
Parameters: - x (list[Tensor]) – list of multi-level img features.
- img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.
- proposal_list (list[Tensor]) – list of region proposals.
- gt_bboxes (list[Tensor]) – each item is the ground-truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]) – class indices corresponding to each box.
- gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.
- gt_masks (None | Tensor) – ground-truth segmentation masks for each box, used if the architecture supports a segmentation task.
Returns: a dictionary of loss components
Return type: dict[str, Tensor]
-
losses¶
-
mmdet.models.losses.
accuracy
(pred, target, topk=1, thresh=None)[source]¶ Calculate accuracy according to the prediction and target.
Parameters: - pred (torch.Tensor) – The model prediction, shape (N, num_class)
- target (torch.Tensor) – The target of each prediction, shape (N, )
- topk (int | tuple[int], optional) – If the predictions in
topk
match the target, the predictions will be regarded as correct ones. Defaults to 1. - thresh (float, optional) – If not None, predictions with scores under this threshold are considered incorrect. Defaults to None.
Returns: If the input topk is a single integer, the function will return a single float as the accuracy. If topk is a tuple containing multiple integers, the function will return a tuple containing the accuracy of each topk number.
Return type: float | tuple[float]
-
mmdet.models.losses.
cross_entropy
(pred, label, weight=None, reduction='mean', avg_factor=None, class_weight=None)[source]¶ Calculate the CrossEntropy loss.
Parameters: - pred (torch.Tensor) – The prediction with shape (N, C), C is the number of classes.
- label (torch.Tensor) – The learning label of the prediction.
- weight (torch.Tensor, optional) – Sample-wise loss weight.
- reduction (str, optional) – The method used to reduce the loss.
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
- class_weight (list[float], optional) – The weight for each class.
Returns: The calculated loss
Return type: torch.Tensor
-
mmdet.models.losses.
binary_cross_entropy
(pred, label, weight=None, reduction='mean', avg_factor=None, class_weight=None)[source]¶ Calculate the binary CrossEntropy loss.
Parameters: - pred (torch.Tensor) – The prediction with shape (N, 1).
- label (torch.Tensor) – The learning label of the prediction.
- weight (torch.Tensor, optional) – Sample-wise loss weight.
- reduction (str, optional) – The method used to reduce the loss. Options are “none”, “mean” and “sum”.
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
- class_weight (list[float], optional) – The weight for each class.
Returns: The calculated loss
Return type: torch.Tensor
-
mmdet.models.losses.
mask_cross_entropy
(pred, target, label, reduction='mean', avg_factor=None, class_weight=None)[source]¶ Calculate the CrossEntropy loss for masks.
Parameters: - pred (torch.Tensor) – The prediction with shape (N, C), C is the number of classes.
- target (torch.Tensor) – The learning label of the prediction.
- label (torch.Tensor) – label indicates the class label of the mask’s corresponding object. This will be used to select the mask of the class to which the object belongs when the mask prediction is not class-agnostic. - reduction (str, optional) – The method used to reduce the loss. Options are “none”, “mean” and “sum”.
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
- class_weight (list[float], optional) – The weight for each class.
Returns: The calculated loss
Return type: torch.Tensor
-
class
mmdet.models.losses.
CrossEntropyLoss
(use_sigmoid=False, use_mask=False, reduction='mean', class_weight=None, loss_weight=1.0)[source]¶ -
forward
(cls_score, label, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]¶ Forward function.
Parameters: - cls_score (torch.Tensor) – The prediction.
- label (torch.Tensor) – The learning label of the prediction.
- weight (torch.Tensor, optional) – Sample-wise loss weight.
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
- reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Options are “none”, “mean” and “sum”.
Returns: The calculated loss
Return type: torch.Tensor
-
-
mmdet.models.losses.
sigmoid_focal_loss
(pred, target, weight=None, gamma=2.0, alpha=0.25, reduction='mean', avg_factor=None)[source]¶ A wrapper of the CUDA version of Focal Loss.
Parameters: - pred (torch.Tensor) – The prediction with shape (N, C), C is the number of classes.
- target (torch.Tensor) – The learning label of the prediction.
- weight (torch.Tensor, optional) – Sample-wise loss weight.
- gamma (float, optional) – The gamma for calculating the modulating factor. Defaults to 2.0.
- alpha (float, optional) – A balanced form for Focal Loss. Defaults to 0.25.
- reduction (str, optional) – The method used to reduce the loss into a scalar. Defaults to ‘mean’. Options are “none”, “mean” and “sum”.
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
-
class
mmdet.models.losses.
FocalLoss
(use_sigmoid=True, gamma=2.0, alpha=0.25, reduction='mean', loss_weight=1.0)[source]¶ -
forward
(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]¶ Forward function.
Parameters: - pred (torch.Tensor) – The prediction.
- target (torch.Tensor) – The learning label of the prediction.
- weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
- reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Options are “none”, “mean” and “sum”.
Returns: The calculated loss
Return type: torch.Tensor
-
-
mmdet.models.losses.
smooth_l1_loss
(pred, target, beta=1.0)[source]¶ Smooth L1 loss.
Parameters: - pred (torch.Tensor) – The prediction.
- target (torch.Tensor) – The learning target of the prediction.
- beta (float, optional) – The threshold in the piecewise function. Defaults to 1.0.
Returns: Calculated loss
Return type: torch.Tensor
-
class
mmdet.models.losses.
SmoothL1Loss
(beta=1.0, reduction='mean', loss_weight=1.0)[source]¶ Smooth L1 loss.
Parameters: - beta (float, optional) – The threshold in the piecewise function. Defaults to 1.0.
- reduction (str, optional) – The method to reduce the loss. Options are “none”, “mean” and “sum”. Defaults to “mean”.
- loss_weight (float, optional) – The weight of loss.
-
forward
(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]¶ Forward function.
Parameters: - pred (torch.Tensor) – The prediction.
- target (torch.Tensor) – The learning target of the prediction.
- weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
- reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.
-
mmdet.models.losses.
balanced_l1_loss
(pred, target, beta=1.0, alpha=0.5, gamma=1.5, reduction='mean')[source]¶ Calculate balanced L1 loss.
Please see the paper Libra R-CNN: Towards Balanced Learning for Object Detection.
Parameters: - pred (torch.Tensor) – The prediction with shape (N, 4).
- target (torch.Tensor) – The learning target of the prediction with shape (N, 4).
- beta (float) – The loss is a piecewise function of prediction and target, and beta serves as a threshold for the difference between the prediction and target. Defaults to 1.0.
- alpha (float) – The denominator alpha in the balanced L1 loss. Defaults to 0.5.
- gamma (float) – The gamma in the balanced L1 loss. Defaults to 1.5.
- reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.
Returns: The calculated loss
Return type: torch.Tensor
-
class
mmdet.models.losses.
BalancedL1Loss
(alpha=0.5, gamma=1.5, beta=1.0, reduction='mean', loss_weight=1.0)[source]¶ Balanced L1 Loss.
arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019)
Parameters: - alpha (float) – The denominator alpha in the balanced L1 loss. Defaults to 0.5.
- gamma (float) – The gamma in the balanced L1 loss. Defaults to 1.5.
- beta (float, optional) – The loss is a piecewise function of prediction and target; beta serves as a threshold for the difference between the prediction and target. Defaults to 1.0.
- reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.
- loss_weight (float, optional) – The weight of the loss. Defaults to 1.0
-
forward
(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]¶ Forward function of loss.
Parameters: - pred (torch.Tensor) – The prediction with shape (N, 4).
- target (torch.Tensor) – The learning target of the prediction with shape (N, 4).
- weight (torch.Tensor, optional) – Sample-wise loss weight with shape (N, ).
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
- reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Options are “none”, “mean” and “sum”.
Returns: The calculated loss
Return type: torch.Tensor
-
class
mmdet.models.losses.
MSELoss
(reduction='mean', loss_weight=1.0)[source]¶ MSELoss.
Parameters: - reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.
- loss_weight (float, optional) – The weight of the loss. Defaults to 1.0
-
forward
(pred, target, weight=None, avg_factor=None)[source]¶ Forward function of loss.
Parameters: - pred (torch.Tensor) – The prediction.
- target (torch.Tensor) – The learning target of the prediction.
- weight (torch.Tensor, optional) – Weight of the loss for each prediction. Defaults to None.
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
Returns: The calculated loss
Return type: torch.Tensor
-
mmdet.models.losses.
iou_loss
(pred, target, eps=1e-06)[source]¶ IoU loss.
Computing the IoU loss between a set of predicted bboxes and target bboxes. The loss is calculated as negative log of IoU.
Parameters: - pred (torch.Tensor) – Predicted bboxes of format (x1, y1, x2, y2), shape (n, 4).
- target (torch.Tensor) – Corresponding gt bboxes, shape (n, 4).
- eps (float) – Eps to avoid log(0).
Returns: Loss tensor.
Return type: torch.Tensor
-
mmdet.models.losses.
bounded_iou_loss
(pred, target, beta=0.2, eps=0.001)[source]¶ BIoULoss.
This is an implementation of the paper Improving Object Localization with Fitness NMS and Bounded IoU Loss.
Parameters: - pred (torch.Tensor) – Predicted bboxes.
- target (torch.Tensor) – Target bboxes.
- beta (float) – beta parameter in smoothl1.
- eps (float) – eps to avoid NaN.
-
class
mmdet.models.losses.
IoULoss
(eps=1e-06, reduction='mean', loss_weight=1.0)[source]¶ IoULoss.
Computing the IoU loss between a set of predicted bboxes and target bboxes.
Parameters: - eps (float) – Eps to avoid log(0).
- reduction (str) – Options are “none”, “mean” and “sum”.
- loss_weight (float) – Weight of loss.
-
forward
(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]¶ Forward function.
Parameters: - pred (torch.Tensor) – The prediction.
- target (torch.Tensor) – The learning target of the prediction.
- weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
- reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None. Options are “none”, “mean” and “sum”.
-
class
mmdet.models.losses.
BoundedIoULoss
(beta=0.2, eps=0.001, reduction='mean', loss_weight=1.0)[source]¶
-
class
mmdet.models.losses.
GHMC
(bins=10, momentum=0, use_sigmoid=True, loss_weight=1.0)[source]¶ GHM Classification Loss.
Details of the theorem can be viewed in the paper Gradient Harmonized Single-stage Detector.
Parameters: - bins (int) – Number of the unit regions for distribution calculation.
- momentum (float) – The parameter for moving average.
- use_sigmoid (bool) – Can only be True for BCE-based loss now.
- loss_weight (float) – The weight of the total GHM-C loss.
-
forward
(pred, target, label_weight, *args, **kwargs)[source]¶ Calculate the GHM-C loss.
Parameters: - pred (float tensor of size [batch_num, class_num]) – The direct prediction of classification fc layer.
- target (float tensor of size [batch_num, class_num]) – Binary class target for each sample.
- label_weight (float tensor of size [batch_num, class_num]) – The value is 1 if the sample is valid and 0 if ignored.
Returns: The gradient harmonized loss.
-
class
mmdet.models.losses.
GHMR
(mu=0.02, bins=10, momentum=0, loss_weight=1.0)[source]¶ GHM Regression Loss.
Details of the theorem can be viewed in the paper Gradient Harmonized Single-stage Detector.
Parameters: - mu (float) – The parameter for the Authentic Smooth L1 loss.
- bins (int) – Number of the unit regions for distribution calculation.
- momentum (float) – The parameter for moving average.
- loss_weight (float) – The weight of the total GHM-R loss.
-
forward
(pred, target, label_weight, avg_factor=None)[source]¶ Calculate the GHM-R loss.
Parameters: - pred (float tensor of size [batch_num, 4 (* class_num)]) – The prediction of box regression layer. Channel number can be 4 or 4 * class_num depending on whether it is class-agnostic.
- target (float tensor of size [batch_num, 4 (* class_num)]) – The target regression values with the same size of pred.
- label_weight (float tensor of size [batch_num, 4 (* class_num)]) – The weight of each sample, 0 if ignored.
Returns: The gradient harmonized loss.
-
mmdet.models.losses.
reduce_loss
(loss, reduction)[source]¶ Reduce loss as specified.
Parameters: - loss (Tensor) – Elementwise loss tensor.
- reduction (str) – Options are “none”, “mean” and “sum”.
Returns: Reduced loss tensor.
Return type: Tensor
-
mmdet.models.losses.
weight_reduce_loss
(loss, weight=None, reduction='mean', avg_factor=None)[source]¶ Apply element-wise weight and reduce loss.
Parameters: - loss (Tensor) – Element-wise loss.
- weight (Tensor) – Element-wise weights.
- reduction (str) – Same as built-in losses of PyTorch.
- avg_factor (float) – Average factor when computing the mean of losses.
Returns: Processed loss values.
Return type: Tensor
-
mmdet.models.losses.
weighted_loss
(loss_func)[source]¶ Create a weighted version of a given loss function.
To use this decorator, the loss function must have the signature like loss_func(pred, target, **kwargs). The function only needs to compute element-wise loss without any reduction. This decorator will add weight and reduction arguments to the function. The decorated function will have the signature like loss_func(pred, target, weight=None, reduction=’mean’, avg_factor=None, **kwargs).
Example: >>> import torch
>>> @weighted_loss
>>> def l1_loss(pred, target):
>>>     return (pred - target).abs()
>>> pred = torch.Tensor([0, 2, 3])
>>> target = torch.Tensor([1, 1, 1])
>>> weight = torch.Tensor([1, 0, 1])
>>> l1_loss(pred, target)
tensor(1.3333)
>>> l1_loss(pred, target, weight)
tensor(1.)
>>> l1_loss(pred, target, reduction='none')
tensor([1., 1., 2.])
>>> l1_loss(pred, target, weight, avg_factor=2)
tensor(1.5000)
-
class
mmdet.models.losses.
L1Loss
(reduction='mean', loss_weight=1.0)[source]¶ L1 loss.
Parameters: - reduction (str, optional) – The method to reduce the loss. Options are “none”, “mean” and “sum”.
- loss_weight (float, optional) – The weight of loss.
-
forward
(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]¶ Forward function.
Parameters: - pred (torch.Tensor) – The prediction.
- target (torch.Tensor) – The learning target of the prediction.
- weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
- reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.
-
mmdet.models.losses.
l1_loss
(pred, target)[source]¶ L1 loss.
Parameters: - pred (torch.Tensor) – The prediction.
- target (torch.Tensor) – The learning target of the prediction.
Returns: Calculated loss
Return type: torch.Tensor
-
mmdet.models.losses.
isr_p
(cls_score, bbox_pred, bbox_targets, rois, sampling_results, loss_cls, bbox_coder, k=2, bias=0, num_class=80)[source]¶ Importance-based Sample Reweighting (ISR_P), positive part.
Parameters: - cls_score (Tensor) – Predicted classification scores.
- bbox_pred (Tensor) – Predicted bbox deltas.
- bbox_targets (tuple[Tensor]) – A tuple of bbox targets, which are labels, label_weights, bbox_targets and bbox_weights, respectively.
- rois (Tensor) – Anchors (single_stage) in shape (n, 4) or RoIs (two_stage) in shape (n, 5).
- sampling_results (obj) – Sampling results.
- loss_cls (func) – Classification loss func of the head.
- bbox_coder (obj) – BBox coder of the head.
- k (float) – Power of the non-linear mapping.
- bias (float) – Shift of the non-linear mapping.
- num_class (int) – Number of classes, default: 80.
Returns: (labels, imp_based_label_weights, bbox_targets, bbox_target_weights)
Return type: tuple([Tensor])
-
mmdet.models.losses.
carl_loss
(cls_score, labels, bbox_pred, bbox_targets, loss_bbox, k=1, bias=0.2, avg_factor=None, sigmoid=False, num_class=80)[source]¶ Classification-Aware Regression Loss (CARL).
Parameters: - cls_score (Tensor) – Predicted classification scores.
- labels (Tensor) – Targets of classification.
- bbox_pred (Tensor) – Predicted bbox deltas.
- bbox_targets (Tensor) – Target of bbox regression.
- loss_bbox (func) – Regression loss func of the head.
- k (float) – Power of the non-linear mapping.
- bias (float) – Shift of the non-linear mapping.
- avg_factor (int) – Average factor used in regression loss.
- sigmoid (bool) – Activation of the classification score.
- num_class (int) – Number of classes, default: 80.
Returns: CARL loss dict.
Return type: dict
-
class
mmdet.models.losses.
AssociativeEmbeddingLoss
(pull_weight=0.25, push_weight=0.25)[source]¶ Associative Embedding Loss.
More details can be found in Associative Embedding and CornerNet. Code is modified from kp_utils.py.
Parameters: - pull_weight (float) – Loss weight for corners from same object.
- push_weight (float) – Loss weight for corners from different objects.
-
class
mmdet.models.losses.
GaussianFocalLoss
(alpha=2.0, gamma=4.0, reduction='mean', loss_weight=1.0)[source]¶ GaussianFocalLoss is a variant of focal loss.
More details can be found in the paper; code is modified from kp_utils.py. Note that the target in GaussianFocalLoss is a gaussian heatmap, not a 0/1 binary target.
Parameters: - alpha (float) – Power of prediction.
- gamma (float) – Power of target for negative samples.
- reduction (str) – Options are “none”, “mean” and “sum”.
- loss_weight (float) – Loss weight of current loss.
-
forward
(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]¶ Forward function.
Parameters: - pred (torch.Tensor) – The prediction.
- target (torch.Tensor) – The learning target of the prediction in gaussian distribution.
- weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
- reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.
-
class
mmdet.models.losses.
QualityFocalLoss
(use_sigmoid=True, beta=2.0, reduction='mean', loss_weight=1.0)[source]¶ Quality Focal Loss (QFL) is a variant of Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection.
Parameters: - use_sigmoid (bool) – Whether sigmoid operation is conducted in QFL. Defaults to True.
- beta (float) – The beta parameter for calculating the modulating factor. Defaults to 2.0.
- reduction (str) – Options are “none”, “mean” and “sum”.
- loss_weight (float) – Loss weight of current loss.
-
forward
(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]¶ Forward function.
Parameters: - pred (torch.Tensor) – Predicted joint representation of classification and quality (IoU) estimation with shape (N, C), C is the number of classes.
- target (tuple([torch.Tensor])) – Target category label with shape (N,) and target quality label with shape (N,).
- weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
- reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.
-
class
mmdet.models.losses.
DistributionFocalLoss
(reduction='mean', loss_weight=1.0)[source]¶ Distribution Focal Loss (DFL) is a variant of Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection.
Parameters: - reduction (str) – Options are ‘none’, ‘mean’ and ‘sum’.
- loss_weight (float) – Loss weight of current loss.
-
forward
(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]¶ Forward function.
Parameters: - pred (torch.Tensor) – Predicted general distribution of bounding boxes (before softmax) with shape (N, n+1), n is the max value of the integral set {0, …, n} in paper.
- target (torch.Tensor) – Target distance label for bounding boxes with shape (N,).
- weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.
- avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
- reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.
utils¶
-
class
mmdet.models.utils.
ResLayer
(block, inplanes, planes, num_blocks, stride=1, avg_down=False, conv_cfg=None, norm_cfg={'type': 'BN'}, downsample_first=True, **kwargs)[source]¶ ResLayer to build ResNet style backbone.
Parameters: - block (nn.Module) – block used to build ResLayer.
- inplanes (int) – inplanes of block.
- planes (int) – planes of block.
- num_blocks (int) – number of blocks.
- stride (int) – stride of the first block. Default: 1
- avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck. Default: False
- conv_cfg (dict) – dictionary to construct and config conv layer. Default: None
- norm_cfg (dict) – dictionary to construct and config norm layer. Default: dict(type=’BN’)
- downsample_first (bool) – Downsample at the first block or last block. False for Hourglass, True for ResNet. Default: True
-
mmdet.models.utils.
gaussian_radius
(det_size, min_overlap)[source]¶ Generate 2D gaussian radius.
This function is modified from the official github repo.
Given min_overlap, the radius can be computed by solving a quadratic equation according to Vieta’s formulas. There are 3 cases for computing the gaussian radius; details follow:
- Explanation of figure: lt and br indicate the left-top and bottom-right corners of the ground truth box. x indicates the generated corner at the limited position when radius=r.
- Case1: one corner is inside the gt box and the other is outside.

|<   width   >|

lt-+----------+         -
|  |          |         ^
+--x----------+--+
|  |          |  |
|  |          |  |    height
|  | overlap  |  |
|  |          |  |
|  |          |  |      v
+--+---------br--+      -
   |          |  |
   +----------+--x
To ensure the IoU of the generated box and the gt box is larger than min_overlap:
\[\begin{split}\cfrac{(w-r)*(h-r)}{w*h+(w+h)r-r^2} \ge {iou} \quad\Rightarrow\quad {r^2-(w+h)r+\cfrac{1-iou}{1+iou}*w*h} \ge 0 \\ {a} = 1,\quad{b} = {-(w+h)},\quad{c} = {\cfrac{1-iou}{1+iou}*w*h} \\ {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a}\end{split}\]
- Case2: both corners are inside the gt box.
|<   width   >|

lt-+----------+         -
|  |          |         ^
|  +--x-------+         |
|  |  |       |         |
|  |  |overlap|       height
|  |  |       |         |
|  |  +-------x--+      |
|  |          |  |      v
+--+----------+-br      -
To ensure the IoU of the generated box and the gt box is larger than min_overlap:
\[\begin{split}\cfrac{(w-2*r)*(h-2*r)}{w*h} \ge {iou} \quad\Rightarrow\quad {4r^2-2(w+h)r+(1-iou)*w*h} \ge 0 \\ {a} = 4,\quad {b} = {-2(w+h)},\quad {c} = {(1-iou)*w*h} \\ {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a}\end{split}\]
- Case3: both corners are outside the gt box.
|<   width   >|

x--+----------------+
|  |                |
+-lt-------------+  |   -
|  |             |  |   ^
|  |             |  |
|  |   overlap   |  | height
|  |             |  |
|  |             |  |   v
|  +------------br--+   -
|  |                |
+--+----------------+--x
To ensure the IoU of the generated box and the gt box is larger than min_overlap:
\[\begin{split}\cfrac{w*h}{(w+2*r)*(h+2*r)} \ge {iou} \quad\Rightarrow\quad {4*iou*r^2+2*iou*(w+h)r+(iou-1)*w*h} \le 0 \\ {a} = {4*iou},\quad {b} = {2*iou*(w+h)},\quad {c} = {(iou-1)*w*h} \\ {r} \le \cfrac{-b+\sqrt{b^2-4*a*c}}{2*a}\end{split}\]
Parameters: - det_size (list[int]) – Shape of object.
- min_overlap (float) – Min IoU with ground truth for boxes generated by keypoints inside the gaussian kernel.
Returns: Radius of gaussian kernel.
Return type: radius (int)
-
mmdet.models.utils.
gen_gaussian_target
(heatmap, center, radius, k=1)[source]¶ Generate 2D gaussian heatmap.
Parameters: - heatmap (Tensor) – Input heatmap; the gaussian kernel will be overlaid on it, keeping the maximum value at each position.
- center (list[int]) – Coord of gaussian kernel’s center.
- radius (int) – Radius of gaussian kernel.
- k (int) – Coefficient of gaussian kernel. Default: 1.
Returns: Updated heatmap covered by gaussian kernel.
Return type: out_heatmap (Tensor)