
COCO Object detection with UniFormer

We currently release the code and models for:

  • Mask R-CNN

  • Cascade Mask R-CNN

Updates

05/22/2023

Lightweight models with Mask R-CNN are released.

01/18/2022

  1. Models with Mask R-CNN are released.

  2. Models with Cascade Mask R-CNN are released.

Model Zoo

The following models and logs can be downloaded from Google Drive: total_models, total_logs.

We also release the models on Baidu Cloud: total_models (5v6i), total_logs (wr74).

Note: the -Sh14 and -Bh14 suffixes denote the small and base backbones with hybrid MHRA and window size 14 (see the Usage notes below).

Mask R-CNN

| Backbone | Lr Schd | box mAP | mask mAP | #params | FLOPs | Model | Log | Shell |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UniFormer-XXS | 1x | 42.8 | 39.2 | 29.4M | - | google | google | run.sh/config |
| UniFormer-XS | 1x | 44.6 | 40.9 | 35.6M | - | google | google | run.sh/config |
| UniFormer-Sh14 | 1x | 45.6 | 41.6 | 41M | 269G | google | google | run.sh/config |
| UniFormer-Sh14 | 3x+MS | 48.2 | 43.4 | 41M | 269G | google | google | run.sh/config |
| UniFormer-Bh14 | 1x | 47.4 | 43.1 | 69M | 399G | google | google | run.sh/config |
| UniFormer-Bh14 | 3x+MS | 50.3 | 44.8 | 69M | 399G | google | google | run.sh/config |

Cascade Mask R-CNN

| Backbone | Lr Schd | box mAP | mask mAP | #params | FLOPs | Model | Log | Shell |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UniFormer-Sh14 | 3x+MS | 52.1 | 45.2 | 79M | 747G | google | google | run.sh/config |
| UniFormer-Bh14 | 3x+MS | 53.8 | 46.4 | 107M | 878G | google | google | run.sh/config |

Usage

Installation

Please refer to get_started for installation and dataset preparation.
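Once the environment is set up, a quick sanity check can confirm that the core packages import correctly (a minimal sketch; the exact required versions are listed in get_started):

    # Environment sanity check (illustrative; see get_started for exact version requirements).
    import torch
    import mmcv
    import mmdet

    print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
    print('mmcv:', mmcv.__version__)
    print('mmdet:', mmdet.__version__)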

Training

  1. Download the pretrained models from our repository.

  2. Simply run the training scripts in exp as follows:

    bash ./exp/mask_rcnn_1x_hybrid_small/run.sh

    Or you can train other models as follows:

    # single-gpu training
    python tools/train.py <CONFIG_FILE> --cfg-options model.backbone.pretrained_path=<PRETRAIN_MODEL> [other optional arguments]
    
    # multi-gpu training
    tools/dist_train.sh <CONFIG_FILE> <GPU_NUM> --cfg-options model.backbone.pretrained_path=<PRETRAIN_MODEL> [other optional arguments] 
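
Equivalently, a model can be trained from Python using the standard mmdetection 2.x training APIs. The sketch below is illustrative only; the config path and checkpoint name are hypothetical placeholders, not files guaranteed to exist in the repository:

    # Programmatic training sketch (assumes the mmdet 2.x API this repo builds on;
    # the config and checkpoint paths below are hypothetical placeholders).
    from mmcv import Config
    from mmdet.apis import train_detector
    from mmdet.datasets import build_dataset
    from mmdet.models import build_detector

    cfg = Config.fromfile('exp/mask_rcnn_1x_hybrid_small/config.py')  # hypothetical path
    cfg.model.backbone.pretrained_path = 'uniformer_small.pth'        # hypothetical pretrained model
    cfg.work_dir = './work_dirs/mask_rcnn_1x_hybrid_small'
    cfg.gpu_ids = [0]  # single-GPU training

    model = build_detector(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
    datasets = [build_dataset(cfg.data.train)]
    model.CLASSES = datasets[0].CLASSES  # attach class names for logging/visualization
    train_detector(model, datasets, cfg, distributed=False, validate=True)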

[Note]:

  • We use hybrid MHRA to reduce training cost, and set the corresponding hyperparameters in config.py (see the combined sketch after this list):

    window: False, # whether to use window MHRA
    hybrid: True, # whether to use hybrid MHRA
    window_size: 14, # size of window (>=14)
  • To avoid running out of memory, we use torch.utils.checkpoint in config.py:

    use_checkpoint=True, # whether to use checkpoint
    checkpoint_num=[0, 0, 8, 0], # number of checkpointed blocks in each stage
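
Putting the two notes together, the relevant backbone fields in config.py would look roughly as follows (a sketch; all other keys are omitted):

    # Sketch of the relevant backbone fields in config.py (all other keys omitted;
    # values mirror the notes above).
    model = dict(
        backbone=dict(
            window=False,                 # whether to use window MHRA
            hybrid=True,                  # whether to use hybrid MHRA
            window_size=14,               # size of window (>=14)
            use_checkpoint=True,          # whether to use torch.utils.checkpoint
            checkpoint_num=[0, 0, 8, 0],  # number of checkpointed blocks in each stage
        ),
    )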

Testing

# single-gpu testing
python tools/test.py <CONFIG_FILE> <DET_CHECKPOINT_FILE> --eval bbox segm

# multi-gpu testing
tools/dist_test.sh <CONFIG_FILE> <DET_CHECKPOINT_FILE> <GPU_NUM> --eval bbox segm
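
For a quick qualitative check on a single image, mmdetection's high-level inference API can be used as well (a sketch assuming the mmdet 2.x interface; both paths are hypothetical placeholders):

    # Single-image inference sketch (mmdet 2.x high-level API; paths are placeholders).
    from mmdet.apis import init_detector, inference_detector

    config_file = 'exp/mask_rcnn_1x_hybrid_small/config.py'  # hypothetical config path
    checkpoint_file = 'mask_rcnn_1x_hybrid_small.pth'        # hypothetical checkpoint
    model = init_detector(config_file, checkpoint_file, device='cuda:0')
    result = inference_detector(model, 'demo.jpg')
    model.show_result('demo.jpg', result, out_file='demo_result.jpg')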

Acknowledgement

This repository is built on top of the mmdetection and Swin Transformer repositories.