Lianjun Wu1, Jiangxiao Han1, Zengqiang Zheng2, Xinggang Wang1 📧
1 School of EIC, HUST, 2 Wuhan Jingce Electronic Group Co., Ltd.
📧 corresponding author.
ECCV 2024
[2024-07-02] Co-Student has been accepted by ECCV 2024!
Sparsely Annotated Object Detection (SAOD) tackles the issue of incomplete labeling in object detection. Compared with Fully Annotated Object Detection (FAOD), SAOD is more complicated and challenging: unlabeled objects tend to provide wrong supervision to detectors during training, resulting in inferior performance for prevalent object detectors. Shrinking the performance gap between SAOD and FAOD contributes directly to reducing the labeling cost. Existing methods tend to exploit pseudo-labeling for unlabeled objects but suffer from two issues: (1) they fail to make full use of unlabeled objects mined from the student detector, and (2) the pseudo-labels contain much noise. To tackle these two issues, we introduce Co-Student, a novel framework that aims to bridge the gap between SAOD and FAOD by fully exploiting the pseudo-labels from both teacher and student detectors. The proposed Co-Student comprises a sophisticated teacher that denoises the pseudo-labels for unlabeled objects and two collaborative students that leverage strong and weak augmentations to mine pseudo-labels. The students exchange the denoised pseudo-labels and learn from each other through the consistency regularization brought by strong-weak augmentations. Without bells and whistles, the proposed Co-Student framework with the one-stage detector FCOS achieves state-of-the-art performance on the COCO dataset with sparse annotations under diverse settings. Compared to previous works, it obtains 1.0%~3.0% AP improvements under five sparse-annotation settings and reaches 95.1% of the performance of FCOS trained on the fully annotated COCO dataset.
This repository contains the code for our work "Co-Student: Collaborating Strong and Weak Students for Sparsely Annotated Object Detection", built on cvpods.
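At a high level, the training scheme described above can be summarized in a few lines. The sketch below is purely conceptual, with hypothetical names (`co_student_step`, `teacher_denoise`, and so on); it is not the repository's actual API:

```python
# Conceptual sketch of one Co-Student training iteration. All names here
# (co_student_step, teacher_denoise, ...) are hypothetical, not the
# repository's actual API; detectors and augmentations are passed in as
# callables so only the data flow is shown.
def co_student_step(teacher_denoise, student_weak, student_strong,
                    weak_aug, strong_aug, image, sparse_gt):
    weak_view, strong_view = weak_aug(image), strong_aug(image)

    # Each student mines candidate boxes for objects missing from the
    # sparse ground truth, under its own augmentation.
    candidates_w = student_weak.predict(weak_view)
    candidates_s = student_strong.predict(strong_view)

    # The teacher denoises the mined candidates before they are trusted
    # as pseudo-labels.
    pseudo_w = teacher_denoise(candidates_w)
    pseudo_s = teacher_denoise(candidates_s)

    # The students exchange denoised pseudo-labels: each is supervised by
    # the sparse ground truth plus the *other* student's mined objects, so
    # the strong-weak augmentation gap acts as consistency regularization.
    loss_weak = student_weak.loss(weak_view, sparse_gt + pseudo_s)
    loss_strong = student_strong.loss(strong_view, sparse_gt + pseudo_w)
    return loss_weak + loss_strong
```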
To retrain our model on MS COCO, first make sure the COCO dataset exists on your machine, or head to MS COCO to download it.
Assuming your COCO dataset is located at "/your-path/coco", the expected dataset structure is as follows:
/your-path/coco/
    annotations/
    train2017/
    test2017/
    val2017/
Link your COCO dataset to the CoStudent root:
ln -s "/your-path/coco/" "/path/CoStudent/datasets/coco"
(on Windows, with Administrator permissions: mklink /D "/path/CoStudent/datasets/coco" "/your-path/coco/")
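Optionally, you can sanity-check the resulting layout with a short Python snippet (this assumes the symlink above, so the dataset is visible at datasets/coco relative to the CoStudent root):

```python
from pathlib import Path

# Verify the expected COCO layout under the CoStudent root.
root = Path("datasets/coco")
for sub in ("annotations", "train2017", "test2017", "val2017"):
    status = "ok" if (root / sub).is_dir() else "MISSING"
    print(f"{root / sub}: {status}")
```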
Download the sparse annotations "missing_50p", "easy", "hard", and "extreme" from the Co-mining paper; they are publicly available. The "keep1" annotation comes from the authors of the SIOD paper and is available here.
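Once downloaded, a sparse annotation file should load like any COCO-format JSON. Here is a minimal check with pycocotools, assuming the file is placed under datasets/coco/annotations (the file name below is illustrative, not necessarily the name of the file you downloaded):

```python
from pycocotools.coco import COCO

# Illustrative file name; substitute the sparse-annotation file you downloaded.
ann_file = "datasets/coco/annotations/missing_50p.json"
coco = COCO(ann_file)
print(f"{len(coco.getImgIds())} images, {len(coco.getAnnIds())} annotations")
```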
Step 1. Create a conda environment and activate it.
conda create --name cvpods python=3.6 -y
conda activate cvpods
Step 2. Install the versions of torch and torchvision that match the CUDA compilation tools you use.
First, check the version of your CUDA compilation tools:
nvcc -V
Assuming your CUDA compilation tools version is 11.1, run:
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
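You can then verify that the CUDA-enabled build was installed correctly:

```python
import torch
import torchvision

# Both versions should report the +cu111 build, and CUDA should be available.
print(torch.__version__, torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
```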
Step 3. Install the other required packages:
pip install -r requirements.txt
Step 4. Build cvpods as follows:
cd /path/CoStudent
pip install -e .
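If the editable install succeeded, cvpods should now be importable from anywhere in the environment:

```python
# Quick sanity check of the build; prints the installed package location.
import cvpods
print(cvpods.__file__)
```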
You can train our method Co-Student on COCO-miss50p for 12 epochs with the following command:
bash tools/train.sh
@inproceedings{wu2024CoStudent,
  title={Co-Student: Collaborating Strong and Weak Students for Sparsely Annotated Object Detection},
  author={Wu, Lianjun and Han, Jiangxiao and Zheng, Zengqiang and Wang, Xinggang},
  booktitle={ECCV},
  year={2024}
}
Released under the MIT License.