- Origin Repo: DingXiaoH/RepVGG
- Code: repvgg.py
- Evaluate Transforms:

```python
# backend: pil
# input_size: 224x224
transforms = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])
```
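For reference, a minimal sketch of applying this preprocessing to a single image; `T` is assumed to be `torchvision.transforms`, and the image path is a placeholder rather than a file shipped with this repo:

```python
import torchvision.transforms as T
from PIL import Image

transforms = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")  # PIL backend, per the comment above
batch = transforms(img).unsqueeze(0)            # tensor of shape [1, 3, 224, 224]
```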
- Model Details:

| Model | Model Name | Params (M) | FLOPs (G) | Top-1 (%) | Top-5 (%) | Pretrained Model |
|-------|------------|------------|-----------|-----------|-----------|------------------|
| RepVGG-A0 | repvgg_a0 | 8.3 | 1.4 | 72.42 | 90.49 | Download |
| RepVGG-A1 | repvgg_a1 | 12.8 | 2.4 | 74.46 | 91.85 | Download |
| RepVGG-A2 | repvgg_a2 | 25.5 | 5.1 | 76.46 | 93.00 | Download |
| RepVGG-B0 | repvgg_b0 | 14.3 | 3.1 | 75.15 | 92.42 | Download |
| RepVGG-B1 | repvgg_b1 | 51.8 | 11.8 | 78.37 | 94.10 | Download |
| RepVGG-B2 | repvgg_b2 | 80.3 | 18.4 | 78.79 | 94.42 | Download |
| RepVGG-B3 | repvgg_b3 | 111.0 | 26.2 | 80.50 | 95.26 | Download |
| RepVGG-B1g2 | repvgg_b1g2 | 41.4 | 8.8 | 77.80 | 93.88 | Download |
| RepVGG-B1g4 | repvgg_b1g4 | 36.1 | 7.3 | 77.58 | 93.84 | Download |
| RepVGG-B2g4 | repvgg_b2g4 | 55.8 | 11.3 | 79.38 | 94.68 | Download |
| RepVGG-B3g4 | repvgg_b3g4 | 75.6 | 16.1 | 80.21 | 95.09 | Download |
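For reference, a minimal inference sketch; it assumes repvgg.py exposes factory functions matching the "Model Name" column (e.g. `repvgg_a0`) and that a pretrained checkpoint has been downloaded locally, so both names below are placeholders:

```python
import torch
from repvgg import repvgg_a0  # assumed factory name, per the table above

model = repvgg_a0()
state_dict = torch.load("repvgg_a0.pth", map_location="cpu")  # placeholder checkpoint path
model.load_state_dict(state_dict)
model.eval()

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)   # dummy batch matching the 224x224 eval input size
    logits = model(x)
    top1_class = logits.argmax(dim=1)
```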
- Citation:

```bibtex
@article{ding2021repvgg,
  title={RepVGG: Making VGG-style ConvNets Great Again},
  author={Ding, Xiaohan and Zhang, Xiangyu and Ma, Ningning and Han, Jungong and Ding, Guiguang and Sun, Jian},
  journal={arXiv preprint arXiv:2101.03697},
  year={2021}
}
```