
[ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment


Once for All: Train One Network and Specialize it for Efficient Deployment [arXiv] [Slides] [Video]

@inproceedings{
  cai2020once,
  title={Once for All: Train One Network and Specialize it for Efficient Deployment},
  author={Han Cai and Chuang Gan and Tianzhe Wang and Zhekai Zhang and Song Han},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://arxiv.org/pdf/1908.09791.pdf}
}

[News] First place in the CVPR 2020 Low-Power Computer Vision Challenge, CPU detection and FPGA track.

[News] OFA-ResNet50 is released.

[News] The hands-on tutorial of OFA is released!

[News] OFA is available via pip! Run pip install ofa to install the whole OFA codebase.

[News] First place in the 4th Low-Power Computer Vision Challenge, both classification and detection track.

[News] First place in the 3rd Low-Power Computer Vision Challenge, DSP track at ICCV’19 using the Once-for-all Network.

Train once, specialize for many deployment scenarios

80% top1 ImageNet accuracy under mobile setting

Consistently outperforms MobileNetV3 on diverse hardware platforms

OFA-ResNet50 [How to use]

How to use / evaluate OFA Networks

Use

""" OFA Networks.
    Example: ofa_network = ofa_net('ofa_mbv3_d234_e346_k357_w1.0', pretrained=True)
""" 
from ofa.model_zoo import ofa_net
ofa_network = ofa_net(net_id, pretrained=True)
    
# Randomly sample sub-networks from OFA network
ofa_network.sample_active_subnet()
random_subnet = ofa_network.get_active_subnet(preserve_weight=True)
    
# Manually set the sub-network
ofa_network.set_active_subnet(ks=7, e=6, d=4)
manual_subnet = ofa_network.get_active_subnet(preserve_weight=True)

If the above script fails to download the pretrained weights, you can download them manually from Google Drive and put them under $HOME/.torch/ofa_nets/.

Evaluate

```shell
python eval_ofa_net.py --path 'Your path to imagenet' --net ofa_mbv3_d234_e346_k357_w1.0
```

| OFA Network | Design Space | Resolution | Width Multiplier | Depth | Expand Ratio | Kernel Size |
|---|---|---|---|---|---|---|
| ofa_resnet50 | ResNet50D | 128 - 224 | 0.65, 0.8, 1.0 | 0, 1, 2 | 0.2, 0.25, 0.35 | 3 |
| ofa_mbv3_d234_e346_k357_w1.0 | MobileNetV3 | 128 - 224 | 1.0 | 2, 3, 4 | 3, 4, 6 | 3, 5, 7 |
| ofa_mbv3_d234_e346_k357_w1.2 | MobileNetV3 | 160 - 224 | 1.2 | 2, 3, 4 | 3, 4, 6 | 3, 5, 7 |
| ofa_proxyless_d234_e346_k357_w1.3 | ProxylessNAS | 128 - 224 | 1.3 | 2, 3, 4 | 3, 4, 6 | 3, 5, 7 |
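As a rough sanity check on how large these design spaces are, the MobileNetV3-based space alone contains on the order of 2 × 10^19 sub-networks, roughly matching the figure reported in the paper. A minimal sketch of the combinatorics (the 5-unit backbone structure is an assumption taken from the paper, not from this table):

```python
# Count sub-networks in the OFA-MobileNetV3 design space.
# Assumptions (from the paper): the backbone has 5 units, each unit
# picks a depth d in {2, 3, 4}, and every kept layer independently
# picks one of 3 kernel sizes x 3 expand ratios.
depths = [2, 3, 4]
layer_choices = 3 * 3  # kernel size in {3, 5, 7} x expand ratio in {3, 4, 6}
num_units = 5

per_unit = sum(layer_choices ** d for d in depths)  # 81 + 729 + 6561 = 7371
total = per_unit ** num_units

print(f"{total:.2e}")  # on the order of 2e19 (before counting input resolutions)
```

Input resolution (128 - 224) multiplies this further, which is why exhaustively evaluating every sub-network is infeasible and accuracy predictors are used instead.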

How to use / evaluate OFA Specialized Networks

Use

""" OFA Specialized Networks.
Example: net, image_size = ofa_specialized('flops@[email protected]_finetune@75', pretrained=True)
""" 
from ofa.model_zoo import ofa_specialized
net, image_size = ofa_specialized(net_id, pretrained=True)

If the above script fails to download the pretrained weights, you can download them manually from Google Drive and put them under $HOME/.torch/ofa_specialized/.

Evaluate

```shell
python eval_specialized_net.py --path 'Your path to imagenet' --net flops@595M_top1@80.0_finetune@75
```

| OFA Specialized Sub-nets | Top-1 (%) | Top-5 (%) | #Params | #MACs |
|---|---|---|---|---|
| **ResNet50 Design Space** | | | | |
| [email protected][email protected] | 79.8 | 94.7 | 30.9M | 4.1B |
| [email protected][email protected] | 79.7 | 94.7 | 26.5M | 3.7B |
| [email protected][email protected] | 79.3 | 94.5 | 28.7M | 3.0B |
| [email protected][email protected] | 79.0 | 94.2 | 29.0M | 2.4B |
| [email protected][email protected] | 78.3 | 94.0 | 20.7M | 1.8B |
| [email protected][email protected]_finetune@25 | 77.1 | 93.3 | 19.3M | 1.2B |
| [email protected][email protected]_finetune@25 | 76.3 | 92.9 | 14.5M | 0.9B |
| [email protected][email protected]_finetune@25 | 75.0 | 92.1 | 9.6M | 0.6B |
| **FLOPs** | | | | |
| flops@595M_top1@80.0_finetune@75 | 80.0 | 94.9 | 9.1M | 595M |
| flops@482M_top1@79.6_finetune@75 | 79.6 | 94.8 | 9.1M | 482M |
| flops@389M_top1@79.1_finetune@75 | 79.1 | 94.5 | 8.4M | 389M |
| **LG G8** | | | | |
| LG-G8_lat@[email protected]_finetune@25 | 76.4 | 93.0 | 5.8M | 230M |
| LG-G8_lat@[email protected]_finetune@25 | 74.7 | 92.0 | 5.8M | 151M |
| LG-G8_lat@[email protected]_finetune@25 | 73.0 | 91.1 | 5.0M | 103M |
| LG-G8_lat@[email protected]_finetune@25 | 71.1 | 89.7 | 4.1M | 74M |
| **Samsung S7 Edge** | | | | |
| s7edge_lat@[email protected]_finetune@25 | 76.3 | 92.9 | 6.4M | 219M |
| s7edge_lat@[email protected]_finetune@25 | 74.7 | 92.0 | 4.6M | 145M |
| s7edge_lat@[email protected]_finetune@25 | 73.1 | 91.0 | 4.7M | 96M |
| s7edge_lat@[email protected]_finetune@25 | 70.5 | 89.5 | 3.8M | 66M |
| **Samsung Note8** | | | | |
| note8_lat@[email protected]_finetune@25 | 76.1 | 92.7 | 5.3M | 220M |
| note8_lat@[email protected]_finetune@25 | 74.9 | 92.1 | 6.0M | 164M |
| note8_lat@[email protected]_finetune@25 | 72.8 | 90.8 | 4.6M | 101M |
| note8_lat@[email protected]_finetune@25 | 70.4 | 89.3 | 4.3M | 67M |
| **Samsung Note10** | | | | |
| note10_lat@[email protected]_finetune@75 | 80.2 | 95.1 | 9.1M | 743M |
| note10_lat@[email protected]_finetune@75 | 79.7 | 94.9 | 9.1M | 554M |
| note10_lat@[email protected]_finetune@75 | 79.3 | 94.5 | 9.0M | 457M |
| note10_lat@[email protected]_finetune@75 | 78.4 | 94.2 | 7.5M | 339M |
| note10_lat@[email protected]_finetune@25 | 76.6 | 93.1 | 5.9M | 237M |
| note10_lat@[email protected]_finetune@25 | 75.5 | 92.3 | 4.9M | 163M |
| note10_lat@[email protected]_finetune@25 | 73.6 | 91.2 | 4.3M | 110M |
| note10_lat@[email protected]_finetune@25 | 71.4 | 89.8 | 3.8M | 79M |
| **Google Pixel1** | | | | |
| pixel1_lat@[email protected]_finetune@75 | 80.1 | 95.0 | 9.2M | 642M |
| pixel1_lat@[email protected]_finetune@75 | 79.8 | 94.9 | 9.2M | 593M |
| pixel1_lat@[email protected]_finetune@75 | 78.7 | 94.2 | 8.2M | 356M |
| pixel1_lat@[email protected]_finetune@75 | 76.9 | 93.3 | 5.8M | 230M |
| pixel1_lat@[email protected]_finetune@25 | 74.9 | 92.1 | 6.0M | 162M |
| pixel1_lat@[email protected]_finetune@25 | 73.3 | 91.0 | 5.2M | 109M |
| pixel1_lat@[email protected]_finetune@25 | 71.4 | 89.8 | 4.3M | 77M |
| **Google Pixel2** | | | | |
| pixel2_lat@[email protected]_finetune@25 | 75.8 | 92.7 | 5.8M | 208M |
| pixel2_lat@[email protected]_finetune@25 | 74.7 | 91.9 | 4.7M | 166M |
| pixel2_lat@[email protected]_finetune@25 | 73.4 | 91.1 | 5.1M | 113M |
| pixel2_lat@[email protected]_finetune@25 | 71.5 | 90.1 | 4.1M | 79M |
| **1080ti GPU (Batch Size 64)** | | | | |
| 1080ti_gpu64@[email protected]_finetune@25 | 76.4 | 93.0 | 6.5M | 397M |
| 1080ti_gpu64@[email protected]_finetune@25 | 75.3 | 92.4 | 5.2M | 313M |
| 1080ti_gpu64@[email protected]_finetune@25 | 73.8 | 91.3 | 6.0M | 226M |
| 1080ti_gpu64@[email protected]_finetune@25 | 72.6 | 90.9 | 5.9M | 165M |
| **V100 GPU (Batch Size 64)** | | | | |
| v100_gpu64@[email protected]_finetune@25 | 76.1 | 92.7 | 6.2M | 352M |
| v100_gpu64@[email protected]_finetune@25 | 75.3 | 92.4 | 5.2M | 313M |
| v100_gpu64@[email protected]_finetune@25 | 73.0 | 91.1 | 4.9M | 179M |
| v100_gpu64@[email protected]_finetune@25 | 71.6 | 90.3 | 5.2M | 141M |
| **Jetson TX2 GPU (Batch Size 16)** | | | | |
| tx2_gpu16@[email protected]_finetune@25 | 75.8 | 92.7 | 6.2M | 349M |
| tx2_gpu16@[email protected]_finetune@25 | 75.4 | 92.4 | 5.2M | 313M |
| tx2_gpu16@[email protected]_finetune@25 | 72.9 | 91.1 | 4.9M | 179M |
| tx2_gpu16@[email protected]_finetune@25 | 70.3 | 89.4 | 4.3M | 121M |
| **Intel Xeon CPU with MKL-DNN (Batch Size 1)** | | | | |
| cpu_lat@[email protected]_finetune@25 | 75.7 | 92.6 | 4.9M | 365M |
| cpu_lat@[email protected]_finetune@25 | 74.6 | 92.0 | 4.9M | 301M |
| cpu_lat@[email protected]_finetune@25 | 72.0 | 90.4 | 4.4M | 160M |
| cpu_lat@[email protected]_finetune@25 | 71.1 | 89.9 | 4.2M | 143M |
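The sub-net ids above follow a consistent naming pattern: a deployment objective (flops, or a per-device latency tag), a constraint value, the target top-1 accuracy, and the number of fine-tuning epochs, joined by `_` and `@`. (Several ids on this page were mangled into `[email protected]` by an email-obfuscation filter during scraping; the constraint values in those ids are not recoverable here.) Assuming the pattern `{objective}@{value}_top1@{acc}_finetune@{epochs}`, a small hypothetical parser looks like:

```python
def parse_subnet_id(net_id):
    """Split an OFA specialized-net id of the (assumed) form
    '{objective}@{value}_top1@{acc}_finetune@{epochs}' into its parts."""
    head, epochs = net_id.rsplit("_finetune@", 1)
    head, acc = head.rsplit("_top1@", 1)
    objective, value = head.rsplit("@", 1)
    return {
        "objective": objective,        # e.g. 'flops' or 'note10_lat'
        "constraint": value,           # e.g. '595M'; latency values are hypothetical
        "top1": float(acc),
        "finetune_epochs": int(epochs),
    }

parse_subnet_id("flops@595M_top1@80.0_finetune@75")
```

Such a helper is only a convenience for bookkeeping experiment results; the ids themselves are passed verbatim to `ofa_specialized`.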

How to train OFA Networks

```shell
mpirun -np 32 -H <server1_ip>:8,<server2_ip>:8,<server3_ip>:8,<server4_ip>:8 \
    -bind-to none -map-by slot \
    -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \
    python train_ofa_net.py
```

or

```shell
horovodrun -np 32 -H <server1_ip>:8,<server2_ip>:8,<server3_ip>:8,<server4_ip>:8 \
    python train_ofa_net.py
```
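Conceptually, training the OFA network means repeatedly sampling a sub-network configuration at each step and updating the shared weights (the progressive-shrinking schedule in the paper controls which choices are enabled when). The sampling itself is simple; a minimal, framework-free sketch for the MobileNetV3-style space (the 5-unit structure and per-unit layer budget are assumptions taken from the design-space table above):

```python
import random

KS_CHOICES = [3, 5, 7]      # kernel sizes
EXPAND_CHOICES = [3, 4, 6]  # expand ratios
DEPTH_CHOICES = [2, 3, 4]   # layers kept per unit
NUM_UNITS = 5               # assumed number of units in the backbone
MAX_DEPTH = max(DEPTH_CHOICES)

def sample_subnet_config(rng=random):
    """Sample one sub-network: a depth per unit, plus a (ks, e) choice
    for every layer slot; slots beyond a unit's sampled depth are
    simply unused by the supernet."""
    return {
        "d": [rng.choice(DEPTH_CHOICES) for _ in range(NUM_UNITS)],
        "ks": [rng.choice(KS_CHOICES) for _ in range(NUM_UNITS * MAX_DEPTH)],
        "e": [rng.choice(EXPAND_CHOICES) for _ in range(NUM_UNITS * MAX_DEPTH)],
    }

cfg = sample_subnet_config()
# A config like this would then activate a sub-network before each
# forward/backward pass, e.g. set_active_subnet(ks=cfg["ks"], e=cfg["e"], d=cfg["d"]).
```

The actual training script (`train_ofa_net.py`) handles the schedule, knowledge distillation from the full network, and distributed synchronization on top of this basic sampling step.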

Introduction Video

Watch the video

Hands-on Tutorial Video

Watch the video

Requirements

  • Python 3.6+
  • PyTorch 1.4.0+
  • ImageNet dataset
  • Horovod

Related work on automated and efficient deep learning:

ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware (ICLR’19)

AutoML for Architecting Efficient and Specialized Neural Networks (IEEE Micro)

AMC: AutoML for Model Compression and Acceleration on Mobile Devices (ECCV’18)

HAQ: Hardware-Aware Automated Quantization (CVPR’19, oral)
