Modified from det3d's original document.
```
# For nuScenes Dataset
└── NUSCENES_DATASET_ROOT
    ├── samples       <-- key frames
    ├── sweeps        <-- frames without annotation
    ├── maps          <-- unused
    └── v1.0-trainval <-- metadata
```
Create a symlink to the dataset root:

```bash
mkdir data && cd data
ln -s DATA_ROOT
mv DATA_ROOT nuScenes  # rename to nuScenes
```
Remember to change DATA_ROOT to the actual path on your system.
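Before generating the info files, you can quickly confirm the layout by checking that the expected subfolders exist under the symlink. This is a minimal sketch, assuming the tree shown above:

```python
# Minimal sketch: confirm the expected nuScenes layout under data/nuScenes.
from pathlib import Path

root = Path("data/nuScenes")
for sub in ["samples", "sweeps", "maps", "v1.0-trainval"]:
    status = "ok" if (root / sub).exists() else "MISSING"
    print(f"{root / sub}: {status}")
```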
Data creation should be run in a GPU environment.
```bash
# nuScenes
python tools/create_data.py nuscenes_data_prep --root_path=NUSCENES_TRAINVAL_DATASET_ROOT --version="v1.0-trainval" --nsweeps=10
```
In the end, the data and info files should be organized as follows:

```
# For nuScenes Dataset
└── CenterPoint
    └── data
        └── nuScenes
            ├── samples       <-- key frames
            ├── sweeps        <-- frames without annotation
            ├── maps          <-- unused
            ├── v1.0-trainval <-- metadata and annotations
            ├── infos_train_10sweeps_withvelo_filter_True.pkl <-- train annotations
            ├── infos_val_10sweeps_withvelo_filter_True.pkl <-- val annotations
            ├── dbinfos_train_10sweeps_withvelo.pkl <-- GT database info files
            └── gt_database_10sweeps_withvelo <-- GT database
```
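If you want to sanity-check the generated info files, they are plain pickled lists of per-frame dictionaries. The snippet below is a minimal sketch; the exact key names depend on the det3d version and are an assumption here, not a documented schema.

```python
# Minimal sketch: peek inside a generated info file.
# The per-frame keys depend on the det3d version -- treat the printed
# fields as an illustration, not a guaranteed schema.
import pickle

with open("data/nuScenes/infos_train_10sweeps_withvelo_filter_True.pkl", "rb") as f:
    infos = pickle.load(f)

print(len(infos))       # number of key frames in the train split
print(infos[0].keys())  # per-frame fields (lidar path, sweeps, boxes, ...)
```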
Currently, only GPU training and evaluation are supported; there is no CPU-only mode.
Use the following command to start distributed training with 4 GPUs. The models and logs will be saved to work_dirs/CONFIG_NAME.

```bash
python -m torch.distributed.launch --nproc_per_node=4 ./tools/train.py CONFIG_PATH
```
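For context, torch.distributed.launch spawns one process per GPU and passes each a --local_rank argument, which train.py consumes internally. A minimal sketch of what each launched worker does, shown only for illustration:

```python
# Minimal sketch of a worker spawned by torch.distributed.launch;
# CenterPoint's train.py handles this setup internally.
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # injected by the launcher
args, _ = parser.parse_known_args()

torch.cuda.set_device(args.local_rank)                # bind this process to its GPU
torch.distributed.init_process_group(backend="nccl")  # NCCL backend for multi-GPU
```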
For distributed testing with 4 GPUs:

```bash
python -m torch.distributed.launch --nproc_per_node=4 ./tools/dist_test.py CONFIG_PATH --work_dir work_dirs/CONFIG_NAME --checkpoint work_dirs/CONFIG_NAME/latest.pth
```
For testing with a single GPU and measuring the inference time:

```bash
python ./tools/dist_test.py CONFIG_PATH --work_dir work_dirs/CONFIG_NAME --checkpoint work_dirs/CONFIG_NAME/latest.pth --speed_test
```
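The --speed_test flag makes dist_test.py report the inference time itself. If you want to reproduce such a measurement, the usual GPU timing pattern looks like the sketch below; model and batch are placeholders, not CenterPoint APIs.

```python
# Hypothetical timing helper illustrating the usual GPU measurement pattern;
# --speed_test in dist_test.py does its own timing internally.
import time
import torch

def mean_latency(model, batch, warmup=10, iters=100):
    with torch.no_grad():
        for _ in range(warmup):           # warm-up amortizes CUDA init and caching
            model(batch)
        torch.cuda.synchronize()          # flush queued kernels before timing
        start = time.time()
        for _ in range(iters):
            model(batch)
        torch.cuda.synchronize()          # wait for the last kernel to finish
    return (time.time() - start) / iters  # average seconds per forward pass
```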
The pretrained models and configurations are in the MODEL ZOO.
You can find the detection files in the MODEL ZOO. After downloading the detection files, you can simply run:
```bash
# val set
python tools/nusc_tracking/pub_test.py --work_dir WORK_DIR_PATH --checkpoint DETECTION_PATH

# test set
python tools/nusc_tracking/pub_test.py --work_dir WORK_DIR_PATH --checkpoint DETECTION_PATH --version v1.0-test --root data/nuScenes/v1.0-test
```
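Here DETECTION_PATH points to a downloaded detection result file. Assuming it follows the standard nuScenes submission layout (a JSON with meta and results fields), a quick sanity check is possible; the path below is a placeholder for the file you downloaded:

```python
# Quick sanity check on a nuScenes-format detection file; the path is a
# placeholder, and the meta/results layout is the standard nuScenes
# submission format, assumed here rather than documented by CenterPoint.
import json

with open("detections/val_detections.json") as f:  # hypothetical path
    det = json.load(f)

print(det["meta"])          # modality flags of the submission
print(len(det["results"]))  # number of sample tokens with predictions
```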
Organize your dataset as follows:

```
# For nuScenes Dataset
└── CenterPoint
    └── data
        └── nuScenes
            ├── samples       <-- key frames
            ├── sweeps        <-- frames without annotation
            ├── maps          <-- unused
            ├── v1.0-trainval <-- metadata and annotations
            ├── infos_train_10sweeps_withvelo_filter_True.pkl <-- train annotations
            ├── infos_val_10sweeps_withvelo_filter_True.pkl <-- val annotations
            ├── dbinfos_train_10sweeps_withvelo.pkl <-- GT database info files
            ├── gt_database_10sweeps_withvelo <-- GT database
            └── v1.0-test <-- main test folder
                ├── samples   <-- key frames
                ├── sweeps    <-- frames without annotation
                ├── maps      <-- unused
                ├── v1.0-test <-- metadata and annotations
                └── infos_test_10sweeps_withvelo.pkl <-- test info
```
Download the centerpoint_voxel_1440_flip model here and save it into work_dirs/nusc_0075_flip, then run the following command in the main folder to get the detection predictions:

```bash
python tools/dist_test.py configs/nusc/voxelnet/nusc_centerpoint_voxelnet_0075voxel_fix_bn_z_flip.py --work_dir work_dirs/nusc_centerpoint_voxelnet_dcn_0075voxel_flip_testset --checkpoint work_dirs/nusc_0075_flip/voxelnet_converted.pth --testset
```