- We use distributed training.
- For fair comparison with other codebases, we report the GPU memory as the maximum value of `torch.cuda.max_memory_allocated()` for all 8 GPUs. Note that this value is usually less than what `nvidia-smi` shows.
- We report the inference time as the total time of network forwarding and post-processing, excluding the data loading time. Results are obtained with the script `benchmark.py`, which computes the average time on 2000 images. (Illustrative sketches of both measurement conventions follow this list.)
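The snippet below is a minimal sketch of the memory-reporting convention described above: query `torch.cuda.max_memory_allocated()` on every visible GPU and take the maximum over devices. The helper name `report_peak_gpu_memory` and the MB conversion are illustrative assumptions, not part of the codebase.

```python
import torch


def report_peak_gpu_memory():
    """Print the peak allocated memory per GPU; the documented number is the max over GPUs."""
    peaks_mb = []
    for device_id in range(torch.cuda.device_count()):
        # Peak bytes allocated by the caching allocator on this device so far.
        peak_mb = torch.cuda.max_memory_allocated(device=device_id) / 1024 ** 2
        peaks_mb.append(peak_mb)
        print(f"cuda:{device_id}: {peak_mb:.0f} MB")
    if peaks_mb:
        print(f"reported GPU memory (max over GPUs): {max(peaks_mb):.0f} MB")
```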
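Similarly, the following is a rough sketch of the timing convention, not the actual `benchmark.py`: only network forwarding and post-processing are timed, data loading is excluded, and the result is averaged over the timed samples. `model`, `data_loader`, the warm-up count, and the 2000-sample cap are placeholder assumptions.

```python
import time

import torch


def average_inference_time(model, data_loader, num_warmup=5, max_samples=2000):
    """Average per-sample time of forwarding + post-processing, excluding data loading."""
    model.eval()
    total_time, num_timed = 0.0, 0
    with torch.no_grad():
        for i, data in enumerate(data_loader):
            # Data loading has already happened here, so it is not included in the timing.
            torch.cuda.synchronize()
            start = time.perf_counter()
            model(data)  # network forwarding and post-processing
            torch.cuda.synchronize()
            if i >= num_warmup:  # skip warm-up iterations
                total_time += time.perf_counter() - start
                num_timed += 1
                if num_timed >= max_samples:
                    break
    return total_time / max(num_timed, 1)
```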
Please refer to SECOND for details. We provide SECOND baselines on KITTI and Waymo datasets.
Please refer to PointPillars for details. We provide PointPillars baselines on KITTI, nuScenes, Lyft, and Waymo datasets.
Please refer to Part-A2 for details.
Please refer to VoteNet for details. We provide VoteNet baselines on ScanNet and SUNRGBD datasets.
Please refer to Dynamic Voxelization for details.
Please refer to MVXNet for details.
Please refer to RegNet for details. We currently provide PointPillars baselines with RegNetX backbones on the nuScenes and Lyft datasets.
We also support baseline models on the nuImages dataset. Please refer to nuImages for details. We currently report Mask R-CNN, Cascade Mask R-CNN, and HTC results.
Please refer to H3DNet for details.
Please refer to 3DSSD for details.
Please refer to CenterPoint for details.
Please refer to SSN for details. We currently provide PointPillars with the shape-aware grouping heads used in SSN on the nuScenes and Lyft datasets.
Please refer to ImVoteNet for details. We provide ImVoteNet baselines on the SUNRGBD dataset.
Please refer to FCOS3D for details. We provide FCOS3D baselines on the nuScenes dataset currently.
Please refer to PointNet++ for details. We provide PointNet++ baselines on ScanNet and S3DIS datasets.
Please refer to Group-Free-3D for details. We provide Group-Free-3D baselines on the ScanNet dataset.
Please refer to ImVoxelNet for details. We provide ImVoxelNet baselines on the KITTI dataset.
Please refer to PAConv for details. We provide PAConv baselines on the S3DIS dataset.