Question about performance of supervised learning on target domain #2

Open
jiangzhengkai opened this issue Nov 21, 2022 · 1 comment

Comments

@jiangzhengkai
Hi, I noticed that supervised learning on the target domain only achieves 71.3% mIoU, which is much lower than the result reported in MMSegmentation. Is there a reason for this discrepancy?

@tsunghan-wu
Owner

Hi @jiangzhengkai,

Thanks for your feedback on our paper, and I apologize for the delayed response due to my busy schedule. Let me provide some additional context and clarify a few points:

  1. Implementation details. I checked the MMSegmentation GitHub repository, and it does work well, as you said. Our code is largely based on THIS GitHub repository, but the DeepLabV3+ performance in our paper is slightly lower than their reported result (73.2 vs. 76.2 mIoU). We believe that freezing the batch-norm layers may be one reason for this gap. Since I have not used MMSegmentation before, I am not sure exactly where their implementation differs; thanks for the pointer, and I will take a closer look.

  2. Comparison to the original papers. In Table 7 of the original DeepLabV3+ paper, the authors used a different, deeper network backbone (X-71) to achieve 80+ mIoU. In contrast, we used the standard ResNet-101 backbone that is commonly used in prior domain adaptation work. Additionally, as shown in Table 9 of the original DeepLabV2 paper, our reproduced result in Table 1 of our D2ADA paper (71.3 mIoU) is very close to the official Cityscapes performance (71.4 mIoU) with the same ResNet-101 backbone.

  3. Comparison to recent DA studies. Our reported "Target Only" results are similar to those reported in recent strong papers (e.g., RIPU, CVPR 2022) and better than earlier work (e.g., LabOR, ICCV 2021).
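For readers unfamiliar with the batch-norm freezing mentioned in point 1, here is a minimal PyTorch sketch of the common practice in domain-adaptation codebases: BN layers are put in eval mode (so they use stored running statistics instead of batch statistics) and their affine parameters are excluded from gradient updates. The `freeze_batchnorm` helper and the tiny backbone are hypothetical illustrations, not code from our repository.

```python
# Hypothetical sketch: freezing all BatchNorm layers in a model,
# a common trick in domain-adaptation segmentation codebases.
import torch.nn as nn


def freeze_batchnorm(model: nn.Module) -> None:
    """Set every BN layer to eval mode and stop training its affine params."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.eval()  # use frozen running mean/var, do not update them
            for p in m.parameters():
                p.requires_grad = False  # freeze gamma and beta


# Toy backbone standing in for a ResNet-101; the conv weights stay trainable.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
backbone.train()          # training mode for the whole model...
freeze_batchnorm(backbone)  # ...except the BN layers

bn = backbone[1]
print(bn.training)  # BN stays in eval mode even while the model trains
```

Note that `model.train()` called later would flip BN back to training mode, so codebases typically re-apply this freezing at the start of every epoch (or override `train()`), which is one subtle place where two implementations can silently diverge.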

Please let us know if you have any other questions. We appreciate your feedback and will continue to work on improving our results.

Best,
Patrick
