
Have you implemented the "cpp" file of DepthwiseConvolution for CPU? #4

Open
csyking opened this issue Jun 29, 2017 · 10 comments

@csyking

csyking commented Jun 29, 2017

It seems you have only implemented the layer for CUDA, and the CPU implementation is still just the original Caffe conv+group?

The same issue is here: https://github.com/Zehaos/MobileNet/issues/22

I wonder if you will implement the DepthwiseConvolutionLayer for CPU. Any contribution would be greatly appreciated!

Best.

@ONLY-VEDA

He has already implemented the CPU version; you can find it in the code.

@mychina75

But it looks like the CPU version is not optimized for depthwise convolution.
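For context, here is a minimal NumPy sketch of what a depthwise convolution computes — each input channel is convolved with its own single filter, with no summation across channels. This is an illustrative reference, not the repo's actual C++ implementation:

```python
import numpy as np

def depthwise_conv2d(x, w, stride=1, pad=0):
    """Naive depthwise convolution.

    x: input of shape (C, H, W)
    w: filters of shape (C, kH, kW) -- one filter per channel
    Returns output of shape (C, oH, oW).
    """
    c, h, width = x.shape
    _, kh, kw = w.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    oh = (h + 2 * pad - kh) // stride + 1
    ow = (width + 2 * pad - kw) // stride + 1
    out = np.zeros((c, oh, ow), dtype=x.dtype)
    for ch in range(c):          # no cross-channel accumulation
        for i in range(oh):
            for j in range(ow):
                patch = xp[ch, i * stride:i * stride + kh,
                               j * stride:j * stride + kw]
                out[ch, i, j] = np.sum(patch * w[ch])
    return out
```

An optimized CPU kernel would specialize exactly this per-channel loop instead of going through Caffe's generic grouped im2col + GEMM path, which does redundant work when group == channels.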

@yonghenglh6
Owner

I'm sorry for the uncertainty about this.
It's tough work, and I'm quite busy with other things right now.

@ryusaeba

Hi @yonghenglh6
Can we use your cpp/hpp/cu files to load the MobileNet you posted as pretrained weights for fine-tuning? I am asking because, when we change a conv layer to depthwise, can Caffe still load the pretrained weights? Or does Caffe load pretrained weights based on the layer name?

@ryusaeba

I checked the website http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html and saw the following statement: "If we provide the weights argument to the caffe train command, the pretrained weights will be loaded into our model, matching layers by name."
Therefore, I assume the answer is yes. If I am wrong, please correct me. Thanks!

@yonghenglh6
Owner

yonghenglh6 commented Jul 11, 2017

@ryusaeba Yes, that's why I use the original conv_param instead of a new special param. You can just change the layer type without any compatibility cost.
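In other words, only the `type` field in the prototxt needs to change; since Caffe matches pretrained weights by layer name, and the layer reuses `convolution_param`, the weights still load. A sketch of the edit (the layer name and parameters below are illustrative, not taken from any specific model):

```
# Before: Caffe's grouped convolution
layer {
  name: "conv2_1/dw"
  type: "Convolution"
  bottom: "conv1"
  top: "conv2_1/dw"
  convolution_param {
    num_output: 64
    kernel_size: 3
    pad: 1
    group: 64   # group == num_output makes it depthwise
  }
}

# After: only the type changes; the name and convolution_param
# stay the same, so pretrained weights still load by layer name.
layer {
  name: "conv2_1/dw"
  type: "DepthwiseConvolution"
  bottom: "conv1"
  top: "conv2_1/dw"
  convolution_param {
    num_output: 64
    kernel_size: 3
    pad: 1
    group: 64
  }
}
```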

@ryusaeba

ryusaeba commented Jul 12, 2017

@yonghenglh6 Thanks! I got an all-pass message using check.py. Then I applied DepthwiseConvolution to the https://github.com/shicai/MobileNet-Caffe inference path; the top-1 accuracy is the same, but I get a slight difference in the loss. I assumed the loss would be identical. Do you have any idea why?

@yonghenglh6
Owner

yonghenglh6 commented Jul 12, 2017

@ryusaeba
The slight difference comes from the BLAS algorithm. I did not use a BLAS library, while the original conv layer does.
I assume the BLAS algorithm sacrifices a little precision for better performance, because the depthwise outputs match my handcrafted computation.
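This kind of tiny discrepancy is expected whenever two implementations accumulate floating-point sums in different orders. A small NumPy demonstration (not the repo's code) of how a BLAS dot product and a sequential float32 loop can disagree in the last bits while agreeing to several digits:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(10000).astype(np.float32)
b = rng.standard_normal(10000).astype(np.float32)

# BLAS-backed dot product (vectorized, possibly reordered accumulation)
blas_sum = float(np.dot(a, b))

# Handcrafted sequential accumulation in float32, like a naive conv loop
loop_sum = np.float32(0.0)
for x, y in zip(a, b):
    loop_sum += x * y

# The two results are very close but need not be bit-identical,
# because float addition is not associative.
diff = abs(blas_sum - float(loop_sum))
```

Both results are "correct"; they just round differently along the way, which is enough to nudge a loss value without changing which class has the top score.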

@libra7

libra7 commented Oct 26, 2017

Hello, have you implemented the DepthwiseConvolutionLayer for CPU?

@sunjunlishi

.....wait
