
About the invert res-block module #3

Open
hq-liu opened this issue Jan 24, 2018 · 5 comments

Comments

@hq-liu

hq-liu commented Jan 24, 2018

Hi, I have a question about the inverted residual block module in your code. While implementing this model, I was confused about how to build it.
I found that in your code you use:
self.use_res_connect = self.stride == 1 and inp == oup
to ensure that the input channels match the output channels. However, the input channels and output channels always appear to be different, so it seems this skip connection is never used, because (inp == oup) is always false.
I hope you can reply to this issue, thanks very much.

@FatherOfHam

The input channels and output channels are always the same.

@tonylins
Owner

Within each inverted residual sequence, the input and output channels are the same for every layer except the first.
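
For illustration, here is a minimal sketch of how one such sequence is typically assembled (assuming a hypothetical `InvertedResidual(inp, oup, stride, expand_ratio)` module; names are illustrative, not copied from this repo). Only the first block may change channels or downsample, so every later block has inp == oup and stride == 1 and the skip connection is active:

```python
import torch.nn as nn

def make_stage(block, inp, oup, n, stride, expand_ratio):
    """Build one inverted residual sequence of n blocks."""
    layers = []
    for i in range(n):
        if i == 0:
            # First block: may change channels and downsample, so no residual.
            layers.append(block(inp, oup, stride, expand_ratio))
        else:
            # Remaining blocks: inp == oup and stride == 1, so the
            # use_res_connect condition is true and the skip is used.
            layers.append(block(oup, oup, 1, expand_ratio))
    return nn.Sequential(*layers)
```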

@hq-liu
Author

hq-liu commented Jan 24, 2018

I understand now, thanks anyway.

@foreverYoungGitHub

But is this the same as the original paper?

The paper indicated that 'when input layer depth is 0 the underlying conv is the identity function.'

I'm just curious whether the architecture should add an extra conv node for the first layer's shortcut, as in the sketch below.
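
If one did want a shortcut on those first blocks, one option (borrowed from ResNet's projection shortcut, not something this repo or the MobileNetV2 paper does) would be a 1x1 conv on the identity path. A minimal sketch, with all names hypothetical:

```python
import torch.nn as nn

class BlockWithProjection(nn.Module):
    """Hypothetical variant: keep a residual even when shapes change,
    by projecting the input with a 1x1 conv (as in ResNet option B)."""
    def __init__(self, body, inp, oup, stride):
        super().__init__()
        self.body = body  # the expand -> depthwise -> project sequence
        if stride == 1 and inp == oup:
            self.shortcut = nn.Identity()  # plain residual, as in the repo
        else:
            # Project the input so channels and spatial size match the body.
            self.shortcut = nn.Conv2d(inp, oup, 1, stride=stride, bias=False)

    def forward(self, x):
        return self.body(x) + self.shortcut(x)
```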

@foreverYoungGitHub

One other thing: even though it is not indicated in the paper, putting the batch norm before the conv could improve accuracy, just like in ResNet v2.
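
A minimal sketch of that pre-activation ordering (BN -> activation -> conv, as in ResNet v2), using MobileNetV2's ReLU6; this is the commenter's suggestion, not what the paper or this repo uses:

```python
import torch.nn as nn

def preact_conv(inp, oup, kernel_size, stride, padding, groups=1):
    # ResNet v2 style pre-activation: BN -> ReLU6 -> conv,
    # instead of the conv -> BN -> ReLU6 ordering in the repo.
    return nn.Sequential(
        nn.BatchNorm2d(inp),
        nn.ReLU6(inplace=True),
        nn.Conv2d(inp, oup, kernel_size, stride, padding,
                  groups=groups, bias=False),
    )
```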
