About the invert res-block module #3
Hi, I have a question about the inverted res-block module in your code. When I implemented this model, I was confused about how to build the inverted res-block module.
I found that in your code, you use this:
self.use_res_connect = self.stride == 1 and inp == oup
to ensure that the input channels match the output channels. However, the input channels and output channels are always different, so it seems this skip connection is never used, because (inp == oup) is always false.
Hope you can reply to this issue, thanks very much.
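For reference, here is a minimal sketch of how a MobileNetV2-style inverted residual block is commonly written in PyTorch, showing where `use_res_connect` enters. The class name, layer layout, and the `expand_ratio` parameter are illustrative assumptions, not necessarily this repository's exact code.

```python
# Minimal sketch (not the repository's exact code) of a MobileNetV2-style
# inverted residual block, showing where `use_res_connect` comes in.
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, inp, oup, stride, expand_ratio):
        super().__init__()
        self.stride = stride
        hidden = inp * expand_ratio
        # The identity shortcut is only valid when the spatial size is
        # unchanged (stride == 1) AND the channel counts match (inp == oup).
        self.use_res_connect = self.stride == 1 and inp == oup
        self.conv = nn.Sequential(
            nn.Conv2d(inp, hidden, 1, bias=False),    # pointwise expand
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,
                      groups=hidden, bias=False),      # depthwise
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, oup, 1, bias=False),     # pointwise project
            nn.BatchNorm2d(oup),
        )

    def forward(self, x):
        if self.use_res_connect:
            return x + self.conv(x)   # identity shortcut applies
        return self.conv(x)           # shape changes: no shortcut
```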
Comments

The input channel and output channel are always the same.

For each inverted residual sequence, the input channel and output channel are the same except for the first layer.
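To make that answer concrete, below is a sketch of how one inverted residual sequence is typically stacked; `make_stage` is a hypothetical helper, not taken from the repository. Only the first block changes the stride and channel count, so `inp == oup` holds for every later block and the shortcut is active there.

```python
# Sketch of stacking one inverted residual "sequence" (stage): only the
# first block changes stride/channels; the rest keep inp == oup, so
# `use_res_connect` is True for them.
import torch.nn as nn

def make_stage(block, inp, oup, n, stride, expand_ratio):
    layers = [block(inp, oup, stride, expand_ratio)]    # first layer: no shortcut
    for _ in range(n - 1):
        layers.append(block(oup, oup, 1, expand_ratio)) # inp == oup: shortcut used
    return nn.Sequential(*layers)

# Usage, with the InvertedResidual class sketched above: a stage of 3
# blocks going from 24 to 32 channels with stride 2.
# stage = make_stage(InvertedResidual, 24, 32, n=3, stride=2, expand_ratio=6)
```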
I have already understood, thanks anyway.

But is this the same as in the original paper? The paper states that "when input layer depth is 0 the underlying conv is the identity function." I'm just curious whether the architecture should add an extra conv node for the shortcut of the first layer.

The other thing is that, even though it is not indicated in the paper, putting the batch norm before the conv can possibly improve the accuracy, just like ResNet v2.
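For illustration, here is a sketch of the ResNet v2-style "pre-activation" ordering that comment alludes to, where BatchNorm and the activation are applied before each convolution rather than after it. The `preact_conv` helper is hypothetical, and whether this ordering actually improves accuracy for this model is the commenter's conjecture, not a claim from the paper or the repository.

```python
# Sketch of a pre-activation conv unit (ResNet v2 style): BN and the
# activation come *before* the convolution instead of after it.
import torch.nn as nn

def preact_conv(inp, oup, kernel, stride=1, padding=0, groups=1):
    return nn.Sequential(
        nn.BatchNorm2d(inp),
        nn.ReLU6(inplace=True),
        nn.Conv2d(inp, oup, kernel, stride, padding,
                  groups=groups, bias=False),
    )
```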