Thank you a lot for sharing this code. I was wondering, how can I extract the attention scores from the network? What I mean is, how did you calculate the image in the description based on the importance of patches? #17

narminGhaffari opened this issue Feb 18, 2021 · 2 comments

@narminGhaffari (Author)

Thank you a lot for sharing this code. I was wondering, how can I extract the attention scores from the network? What I mean is, how did you calculate the image in the description based on the importance of patches?

@utayao (Owner) commented Feb 18, 2021

Hi, you can get the model output a_k for each patch in the bag by using the following function:
get_alpha_layer_output = K.function([model.layers[0].input], [model.layers[10].output])
Then you need to rescale the a_k values to get the attention weights, as described in the paper (Fig. 5). The heatmap is produced by multiplying each patch by its corresponding attention weight.
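
For anyone following along, below is a minimal sketch of those two steps, assuming the attention layer sits at index 10 (as stated above), that `bag` is a single bag of patches shaped (num_patches, height, width, channels), and that min-max normalization is used for the rescaling step; the helper name `patch_attention_heatmap` is illustrative and not part of this repository.

```python
import numpy as np
from keras import backend as K


def patch_attention_heatmap(model, bag, alpha_layer_index=10):
    """Return per-patch attention weights and attention-weighted patches.

    model: trained bag-level Keras MIL model.
    bag:   one bag of patches, shape (num_patches, height, width, channels).
    alpha_layer_index: index of the attention layer; 10 follows the comment
    above, but check model.summary() for your own model.
    """
    # Backend function mapping the bag input to the attention layer output.
    get_alpha_layer_output = K.function([model.layers[0].input],
                                        [model.layers[alpha_layer_index].output])

    # Raw attention output a_k, one value per patch.
    a_k = np.squeeze(get_alpha_layer_output([bag])[0])

    # Rescale to [0, 1]; min-max is one plausible reading of the rescaling
    # step described in the paper (Fig. 5).
    attention_weights = (a_k - a_k.min()) / (a_k.max() - a_k.min() + 1e-8)

    # Weight each patch by its attention. Stitching the weighted patches back
    # into a whole-image heatmap depends on how the patches were tiled, so
    # only the per-patch weighting is returned here.
    weighted_patches = bag * attention_weights[:, None, None, None]
    return attention_weights, weighted_patches
```

Calling `patch_attention_heatmap(model, bag)` once per bag gives the per-patch weights to overlay on the original image; reassembling the weighted patches into a full heatmap depends on how the patches were extracted from the source image.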
