The implementation of rank-aware contrastive loss #21

Closed
KonataMyLove opened this issue Jul 5, 2023 · 2 comments
Comments

@KonataMyLove

Thank you for your open-source code.
I have a question about the code for the rank-aware contrastive loss mentioned in Section 3.4 of the paper:

This is the rank-aware contrastive loss formula listed in the paper:
[image: the rank-aware contrastive loss formula from the paper]

This is the code for the rank-aware contrastive loss computation in qd_detr/model.py:
# softmax: log_prob is the log-softmax of the logits over all video clips
exp_logits = torch.exp(logits)
log_prob = logits - torch.log(exp_logits.sum(1, keepdim=True) + 1e-6)
# average the log-probability over the positive clips, masking out padded video tokens
mean_log_prob_pos = (pos_mask * log_prob * vid_token_mask).sum(1) / (pos_mask.sum(1) + 1e-6)
loss = - mean_log_prob_pos * batch_drop_mask
loss_rank_contrastive = loss_rank_contrastive + loss.mean()

I don't think the code implementation matches the formula; the bolded lines in particular confuse me.
If I have misunderstood something, please let me know. Thank you for your time.

@wjun0830
Owner

wjun0830 commented Jul 5, 2023

For the contrastive equation and its implementation, you can find descriptions in the Supervised Contrastive Learning paper and its repository.
For the rank-aware part, there is a for loop that implements it.
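
For reference, the supervised contrastive loss from that paper (Khosla et al., 2020) has the following form, where P(i) is the set of positives for anchor i, A(i) is the set of all candidates excluding i, and tau is the temperature:

\mathcal{L} = \sum_{i} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}

Since \log \frac{\exp(s_p)}{\sum_a \exp(s_a)} = s_p - \log \sum_a \exp(s_a), the line log_prob = logits - torch.log(exp_logits.sum(1, keepdim=True) + 1e-6) computes exactly the log of the softmax fraction (with a 1e-6 stabilizer), and dividing the masked sum by pos_mask.sum(1) is the -1/|P(i)| average over the positive set.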

@KonataMyLove
Author

Thanks. I rewrote the formula by following the repository and the code; it does match the paper. Problem solved.
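
For anyone who finds this later, here is a minimal, self-contained sketch of how I understand the rank loop that wraps the snippet above. The variable names (scores, saliency_labels, num_ranks, tau) and the toy shapes are my assumptions for illustration, not the exact code in qd_detr/model.py:

import torch

# toy shapes for illustration: a batch of 2 queries, 5 video clips each
scores = torch.randn(2, 5)                     # saliency score s(query, clip)
saliency_labels = torch.randint(0, 5, (2, 5))  # integer saliency ranks in [0, 4]
vid_token_mask = torch.ones(2, 5)              # 1 for real clips, 0 for padding
tau, num_ranks = 0.5, 4

loss_rank_contrastive = 0.
for r in range(1, num_ranks + 1):
    pos_mask = (saliency_labels >= r).float()        # clips ranked >= r are positives
    batch_drop_mask = (pos_mask.sum(1) > 0).float()  # skip samples with no positives at rank r
    logits = scores / tau
    # identical to the snippet quoted above
    exp_logits = torch.exp(logits)
    log_prob = logits - torch.log(exp_logits.sum(1, keepdim=True) + 1e-6)
    mean_log_prob_pos = (pos_mask * log_prob * vid_token_mask).sum(1) / (pos_mask.sum(1) + 1e-6)
    loss = - mean_log_prob_pos * batch_drop_mask
    loss_rank_contrastive = loss_rank_contrastive + loss.mean()
loss_rank_contrastive = loss_rank_contrastive / num_ranks

Each pass through the loop is one supervised contrastive loss with a stricter definition of "positive", which is what makes the overall loss rank-aware.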

wjun0830 closed this as completed Jul 5, 2023