
[Bug?] Missing Padding Mask for Batch Mode #288

Status: Open
Oxer11 opened this issue Oct 22, 2024 · 0 comments

Hi team, thanks for the great work you've done for the community. I've learned a lot from reading the code. I have one question regarding the batch mode of RFDiffusion. The released inference code always uses batch_size=1 for generation, but the model does have a batch dimension, so I assume RFDiffusion is able to handle batch-mode inference and training.

However, as I dig deeper into the model implementation, I can't find anything like a padding_mask, which is required for attention and other computations in batch mode. Without it, padded tokens will also contribute to generation and to the learned representations, which I don't think is correct. Am I missing something here?
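For context, here is a minimal NumPy sketch of what I mean by a padding mask in scaled dot-product attention. This is purely illustrative and not taken from the RFDiffusion codebase; the function name and shapes are my own assumptions. Padded key positions get their scores pushed to a large negative value before the softmax, so they receive ~zero attention weight:

```python
import numpy as np

def masked_attention(q, k, v, key_padding_mask):
    """Scaled dot-product attention with a key padding mask (illustrative).

    q, k, v:           (batch, seq, dim)
    key_padding_mask:  (batch, seq) boolean, True where the token is padding
    """
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)          # (batch, seq, seq)
    # Mask out padded keys so they get ~zero weight after softmax.
    scores = np.where(key_padding_mask[:, None, :], -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

# Example: batch of 2 sequences of length 4; the second one has 2 padded tokens.
rng = np.random.default_rng(0)
q = rng.standard_normal((2, 4, 8))
k = rng.standard_normal((2, 4, 8))
v = rng.standard_normal((2, 4, 8))
mask = np.array([[False, False, False, False],
                 [False, False, True,  True]])
out, w = masked_attention(q, k, v, mask)
# Attention weights on padded keys of the second sequence are ~0.
assert np.allclose(w[1, :, 2:], 0.0, atol=1e-6)
```

Without this kind of masking (or an equivalent mechanism elsewhere), padded positions would leak into the attention output for every real token in the batch.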
