Hi team, thanks for the great work you've done for the community. I've learned a lot from reading the code. I have a question about batch-mode inference in RFDiffusion. The released inference code always uses `batch_size=1` for generation, but the model's tensors carry a batch dimension, so I assume RFDiffusion is able to handle batched inference and training.

However, digging deeper into the model implementation, I can't find anything like a `padding_mask`, which is required for attention and other computations in batch mode. Without one, padded tokens would also contribute to generation and to the learned representations, which I don't think is correct. Am I missing something here?
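For context on why the mask matters, here is a minimal sketch (plain NumPy, not RFDiffusion's actual code) of how a key padding mask is conventionally applied in batched attention: padded key positions are set to `-inf` before the softmax so they receive zero attention weight and cannot leak into any query's output.

```python
import numpy as np

def masked_attention(q, k, v, pad_mask):
    """Scaled dot-product attention with a key padding mask.

    q, k, v:  (batch, seq, dim) arrays.
    pad_mask: (batch, seq) boolean array, True where a token is padding.
    Masked positions get -inf scores, so after the softmax their
    attention weight is exactly zero.
    """
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)          # (batch, seq_q, seq_k)
    scores = np.where(pad_mask[:, None, :], -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Batch of 2 sequences; the second has its last 2 tokens padded.
rng = np.random.default_rng(0)
q = k = v = rng.normal(size=(2, 4, 8))
pad_mask = np.array([[False, False, False, False],
                     [False, False, True,  True]])
out = masked_attention(q, k, v, pad_mask)
```

Without the `np.where` line, the values at the padded positions of the second sequence would change `out` for every query in that sequence, which is the contamination described above.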