In the FLAVA multimodal encoder, why don't we pass an attention mask to mask out '[PAD]' embeddings coming from the text encoder? Is this a bug or intentional?
🚀 The feature, motivation and pitch
In the FLAVA multimodal encoder, why don't we pass an attention mask to mask out the '[PAD]' embeddings coming from the text encoder? Is this a bug or intentional?
multimodal/torchmultimodal/models/flava/model.py, line 197 (commit e4d288b)
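For illustration, here is a minimal sketch of what passing such a mask could look like when the image and text embeddings are concatenated for the multimodal encoder. The helper name `build_multimodal_attention_mask` and the mask layout (image positions always valid, text positions taken from the tokenizer's attention mask) are assumptions for this sketch, not torchmultimodal's actual API.

```python
import torch

def build_multimodal_attention_mask(
    image_seq_len: int,
    text_attention_mask: torch.Tensor,  # (batch, text_seq_len); 1 = real token, 0 = [PAD]
) -> torch.Tensor:
    """Hypothetical helper: concatenate an all-ones mask for the image
    positions with the text attention mask, so a downstream transformer
    could ignore [PAD] positions in the joint sequence."""
    batch_size = text_attention_mask.size(0)
    # Image patch embeddings are never padding, so their mask is all ones.
    image_mask = torch.ones(
        batch_size,
        image_seq_len,
        dtype=text_attention_mask.dtype,
        device=text_attention_mask.device,
    )
    # Joint mask matches the concatenated (image + text) sequence order.
    return torch.cat([image_mask, text_attention_mask], dim=1)


# Example: batch of 2, 4 image patches, 5 text tokens; the last two
# tokens of the second sample are [PAD].
text_mask = torch.tensor([[1, 1, 1, 1, 1],
                          [1, 1, 1, 0, 0]])
mm_mask = build_multimodal_attention_mask(image_seq_len=4,
                                          text_attention_mask=text_mask)
print(mm_mask)
# tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1],
#         [1, 1, 1, 1, 1, 1, 1, 0, 0]])
```

Without a mask like this, the multimodal encoder's attention layers would attend to the [PAD] embeddings as if they were real tokens, which is the behavior the question is asking about.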
Alternatives
No response
Additional context
No response