Downstream task #51
After training the model, can we use only the target-encoder for downstream tasks, e.g. image captioning?
Comments
You can use the encoder and not the target-encoder for the task, because during training we trained the encoder to predict the masked regions given an unmasked context. Hence the encoder would be the choice!
@VimukthiRandika1997 I was thinking about this in a similar way, although the paper says "We use the target-encoder for evaluation and average pool its output to produce a global image representation." You can check this in the first paragraph of the paper's appendix (A.1. Pretraining). Could you please take a look at this?
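For reference, here is a minimal sketch (not code from this repo) of what that evaluation setup could look like: the pretrained target-encoder is frozen, its patch-level outputs are average-pooled into a global image representation, and a linear probe is trained on top. The names `target_encoder`, `global_representation`, `dim`, and `num_classes` are placeholders, and the output shape [B, N, D] assumes a ViT-style encoder returning patch tokens.

```python
# Hypothetical sketch: frozen target-encoder + average pooling + linear probe.
# Assumes `target_encoder` is the pretrained ViT-style module loaded from a
# checkpoint and returns patch embeddings of shape [B, num_patches, dim].
import torch
import torch.nn as nn

@torch.no_grad()
def global_representation(target_encoder: nn.Module, images: torch.Tensor) -> torch.Tensor:
    target_encoder.eval()
    tokens = target_encoder(images)   # [B, N, D] patch-level features
    return tokens.mean(dim=1)         # [B, D] average-pooled global representation

# Downstream head trained on the frozen features (placeholder sizes).
dim, num_classes = 1280, 1000
probe = nn.Linear(dim, num_classes)
```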
Yeah, I looked into that. I think in this case it makes sense to use the target-encoder. I was mainly inspired by a previous approach called BYOL, where we used the online encoder for downstream tasks.
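As background on that analogy (my own sketch, not code from this repo or the BYOL paper): in both BYOL and I-JEPA, the target network is not trained by gradients but instead tracks an exponential moving average (EMA) of the online/context encoder, roughly like this (the `momentum` value is only an illustrative default):

```python
import torch

@torch.no_grad()
def ema_update(encoder, target_encoder, momentum: float = 0.996):
    # Target weights drift slowly toward the online/context encoder weights;
    # no gradients flow into the target-encoder.
    for p, p_tgt in zip(encoder.parameters(), target_encoder.parameters()):
        p_tgt.mul_(momentum).add_(p.detach(), alpha=1.0 - momentum)
```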
That makes a lot of sense, really nice intuition. They actually do test both approaches (also shown in the appendix) for reconstruction, but personally I couldn't find the conclusions visually intuitive.
PS: some other folks and I are teaming up to reproduce some of the experiments, mess around with the architecture, etc. If you want to join, add me on Discord: falsomoralista.
Hello @FalsoMoralista, I'm currently interested in pretraining I-JEPA and fine-tuning the pretrained model on a semantic segmentation task. Can I join you?