Since I am currently using your great library extensively, I was wondering: what are the main feature differences between your dataloader and the one provided by the original meta-dataset authors (https://github.com/google-research/meta-dataset)?
I ask because I only recently noticed that the end of the original meta-dataset library's notebook (https://github.com/google-research/meta-dataset/blob/main/Intro_to_Metadataset.ipynb) describes how to integrate their dataloaders with PyTorch, including an episodic dataloader that supports a fixed number of ways, support shots, and query shots (roughly the pattern sketched below). They also provide a batch dataloader that samples batches from the datasets in a "non-episodic manner".
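For reference, this is roughly the fixed-ways episodic pattern from that notebook that I have in mind. I am reproducing it from memory, so the exact keyword arguments, the gin config path, and the records path are approximations rather than the authoritative calls; the notebook itself sets most options via gin:

```python
# Rough sketch of the notebook's episodic pipeline with fixed ways/shots,
# converted to PyTorch tensors. Argument names are approximate.
import gin
import torch
from meta_dataset.data import config, learning_spec, pipeline
from meta_dataset.data import dataset_spec as dataset_spec_lib

# The notebook parses a gin data config before building pipelines (path may differ).
gin.parse_config_file('meta_dataset/learn/gin/setups/data_config.gin')

# Fixed num_ways / num_support / num_query episodes.
episode_config = config.EpisodeDescriptionConfig(
    num_ways=5, num_support=5, num_query=15)

# '/path/to/records/omniglot' is a placeholder for the converted dataset records.
dataset_spec = dataset_spec_lib.load_dataset_spec('/path/to/records/omniglot')
episodes = pipeline.make_one_source_episode_pipeline(
    dataset_spec=dataset_spec,
    use_dag_ontology=False,
    use_bilevel_ontology=True,  # Omniglot uses the alphabet/character hierarchy
    split=learning_spec.Split.TRAIN,
    episode_descr_config=episode_config,
    image_size=84)

# Each episode is a tuple of support/query images, labels, and class ids;
# the notebook wraps these TF tensors into PyTorch tensors roughly like this.
for episode, source_id in episodes.take(1):
    support_images, support_labels, _, query_images, query_labels, _ = episode
    support = torch.from_numpy(support_images.numpy()).permute(0, 3, 1, 2)
    query = torch.from_numpy(query_images.numpy()).permute(0, 3, 1, 2)
```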
These features seem quite similar to your PyTorch meta-dataset wrapper, so I was wondering what the main motivation for creating your PyTorch wrapper library was. One thing I did notice is that they state: "If we want to use fixed num_ways... We advise using single dataset for using this feature". I assume your library supports fixed ways across multiple data sources, unlike the original meta-dataset library, since I have been using fixed ways with multiple data sources without issue. Are there other major feature differences between your library and the original meta-dataset library?
Thanks again for providing a great tool!
Patrick