- Add DataComp models
- Enable int8 inference without `.weight` attribute
- Update push_to_hf_hub
- Add int8 support
- Update notebook demo
- Refactor zero-shot classification code
- Fixes for context_length and vocab_size attributes
- Fix --train-num-samples logic
- Add HF BERT configs for PubMed CLIP model
- Add improved g-14 weights
- Update protobuf version
- Add convnext_xxlarge weights
- Fixed import in readme
- Add samples per second per gpu logging
- Fix slurm example
- Move dataset mixtures logic to shard level
- Fix CoCa accum-grad training
- Safer transformers import guard
- get_labels refactoring
- Add support for dataset mixtures with different sampling weights
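The sampling-weight idea above can be sketched in plain Python: pick the source for each shard according to the configured weights, then draw a shard from that source. Names here are illustrative, not open_clip's actual API.

```python
import random

def sample_shards(sources, weights, n, seed=0):
    """Pick n shards, choosing a source per pick according to the given
    sampling weights (illustrative sketch, not open_clip code)."""
    rng = random.Random(seed)
    picks = []
    for _ in range(n):
        source = rng.choices(sources, weights=weights, k=1)[0]
        picks.append(rng.choice(source))
    return picks

laion = [f"laion-{i:05d}.tar" for i in range(4)]
coco = [f"coco-{i:05d}.tar" for i in range(2)]
shards = sample_shards([laion, coco], weights=[0.8, 0.2], n=10)
```

With an 0.8/0.2 weighting, roughly four of five shard draws come from the first source over a long run.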
- Make transformers optional again
- Updated convnext configs for consistency
- Added input_patchnorm option
- Clean and improve CoCa generation
- Support model distillation
- Add ConvNeXt-Large 320x320 fine-tune weights
- Make transformers optional
- Add MSCOCO CoCa finetunes to pretrained models
- coca support and weights
- ConvNeXt-Large weights
- `hf-hub:org/model_id` support for loading models w/ config and weights in Hugging Face Hub
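The `hf-hub:` prefix distinguishes Hub-hosted models from built-in config names. A minimal sketch of the naming scheme (the helper below is hypothetical, written only to illustrate the convention):

```python
def split_model_name(name: str):
    """Split an open_clip-style model identifier into (scheme, id).
    Hypothetical helper illustrating the `hf-hub:` naming convention."""
    prefix = "hf-hub:"
    if name.startswith(prefix):
        return "hf-hub", name[len(prefix):]
    return "builtin", name
```

Assuming this scheme, a name like `hf-hub:org/model_id` tells the loader to fetch both config and weights from the Hub instead of resolving a local model config.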
- Added a ViT-bigG-14 model.
- Added an up-to-date example slurm script for large training jobs.
- Added an option to sync logs and checkpoints to S3 during training.
- New options for LR schedulers, constant and constant with cooldown
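The shape of a constant schedule with cooldown can be sketched as a pure function of the step: linear warmup, flat plateau, then a decay phase at the end. Parameter names here are illustrative, not open_clip's exact flags.

```python
def const_lr_cooldown(base_lr, warmup, total, cooldown, step, power=1.0):
    """Constant LR with linear warmup and a final cooldown phase
    (sketch of the schedule shape; parameter names are illustrative)."""
    if step < warmup:
        return base_lr * (step + 1) / warmup  # linear warmup
    cooldown_start = total - cooldown
    if step < cooldown_start:
        return base_lr  # constant plateau
    frac = (step - cooldown_start) / cooldown  # 0 -> 1 over the cooldown
    return base_lr * (1.0 - frac) ** power
```

Dropping the cooldown term (`cooldown=0` never reached) recovers the plain constant schedule.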
- Fix wandb autoresuming when resume is not set
- ConvNeXt `base` & `base_w` pretrained models added (`timm-` model prefix removed from configs)
- `timm` augmentation + regularization (dropout / drop-path) supported
- Fix wandb collapsing multiple parallel runs into a single one
- Fix braceexpand memory explosion for complex webdataset urls
- Fix release
- Add training feature to auto-resume from the latest checkpoint on restart via `--resume latest`
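Conceptually, `--resume latest` scans the checkpoint directory for the highest-numbered checkpoint. A self-contained sketch, assuming `epoch_<n>.pt` filenames (the real logic lives in open_clip's training entrypoint and may differ):

```python
import os
import re
import tempfile

def latest_checkpoint(ckpt_dir):
    """Return the checkpoint path with the highest epoch number
    (sketch of the `--resume latest` idea; not open_clip's actual code)."""
    pattern = re.compile(r"epoch_(\d+)\.pt$")
    best, best_epoch = None, -1
    for name in os.listdir(ckpt_dir):
        m = pattern.search(name)
        if m and int(m.group(1)) > best_epoch:
            best_epoch, best = int(m.group(1)), os.path.join(ckpt_dir, name)
    return best

# demo: build a throwaway checkpoint dir (illustrative only)
d = tempfile.mkdtemp()
for n in (1, 10, 2):
    open(os.path.join(d, f"epoch_{n}.pt"), "w").close()
found = latest_checkpoint(d)
```

Sorting numerically (not lexically) matters: a string sort would rank `epoch_9.pt` above `epoch_10.pt`.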
- Allow webp in webdataset
- Fix logging for number of samples when using gradient accumulation
- Add model configs for convnext xxlarge
- wrapped patchdropout in a torch.nn.Module
- relax protobuf dependency
- override the default patch dropout value in `vision_cfg`
- better support for HF models
- add support for gradient accumulation
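Gradient accumulation simulates a larger batch by summing gradients over several micro-batches and stepping once with their average. A framework-free sketch of the loop (scalar "weights" stand in for model parameters):

```python
def train_with_accum(micro_grads, accum_freq, lr, w0=0.0):
    """Apply one SGD-style update per `accum_freq` micro-batches, averaging
    their gradients (framework-free sketch of gradient accumulation)."""
    w, acc = w0, 0.0
    for i, g in enumerate(micro_grads, start=1):
        acc += g
        if i % accum_freq == 0:
            w -= lr * (acc / accum_freq)  # step with the averaged gradient
            acc = 0.0  # reset the accumulator between optimizer steps
    return w
```

With `accum_freq=2` the loop takes one optimizer step per two micro-batches, matching what a doubled batch size would do for plain SGD.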
- CI fixes
- add support for patch dropout
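Patch dropout randomly discards a fraction of patch tokens during training while always keeping the class token. A pure-Python sketch of the idea (the actual implementation is a torch module operating on tensors):

```python
import random

def patch_dropout(tokens, keep_prob, seed=0):
    """Keep the class token plus a random subset of patch tokens
    (pure-Python sketch of patch dropout, not the torch module)."""
    rng = random.Random(seed)
    cls, patches = tokens[0], tokens[1:]
    n_keep = max(1, int(len(patches) * keep_prob))
    kept = sorted(rng.sample(range(len(patches)), n_keep))
    return [cls] + [patches[i] for i in kept]
```

Dropping half the patches roughly halves the attention cost per layer, which is the main training speedup the option buys.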
- add convnext configs
- add multilingual H/14 xlm roberta large
- fix setup.py _read_reqs
- Make openclip training usable from PyPI.
- Add xlm roberta large vit h 14 config.
- pretrained B/32 xlm roberta base: first multilingual clip trained on laion5B
- pretrained B/32 roberta base: first clip trained using an HF text encoder
- Add missing hf_tokenizer_name in CLIPTextCfg.
- Fix #211, missing RN50x64 config. Fix type of dropout param for ResNet models
- Bring back LayerNorm impl that casts to input for non bf16/fp16
- zero_shot.py: set correct tokenizer based on args
- training/params.py: remove hf params and get them from model config
- Implement grad checkpointing for hf model.
- custom_text: True if hf_model_name is set
- Disable hf tokenizer parallelism
- Generalizable Text Transformer with HuggingFace Models (@iejMac)
- Support for custom text tower
- Add checksum verification for pretrained model weights
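Checksum verification amounts to hashing the downloaded file and comparing against a known digest. A stdlib sketch of the idea (open_clip's own check may differ in hash choice and chunking):

```python
import hashlib
import os
import tempfile

def verify_sha256(path, expected_hex):
    """Check a file against an expected SHA-256 digest, streaming in
    1 MiB chunks (sketch of checksum verification, not open_clip code)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

# demo with a throwaway file (illustrative only)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name
ok = verify_sha256(path, hashlib.sha256(b"hello").hexdigest())
os.unlink(path)
```

Streaming in chunks keeps memory flat even for multi-gigabyte weight files.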
- A lot, including SOTA models, a bfloat16 option, better loading, better metrics
- ViT-B/32 trained on Laion2B-en
- add missing openai RN50x64 model
- ViT-B/16+
- Add grad checkpointing support
- more robust data loader