Hello! I'm using finetune_lora.sh with the following settings:
--model_name_or_path liuhaotian/llava-v1.5-7b \
--vision_tower openai/clip-vit-large-patch14-336 \
and got this traceback:

  File "/home/wucz/remote-sensing/GeoChat/geochat/model/multimodal_encoder/clip_encoder.py", line 97, in __init__
    self.clip_interpolate_embeddings(image_size=504, patch_size=14)
  File "/home/wucz/remote-sensing/GeoChat/geochat/model/multimodal_encoder/clip_encoder.py", line 34, in clip_interpolate_embeddings
    n, seq_length, hidden_dim = pos_embedding.shape
ValueError: not enough values to unpack (expected 3, got 2)
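For reference, here is a minimal sketch of where the unpack goes wrong. This is not GeoChat's code; it only assumes the standard `transformers` `CLIPVisionModel` layout, and the defensive fix at the end is an assumption for illustration, not the project's official patch:

```python
import torch
from transformers import CLIPVisionModel

# Load the same vision tower the script points at.
model = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14-336")

# In transformers, this weight is a 2D tensor: [num_positions, hidden_dim]
# (577 x 1024 for patch14-336: 24*24 patches + 1 class token).
pos_embedding = model.vision_model.embeddings.position_embedding.weight
print(pos_embedding.shape)  # torch.Size([577, 1024])

# clip_interpolate_embeddings unpacks three values, which fails on a 2D tensor:
#   n, seq_length, hidden_dim = pos_embedding.shape  -> ValueError (expected 3, got 2)
# One defensive workaround is to restore the batch dimension before unpacking:
if pos_embedding.dim() == 2:
    pos_embedding = pos_embedding.unsqueeze(0)  # [1, num_positions, hidden_dim]
n, seq_length, hidden_dim = pos_embedding.shape
```

If this reproduces on your setup, the interpolation helper was likely written against a checkpoint or library version that stores the position embedding with a leading batch dimension, which would point to a dependency-version mismatch rather than either of the two flags above.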
Which setting did I get wrong that prevents the model from loading?