How do we finetune llava for object detection tasks, or for predicting a trajectory of actions? How would that work? I'd need some regression-based loss like MSE, right? And instead of outputting text, we'd want to output a set of coordinates. From other repos, it seems fine-tuning for regression tasks doesn't work well.
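For concreteness, here is a minimal PyTorch sketch of the setup the question describes: replacing text generation with a small regression head over the LLM's hidden states, trained with MSE against coordinate targets. All names here (`CoordinateHead`, the 4096 hidden size) are hypothetical and not part of multi_token or llava.

```python
# Hypothetical sketch: regress box coordinates from pooled LLM hidden states.
import torch
import torch.nn as nn

class CoordinateHead(nn.Module):
    """Maps pooled LLM hidden states to 4 box coordinates (x1, y1, x2, y2)."""
    def __init__(self, hidden_size: int, num_coords: int = 4):
        super().__init__()
        self.proj = nn.Linear(hidden_size, num_coords)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size); mean-pool over sequence.
        pooled = hidden_states.mean(dim=1)
        return self.proj(pooled)

head = CoordinateHead(hidden_size=4096)
hidden = torch.randn(2, 16, 4096)            # stand-in for LLM hidden states
pred = head(hidden)                           # (2, 4) predicted boxes
target = torch.rand(2, 4)                     # normalized ground-truth boxes
loss = nn.functional.mse_loss(pred, target)   # the regression loss in question
loss.backward()
```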
Hey! I would say multi_token/llava are best suited for taking the features from an image model and embedding them in a chat-able large language model.
If you want to do object detection (rather than chat-with-this-image type tasks), you are best off just using an existing architecture specialized for object detection: https://huggingface.co/docs/transformers/en/tasks/object_detection. If you then wanted to do something chat/text-based with the already-detected objects, you could pass the outputs of that object detection model to the LLM.
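A minimal sketch of that detect-then-chat flow, using the Hugging Face `object-detection` pipeline. The model choice, image path, and follow-up question are all placeholders for illustration:

```python
# Detect objects with a specialized model, then hand the results to an LLM as text.
from transformers import pipeline

detector = pipeline("object-detection", model="facebook/detr-resnet-50")
detections = detector("street.jpg")  # "street.jpg" is a placeholder image path

# Format detections as plain text so a chat LLM can reason over them.
lines = [
    f"{d['label']} (score {d['score']:.2f}) at "
    f"[{d['box']['xmin']}, {d['box']['ymin']}, {d['box']['xmax']}, {d['box']['ymax']}]"
    for d in detections
]
prompt = (
    "Detected objects:\n" + "\n".join(lines)
    + "\nWhich of these objects are closest to the camera?"
)
print(prompt)  # feed this prompt to the LLM of your choice
```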