Search before asking
I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
As we know, YOLO supports both square and rectangular images. However, for speed and dataset-size reasons, I want to crop my images from 1280x1280 down to 640x640. Since YOLO annotations/labels are normalized to the original image's width and height, how can I keep the labels consistent between the original and cropped images for training, without re-annotating the dataset?
Thanks in advance!
Additional
No response
👋 Hello @andualemw1, thank you for your interest in YOLOv5 🚀! An Ultralytics engineer will assist you soon.
To get started with cropping images while retaining annotations, you might find our ⭐️ Tutorials helpful. You can explore guides for tasks such as Custom Data Training where managing image sizes and annotations is discussed.
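The core step when cropping images while retaining annotations is remapping each normalized box into the crop's coordinate frame: convert the box to absolute pixels, clip it to the crop window, drop boxes that fall entirely outside, and re-normalize against the crop's dimensions. Below is a minimal sketch of that idea; the function name and the tuple-based label format `(class, cx, cy, w, h)` are illustrative assumptions, not part of the YOLOv5 API:

```python
def remap_yolo_labels(labels, img_w, img_h, crop_x, crop_y, crop_w, crop_h):
    """Remap normalized YOLO boxes (cls, cx, cy, w, h) from the full image
    to a crop window. Boxes entirely outside the crop are dropped; boxes
    partially inside are clipped to the crop boundary.

    Note: this is an illustrative sketch, not a YOLOv5 utility.
    """
    remapped = []
    for cls, cx, cy, w, h in labels:
        # Normalized center/size -> absolute pixel corners in the full image
        x1 = cx * img_w - w * img_w / 2
        y1 = cy * img_h - h * img_h / 2
        x2 = cx * img_w + w * img_w / 2
        y2 = cy * img_h + h * img_h / 2

        # Clip the box to the crop window
        x1c = max(x1, crop_x)
        y1c = max(y1, crop_y)
        x2c = min(x2, crop_x + crop_w)
        y2c = min(y2, crop_y + crop_h)
        if x2c <= x1c or y2c <= y1c:
            continue  # box lies entirely outside the crop

        # Re-normalize center and size relative to the crop
        ncx = ((x1c + x2c) / 2 - crop_x) / crop_w
        ncy = ((y1c + y2c) / 2 - crop_y) / crop_h
        nw = (x2c - x1c) / crop_w
        nh = (y2c - y1c) / crop_h
        remapped.append((cls, ncx, ncy, nw, nh))
    return remapped
```

For example, a center crop of a 1280x1280 image to 640x640 uses `crop_x = crop_y = 320`; a box centered in the original image stays centered in the crop, while its normalized width and height double because the crop is half the original size in each dimension. Heavily clipped boxes (e.g. only a sliver remaining inside the crop) are often filtered out by an area threshold before training, since a small fragment may no longer represent the object.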
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
For custom training questions, provide as much information as possible, including dataset image examples and training logs. Verify you are following our Tips for Best Training Results.
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export, and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
Introducing YOLOv8 🚀
Explore our latest object detection model, YOLOv8 🚀! Designed for speed and accuracy, perfect for a wide range of tasks. Discover more in our YOLOv8 Docs and get started with:
pip install ultralytics
Feel free to provide further details to help us address your question! 🔍