
Failed inference of custom yolov7 model due to broadcast of input array #137

obidare-folu opened this issue Aug 2, 2024 · 1 comment

@obidare-folu

Hello. I trained a custom yolov7 model and converted it to ONNX using https://github.com/WongKinYiu/yolov7/blob/main/export.py on a Jetson Xavier device with the flags `--img-size 512 512 --batch 1`.
The model has 3 classes.
I then cloned this repository and ran its export.py with `python3 export.py -o ../models/yolov7/runs/train/fold1/weights/onnx_trial.onnx -e ../models/yolov7/runs/train/fold1/weights/onnx_trial.trt` to convert the ONNX model into a TensorRT engine. These are the first and last few lines of the log:
[screenshot: start of the engine-conversion log]

[screenshot: end of the engine-conversion log]

I tried to run both video and image inference with `python3 trt.py -e ../models/yolov7/runs/train/fold1/weights/onnx_trial.trt -v 0`, but neither worked. Here is the output:

[screenshot: inference error output]

I have changed `self.n_classes` to 3 in trt.py and also in utils.py. I also tried using `--calib_batch_size 1`, but that didn't help.
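For context, the "could not broadcast input array" message in the title is the ValueError NumPy raises when a model output is copied into a host buffer of a different total size, which is what happens when the engine's output shape does not match what trt.py allocated. A minimal sketch with hypothetical shapes (the real sizes come from the TensorRT engine bindings, not from these numbers):

```python
import numpy as np

# Hypothetical: a host buffer allocated for 8 values per prediction
# (4 box coords + objectness + 3 class scores).
host_buffer = np.zeros(16128 * 8, dtype=np.float32)

# If the exported ONNX graph produces a different per-prediction width
# (here 7, purely for illustration), the flattened copy fails:
model_output = np.random.rand(1, 16128, 7).astype(np.float32)

try:
    np.copyto(host_buffer, model_output.ravel())
except ValueError as err:
    print(err)  # e.g. "could not broadcast input array from shape ... into shape ..."
```

So the mismatch is between the ONNX model's output layout and the layout the inference script expects, not a calibration setting.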

@Linaom1214 (Owner)

@obidare-folu
The ONNX model must include the decode result (the exported graph has to output decoded detections, not the raw detection-head tensors).
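If that is the cause, one way to get a decoded graph is to re-export from the yolov7 repo with its decode flags enabled. A sketch only: the weights path is a placeholder, and the `--grid`/`--end2end`/`--simplify` flags are from WongKinYiu/yolov7's export.py, so check that script's `--help` output for the exact names on your checkout:

```shell
# Run inside the WongKinYiu/yolov7 checkout.
# <path-to-trained-weights>.pt is hypothetical; substitute your own .pt file.
python3 export.py \
    --weights <path-to-trained-weights>.pt \
    --img-size 512 512 --batch 1 \
    --grid --end2end --simplify   # --grid bakes the decode step into the ONNX graph
```

Then feed the resulting ONNX file to this repository's export.py as before.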
