
yolov7 C++ #20

Closed
Cuzny opened this issue Aug 1, 2022 · 10 comments
Labels
question Further information is requested

Comments

@Cuzny

Cuzny commented Aug 1, 2022

I followed the instructions and got this error. Is it caused by a version mismatch somewhere?
./yolov7 ../yolov7.engine -i ../../../../assets/dog.jpg
[08/01/2022-19:43:48] [E] [TRT] 1: [stdArchiveReader.cpp::StdArchiveReader::40] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 205, Serialized Engine Version: 213)
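For context, the two numbers in that message are TensorRT's internal serialization tags: "Current Version" belongs to the TensorRT runtime doing the loading, "Serialized Engine Version" to the TensorRT release that built the engine. A `.engine` file is only loadable by the same TensorRT release that serialized it, so a mismatch means rebuilding the engine rather than patching the code. A minimal sketch of spotting this from the log (the `versions_from_error` helper and its regex are illustrative, not part of TensorRT's API):

```python
import re

def versions_from_error(msg: str):
    """Extract the runtime and engine serialization tags from a TensorRT
    version-mismatch error message (format taken from the log above)."""
    m = re.search(r"Current Version: (\d+), Serialized Engine Version: (\d+)", msg)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

err = ("Serialization assertion stdVersionRead == serializationVersion failed."
       "Version tag does not match. Note: Current Version: 205, "
       "Serialized Engine Version: 213")
runtime_tag, engine_tag = versions_from_error(err)
# A mismatch means the .engine must be rebuilt with the exact TensorRT
# version that will run it; engines are not portable across releases.
print(runtime_tag != engine_tag)  # True -> rebuild the engine
```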

@CloudRider-pixel

CloudRider-pixel commented Aug 1, 2022

Hi,

I got the same error as you. I solved it by using docker image nvcr.io/nvidia/tensorrt:22.06-py3

To export, I used https://github.com/WongKinYiu.git:
python export.py --weights ./yolov7.pt --grid
Then:
/tensorrt/bin/trtexec --onnx=yolov7.onnx --minShapes=images:1x3x640x640 --optShapes=images:8x3x640x640 --maxShapes=images:8x3x640x640 --fp16 --workspace=4096 --saveEngine=yolov7-fp16-1x8x8.engine --timingCacheFile=timing.cache
// Test engine
./tensorrt/bin/trtexec --loadEngine=yolov7-fp16-1x8x8.engine
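For reference, the `--minShapes`/`--optShapes`/`--maxShapes` flags each take a `<input-name>:NxCxHxW` spec; `images` is the input name yolov7's export script assigns. A small sketch that assembles the same invocation, just to make the flag format explicit (the `trtexec_cmd` helper is hypothetical):

```python
def trtexec_cmd(onnx, engine, min_bs=1, opt_bs=8, max_bs=8, size=640):
    """Assemble the trtexec invocation above. The shape spec format is
    <input-name>:<N>x<C>x<H>x<W>; 'images' is assumed to be the ONNX
    input name produced by yolov7's export.py."""
    shape = lambda n: f"images:{n}x3x{size}x{size}"
    return [
        "trtexec",
        f"--onnx={onnx}",
        f"--minShapes={shape(min_bs)}",   # smallest batch the engine accepts
        f"--optShapes={shape(opt_bs)}",   # batch the kernels are tuned for
        f"--maxShapes={shape(max_bs)}",   # largest batch the engine accepts
        "--fp16",
        "--workspace=4096",
        f"--saveEngine={engine}",
    ]

print(" ".join(trtexec_cmd("yolov7.onnx", "yolov7-fp16-1x8x8.engine")))
```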

Nevertheless I ran into other issues afterwards and needed to make the following changes:

In yolov7.cpp add:

#include "NvInferPlugin.h"

and

initLibNvInferPlugins(&gLogger.getTRTLogger(), "");

just before IRuntime* runtime = createInferRuntime(gLogger);

In CMakeLists.txt replace:

target_link_libraries(yolov7 nvinfer)
with
target_link_libraries(yolov7 nvinfer nvinfer_plugin)

For now I'm still blocked with this issue:
void doInference(nvinfer1::IExecutionContext&, float*, float*, int, cv::Size): Assertion `engine.getNbBindings() == 2' failed
but as mentioned by @Linaom1214 it's an issue related to exporting to ONNX with NMS; I just have to find out how to export without NMS.
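For context on that assertion: the demo's `doInference` expects exactly two bindings (one input, one output), while an `--end2end` export adds the four EfficientNMS output tensors, giving five bindings in total. A sketch of the assumed binding layouts (the tensor names follow yolov7's export script; treat them as an assumption):

```python
def expected_bindings(end2end: bool) -> list:
    """Binding layout assumed from yolov7's export.py: a plain --grid
    export has one input and one fused output tensor, while --end2end
    replaces the raw output with the four EfficientNMS outputs."""
    if end2end:
        return ["images", "num_dets", "det_boxes", "det_scores", "det_classes"]
    return ["images", "output"]

# The demo asserts engine.getNbBindings() == 2, so only the
# non-end2end engine satisfies it:
print(len(expected_bindings(False)) == 2)  # True
print(len(expected_bindings(True)) == 2)   # False -> the assertion fires
```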

@jia0511

jia0511 commented Aug 1, 2022

Don't you test your open-source code before releasing it? The most basic build is missing the NvInferPlugin plugin, which causes a compilation error; second, the serialized model fails at engine.create_execution_context. How many people have hit similar problems? The official deployment docs point to this project, so please stop sending people down dead ends!!

@Linaom1214
Owner

> Don't you test your open-source code before releasing it? The most basic build is missing the NvInferPlugin plugin […]

I replied to you very clearly: the current C++ code does not support the NMS plugin. I have answered every single one of the issues you opened!!!!

@Linaom1214
Owner

Hi,

I got the same error as you. I followed the beginning of https://github.com/WongKinYiu/yolov7/tree/main/deploy/triton-inference-server and now I don't get this issue anymore:

// Pytorch Yolov7 -> ONNX with grid, EfficientNMS plugin and dynamic batch size
python export.py --weights ./yolov7.pt --grid --end2end --dynamic-batch --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640
// ONNX -> TensorRT with trtexec and docker
docker run -it --rm --gpus=all nvcr.io/nvidia/tensorrt:22.06-py3
// Copy onnx -> container:
docker cp yolov7.onnx :/workspace/
// Export with FP16 precision, min batch 1, opt batch 8 and max batch 8
./tensorrt/bin/trtexec --onnx=yolov7.onnx --minShapes=images:1x3x640x640 --optShapes=images:8x3x640x640 --maxShapes=images:8x3x640x640 --fp16 --workspace=4096 --saveEngine=yolov7-fp16-1x8x8.engine --timingCacheFile=timing.cache
// Test engine
./tensorrt/bin/trtexec --loadEngine=yolov7-fp16-1x8x8.engine
// Copy engine -> host:
docker cp :/workspace/yolov7-fp16-1x8x8.engine .

Nevertheless I ran into other issues afterwards and needed to make the following changes:

In yolov7.cpp add:

#include "NvInferPlugin.h"

and

initLibNvInferPlugins(&gLogger.getTRTLogger(), "");

just before IRuntime* runtime = createInferRuntime(gLogger);

In CMakeLists.txt replace:

target_link_libraries(yolov7 nvinfer)
with
target_link_libraries(yolov7 nvinfer nvinfer_plugin)

Please refer to #18. The real reason is that the yolov7 repo did not realize that the models supported by the code in this repo do not include the end-to-end (NMS) variant.

@Linaom1214
Owner

> I followed the instructions and got this error. Is it caused by a version mismatch? … Current Version: 205, Serialized Engine Version: 213 […]

I don't know why someone listed this project's v7 C++ code under end2end. I have already opened a PR against the v7 repo and believe they will fix it soon.

@Linaom1214 Linaom1214 pinned this issue Aug 2, 2022
@CloudRider-pixel

CloudRider-pixel commented Aug 2, 2022

> Please refer to #18. The real reason is that the yolov7 repo did not realize that the models supported by the code in this repo do not include the end-to-end (NMS) variant. […]

I agree sorry, I typed it too fast yesterday, I updated my answer and thanks for your work.

@Linaom1214
Owner

> I agree sorry, I typed it too fast yesterday, I updated my answer and thanks for your work. […]

> I followed the instructions and got this error … Current Version: 205, Serialized Engine Version: 213 […]

If the code from the yolov5-rt project works for you, I'd appreciate your feedback!

@Cuzny
Author

Cuzny commented Aug 2, 2022

The code from the yolov5-rt-stack project works. Alternatively, under torch 1.12.0 you can use export.py from the official yolov7 repo directly to export an ONNX file with NMS, then convert it to a .engine with the trtexec bundled with TensorRT 8.2 or later.
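When exporting without NMS baked into the model, the engine returns raw detections and NMS has to run on the host instead. A minimal greedy NMS sketch in plain Python (the 0.65 threshold echoes the `--iou-thres` flag used earlier in this thread; the `(x1, y1, x2, y2)` box format is an assumption):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thres=0.65):
    """Greedy NMS: keep the highest-scoring box, drop any box that
    overlaps a kept box above the threshold, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thres for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the near-duplicate box 1 is suppressed
```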

@Linaom1214
Owner

> The code from the yolov5-rt-stack project works […]

OK, thanks for the feedback. The PR I sent them earlier was not merged. All the related code has been strictly tested; the v7 repo inexplicably linked my v7 C++ demo, and I'm sorry for the trouble this caused! I'm not sure how much demand there is for end-to-end support; if needed, I'll consider a dedicated end-to-end branch.

@Linaom1214 Linaom1214 added the question Further information is requested label Aug 12, 2022
@Linaom1214
Owner

> The code from the yolov5-rt-stack project works […]

I have now added C++ support:
https://github.com/Linaom1214/TensorRT-For-YOLO-Series/blob/main/cpp/README.MD
