
How to Perform Static Quantization Directly on an ONNX Model Using Intel® Neural Compressor? #2488

Open
morteza89 opened this issue Sep 17, 2024 · 0 comments
Labels
question Further information is requested

Comments


Hello,

I'm using Intel® Neural Compressor (INC) to perform static quantization on my custom PyTorch model. I followed this script, which demonstrates how to apply static quantization to a PyTorch model using INC.

My goal is to obtain the final quantized model in ONNX format. However, after quantization, saving the q_model results in a .pt file (PyTorch format). I also found that exporting quantized PyTorch models to ONNX is problematic due to limited support and compatibility issues, especially with static quantization.
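For context, this is a simplified version of the flow I am following (a toy model and random calibration data stand in for my real ones, and the calls are based on my understanding of the INC 2.x `fit` API):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# Toy stand-ins for my real model and calibration data
model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU())
calib_loader = DataLoader(
    TensorDataset(torch.randn(32, 8), torch.zeros(32)),
    batch_size=8,
)

# Post-training static quantization
conf = PostTrainingQuantConfig(approach="static")
q_model = fit(model=model, conf=conf, calib_dataloader=calib_loader)

# This writes a PyTorch checkpoint (best_model.pt) into the folder,
# not an ONNX file
q_model.save("./quantized_model")
```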

My questions:

1. Is there a way to perform static quantization directly on an ONNX model using Intel® Neural Compressor, so that the output is a quantized ONNX model?
2. Alternatively, is there a recommended way to export the statically quantized PyTorch model to ONNX format that works around these compatibility issues?

Any guidance or examples on how to achieve this would be greatly appreciated; a rough sketch of what I am hoping to write is included below.
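To make the goal concrete, this is roughly what I would like to be able to write. It is only a sketch of the intended usage, not something I have confirmed works; the file names are placeholders and `calib_loader` is the same calibration dataloader as above:

```python
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# Hoped-for flow: feed an FP32 ONNX model in, get a quantized ONNX model out
conf = PostTrainingQuantConfig(approach="static")
q_model = fit(
    model="model.onnx",              # path to my exported FP32 ONNX model
    conf=conf,
    calib_dataloader=calib_loader,   # calibration data for static quantization
)

# Ideally this would produce a quantized .onnx file directly
q_model.save("model_int8.onnx")
```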

Thank you!

@morteza89 morteza89 added the question Further information is requested label Sep 17, 2024