The chunking script is now being refactored to use the `coremltools.models.utils.bisect_model()` API (#354).
Can you please try again with this coremltools API (coremltools==8.0b2)? If the issue persists, please open an issue on the coremltools GitHub with the code to reproduce it.
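A minimal sketch of calling the `bisect_model()` API mentioned above. The function and module path are from the thread; the parameter names and defaults are my best understanding of the coremltools >= 8.0b2 API and should be checked against your installed version, and the paths are placeholders:

```python
def chunk_model(model_path: str, out_dir: str) -> None:
    """Split an .mlpackage into two chunks with coremltools' bisect_model().

    Assumes coremltools >= 8.0b2; keyword arguments reflect the public API
    as best I know it (verify against your installed version's docs).
    """
    import coremltools as ct  # imported lazily so the sketch stays self-contained

    ct.models.utils.bisect_model(
        model_path,                       # source .mlpackage on disk
        out_dir,                          # directory to receive the chunk packages
        merge_chunks_to_pipeline=False,   # write the chunks as separate models
        check_output_correctness=True,    # compare chunked outputs vs. the original
    )
```

Usage would be something like `chunk_model("model.mlpackage", "./chunks")`, after which `out_dir` should contain the two chunk packages.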
Chunking a quantized model leads to unequal chunks: for example, a ~153 MB model gets chunked into a 153 MB chunk and a 2 KB chunk.
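A quick way to detect this failure mode is to compare the chunk sizes after splitting. This is a hypothetical helper, not part of coremltools; the tolerance value is an arbitrary choice for illustration:

```python
def chunks_balanced(sizes_bytes, tolerance=0.2):
    """Return True if the chunk sizes are within `tolerance` of each other.

    `tolerance` is the maximum allowed relative gap between the largest
    and smallest chunk (0.2 = 20%), chosen arbitrarily for this sketch.
    """
    largest = max(sizes_bytes)
    smallest = min(sizes_bytes)
    return (largest - smallest) / largest <= tolerance


# The ~153 MB / 2 KB split reported above fails this check:
print(chunks_balanced([153 * 1024 * 1024, 2 * 1024]))   # False
# An even split (e.g. 80 MB / 73 MB) passes:
print(chunks_balanced([80 * 1024 * 1024, 73 * 1024 * 1024]))  # True
```

Running a check like this after chunking makes the degenerate 153 MB / 2 KB split fail loudly instead of going unnoticed.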
How can I chunk a model with constant nodes (as produced by quantization)? The script might have trouble processing quantized consts.