Unable to deserialize object Validation error #54

Open
prateethvnayak opened this issue Apr 22, 2019 · 2 comments
@prateethvnayak

I am trying something new here. I have the YOLOv2 model in frozen .pb format from TensorFlow, and I successfully converted it to an .mlmodel and got it working using this repo.

Now I have another model where I have quantized the weights in the frozen .pb from float32 to 8 bits: the numbers are still stored in float32 format, but they now take only 255 unique values, i.e. 8-bit quantization levels. This compresses the model.
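(Roughly, the quantization is like the sketch below: snap each weight tensor to 255 evenly spaced levels while keeping the storage dtype float32. The function and names here are only illustrative, not my exact code.)

    import numpy as np

    def quantize_to_levels(weights, num_levels=255):
        # Snap each value to one of `num_levels` evenly spaced levels,
        # but keep the result stored as float32.
        w_min, w_max = weights.min(), weights.max()
        if w_max == w_min:                      # constant tensor, nothing to do
            return weights.astype(np.float32)
        scale = (w_max - w_min) / (num_levels - 1)
        indices = np.round((weights - w_min) / scale)   # 0 .. num_levels-1
        return (indices * scale + w_min).astype(np.float32)

    w = np.random.randn(3, 3, 16, 32).astype(np.float32)
    wq = quantize_to_levels(w)
    print(wq.dtype, len(np.unique(wq)))   # float32, at most 255 distinct values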

I was able to successfully convert the model using the tf-coreml repo. Same as the float32 model.
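(For reference, the conversion call is roughly the sketch below; the 1x416x416x3 input shape is an assumption on my part, typical for YOLOv2, and the node names match the freezing snippet further down.)

    import tfcoreml

    # Convert the frozen TF graph to Core ML with tf-coreml
    # (tensor names need the ':0' suffix here).
    tfcoreml.convert(
        tf_model_path='frozen_model.pb',
        mlmodel_path='quantized_yolo.mlmodel',
        output_feature_names=['Convolutional_9/add:0'],
        input_name_shape_dict={'Input/Placeholder:0': [1, 416, 416, 3]})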

Now, on adding this model to the Xcode project, it gives this error:

    There was a problem decoding this CoreML document
    validation error: Unable to deserialize object

The model still has the float32 data type (and the same size). Any idea where it might be going wrong? Does the .mlmodel protobuf store weights differently when there are fewer unique values?

@hollance
Owner

That means the Core ML file is not a valid protobuf file, or that the contents of the file are not valid. Most likely something is wrong with the weights for the layers (since that's what you changed), but I can't see that from here. ;-)
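One way to narrow it down, just as a sketch (the file name is a placeholder): parse the file with coremltools on its own, then construct an MLModel from the spec, which runs validation (and, on macOS, compilation).

    import coremltools

    # Pure protobuf parse; fails if the file is not a valid Model message.
    spec = coremltools.utils.load_spec('quantized_yolo.mlmodel')
    print(spec.WhichOneof('Type'))          # e.g. 'neuralNetwork'

    # Constructing MLModel validates the spec, so if the contents are bad
    # this should surface the same kind of error Xcode shows.
    model = coremltools.models.MLModel(spec)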

@prateethvnayak
Author

I believe the weights of the layers are correct; after quantizing I still get the expected predictions on the TensorFlow side. Do you think the frozen .pb itself is not created correctly after the quantization?

    # Imports assumed for this snippet (TF 1.x)
    import tensorflow as tf
    from tensorflow.python.framework import dtypes
    from tensorflow.python.platform import gfile
    from tensorflow.python.tools import strip_unused_lib

    input_node_names = ["Input/Placeholder"]
    output_node_names = ["Convolutional_9/add"]
    print(input_node_names, "\n", output_node_names)

    # Fold the variables into constants so the graph is self-contained
    gdef = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)
    # Strip everything outside the input -> output subgraph
    gdef = strip_unused_lib.strip_unused(
        input_graph_def=gdef,
        input_node_names=input_node_names,
        output_node_names=output_node_names,
        placeholder_type_enum=dtypes.float32.as_datatype_enum)
    # Save it to an output file
    frozen_model_file = self.base_dir_out + '/frozen_model.pb'
    with gfile.GFile(frozen_model_file, "wb") as f:
        f.write(gdef.SerializeToString())

This is the same snippet I use to create the model both before and after quantizing. Do you think the frozen .pb from TF itself is wrong before I even bring it to the .mlmodel?
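(As a sanity check, something like the sketch below at least confirms the frozen .pb still parses and that the output node is present, before handing it to tf-coreml; TF 1.x API, file path is a placeholder.)

    import tensorflow as tf

    # Re-load the frozen graph and make sure it parses and imports cleanly.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('frozen_model.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as g:
        tf.import_graph_def(graph_def, name='')
        # Raises if the expected output tensor is missing from the graph.
        g.get_tensor_by_name('Convolutional_9/add:0')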
