PytorchStreamReader failed reading zip archive, but example documentation code does not produce zip file #2756

Closed
yangkl96 opened this issue Aug 17, 2023 · 2 comments
Labels
bug Something isn't working

Comments

yangkl96 commented Aug 17, 2023

Description

The documentation here (https://docs.djl.ai/docs/tensorflow/how_to_import_tensorflow_models_in_DJL.html) explains how to export a TensorFlow SavedModel in .pb format. I am then instructed to load the model (https://docs.djl.ai/docs/load_model.html), but the loader is looking for a zip archive rather than my .pb file.

Expected Behavior

DJL should identify and load the .pb SavedModel file.

Error Message

[main] INFO ai.djl.util.Platform - Found placeholder platform from: cpu-win-x86_64:2.0.1
[main] INFO ai.djl.pytorch.engine.PtEngine - PyTorch graph executor optimizer is enabled, this may impact your inference latency and throughput. See: https://docs.djl.ai/docs/development/inference_performance_optimization.html#graph-executor-optimization
[main] INFO ai.djl.pytorch.engine.PtEngine - Number of inter-op threads is 6
[main] INFO ai.djl.pytorch.engine.PtEngine - Number of intra-op threads is 6
Exception in thread "main" ai.djl.engine.EngineException: PytorchStreamReader failed reading zip archive: failed finding central directory
	at ai.djl.pytorch.jni.PyTorchLibrary.moduleLoad(Native Method)
	at ai.djl.pytorch.jni.JniUtils.loadModule(JniUtils.java:1719)
	at ai.djl.pytorch.engine.PtModel.load(PtModel.java:92)
	at ai.djl.repository.zoo.BaseModelLoader.loadModel(BaseModelLoader.java:161)
	at ai.djl.repository.zoo.Criteria.loadModel(Criteria.java:172)
	at Testing.main(Testing.java:25)

How to Reproduce?

Ran the following in Python:

import tensorflow as tf
from tensorflow import keras

lrelu = lambda x: tf.keras.activations.relu(x, alpha=0.1, max_value=20.0)
loaded_model = keras.models.load_model("model.hdf5", custom_objects={'<lambda>': lrelu})
tf.saved_model.save(loaded_model, "my_model/")

Then ran this in IntelliJ IDEA:

import ai.djl.MalformedModelException;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ModelNotFoundException;
import ai.djl.repository.zoo.ZooModel;

import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class Testing {
    public static void main(String[] args) throws IOException, MalformedModelException, ModelNotFoundException {
        Path modelDir = Paths.get("C:/path/to/my_model/");

        Criteria<Image, Classifications> criteria = Criteria.builder()
                .setTypes(Image.class, Classifications.class)
                .optModelPath(modelDir)
                .optModelName("saved_model.pb").build();

        ZooModel<Image, Classifications> model = criteria.loadModel();
    }
}

I have the following Maven dependencies loaded:

    <dependencies>
        <!-- https://mvnrepository.com/artifact/ai.djl.pytorch/pytorch-engine -->
        <dependency>
            <groupId>ai.djl.pytorch</groupId>
            <artifactId>pytorch-engine</artifactId>
            <version>0.23.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/ai.djl.pytorch/pytorch-native-cpu -->
        <dependency>
            <groupId>ai.djl.pytorch</groupId>
            <artifactId>pytorch-native-cpu</artifactId>
            <version>2.0.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-simple -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>1.7.36</version>
        </dependency>
    </dependencies>

yangkl96 (Author) commented

Figured out I had to use the TensorFlow engine instead of the PyTorch engine.

frankfliu (Contributor) commented

You need to use the TensorFlow engine to load the model.

Update your pom.xml: remove the PyTorch dependencies and add the TensorFlow engine (see the sketch below).
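
For reference, a minimal pom.xml change might look like the sketch below; the 0.23.0 version is taken from the pytorch-engine dependency above, and the exact artifact coordinates are an assumption to verify against the DJL documentation. The engine dependency alone is usually sufficient, since DJL can download the matching native TensorFlow libraries at runtime:

    <dependencies>
        <!-- Swap the PyTorch artifacts for the TensorFlow engine (coordinates assumed, please verify) -->
        <dependency>
            <groupId>ai.djl.tensorflow</groupId>
            <artifactId>tensorflow-engine</artifactId>
            <version>0.23.0</version>
        </dependency>
        <!-- slf4j-simple can stay for logging -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>1.7.36</version>
        </dependency>
    </dependencies>

If both engines end up on the classpath, Criteria.builder() also exposes optEngine("TensorFlow") to pin which engine loads the model.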
