
Update runtime tutorial to promote Module APIs in the beginning. (#6198)
Summary: Pull Request resolved: #6198

Reviewed By: dbort

Differential Revision: D64352860

fbshipit-source-id: 907dbe5438737b1a14b30da94fd0b02510dee542
shoumikhin authored and facebook-github-bot committed Oct 15, 2024
1 parent 5c8b115 commit bff26f3
Showing 2 changed files with 3 additions and 5 deletions.
2 changes: 1 addition & 1 deletion docs/source/extension-module.md
@@ -240,6 +240,6 @@ if (auto* etdump = dynamic_cast<ETDumpGen*>(module.event_tracer())) {
}
```

# Conclusion
## Conclusion

The `Module` APIs provide a simplified interface for running ExecuTorch models in C++, closely resembling the experience of PyTorch's eager mode. By abstracting away the complexities of the lower-level runtime APIs, developers can focus on model execution without worrying about the underlying details.
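The eager-mode-like experience described above might look roughly like the following. This is an illustrative sketch, not copy-paste code: the model path is a placeholder, the input shape is invented, and helper names such as `from_blob` and the top-level `executorch::extension` namespace vary across ExecuTorch releases.

```cpp
// Sketch: running a model with the Module APIs (names assumed; verify
// against your ExecuTorch version).
#include <executorch/extension/module/module.h>
#include <executorch/extension/tensor/tensor.h>

using namespace ::executorch::extension;

int main() {
  // Loading the program is deferred until it is first needed.
  Module module("/path/to/model.pte");

  // Wrap raw input data in a tensor; shape and dtype are illustrative.
  float input[1 * 3 * 256 * 256] = {};
  auto tensor = from_blob(input, {1, 3, 256, 256});

  // Run the model's "forward" method and read the first output.
  const auto result = module.forward(tensor);
  if (result.ok()) {
    const auto output = result->at(0).toTensor();
    (void)output;  // use the output tensor here
  }
  return 0;
}
```

Note how memory planning, program loading, and method setup never appear in user code; the `Module` object handles them internally.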
6 changes: 2 additions & 4 deletions docs/source/running-a-model-cpp-tutorial.md
@@ -2,8 +2,7 @@

**Author:** [Jacob Szwejbka](https://github.com/JacobSzwejbka)

In this tutorial, we will cover the APIs to load an ExecuTorch model,
prepare the MemoryManager, set inputs, execute the model, and retrieve outputs.
In this tutorial, we will cover how to run an ExecuTorch model in C++ using the more detailed, lower-level APIs: prepare the `MemoryManager`, set inputs, execute the model, and retrieve outputs. However, if you’re looking for a simpler interface that works out of the box, consider trying the [Module Extension Tutorial](extension-module.md).

For a high level overview of the ExecuTorch Runtime please see [Runtime Overview](runtime-overview.md), and for more in-depth documentation on
each API please see the [Runtime API Reference](executorch-runtime-api-reference.rst).
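The lower-level flow named above (prepare the `MemoryManager`, set inputs, execute, retrieve outputs) can be outlined as follows. Error handling is elided, buffer sizes are arbitrary placeholders, and exact signatures depend on the ExecuTorch version, so treat this as an outline of the steps rather than a definitive implementation.

```cpp
// Sketch of the lower-level runtime flow (assumed names; check the
// Runtime API Reference for your release).
#include <executorch/extension/data_loader/file_data_loader.h>
#include <executorch/runtime/executor/program.h>
#include <executorch/runtime/platform/runtime.h>

using namespace ::executorch::runtime;

int main() {
  runtime_init();

  // 1. Load the program from a .pte file.
  auto loader = executorch::extension::FileDataLoader::from("/path/to/model.pte");
  auto program = Program::load(&loader.get());

  // 2. Build the MemoryManager: one allocator for runtime structures and
  //    a hierarchical allocator over the memory-planned activation buffers.
  static uint8_t method_allocator_pool[4 * 1024U * 1024U];
  MemoryAllocator method_allocator(
      sizeof(method_allocator_pool), method_allocator_pool);

  static uint8_t activation_pool[4 * 1024U * 1024U];  // size is model-specific
  Span<uint8_t> planned_spans[] = {{activation_pool, sizeof(activation_pool)}};
  HierarchicalAllocator planned_memory({planned_spans, 1});

  MemoryManager memory_manager(&method_allocator, &planned_memory);

  // 3. Load the method, set inputs, execute, and retrieve outputs.
  auto method = program->load_method("forward", &memory_manager);
  // method->set_input(input_evalue, /*input_idx=*/0);
  // method->execute();
  // EValue output = method->get_output(0);
  return 0;
}
```

Each of these steps is covered in detail in the sections below; the `Module` APIs wrap this entire sequence behind a single object.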
@@ -153,5 +152,4 @@ assert(output.isTensor());

## Conclusion

In this tutorial, we went over the APIs and steps required to load and perform an inference with an ExecuTorch model in C++.
Also, check out the [Simplified Runtime APIs Tutorial](extension-module.md).
This tutorial demonstrated how to run an ExecuTorch model using low-level runtime APIs, which offer granular control over memory management and execution. However, for most use cases, we recommend using the Module APIs, which provide a more streamlined experience without sacrificing flexibility. For more details, check out the [Module Extension Tutorial](extension-module.md).
