From 143564accb6af453aabff54ca261bf7a8e1efad6 Mon Sep 17 00:00:00 2001
From: Joey Carter
Date: Thu, 7 Nov 2024 15:53:20 -0500
Subject: [PATCH] Replace pybind11 with nanobind in frontend (#1173)

**Context:**

The Catalyst frontend contains a small Python extension module at
[frontend/catalyst/utils/wrapper.cpp](https://github.com/PennyLaneAI/catalyst/blob/main/frontend/catalyst/utils/wrapper.cpp):
an importable Python module, written in C/C++, that wraps the entry point of
compiled programs. Its Python-C++ bindings were originally implemented using
pybind11. This PR is part of a larger effort to replace all pybind11 code with
nanobind.

**Description of the Change:**

This change replaces all the pybind11 code in the frontend with the equivalent
nanobind objects and operations. For compatibility with nanobind, the frontend
build system now builds the `wrapper` module with CMake, rather than in
`setup.py` with the `intree_extensions` setuptools utility included with
pybind11.

**Benefits:**

See Epic [68472](https://app.shortcut.com/xanaduai/epic/68472) for a list of
nanobind's benefits.

-----

[[sc-72837](https://app.shortcut.com/xanaduai/story/72837/replace-pybind11-with-nanobind-in-the-frontend)]

---------

Co-authored-by: Lee James O'Riordan
---
 .github/workflows/check-catalyst.yaml  |   6 +-
 .readthedocs.yaml                      |   3 +
 Makefile                               |   7 +-
 doc/releases/changelog-dev.md          |  12 +-
 frontend/CMakeLists.txt                |   5 +
 frontend/catalyst/CMakeLists.txt       |   1 +
 frontend/catalyst/utils/CMakeLists.txt |  46 +++++++
 frontend/catalyst/utils/wrapper.cpp    |  89 +++++++-------
 frontend/test/pytest/test_callback.py  |   6 +-
 pyproject.toml                         |   2 +-
 requirements.txt                       |   1 +
 setup.py                               | 160 ++++++++++++++++++++-----
 12 files changed, 257 insertions(+), 81 deletions(-)
 create mode 100644 frontend/CMakeLists.txt
 create mode 100644 frontend/catalyst/CMakeLists.txt
 create mode 100644 frontend/catalyst/utils/CMakeLists.txt

diff --git a/.github/workflows/check-catalyst.yaml b/.github/workflows/check-catalyst.yaml
index 73f7becf16..8b7d21706f 100644
--- a/.github/workflows/check-catalyst.yaml
+++ b/.github/workflows/check-catalyst.yaml
@@ -404,7 +404,7 @@ jobs:
       - name: Install Deps
         run: |
           sudo apt-get update
-          sudo apt-get install -y python3 python3-pip libomp-dev libasan6 make
+          sudo apt-get install -y python3 python3-pip libomp-dev libasan6 make ninja-build
           python3 --version | grep ${{ needs.constants.outputs.primary_python_version }}
           python3 -m pip install -r requirements.txt
           # cuda-quantum is added manually here.
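Before the file-by-file changes, a minimal sketch of the pybind11-to-nanobind mapping described above. This is an editor-added illustration, not part of the patch: the module name `example` and the functions `py_len` and `import_ctypes` are hypothetical, but each commented pairing mirrors a substitution that appears in the `wrapper.cpp` diff below.

```cpp
// Hypothetical module showing the pybind11 -> nanobind substitutions applied
// in this patch. Names here are illustrative only.
#include <nanobind/nanobind.h> // was: #include <pybind11/pybind11.h>

namespace nb = nanobind; // was: namespace py = pybind11;

// pybind11 casts via a method, nanobind via a free function:
//   obj.attr("__len__")().cast<size_t>() -> nb::cast<size_t>(obj.attr("__len__")())
size_t py_len(nb::object obj) { return nb::cast<size_t>(obj.attr("__len__")()); }

// The module and import spellings gain a trailing underscore:
//   py::module::import("ctypes") -> nb::module_::import_("ctypes")
nb::object import_ctypes() { return nb::module_::import_("ctypes"); }

// PYBIND11_MODULE(example, m) -> NB_MODULE(example, m)
NB_MODULE(example, m)
{
    m.doc() = "minimal nanobind module";
    m.def("py_len", &py_len, "Return len(obj).", nb::arg("obj"));
    m.def("import_ctypes", &import_ctypes, "Import and return the ctypes module.");
}
```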
@@ -481,7 +481,7 @@ jobs:
       - name: Install Deps
         run: |
           sudo apt-get update
-          sudo apt-get install -y python3 python3-pip libomp-dev libasan6 make
+          sudo apt-get install -y python3 python3-pip libomp-dev libasan6 make ninja-build
           python3 --version | grep ${{ needs.constants.outputs.primary_python_version }}
           python3 -m pip install -r requirements.txt
           make frontend
@@ -536,7 +536,7 @@ jobs:
       - name: Install Deps
         run: |
           sudo apt-get update
-          sudo apt-get install -y python3 python3-pip libomp-dev libasan6 make
+          sudo apt-get install -y python3 python3-pip libomp-dev libasan6 make ninja-build
           python3 --version | grep ${{ needs.constants.outputs.primary_python_version }}
           python3 -m pip install -r requirements.txt
           make frontend
diff --git a/.readthedocs.yaml b/.readthedocs.yaml
index 7ffb14b09f..0893c4bc7b 100644
--- a/.readthedocs.yaml
+++ b/.readthedocs.yaml
@@ -22,6 +22,9 @@ build:
     python: "3.10"
   apt_packages:
     - graphviz
+    - cmake
+    - ninja-build
+    - clang
 
 # Optionally set the version of Python and requirements required to build your docs
 python:
diff --git a/Makefile b/Makefile
index e1bf65a041..6b35435e05 100644
--- a/Makefile
+++ b/Makefile
@@ -69,6 +69,7 @@ help:
 	@echo "  test            to run the Catalyst test suites"
 	@echo "  docs            to build the documentation for Catalyst"
 	@echo "  clean           to uninstall Catalyst and delete all temporary and cache files"
+	@echo "  clean-frontend  to clean build files of Catalyst Frontend"
 	@echo "  clean-mlir      to clean build files of MLIR and custom Catalyst dialects"
 	@echo "  clean-runtime   to clean build files of Catalyst Runtime"
 	@echo "  clean-oqc       to clean build files of OQC Runtime"
@@ -201,12 +202,16 @@ clean:
 	rm -rf dist __pycache__
 	rm -rf .coverage coverage_html_report
 
-clean-all: clean-mlir clean-runtime clean-oqc
+clean-all: clean-frontend clean-mlir clean-runtime clean-oqc
 	@echo "uninstall catalyst and delete all temporary, cache, and build files"
 	$(PYTHON) -m pip uninstall -y pennylane-catalyst
 	rm -rf dist __pycache__
 	rm -rf .coverage coverage_html_report/
 
+.PHONY: clean-frontend
+clean-frontend:
+	find frontend/catalyst -name "*.so" -exec rm -v {} +
+
 .PHONY: clean-mlir clean-dialects clean-llvm clean-mhlo clean-enzyme
 clean-mlir:
 	$(MAKE) -C mlir clean
diff --git a/doc/releases/changelog-dev.md b/doc/releases/changelog-dev.md
index 3412e495bb..23119718c6 100644
--- a/doc/releases/changelog-dev.md
+++ b/doc/releases/changelog-dev.md
@@ -6,6 +6,14 @@

Improvements 🛠

+* Replace pybind11 with nanobind for C++/Python bindings in the frontend.
+  [(#1173)](https://github.com/PennyLaneAI/catalyst/pull/1173)
+
+  Nanobind has been developed as a natural successor to the pybind11 library and offers a number
+  of [advantages](https://nanobind.readthedocs.io/en/latest/why.html#major-additions); in
+  particular, it can target Python's
+  [stable ABI interface](https://docs.python.org/3/c-api/stable.html) starting with Python 3.12.
+
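The stable-ABI advantage called out in this entry is a build-time choice rather than a source-level one. A sketch (editor-added, with a hypothetical module `example`) of what the new frontend CMake configuration opts into for `wrapper`:

```cpp
// Nothing in the C++ source changes to target the stable ABI. Building this
// module with nanobind's CMake helper and the STABLE_ABI flag, i.e.
//
//   nanobind_add_module(example STABLE_ABI example.cpp)
//
// compiles against CPython's limited API on Python 3.12+, so a single binary
// (example.abi3.so) also loads on later Python 3.x releases; on older
// interpreters the flag is ignored and a version-specific module is built.
#include <nanobind/nanobind.h>

namespace nb = nanobind;

NB_MODULE(example, m) { m.def("add", [](int a, int b) { return a + b; }); }
```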

Breaking changes 💔

Deprecations 👋

@@ -16,4 +24,6 @@

Contributors ✍️

-This release contains contributions from (in alphabetical order):
\ No newline at end of file
+This release contains contributions from (in alphabetical order):
+
+Joey Carter
diff --git a/frontend/CMakeLists.txt b/frontend/CMakeLists.txt
new file mode 100644
index 0000000000..1687be61e5
--- /dev/null
+++ b/frontend/CMakeLists.txt
@@ -0,0 +1,5 @@
+cmake_minimum_required(VERSION 3.26)
+
+project(catalyst_frontend LANGUAGES CXX)
+
+add_subdirectory(catalyst)
diff --git a/frontend/catalyst/CMakeLists.txt b/frontend/catalyst/CMakeLists.txt
new file mode 100644
index 0000000000..512d2b1553
--- /dev/null
+++ b/frontend/catalyst/CMakeLists.txt
@@ -0,0 +1 @@
+add_subdirectory(utils)
diff --git a/frontend/catalyst/utils/CMakeLists.txt b/frontend/catalyst/utils/CMakeLists.txt
new file mode 100644
index 0000000000..5dd037086c
--- /dev/null
+++ b/frontend/catalyst/utils/CMakeLists.txt
@@ -0,0 +1,46 @@
+set(CMAKE_CXX_STANDARD 17)
+set(CMAKE_CXX_STANDARD_REQUIRED ON)
+
+find_package(Python 3
+    REQUIRED COMPONENTS Interpreter Development.Module
+    OPTIONAL_COMPONENTS Development.SABIModule)
+
+# nanobind suggests including these lines to configure CMake to perform an optimized release build
+# by default unless another build type is specified. Without this addition, binding code may run
+# slowly and produce large binaries.
+# See https://nanobind.readthedocs.io/en/latest/building.html#preliminaries
+if (NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
+    set(CMAKE_BUILD_TYPE Release CACHE STRING "Choose the type of build." FORCE)
+    set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS "Debug" "Release" "MinSizeRel" "RelWithDebInfo")
+endif()
+
+# Detect the installed nanobind package and import it into CMake
+execute_process(
+    COMMAND "${Python_EXECUTABLE}" -m nanobind --cmake_dir
+    OUTPUT_VARIABLE nanobind_ROOT OUTPUT_STRIP_TRAILING_WHITESPACE)
+
+find_package(nanobind CONFIG REQUIRED)
+
+# Get the NumPy include directory
+execute_process(
+    COMMAND "${Python_EXECUTABLE}" -c "import numpy; print(numpy.get_include())"
+    OUTPUT_VARIABLE NUMPY_INCLUDE_DIR
+    OUTPUT_STRIP_TRAILING_WHITESPACE
+)
+
+# Source file list for `wrapper` module
+set(WRAPPER_SRC_FILES
+    wrapper.cpp
+)
+
+# Create the Python `wrapper` module.
+# Target the stable ABI for Python 3.12+, which reduces the number of binary wheels that must be
+# built (`STABLE_ABI` does nothing on older Python versions).
+nanobind_add_module(wrapper STABLE_ABI ${WRAPPER_SRC_FILES})
+
+# Add the NumPy include directory to the library's include paths
+target_include_directories(wrapper PRIVATE ${NUMPY_INCLUDE_DIR})
+
+# Use the suffix ".so" rather than ".abi3.so" for the library file when targeting the stable ABI.
+# This is necessary for compatibility with setuptools build extensions.
+set_target_properties(wrapper PROPERTIES SUFFIX ".so")
diff --git a/frontend/catalyst/utils/wrapper.cpp b/frontend/catalyst/utils/wrapper.cpp
index 1f11fb141c..6b8e9b0401 100644
--- a/frontend/catalyst/utils/wrapper.cpp
+++ b/frontend/catalyst/utils/wrapper.cpp
@@ -13,16 +13,15 @@
 // limitations under the License.
 
 #include <csignal>
-#include <pybind11/pybind11.h>
-#include <pybind11/stl.h>
+#include <nanobind/nanobind.h>
 
 // TODO: Periodically check and increment version.
 // https://endoflife.date/numpy
 #define NPY_NO_DEPRECATED_API NPY_1_24_API_VERSION
-#include "numpy/arrayobject.h"
+#include "numpy/ndarrayobject.h"
 
-namespace py = pybind11;
+namespace nb = nanobind;
 
 struct memref_beginning_t {
     char *allocated;
@@ -50,7 +49,7 @@ size_t *to_sizes(char *base, size_t rank)
     size_t aligned = sizeof(void *);
     size_t offset = sizeof(size_t);
     size_t bytes_offset = allocated + aligned + offset;
-    return (size_t *)(base + bytes_offset);
+    return reinterpret_cast<size_t *>(base + bytes_offset);
 }
 
 size_t *to_strides(char *base, size_t rank)
@@ -64,7 +63,7 @@
     size_t offset = sizeof(size_t);
     size_t sizes = rank * sizeof(size_t);
     size_t bytes_offset = allocated + aligned + offset + sizes;
-    return (size_t *)(base + bytes_offset);
+    return reinterpret_cast<size_t *>(base + bytes_offset);
 }
 
 void free_wrap(PyObject *capsule)
@@ -76,7 +75,7 @@
 const npy_intp *npy_get_dimensions(char *memref, size_t rank)
 {
     size_t *sizes = to_sizes(memref, rank);
-    const npy_intp *dims = (npy_intp *)sizes;
+    const npy_intp *dims = reinterpret_cast<const npy_intp *>(sizes);
     return dims;
 }
 
@@ -87,41 +86,42 @@ const npy_intp *npy_get_strides(char *memref, size_t element_size, size_t rank)
         // memref strides are in terms of elements.
         // numpy strides are in terms of bytes.
         // Therefore multiply by element size.
-        strides[idx] *= (size_t)element_size;
+        strides[idx] *= element_size;
     }
-    npy_intp *npy_strides = (npy_intp *)strides;
+    npy_intp *npy_strides = reinterpret_cast<npy_intp *>(strides);
     return npy_strides;
 }
 
-py::list move_returns(void *memref_array_ptr, py::object result_desc, py::object transfer,
-                      py::dict numpy_arrays)
+nb::list move_returns(void *memref_array_ptr, nb::object result_desc, nb::object transfer,
+                      nb::dict numpy_arrays)
 {
-    py::list returns;
+    nb::list returns;
     if (result_desc.is_none()) {
         return returns;
     }
 
-    auto ctypes = py::module::import("ctypes");
+    auto ctypes = nb::module_::import_("ctypes");
     using f_ptr_t = bool (*)(void *);
-    f_ptr_t f_transfer_ptr = *((f_ptr_t *)ctypes.attr("addressof")(transfer).cast<size_t>());
+    f_ptr_t f_transfer_ptr = *((f_ptr_t *)nb::cast<size_t>(ctypes.attr("addressof")(transfer)));
 
     /* Data from the result description */
     auto ranks = result_desc.attr("_ranks_");
     auto etypes = result_desc.attr("_etypes_");
    auto sizes = result_desc.attr("_sizes_");
 
-    size_t memref_len = ranks.attr("__len__")().cast<size_t>();
+    size_t memref_len = nb::cast<size_t>(ranks.attr("__len__")());
     size_t offset = 0;
 
-    char *memref_array_bytes = (char *)(memref_array_ptr);
+    char *memref_array_bytes = reinterpret_cast<char *>(memref_array_ptr);
 
     for (size_t idx = 0; idx < memref_len; idx++) {
-        unsigned int rank_i = ranks.attr("__getitem__")(idx).cast<unsigned int>();
+        unsigned int rank_i = nb::cast<unsigned int>(ranks.attr("__getitem__")(idx));
         char *memref_i_beginning = memref_array_bytes + offset;
         offset += memref_size_based_on_rank(rank_i);
 
-        struct memref_beginning_t *memref = (struct memref_beginning_t *)memref_i_beginning;
+        struct memref_beginning_t *memref =
+            reinterpret_cast<struct memref_beginning_t *>(memref_i_beginning);
         bool is_in_rt_heap = f_transfer_ptr(memref->allocated);
 
         if (!is_in_rt_heap) {
@@ -133,15 +133,16 @@
             // The first case is guaranteed by the use of the flag --cp-global-memref
             //
             // Use the numpy_arrays dictionary which sets up the following map:
-            // integer (memory address) -> py::object (numpy array)
-            auto array_object = numpy_arrays.attr("__getitem__")((size_t)memref->allocated);
+            // integer (memory address) -> nb::object (numpy array)
+            auto array_object =
+                numpy_arrays.attr("__getitem__")(reinterpret_cast<size_t>(memref->allocated));
             returns.append(array_object);
             continue;
         }
 
         const npy_intp *dims = npy_get_dimensions(memref_i_beginning, rank_i);
 
-        size_t element_size = sizes.attr("__getitem__")(idx).cast<size_t>();
+        size_t element_size = nb::cast<size_t>(sizes.attr("__getitem__")(idx));
         const npy_intp *strides = npy_get_strides(memref_i_beginning, element_size, rank_i);
         auto etype_i = etypes.attr("__getitem__")(idx);
@@ -157,65 +158,70 @@ py::list move_returns(void *memref_array_ptr, py::object result_desc, py::object
             throw std::runtime_error("PyArray_NewFromDescr failed.");
         }
 
-        PyObject *capsule =
-            PyCapsule_New(memref->allocated, NULL, (PyCapsule_Destructor)&free_wrap);
+        PyObject *capsule = PyCapsule_New(memref->allocated, NULL,
+                                          reinterpret_cast<PyCapsule_Destructor>(&free_wrap));
         if (!capsule) {
             throw std::runtime_error("PyCapsule_New failed.");
         }
 
-        int retval = PyArray_SetBaseObject((PyArrayObject *)new_array, capsule);
+        int retval = PyArray_SetBaseObject(reinterpret_cast<PyArrayObject *>(new_array), capsule);
         bool success = 0 == retval;
         if (!success) {
             throw std::runtime_error("PyArray_SetBaseObject failed.");
         }
 
-        returns.append(new_array);
+        returns.append(nb::borrow(new_array)); // nb::borrow increments ref count by 1
 
         // Now we insert the array into the dictionary.
         // This dictionary is a map of the type:
-        // integer (memory address) -> py::object (numpy array)
+        // integer (memory address) -> nb::object (numpy array)
         //
         // Upon first entry into this function, it holds the numpy.arrays
         // sent as an input to the generated function.
         // Upon following entries it is extended with the numpy.arrays
         // which are the output of the generated function.
-        PyObject *pyLong = PyLong_FromLong((size_t)memref->allocated);
+        PyObject *pyLong = PyLong_FromLong(reinterpret_cast<size_t>(memref->allocated));
         if (!pyLong) {
             throw std::runtime_error("PyLong_FromLong failed.");
         }
 
-        numpy_arrays[pyLong] = new_array;
+        numpy_arrays[pyLong] = nb::borrow(new_array); // nb::borrow increments ref count by 1
 
+        // Decrement reference counts.
+        // The final ref count of `new_array` should be 2: one for the `returns` list and one for
+        // the `numpy_arrays` dict.
         Py_DECREF(pyLong);
         Py_DECREF(new_array);
     }
 
     return returns;
 }
 
-py::list wrap(py::object func, py::tuple py_args, py::object result_desc, py::object transfer,
-              py::dict numpy_arrays)
+nb::list wrap(nb::object func, nb::tuple py_args, nb::object result_desc, nb::object transfer,
+              nb::dict numpy_arrays)
 {
     // Install signal handler to catch user interrupts (e.g. CTRL-C).
     signal(SIGINT, [](int code) { throw std::runtime_error("KeyboardInterrupt (SIGINT)"); });
 
-    py::list returns;
+    nb::list returns;
 
-    size_t length = py_args.attr("__len__")().cast<size_t>();
+    size_t length = nb::cast<size_t>(py_args.attr("__len__")());
     if (length != 2) {
         throw std::invalid_argument("Invalid number of arguments.");
     }
 
-    auto ctypes = py::module::import("ctypes");
+    auto ctypes = nb::module_::import_("ctypes");
     using f_ptr_t = void (*)(void *, void *);
-    f_ptr_t f_ptr = *reinterpret_cast<f_ptr_t *>(ctypes.attr("addressof")(func).cast<size_t>());
+    f_ptr_t f_ptr = *reinterpret_cast<f_ptr_t *>(nb::cast<size_t>(ctypes.attr("addressof")(func)));
 
     auto value0 = py_args.attr("__getitem__")(0);
-    void *value0_ptr = *reinterpret_cast<void **>(ctypes.attr("addressof")(value0).cast<size_t>());
+    void *value0_ptr =
+        *reinterpret_cast<void **>(nb::cast<size_t>(ctypes.attr("addressof")(value0)));
     auto value1 = py_args.attr("__getitem__")(1);
-    void *value1_ptr = *reinterpret_cast<void **>(ctypes.attr("addressof")(value1).cast<size_t>());
+    void *value1_ptr =
+        *reinterpret_cast<void **>(nb::cast<size_t>(ctypes.attr("addressof")(value1)));
 
     {
-        py::gil_scoped_release lock;
+        nb::gil_scoped_release lock;
         f_ptr(value0_ptr, value1_ptr);
     }
     returns = move_returns(value0_ptr, result_desc, transfer, numpy_arrays);
@@ -223,13 +229,16 @@ py::list wrap(py::object func, py::tuple py_args, py::object result_desc, py::ob
 
     return returns;
 }
 
-PYBIND11_MODULE(wrapper, m)
+NB_MODULE(wrapper, m)
 {
     m.doc() = "wrapper module";
 
-    m.def("wrap", &wrap, "A wrapper function.");
+    // We have to annotate all the arguments to `wrap` to allow `result_desc` to be None.
+    // See https://nanobind.readthedocs.io/en/latest/functions.html#none-arguments
+    m.def("wrap", &wrap, "A wrapper function.", nb::arg("func"), nb::arg("py_args"),
+          nb::arg("result_desc").none(), nb::arg("transfer"), nb::arg("numpy_arrays"));
 
     int retval = _import_array();
     bool success = retval >= 0;
     if (!success) {
-        throw pybind11::import_error("Couldn't import numpy array C-API.");
+        throw nb::import_error("Could not import numpy array C-API.");
     }
 }
diff --git a/frontend/test/pytest/test_callback.py b/frontend/test/pytest/test_callback.py
index b1a36c1fb9..7854353cc9 100644
--- a/frontend/test/pytest/test_callback.py
+++ b/frontend/test/pytest/test_callback.py
@@ -233,7 +233,7 @@ def identity(arg) -> int:
     def cir(x):
         return identity(x)
 
-    with pytest.raises(TypeError, match="Callback identity expected type"):
+    with pytest.raises(RuntimeError, match="TypeError: Callback identity expected type"):
         cir(arg)
 
 
@@ -334,7 +334,7 @@ def cir(x):
     captured = capsys.readouterr()
     assert captured.out.strip() == ""
 
-    with pytest.raises(ValueError, match="debug.callback is expected to return None"):
+    with pytest.raises(RuntimeError, match="ValueError: debug.callback is expected to return None"):
         cir(0)
 
 
@@ -953,7 +953,7 @@ def result(x):
         return jnp.sum(some_func(jnp.sin(x)))
 
     x = 0.435
-    with pytest.raises(TypeError, match="Callback some_func expected type"):
+    with pytest.raises(RuntimeError, match="TypeError: Callback some_func expected type"):
         result(x)
 
 
diff --git a/pyproject.toml b/pyproject.toml
index 70a9b4e9c2..c35d71951a 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -39,7 +39,7 @@ known_third_party = ["diastatic-malt", "jax", "jaxlib", "numpy", "pennylane"]
 skips = ["B607"]
 
 [build-system]
-requires = ["setuptools>=62", "wheel", "pybind11>=2.12.0", "numpy!=2.0.0"]
+requires = ["setuptools>=62", "wheel", "pybind11>=2.12.0", "numpy!=2.0.0", "nanobind", "cmake", "ninja"]
 build-backend = "setuptools.build_meta"
 
 [tool.pytest.ini_options]
diff --git a/requirements.txt b/requirements.txt
index 93766ff37e..9e588ad4ea 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,6 +5,7 @@ pip>=22.3
 # Build dependencies for non-Python components
 # Do not allow NumPy 2.0.0 due to a bug with their C API that blocks the usage of the Stable ABI;
 # this bug was fixed in 2.0.1 (https://github.com/numpy/numpy/pull/26995)
+nanobind
 numpy!=2.0.0
 pybind11>=2.12.0
 PyYAML
diff --git a/setup.py b/setup.py
index f6eb09afb2..bec7fb36f6 100644
--- a/setup.py
+++ b/setup.py
@@ -17,13 +17,12 @@
 # pylint: disable=wrong-import-order
 
 import glob
+import os
 import platform
 import subprocess
-from os import path
+import sys
 from typing import Optional
 
-import numpy as np
-from pybind11.setup_helpers import intree_extensions
 from setuptools import Extension, find_namespace_packages, setup
 from setuptools._distutils import sysconfig
 from setuptools.command.build_ext import build_ext
@@ -36,17 +35,17 @@
     from subprocess import check_output
 
     REVISION = (
-        check_output(["/usr/bin/env", "git", "rev-parse", "HEAD"], cwd=path.dirname(__file__))
+        check_output(["/usr/bin/env", "git", "rev-parse", "HEAD"], cwd=os.path.dirname(__file__))
         .decode()
         .strip()
     )
 except Exception:  # pylint: disable=broad-exception-caught  # pragma: no cover
     REVISION = None
 
-with open(path.join("frontend", "catalyst", "_version.py"), encoding="utf-8") as f:
+with open(os.path.join("frontend", "catalyst", "_version.py"), encoding="utf-8") as f:
     version = f.readlines()[-1].split()[-1].strip("\"'")
 
-with open(path.join("frontend", "catalyst", "_revision.py"), "w", encoding="utf-8") as f:
+with open(os.path.join("frontend", "catalyst", "_revision.py"), "w", encoding="utf-8") as f:
     f.write("# AUTOGENERATED by setup.py!\n")
     f.write(f"__revision__ = '{REVISION}'\n")
 
@@ -127,40 +126,139 @@
 }
 
 
-class CustomBuildExtLinux(build_ext):
-    """Override build ext from setuptools in order to remove the architecture/python
-    version suffix of the library name."""
+class CMakeExtension(Extension):
+    """A setuptools Extension class for modules with a CMake configuration."""
 
-    def get_ext_filename(self, fullname):
-        filename = super().get_ext_filename(fullname)
-        suffix = sysconfig.get_config_var("EXT_SUFFIX")
-        extension = path.splitext(filename)[1]
-        return filename.replace(suffix, "") + extension
+    def __init__(self, name, sourcedir=""):
+        super().__init__(name, sources=[])
+        self.sourcedir = os.path.abspath(sourcedir)
+
+
+class UnifiedBuildExt(build_ext):
+    """Custom build extension class for the Catalyst Frontend.
+
+    This class overrides a number of methods from its parent class
+    setuptools.command.build_ext.build_ext, the most important of which are:
+
+    1. `get_ext_filename`, in order to remove the architecture/python
+       version suffix of the library name.
+    2. `build_extension`, in order to handle the compilation of extensions
+       with CMake configurations, namely the catalyst.utils.wrapper module,
+       and of generic C/C++ extensions without a CMake configuration, namely
+       the catalyst.utils.libcustom_calls module, which is currently built
+       as a plain setuptools Extension.
 
-class CustomBuildExtMacos(build_ext):
-    """Override build ext from setuptools in order to change to remove the architecture/python
-    version suffix of the library name and to change the LC_ID_DYLIB that otherwise is constant
-    and equal to where the shared library was created."""
+    TODO: Eventually it would be better to build the utils.libcustom_calls
+    module using a CMake configuration as well, rather than as a setuptools
+    Extension.
+ """ + + def initialize_options(self): + super().initialize_options() + self.define = None + self.verbosity = "" + + def finalize_options(self): + # Parse the custom CMake options and store them in a new attribute + defines = [] if self.define is None else self.define.split(";") + self.cmake_defines = [ # pylint: disable=attribute-defined-outside-init + f"-D{define}" for define in defines + ] + if self.verbosity != "": + self.verbosity = "--verbose" # pylint: disable=attribute-defined-outside-init + + super().finalize_options() def get_ext_filename(self, fullname): filename = super().get_ext_filename(fullname) suffix = sysconfig.get_config_var("EXT_SUFFIX") - extension = path.splitext(filename)[1] + extension = os.path.splitext(filename)[1] return filename.replace(suffix, "") + extension + def build_extension(self, ext): + if isinstance(ext, CMakeExtension): + self.build_cmake_extension(ext) + else: + super().build_extension(ext) + + def build_cmake_extension(self, ext: CMakeExtension): + """Configure and build CMake extension.""" + cmake_path = "cmake" + ninja_path = "ninja" + + try: + subprocess.check_output([cmake_path, "--version"]) + except subprocess.CalledProcessError as err: + raise RuntimeError( + f"'{cmake_path} --version' failed: check CMake installation" + ) from err + + try: + subprocess.check_output([ninja_path, "--version"]) + except subprocess.CalledProcessError as err: + raise RuntimeError( + f"'{ninja_path} --version' failed: check Ninja installation" + ) from err + + extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name))) + debug = int(os.environ.get("DEBUG", 0)) if self.debug is None else self.debug + build_type = "Debug" if debug else "RelWithDebInfo" + configure_args = [ + f"-DCMAKE_LIBRARY_OUTPUT_DIRECTORY={extdir}", + f"-DCMAKE_BUILD_TYPE={build_type}", + f"-DCMAKE_MAKE_PROGRAM={ninja_path}", + ] + configure_args += ( + [f"-DPYTHON_EXECUTABLE={sys.executable}"] + if platform.system() != "Darwin" + else [f"-DPython_EXECUTABLE={sys.executable}"] + ) + + configure_args += self.cmake_defines + + if "CMAKE_ARGS" in os.environ: + configure_args += os.environ["CMAKE_ARGS"].split(" ") + + build_temp = os.path.abspath(self.build_temp) + os.makedirs(build_temp, exist_ok=True) + + build_args = ["--config", "Debug"] if debug else ["--config", "RelWithDebInfo"] + build_args += ["--", f"-j{os.cpu_count()}"] + + subprocess.check_call( + [cmake_path, "-G", "Ninja", ext.sourcedir] + configure_args, cwd=build_temp + ) + subprocess.check_call([cmake_path, "--build", "."] + build_args, cwd=build_temp) + + +class CustomBuildExtLinux(UnifiedBuildExt): + """Custom build extension class for Linux platforms + + Currently no extra work needs to be performed with respect to the base class + UnifiedBuildExt. + """ + + +class CustomBuildExtMacos(UnifiedBuildExt): + """Custom build extension class for macOS platforms + + In addition to the work performed by the base class UnifiedBuildExt, this + class also changes the LC_ID_DYLIB that is otherwise constant and equal to + where the shared library was created. 
+ """ + def run(self): # Run the original build_ext command - build_ext.run(self) + super().run() # Construct library name based on ext suffix (contains python version, architecture and .so) library_name = "libcustom_calls.so" - package_root = path.dirname(__file__) + package_root = os.path.dirname(__file__) frontend_path = glob.glob( - path.join(package_root, "frontend", "**", library_name), recursive=True + os.path.join(package_root, "frontend", "**", library_name), recursive=True ) - build_path = glob.glob(path.join("build", "**", library_name), recursive=True) + build_path = glob.glob(os.path.join("build", "**", library_name), recursive=True) lib_with_r_path = "@rpath/libcustom_calls.so" original_path = frontend_path[0] if frontend_path else build_path[0] @@ -203,16 +301,14 @@ def run(self): cmdclass = {"build_ext": CustomBuildExtMacos} -ext_modules = [custom_calls_extension] +project_root_dir = os.path.abspath(os.path.dirname(__file__)) +frontend_dir = os.path.join(project_root_dir, "frontend") + +ext_modules = [ + custom_calls_extension, + CMakeExtension("catalyst.utils.wrapper", sourcedir=frontend_dir), +] -lib_path_npymath = path.join(np.get_include(), "..", "lib") -intree_extension_list = intree_extensions(["frontend/catalyst/utils/wrapper.cpp"]) -for ext in intree_extension_list: - ext._add_ldflags(["-L", lib_path_npymath]) # pylint: disable=protected-access - ext._add_ldflags(["-lnpymath"]) # pylint: disable=protected-access - ext._add_cflags(["-I", np.get_include()]) # pylint: disable=protected-access - ext._add_cflags(["-std=c++17"]) # pylint: disable=protected-access -ext_modules.extend(intree_extension_list) # For any compiler packages seeking to be registered in PennyLane, it is imperative that they # expose the entry_points metadata under the designated group name `pennylane.compilers`, with # the following entry points: