From ea180034889e9650ca3733cf26b806a524399363 Mon Sep 17 00:00:00 2001
From: ishaghosh27 <94150575+ishaghosh27@users.noreply.github.com>
Date: Thu, 22 Aug 2024 11:21:02 -0400
Subject: [PATCH 1/3] Update README.md

---
 AI-and-Analytics/Getting-Started-Samples/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/AI-and-Analytics/Getting-Started-Samples/README.md b/AI-and-Analytics/Getting-Started-Samples/README.md
index a8d82bd7da..b382719e24 100644
--- a/AI-and-Analytics/Getting-Started-Samples/README.md
+++ b/AI-and-Analytics/Getting-Started-Samples/README.md
@@ -23,7 +23,7 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
|Classical Machine Learning| Intel® Optimization for XGBoost* | [IntelPython_XGBoost_GettingStarted](IntelPython_XGBoost_GettingStarted) | Set up and trains an XGBoost* model on datasets for prediction.
|Classical Machine Learning| daal4py | [IntelPython_daal4py_GettingStarted](IntelPython_daal4py_GettingStarted) | Batch linear regression using the Python API package daal4py from oneAPI Data Analytics Library (oneDAL).
|Deep Learning
Inference Optimization| Intel® Optimization for TensorFlow* | [IntelTensorFlow_GettingStarted](IntelTensorFlow_GettingStarted) | A simple training example for TensorFlow.
-|Deep Learning
Inference Optimization|Intel® Extension of PyTorch | [IntelPyTorch_GettingStarted](Intel_Extension_For_PyTorch_GettingStarted) | A simple training example for Intel® Extension of PyTorch.
+|Deep Learning
Inference Optimization|Intel® Extension for PyTorch* | [IntelPyTorch_GettingStarted](https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/inference/python/jupyter-notebooks) | A simple training example for Intel® Extension for PyTorch*.
|Classical Machine Learning| Scikit-learn (OneDAL) | [Intel_Extension_For_SKLearn_GettingStarted](Intel_Extension_For_SKLearn_GettingStarted) | Speed up a scikit-learn application using Intel oneDAL.
|Deep Learning
Inference Optimization|Intel® Extension of TensorFlow | [Intel® Extension For TensorFlow GettingStarted](Intel_Extension_For_TensorFlow_GettingStarted) | Guides users how to run a TensorFlow inference workload on both GPU and CPU.
|Deep Learning Inference Optimization|oneCCL Bindings for PyTorch | [Intel oneCCL Bindings For PyTorch GettingStarted](Intel_oneCCL_Bindings_For_PyTorch_GettingStarted) | Guides users through the process of running a simple PyTorch* distributed workload on both GPU and CPU. |

From 0806dcb912874ce2d5e1eb555850349daa2b5e24 Mon Sep 17 00:00:00 2001
From: ishaghosh27 <94150575+ishaghosh27@users.noreply.github.com>
Date: Thu, 22 Aug 2024 11:23:49 -0400
Subject: [PATCH 2/3] Update README.md

---
 .../README.md | 101 +++++++++++++++---
 1 file changed, 86 insertions(+), 15 deletions(-)

diff --git a/AI-and-Analytics/Getting-Started-Samples/Intel_oneCCL_Bindings_For_PyTorch_GettingStarted/README.md b/AI-and-Analytics/Getting-Started-Samples/Intel_oneCCL_Bindings_For_PyTorch_GettingStarted/README.md
index 9de6c41275..d0b22d0fde 100644
--- a/AI-and-Analytics/Getting-Started-Samples/Intel_oneCCL_Bindings_For_PyTorch_GettingStarted/README.md
+++ b/AI-and-Analytics/Getting-Started-Samples/Intel_oneCCL_Bindings_For_PyTorch_GettingStarted/README.md
@@ -2,8 +2,9 @@
The oneAPI Collective Communications Library Bindings for PyTorch* (oneCCL Bindings for PyTorch*) holds PyTorch bindings maintained by Intel for the Intel® oneAPI Collective Communications Library (oneCCL).
-| Area | Description
+| Property | Description
|:--- |:---
+| Category | Getting Started
| What you will learn | How to get started with oneCCL Bindings for PyTorch*
| Time to complete | 60 minutes
@@ -34,39 +35,107 @@ The Jupyter Notebook also demonstrates how to change PyTorch* distributed worklo
>- [Intel® oneCCL Bindings for PyTorch*](https://github.com/intel/torch-ccl)
>- [Distributed Training with oneCCL in PyTorch*](https://github.com/intel/optimized-models/tree/master/pytorch/distributed)
+## Environment Setup
+You will need to download and install the following toolkits, tools, and components to use the sample.
+
-## Run the `oneCCL Bindings for PyTorch* Getting Started` Sample
+**1. Get AI Tools**
-Go to the section which corresponds to the installation method chosen in [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html) to see relevant instructions:
-* [AI Tools Offline Installer (Validated)](#ai-tools-offline-installer-validated)
-* [Docker](#docker)
+
+Required AI Tools: Intel® Extension for PyTorch* - (CPU or GPU)
+
+If you have not already done so, select and install these tools via the [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html). AI and Analytics samples are validated on the AI Tools Offline Installer. It is recommended to select the Offline Installer option in the AI Tools Selector.
+
+>**Note**: If the Docker option is chosen in the AI Tools Selector, refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the Docker container and the samples.
+
+**2. (Offline Installer) Activate the AI Tools bundle base environment**
-### AI Tools Offline Installer (Validated)
-1. If you have not already done so, activate the AI Tools bundle base environment. 

If you used the default location to install AI Tools, open a terminal and type the following
+If the default path is used during the installation of AI Tools:
```
source $HOME/intel/oneapi/intelpython/bin/activate
```
-If you used a separate location, open a terminal and type the following
+If a non-default path is used:
```
source <custom_path>/bin/activate
```
-2. Clone the GitHub repository and install required packages:
+
+**3. (Offline Installer) Activate relevant Conda environment**
+
+For CPU
+```
+conda activate pytorch
+```
+For GPU
+```
+conda activate pytorch-gpu
+```
+
+**4. Clone the GitHub repository**
+
```
git clone https://github.com/oneapi-src/oneAPI-samples.git
-cd oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/Intel_oneCCL_Bindings_For_PyTorch_GettingStarted/
+cd oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/Intel_oneCCL_Bindings_For_PyTorch_GettingStarted
+```
+
+**5. Install dependencies**
+
+>**Note**: Before running the following commands, make sure your Conda/Python environment with AI Tools installed is activated.
+
```
pip install -r requirements.txt
+pip install notebook
+```
+For Jupyter Notebook, refer to [Installing Jupyter](https://jupyter.org/install) for detailed installation instructions.
+
+## Run the Sample
+>**Note**: Before running the sample, make sure [Environment Setup](#environment-setup) is completed.
+
+Go to the section which corresponds to the installation method chosen in [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html) to see relevant instructions:
+* [AI Tools Offline Installer (Validated)](#ai-tools-offline-installer-validated)
+* [Docker](#docker)
+
+### AI Tools Offline Installer (Validated)
+
+**1. Register Conda kernel to Jupyter Notebook kernel**
+
+**For CPU**
+
+If the default path is used during the installation of AI Tools:
+
+```
+$HOME/intel/oneapi/intelpython/envs/pytorch/bin/python -m ipykernel install --user --name=pytorch
+```
+
+If a non-default path is used:
+```
+<custom_path>/bin/python -m ipykernel install --user --name=pytorch
+```
+
+**For GPU**
+
+If the default path is used during the installation of AI Tools:
+
+```
+$HOME/intel/oneapi/intelpython/envs/pytorch-gpu/bin/python -m ipykernel install --user --name=pytorch-gpu
+```
+
+If a non-default path is used:
+```
+<custom_path>/bin/python -m ipykernel install --user --name=pytorch-gpu
+```
+**2. Launch Jupyter Notebook.**
```
jupyter notebook --ip=0.0.0.0 --port 8888 --allow-root
```
-4. Follow the instructions to open the URL with the token in your browser.
-5. Locate and select the Notebook.
+**3. Follow the instructions to open the URL with the token in your browser.**
+
+**4. Select the Notebook.**
```
oneCCL_Bindings_GettingStarted.ipynb
```
-6. Change your Jupyter Notebook kernel to **PyTorch** or **PyTorch-GPU**.
-7. Run every cell in the Notebook in sequence.
+
+**5. Change kernel to ``pytorch`` or ``pytorch-gpu``.**
+
+**6. Run every cell in the Notebook in sequence.**

### Docker
AI Tools Docker images already have Get Started samples pre-installed. Refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the Docker container and the samples.

@@ -77,3 +146,5 @@
Code samples are licensed under the MIT license. See [License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details. 
Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
+
+*Other names and brands may be claimed as the property of others. [Trademarks](https://www.intel.com/content/www/us/en/legal/trademarks.html)
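
The sample documented in PATCH 2/3 runs a PyTorch* distributed workload through the oneCCL backend. As a rough orientation, a minimal script of that kind looks like the sketch below; it is not taken from the patched README or the notebook, and it assumes the module name `oneccl_bindings_for_pytorch` and the PMI_* launcher variables used by recent torch-ccl releases.

```python
import os

import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  registers the "ccl" backend with torch.distributed

# Rank and world size normally come from the MPI launcher (PMI_* variables);
# fall back to a single-process run when the script is started directly.
rank = int(os.environ.get("PMI_RANK", 0))
world_size = int(os.environ.get("PMI_SIZE", 1))
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

dist.init_process_group(backend="ccl", rank=rank, world_size=world_size)

x = torch.ones(2, 2)
dist.all_reduce(x)  # element-wise sum of x across all ranks through oneCCL
print(f"rank {rank}/{world_size}: {x}")

dist.destroy_process_group()
```

Launched with, for example, `mpirun -n 2 python sample.py` from the activated `pytorch` environment, each rank should print a tensor of twos; the sample's notebook follows the same pattern and also shows how to move the workload between CPU and GPU.
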
From 0d8ec9ae84d52c60912ad7b9346efee5347b2662 Mon Sep 17 00:00:00 2001
From: ishaghosh27 <94150575+ishaghosh27@users.noreply.github.com>
Date: Thu, 22 Aug 2024 11:27:02 -0400
Subject: [PATCH 3/3] Update README.md

---
 AI-and-Analytics/Getting-Started-Samples/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/AI-and-Analytics/Getting-Started-Samples/README.md b/AI-and-Analytics/Getting-Started-Samples/README.md
index b382719e24..643d4db0ca 100644
--- a/AI-and-Analytics/Getting-Started-Samples/README.md
+++ b/AI-and-Analytics/Getting-Started-Samples/README.md
@@ -23,7 +23,7 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
|Classical Machine Learning| Intel® Optimization for XGBoost* | [IntelPython_XGBoost_GettingStarted](IntelPython_XGBoost_GettingStarted) | Set up and trains an XGBoost* model on datasets for prediction.
|Classical Machine Learning| daal4py | [IntelPython_daal4py_GettingStarted](IntelPython_daal4py_GettingStarted) | Batch linear regression using the Python API package daal4py from oneAPI Data Analytics Library (oneDAL).
|Deep Learning

Inference Optimization| Intel® Optimization for TensorFlow* | [IntelTensorFlow_GettingStarted](IntelTensorFlow_GettingStarted) | A simple training example for TensorFlow.
-|Deep Learning
Inference Optimization|Intel® Extension for PyTorch* | [IntelPyTorch_GettingStarted](https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/inference/python/jupyter-notebooks) | A simple training example for Intel® Extension for PyTorch*.
+|Deep Learning
Inference Optimization|Intel® Extension for PyTorch* | [IntelPyTorch_GettingStarted](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/IPEX_Getting_Started.ipynb) | A simple training example for Intel® Extension for PyTorch*.
|Classical Machine Learning| Scikit-learn (OneDAL) | [Intel_Extension_For_SKLearn_GettingStarted](Intel_Extension_For_SKLearn_GettingStarted) | Speed up a scikit-learn application using Intel oneDAL.
|Deep Learning
Inference Optimization|Intel® Extension of TensorFlow | [Intel® Extension For TensorFlow GettingStarted](Intel_Extension_For_TensorFlow_GettingStarted) | Guides users how to run a TensorFlow inference workload on both GPU and CPU.
|Deep Learning Inference Optimization|oneCCL Bindings for PyTorch | [Intel oneCCL Bindings For PyTorch GettingStarted](Intel_oneCCL_Bindings_For_PyTorch_GettingStarted) | Guides users through the process of running a simple PyTorch* distributed workload on both GPU and CPU. |
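
The IPEX_Getting_Started notebook that the updated link in PATCH 3/3 points to covers Intel® Extension for PyTorch*. For orientation only, a CPU inference flow with the extension typically looks like the sketch below; the toy model and the bfloat16 setting are illustrative and are not taken from the notebook.

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Small stand-in model; the linked notebook uses its own example model.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
data = torch.rand(4, 128)

# ipex.optimize() returns a module with CPU optimizations applied
# (for example, prepacked weights); the bfloat16 dtype is optional.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast():
    output = model(data)

print(output.shape)  # torch.Size([4, 10])
```

Running this inside the `pytorch` Conda environment installed by the AI Tools bundle should be sufficient; no GPU is required for this CPU path.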