This page is a complete list of oneAPI code samples, sorted in alphabetical order.
Code Sample Name | Supported Intel® Architecture(s) | Description |
---|---|---|
1D Heat Transfer | ['CPU', 'GPU'] | The 1D Heat Transfer sample simulates a 1D heat transfer problem using Data Parallel C++ (DPC++) |
AC Int | ['FPGA'] | An Intel® FPGA tutorial demonstrating how to use the Algorithmic C Integer (AC Int) data type |
Adaptive Noise Reduction | ['FPGA'] | A highly optimized adaptive noise reduction (ANR) algorithm on an FPGA. |
All Pairs Shortest Paths | ['CPU', 'GPU'] | All Pairs Shortest Paths finds the shortest paths between pairs of vertices in a graph using a parallel blocked algorithm that enables the application to efficiently offload compute intensive work to the GPU. |
Autorun kernels | ['FPGA'] | Intel® FPGA tutorial demonstrating autorun kernels |
AWS Pub Sub | ['CPU'] | This sample uses the Message Broker for AWS* IoT to send and receive messages through an MQTT connection |
Azure IoTHub Telemetry | ['CPU'] | Demonstrates how to send messages from a single device to Microsoft Azure IoT Hub via a chosen protocol |
Base: Vector Add | ['CPU', 'GPU', 'FPGA'] | This simple sample adds two large vectors in parallel. Provides a ‘Hello World!’-like sample to ensure your environment is set up correctly, using simple Data Parallel C++ (DPC++) |
Bitonic Sort | ['CPU', 'GPU'] | Bitonic Sort using Data Parallel C++ (DPC++) |
Black Scholes | ['CPU', 'GPU'] | Black Scholes formula calculation using Intel® oneMKL Vector Math and Random Number Generators |
Block Cholesky Decomposition | ['CPU', 'GPU'] | Block Cholesky Decomposition using Intel® oneMKL BLAS and LAPACK |
Block LU Decomposition | ['CPU', 'GPU'] | Block LU Decomposition using Intel® oneMKL BLAS and LAPACK |
Buffered Host-Device Streaming | ['FPGA'] | An FPGA tutorial demonstrating how to stream data between the host and device with multiple buffers |
Census | ['CPU'] | This sample illustrates the use of Intel® Distribution of Modin* and Intel Extension for Scikit-learn to build and run an end-to-end machine learning workload |
Pub: Data Parallel C++: Chapter 01 - Introduction | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_1_1_hello.cpp - Hello data-parallel programming - Fig_1_3_race.cpp - Adding a race condition to illustrate a point about being asynchronous - Fig_1_4_lambda.cpp - Lambda function in C++ code - Fig_1_6_functor.cpp - Function object instead of a lambda (more on this in Chapter 10) |
Pub: Data Parallel C++: Chapter 02 - Where Code Executes | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_2_2_simple_program.cpp - Simple SYCL program - Fig_2_7_implicit_default_selector.cpp - Implicit default device selector through trivial construction of a queue - Fig_2_9_host_selector.cpp - Selecting the host device using the host_selector class - Fig_2_10_cpu_selector.cpp - CPU device selector example - Fig_2_12_multiple_selectors.cpp - Example device identification output from various classes of device selectors and demonstration that device selectors can be used for cons - Fig_2_13_gpu_plus_fpga.cpp - Creating queues to both GPU and FPGA devices - Fig_2_15_custom_selector.cpp - Custom selector for Intel Arria FPGA device - Fig_2_18_simple_device_code.cpp - Submission of device code - Fig_2_22_simple_device_code_2.cpp - Submission of device code - Fig_2_23_fallback.cpp - Fallback queue example |
Pub: Data Parallel C++: Chapter 03 - Data Management | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_3_4_usm_explicit_data_movement.cpp - USM explicit data movement - Fig_3_5_usm_implicit_data_movement.cpp - USM implicit data movement - Fig_3_6_buffers_and_accessors.cpp - Buffers and accessors - Fig_3_10_in_order.cpp - In-order queue usage - Fig_3_11_depends_on.cpp - Using events and depends_on - Fig_3_13_read_after_write.cpp - Read-after-Write - Fig_3_15_write_after_read_and_write_after_write.cpp - Write-after-Read and Write-after-Write |
Pub: Data Parallel C++: Chapter 04 - Expressing Parallelism | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_4_5_vector_add.cpp - Expressing a vector addition kernel with parallel_for - Fig_4_6_matrix_add.cpp - Expressing a matrix addition kernel with parallel_for - Fig_4_7_basic_matrix_multiply.cpp - Expressing a naïve matrix multiplication kernel for square matrices, with parallel_for - Fig_4_13_nd_range_matrix_multiply.cpp - Expressing a naïve matrix multiplication kernel with ND-range parallel_for - Fig_4_20_hierarchical_matrix_multiply.cpp - Expressing a naïve matrix multiplication kernel with hierarchical parallelism - Fig_4_22_hierarchical_logical_matrix_multiply.cpp - Expressing a naïve matrix multiplication kernel with hierarchical parallelism and a logical range |
Pub: Data Parallel C++: Chapter 05 - Error Handling | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_5_1_async_task_graph.cpp - Separation of host program and task graph executions - Fig_5_2_sync_error.cpp - Creating a synchronous error - Fig_5_3_async_error.cpp - Creating an asynchronous error - Fig_5_4_unhandled_exception.cpp - Unhandled exception in C++ - Fig_5_5_terminate.cpp - std::terminate is called when a SYCL asynchronous exception isn’t handled - Fig_5_6_catch_snip.cpp - Pattern to catch sycl::exception specifically - Fig_5_7_catch.cpp - Pattern to catch exceptions from a block of code - Fig_5_8_lambda_handler.cpp - Example asynchronous handler implementation defined as a lambda - Fig_5_9_default_handler_proxy.cpp - Example of how the default asynchronous handler behaves |
Pub: Data Parallel C++: Chapter 06 - Unified Shared Memory | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_6_5_allocation_styles.cpp - Three styles for allocation - Fig_6_6_usm_explicit_data_movement.cpp - USM explicit data movement example - Fig_6_7_usm_implicit_data_movement.cpp - USM implicit data movement example - Fig_6_8_prefetch_memadvise.cpp - Fine-grained control via prefetch and mem_advise - Fig_6_9_queries.cpp - Queries on USM pointers and devices |
Pub: Data Parallel C++: Chapter 07 - Buffers | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_7_2_3_4_creating_buffers.cpp - Creating buffers, Part 1 - Figure 7-3. Creating buffers, Part 2 - Figure 7-4. Creating buffers, Part 3 - Fig_7_5_buffer_properties.cpp - Buffer properties - Fig_7_8_accessors_simple.cpp - Simple accessor creation - Fig_7_10_accessors.cpp - Accessor creation with specified usage |
Pub: Data Parallel C++: Chapter 08 - Scheduling Kernels and Data Movement | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_8_3_linear_dependence_in_order.cpp - Linear dependence chain with in-order queues - Fig_8_4_linear_dependence_events.cpp - Linear dependence chain with events - Fig_8_5_linear_dependence_buffers.cpp - Linear dependence chain with buffers and accessors - Fig_8_6_y_in_order.cpp - Y pattern with in-order queues - Fig_8_7_y_events.cpp - Y pattern with events - Fig_8_8_y_buffers.cpp - Y pattern with accessors |
Pub: Data Parallel C++: Chapter 09 - Communication and Synchronization | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_9_4_naive_matrix_multiplication.cpp - The naïve matrix multiplication kernel from Chapter 4 - Fig_9_7_local_accessors.cpp - Declaring and using local accessors - Fig_9_8_ndrange_tiled_matrix_multiplication.cpp - Expressing a tiled matrix multiplication kernel with an ND-range parallel_for and work-group local memory - Fig_9_9_local_hierarchical.cpp - Hierarchical kernel with a local memory variable - Fig_9_10_hierarchical_tiled_matrix_multiplication.cpp - A tiled matrix multiplication kernel implemented as a hierarchical kernel - Fig_9_11_sub_group_barrier.cpp - Querying and using the sub_group class - Fig_9_13_matrix_multiplication_broadcast.cpp - Matrix multiplication kernel includes a broadcast operation - Fig_9_14_ndrange_sub_group_matrix_multiplication.cpp - Tiled matrix multiplication kernel expressed with ND-range parallel_for and sub-group collective functions |
Pub: Data Parallel C++: Chapter 10 - Defining Kernels | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_10_2_kernel_lambda.cpp - Kernel defined using a lambda expression - Fig_10_3_optional_kernel_lambda_elements.cpp - More elements of a kernel lambda expression, including optional elements - Fig_10_4_named_kernel_lambda.cpp - Naming kernel lambda expressions - Fig_10_5_unnamed_kernel_lambda.cpp - Using unnamed kernel lambda expressions - Fig_10_6_kernel_functor.cpp - Kernel as a named function object - Fig_10_8_opencl_object_interop.cpp - Kernel created from an OpenCL kernel object |
Pub: Data Parallel C++: Chapter 11 - Vectors | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_11_6_load_store.cpp - Use of load and store member functions - Fig_11_7_swizzle_vec.cpp - Example of using the swizzled_vec class - Fig_11_8_vector_exec.cpp - Vector execution example |
Pub: Data Parallel C++: Chapter 12 - Device Information | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_12_1_assigned_device.cpp - Device we have been assigned by default - Fig_12_2_try_catch.cpp - Using try-catch to select a GPU device if possible, host device if not - Fig_12_3_device_selector.cpp - Custom device selector—our preferred solution - Fig_12_4_curious.cpp - Simple use of device query mechanisms: curious.cpp - Fig_12_6_very_curious.cpp - More detailed use of device query mechanisms: verycurious.cpp - Fig_12_7_invocation_parameters.cpp - Fetching parameters that can be used to shape a kernel |
Pub: Data Parallel C++: Chapter 13 - Practical Tips | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_13_4_stream.cpp - sycl::stream - Fig_13_6_common_buffer_pattern.cpp - Common pattern—buffer creation from a host allocation - Fig_13_7_common_pattern_bug.cpp - Common bug: Reading data directly from host allocation during buffer lifetime - Fig_13_8_host_accessor.cpp - Recommendation: Use a host accessor to read kernel result - Fig_13_9_host_accessor_for_init.cpp - Recommendation: Use host accessors for buffer initialization and reading of results - Fig_13_10_host_accessor_deadlock.cpp - Bug (hang!) from improper use of host_accessors |
Pub: Data Parallel C++: Chapter 14 - Common Parallel Patterns | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_14_8_one_reduction.cpp - Reduction expressed as an ND-range data-parallel kernel using the reduction library - Fig_14_11_user_defined_reduction.cpp - Using a user-defined reduction to find the location of the minimum value with an ND-range kernel - Fig_14_13_map.cpp - Implementing the map pattern in a data-parallel kernel - Fig_14_14_stencil.cpp - Implementing the stencil pattern in a data-parallel kernel - Fig_14_15_local_stencil.cpp - Implementing the stencil pattern in an ND-range kernel, using work-group local memory - Fig_14_18-20_inclusive_scan.cpp - Implementing a naïve reduction expressed as a data-parallel kernel - Fig_14_22_local_pack.cpp - Using a sub-group pack operation to build a list of elements needing additional postprocessing - Fig_14_24_local_unpack.cpp - Using a sub-group unpack operation to improve load balancing for kernels with divergent control flow |
Pub: Data Parallel C++: Chapter 15 - Programming for GPUs | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_15_3_single_task_matrix_multiplication.cpp - A single task matrix multiplication looks a lot like CPU host code - Fig_15_5_somewhat_parallel_matrix_multiplication.cpp - Somewhat-parallel matrix multiplication - Fig_15_7_more_parallel_matrix_multiplication.cpp - Even more parallel matrix multiplication - Fig_15_10_divergent_control_flow.cpp - Kernel with divergent control flow - Fig_15_12_small_work_group_matrix_multiplication.cpp - Inefficient single-item, somewhat-parallel matrix multiplication - Fig_15_18_columns_matrix_multiplication.cpp - Computing columns of the result matrix in parallel, not rows |
Pub: Data Parallel C++: Chapter 16 - Programming for CPUs | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_16_6_stream_triad.cpp - DPC++ STREAM Triad parallel_for kernel code - Fig_16_12_forward_dep.cpp - Using a sub-group to vectorize a loop with a forward dependence - Fig_16_18_vector_swizzle.cpp - Using vector types and swizzle operations in the single_task kernel |
Pub: Data Parallel C++: Chapter 17 - Programming for FPGA | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_17_9_fpga_selector.cpp - Choosing an FPGA device at runtime using the - Fig_17_11_fpga_emulator_selector.cpp - Using fpga_emulator_selector for rapid development and debugging - Fig_17_17_ndrange_func.cpp - Multiple work-item (16 × 16 × 16) invocation of a random number generator - Fig_17_18_loop_func.cpp - Loop-carried data dependence (state) - Fig_17_20_loop_carried_deps.cpp - Loop with two loop-carried dependences (i.e., i and a) - Fig_17_22_loop_carried_state.cpp - Random number generator that depends on previous value generated - Fig_17_31_inter_kernel_pipe.cpp - Pipe between two kernels: (1) ND-range and (2) single task with a loop |
Pub: Data Parallel C++: Chapter 18 - Libraries | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_18_1_builtin.cpp - Using std::log and sycl::log - Fig_18_7_swap.cpp - Using std::swap in device code - Fig_18_11_std_fill.cpp - Using std::fill - Fig_18_13_binary_search.cpp - Using binary_search - Fig_18_15_pstl_usm.cpp - Using Parallel STL with a USM allocator. Errata: code samples for 18-10, 18-12, 18-14, and 19-17 are not in the repository |
Pub: Data Parallel C++: Chapter 19 - Memory Model and Atomics | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Fig_19_3_data_race.cpp - Kernel containing a data race - Fig_19_6_avoid_data_race_with_barrier.cpp - Avoiding a data race using a barrier - Fig_19_7_avoid_data_race_with_atomics.cpp - Avoiding a data race using atomic operations - Fig_19_15_buffer_and_atomic_ref.cpp - Accessing a buffer via an explicitly created atomic_ref - Fig_19_16_atomic_accessor.cpp - Accessing a buffer via an atomic_ref implicitly created by an atomic accessor - Fig_19_18_histogram.cpp - Computing a histogram using atomic references in different memory spaces - Fig_19_19-20_device_latch.cpp - Using and building a simple device-wide latch on top of atomic references (combines Figures 19-19 and 19-20). Errata: code samples for 18-10, 18-12, 18-14, and 19-17 are not in the repository |
Pub: Data Parallel C++: Chapter 20 - Epilogue Future Direction | ['CPU', 'GPU'] | Collection of Code samples for the chapter - Epilogue source code examples: Future Direction of DPC++ - Fig_ep_1_mdspan.cpp - Attaching accessor-like indexing to a USM pointer using mdspan - Fig_ep_2-4_generic_space.cpp - Storing pointers to a specific address space in a class - Figure EP-3. Storing pointers to the generic address space in a class - Figure EP-4. Storing pointers with an optional address space in a class - Fig_ep_5_extension_mechanism.cpp - Checking for Intel sub-group extension compiler support with #ifdef - Fig_ep_6_device_constexpr.cpp - Specializing kernel code based on device aspects at kernel compile time - Fig_ep_7_hierarchical_reduction.cpp - Using hierarchical parallelism for a hierarchical reduction |
CMake FPGA | ['FPGA'] | Project Templates - Linux CMake project for FPGA |
CMake GPU | ['GPU'] | Project Templates - Linux CMake project for GPU |
Complex Mult | ['CPU', 'GPU'] | This sample computes Complex Number Multiplication |
Compute Units | ['FPGA'] | Intel® FPGA tutorial showcasing a design pattern to enable the creation of compute units |
Computed Tomography | ['CPU', 'GPU'] | Reconstruct an image from simulated CT data with Intel® oneMKL |
CRR Binomial Tree | ['FPGA'] | This sample shows a Binomial Tree Model for Option Pricing using an FPGA-optimized reference design of the Cox-Ross-Rubinstein (CRR) Binomial Tree Model with Greeks for American exercise options |
DB | ['FPGA'] | An FPGA reference design that demonstrates high-performance Database Query Acceleration on Intel® FPGAs |
Debugger: Array Transform | ['CPU', 'GPU'] | A small Data Parallel C++ (DPC++) example that is used in the "Get Started Guide" of the Application Debugger to exercise major debugger functionality |
Discrete Cosine Transform | ['CPU', 'GPU'] | An image processing algorithm as seen in the JPEG compression standard |
Double Buffering | ['FPGA'] | Intel® FPGA tutorial design to demonstrate overlapping kernel execution with buffer transfers and host-processing to improve system performance |
DPC Reduce | ['CPU', 'GPU'] | This sample models transform-reduce in different ways, showing the capabilities of Intel® oneAPI |
DPC++ Essentials Tutorials | ['CPU', 'GPU'] | DPC++ Essentials Tutorials using Jupyter Notebooks |
DPC++ OpenCL Interoperability Samples | ['CPU', 'GPU'] | Samples showing DPC++ and OpenCL Interoperability |
DPCPP Blur | ['CPU', 'GPU'] | Shows how to use Intel® Video Processing Library (VPL) and Data Parallel C++ (DPC++) to convert an I420 raw video file into BGRA and blur each frame |
DPCPP Interoperability | ['CPU', 'GPU'] | Intel® oneDNN SYCL extensions API programming for both Intel® CPU and GPU |
DSP Control | ['FPGA'] | An Intel® FPGA tutorial demonstrating the DSP control feature |
Dynamic Profiler | ['FPGA'] | An Intel® FPGA tutorial demonstrating how to use the Intel® FPGA Dynamic Profiler for Data Parallel C++ (DPC++) to dynamically collect performance data and reveal areas for optimization |
Explicit Data Movement | ['FPGA'] | An Intel® FPGA tutorial demonstrating an alternative coding style, explicit USM, in which all data movement is controlled explicitly by the author |
Fast Recompile | ['FPGA'] | An Intel® FPGA tutorial demonstrating how to separate the compilation of host and device code to save development time |
Folder Options DPCT | ['CPU'] | Multi-folder project that illustrates migration of a CUDA project that has files located in multiple folders in a directory tree. Uses the --in-root and --out-root options to tell the Intel® DPC++ Compatibility Tool where to locate source code to be migrated |
Fourier Correlation | ['CPU', 'GPU'] | Compute 1D Fourier correlation with Intel® oneMKL |
FPGA Compile | ['FPGA'] | Intel® FPGA tutorial introducing how to compile Data Parallel C++ (DPC++) for Intel® FPGA |
FPGA Reg | ['FPGA'] | An Intel® FPGA advanced tutorial demonstrating how to apply the Data Parallel C++ (DPC++) extension ext::intel::fpga_reg |
Gamma Correction | ['CPU', 'GPU'] | Gamma Correction - a nonlinear operation used to encode and decode the luminance of each image pixel |
Getting Started | ['CPU', 'GPU'] | Basic Intel® oneDNN programming model for both Intel® CPU and GPU |
GZIP | ['FPGA'] | Reference design demonstrating high-performance GZIP compression on Intel® FPGA |
Hello Decode | ['CPU', 'GPU'] | Shows how to use the Intel® oneAPI Video Processing Library (VPL) to perform a simple video decode |
Hello Encode | ['CPU', 'GPU'] | Shows how to use the Intel® oneAPI Video Processing Library (VPL) to perform a simple video encode |
Hello VPP | ['CPU', 'GPU'] | Shows how to use the Intel® oneAPI Video Processing Library (VPL) to perform simple video processing |
Hello World GPU | ['GPU'] | Template 'Hello World' on GPU |
Hidden Markov Models | ['CPU', 'GPU'] | Hidden Markov Models using Data Parallel C++ |
Histogram | ['CPU', 'GPU'] | This sample demonstrates a histogram using dpstd APIs |
Host-Device Streaming using USM | ['FPGA'] | An FPGA tutorial demonstrating how to stream data between the host and device with low latency and high throughput |
IBM Device | ['CPU'] | This project shows how to develop device code using the Watson IoT Platform iot-c device client library, and how to connect to and interact with the Watson IoT Platform Service |
Intel Embree Getting Started | ['CPU'] | This introductory hello rendering toolkit sample illustrates how to cast a ray into a scene with Intel Embree |
Intel Implicit SPMD Program Compiler (Intel ISPC) Getting Started: 05_ispc_gsg | ['CPU'] | This introductory rendering toolkit sample demonstrates how to compile basic programs with Intel ISPC and the system C++ compiler. Use this sample to further explore developing accelerated applications with Intel Embree and Intel Open VKL. |
Intel Open Image Denoise Getting Started | ['CPU'] | This introductory 'hello rendering toolkit' sample program demonstrates how to denoise a raytraced image with Intel Open Image Denoise |
Intel Open VKL Getting Started | ['CPU'] | This introductory hello rendering toolkit sample program demonstrates how to sample into volumes with Intel Open VKL |
Intel OSPRay Getting Started | ['CPU'] | This introductory 'hello rendering toolkit' sample program demonstrates how to render triangle data with the pathtracer from Intel OSPRay |
Intel(R) Extension for Scikit-learn: SVC for Adult dataset | ['CPU'] | Use Intel(R) Extension for Scikit-learn to accelerate training and prediction with the SVC algorithm on the Adult dataset. Compare the performance of the SVC algorithm optimized through Intel(R) Extension for Scikit-learn against the original Scikit-learn. |
Intel® Modin Getting Started | ['CPU'] | This sample illustrates how to use Modin accelerated Pandas functions and notes the performance gain when compared to standard Pandas functions |
Intel® Neural Compressor Tensorflow Getting Started | ['CPU'] | This sample illustrates how to run Intel® Neural Compressor to quantize an FP32 model trained with Keras on TensorFlow to an INT8 model to speed up inference. |
Intel® Python Daal4py Distributed K-Means | ['CPU'] | This sample code illustrates how to train and predict with a distributed K-Means model with the Intel® Distribution of Python using the Python API package Daal4py powered by Intel® oneDAL |
Intel® Python Daal4py Distributed Linear Regression | ['CPU'] | This sample code illustrates how to train and predict with a Distributed Linear Regression model with the Intel® Distribution of Python using the Python API package Daal4py powered by Intel® oneDAL |
Intel® Python Daal4py Getting Started | ['CPU'] | This sample illustrates how to do Batch Linear Regression using the Python API package Daal4py powered by Intel® oneDAL |
Intel® Python Scikit-learn Extension Getting Started | ['CPU'] | This sample illustrates how to do image classification using an SVM classifier from the Python API package sklearnex with the use of Intel® oneAPI Data Analytics Library (oneDAL). |
Intel® Python XGBoost Daal4py Prediction | ['CPU'] | This sample code illustrates how to analyze the performance benefit of minimal code changes when porting a pre-trained XGBoost model to daal4py for much faster prediction |
Intel® Python XGBoost Getting Started | ['CPU'] | The sample illustrates how to set up and train an XGBoost model on datasets for prediction |
Intel® Python XGBoost Performance | ['CPU'] | This sample code illustrates how to analyze the performance benefit of the training optimizations upstreamed by Intel to the latest XGBoost, compared to unoptimized XGBoost 0.81 |
Intel® PyTorch Getting Started | ['CPU'] | This sample illustrates how to train a PyTorch model and run inference with Intel® oneMKL and Intel® oneDNN |
Intel® Tensorflow Getting Started | ['CPU'] | This sample illustrates how to train a TensorFlow model and run inference with oneMKL and oneDNN. |
Intel® TensorFlow Horovod Multinode Training | ['CPU'] | This sample illustrates how to train a TensorFlow model on multiple nodes in a cluster using Horovod |
Intel® TensorFlow Model Zoo Inference With FP32 Int8 | ['CPU'] | This code example illustrates how to run FP32 and Int8 inference on Resnet50 with TensorFlow using Intel® Model Zoo |
Intrinsics | ['CPU'] | Demonstrates the Intrinsic functions of the Intel® oneAPI C++ Compiler Classic |
IO streaming with DPC++ IO pipes | ['FPGA'] | An FPGA tutorial describing how to stream data to and from DPC++ IO pipes. |
ISO2DFD DPCPP | ['CPU', 'GPU'] | The ISO2DFD sample illustrates Data Parallel C++ (DPC++) Basics using 2D Finite Difference Wave Propagation |
ISO3DFD DPCPP | ['CPU'] | The ISO3DFD Sample illustrates Data Parallel C++ (DPC++) using Finite Difference Stencil Kernel for solving 3D Acoustic Isotropic Wave Equation |
ISO3DFD OMP Offload | ['GPU'] | A Finite Difference Stencil Kernel for solving 3D Acoustic Isotropic Wave Equation using OpenMP* (OMP) |
Jacobi | ['CPU', 'GPU'] | A small Data Parallel C++ (DPC++) example that solves a hardcoded linear system with Jacobi iteration. The sample includes two versions of the same program: with and without bugs. |
Jacobi Iterative | ['CPU', 'GPU'] | Calculates the number of iterations needed to solve a system of linear equations using the Jacobi iterative method |
Kernel Args Restrict | ['FPGA'] | Explains the kernel_args_restrict attribute and its effect on the performance of Intel® FPGA kernels |
Lidar Object Detection using PointPillars | ['CPU', 'GPU'] | Object detection using a LIDAR point cloud as input. This implementation is based on the paper 'PointPillars: Fast Encoders for Object Detection from Point Clouds' |
Loop Coalesce | ['FPGA'] | An Intel® FPGA tutorial demonstrating the loop_coalesce attribute |
Loop Fusion | ['FPGA'] | An Intel® FPGA tutorial demonstrating the usage of the loop_fusion attribute |
Loop Initiation Interval | ['FPGA'] | An Intel® FPGA tutorial demonstrating the usage of the initiation_interval attribute |
Loop IVDEP | ['FPGA'] | An Intel® FPGA tutorial demonstrating the usage of the loop_ivdep attribute |
Loop Unroll | ['CPU', 'GPU'] | Demonstrates the use of loop unrolling as a simple optimization technique to speed up compute and increase memory access throughput. |
Loop Unroll | ['FPGA'] | An Intel® FPGA tutorial design demonstrating the loop_unroll attribute |
LSU Control | ['FPGA'] | An Intel® FPGA tutorial demonstrating how to configure the load-store units (LSU) in a Data Parallel C++ (DPC++) program using the LSU controls extension |
Makefile FPGA | ['FPGA'] | Project Templates - Linux Makefile project for FPGA |
Makefile GPU | ['GPU'] | Project Templates - Linux Makefile project for GPU |
Mandelbrot | ['CPU', 'GPU'] | The Mandelbrot Set - a fractal example in mathematics |
Mandelbrot OMP | ['CPU', 'GPU'] | Calculates the Mandelbrot Set and outputs a BMP image representation using OpenMP* (OMP) |
Matrix Multiply | ['CPU', 'GPU'] | This sample multiplies two large matrices in parallel using Data Parallel C++ (DPC++) and OpenMP* (OMP) |
Matrix Multiply Advisor | ['CPU', 'GPU'] | Simple program that shows how to improve the Intel® oneAPI Data Parallel C++ (DPC++) Matrix Multiplication program using Intel® VTune™ Profiler and Intel® Advisor |
Matrix Multiply MKL | ['CPU', 'GPU'] | Accelerate Matrix Multiplication with Intel® oneMKL |
Matrix Multiply VTune™ Profiler | ['CPU', 'GPU'] | Simple program that shows how to improve the Data Parallel C++ (DPC++) Matrix Multiplication program using Intel® VTune™ Profiler and Intel® Advisor |
Max Interleaving | ['FPGA'] | An Intel® FPGA tutorial demonstrating the usage of the loop max_interleaving attribute |
Mem Channels | ['FPGA'] | An Intel® FPGA tutorial demonstrating how to use the mem_channel buffer property and the -Xsno-interleaving flag |
Memory Attributes | ['FPGA'] | An Intel® FPGA tutorial demonstrating the use of on-chip memory attributes to control memory structures in a Data Parallel C++ (DPC++) program |
Merge Sort | ['FPGA'] | A reference design demonstrating merge sort on an Intel® FPGA |
Merge SPMV | ['CPU', 'GPU'] | The Sparse Matrix Vector sample provides a parallel implementation of a merge-based sparse matrix and vector multiplication algorithm using Data Parallel C++ (DPC++) |
MergeSort OMP | ['CPU'] | Classic OpenMP* (OMP) Mergesort algorithm |
Monte Carlo European Opt | ['CPU', 'GPU'] | Monte Carlo Simulation of European Options pricing with Intel® oneMKL random number generators |
Monte Carlo Pi | ['CPU', 'GPU'] | Monte Carlo procedure for estimating Pi |
Monte Carlo Pi | ['CPU', 'GPU'] | Estimating Pi with Intel® oneMKL random number generators |
MVDR Beamforming | ['FPGA'] | A reference design demonstrating a high-performance streaming MVDR beamformer |
N-Body | ['CPU', 'GPU'] | An N-Body simulation is a simulation of a dynamical system of particles, usually under the influence of physical forces, such as gravity. This N-Body sample code is implemented using Data Parallel C++ (DPC++) for CPU and GPU |
N-Way Buffering | ['FPGA'] | Intel® FPGA tutorial design to demonstrate overlapping kernel execution with buffer transfers and multi-threaded host-processing to improve system performance |
Numba DPPY Essentials training | ['CPU', 'GPU'] | Numba DPPY Essentials Tutorials using Jupyter Notebooks |
On-Chip Memory Cache | ['FPGA'] | Intel® FPGA tutorial demonstrating the caching of on-chip memory to reduce loop initiation interval |
oneCCL Getting Started | ['CPU', 'GPU'] | Basic Intel® oneCCL programming model for both Intel® CPU and GPU |
OpenMP Offload | ['CPU', 'GPU'] | Demonstration of the new OpenMP offload features supported by the Intel® oneAPI DPC++/C++ Compiler
OpenMP Offload C++ Tutorials | ['CPU', 'GPU'] | C++ OpenMP Offload Basics using Jupyter Notebooks |
OpenMP Offload Fortran Tutorials | ['CPU', 'GPU'] | Fortran OpenMP Offload Basics using Jupyter Notebooks |
OpenMP* Primes | ['CPU'] | Fortran Tutorial - Using OpenMP* (OMP) |
OpenMP* Reduction | ['CPU', 'GPU'] | This sample models OpenMP* (OMP) reduction in different ways, showing the capabilities of Intel® oneAPI |
Optimize Inner Loop | ['FPGA'] | An Intel® FPGA tutorial design demonstrating how to optimize the throughput of inner loops with low trip counts |
Optimize Integral | ['CPU'] | Fortran Sample - Simple Compiler Optimizations |
Optimize TensorFlow pre-trained model for inference | ['CPU'] | This tutorial guides you through optimizing a pre-trained model for better inference performance and analyzing the model pb files before and after the inference optimizations. |
Particle Diffusion | ['CPU', 'GPU'] | The Particle Diffusion code sample illustrates Data Parallel C++ (DPC++) using a simple (non-optimized) implementation of a Monte Carlo Simulation of the Diffusion of Water Molecules in Tissue |
Pipe Array | ['FPGA'] | An Intel® FPGA tutorial showcasing a design pattern that enables the creation of arrays of pipes |
Pipes | ['FPGA'] | How to use Pipes to transfer data between kernels on an Intel® FPGA |
Prefix Sum | ['CPU', 'GPU'] | Compute Prefix Sum using Data Parallel C++ (DPC++) |
Printf | ['FPGA'] | This FPGA tutorial explains how to use printf() to print in a DPC++ FPGA program |
Private Copies | ['FPGA'] | An Intel® FPGA tutorial demonstrating how to use the private_copies attribute to trade off the resource use and the throughput of a DPC++ FPGA program |
QRD | ['FPGA'] | Reference design demonstrating high-performance QR Decomposition (QRD) of real and complex matrices on an Intel® FPGA |
QRI | ['FPGA'] | Reference design demonstrating high-performance QR-based matrix inversion (QRI) of real and complex matrices on an Intel® FPGA |
Random Sampling Without Replacement | ['CPU', 'GPU'] | Multiple simple random sampling without replacement with Intel® oneMKL random number generators |
Read-Only Cache | ['FPGA'] | An Intel® FPGA tutorial demonstrating how to use the read-only cache feature to boost the throughput of a DPC++ FPGA program |
Remove Loop Carried Dependency | ['FPGA'] | An Intel® FPGA tutorial design demonstrating performance optimization by removing loop carried dependencies |
Rodinia NW DPCT | ['CPU'] | Migrate a CUDA project using the Intel® DPCT intercept-build feature to create a compilation database. The compilation database provides compilation options, settings, macro definitions and include paths that the Intel® DPC++ Compatibility Tool (DPCT) will use during migration of the project |
Scheduler Target FMAX | ['FPGA'] | Explains the scheduler_target_fmax_mhz attribute and its effect on the performance of Intel® FPGA kernels |
Sepia Filter | ['CPU', 'GPU'] | A program that converts an image to Sepia Tone |
Shannonization | ['FPGA'] | An Intel® FPGA tutorial design that demonstrates an optimization that removes computation from the critical path and improves Fmax/II |
Simple Add | ['CPU', 'GPU', 'FPGA'] | This simple sample adds two large vectors in parallel and provides a ‘Hello World!’-like sample to ensure your environment is set up correctly, using Data Parallel C++ (DPC++) |
Simple Model | ['CPU', 'GPU'] | Run a simple CNN on both Intel® CPU and GPU with sample C++ codes |
Sparse Conjugate Gradient | ['CPU', 'GPU'] | Solve Sparse linear systems with the Conjugate Gradient method using Intel® oneMKL sparse BLAS |
Speculated Iterations | ['FPGA'] | An Intel® FPGA tutorial demonstrating the speculated_iterations attribute |
Stable Sort By Key | ['CPU', 'GPU'] | This sample models stable sort by key: during the sorting of two sequences (keys and values), only the keys are compared, but both keys and values are swapped |
Stall Enable | ['FPGA'] | An Intel® FPGA tutorial demonstrating the use_stall_enable_clusters attribute |
STREAM | ['CPU', 'GPU'] | STREAM is a program that measures memory transfer rates in MB/s for simple computational kernels coded in C |
Student's T-test | ['CPU', 'GPU'] | Performing Student's T-test with Intel® oneMKL Vector Statistics functionality |
System Profiling | ['FPGA'] | An Intel® FPGA tutorial demonstrating how to use the OpenCL* Intercept Layer to improve a design with the double buffering optimization |
TBB ASYNC SYCL | ['CPU', 'GPU'] | This sample illustrates how a computational kernel can be split for execution between CPU and GPU using an Intel® oneTBB Flow Graph asynchronous node and functional node. The Flow Graph asynchronous node uses SYCL to implement the calculations on the GPU while the functional node does the CPU part of the calculations. This TBB ASYNC SYCL sample code is implemented using C++ and SYCL for Intel® CPU and GPU |
TBB Resumable Tasks SYCL | ['CPU', 'GPU'] | This sample illustrates how a computational kernel can be split for execution between CPU and GPU using Intel® oneTBB resumable tasks and parallel_for. The resumable task uses SYCL to implement the calculations on the GPU while the parallel_for algorithm does the CPU part of the calculations. This TBB Resumable Tasks SYCL sample code is implemented using C++ and SYCL for Intel® CPU and GPU |
TBB Task SYCL | ['CPU', 'GPU'] | This sample illustrates how two Intel® oneTBB tasks can execute similar computational kernels, with one task executing SYCL code and the other executing Intel® oneTBB code. This TBB Task SYCL sample code is implemented using C++ and SYCL for Intel® CPU and GPU |
Triangular Loop | ['FPGA'] | An Intel® FPGA tutorial demonstrating an advanced optimization technique for triangular loops |
Tutorials | ['CPU', 'GPU'] | oneAPI Collective Communications Library (oneCCL) Tutorials |
Tutorials | ['CPU', 'GPU'] | Intel® oneDNN Tutorials |
Vector Add DPCT | ['CPU'] | Simple project to illustrate the basic migration of CUDA code. Use this sample to ensure your environment is configured correctly and to understand the basics of migrating existing CUDA projects to Data Parallel C++ (DPC++) |
Vectorize VecMatMult | ['CPU'] | Fortran Tutorial - Using Auto Vectorization |
Zero Copy Data Transfer | ['FPGA'] | An Intel® FPGA tutorial demonstrating zero-copy host memory using the SYCL restricted Unified Shared Memory (USM) model |
Total Samples: 167 |
Report Generated on: March 08, 2022