[documentation] Update tips.md
amontoison committed Aug 17, 2023
1 parent cc1d7cc commit f6077a1
Showing 1 changed file with 3 additions and 2 deletions.
5 changes: 3 additions & 2 deletions docs/src/tips.md
@@ -26,16 +26,17 @@ BLAS.get_num_threads()
The recommended number of BLAS threads is the number of physical cores, not the number of logical cores; in general this is `N = NMAX / 2` if your CPU supports simultaneous multithreading (SMT).
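
For example, a minimal sketch of setting the thread count to the physical core count (assuming SMT is enabled, so that `Sys.CPU_THREADS` reports twice the number of physical cores):

```julia
using LinearAlgebra

# Sys.CPU_THREADS counts logical cores; halve it under the assumption
# of two hardware threads per physical core (SMT enabled).
N = Sys.CPU_THREADS ÷ 2
BLAS.set_num_threads(N)
BLAS.get_num_threads()  # check that the setting took effect
```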

By default, Julia ships with OpenBLAS, but it's also possible to use Intel MKL BLAS and LAPACK with [MKL.jl](https://github.com/JuliaLinearAlgebra/MKL.jl).
+If your operating system is macOS 13.4 or later, it's recommended to use Accelerate BLAS and LAPACK with [AppleAccelerate.jl](https://github.com/JuliaLinearAlgebra/AppleAccelerate.jl).

```julia
using LinearAlgebra
-BLAS.vendor() # get_config() for Julia ≥ 1.7
+BLAS.get_config() # BLAS.vendor() for Julia 1.6
```
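
For example, a minimal sketch of switching the backend (assuming MKL.jl, or AppleAccelerate.jl on macOS 13.4 or later, has been added to the active environment); loading the package is expected to replace OpenBLAS for the current session:

```julia
using MKL              # or `using AppleAccelerate` on macOS 13.4 or later
using LinearAlgebra

BLAS.get_config()      # the reported libraries should now point to the new backend
```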

## Multi-threaded sparse matrix-vector products

For sparse matrices, the `mul!` implementation of the [SparseArrays](https://docs.julialang.org/en/v1/stdlib/SparseArrays/) standard library is not parallelized.
-A siginifiant speed-up can be observed with the multhreaded `mul!` of [MKLSparse.jl](https://github.com/JuliaSparse/MKLSparse.jl).
+A significant speed-up can be observed with the multithreaded `mul!` of [MKLSparse.jl](https://github.com/JuliaSparse/MKLSparse.jl) or [ThreadedSparseCSR.jl](https://github.com/BacAmorim/ThreadedSparseCSR.jl).
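
For example, a minimal sketch with MKLSparse.jl (assuming the package is installed; simply loading it is expected to redirect sparse `mul!` calls to MKL's multithreaded kernels):

```julia
using LinearAlgebra, SparseArrays
using MKLSparse  # loading the package dispatches sparse products to MKL's multithreaded routines

A = sprand(100_000, 100_000, 1e-4)  # random sparse matrix
x = rand(100_000)
y = similar(x)
mul!(y, A, x)  # sparse matrix-vector product, now handled by MKL
```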

It's also possible to implement a generic multithreaded Julia version.
For instance, the following function can be used for symmetric matrices.
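
As an illustrative sketch only (the name `threaded_symmetric_mul!` and the implementation details below are assumptions, not necessarily the exact function from the documentation), each thread can accumulate the dot product of one column of `A` with `x`; since the columns of a symmetric matrix coincide with its rows, this produces `A * x`:

```julia
using LinearAlgebra, SparseArrays
using Base.Threads

# Sketch: parallel y = A' * x over the columns of a CSC matrix.
# For a symmetric A, the columns coincide with the rows, so this is also y = A * x.
function threaded_symmetric_mul!(y::AbstractVector{T}, A::SparseMatrixCSC{T}, x::AbstractVector{T}) where T
  size(A, 2) == length(x) || throw(DimensionMismatch("x has the wrong length"))
  size(A, 1) == length(y) || throw(DimensionMismatch("y has the wrong length"))
  @threads for j = 1:size(A, 2)
    acc = zero(T)
    @inbounds for k = A.colptr[j]:(A.colptr[j+1] - 1)
      acc += A.nzval[k] * x[A.rowval[k]]
    end
    @inbounds y[j] = acc
  end
  return y
end
```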