Merge pull request #953 from scritch1sm/contributingDocsUpdate
Updates to ContributingGuide for latest llama.cpp repo
martindevans authored Oct 16, 2024
2 parents b1be92b + 3e60c44 commit 624c870
Showing 1 changed file with 13 additions and 13 deletions: docs/ContributingGuide.md
@@ -28,33 +28,33 @@ git clone --recursive https://github.com/SciSharp/LLamaSharp.git

If you want to enable cuBLAS (CUDA) support in the compilation, please make sure that you've installed the CUDA toolkit first. If you are using an Intel CPU, please check the highest AVX ([Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions)) level that is supported by your device.
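If you are unsure of the AVX level, .NET can report it directly. Below is a minimal sketch (not part of the original guide) using the `System.Runtime.Intrinsics.X86` classes; note that `Avx512F` detection requires a reasonably recent .NET runtime:

```cs
using System;
using System.Runtime.Intrinsics.X86;

// Prints the highest AVX level supported by the current CPU, which is
// the level to enable in the cmake options listed below.
class AvxCheck
{
    static void Main()
    {
        if (Avx512F.IsSupported) Console.WriteLine("AVX512");
        else if (Avx2.IsSupported) Console.WriteLine("AVX2");
        else if (Avx.IsSupported) Console.WriteLine("AVX");
        else Console.WriteLine("No AVX support detected");
    }
}
```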

As shown in the [llama.cpp cmake file](https://github.com/ggerganov/llama.cpp/blob/master/CMakeLists.txt), which also includes the [llama.cpp/ggml cmake file](https://github.com/ggerganov/llama.cpp/blob/master/ggml/CMakeLists.txt), there are many options that can be enabled or disabled when building the library. The following ones are commonly used when building it as a native library for LLamaSharp.

```cpp
option(BUILD_SHARED_LIBS "build shared libraries") // Please always enable it
option(LLAMA_BUILD_TESTS "llama: build tests") // Please disable it.
option(LLAMA_BUILD_EXAMPLES "llama: build examples") // Please disable it.
option(LLAMA_BUILD_SERVER "llama: build server example") // Please disable it.

option(GGML_NATIVE "ggml: enable -march=native flag") // Could be disabled
option(GGML_AVX "ggml: enable AVX") // Enable it if the highest supported AVX level is AVX
option(GGML_AVX2 "ggml: enable AVX2") // Enable it if the highest supported AVX level is AVX2
option(GGML_AVX512 "ggml: enable AVX512") // Enable it if the highest supported AVX level is AVX512
option(GGML_CUDA "ggml: use CUDA") // Enable it if you have a CUDA device
option(GGML_BLAS "ggml: use BLAS") // Enable it if you want to use a BLAS library to accelerate the computation on CPU
option(GGML_VULKAN "ggml: use Vulkan") // Enable it if you have a device with Vulkan support
option(GGML_METAL "ggml: use Metal") // Enable it if you are using a Mac with a Metal device.
```
Most importantly, `-DBUILD_SHARED_LIBS=ON` must be added to the cmake command; the other options depend on your needs. For example, when building with CUDA, use the following commands:
```bash
mkdir build && cd build
cmake .. -DGGML_CUDA=ON -DBUILD_SHARED_LIBS=ON
cmake --build . --config Release
```

Now you can find `llama.dll`, `libllama.so` or `llama.dylib` in `build/src`.

To load the compiled native library, please add the following code at the very beginning of your program.
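The page truncates the example here, so the snippet below is a hedged sketch rather than the guide's exact code. The `NativeLibraryConfig` type is part of LLamaSharp, but its method surface has changed between versions, so treat the exact call as an assumption and adjust it to the version you use. It must run before any other LLamaSharp API call:

```cs
using LLama.Native;

// Assumption: older LLamaSharp versions expose NativeLibraryConfig.Instance,
// newer ones expose NativeLibraryConfig.All; adapt to your version.
// This must execute before any model is loaded, because the native
// library can only be selected once per process.
NativeLibraryConfig.Instance.WithLibrary("path/to/your/compiled/llama.dll");
```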

