# CI

In addition to [GitHub Actions](https://github.com/ggerganov/llama.cpp/actions), `llama.cpp` uses a custom CI framework:

https://github.com/ggml-org/ci

It monitors the `master` branch for new commits and runs the
[ci/run.sh](https://github.com/ggerganov/llama.cpp/blob/master/ci/run.sh) script on dedicated cloud instances. This allows us
to execute heavier workloads than is practical with GitHub Actions alone. Over time, the cloud instances will be scaled
to cover various hardware architectures, including GPU and Apple Silicon instances.

Collaborators can optionally trigger the CI run by adding the `ggml-ci` keyword to their commit message.
Only the branches of this repo are monitored for this keyword.
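
For example (the commit message below is purely illustrative; the only significant part is that it contains the `ggml-ci` keyword):

```bash
# including "ggml-ci" in the commit message triggers a run of the custom CI
git commit -m "llama : fix tokenizer bug (ggml-ci)"
```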

It is good practice to execute the full CI locally on your machine before publishing changes:

```bash
mkdir tmp

# CPU-only build
bash ./ci/run.sh ./tmp/results ./tmp/mnt

# with CUDA support
GG_BUILD_CUDA=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt

# with SYCL support
source /opt/intel/oneapi/setvars.sh
GG_BUILD_SYCL=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt
```
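
Here `./tmp/results` is the directory where the script collects its build and test output, and `./tmp/mnt` is used to cache downloaded models and data between runs; these paths are just the conventional examples, and any writable directories should work.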