Add llama.cpp inference example

#3
Files changed (1)
  1. README.md +28 -0
README.md CHANGED
@@ -70,6 +70,34 @@ out = model.generate(input_ids, max_new_tokens=10)
  print(tokenizer.batch_decode(out))
  ```

+ ### On-device Inference
+
+ Since Mambaoutai has only 1.6B parameters, it can run on a CPU at reasonable speed.
+
+ Here is an example of how to run it with llama.cpp:
+
+ ```bash
+ # Clone the llama.cpp repository and compile it from source
+ git clone https://github.com/ggerganov/llama.cpp
+ cd llama.cpp
+ make
+
+ # Create a conda environment and install the conversion dependencies
+ conda create -n mamba-cpp python=3.10
+ conda activate mamba-cpp
+ pip install -r requirements/requirements-convert-hf-to-gguf.txt
+
+ # Download the weights, tokenizer, config, tokenizer_config and special_tokens_map from this repo and
+ # put them in a directory 'Mambaoutai/'
+ mkdir Mambaoutai
+
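+ # One possible way to fetch the files, assuming the model is hosted on the Hugging Face Hub
+ # and huggingface_hub is installed (the repo id below is a placeholder, not given in this README):
+ # huggingface-cli download <org>/Mambaoutai --local-dir Mambaoutai
+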
+ # Convert the weights to GGUF format
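+ # (with default settings this should write Mambaoutai/ggml-model-f16.gguf, the path used below)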
+ python convert-hf-to-gguf.py Mambaoutai
+
+ # Run inference with a prompt
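+ # (-n 400 limits generation to 400 tokens, -e processes escapes such as \n in the prompt,
+ # and -ngl 1 offloads one layer to the GPU when one is available)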
+ ./main -m Mambaoutai/ggml-model-f16.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e -ngl 1
+ ```
+
  ### Model hyperparameters

  More details about the model hyperparameters are given in the table below: