jianguozhang committed · Commit cfbc085 · Parent(s): e1256fa

Update README.md

README.md CHANGED
@@ -32,14 +32,15 @@ We provide a series of xLAMs in different sizes to cater to various applications
 
 | Model | # Total Params | Context Length | Download Model | Download GGUF files |
 |------------------------|----------------|----------------|----------------|----------|
-| xLAM-1b-fc-r | 1.35B |
-| xLAM-7b-fc-r | 6.91B |
+| xLAM-1b-fc-r | 1.35B | 16384 | [🤗 Link](https://huggingface.co/Salesforce/xLAM-1b-fc-r) | [🤗 Link](https://huggingface.co/Salesforce/xLAM-1b-fc-r-gguf) |
+| xLAM-7b-fc-r | 6.91B | 4096 | [🤗 Link](https://huggingface.co/Salesforce/xLAM-7b-fc-r) | [🤗 Link](https://huggingface.co/Salesforce/xLAM-7b-fc-r-gguf) |
 
 The `fc` series of models are optimized for function-calling capability, providing fast, accurate, and structured responses based on input queries and available APIs. These models are fine-tuned based on the [deepseek-coder](https://huggingface.co/collections/deepseek-ai/deepseek-coder-65f295d7d8a0a29fe39b4ec4) models and are designed to be small enough for deployment on personal devices like phones or computers.
 
 We also provide their quantized [GGUF](https://huggingface.co/docs/hub/en/gguf) files for efficient deployment and execution. GGUF is a file format designed to efficiently store and load large language models, making GGUF ideal for running AI models on local devices with limited resources, enabling offline functionality and enhanced privacy.
 
-For more details, check our [paper](https://arxiv.org/abs/2406.18518).
+For more details, check our [GitHub](https://github.com/SalesforceAIResearch/xLAM) and [paper](https://arxiv.org/abs/2406.18518).
+
 
 ## Repository Overview
 