---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
pipeline_tag: text-generation
tags:
- gpt_neox
- red_pajama
---
**Original Model Link: https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1**
This will NOT work with llama.cpp as of 5/8/2023. It will ONLY work with the GGML fork in https://github.com/ggerganov/ggml/pull/134, and soon with https://github.com/keldenl/gpt-llama.cpp (which uses llama.cpp or ggml).
# RedPajama-INCITE-Chat-3B-v1
RedPajama-INCITE-Chat-3B-v1 was developed by Together and leaders from the open-source AI community, including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), the Stanford Hazy Research group, and LAION.
It is fine-tuned on OASST1 and Dolly2 data to enhance its chat ability.
## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B parameter pretrained language model.
## Prompt Template
To prompt the chat model, use the following format:
```
<human>: [Instruction]
<bot>:
```
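The template above can be applied programmatically. Below is a minimal sketch in Python; the `build_prompt` helper is illustrative only, not part of any official library.

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the <human>/<bot> chat template
    expected by RedPajama-INCITE-Chat-3B-v1. Generation should
    continue from the trailing "<bot>:" marker."""
    return f"<human>: {instruction}\n<bot>:"

prompt = build_prompt("Write a haiku about llamas.")
print(prompt)
# The resulting string is what you pass to the model as the prompt.
```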
## Which model to download?
* The q4_0 file provides lower quality but maximal compatibility; it will work with past and future versions of ggml/llama.cpp.
* The q4_2 file offers the best combination of performance and quality. This format is still subject to change, so there may be compatibility issues.
* The q5_0 file uses the new 5-bit quantization method released on 26th April. It is the 5-bit equivalent of q4_0.
* The q5_1 file uses the new 5-bit quantization method released on 26th April. It is the 5-bit equivalent of q4_1.