---
license: apache-2.0
language:
  - en
datasets:
  - togethercomputer/RedPajama-Data-1T
  - OpenAssistant/oasst1
  - databricks/databricks-dolly-15k
pipeline_tag: text-generation
tags:
  - gpt_neox
  - red_pajama
---

Original Model Link: https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1

This will NOT work with llama.cpp as of 5/8/2023. It will ONLY work with the GGML fork in https://github.com/ggerganov/ggml/pull/134, and soon with https://github.com/keldenl/gpt-llama.cpp (which uses llama.cpp or ggml).

# RedPajama-INCITE-Chat-3B-v1

RedPajama-INCITE-Chat-3B-v1 was developed by Together and leaders from the open-source AI community, including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), the Stanford Hazy Research group, and LAION.

It is fine-tuned on OASST1 and Dolly2 to enhance chatting ability.

## Model Details

- **Developed by:** Together Computer.
- **Model type:** Language Model
- **Language(s):** English
- **License:** Apache 2.0
- **Model Description:** A 2.8B parameter pretrained language model.
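As a quick way to see the GPT-NeoX architecture behind the 2.8B parameter figure, here is a small sketch that inspects the original checkpoint's configuration with `transformers`; it loads the config only, not the weights.

```python
# Inspect the original checkpoint's configuration (GPT-NeoX architecture).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1")
print(config.model_type)          # expected: "gpt_neox"
print(config.num_hidden_layers)   # number of transformer layers
print(config.hidden_size)         # model width
print(config.vocab_size)          # tokenizer vocabulary size
```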

## Prompt Template

To prompt the chat model, use the following format:

    <human>: [Instruction]
    <bot>:
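For a quick sanity check of the template, below is a minimal Python sketch (assuming a CUDA GPU and the `transformers` library) that formats an instruction this way and runs it against the original, non-GGML checkpoint linked above; the sampling settings are illustrative choices, not values taken from this card.

```python
# Minimal sketch using the original HF checkpoint (not the GGML file in this repo).
# Generation settings below are illustrative assumptions, not values from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/RedPajama-INCITE-Chat-3B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Build a prompt in the <human>/<bot> format described above.
instruction = "Explain what the RedPajama dataset is in one sentence."
prompt = f"<human>: {instruction}\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Strip the prompt tokens and print only the model's reply.
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply.strip())
```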

## Which model to download?

- The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
- The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues.
- The q5_0 file uses the new 5-bit method released on 26th April. It is the 5-bit equivalent of q4_0.
- The q5_1 file uses the new 5-bit method released on 26th April. It is the 5-bit equivalent of q4_1.
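To fetch one of these quantized files programmatically, a minimal sketch with `huggingface_hub` follows; the repository ID and filename are placeholders (assumptions, not confirmed names) and should be replaced with this repo's actual ID and the file you chose from the list above.

```python
# Hypothetical repo_id and filename shown for illustration only; substitute the
# actual repository name and the quantized file you picked from the list above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="keldenl/RedPajama-INCITE-Chat-3B-v1-GGML",  # placeholder, adjust to this repo
    filename="ggml-model-q5_1.bin",                      # placeholder: q4_0 / q4_2 / q5_0 / q5_1
)
print(model_path)  # local path to the downloaded GGML file
```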