Commit 073d92e by xzyao (2 parents: d2471ee, 89560d3)

Merge branch 'main' of hf.co:togethercomputer/RedPajama-Chat-INCITE-6.9B-v1 into main

Files changed (1):
  1. README.md +9 -4
README.md CHANGED
@@ -2,12 +2,17 @@
 license: apache-2.0
 language:
 - en
+datasets:
+- togethercomputer/RedPajama-Data-1T
+- OpenAssistant/oasst1
+- databricks/databricks-dolly-15k
 ---
 
 # RedPajama-INCITE-Chat-7B-v0.1
 
-RedPajama-INCITE-Chat-7B-v0.1 is a large transformer-based language model developed by Together Computer and trained on the RedPajama-Data-1T dataset.
-It is further fine-tuned on OASST1 and Dolly2 to enhance chatting ability.
+RedPajama-INCITE-Chat-7B-v0.1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
+
+It is fine-tuned on OASST1 and Dolly2 to enhance chatting ability.
 
 ## Model Details
 - **Developed by**: Together Computer.
@@ -28,7 +33,7 @@ To prompt the chat model, use the following format:
 
 ## GPU Inference
 
-This requires a GPU with 8GB memory.
+This requires a GPU with 16GB memory.
 
 ```python
 import torch
@@ -61,7 +66,7 @@ Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English computer scienti
 
 ## GPU Inference in Int8
 
-This requires a GPU with 6GB memory.
+This requires a GPU with 12GB memory.
 
 To run inference with int8, please ensure you have installed accelerate and bitsandbytes. You can install them with the following command:
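As context for the int8 hunk: the two dependencies it references, accelerate and bitsandbytes, are both installable from PyPI. The README's exact install command is truncated in this diff, so the following is a sketch under that assumption, not the file's verbatim command:

```shell
# Install the two packages the int8 inference path depends on.
# Package names as published on PyPI; the model card's pinned
# versions (if any) are not visible in this diff.
pip install accelerate bitsandbytes
```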