TheBloke committed on
Commit
4731db6
1 Parent(s): 9cf457d

Update README.md

Files changed (1)
  1. README.md +21 -4
README.md CHANGED
@@ -1,6 +1,15 @@
 ---
 inference: false
 license: other
+datasets:
+- WizardLM/WizardLM_evol_instruct_70k
+library_name: transformers
+pipeline_tag: text-generation
+tags:
+- galactica
+- wizardlm
+- alpaca
+- opt
 ---
 
 <!-- header start -->
@@ -21,14 +30,22 @@ license: other
 
 These files are GPTQ 4bit model files for [Georgia Tech Research Institute's Galactica 30B Evol Instruct 70K](https://huggingface.co/GeorgiaTechResearchInstitute/galactica-30b-evol-instruct-70k).
 
-It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
+It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
 
 ## Repositories available
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/galactica-30B-evol-instruct-70K-GPTQ)
-* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/none)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/GeorgiaTechResearchInstitute/galactica-30b-evol-instruct-70k)
 
+## Prompt template
+
+```
+### Instruction:
+prompt
+
+### Response:
+```
+
 ## How to easily download and use this model in text-generation-webui
 
 Please make sure you're using the latest version of text-generation-webui
@@ -74,8 +91,8 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
 
 # Note: check the prompt template is correct for this model.
 prompt = "Tell me about AI"
-prompt_template=f'''### Human: {prompt}
-### Assistant:'''
+prompt_template=f'''### Instruction: {prompt}
+### Response:'''
 
 print("\n\n*** Generate:")
 
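As a usage note on the last hunk (not part of the commit itself): the change swaps the `### Human:`/`### Assistant:` template for the Instruction/Response format. A minimal sketch of assembling that new format in Python, where `build_prompt` is an illustrative helper name rather than anything from the README:

```python
# Sketch only: formats user text into the Instruction/Response template
# that this commit switches the README's example code to.
def build_prompt(user_input: str) -> str:
    # "### Instruction: <text>" on one line, "### Response:" on the next,
    # matching the prompt_template f-string in the diff.
    return f"### Instruction: {user_input}\n### Response:"

formatted = build_prompt("Tell me about AI")
print(formatted)
```

The resulting string is what the README's example then tokenizes and passes to the quantised model for generation.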