elinas committed
Commit f26420b
1 Parent(s): 2fdfdd2

Update README.md

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -1,13 +1,15 @@
 ---
 license: other
+tags:
+- alpaca
 ---
 
 # llama-13b-int4
-This LoRA trained for 3 epochs and has been converted to int4 via GPTQ method. See the repo below for more info.
+This LoRA trained for 3 epochs and has been converted to int4 (4bit) via GPTQ method. See the repo below for more info.
 
 https://github.com/qwopqwop200/GPTQ-for-LLaMa
 
-# Update 2023-04-03
+# Important - Update 2023-04-03
 Recent GPTQ commits have introduced breaking changes to model loading and you should use commit `a6f363e3f93b9fb5c26064b5ac7ed58d22e3f773` in the `cuda` branch.
 
 If you're not familiar with the Git process
@@ -221,5 +223,4 @@ We filtered the data from the Web based on its proximity to Wikipedia text and r
 Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
 
 **Use cases**
-LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
-
+LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
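For readers not familiar with the Git process the README refers to, a minimal sketch of pinning GPTQ-for-LLaMa to the referenced commit might look like the following; the clone location is an assumption, not part of this commit:

```bash
# Clone GPTQ-for-LLaMa and pin it to the commit referenced in the README.
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa.git
cd GPTQ-for-LLaMa

# Switch to the cuda branch, then check out the known-good commit
# (this leaves the working tree in a detached-HEAD state at that commit).
git checkout cuda
git checkout a6f363e3f93b9fb5c26064b5ac7ed58d22e3f773
```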