uf-aice-lab committed on
Commit 8657b76
1 Parent(s): 65189bd

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -8,7 +8,7 @@ pipeline_tag: question-answering
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-This model is fine-tuned from LLaMA-2 on 8 Nvidia A100-80G GPUs, using 3,000,000 groups of mathematics conversations between students and facilitators on Algebra Nation (https://www.mathnation.com/). Llama-mt-lora consists of 32 layers and over 7 billion parameters, consuming up to 13.5 gigabytes of disk space. Researchers can experiment with and fine-tune the model to help build conversational math AI that can effectively generate responses in a mathematical context.
+This model is fine-tuned from LLaMA-2 on 8 Nvidia A100-80G GPUs, using 3,000,000 groups of mathematics conversations between students and facilitators on Algebra Nation (https://www.mathnation.com/). Llama-2-Qlora consists of 32 layers and over 7 billion parameters, consuming up to 13.5 gigabytes of disk space. Researchers can experiment with and fine-tune the model to help build conversational math AI that can effectively generate responses in a mathematical context.
 ### Here is how to use it with texts in HuggingFace
 ```python
 import torch
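
The usage snippet shown in the diff context is truncated after `import torch`. Below is a minimal sketch of how a QLoRA-adapted LLaMA-2 model like this one might be loaded and queried with `transformers` and `peft`; the base-model id, the adapter repository id, the prompt, and the generation settings are illustrative assumptions, not taken from the commit.

```python
# Minimal sketch (assumptions noted below): load a LLaMA-2 base model, attach
# LoRA/QLoRA adapters with peft, and generate a response to a math question.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"      # assumed base checkpoint (gated on the Hub)
adapter_id = "uf-aice-lab/Llama-2-QLoRA"  # hypothetical adapter repo id for this model

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # apply the fine-tuned adapters
model.eval()

prompt = "A student asks: how do I solve 2x + 3 = 11?"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

If memory is tight, loading the base model in 4-bit via `bitsandbytes` (for example with a `BitsAndBytesConfig` passed to `from_pretrained`) would mirror the QLoRA training setup, assuming that dependency is installed.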