<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned from LLaMA-2 on 8 Nvidia A100-80G GPUs using 3,000,000 groups of mathematics conversations between students and facilitators on Algebra Nation (https://www.mathnation.com/). Llama-2-Qlora consists of 32 layers and over 7 billion parameters, and takes up to 13.5 gigabytes of disk space. Researchers can experiment with and fine-tune the model to build dedicated LLMs for downstream tasks (e.g., classification) related to K-12 math learning.
### How to use it with Hugging Face
```python
import torch
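# A minimal usage sketch (an assumption, not the official recipe): the QLoRA adapter
# is loaded on top of the LLaMA-2 base model with the `peft` library. The repo ids
# below are placeholders; substitute the actual base-model and adapter ids.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"       # assumed base model
adapter_id = "uf-aice-lab/Llama-2-QLoRA"   # placeholder adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Generate a response to a sample math-tutoring prompt.
prompt = "A student asks: how do I solve 2x + 3 = 7?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))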