Update README.md
README.md
---

## Usage

Support for this model will be added in an upcoming `transformers` release. In the meantime, please install the library from source:

~~~
pip install git+https://github.com/huggingface/transformers
~~~
We can now run inference on this model:

~~~
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
model_path = "YaoLuzjut/partial-layer_fine-tuning_Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)

device = 'cuda'
model = AutoModelForCausalLM.from_pretrained(model_path).to(device)

# Tokenize an example prompt (any text works) and generate
inputs = tokenizer("Hello, how are you?", return_tensors="pt").input_ids.to(device)
outputs = model.generate(inputs, max_length=20)

# Decode and print the output
output_text = tokenizer.decode(outputs[0])
print(output_text)
~~~
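Since this checkpoint derives from Llama-3.1-8B-Instruct, inputs are usually wrapped in the chat format rather than passed as raw text; in practice `tokenizer.apply_chat_template(...)` handles this automatically. As a rough sketch of the single-turn shape that template produces (the special-token literals below are an assumption based on the published Llama 3.1 chat format, so verify against this checkpoint's tokenizer):

```python
# Hypothetical sketch: build a Llama-3.1-style chat prompt by hand.
# Prefer tokenizer.apply_chat_template(...), which reads the template
# bundled with the checkpoint; the literals here are assumptions.

def build_chat_prompt(user_message: str) -> str:
    """Format a single-turn chat prompt in the Llama-3.1 instruct style."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_chat_prompt("What is the capital of France?")
print(prompt)
```

Feeding a prompt formatted this way through the tokenizer and `model.generate` as above tends to keep an instruct-tuned model on-task compared with a bare text prompt.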

## Evaluation Results

Zero-shot performance. Evaluated using select datasets from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/main) with additions: