QuietImpostor committed
Update README.md
README.md CHANGED
@@ -41,7 +41,7 @@ The model was fine-tuned on a synthetic dataset derived from GPT-4 (for user que
 
 ### Performance Metrics
 - **Training Loss:** Final loss of 1.3721 after 3 epochs
-- **
+- **Real-world Use:** Seems to struggle with maintaining conversational context.
 
 ### Limitations and Current Shortcomings
 - The model's knowledge is limited to its training data and cut-off date.
@@ -101,7 +101,7 @@ Detailed evaluation results are not available, but the model showed consistent i
 from transformers import AutoModelForCausalLM, AutoTokenizer
 import torch
 
-model_path = "
+model_path = "QuietImpostor/OpenELM-270M-Instruct-SonnOpus"
 model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)
 tokenizer = AutoTokenizer.from_pretrained(model_path)
 
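For context, the snippet completed by this commit loads the model and tokenizer but stops short of generation. A minimal end-to-end sketch might look like the following; the prompt, greedy decoding settings, and the `trust_remote_code` note are illustrative assumptions, not part of the commit:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_path = "QuietImpostor/OpenELM-270M-Instruct-SonnOpus"
# OpenELM-based checkpoints may ship custom modeling code; if loading fails,
# try passing trust_remote_code=True (an assumption, not stated in the README).
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Illustrative prompt; the README does not specify a prompt format.
prompt = "Explain what fine-tuning a language model means."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```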