---
library_name: transformers
language:
- en
- ko
pipeline_tag: translation
tags:
- llama-3-ko
license: mit
datasets:
- recipes
---

### Model Card for llama3-pre1-ds-lora1

### Model Details

Model overview for llama3-pre1-ds-lora1, fine-tuned with LoRA:

- Model Name: llama3-pre1-ds-lora1
- Model Type: Transformer-based Language Model
- Model Size: 8 billion parameters
- Developed by: 4yo1
- Languages: English and Korean

### Model Description
llama3-pre1-ds-lora1 is a language model pre-trained on a diverse corpus of English and Korean texts and then fine-tuned with LoRA (Low-Rank Adaptation). This fine-tuning approach allows the model to adapt to specific tasks or datasets with a minimal number of additional trainable parameters, making it efficient and effective for specialized applications; a sketch of the idea follows.

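The "minimal number of additional parameters" claim is the defining property of LoRA. As a rough, non-authoritative sketch of that idea using the peft library (the rank, alpha, and target modules below are illustrative assumptions, not the values used to train this model):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model; LoRA adds small trainable matrices on top of it.
base_model = AutoModelForCausalLM.from_pretrained("4yo1/llama3-pre1-ds-lora1")

lora_config = LoraConfig(
    r=8,                                  # adapter rank (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```
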
### How to Use - Sample Code

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Load the configuration, base model weights, and tokenizer from the Hub.
config = AutoConfig.from_pretrained("4yo1/llama3-pre1-ds-lora1")
model = AutoModel.from_pretrained("4yo1/llama3-pre1-ds-lora1")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-ds-lora1")
```
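
For translation-style generation, something along these lines should work, assuming the checkpoint ships a language-modeling head (hence AutoModelForCausalLM rather than the bare AutoModel above); the prompt format is an illustrative assumption, since the card does not document one:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-ds-lora1")
model = AutoModelForCausalLM.from_pretrained("4yo1/llama3-pre1-ds-lora1")

# Illustrative English-to-Korean prompt; the exact format is an assumption.
prompt = "Translate the following English sentence to Korean: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```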

### Datasets

- recipes

### License

MIT