---
license: apache-2.0
datasets:
- squarelike/sharegpt_deepl_ko_translation
language:
- en
- ko
pipeline_tag: translation
---
# Gugugo-koen-7B-V1.1
Detail repo: [https://github.com/jwj7140/Gugugo](https://github.com/jwj7140/Gugugo)
![Gugugo](./logo.png)
**Base Model**: [Llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
**Training Dataset**: [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation).
I trained the model on a single A6000 GPU for 90 hours.
## **Prompt Template**
**KO->EN**
```
### 한국어: {sentence}</끝>
### 영어:
```
**EN->KO**
```
### μ˜μ–΄: {sentence}</끝>
### ν•œκ΅­μ–΄:
```
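The model expects exactly this format at inference time and emits the `</끝>` marker when the translation is finished. As a small illustration of filling the templates (the `build_prompt` helper below is not part of this repo, just an example):

```python
def build_prompt(sentence: str, direction: str = "enko") -> str:
    """Fill the Gugugo prompt template for the chosen translation direction."""
    if direction == "koen":  # Korean -> English
        return f"### 한국어: {sentence}</끝>\n### 영어:"
    return f"### 영어: {sentence}</끝>\n### 한국어:"  # English -> Korean

print(build_prompt("Hello, world!", direction="enko"))
```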
GPTQ and GGUF quantized versions are also available:
[https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GPTQ](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GPTQ)
[https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GGUF](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GGUF)
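As a rough sketch, the GPTQ repo can typically be loaded through `transformers` like any other checkpoint, assuming the `auto-gptq` (or `optimum`) backend and `accelerate` are installed; check that repo's card for exact requirements:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes auto-gptq/optimum is installed; the quantization config is read from the repo.
gptq_repo = "squarelike/Gugugo-koen-7B-V1.1-GPTQ"
gptq_model = AutoModelForCausalLM.from_pretrained(gptq_repo, device_map="auto")
gptq_tokenizer = AutoTokenizer.from_pretrained(gptq_repo)
```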
## **Implementation Code**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList
import torch

repo = "squarelike/Gugugo-koen-7B-V1.1"

# Load the model in 4-bit so it fits on a single GPU.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    load_in_4bit=True,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)


class StoppingCriteriaSub(StoppingCriteria):
    """Stops generation once any of the given token-id sequences is produced."""
    def __init__(self, stops=[], encounters=1):
        super().__init__()
        self.stops = [stop for stop in stops]

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
        for stop in self.stops:
            if torch.all((stop == input_ids[0][-len(stop):])).item():
                return True
        return False


# Token-id sequences corresponding to the "</끝>" end marker and close variants.
stop_words_ids = torch.tensor([[829, 45107, 29958], [1533, 45107, 29958], [829, 45107, 29958], [21106, 45107, 29958]]).to("cuda")
stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words_ids)])


def gen(lan="en", x=""):
    # lan is the source language: "ko" translates Korean -> English,
    # anything else translates English -> Korean.
    if lan == "ko":
        prompt = f"### 한국어: {x}</끝>\n### 영어:"
    else:
        prompt = f"### 영어: {x}</끝>\n### 한국어:"
    gened = model.generate(
        **tokenizer(
            prompt,
            return_tensors='pt',
            return_token_type_ids=False
        ).to("cuda"),
        max_new_tokens=2000,
        temperature=0.1,
        do_sample=True,
        stopping_criteria=stopping_criteria
    )
    # Strip the prompt and the end marker from the decoded output.
    return tokenizer.decode(gened[0][1:]).replace(prompt + " ", "").replace("</끝>", "")


print(gen(lan="en", x="Hello, world!"))
```
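The stop-word ID lists passed to `StoppingCriteriaSub` are tokenizations of the `</끝>` marker, so generation halts as soon as the model closes the translation. For the reverse direction, the same helper can be called with Korean source text, for example (illustrative input only):

```python
# Korean -> English
print(gen(lan="ko", x="안녕하세요. 오늘 날씨가 좋네요."))
```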