---
license: cc-by-nc-sa-4.0
datasets:
- squarelike/sharegpt_deepl_ko_translation
language:
- ko
pipeline_tag: translation
tags:
- translate
---
## **Seagull-13b-translation 📇**
![Seagull-typewriter](./Seagull-typewriter.png)

**Seagull-13b-translation** is yet another translation model, but one that carefully addresses the following issues found in existing translation models:
- Exact preservation of `newline` and `space` characters
- No datasets with the first letter removed
- Code
- Markdown format
- LaTeX format
- etc.

Although these issues were checked thoroughly during training, we recommend examining the model's output closely in these areas when using it (e.g., text containing code).

> If you're interested in building large-scale language models to solve a wide variety of problems in a wide variety of domains, you should consider joining [Allganize](https://allganize.career.greetinghr.com/o/65146). For a coffee chat or if you have any questions, please do not hesitate to contact me as well! - kuotient.dev@gmail.com

This model was created as a personal experiment, unrelated to the organization I work for.

## **License**
## From original model author:
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT
- Full License available at: https://huggingface.co/beomi/llama-2-koen-13b/blob/main/LICENSE

# **Model Details**
#### **Developed by**
Jisoo Kim (kuotient)

#### **Base Model**
[beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)

#### **Datasets**
- [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation)
- [KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3)
- AIHUB
  - 기술과학 분야 한-영 번역 병렬 말뭉치 데이터
  - 일상생활 및 구어체 한-영 번역 병렬 말뭉치 데이터

## **Usage**
#### **Format**
It follows only the **ChatML** format.
```python
<|im_start|>system
주어진 문장을 한국어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
# Don't miss the newline here
```

```python
<|im_start|>system
주어진 문장을 영어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
# Don't miss the newline here
```

#### Output example
Original text:
> A particle's wave function, $\psi(x)$, is given by $$\psi(x)=\begin{cases} 3x & \text{if } -1 \leq x \leq 0 \\ 3(1-x) & \text{if } 0 < x \leq 1 \\ 0 & \text{otherwise} \end{cases}$$ Compute the Fourier transform, $\tilde{\psi}(k)$, of the wave function $\psi(x)$ and show that it satisfies the Fourier inversion theorem, i.e., $\psi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \tilde{\psi}(k) e^{ikx} \mathrm{d}k$.

Seagull-13b-translation:
> 입자의 파동 함수 $\psi(x)$는 다음과 같이 주어집니다. $$\psi(x)=\begin{cases} 3x & \text{if } -1 \leq x \leq 0 \\ 3(1-x) & \text{if } 0 < x \leq 1 \\ 0 & \text{otherwise} \end{cases}$$ 파동 함수 $\psi(x)$의 푸리에 변환 $\tilde{\psi}(k)$를 계산하고 푸리에 반전 정리, 즉 $\psi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \tilde{\psi}(k) e^{ikx} \mathrm{d}k$를 만족합니다.

DeepL:
> 입자의 파동 함수 $\psi(x)$는 $$\psi(x)=\begin{cases}로 주어집니다. 3x & \text{if } -1 \leq x \leq 0 \\ 3(1-x) & \text{if } 0 < x \leq 1 \\ 0 & \text{기타} \end{cases}$$ 파동 함수 $\psi(x)$의 푸리에 변환인 $\tilde{\psi}(k)$를 계산하고 푸리에 반전 정리, 즉 $\psi(x) = \frac{1}{\sqrt{2\pi}}를 만족함을 증명합니다. \int_{-\infty}^{\infty} \물결표{\psi}(k) e^{ikx} \mathrm{d}k$.

...and many more impressive cases involving SQL queries, code, and markdown!

#### **How to**
**I highly recommend running inference with vLLM. I will write a guide for quick and easy inference if requested.**

Since the `chat_template` already contains the instruction format shown above, you can use the code below.
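The ChatML templates above can be sketched as a plain string builder, which makes the required trailing newline after `<|im_start|>assistant` explicit. `build_chatml_prompt` is a hypothetical helper for illustration, not part of the model's API:

```python
def build_chatml_prompt(system: str, instruction: str) -> str:
    """Build a ChatML prompt string for Seagull-13b-translation.

    The prompt must end with the assistant tag followed by a newline;
    the model's generation continues from that point.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{instruction}<|im_end|>\n"
        "<|im_start|>assistant\n"  # don't miss the newline here
    )

prompt = build_chatml_prompt("주어진 문장을 한국어로 번역하세요.", "Hello, world.")
print(prompt)
```

In practice you rarely need to build this string by hand, since the tokenizer's `chat_template` produces the same layout; the sketch is just to show exactly what the model expects.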
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("kuotient/Seagull-13B-translation")
tokenizer = AutoTokenizer.from_pretrained("kuotient/Seagull-13B-translation")

messages = [
    {"role": "system", "content": "주어진 문장을 한국어로 번역하세요."},
    {"role": "user", "content": "Here are five examples of nutritious foods to serve your kids."},
]

# add_generation_prompt=True appends the "<|im_start|>assistant\n" turn
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
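Note that `batch_decode` returns the full ChatML transcript (prompt plus generation), so a small post-processing step is needed to recover just the translation. A minimal sketch, assuming the decoded text keeps the ChatML tags; `extract_assistant` is a hypothetical helper name:

```python
def extract_assistant(decoded: str) -> str:
    """Return the text of the last assistant turn from a decoded ChatML transcript."""
    # Everything after the final assistant tag, up to its end-of-turn marker.
    tail = decoded.split("<|im_start|>assistant\n")[-1]
    return tail.split("<|im_end|>")[0].strip()

sample = (
    "<|im_start|>system\n주어진 문장을 한국어로 번역하세요.<|im_end|>\n"
    "<|im_start|>user\nHello.<|im_end|>\n"
    "<|im_start|>assistant\n안녕하세요.<|im_end|>"
)
print(extract_assistant(sample))  # → 안녕하세요.
```

If you decode with `skip_special_tokens=True` instead, the tags may be stripped and a different slicing strategy (e.g., dropping the prompt tokens before decoding) would be needed.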