Llama-2-Ko-Chat 🦙🇰🇷
Llama-2-Ko-7b-Chat was built on top of the beomi/llama-2-ko-7b 40B checkpoint and fine-tuned on the nlpai-lab/kullm-v2 dataset. Training is still in progress, and we plan to continue training as beomi/llama-2-ko-7b is updated. This model was developed as part of the Naver BoostCamp NLP-08 project.
Project link: LLaMA
Model Details
Backbone Model beomi/llama-2-ko-7b
Model Developers Taemin Kim (kfkas), Juwon Kim (uomnf97)
Data nlpai-lab/kullm-v2
Prompts kullm (the exact template is shown in the Inference section below)
Input Models input text only.
Output Models generate text only.
Model | Question: Make a list of 10 items needed for a camping trip. |
---|---|
Llama-2-7b-chat-hf | [Hello! As a helpful and respectful assistant, I'd be happy to help you with your camping trip. To ensure a safe and enjoyable experience, here is a list of 10 essential items you may need for your camping trip:Tent: A sturdy, waterproof tent to provide shelter and protection from the elements.Sleeping Bag: A warm, insulated sleeping bag to keep you cozy during the night.Sleeping Pad: A comfortable sleeping pad to provide extra cushioning and insulation.Backpack: A sturdy, comfortable backpack to carry all your gear.Clothing: Pack layers of breathable, moisture-wicking clothing to stay warm and dry.Footwear: Sturdy, waterproof hiking boots or shoes to protect your feet and provide traction.Headlamp: A reliable, battery-powered headlamp to light your way during the night.First Aid Kit: A well-stocked first aid kit to treat any injuries or illnesses.Water Bottle: A refillable, BPA-free water bottle to stay hydrated.Personal Hygiene Items: Toothbrush, toothpaste, biodegradable soap, and toilet paper to maintain personal hygiene.] |
Llama-2-ko | [Camping supplies for 3 days - Clothing (5 days): underwear, outer layers, long sleeves and short sleeves are needed, so get them ready! - Food (9 days or more) - Fuel (planned for 6 days of use, but you should keep at least 7 days' worth.) - Tent/living space: 20 to 40 square meters are needed. - Water tank (water tanker / aquarium). 8 people will use the campsite (4 men / 4 women are needed!). This task has two additional conditions. (Condition 1) One pass over the list of all items to remove duplicates. This option is useful not only for identifying and removing duplicates but also for resetting the count when additional items are included. (Condition = "one or more identical items in the list of all items", or identified by the same numeric value in "(all items)") (Condition 2) Since each row of the task only needs to contain one item, an item may appear several times from different types] |
Llama-2-ko-Chat | [1) Tent - one of the most basic yet useful pieces of gear, and it makes a camping trip comfortable. 2) Don't forget to pack dedicated sleeping items such as a pillow and bed sheets! 3) Mattress - provides comfort and support so you can sleep well. 4) Clothing - pack long-sleeved tops made with waterproof material and shorts so you can sweat and then relax. 5) Cookware set - tools you absolutely need if you want to cook and eat outdoors. 6) Bring a stove with fuel or a portable gas burner; it is very convenient for cooking food. 7) Prepare a lantern, flashlight, or LED lamp! It will keep you safe on dark nights and is an essential item when cooking and spending time outdoors. 8) Flashlight - you may move around late at night or walk a trail for the first time, so be sure to carry one. 9) First aid kit - always stock common first-aid supplies such as antiseptic mist, disinfectant, fever reducer, and ointment for various injuries; keep them in a box so they are easy to find even in an emergency while camping. 10) Phone power bank/charger, or a rechargeable external outlet or connected external power source - remember that this is essential not only for your smartphone or smartwatch but can also help you make emergency contact when others need it.] |
Model | Question: Tell me about the three primary colors. |
---|---|
Llama-2-7b-chat-hf | [Regarding your question, "μΌμμμ λν΄ μλ €μ€" (which means "Tell me about trichromacy" in Korean), I can provide some information. Trichromacy is a concept in vision science that refers to the ability of the human visual system to perceive and distinguish between three primary colors: red, green, and blue...] |
Llama-2-ko | [C1, R0.625 G8 B275 3-4, drawing a triangle. Below, a triangle is drawn (specify the coordinates of the position where the picture should be drawn) - obtains the same result as the coordinate values entered as a string used with the command - for example, entering "X=-19. Y=0" prints the same values as the specified coordinates. ########################## ##3_0-6-1, create a text file from coordinate input (ex C4 D6 E10 A10) - each character in the command is the same as the input. - this command terminates the currently running program. - matches a file that can be executed ] |
Llama-2-ko-Chat | [Because the two eyes take in positional and brightness information about an object from opposite directions in order to recognize it and form a visual image of its color, humans generally classify color into three compressed forms called the three primary colors (red, green, and blue). One of these colors is said to tend to reach the eye much faster than the others. However, since this is not true for everyone, colors are not always classified in a triangular arrangement. Still, it cannot be denied that the three primary colors are delivered to our eyes and are important for distinguishing colors.] |
Training Progress
Inference
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def gen(x, model, tokenizer, device):
    # KULLM-style instruction prompt. The Korean text reads:
    # "Below is an instruction that describes a task. Write a response that appropriately completes the request."
    prompt = (
        f"아래는 작업을 설명하는 명령어입니다. "
        f"요청을 적절히 완료하는 응답을 작성하세요.\n\n### 명령어:\n{x}\n\n### 응답:"
    )
    len_prompt = len(prompt)
    gened = model.generate(
        **tokenizer(prompt, return_tensors="pt", return_token_type_ids=False).to(
            device
        ),
        max_new_tokens=1024,
        early_stopping=True,
        do_sample=True,
        top_k=20,
        top_p=0.92,
        no_repeat_ngram_size=3,
        eos_token_id=2,
        repetition_penalty=1.2,
        num_beams=3,
    )
    # Drop the prompt (by character length) from the decoded text and return only the response.
    return tokenizer.decode(gened[0])[len_prompt:]


def LLM_infer(input):
    device = (
        torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
    )
    model_id = "kfkas/Llama-2-ko-7b-Chat"
    # device_map={"": 0} places all weights on GPU 0; adjust for CPU-only machines.
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map={"": 0}, torch_dtype=torch.float16, low_cpu_mem_usage=True
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model.eval()
    model.config.use_cache = True
    tokenizer.pad_token = tokenizer.eos_token
    output = gen(input, model=model, tokenizer=tokenizer, device=device)
    return output


if __name__ == "__main__":
    text = LLM_infer("삼원색에 대해 알려줘")  # "Tell me about the three primary colors"
    print(text)
Note for oobabooga/text-generation-webui
Remove the ValueError catch in the load_tokenizer function (around line 109) of modules/models.py:
diff --git a/modules/models.py b/modules/models.py
index 232d5fa..de5b7a0 100644
--- a/modules/models.py
+++ b/modules/models.py
@@ -106,7 +106,7 @@ def load_tokenizer(model_name, model):
trust_remote_code=shared.args.trust_remote_code,
use_fast=False
)
- except ValueError:
+ except:
tokenizer = AutoTokenizer.from_pretrained(
path_to_model,
trust_remote_code=shared.args.trust_remote_code,
Since Llama-2-Ko uses the fast tokenizer provided by the HF tokenizers library rather than the sentencepiece package, the use_fast=True option is required when initializing the tokenizer.
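For example, a minimal sketch of loading the tokenizer with the fast implementation enabled explicitly:

from transformers import AutoTokenizer

# Llama-2-Ko ships a Hugging Face `tokenizers` (fast) tokenizer rather than a
# sentencepiece model, so load it with use_fast=True.
tokenizer = AutoTokenizer.from_pretrained("kfkas/Llama-2-ko-7b-Chat", use_fast=True)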
Apple Silicon does not support BF16 computing; use the CPU instead. (BF16 is supported on NVIDIA GPUs.)
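A minimal sketch of selecting the device and compute dtype along these lines, assuming PyTorch and the same model id as above:

import torch
from transformers import AutoModelForCausalLM

model_id = "kfkas/Llama-2-ko-7b-Chat"
if torch.cuda.is_available():
    # NVIDIA GPUs support half precision (BF16 on Ampere or newer).
    device, dtype = "cuda:0", torch.float16
else:
    # Apple Silicon / CPU-only machines: fall back to float32 on the CPU.
    device, dtype = "cpu", torch.float32

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype).to(device)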
Below is the original model card of the Llama-2 model.
Llama 2
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
Model Details
Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the website and accept our License before requesting access here.
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
Model Developers Meta
Variations Llama 2 comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.
Input Models input text only.
Output Models generate text only.
Model Architecture Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
Model | Training Data | Params | Content Length | GQA | Tokens | LR |
---|---|---|---|---|---|---|
Llama 2 | A new mix of publicly available online data | 7B | 4k | ✗ | 2.0T | 3.0 x 10^-4 |
Llama 2 | A new mix of publicly available online data | 13B | 4k | ✗ | 2.0T | 3.0 x 10^-4 |
Llama 2 | A new mix of publicly available online data | 70B | 4k | ✔ | 2.0T | 1.5 x 10^-4 |
Llama 2 family of models. Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The larger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
Model Dates Llama 2 was trained between January 2023 and July 2023.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: https://ai.meta.com/resources/models-and-libraries/llama-downloads/
Research Paper "Llama 2: Open Foundation and Fine-Tuned Chat Models"
Intended Use
Intended Use Cases Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the INST and <<SYS>> tags, BOS and EOS tokens, and the whitespaces and breaklines in between (we recommend calling strip() on inputs to avoid double-spaces). See our reference code in GitHub for details: chat_completion.
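As an illustrative sketch of that layout (the authoritative version is Meta's chat_completion reference code), a single-turn prompt might be assembled like this:

def build_llama2_chat_prompt(system_prompt: str, user_message: str) -> str:
    # Single-turn Llama-2-Chat prompt layout. The BOS (<s>) and EOS (</s>)
    # tokens are added by the tokenizer, not embedded in the string itself.
    return (
        "[INST] <<SYS>>\n"
        f"{system_prompt.strip()}\n"
        "<</SYS>>\n\n"
        f"{user_message.strip()} [/INST]"
    )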
Out-of-scope Uses Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
Hardware and Software
Training Factors We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.
Model | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
---|---|---|---|
Llama 2 7B | 184320 | 400 | 31.22 |
Llama 2 13B | 368640 | 400 | 62.44 |
Llama 2 70B | 1720320 | 400 | 291.42 |
Total | 3311616 | | 539.00 |
CO2 emissions during pretraining. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
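As a rough worked example from the 7B row: 184,320 GPU-hours at 400 W is about 73.7 MWh, so the reported 31.22 tCO2eq corresponds to an implied emission factor of roughly 0.42 kgCO2eq per kWh.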
Training Data
Overview Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
Model | Size | Code | Commonsense Reasoning | World Knowledge | Reading Comprehension | Math | MMLU | BBH | AGI Eval |
---|---|---|---|---|---|---|---|---|---|
Llama 1 | 7B | 14.1 | 60.8 | 46.2 | 58.5 | 6.95 | 35.1 | 30.3 | 23.9 |
Llama 1 | 13B | 18.9 | 66.1 | 52.6 | 62.3 | 10.9 | 46.9 | 37.0 | 33.9 |
Llama 1 | 33B | 26.0 | 70.0 | 58.4 | 67.6 | 21.4 | 57.8 | 39.8 | 41.7 |
Llama 1 | 65B | 30.7 | 70.7 | 60.5 | 68.6 | 30.8 | 63.4 | 43.5 | 47.6 |
Llama 2 | 7B | 16.8 | 63.9 | 48.9 | 61.3 | 14.6 | 45.3 | 32.6 | 29.3 |
Llama 2 | 13B | 24.5 | 66.9 | 55.4 | 65.8 | 28.7 | 54.8 | 39.4 | 39.1 |
Llama 2 | 70B | 37.5 | 71.9 | 63.6 | 69.4 | 35.2 | 68.9 | 51.2 | 54.2 |
Overall performance on grouped academic benchmarks. Code: We report the average pass@1 scores of our models on HumanEval and MBPP. Commonsense Reasoning: We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. World Knowledge: We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. Reading Comprehension: For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. MATH: We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
Model | Size | TruthfulQA | Toxigen |
---|---|---|---|
Llama 1 | 7B | 27.42 | 23.00 |
Llama 1 | 13B | 41.74 | 23.08 |
Llama 1 | 33B | 44.19 | 22.57 |
Llama 1 | 65B | 48.71 | 21.77 |
Llama 2 | 7B | 33.29 | 21.25 |
Llama 2 | 13B | 41.86 | 26.10 |
Llama 2 | 70B | 50.18 | 24.60 |
Evaluation of pretrained LLMs on automatic safety benchmarks. For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
Model | Size | TruthfulQA | Toxigen |
---|---|---|---|
Llama-2-Chat | 7B | 57.04 | 0.00 |
Llama-2-Chat | 13B | 62.18 | 0.00 |
Llama-2-Chat | 70B | 64.14 | 0.01 |
Evaluation of fine-tuned LLMs on different safety datasets. Same metric definitions as above.
Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
Reporting Issues
Please report any software "bug," or other problems with the models through one of the following means:
- Reporting issues with the model: github.com/facebookresearch/llama
- Reporting problematic content generated by the model: developers.facebook.com/llama_output_feedback
- Reporting bugs and security concerns: facebook.com/whitehat/info
Llama Model Index