MaziyarPanahi committed on
Commit 9ee5346
1 Parent(s): fc00217

Create README.md (#2)


- Create README.md (ca78ee39e37f30a0952ff409004b37e4d8a1b9a9)

Files changed (1)
  1. README.md +96 -0
README.md ADDED
 
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- axolotl
- finetune
- facebook
- meta
- pytorch
- llama
- llama-3
language:
- en
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
inference: false
model_creator: MaziyarPanahi
model_name: Llama-3-8B-Instruct-DPO-v0.2
quantized_by: MaziyarPanahi
datasets:
- mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
---

<img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left: auto; margin-right: auto; display: block;"/>

# Llama-3-8B-Instruct-DPO-v0.2

This model is a DPO fine-tune of the `meta-llama/Meta-Llama-3-8B-Instruct` model on the `mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha` preference dataset.
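
The card doesn't publish the training recipe (the `axolotl` tag suggests the fine-tune was run with axolotl), but for illustration only, here is a minimal sketch of a comparable DPO run using Hugging Face's `trl` library. The hyperparameter values and `output_dir` are hypothetical, and it assumes a recent `trl` release (`DPOConfig`/`DPOTrainer`, tokenizer passed as `processing_class`) and that the dataset's `prompt`/`chosen`/`rejected` columns are used as-is:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Preference dataset listed in the card's metadata
dataset = load_dataset("mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha", split="train")

# Illustrative hyperparameters, not the ones used for this model
training_args = DPOConfig(
    output_dir="llama-3-8b-instruct-dpo",  # hypothetical output path
    beta=0.1,                 # strength of the DPO preference regularizer
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,  # older trl releases use tokenizer= instead
)
trainer.train()
```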

# How to use

You can load this model by passing `MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2` as the model name to Hugging Face's `transformers` library.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, pipeline
import torch

model_id = "MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
    # attn_implementation="flash_attention_2"
)

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True
)

# Print tokens to stdout as they are generated
streamer = TextStreamer(tokenizer)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    streamer=streamer
)

# Build the prompt with the model's chat template
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the EOS token or the ChatML end-of-turn token
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|im_end|>")
]

outputs = pipe(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
print(outputs[0]["generated_text"][len(prompt):])
```
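
If you'd rather call the model directly instead of going through `pipeline`, the same chat flow works with `model.generate` (a minimal sketch reusing the `model`, `tokenizer`, `messages`, and `terminators` defined above):

```python
# Tokenize the chat template directly and move it to the model's device
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```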