piuzha committed on
Commit
f02a1dd
1 Parent(s): a14b4fb

Update README.md

Files changed (1)
  1. README.md +25 -0
README.md CHANGED
@@ -59,6 +59,31 @@ print(sequences[0]['generated_text'])
 
 ## Chat template
 
+The chat template is available via the apply_chat_template() method:
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+device = "cuda"
+
+model = AutoModelForCausalLM.from_pretrained("moxin-org/moxin-chat-7b")
+tokenizer = AutoTokenizer.from_pretrained("moxin-org/moxin-chat-7b")
+
+messages = [
+    {"role": "user", "content": "What is your favourite condiment?"},
+    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
+    {"role": "user", "content": "Do you have mayonnaise recipes?"}
+]
+
+encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
+
+model_inputs = encodeds.to(device)
+model.to(device)
+
+generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
+decoded = tokenizer.batch_decode(generated_ids)
+print(decoded[0])
+```
+
 
 ## Evaluation
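For reference, a chat template conceptually just renders the message list into a single prompt string before tokenization. The sketch below mimics a Mistral-style `[INST] ... [/INST]` format as an illustration only; the actual template used by moxin-chat-7b is defined by the tokenizer's `chat_template` attribute and may differ:

```python
def render_chat(messages):
    """Render a list of {role, content} dicts into one prompt string.

    This mimics a Mistral-style [INST] template for illustration; the
    real rendering is done by tokenizer.apply_chat_template().
    """
    out = "<s>"  # assumed BOS token
    for msg in messages:
        if msg["role"] == "user":
            out += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            # assistant turns are closed with an assumed EOS token
            out += f"{msg['content']}</s>"
    return out

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Lemon juice."},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]
print(render_chat(messages))
```

To see the string the tokenizer actually produces, call `apply_chat_template` with `tokenize=False` instead of `return_tensors="pt"`.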