datasets:
  - glaiveai/glaive-function-calling-v2
---

# Function Calling with Llama Model

This README shows how to use a Llama model from Hugging Face Transformers to handle function calls in a conversation, such as canceling a reservation.

## Setup

First, import the necessary libraries.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import json
import torch
```

## Tool Information

Define the tools and their parameters as a list of dictionaries.

```python
tools_info = [
    {
        "name": "cancel_reservation",
        "description": "Cancel a reservation",
        "parameters": {
            "type": "object",
            "properties": {
                "reservation_number": {
                    "type": "integer",
                    "description": "Reservation number"
                }
            },
            "required": ["reservation_number"]
        }
    },
]
```
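The schema above only *declares* the tool; executing a call the model requests requires local implementations keyed by tool name. A minimal sketch, assuming a hypothetical `cancel_reservation` backend (not part of this model or dataset):

```python
# Hypothetical local implementation for the tool declared above.
def cancel_reservation(reservation_number):
    # A real application would call a booking backend here.
    return {"status": "cancelled", "reservation_number": reservation_number}

# Registry mapping declared tool names to their implementations.
TOOL_REGISTRY = {"cancel_reservation": cancel_reservation}

def dispatch(name, arguments):
    """Look up a declared tool by name and invoke it with its arguments."""
    if name not in TOOL_REGISTRY:
        raise ValueError(f"Unknown tool: {name}")
    return TOOL_REGISTRY[name](**arguments)

result = dispatch("cancel_reservation", {"reservation_number": 1011})
```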

## System Initialization

Build the system prompt that exposes the defined tools to the model.

```python
system = f"You are a helpful assistant with access to the following functions: \n {json.dumps(tools_info, indent=2)}."
```

## Conversation Flow

Simulate a conversation flow where the user requests to cancel a reservation.

```python
messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": "Help me to cancel a reservation"},
    {"role": "assistant", "content": "I can help with that. Could you please provide me with the reservation number?"},
    {"role": "user", "content": "the reservation number is 1011"}
]
```

## Model Loading

Load the causal language model and tokenizer.

```python
model_id = "caldana/function_calling_llama3_8b_instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```

## Generating Response

Apply the chat template, then generate a response from the model based on the conversation context.

```python
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
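The decoded reply must still be parsed before anything can be executed. The exact output format depends on the fine-tune; assuming the reply embeds a JSON object carrying `name` and `arguments` (the example reply string below is hypothetical), a sketch of extracting it:

```python
import json
import re

def extract_function_call(text):
    """Pull the first JSON object out of a model reply, if any.

    Assumes the fine-tune emits calls as a single JSON object containing
    a "name" key; returns None when no parseable call is found. The greedy
    regex is a simplification and handles one object per reply.
    """
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        return None
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if "name" in call:
        return call
    return None

# Hypothetical reply text for illustration only.
reply = '<functioncall> {"name": "cancel_reservation", "arguments": {"reservation_number": 1011}}'
call = extract_function_call(reply)
```

The extracted `name` and `arguments` can then be routed to whatever local code implements the tool.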

## Conclusion

This setup demonstrates how to use a pre-trained Llama model to handle function calls within a simulated conversation, focusing on task automation such as reservation cancellation.
118