akshayballal committed
Commit 41ec4e6
1 Parent(s): 4e7d0b2

Update README.md

README.md CHANGED
tags:
- unsloth
- llama
- trl
datasets:
- Salesforce/xlam-function-calling-60k
pipeline_tag: text-generation
library_name: peft
---

# Model Card

This model is a function-calling version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct), finetuned on the [Salesforce/xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) dataset.

# Uploaded model

- **Developed by:** akshayballal
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3.5-mini-instruct-bnb-4bit

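Each record in the finetuning dataset pairs a user query with JSON-encoded tool schemas and the target calls. The `query`/`tools`/`answers` field names and the example record below are assumptions based on the dataset card; this is only a sketch of how such a record maps onto the prompt format used in the Usage section:

```python
import json

# Hypothetical record in the style of Salesforce/xlam-function-calling-60k;
# the query/tools/answers field names are an assumption, not verified here.
record = {
    "query": "Get the next 5 upcoming CS:GO matches.",
    "tools": json.dumps([
        {"name": "upcoming", "parameters": {"page": {"type": "int"}, "limit": {"type": "int"}}}
    ]),
    "answers": json.dumps([
        {"name": "upcoming", "arguments": {"page": 1, "limit": 5}}
    ]),
}

def build_prompt(record: dict) -> str:
    """Render one record into the instruction format shown in the Usage section."""
    tools = json.loads(record["tools"])  # tool schemas are stored as a JSON string
    return (
        "You are a helpful assistant. Below are the tools that you have access to. "
        f"\n\n### Tools: \n{tools} \n\n### Query: \n{record['query']} \n"
    )

prompt = build_prompt(record)
```

During training, the record's `answers` list serves as the completion the model learns to emit for this prompt.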
### Usage

```python
from unsloth import FastLanguageModel

max_seq_length = 2048  # choose any; Unsloth supports RoPE scaling internally
dtype = None  # None for auto detection; float16 for Tesla T4/V100, bfloat16 for Ampere+
load_in_4bit = True  # use 4-bit quantization to reduce memory usage; can be False

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="outputs/checkpoint-3000",  # your finetuned checkpoint (or this model's Hub repo ID)
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)
FastLanguageModel.for_inference(model)  # enable native 2x faster inference

tools = [
    {
        "name": "upcoming",
        "description": "Fetches upcoming CS:GO matches data from the specified API endpoint.",
        "parameters": {
            "content_type": {
                "description": "The content type for the request, default is 'application/json'.",
                "type": "str",
                "default": "application/json",
            },
            "page": {
                "description": "The page number to retrieve, default is 1.",
                "type": "int",
                "default": "1",
            },
            "limit": {
                "description": "The number of matches to retrieve per page, default is 10.",
                "type": "int",
                "default": "10",
            },
        },
    }
]

query = "Get the next 10 upcoming CS:GO matches."  # example query

messages = [
    {
        "role": "user",
        "content": f"You are a helpful assistant. Below are the tools that you have access to. \n\n### Tools: \n{tools} \n\n### Query: \n{query} \n",
    },
]

input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids=input_ids, max_new_tokens=512, do_sample=False)  # greedy decoding

decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print(decoded_output)
```
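The model is expected to answer with a JSON list of tool calls in the style of the dataset's `answers` field (an assumption; adjust to whatever format your checkpoint actually emits). A minimal sketch for pulling that list out of the decoded text:

```python
import json

def parse_tool_calls(text: str) -> list:
    """Extract the last JSON array of tool calls from generated text.

    Assumes the model emits an xlam-style list such as
    [{"name": "upcoming", "arguments": {"page": 1, "limit": 10}}]
    and that the call arguments contain no nested arrays.
    Returns an empty list if no parseable array is found.
    """
    start, end = text.rfind("["), text.rfind("]")
    if start == -1 or end == -1 or end < start:
        return []
    try:
        calls = json.loads(text[start : end + 1])
    except json.JSONDecodeError:
        return []
    return calls if isinstance(calls, list) else []

# Hypothetical decoded output for the example query above.
decoded_output = '[{"name": "upcoming", "arguments": {"page": 1, "limit": 10}}]'
calls = parse_tool_calls(decoded_output)
```

Each parsed entry can then be dispatched to the matching function by name, with `arguments` unpacked as keyword arguments.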

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)