---
license: llama3.3
datasets:
  - pankajmathur/orca_mini_v1_dataset
language:
  - en
base_model:
  - meta-llama/Llama-3.3-70B-Instruct
library_name: transformers
---

# Model Name: orca_mini_v8_0_Llama-3.3-70B-Instruct

**orca_mini_v8_0_Llama-3.3-70B-Instruct** is trained on various SFT datasets.

Passionate about Generative AI? I help companies privately train and deploy custom, use-case-specific LLMs/MLLMs affordably. For startups, I can even assist with securing GPU grants to get you started. Let's chat!

Looking forward to connecting: https://www.linkedin.com/in/pankajam


## NOTICE

By providing proper credit and attribution, you are granted permission to use this model as a foundational base for further full fine-tuning, DPO, PPO, or ORPO tuning, and any kind of merge. I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive general model. Dive in and innovate!

## Example Usage

Here is the Llama 3 prompt format:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are Orca Mini, a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

Hello Orca Mini, what can you do for me?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
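
You normally do not need to assemble this template by hand; `tokenizer.apply_chat_template` renders it for you. A minimal sketch using the tokenizer shipped with this repo:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pankajmathur/orca_mini_v8_0_70b")
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"}
]
# tokenize=False returns the rendered prompt string instead of token IDs
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```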

Below is a code example showing how to use this model in the default (bf16) format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_slug = "pankajmathur/orca_mini_v8_0_70b"
# Load the model with its default bf16 weights
model = AutoModelForCausalLM.from_pretrained(model_slug, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_slug)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"}
]
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_dict=True, return_tensors="pt").to(model.device)
output = model.generate(**gen_input, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][gen_input["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Below is a code example showing how to use this model in 4-bit format via the bitsandbytes library:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_slug = "pankajmathur/orca_mini_v8_0_70b"
# Quantize to 4-bit on the fly with bitsandbytes
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_slug, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_slug)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"}
]
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_dict=True, return_tensors="pt").to(quantized_model.device)
output = quantized_model.generate(**gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0][gen_input["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Below is a code example showing how to do tool use with this model and the transformers library.

Since orca_mini_v8_0_70b is based on Llama 3.3, it supports multiple tool use formats. You can see a full guide to prompt formatting here.

Tool use is also supported through chat templates in Transformers. Here is a quick example showing a single simple tool:

```python
# First, define a tool
def get_current_temperature(location: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"
    Returns:
        The current temperature at the specified location, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!

# Next, create a chat and apply the chat template
messages = [
  {"role": "system", "content": "You are a bot that responds to weather queries."},
  {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]

inputs = tokenizer.apply_chat_template(
    messages, tools=[get_current_temperature], add_generation_prompt=True, return_dict=True, return_tensors="pt"
)
```
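
You can then generate text from this input as normal. A minimal sketch, assuming the `model` and `tokenizer` from the bf16 loading example above:

```python
# Move the tool-aware inputs to the model device and generate
inputs = inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```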

If the model generates a tool call, you should add it to the chat like so:

```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```

Then call the tool and append the result, with the tool role, like so:

messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})

After that, you can call `generate()` again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling; for more information, see the Llama prompt format docs and the Transformers tool use documentation.
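
For completeness, here is a minimal sketch of that final generation step, again assuming the `model` and `tokenizer` loaded above plus the `messages` list and `get_current_temperature` tool from this example:

```python
# Re-render the conversation, which now contains the tool call and the tool result
inputs = tokenizer.apply_chat_template(
    messages, tools=[get_current_temperature], add_generation_prompt=True,
    return_dict=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```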