---
language: en
tags:
  - llama
  - llama-3.2
  - function-calling
  - instruction-tuning
  - conversational
license: llama3.2
---

# NeuralTau Functions 3B v1

This is the full-precision release of the model, fine-tuned on the complete dataset. The model is trained to understand and follow complex instructions, provide detailed explanations, and perform function-like operations in a conversational manner.

## Model Variants Available

- 16-bit full model
- GGUF Q4_K_M quantized version (recommended for most use cases)
- GGUF Q8_0 quantized version (higher quality, larger size)

## Training Details

- Base Model: unsloth/Llama-3.2-3B-Instruct
- Training Dataset: [0xroyce/NeuralTau-With-Functions-chat](https://huggingface.co/datasets/0xroyce/NeuralTau-With-Functions-chat)

## Usage

The model follows the Llama 3.2 chat format. You can interact with it using:

```python
messages = [
    {"role": "user", "content": "Your instruction or question here"},
]
```

Function calling example:

```
>>> how do i do a function for weather?
use {"arguments": {"location": "Los Angeles", "time_period": "current"}, "name": "get_weather_data"}
```

A complete, runnable inference sketch is included at the end of this card.

## Model Capabilities

- Understanding and following complex instructions
- Providing detailed explanations and analysis
- Breaking down complex topics into understandable components
- Function-like operations and systematic problem-solving
- Maintaining context in multi-turn conversations
- Generating clear and structured responses

## License

This model is derived from Llama 3.2 and is subject to the Llama 3.2 Community License.
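
## Extended Usage Example

The Usage section above only shows the message format, so here is a minimal end-to-end inference sketch using the `transformers` library. The repository id, dtype, and generation settings below are assumptions (they are not stated on this card) and should be adjusted to the actual published weights.

```python
# Minimal inference sketch for the 16-bit model via Hugging Face transformers.
# NOTE: the repo id below is a placeholder -- replace it with the published model id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "0xroyce/NeuralTau-Functions-3B"  # placeholder id (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; use float16/float32 if preferred
    device_map="auto",
)

messages = [
    {"role": "user", "content": "how do i do a function for weather?"},
]

# apply_chat_template builds the Llama 3.2 prompt from the message list
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the fine-tuning behaves as described in the function calling example above, the response should contain a JSON object such as `{"arguments": {"location": "Los Angeles", "time_period": "current"}, "name": "get_weather_data"}`, which can then be parsed (for example with `json.loads`) and dispatched to the corresponding tool.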