---
language: en
tags:
- llama
- llama-3.2
- function-calling
- instruction-tuning
- conversational
license: llama2
---
# NeuralTau Functions v1
This is the full version of the model, fine-tuned on the complete dataset.
The model is trained to understand and follow complex instructions, provide detailed explanations, and perform function-like operations in a conversational manner.
## Model Variants Available
- 16-bit full model
- GGUF Q4_K_M quantized version (recommended for most use cases; see the loading sketch below)
- GGUF Q8_0 quantized version (higher quality, larger size)
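The GGUF variants can be loaded with llama.cpp-compatible tooling. Below is a minimal sketch using llama-cpp-python; the file name, context size, and GPU settings are placeholders, not values shipped with this model.

```python
# Minimal sketch: loading a GGUF variant with llama-cpp-python.
# The model_path below is a placeholder -- point it at the actual
# Q4_K_M or Q8_0 file from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="neuraltau-functions-v1.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window, adjust as needed
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a hash map is."}]
)
print(response["choices"][0]["message"]["content"])
```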
## Training Details
- Base Model: unsloth/Llama-3.2-3B-Instruct
- Training Dataset: 0xroyce/NeuralTau-With-Functions-chat
- Training Method: LoRA fine-tuning with r=16 (see the sketch below)
- Library Used: Unsloth
- Parameters: 3 billion
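For reference, a minimal sketch of the kind of Unsloth LoRA setup described above is shown here. Only the base model, dataset name, and r=16 come from this card; the remaining hyperparameters (sequence length, quantization, target modules, alpha) are illustrative assumptions.

```python
# Sketch of an Unsloth LoRA fine-tuning setup; hyperparameters other than
# r=16, the base model, and the dataset are assumptions, not the card's values.
from unsloth import FastLanguageModel
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=2048,   # assumption
    load_in_4bit=True,     # assumption
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank stated in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # common choice, assumption
    lora_alpha=16,    # assumption
    lora_dropout=0,
    bias="none",
)

dataset = load_dataset("0xroyce/NeuralTau-With-Functions-chat", split="train")
# Training would then proceed with a supervised fine-tuning trainer such as trl's SFTTrainer.
```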
## Usage
The model follows the chat format of its Llama 3.2 base model. You can interact with it by passing a list of chat messages:
```python
messages = [
    {"role": "user", "content": "Your instruction or question here"},
]
```
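For a complete generation call with the transformers library, a sketch like the following can be used; the repository id, dtype, and generation settings are assumptions, not values from this card.

```python
# Minimal sketch of running the model with Hugging Face transformers.
# "REPO_ID" is a placeholder -- replace it with this model's actual repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "REPO_ID"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Explain what a binary search does, step by step."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```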
## Model Capabilities
- Understanding and following complex instructions
- Providing detailed explanations and analysis
- Breaking down complex topics into understandable components
- Function-like operations and systematic problem-solving
- Maintaining context in multi-turn conversations (see the sketch below)
- Generating clear and structured responses
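Multi-turn context is carried simply by appending earlier turns to the messages list, as in this sketch (the assistant content is an illustrative placeholder, not actual model output):

```python
# Sketch of a multi-turn exchange; the assistant turn is a placeholder.
messages = [
    {"role": "user", "content": "Break down how a REST API handles a request."},
    {"role": "assistant", "content": "At a high level: routing, validation, handler logic, and response serialization."},
    {"role": "user", "content": "Expand on the validation step."},  # refers back to the previous turn
]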
## License
This model is subject to the Llama 2 license.