---
language:
- en
license: llama3
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- groq
- tool-use
- function-calling
---

# Llama-3-Groq-Synth-8B-Tool-Use

This is the 8B parameter version of the Llama 3 Groq Tool Use model, specifically designed for advanced tool use and function calling tasks.

## Model Details

- **Model Type:** Causal language model fine-tuned for tool use
- **Language(s):** English
- **License:** Meta Llama 3 Community License
- **Model Architecture:** Optimized transformer
- **Training Approach:** Full fine-tuning and Direct Preference Optimization (DPO) on the Llama 3 8B base model
- **Input:** Text
- **Output:** Text, with enhanced capabilities for tool use and function calling

## Performance

- **Berkeley Function Calling Leaderboard (BFCL) Score:** 89.06% overall accuracy
- This score represents the best performance among all open-source 8B LLMs on the BFCL

## Usage and Limitations

This model is designed for research and development in tool use and function calling scenarios. It excels at tasks involving API interactions, structured data manipulation, and complex tool use. However, users should note:

- For general knowledge or open-ended tasks, a general-purpose language model may be more suitable
- The model may still produce inaccurate or biased content in some cases
- Users are responsible for implementing appropriate safety measures for their specific use case

Note that the model is quite sensitive to the `temperature` and `top_p` sampling configuration. Start at `temperature=0.5, top_p=0.65` and adjust up or down as needed.

## Ethical Considerations

While fine-tuned for tool use, this model inherits the ethical considerations of the base Llama 3 model. Use it responsibly and implement additional safeguards as needed for your application.
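As a minimal sketch of how a function-calling request might be assembled, the snippet below defines a hypothetical `get_current_weather` tool in the JSON-schema style commonly used for function calling, together with the recommended starting sampling configuration. The tool name, schema, and the commented `apply_chat_template` usage are illustrative assumptions; the exact prompt format the model expects is defined by its chat template on Hugging Face.

```python
import json

# Recommended starting point from this model card; adjust up or down as needed.
SAMPLING_CONFIG = {"temperature": 0.5, "top_p": 0.65}

# A hypothetical weather tool, described in the JSON-schema style
# commonly used for function calling.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}

messages = [
    {"role": "user", "content": "What is the weather like in San Francisco?"}
]

# With transformers installed and the model downloaded, the conversation and
# tool list could be rendered through the model's chat template, e.g.:
#
#   from transformers import AutoTokenizer
#   tokenizer = AutoTokenizer.from_pretrained("Groq/Llama-3-Groq-Synth-8B-Tool-Use")
#   prompt = tokenizer.apply_chat_template(
#       messages, tools=[get_weather_tool],
#       add_generation_prompt=True, tokenize=False,
#   )

# The request is plain JSON, so it can be serialized for any client.
payload = json.dumps(
    {"messages": messages, "tools": [get_weather_tool], **SAMPLING_CONFIG}
)
```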
## Availability

The model is available through:

- [Groq API console](https://console.groq.com)
- [Hugging Face](https://huggingface.co/Groq/Llama-3-Groq-Synth-8B-Tool-Use)

For full details on responsible use, ethical considerations, and the latest benchmarks, please refer to the [official Llama 3 documentation](https://llama.meta.com/) and the Groq model card.
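On the client side, a tool-use model's responses that invoke a function are typically emitted in a structured, tag-delimited format. The `<tool_call>` markers below are an assumption based on common conventions for this model family; check the model's chat template for the exact format it was trained on. A minimal parsing sketch:

```python
import json
import re


def extract_tool_calls(text: str) -> list[dict]:
    """Pull JSON tool-call payloads out of tag-delimited model output.

    Assumes calls are wrapped in <tool_call>...</tool_call> markers;
    verify against the model's chat template before relying on this.
    """
    calls = []
    for match in re.findall(
        r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.DOTALL
    ):
        try:
            calls.append(json.loads(match))
        except json.JSONDecodeError:
            pass  # skip malformed payloads rather than crash the caller
    return calls


# Example response text in the assumed format.
sample = (
    "Let me check that for you. <tool_call>\n"
    '{"name": "get_current_weather", "arguments": {"location": "San Francisco, CA"}}\n'
    "</tool_call>"
)
tool_calls = extract_tool_calls(sample)
```

Parsed calls can then be dispatched to the matching function and the result fed back to the model as a tool message.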