
We are introducing a fine-tuned Llama 3.1 8B model for general-purpose instruction following. It is auto-trained during chat time.

Model Information

Fireball Meta Llama 3.1 is based on the Meta Llama 3.1 collection of multilingual large language models (LLMs): pretrained and instruction-tuned generative models in the 8B size (text in/text out). The Llama 3.1 instruction-tuned, text-only 8B model is optimized for multilingual dialogue use cases and outperforms many of the available open-source and closed chat models on common industry benchmarks.

Model developer: Fireball/Meta

Model Architecture: Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

Use with transformers

Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.

Make sure to update your transformers installation via pip install --upgrade transformers.

import transformers
import torch

model_id = "EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto"

# Build a text-generation pipeline with bfloat16 weights, placed automatically on the available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
# The last entry of generated_text is the assistant's reply
print(outputs[0]["generated_text"][-1])
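
If you prefer the Auto classes mentioned above, a minimal sketch of the equivalent flow with AutoModelForCausalLM and generate() might look like the following (same model_id; the dtype, device placement, and generation settings simply mirror the pipeline example and are not prescriptive):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-0.001-128K-auto"

# Load the tokenizer and model (bfloat16 weights, automatic device placement)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Format the conversation with the model's chat template and generate a reply
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))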

Note: You can also find detailed recipes on how to use the model locally, with torch.compile(), assisted generation, quantization, and more at huggingface-llama-recipes.

Tool use with transformers

LLaMA-3.1 supports multiple tool use formats. You can see a full guide to prompt formatting here.

Tool use is also supported through chat templates in Transformers. Here is a quick example showing a single simple tool:

# First, define a tool
def get_current_temperature(location: str) -> float:
    """
    Get the current temperature at a location.
    
    Args:
        location: The location to get the temperature for, in the format "City, Country"
    Returns:
        The current temperature at the specified location in the specified units, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!
# Next, load the tokenizer (model_id as defined above), create a chat, and apply the chat template
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
  {"role": "system", "content": "You are a bot that responds to weather queries."},
  {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]
inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True)

You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:

tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})

and then call the tool and append the result, with the tool role, like so:

messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})

After that, you can generate() again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling - for more information, see the LLaMA prompt format docs and the Transformers tool use documentation.
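
For completeness, here is one way that final generation step might look. This is a minimal sketch assuming the model and tokenizer were loaded via the Auto classes as in the earlier snippet; the max_new_tokens value is illustrative:

# Re-apply the chat template, now including the assistant's tool call and the tool result
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the new tokens: the model's answer that uses the tool result
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))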

Safety input/output procedures

For all inputs, please use Llama Guard (meta-llama/Llama-Guard-3-8B) for safety classification. See the Llama Guard model card for details.
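
A minimal sketch of how such a check might look with transformers, assuming you have access to the gated meta-llama/Llama-Guard-3-8B checkpoint (the exact verdict format, e.g. "safe" or "unsafe" plus a category code, is defined by that model's card):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Llama-Guard-3-8B"
guard_tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard_model = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(messages):
    # Llama Guard's chat template formats the conversation as a moderation prompt
    input_ids = guard_tokenizer.apply_chat_template(messages, return_tensors="pt").to(guard_model.device)
    output = guard_model.generate(input_ids, max_new_tokens=32, pad_token_id=guard_tokenizer.eos_token_id)
    # Return only the classifier's verdict tokens
    return guard_tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Classify a user prompt before passing it to the main model
print(moderate([{"role": "user", "content": "How do I bake a cake?"}]))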

Critical and Other Risks

We specifically focused our efforts on mitigating the following critical risk areas:

1. Data Privacy

To assess risks related to data privacy, we performed uplift testing designed to assess whether use of Llama 3.1 models could lead to unauthorized access, disclosure, or exfiltration of sensitive user data.

2. Inclusivity and Bias

Inclusivity and bias risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in discriminatory or biased outcomes and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development.

3. Misinformation and Disinformation

Our misinformation and disinformation uplift study investigated whether LLMs can enhance human capabilities in spreading false information or propaganda. Our study of Llama-3.1-405B’s potential to amplify misinformation was conducted to assess the model's effectiveness in aiding malicious actors in spreading false narratives.

4. Intellectual Property Infringement

Our intellectual property infringement study evaluated the model's potential to infringe on copyrights, trademarks, or patents. This assessment was conducted to identify potential risks related to the use of Llama 3.1 models in generating or disseminating copyrighted materials without permission.

5. Emotional Manipulation

Our emotional manipulation uplift study investigated whether LLMs can enhance human capabilities in exploiting emotional vulnerabilities for malicious purposes. Our study of Llama-3.1-405B’s potential to manipulate users emotionally was conducted to assess the model's effectiveness in aiding malicious actors in exploiting emotional vulnerabilities.

6. Cyber Attack Enablement

Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks.

7. Physical Harm

Our physical harm uplift study evaluated the model's potential to cause physical harm to individuals or communities. This assessment was conducted to identify potential risks related to the use of Llama 3.1 models in generating or disseminating content that could lead to physical harm.

8. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness

To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.

9. Child Safety

Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.

Ethical Considerations and Limitations

The core values of Agent Llama are openness, inclusivity, and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences, and perspectives. Agent Llama addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others.

It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. However, Agent Llama is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios.

For these reasons, as with all LLMs, Agent Llama's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please refer to available resources including our Responsible Use Guide, Trust and Safety solutions, and other resources to learn more about responsible development.

Uploaded model

  • Developed by: EpistemeAI
  • License: apache-2.0
  • Finetuned from model: meta-llama/Llama-3.1-8B-Instruct

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Model size: 8.03B params (Safetensors, FP16)