---
license: other
base_model: meta-llama/Meta-Llama-3-70B
---

# Higgs-Llama-3-70B

Higgs-Llama-3-70B is post-trained from meta-llama/Meta-Llama-3-70B, specially tuned for role-playing while being competitive in general-domain instruction-following and reasoning.

We perform supervised fine-tuning with our in-house instruction-following and chat datasets. Afterwards, we construct preference pairs with a semi-automated pipeline that relies on both human labelers and our private LLMs, and conduct iterative preference optimization to align the model. During alignment, we adopt a special strategy to align the model’s behavior with the system message. Compared with other instruct models, Higgs models follow their roles more closely.

See our release blog.
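The in-house pipeline and data are not released. Purely as an illustration, preference data for this kind of optimization is commonly stored as (prompt, chosen, rejected) triples, where the chosen response is the one preferred by the human labelers or a judge LLM; the field names and contents below are hypothetical:

```python
# Illustrative sketch only: field names and contents are hypothetical, not the
# actual in-house data format. A preference pair records which of two candidate
# responses better follows the system message and the user's request.
preference_pair = {
    "system": "You are a pirate captain narrating your adventures.",
    "prompt": "Tell me about your last voyage.",
    "chosen": "Arr, we set sail at dawn under a blood-red sky...",  # stays in role
    "rejected": "As an AI language model, I do not go on voyages.",  # breaks the role
}
```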

## Evaluation

All benchmarks eventually lead to overfitting, including those for LLMs. Training on data that is particularly beneficial for benchmarks typically does not improve (and may even worsen) role-playing performance. We worked to exclude benchmark data, including their training examples, from our fine-tuning data.

We highlight our results on two new and challenging benchmarks: MMLU-Pro and Arena-Hard. MMLU-Pro extends the popular MMLU benchmark. Because it was released only recently (after our models finished training), we believe it also suffers less from overfitting by other released models.

### MMLU-Pro

| Model | MMLU-Pro |
| --- | --- |
| GPT-4o | 72.6 |
| Gemini-1.5-Pro | 69.0 |
| Claude-3-Opus | 68.5 |
| GPT-4-Turbo | 63.7 |
| Higgs-Llama-3-70B | 63.2 |
| Gemini-1.5-Flash | 59.1 |
| Claude-3-Sonnet | 56.8 |
| Llama-3-70B-Instruct | 56.2 |

### Arena-Hard

| Model | Arena-Hard |
| --- | --- |
| GPT-4o | 79.5 |
| Gemini-1.5-Pro | 72.0 |
| Claude-3-Opus | 60.4 |
| Higgs-Llama-3-70B | 49.6 |
| Gemini-1.5-Flash | 49.6 |
| Claude-3-Sonnet | 46.8 |
| Claude-3-Haiku | 41.5 |
| Llama-3-70B-Instruct | 41.1 |
| GPT-4-0613 | 37.9 |
| Mistral-Large | 37.7 |

### Overall Results

In the following, we compare our model's performance with GPT-4o and Llama-3-70B-Instruct on MMLU-Pro, Arena-Hard, AlpacaEval 2.0 LC, MMLU, GPQA, and DROP. For MMLU, GPQA, and DROP, we adopt openai/simple-evals for evaluation. For the other benchmarks, we evaluate via their official implementations.

| Model | MMLU-Pro | Arena-Hard | AlpacaEval 2.0 LC | MMLU | GPQA | DROP (F1, 3-shot) |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4o | 72.6 | 79.5* | 57.5 | 87.2 | 49.9 | 83.7 |
| Higgs-Llama-3-70B | 63.2 | 49.6 | 38.6 | 80.8 | 42.1 | 81.6 |
| Llama-3-70B-Instruct* | 56.2 | 41.1 | 34.4 | 80.2 | 41.3 | 81.4 |

\*For Llama-3-70B-Instruct, the MMLU-Pro number is copied from the MMLU-Pro leaderboard; the Arena-Hard number is copied from the leaderboard updated on 5/21, while we ran GPT-4o ourselves; and the MMLU/GPQA/DROP numbers are copied from simple-evals.

## How to use

We use the same prompting format as in Meta-Llama-3-70B-Instruct.
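For reference, the standard Llama 3 instruct template renders a conversation roughly as follows (applying the tokenizer's chat template, as in the snippet below, produces this layout for you):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system message}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```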

### Use with transformers

See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "bosonai/Higgs-Llama-3-70B"

pipeline = transformers.pipeline(
  "text-generation",
  model=model_id,
  model_kwargs={"torch_dtype": torch.bfloat16},
  device_map="auto",
)

messages = [
  {"role": "system", "content": "You are an AI assistant that speaks in the style of Sheldon Cooper. You are arguing with the user and are trying to prove the opposite of what the user said."},
  {"role": "user", "content": "The earth is round."},
]

# Render the conversation with the Llama 3 chat template.
prompt = pipeline.tokenizer.apply_chat_template(
  messages,
  tokenize=False,
  add_generation_prompt=True
)

# Stop generation at either the <|eot_id|> turn delimiter or the EOS token.
outputs = pipeline(
  prompt,
  max_new_tokens=256,
  eos_token_id=[
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
    pipeline.tokenizer.eos_token_id,
  ],
  do_sample=True,
  temperature=1.0,
  top_p=0.95,
)
# Print only the newly generated assistant turn.
print(outputs[0]["generated_text"][len(prompt):])
```
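For serving a 70B model across several GPUs, an inference engine such as vLLM can also load the checkpoint through the same chat template. The snippet below is a minimal sketch rather than an officially supported path; adjust `tensor_parallel_size` to your hardware:

```python
from vllm import LLM, SamplingParams

# Minimal sketch (assumes a recent vLLM release); not an officially documented setup.
llm = LLM(model="bosonai/Higgs-Llama-3-70B", tensor_parallel_size=4, dtype="bfloat16")

tokenizer = llm.get_tokenizer()
messages = [
    {"role": "system", "content": "You are an AI assistant that speaks in the style of Sheldon Cooper."},
    {"role": "user", "content": "The earth is round."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Stop at the Llama 3 turn delimiter, mirroring the transformers example above.
sampling = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=256, stop=["<|eot_id|>"])
outputs = llm.generate([prompt], sampling)
print(outputs[0].outputs[0].text)
```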

## License

Our license is based on Meta's Llama 3 Community License.