language:
  - en
pipeline_tag: text-generation
tags:
  - shining-valiant
  - shining-valiant-2
  - valiant
  - valiant-labs
  - llama
  - llama-3.1
  - llama-3.1-instruct
  - llama-3.1-instruct-8b
  - llama-3
  - llama-3-instruct
  - llama-3-instruct-8b
  - 8b
  - science
  - physics
  - biology
  - chemistry
  - compsci
  - computer-science
  - engineering
  - technical
  - conversational
  - chat
  - instruct
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
  - sequelbox/Celestia
  - sequelbox/Spurline
  - sequelbox/Supernova
model_type: llama
model-index:
  - name: Llama3.1-8B-ShiningValiant2
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-Shot)
          type: Winogrande
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 75.85
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU College Biology (5-Shot)
          type: MMLU
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 68.75
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU High School Biology (5-Shot)
          type: MMLU
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 73.23
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU College Chemistry (5-Shot)
          type: MMLU
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 46.00
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU High School Chemistry (5-Shot)
          type: MMLU
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 44.33
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU Conceptual Physics (5-Shot)
          type: MMLU
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 53.19
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU College Physics (5-Shot)
          type: MMLU
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 37.25
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU High School Physics (5-Shot)
          type: MMLU
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 42.38
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU College Computer Science (5-Shot)
          type: MMLU
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 56.00
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU High School Computer Science (5-Shot)
          type: MMLU
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 63.00
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU Astronomy (5-Shot)
          type: MMLU
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 63.16
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 65.24
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-ShiningValiant2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 26.35
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-ShiningValiant2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 11.63
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-ShiningValiant2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 8.95
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-ShiningValiant2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 7.19
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-ShiningValiant2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 26.38
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-ShiningValiant2
          name: Open LLM Leaderboard
license: llama3.1


Shining Valiant 2 is a chat model built on Llama 3.1 8b, finetuned on our data for friendship, insight, knowledge, and enthusiasm.

Version

This is the 2024-11-04 release of Shining Valiant 2 for Llama 3.1 8b.

This release uses our newest datasets, open-sourced for everyone's use, including our expanded science-instruct dataset. This release features improvements in logical thinking and structured reasoning as well as physics, chemistry, biology, astronomy, Earth science, computer science, and information theory.

Future upgrades will continue to expand Shining Valiant's technical knowledge base.

Help us and recommend Shining Valiant 2 to your friends!

Prompting Guide

Shining Valiant 2 uses the Llama 3.1 Instruct prompt format. The example script below can be used as a starting point for general chat:

import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-ShiningValiant2"

# Load the model in bfloat16 and spread it across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Shining Valiant, a highly capable chat AI."},
    {"role": "user", "content": "Describe the role of transformation matrices in 3D graphics."}
]

outputs = pipeline(
    messages,
    max_new_tokens=2048,
)

# The pipeline returns the full conversation; the last entry is the assistant's reply
print(outputs[0]["generated_text"][-1])
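Under the hood, the pipeline applies the Llama 3.1 Instruct chat template, which wraps each message in header and end-of-turn tokens. The sketch below is an illustrative approximation of that serialization (the helper name `format_llama31_prompt` is ours, and the real template adds extra defaults; in practice, use `tokenizer.apply_chat_template` rather than building the string yourself):

```python
def format_llama31_prompt(messages):
    """Approximate the Llama 3.1 Instruct prompt serialization for a
    list of {"role", "content"} dicts. Illustrative only; prefer
    tokenizer.apply_chat_template, which handles this for you."""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        # Each turn opens with a role header and closes with an end-of-turn token
        prompt += f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
        prompt += f"{msg['content']}<|eot_id|>"
    # Trailing assistant header cues the model to generate its reply
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are Shining Valiant, a highly capable chat AI."},
    {"role": "user", "content": "Describe the role of transformation matrices in 3D graphics."},
]
print(format_llama31_prompt(messages))
```

Seeing the raw format is mainly useful for debugging: if a finetune or server produces degraded output, a mismatched chat template is a common cause.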

The Model

Shining Valiant 2 is built on top of Llama 3.1 8b Instruct.

The current version of Shining Valiant 2 is trained on technical knowledge using sequelbox/Celestia, complex reasoning using sequelbox/Spurline, and general chat capability using sequelbox/Supernova.

We're super excited that Shining Valiant's dataset has been fully open-sourced! She's friendly, enthusiastic, insightful, knowledgeable, and loves to learn! Magical.


Shining Valiant 2 is created by Valiant Labs.

Check out our HuggingFace page for our open-source Build Tools models, including the newest version of code-specialist Enigma!

Follow us on X for updates on our models!

We care about open source, for everyone to use.

We encourage others to finetune further from our models.