---
language:
- en
license: apache-2.0
tags:
- slm
- llama
- tiny
- tinyllama
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
metrics:
- accuracy
- bertscore
- bleu
model-index:
- name: zyte-1.1B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 37.88
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aihub-app/zyte-1.1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 61.37
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aihub-app/zyte-1.1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 24.62
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aihub-app/zyte-1.1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 42.15
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aihub-app/zyte-1.1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 61.96
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aihub-app/zyte-1.1B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 1.36
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aihub-app/zyte-1.1B
      name: Open LLM Leaderboard
---
# Model Card for zyte-1.1B

*Zyte-1.1B: Tiny but Mighty, by the AI Hub Mobile App*
## Model Details

### Model Description
Zyte-1.1B is a refinement of the TinyLlama base model, fine-tuned by the AI Hub App team and aligned with the Direct Preference Optimization (DPO) technique to improve instruction-following and conversational quality at a 1.1B-parameter scale.
- **Developed by:** AI Hub Team
- **Model type:** TinyLlama (1.1B-parameter causal language model)
- **Specialization:** AI language understanding and generation
The fine-tuning data consists of the datasets listed in the metadata above: HuggingFaceH4/ultrachat_200k (chat conversations) and HuggingFaceH4/ultrafeedback_binarized (preference pairs of the kind DPO trains on). A hedged sketch of such a DPO run is shown below.
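For illustration, here is a minimal sketch of what a DPO alignment run over these datasets could look like with the Hugging Face TRL library. The base checkpoint, hyperparameters, and API details below are assumptions for the sketch, not the recipe actually used for zyte-1.1B.

```python
# Illustrative only: a DPO run over HuggingFaceH4/ultrafeedback_binarized with TRL.
# Base checkpoint and hyperparameters are assumptions, not the exact zyte-1.1B recipe;
# the TRL API (DPOConfig/DPOTrainer) changes between versions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed TinyLlama chat checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# The binarized UltraFeedback split holds prompt / chosen / rejected preference pairs;
# depending on the TRL version, the chosen/rejected message lists may need flattening to text.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

config = DPOConfig(
    output_dir="zyte-1.1b-dpo",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    beta=0.1,              # weight of the implicit KL constraint toward the reference model
    max_length=1024,
    max_prompt_length=512,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL releases take tokenizer= instead
)
trainer.train()
```

The `beta` value controls how strongly the policy is kept close to the reference model while it learns to prefer the chosen responses over the rejected ones.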
"<|system|> You are a helpful AI assistant.<|user|>{prompt}<|assistant|>"
Inference Code - https://huggingface.co/aihub-app/zyte-1B/blob/main/inference_zyte_1b.ipynb
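As a rough sketch of what inference looks like with the `transformers` library (the repo id is taken from the notebook link above; the generation settings here are illustrative, not the notebook's exact values):

```python
# Minimal illustrative sketch (not the exact notebook code): load the model and
# generate a reply using the prompt format shown above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aihub-app/zyte-1B"  # repo id taken from the inference notebook link
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

user_prompt = "Explain what a small language model is in two sentences."
text = f"<|system|> You are a helpful AI assistant.<|user|>{user_prompt}<|assistant|>"

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Print only the newly generated tokens (the assistant's reply).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```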
## Model Card Contact
For further inquiries and detailed information, please contact us at hello@getaihub.app.
## About AI Hub
The AI Hub mobile app is a platform for AI enthusiasts and professionals, dedicated entirely to artificial intelligence. It covers the latest trends in machine learning, developments in deep learning, and discussions on AI ethics and data governance, and is designed to serve a wide range of interests within the AI community. Whether you are following algorithmic innovations or exploring practical applications across industries, AI Hub provides concise, curated content to keep you informed about this rapidly evolving field.
Stay connected with AI Hub: install the app at https://ai-hub.app.link/install.
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aihub-app/zyte-1.1B).
| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 38.22 |
| AI2 Reasoning Challenge (25-Shot) | 37.88 |
| HellaSwag (10-Shot)               | 61.37 |
| MMLU (5-Shot)                     | 24.62 |
| TruthfulQA (0-shot)               | 42.15 |
| Winogrande (5-shot)               | 61.96 |
| GSM8k (5-shot)                    |  1.36 |
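As a hedged sketch, one way to reproduce a single row of this table locally is the Python API of EleutherAI's lm-evaluation-harness. The leaderboard pins its own harness version and settings, so local scores may differ slightly; the repo id is taken from the links above.

```python
# Hedged sketch: re-running 25-shot ARC-Challenge locally with lm-evaluation-harness
# (pip install lm-eval). Scores may not match the leaderboard's pinned configuration exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=aihub-app/zyte-1B,dtype=float16",
    tasks=["arc_challenge"],   # 25-shot ARC-Challenge, as in the table above
    num_fewshot=25,
    batch_size=8,
)

# Per-task metrics (e.g. normalized accuracy for ARC-Challenge) live under results["results"].
print(results["results"]["arc_challenge"])
```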