
Aika-7B


Aika is a language model built with the DARE TIES merge method, using mitultiwari/mistral-7B-instruct-dpo as the base. Aika is designed to interact with users in a way that feels natural and human-like, to solve problems and answer questions with a high degree of accuracy and truthfulness, and to engage in creative and logical tasks with proficiency.
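
Below is a minimal usage sketch with Hugging Face transformers. The repository id sethuiyer/Aika-7B is taken from this card; the Mistral-style prompt format and the generation settings are illustrative assumptions rather than an official recommendation.

```python
# Minimal usage sketch for Aika-7B with Hugging Face transformers.
# The [INST] prompt format and sampling settings below are assumptions, not an official recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sethuiyer/Aika-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bfloat16 is a common choice for Mistral-7B derivatives
    device_map="auto",
)

prompt = "[INST] Introduce yourself in two sentences. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```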

Models Merged

The following models were included in the merge:

  • Silicon-Maid-7B
  • Samantha-V2
  • Stealth-V1.3
  • WestLake-7B-V2

The base model, mitultiwari/mistral-7B-instruct-dpo, is Mistral-7B-v0.1 fine-tuned on Anthropic/hh-rlhf.

Why?

  • Base model tuned on the Anthropic RLHF dataset: a safety-aligned foundation that balances the uncensored model below.
  • Silicon-Maid-7B: Boasts excellent multi-turn conversational skills and logical coherence, ensuring smooth interactions.
  • Samantha-V2: Offers empathy and human-like responses, equipped with programmed "self-awareness" for a more personalized experience.
  • Stealth-V1.3: Known for enhancing performance in merges when integrated as a component, optimizing Aika's functionality.
  • WestLake-7B-V2: Sets a high benchmark for emotional intelligence (EQ) and excels in creative writing, enhancing Aika's ability to understand and respond to your needs.

Combine them all, and you get Aika - a considerate, personal digital assistant.

Configuration

Please check mergekit_config.yml for the merge config.
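
For readers unfamiliar with mergekit, the sketch below shows what a DARE TIES recipe typically looks like. mergekit_config.yml in this repository is the authoritative recipe; the weights, densities, and Hub paths below are placeholders, not the values actually used for Aika.

```yaml
# Illustrative DARE TIES recipe only - see mergekit_config.yml for the real configuration.
# Weights, densities, and full Hub paths are placeholders.
merge_method: dare_ties
base_model: mitultiwari/mistral-7B-instruct-dpo
models:
  - model: Silicon-Maid-7B      # placeholder path
    parameters: {weight: 0.25, density: 0.5}
  - model: Samantha-V2          # placeholder path
    parameters: {weight: 0.25, density: 0.5}
  - model: Stealth-V1.3         # placeholder path
    parameters: {weight: 0.25, density: 0.5}
  - model: WestLake-7B-V2       # placeholder path
    parameters: {weight: 0.25, density: 0.5}
dtype: bfloat16
```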

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 59.25 |
| AI2 Reasoning Challenge (25-shot) | 65.36 |
| HellaSwag (10-shot)               | 81.49 |
| MMLU (5-shot)                     | 53.91 |
| TruthfulQA (0-shot)               | 51.22 |
| Winogrande (5-shot)               | 77.74 |
| GSM8K (5-shot)                    | 25.78 |
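
The numbers above come from the Hugging Face Open LLM Leaderboard harness. As a rough local sanity check (not an exact reproduction, since the leaderboard pins its own harness version and per-task settings), a single task can be evaluated with EleutherAI's lm-evaluation-harness, for example:

```python
# Rough local check of one leaderboard task with lm-evaluation-harness (pip install lm-eval).
# Assumes the 0.4.x Python API; leaderboard settings differ slightly, so scores may not match exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=sethuiyer/Aika-7B,dtype=bfloat16",
    tasks=["arc_challenge"],  # ARC is scored 25-shot on the leaderboard
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```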