# BigWeave v9 90B

The BigWeave models aim to identify merge settings that equal or surpass the performance of Goliath-120b. The version number merely tracks the various attempts and is not a quality indicator; only merges that demonstrate good performance are retained and shared.

This version is a DARE-TIES merge of two passthrough merges: Xwin-LM-70b-v0.1 + Euryale-1.3-70b (BigWeave v6) and Platypus2-70b-instruct + WinterGoddess-1.4x-70b (BigWeave v8). Both models individually show strong performance, and the merged model achieves even lower perplexity than each model separately.

The 90b size allows 4-bit quants to fit into 48GB of VRAM.
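As a minimal sketch (not part of the original card), a 4-bit load via transformers with bitsandbytes could look like the following. The NF4 quantization type, float16 compute dtype, and `device_map="auto"` are assumptions to adjust for your setup; actual memory use also depends on context length.

```
# Minimal 4-bit loading sketch; assumes transformers and bitsandbytes are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "llmixer/BigWeave-v9-90b"

# NF4 4-bit quantization keeps the ~88B parameters within roughly 48GB of VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Vicuna-style prompt (see Prompting Format below).
prompt = "USER: Write a short story about a weaver. ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```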

# Prompting Format

Vicuna and Alpaca.
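For reference, the commonly used templates look roughly like this; the card does not prescribe an exact system prompt, so treat these as typical examples rather than a required format.

Vicuna:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```

Alpaca:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```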

# Merge process

The models used in the merge are Xwin-LM-70b-v0.1, Euryale-1.3-70b, Platypus2-70b-instruct and WinterGoddess-1.4x-70b.

Merge configuration:

BigWeave v6 (passthrough):
```
slices:
  - sources:
    - model: Xwin-LM/Xwin-LM-70B-V0.1
      layer_range: [0,12]
  - sources:
    - model: Sao10K/Euryale-1.3-L2-70B
      layer_range: [9,14]
  - sources:
    - model: Xwin-LM/Xwin-LM-70B-V0.1
      layer_range: [12,62]
  - sources:
    - model: Sao10K/Euryale-1.3-L2-70B
      layer_range: [54,71]
  - sources:
    - model: Xwin-LM/Xwin-LM-70B-V0.1
      layer_range: [62,80]
merge_method: passthrough
dtype: float16
```

BigWeave v8 (passthrough):
```
slices:
  - sources:
    - model: garage-bAInd/Platypus2-70B-instruct
      layer_range: [0,12]
  - sources:
    - model: Sao10K/WinterGoddess-1.4x-70B-L2
      layer_range: [9,14]
  - sources:
    - model: garage-bAInd/Platypus2-70B-instruct
      layer_range: [12,62]
  - sources:
    - model: Sao10K/WinterGoddess-1.4x-70B-L2
      layer_range: [54,71]
  - sources:
    - model: garage-bAInd/Platypus2-70B-instruct
      layer_range: [62,80]
merge_method: passthrough
dtype: float16
```

BigWeave v9 (DARE-TIES of v6 and v8):
```
models:
  - model: llmixer/BigWeave-v8-90b
    parameters:
      weight: 0.5
      density: 0.5
merge_method: dare_ties
base_model: llmixer/BigWeave-v6-90b
dtype: float16
```
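To reproduce a merge like the ones above, a configuration file can be passed to mergekit's `mergekit-yaml` entry point. The file name and output path below are placeholders, and the `--cuda` flag is optional:

```
pip install mergekit
mergekit-yaml bigweave-v9.yml ./BigWeave-v9-90b --cuda
```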


# Acknowledgements
[@Xwin-LM](https://huggingface.co/Xwin-LM) For creating Xwin

[@Sao10K](https://huggingface.co/Sao10K) For creating Euryale and WinterGoddess

[@garage-bAInd](https://huggingface.co/garage-bAInd) For creating Platypus2

[@alpindale](https://huggingface.co/alpindale) For creating the original Goliath

[@chargoddard](https://huggingface.co/chargoddard) For developing [mergekit](https://github.com/cg123/mergekit).