# Heliotrope-Ely-Swa-slerp-7B
Heliotrope-Ely-Swa-slerp-7B is a SLERP merge of the following models, created with LazyMergekit by Maxime Labonne, which is powered by MergeKit from Arcee AI:

- elyza/ELYZA-japanese-Llama-2-7b
- tokyotech-llm/Swallow-7b-hf
## Configuration
```yaml
slices:
  - sources:
      - model: elyza/ELYZA-japanese-Llama-2-7b
        layer_range: [0, 32]
      - model: tokyotech-llm/Swallow-7b-hf
        layer_range: [0, 32]
merge_method: slerp
base_model: elyza/ELYZA-japanese-Llama-2-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
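The `t` values control the interpolation ratio between the two endpoint models: gradients across the 32 layers for the self-attention and MLP weights, and an even 0.5 blend for the remaining tensors. As a rough illustration of the underlying operation (a simplified sketch, not MergeKit's actual implementation), spherical linear interpolation between two weight tensors can look like this:

```python
# Illustrative sketch of SLERP between two weight tensors (simplified, not MergeKit's code).
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Use unit-norm copies to measure the angle between the two weight directions.
    v0 = w0 / (w0.norm() + eps)
    v1 = w1 / (w1.norm() + eps)
    dot = torch.clamp((v0 * v1).sum(), -1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < 1e-4:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1 - t) * w0 + t * w1
    sin_theta = torch.sin(theta)
    # Weight each endpoint by the sine of its share of the angle.
    return (torch.sin((1 - t) * theta) / sin_theta) * w0 + (torch.sin(t * theta) / sin_theta) * w1
```

To reproduce the merge locally, the configuration above (saved, for example, as `config.yaml`) can be passed to MergeKit's `mergekit-yaml` CLI, e.g. `mergekit-yaml config.yaml ./Heliotrope-Ely-Swa-slerp-7B --copy-tokenizer`; the available flags may vary across MergeKit versions.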
## Usage with Hugging Face Transformers
```python
# !pip install -qU transformers accelerate

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

model_name = "AkimfromParis/Heliotrope-Ely-Swa-slerp-7B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Japanese prompt: "The baseball player Shohei Ohtani is..."
sequences = pipe("大谷翔平選手は", do_sample=False, max_new_tokens=100)
print(sequences[0].get("generated_text"))
```
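On GPUs with limited memory, the model can also be loaded in 4-bit via bitsandbytes. This is a minimal sketch, assuming `bitsandbytes` is installed; it is not part of the original card:

```python
# Optional: 4-bit quantized loading (assumes bitsandbytes is installed).
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, pipeline
import torch

model_name = "AkimfromParis/Heliotrope-Ely-Swa-slerp-7B"

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("大谷翔平選手は", do_sample=False, max_new_tokens=100)[0]["generated_text"])
```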
## Citation
```bibtex
@misc{goddard2024arcee,
  title={Arcee's MergeKit: A Toolkit for Merging Large Language Models},
  author={Goddard, Charles and Siriwardhana, Shamane and Ehghaghi, Malikeh and Meyers, Luke and Karpukhin, Vlad and Benedict, Brian and McQuade, Mark and Solawetz, Jacob},
  journal={arXiv preprint arXiv:2403.13257},
  year={2024}
}
```

https://arxiv.org/abs/2403.13257