
speechless-sparsetral-16x7b-MoE

speechless-sparsetral-16x7b-MoE is the MoE-upgraded version of speechless-code-mistral-7b-v1.0. The MoE fine-tuning adopts Parameter-Efficient Sparsity Crafting (PESC), an efficient fine-tuning approach that adds LoRA modules as expert models on top of a shared base, similar in spirit to multi-LoRA setups. The total model size is approximately 10B parameters (9.39B in the released bfloat16 safetensors weights).
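
As a rough illustration of how PESC-style LoRA experts compose with a frozen base layer, the sketch below shows a PyTorch MoE feed-forward block with 16 low-rank experts and top-4 routing. All class, dimension, and parameter names here are illustrative assumptions, not the actual sparsetral implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRAExpertMoE(nn.Module):
    """Illustrative sketch: a frozen shared MLP plus lightweight LoRA experts
    selected per token by a top-k router (not the real sparsetral code)."""
    def __init__(self, d_model=4096, d_ff=14336, num_experts=16, top_k=4, rank=16):
        super().__init__()
        self.top_k = top_k
        # Shared feed-forward weights inherited from the frozen base model.
        self.up = nn.Linear(d_model, d_ff, bias=False)
        self.down = nn.Linear(d_ff, d_model, bias=False)
        # Router scores every token against all experts.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Each expert is a low-rank (LoRA) adapter: d_model -> rank -> d_model.
        self.lora_a = nn.Parameter(torch.zeros(num_experts, d_model, rank))
        self.lora_b = nn.Parameter(torch.zeros(num_experts, rank, d_model))
        nn.init.normal_(self.lora_a, std=0.02)

    def forward(self, x):  # x: (batch, seq, d_model)
        shared = self.down(F.silu(self.up(x)))        # frozen shared path
        scores = self.router(x)                        # (batch, seq, num_experts)
        weights, idx = torch.topk(scores, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        expert_out = torch.zeros_like(x)
        for k in range(self.top_k):
            a = self.lora_a[idx[..., k]]               # per-token expert A matrices
            b = self.lora_b[idx[..., k]]               # per-token expert B matrices
            delta = torch.einsum("bsd,bsdr->bsr", x, a)
            delta = torch.einsum("bsr,bsrd->bsd", delta, b)
            expert_out = expert_out + weights[..., k:k+1] * delta
        return shared + expert_out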

Specifically, Mistral-7B-v0.1 is used as the base model, with 16 experts of which the top 4 are selected at inference. The fine-tuning data include codefuse-ai/Evol-Instruction-66k to enhance the model's code-generation ability. The specific datasets are as follows (a sketch of the category filtering appears after the list):

  • jondurbin/airoboros-2.2: Filtered to categories related to coding, reasoning, and planning. 23,462 samples.
  • Open-Orca/OpenOrca: Filtered to the 'cot' category of the 1M GPT4 split. 74,440 samples.
  • garage-bAInd/Open-Platypus: Used in full. 24,926 samples.
  • WizardLM/WizardLM_evol_instruct_V2_196k: Coding conversation portion. 30,185 samples.
  • TokenBender/python_eval_instruct_51k: Samples with "python" in the output. 40,309 samples.
  • Spider: 8,659 samples.
  • codefuse-ai/Evol-Instruction-66k: Used in full. 66,862 samples.
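
A minimal sketch of that kind of category filtering with the Hugging Face datasets library; the column name and category labels are assumptions for illustration, not the exact curation script used for this model.

from datasets import load_dataset

# Load the airoboros-2.2 instruction data (field names below are assumed).
ds = load_dataset("jondurbin/airoboros-2.2", split="train")

# Keep only categories relevant to coding, reasoning, and planning (hypothetical labels).
keep = {"coding", "orca", "cot", "plan"}
filtered = ds.filter(lambda ex: ex.get("category") in keep)

print(f"kept {len(filtered)} of {len(ds)} samples")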

Alpaca Prompt Format

### Instruction:
<instruction>
### Response:

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "uukuguy/speechless-sparsetral-16x7b-MoE"

# trust_remote_code is required because the sparsetral MoE layers ship as custom modeling code.
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=True).eval()

# Alpaca-style prompt; the instruction below is only an example.
system = "Below is an instruction that describes a task.\nWrite a response that appropriately completes the request."
instruction = "Write a Python function that checks whether a number is prime."
prompt = f"{system}\n\n### Instruction:\n{instruction}\n\n### Response:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
pred = model.generate(**inputs, max_length=4096, do_sample=True, top_k=50, top_p=0.99, temperature=0.9, num_return_sequences=1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
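
The decoded text includes the prompt itself; a simple (assumed) post-processing step keeps only the answer generated after the response marker.

output = tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)
# Keep only the text generated after the "### Response:" marker.
response = output.split("### Response:")[-1].strip()
print(response)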