I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
slimorca-stablelm-3b-4e1t - GGUF
- Model creator: pansophic
- Original model: slimorca-stablelm-3b-4e1t
StableLM
This is a model based on StableLM. StableLM is a family of language models by Stability AI.
Note:
Current (as of 2023-11-15) implementations of llama.cpp only support GPU offloading of up to 34 layers with these StableLM models. The model will crash immediately if -ngl is set larger than 34. Without GPU acceleration, however, the model works fine.
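As an illustration, here is a minimal sketch using the llama-cpp-python bindings; the GGUF filename below is hypothetical, so substitute whichever quantization variant you actually downloaded:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model filename is an assumption -- use the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="pansophic-slimorca-stablelm-3b-4e1t.Q6_K.gguf",  # hypothetical filename
    n_gpu_layers=34,  # do not exceed 34 (see note above); use 0 for CPU-only
    n_ctx=2048,
)

output = llm(
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHow are you?<|im_end|>\n<|im_start|>assistant\n",
    max_tokens=256,
    stop=["<|im_end|>"],
)
print(output["choices"][0]["text"])
```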
Brief
This model loads and runs, but in my testing it did not produce any useful responses.
About GGUF format
GGUF is the current file format used by the ggml library. A growing list of software supports it and can therefore use this model. The core project making use of the ggml library is the llama.cpp project by Georgi Gerganov.
Quantization variants
There are a number of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are legacy quantization types. Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
Note:
There is now an option to use K-quants even for previously 'incompatible' models, although this involves a fallback solution that makes them not true K-quants. More details can be found in the affected model descriptions. (This mainly concerns Falcon 7B and StarCoder models.)
K-quants
K-quants are designed around the idea that applying different levels of quantization to specific parts of the model can optimize performance, file size, and memory load. So, if possible, use K-quants. With a Q6_K, you will likely find it hard to discern any quality difference from the original model: ask the model the same question twice and the variation between its two answers may well be bigger than the difference introduced by quantization.
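To fetch one specific quantization variant instead of the whole repository, the huggingface_hub client can download a single file. A minimal sketch follows; the exact filename is an assumption and should be checked against the repository's file listing:

```python
# Minimal sketch: download a single quant file with huggingface_hub
# (pip install huggingface_hub). The filename is an assumption --
# check the repository's file listing for the exact name.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="maddes8cht/pansophic-slimorca-stablelm-3b-4e1t-gguf",
    filename="pansophic-slimorca-stablelm-3b-4e1t.Q6_K.gguf",  # hypothetical
)
print(path)  # local cache path of the downloaded GGUF file
```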
Original Model Card:
Model Card for slimorca-stablelm-3b-4e1t
A full finetune of Stability AI's StableLM-3B-4E1T, trained on the SlimOrca dataset. All samples longer than the context size were removed.
How to Get Started with the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# ChatML prompt template
prompt = """<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
"""

system = "You are an advanced and helpful AI assistant."
user = "How are you?"
prompt = prompt.format(system=system, user=user)

# Load the model in bfloat16 on the GPU (trust_remote_code is needed for the custom modeling code)
model = AutoModelForCausalLM.from_pretrained("pansophic/slimorca-stablelm-3b-4e1t", trust_remote_code=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("pansophic/slimorca-stablelm-3b-4e1t", trust_remote_code=True)

# Tokenize the prompt and stream the generated tokens to stdout
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to("cuda")
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, max_length=512, top_k=40, top_p=0.9, do_sample=True, temperature=0.55, use_cache=True, streamer=streamer)
```
Prompt formatting
The model uses the ChatML prompt format.
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
```
End of original model card
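As a supplement to the prompt formatting section above: if the tokenizer ships a ChatML chat template (whether this checkpoint includes one is an assumption), transformers can assemble the prompt string for you:

```python
# Minimal sketch: let transformers build the ChatML prompt.
# Assumes the tokenizer defines a chat template -- if it does not,
# format the string manually as shown in the model card above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pansophic/slimorca-stablelm-3b-4e1t", trust_remote_code=True)
messages = [
    {"role": "system", "content": "You are an advanced and helpful AI assistant."},
    {"role": "user", "content": "How are you?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the ChatML layout shown above
```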
Please consider supporting my work
Coming soon: I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I'm hoping for some support and contributions to keep these kinds of models available. Your support will enable me to provide even more valuable resources and to maintain the models you rely on. Thank you for your patience and ongoing support as I work to make this page an even more valuable resource for the community.