
ladybird-base-7B-v8

Model creator: bobofrut
Original model: ladybird-base-7B-v8
GGUF quantization: llama.cpp commit b8c1476e44cc1f3a1811613f65251cf779067636

Description

Ladybird-base-7B-v8 is based on the Mistral architecture, which is known for its efficiency and effectiveness on complex language understanding and generation tasks. The model incorporates several architectural choices that enhance its performance:

  • Grouped-Query Attention: Reduces compute and key/value-cache memory by letting several query heads share each key/value head, while maintaining model quality (see the sketch after this list).
  • Sliding-Window Attention: Restricts each token's attention to a fixed-size window of recent tokens, keeping long inputs tractable while information still propagates across layers to capture long-range dependencies.
  • Byte-fallback BPE Tokenizer: Combines Byte-Pair Encoding (BPE) with a byte-level fallback for out-of-vocabulary text, so any input can be tokenized without unknown tokens.
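
To make the first bullet concrete, here is a minimal, hypothetical grouped-query attention sketch in NumPy. It is not the model's actual implementation: the head counts and dimensions are invented for the example, and the causal mask is omitted for brevity.

import numpy as np

# Grouped-query attention sketch: several query heads share one key/value head,
# which shrinks the KV cache without changing the attention computation itself.
def grouped_query_attention(q, k, v):
    # q: (n_q_heads, seq, d); k, v: (n_kv_heads, seq, d)
    n_q_heads, seq, d = q.shape
    n_kv_heads = k.shape[0]
    group = n_q_heads // n_kv_heads          # query heads per shared KV head
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                      # KV head this query head reads from
        scores = q[h] @ k[kv].T / np.sqrt(d)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[h] = weights @ v[kv]
    return out

# Toy example: 8 query heads sharing 2 KV heads
# (Mistral-7B itself pairs 32 query heads with 8 KV heads).
q = np.random.randn(8, 16, 64)
k = np.random.randn(2, 16, 64)
v = np.random.randn(2, 16, 64)
print(grouped_query_attention(q, k, v).shape)  # (8, 16, 64)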

Prompt Template

The prompt template is ChatML.

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
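
Below is a minimal usage sketch with llama-cpp-python that applies this template via its built-in ChatML chat format. The GGUF filename is a placeholder: substitute whichever quantization you actually download.

from llama_cpp import Llama

llm = Llama(
    model_path="ladybird-base-7B-v8.Q4_K_M.gguf",  # hypothetical filename, pick your quantization
    chat_format="chatml",                          # matches the template above
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what grouped-query attention does."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])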
GGUF

Model size: 7.24B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 32-bit
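
The files can be fetched with huggingface_hub; the filename below is only a guess at the naming scheme, so list the repository files first to confirm the exact name of the quantization you want.

from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "mgonzs13/ladybird-base-7B-v8-GGUF"
print(list_repo_files(repo_id))  # confirm the available GGUF filenames

path = hf_hub_download(
    repo_id=repo_id,
    filename="ladybird-base-7B-v8.Q4_K_M.gguf",  # hypothetical filename
)
print(path)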

Inference Examples
Inference API (serverless) does not yet support llama.cpp models for this pipeline type.

Model tree for mgonzs13/ladybird-base-7B-v8-GGUF

This repository is one of 3 quantized versions of the original model.