---
base_model: Nan-Do/LeetCodeWizard_13B_v1.1a
inference: false
language:
  - en
license: llama2
model-index:
  - name: LeetCodeWizard_13B_v1.1a
    results: []
model_creator: Nan-Do
model_name: LeetCodeWizard 13B v1.1a
model_type: codellama
prompt_template: >-
  Below is an instruction that describes a task. Write a response that
  appropriately completes the request.

  ### Instruction: {instruction}

  ### Response:
quantized_by: Nan-Do
tags:
  - codellama
  - instruct
  - finetune
  - leetcode
  - problem solving
---

# LeetCodeWizard 13B v1.1a - GGUF

## Description

This repo contains GGUF format model files for LeetCodeWizard 13B v1.1a (model card template inspired by TheBloke).

## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```
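As a minimal sketch of how this template can be applied at inference time with llama-cpp-python, the snippet below fills in the Alpaca prompt and generates a completion. The GGUF filename and the sampling parameters are illustrative assumptions, not part of this repo's documentation; substitute the quantised file you actually download.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model filename below is hypothetical; point it at the GGUF file you downloaded.
from llama_cpp import Llama

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

llm = Llama(model_path="leetcodewizard_13b_v1.1a.Q4_K_M.gguf", n_ctx=4096)

prompt = PROMPT_TEMPLATE.format(
    instruction="Write a Python function that checks whether a string is a palindrome."
)

# Stop if the model starts a new "### Instruction:" turn on its own.
result = llm(prompt, max_tokens=512, stop=["### Instruction:"])
print(result["choices"][0]["text"])
```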

## Explanation of quantisation methods


The new methods available are:

* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
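As a concrete check of the figures above, the arithmetic below reproduces the 4.5 bpw number for GGML_TYPE_Q4_K. It assumes the usual llama.cpp k-quant layout, in which each 256-weight super-block also stores one fp16 scale and one fp16 min; that layout detail is an assumption here, not something stated in the list.

```python
# Back-of-the-envelope check of the GGML_TYPE_Q4_K bits-per-weight figure.
# Assumption: besides the per-block 6-bit scales and mins, each 256-weight
# super-block stores one fp16 scale and one fp16 min (llama.cpp k-quant layout).
blocks = 8               # blocks per super-block
weights = blocks * 32    # 32 weights per block -> 256 weights

quant_bits = weights * 4            # 4-bit weights            -> 1024 bits
block_meta = blocks * (6 + 6)       # 6-bit scale + 6-bit min  ->   96 bits
super_meta = 16 + 16                # fp16 scale + fp16 min    ->   32 bits

print((quant_bits + block_meta + super_meta) / weights)  # 4.5
```

The same accounting, with scales only (no mins) for the "type-0" formats, also lands on the 3.4375, 5.5 and 6.5625 bpw figures quoted above.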

Refer to the Provided Files table below to see what files use which methods, and how.

## Provided files