
Experimental model: a LimaRP LoRA trained on top of internlm2-base-20b with 8192-token context length, then merged with internlm2-chat-20b.

Prompt format is ChatML.
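For reference, a minimal ChatML prompt looks like this (the system turn is optional):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```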


internlm2-06limarp-1chat-TASK_ARITHM-20b-v0.03

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the task arithmetic merge method, with intervitens/internlm2-base-20b-llama as the base.
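Task arithmetic merges models by extracting a "task vector" from each fine-tune (its weight delta relative to the shared base) and adding a weighted sum of those vectors back onto the base. A minimal sketch of the idea in Python, assuming the state dicts are already loaded as tensors (the function name and helper are illustrative, not mergekit's API):

```python
import torch

def task_arithmetic_merge(
    base: dict[str, torch.Tensor],
    tuned: list[dict[str, torch.Tensor]],
    weights: list[float],
) -> dict[str, torch.Tensor]:
    """Per tensor: merged = base + sum_i weights[i] * (tuned[i] - base)."""
    merged = {}
    for name, base_t in base.items():
        # Weighted sum of task vectors (deltas from the base model).
        delta = sum(w * (sd[name] - base_t) for sd, w in zip(tuned, weights))
        merged[name] = base_t + delta
    return merged

# For this card: chat model at weight 1.0, LimaRP model at weight 0.6.
# merged_sd = task_arithmetic_merge(base_sd, [chat_sd, limarp_sd], [1.0, 0.6])
```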

Models Merged

The following models were included in the merge:

  • ./internlm2-limarp-20b-v0.03
  • ./internlm2-chat-20b-llama

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./internlm2-chat-20b-llama
    parameters:
      weight: 1.0
  - model: ./internlm2-limarp-20b-v0.03
    parameters:
      weight: 0.6
merge_method: task_arithmetic
base_model: ./internlm2-base-20b-llama
parameters:
  #normalize: false
  #int8_mask: true
dtype: bfloat16
```
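Saved as, e.g., config.yml, the merge can be reproduced with mergekit's command-line entry point, `mergekit-yaml config.yml ./output-model-dir` (the file and output paths here are illustrative).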

GGUF quantizations

This repository provides GGUF quantizations of the merged model (19.9B parameters, llama architecture) in 2-, 3-, 4-, 5-, 6-, and 8-bit variants.
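As a usage sketch, the GGUF files can be loaded with llama-cpp-python; the quant filename below is hypothetical, and `chat_format="chatml"` matches the prompt format above:

```python
from llama_cpp import Llama

# Hypothetical filename; pick the quant level that fits your hardware.
llm = Llama(
    model_path="internlm2-limarp-chat-20b.Q4_K_M.gguf",
    n_ctx=8192,           # model was trained and merged at 8192 context
    chat_format="chatml",
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a roleplay assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
print(out["choices"][0]["message"]["content"])
```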

