Experimental model: a LimaRP LoRA trained on top of internlm2-base-20b with 8192-token context length, then merged with internlm2-chat-20b.
Prompt format is ChatML.
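ChatML wraps each turn in `<|im_start|>role` / `<|im_end|>` markers, with the prompt ending on an open assistant turn. A minimal prompt (the system message text is only an example) looks like:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```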
# internlm2-06limarp-1chat-TASK_ARITHM-20b-v0.03
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the task arithmetic merge method, with intervitens/internlm2-base-20b-llama as the base.
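Task arithmetic treats each fine-tuned model as a "task vector" (its weights minus the base model's weights) and adds a weighted sum of those vectors back onto the base. A toy per-parameter sketch of that idea (illustrative scalar values only, not real model tensors; mergekit itself operates on full weight tensors):

```python
def task_arithmetic(base, models, weights):
    """Merge via task arithmetic: merged = base + sum_i w_i * (model_i - base).

    base:    list of floats (base model parameters)
    models:  list of same-length lists (fine-tuned model parameters)
    weights: per-model scalar weights
    """
    merged = list(base)
    for model, w in zip(models, weights):
        for i, (p, b) in enumerate(zip(model, base)):
            merged[i] += w * (p - b)
    return merged

# Toy stand-ins for the two merged models, using the weights from the config
# below (1.0 for the chat model, 0.6 for the LimaRP model).
base = [0.0, 1.0]
chat = [1.0, 1.0]
limarp = [0.0, 2.0]

print(task_arithmetic(base, [chat, limarp], [1.0, 0.6]))
```

With these toy values the chat model contributes its full delta and the LimaRP model 60% of its delta, mirroring how the 0.6 weight in the configuration scales the LimaRP contribution.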
### Models Merged
The following models were included in the merge:
- ./internlm2-limarp-20b-v0.03
- ./internlm2-chat-20b-llama
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ./internlm2-chat-20b-llama
    parameters:
      weight: 1.0
  - model: ./internlm2-limarp-20b-v0.03
    parameters:
      weight: 0.6
merge_method: task_arithmetic
base_model: ./internlm2-base-20b-llama
parameters:
  #normalize: false
  #int8_mask: true
dtype: bfloat16
```