---
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
  - sft
  - not-for-all-audiences
base_model: unsloth/llama-3-8b-bnb-4bit
datasets:
  - grimulkan/LimaRP-augmented
  - mpasila/LimaRP-augmented-8k-context
---

This is an ExLlamaV2 quantization of mpasila/Llama-3-LimaRP-8B at 4 bpw, made with the default calibration dataset.
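For reference, here is a minimal sketch of loading the quant with the exllamav2 Python API. The local directory name is a placeholder for wherever you download the 4 bpw files, and the sampler settings are illustrative rather than tuned for this model.

```python
# Minimal sketch: load an EXL2 quant and generate with the exllamav2 base generator.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Llama-3-LimaRP-8B-exl2-4bpw"  # placeholder path to the downloaded 4bpw files
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)                 # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()      # example sampler values, not a recommendation
settings.temperature = 0.8
settings.top_p = 0.9

prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
print(generator.generate_simple(prompt, settings, 200))
```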

# Original Model card:

This is a merge of mpasila/Llama-3-LimaRP-LoRA-8B.

The LoRA was trained in 4-bit with 8k context for 1 epoch, using meta-llama/Meta-Llama-3-8B as the base model.

The dataset used is a modified version of grimulkan/LimaRP-augmented.

Prompt format: ChatML
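
A ChatML prompt is laid out as below (the system turn is optional):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
{assistant reply}<|im_end|>
```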

# Uploaded model

- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.