
READY-TO-USE GGUF VERSION AVAILABLE

Overview

secretmoon/LoRA-Llama-3-MLP is a LoRA adapter for the Llama-3-8B model, primarily designed to expand the model's knowledge of the MLP:FiM (My Little Pony: Friendship is Magic) universe. This adapter is ideal for generating fan fiction, role-playing scenarios, and other creative projects. The training data includes factual content from the MLP Fandom wiki and well-regarded fan works that explore the universe in depth.

[Image: Night alicorn]

Base Model

The base model for this adapter is Sao10K/L3-8B-Stheno-v3.1, an excellent fine-tune of the original Llama-3-8B. It excels at story writing and role-playing without showing the degradation that overfitting can cause.

Training Details

  • Dataset:
    1. A cleaned copy of the MLP Fandom wiki, excluding material on recent and side projects unrelated to MLP:FiM (Alpaca format).
    2. Approximately 100 hand-picked fan stories from FiMFiction (raw text).
    3. Additional data to train the model as a personal assistant and to improve its sensitivity to user emotions (Alpaca format).
  • Training Duration: 3 hours
  • Hardware: 1 x NVIDIA RTX A6000 48GB
  • PEFT Type: LoRA 8-bit
  • Sequence Length: 6144
  • Batch Size: 2
  • Num Epochs: 3
  • Optimizer: AdamW_BNB_8bit
  • Learning Rate Scheduler: Cosine
  • Learning Rate: 0.00033
  • LoRA R: 256
  • Sample Packing: True
  • LoRA Target Linear: True (the adapter is applied to all linear layers; see the sketch after this list)
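
For orientation, these settings correspond roughly to the following PEFT configuration. This is a sketch only, not the actual Axolotl config: the alpha value is taken from the recommendations below, and the dropout value is an assumption, since it is not listed above.

from peft import LoraConfig

lora_config = LoraConfig(
    r=256,                        # LoRA R, as listed above
    lora_alpha=48,                # recommended alpha (see the section below)
    target_modules="all-linear",  # LoRA Target Linear: True
    lora_dropout=0.05,            # assumption: dropout is not stated in this card
    task_type="CAUSAL_LM",
)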

How to Use

You can apply the adapter to the original Safetensors weights of the model and load it through Transformers, or merge the adapter into the base model weights and convert the result to an f16 .gguf for use in llama.cpp, as sketched below.
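
For example, merging with PEFT looks roughly like this (a minimal sketch; the output directory name is an arbitrary choice, not part of this card):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in fp16 and apply the adapter
base = AutoModelForCausalLM.from_pretrained(
    "Sao10K/L3-8B-Stheno-v3.1", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "secretmoon/LoRA-Llama-3-MLP")

# Fold the LoRA deltas into the base weights and save a standalone model,
# ready for llama.cpp's GGUF conversion
model = model.merge_and_unload()
model.save_pretrained("./L3-8B-Stheno-MLP-merged")
AutoTokenizer.from_pretrained("Sao10K/L3-8B-Stheno-v3.1").save_pretrained("./L3-8B-Stheno-MLP-merged")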

Recommendations for LoRA Alpha

  • 16: Low influence
  • 48: Optimal value (recommended)
  • 64: High influence, significantly impacting model behavior
  • 128: Very high influence, drastically changing language model behavior (not recommended)

You can modify this parameter in the adapter's adapter_config.json file. For example, I merged the adapter into the base model using LoRA alpha = 40.
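
If you prefer to script that change rather than edit the file by hand, a small sketch (the local path is hypothetical; point it at your downloaded copy of the adapter):

import json

path = "./LoRA-Llama-3-MLP/adapter_config.json"  # hypothetical local adapter path
with open(path) as f:
    config = json.load(f)

config["lora_alpha"] = 40  # scale the adapter's influence before loading or merging
with open(path, "w") as f:
    json.dump(config, f, indent=2)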

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("Sao10K/L3-8B-Stheno-v3.1")

# Load base model in fp16 (needs at least ~15 GB of VRAM)
base_model = AutoModelForCausalLM.from_pretrained(
    "Sao10K/L3-8B-Stheno-v3.1",
    device_map="auto",
    torch_dtype=torch.float16,
)

# Apply the LoRA adapter
adapter_name = "secretmoon/LoRA-Llama-3-MLP"
model = PeftModel.from_pretrained(base_model, adapter_name)
model.eval()

# Text generation function
def generate_text(prompt, max_new_tokens=100, num_return_sequences=1):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,            # counts generated tokens only, not the prompt
        num_return_sequences=num_return_sequences,
        do_sample=True,                           # sampling, so multiple sequences differ
        no_repeat_ngram_size=2,
    )
    return [tokenizer.decode(output, skip_special_tokens=True) for output in outputs]

prompt = "Once upon a time"
generated_texts = generate_text(prompt)
for i, text in enumerate(generated_texts):
    print(f"Generated text {i+1}:\n{text}\n")

Example output:

Generated text 1:
Once upon a time, there was a young filly named Luna. She was the younger sister of a powerful princess named Celestia. Luna lived in a beautiful castle with her sister and their parents, the king and queen. The castle was surrounded by a lush, green forest, and it was always filled with the sounds of birds singing and animals playing.
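
Stheno is an instruct-tuned model, so for role-play or assistant-style prompts you will likely get better results by applying the tokenizer's chat template. A sketch, reusing the model and tokenizer from above; the system prompt and sampling settings are placeholders, not recommendations from this card:

messages = [
    {"role": "system", "content": "You are Twilight Sparkle, a studious alicorn princess."},
    {"role": "user", "content": "Tell me about the Elements of Harmony."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))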

Merge:

  1. Using Axolotl (https://github.com/OpenAccess-AI-Collective/axolotl)

    python3 -m axolotl.cli.merge_lora lora.yml --lora_model_dir="./completed-model"
    
  2. Converting the adapter to GGML format for OLD versions of llama.cpp:

    python3 convert-lora-to-ggml.py /path/to/lora/adapter
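
After merging, you can convert the full merged model to an f16 GGUF with llama.cpp's conversion script. A sketch: the script is named convert_hf_to_gguf.py in current llama.cpp (convert.py in older checkouts), and the input path assumes the merge step shown earlier.

    python3 convert_hf_to_gguf.py ./L3-8B-Stheno-MLP-merged --outtype f16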

Other:

You can contact me on Telegram (@monstor86) or Discord (@starlight2288). You can also try some RP with this adapter for free in my Telegram bot, @Luna_Pony_bot.

Built with Axolotl
