---
license: unlicense
datasets:
- emplocity/owca
language:
- pl
---

# This repo contains EleutherAI/gpt-j-6B fine-tuned on OWCA (https://github.com/Emplocity/owca) using LoRA

Training params:
```
MICRO_BATCH_SIZE = 64
BATCH_SIZE = 128
GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE
EPOCHS = 3
LEARNING_RATE = 2e-5
CUTOFF_LEN = 256
LORA_R = 4
LORA_ALPHA = 16
LORA_DROPOUT = 0.05
warmup_steps = 100
fp16 = True
```
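
Below is a minimal sketch of how these hyperparameters could map onto a PEFT/LoRA fine-tuning setup for GPT-J. The original training script is not included here, so the target modules, output directory, and Trainer wiring are assumptions for illustration, not the exact code used.

```python
# Hedged sketch: maps the hyperparameters above onto peft + transformers.
# Target modules and Trainer setup are assumptions, not the original script.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

MICRO_BATCH_SIZE = 64
BATCH_SIZE = 128
GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE
EPOCHS = 3
LEARNING_RATE = 2e-5
LORA_R = 4
LORA_ALPHA = 16
LORA_DROPOUT = 0.05

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    torch_dtype=torch.float16,
)

# Wrap the base model with LoRA adapters; q_proj/v_proj are assumed targets
# for GPT-J attention layers.
lora_config = LoraConfig(
    r=LORA_R,
    lora_alpha=LORA_ALPHA,
    lora_dropout=LORA_DROPOUT,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="gpt-j-6b-owca-lora",  # hypothetical output path
    per_device_train_batch_size=MICRO_BATCH_SIZE,
    gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,
    num_train_epochs=EPOCHS,
    learning_rate=LEARNING_RATE,
    warmup_steps=100,
    fp16=True,
)

# trainer = Trainer(model=model, args=training_args, train_dataset=...)
# trainer.train()  # dataset preparation (tokenizing OWCA to CUTOFF_LEN=256) omitted
```

For inference, the saved adapter weights would typically be loaded on top of the base model with `peft.PeftModel.from_pretrained`.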