---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: sql-code-llama
  results: []
---

# sql-code-llama

This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4583

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 400
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2023        | 0.04  | 20   | 2.0520          |
| 1.212         | 0.07  | 40   | 0.8926          |
| 0.8532        | 0.11  | 60   | 0.7526          |
| 0.5953        | 0.14  | 80   | 0.5960          |
| 0.3869        | 0.18  | 100  | 0.5596          |
| 0.5738        | 0.22  | 120  | 0.5181          |
| 0.4281        | 0.25  | 140  | 0.5080          |
| 0.6451        | 0.29  | 160  | 0.5146          |
| 0.4874        | 0.33  | 180  | 0.4893          |
| 0.3588        | 0.36  | 200  | 0.5016          |
| 0.5308        | 0.4   | 220  | 0.4816          |
| 0.4006        | 0.43  | 240  | 0.4777          |
| 0.5958        | 0.47  | 260  | 0.4780          |
| 0.4682        | 0.51  | 280  | 0.4685          |
| 0.3507        | 0.54  | 300  | 0.4753          |
| 0.5079        | 0.58  | 320  | 0.4664          |
| 0.3933        | 0.62  | 340  | 0.4626          |
| 0.5839        | 0.65  | 360  | 0.4622          |
| 0.4543        | 0.69  | 380  | 0.4594          |
| 0.3475        | 0.72  | 400  | 0.4583          |

### Framework versions

- PEFT 0.10.1.dev0
- Transformers 4.36.0
- Pytorch 2.0.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
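## How to use

Since this repository ships a PEFT adapter rather than full model weights, inference requires loading the base model first and applying the adapter on top. Below is a minimal sketch; the adapter id `sql-code-llama` is taken from the model name above (replace it with the full hub id of this repo), and the text-to-SQL prompt format is an assumption, since the training data is not documented in this card.

```python
# Minimal sketch: load the PEFT adapter on top of the base CodeLlama model.
# Assumptions: the adapter hub id and the prompt format (the training data
# for this adapter is undocumented).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "codellama/CodeLlama-7b-hf"
adapter_id = "sql-code-llama"  # assumed: replace with the full hub id of this repo

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Hypothetical text-to-SQL prompt; adjust to your own schema and question.
prompt = (
    "You are a text-to-SQL model.\n"
    "Schema: users(id, name, signup_date)\n"
    "Question: How many users signed up in 2023?\n"
    "SQL:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The adapter can also be merged into the base weights with `model.merge_and_unload()` if you prefer a standalone checkpoint for deployment.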
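For reference, the training hyperparameters listed above correspond roughly to the following `transformers.TrainingArguments`. This is a reconstruction for readers who want to reproduce the setup, not the original training script; the output directory is a placeholder, and the single-device assumption follows from 32 × 4 = 128 matching the stated total batch size.

```python
# Reconstructed from the hyperparameter list above; an assumption, not the
# original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sql-code-llama",       # placeholder output path
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,     # 32 * 4 = 128 effective batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=400,
    fp16=True,                         # "Native AMP" mixed-precision training
    optim="adamw_torch",               # Adam betas/epsilon above are the defaults
)
```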