---
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: karim_codellama
  results: []
library_name: peft
---

[Visualize in Weights & Biases](https://wandb.ai/llm_project/llm_project-org/runs/nb0hywqq)

# karim_codellama

This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1887

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

The following `bitsandbytes` quantization config was used during training (a hedged sketch of constructing it appears under "Quantization config sketch" below):
- quant_method: bitsandbytes
- _load_in_8bit: True
- _load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
- bnb_4bit_quant_storage: uint8
- load_in_4bit: False
- load_in_8bit: True

### Training hyperparameters

The following hyperparameters were used during training (see the "Training arguments sketch" below for a mapping onto `TrainingArguments`):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 400
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.146         | 0.0787 | 20   | 1.2504          |
| 0.8176        | 0.1573 | 40   | 0.6454          |
| 0.6291        | 0.2360 | 60   | 0.4881          |
| 0.3068        | 0.3147 | 80   | 0.3589          |
| 0.5266        | 0.3933 | 100  | 0.4066          |
| 0.302         | 0.4720 | 120  | 0.2728          |
| 0.1989        | 0.5506 | 140  | 0.2604          |
| 0.3157        | 0.6293 | 160  | 0.2502          |
| 0.1768        | 0.7080 | 180  | 0.2285          |
| 0.4553        | 0.7866 | 200  | 0.2575          |
| 0.2183        | 0.8653 | 220  | 0.2152          |
| 0.1815        | 0.9440 | 240  | 0.2148          |
| 0.2704        | 1.0226 | 260  | 0.2142          |
| 0.1662        | 1.1013 | 280  | 0.2001          |
| 0.3306        | 1.1799 | 300  | 0.2065          |
| 0.2161        | 1.2586 | 320  | 0.1967          |
| 0.1429        | 1.3373 | 340  | 0.1925          |
| 0.2892        | 1.4159 | 360  | 0.1927          |
| 0.1459        | 1.4946 | 380  | 0.1894          |
| 0.3078        | 1.5733 | 400  | 0.1887          |

### Framework versions

- PEFT 0.6.0.dev0
- Transformers 4.41.0.dev0
- PyTorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1
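### Quantization config sketch

The card lists the `bitsandbytes` settings but not the code that produced them. Below is a minimal sketch, assuming the config was built with `transformers.BitsAndBytesConfig`; only the 8-bit path is set explicitly, since `load_in_4bit` was False and the remaining 4-bit fields match library defaults.

```python
from transformers import BitsAndBytesConfig

# Reconstruction of the 8-bit quantization config listed under
# "Training procedure"; an assumption, not the original training code.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```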
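### Training arguments sketch

The hyperparameters above map naturally onto `transformers.TrainingArguments`. This is a hedged reconstruction, not the original script: `output_dir` is a placeholder, and the listed Adam betas and epsilon are the library defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

# Sketch consistent with the listed hyperparameters. Effective batch size:
# 8 per device x 4 gradient-accumulation steps = 32 (single device assumed).
training_args = TrainingArguments(
    output_dir="karim_codellama",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=400,
    fp16=True,  # "Native AMP" mixed-precision training
)
```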
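## Example usage

The card does not document how to run the model, so the following is a minimal sketch of loading the adapter for inference with PEFT. The adapter id `karim_codellama` is a placeholder for wherever the weights are hosted, 8-bit loading mirrors the training-time config, and the prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "codellama/CodeLlama-7b-hf"
adapter_id = "karim_codellama"  # placeholder: point this at the actual adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the frozen base model in 8-bit, mirroring the training-time quantization.
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Attach the fine-tuned PEFT adapter on top of the base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative prompt; the card does not specify an expected input format.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```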