---
library_name: transformers
tags: [qlora, peft, fine-tuning, javascript, causal-lm]
---
# Model Card for gemma-js-instruct-finetune
## Model Details
### Model Description
This is the model card for `gemma-js-instruct-finetune`, a fine-tuned version of `gemma-2b-it` trained to generate long-form, structured responses to JavaScript instructional tasks. Fine-tuning used QLoRA (Quantized Low-Rank Adaptation), enabling parameter-efficient training on limited hardware.
- **Developed by:** Arnav Jain and collaborators
- **Shared by:** [Arnav Jain](https://huggingface.co/arnavj007)
- **Model type:** Decoder-only causal language model
- **Language(s) (NLP):** English (instructions and explanations); generated code is JavaScript
- **License:** Apache 2.0
- **Finetuned from model:** [gemma-2b-it](https://huggingface.co/google/gemma-2b-it)
### Model Sources
- **Repository:** [gemma-js-instruct-finetune](https://huggingface.co/arnavj007/gemma-js-instruct-finetune)
- **Dataset:** [Evol-Instruct-JS-Code-500-v1](https://huggingface.co/datasets/pyto-p/Evol-Instruct-JS-Code-500-v1)
- **Demo:** [Weights & Biases Run](https://wandb.ai/arnavj007-24/huggingface/runs/718nwcab)
## Uses
### Direct Use
The model can be directly used for generating solutions to JavaScript programming tasks, creating instructional code snippets, and answering technical questions related to JavaScript programming.
### Downstream Use
This model can be further fine-tuned for specific programming domains, other languages, or instructional content generation tasks.
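As a starting point for further fine-tuning, the model can be wrapped in a fresh LoRA adapter with PEFT. This is a minimal sketch: the rank, alpha, and target modules below are illustrative assumptions, not the values used to train this model.

```python
# Minimal sketch of attaching a new LoRA adapter for another round of
# fine-tuning. The hyperparameters here are assumptions, not this model's.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("arnavj007/gemma-js-instruct-finetune")
lora_config = LoraConfig(
    r=8,                                   # assumed rank; tune per task
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # assumed; inspect the model to confirm
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```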
### Out-of-Scope Use
This model is not suitable for:
- Non-technical, general-purpose text generation
- Applications requiring real-time interaction with external systems
- Generating solutions for non-JavaScript programming tasks without additional fine-tuning
## Bias, Risks, and Limitations
### Recommendations
- Users should validate generated code for correctness and security.
- Be cautious of potential biases or inaccuracies in the dataset that could propagate into model outputs.
- Avoid using the model for sensitive or critical applications without thorough testing.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("arnavj007/gemma-js-instruct-finetune")
model = AutoModelForCausalLM.from_pretrained("arnavj007/gemma-js-instruct-finetune")
def get_completion(query: str) -> str:
    # Wrap the query in Gemma's chat-turn format before generating.
    prompt = f"<start_of_turn>user\n{query}<end_of_turn>\n<start_of_turn>model\n"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=1000)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
response = get_completion("Create a function in JavaScript to calculate the factorial of a number.")
print(response)
```
## Training Details
### Training Data
The training dataset consisted of 500 JavaScript instructions paired with relevant outputs. These instructions focused on tasks like code snippets, algorithm implementations, and error-handling scenarios.
Dataset: [Evol-Instruct-JS-Code-500-v1](https://huggingface.co/datasets/pyto-p/Evol-Instruct-JS-Code-500-v1)
### Training Procedure
#### Preprocessing
- Instructions and outputs were formatted using a standardized prompt-response template (sketched below).
- Data was tokenized using the Hugging Face tokenizer for `gemma-2b-it`.
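The card does not include the exact formatting code. A minimal sketch, assuming the Gemma chat-turn format and an `instruction`/`output` schema for the dataset records:

```python
# Hypothetical formatting helper: wraps one (instruction, output) pair in
# Gemma's chat-turn template. The field names are assumptions about the
# dataset schema, not confirmed by the card.
def format_example(example: dict) -> str:
    return (
        f"<start_of_turn>user\n{example['instruction']}<end_of_turn>\n"
        f"<start_of_turn>model\n{example['output']}<end_of_turn>"
    )
```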
#### Training Hyperparameters
- **Training regime:** QLoRA (Quantized Low-Rank Adaptation); see the configuration sketch after this list
- **Batch size:** 1 per device
- **Gradient accumulation steps:** 4
- **Learning rate:** 2e-4
- **Training steps:** 100
- **Optimizer:** Paged AdamW (8-bit)
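The hyperparameters above translate into roughly the following setup. This is a sketch, not the exact training script: the 4-bit quantization settings and compute dtype (float16, matching the T4) are assumptions consistent with common QLoRA practice; only the `TrainingArguments` values come from the card.

```python
# Sketch of a QLoRA setup matching the listed hyperparameters.
# Quantization details are assumptions; TrainingArguments values are from the card.
import torch
from transformers import BitsAndBytesConfig, TrainingArguments

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # QLoRA keeps base weights in 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # T4 has no bfloat16 support
)

training_args = TrainingArguments(
    output_dir="gemma-js-instruct-finetune",
    per_device_train_batch_size=1,         # batch size 1 per device
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    max_steps=100,
    optim="paged_adamw_8bit",              # paged AdamW (8-bit)
)
```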
### Speeds, Sizes, Times
- Training runtime: ~1,435 seconds (~24 minutes)
- Trainable parameters: ~78M (about 3% of the model's parameters)
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The test dataset consisted of 100 JavaScript instructions held out from the training set.
#### Metrics
Evaluation was qualitative rather than benchmark-based (see the replay sketch below), focusing on:
- Quality of generated code snippets
- Ability to handle complex prompts with multiple sub-tasks
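One way to reproduce this kind of manual review is to replay dataset instructions through the `get_completion` helper defined in the Getting Started section. The split and field names below are assumptions about the dataset layout:

```python
# Sketch: generate responses for a few dataset instructions and inspect
# them by hand. Assumes the get_completion helper defined above; the
# "train" split and "instruction" field are assumptions.
from datasets import load_dataset

ds = load_dataset("pyto-p/Evol-Instruct-JS-Code-500-v1", split="train")
for example in ds.select(range(3)):
    print(get_completion(example["instruction"]))
    print("-" * 80)
```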
### Results
Compared with the base model, the fine-tuned model handled long prompts noticeably better and generated well-structured code, providing complete solutions for tasks such as API creation with advanced features (e.g., caching, error handling).
#### Summary
Fine-tuning with QLoRA enabled robust performance improvements, making the model capable of generating detailed instructional responses.
## Environmental Impact
- **Hardware Type:** NVIDIA Tesla T4 GPU (free-tier Colab)
- **Hours used:** ~0.4 hours
- **Carbon Emitted:** Minimal (estimated using [ML Impact Calculator](https://mlco2.github.io/impact#compute))
## Technical Specifications
### Model Architecture and Objective
The model uses a decoder-only architecture optimized for causal language modeling tasks.
### Compute Infrastructure
- **Hardware:** NVIDIA Tesla T4
- **Software:**
- Transformers: 4.38.2
- PEFT: 0.8.2
- Accelerate: 0.27.1
- BitsAndBytes: 0.42.0
## Citation
**BibTeX:**
```bibtex
@misc{Jain2024gemmajs,
  author       = {Arnav Jain and collaborators},
  title        = {gemma-js-instruct-finetune},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/arnavj007/gemma-js-instruct-finetune}}
}
```
## More Information
For questions or feedback, contact [Arnav Jain](https://huggingface.co/arnavj007).