---
library_name: transformers
tags: [qlora, peft, fine-tuning, javascript, causal-lm]
---

# Model Card for gemma-js-instruct-finetune

## Model Details

### Model Description

`gemma-js-instruct-finetune` is a fine-tuned version of `gemma-2b-it`, trained to improve the generation of long-form, structured responses to JavaScript instructional tasks. Fine-tuning used QLoRA (Quantized Low-Rank Adaptation), which enables parameter-efficient training on limited hardware.

- **Developed by:** Arnav Jain and collaborators
- **Shared by:** [Arnav Jain](https://huggingface.co/arnavj007)
- **Model type:** Decoder-only causal language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** [gemma-2b-it](https://huggingface.co/google/gemma-2b-it)

### Model Sources

- **Repository:** [gemma-js-instruct-finetune](https://huggingface.co/arnavj007/gemma-js-instruct-finetune)
- **Dataset:** [Evol-Instruct-JS-Code-500-v1](https://huggingface.co/datasets/pyto-p/Evol-Instruct-JS-Code-500-v1)
- **Demo:** [Weights & Biases Run](https://wandb.ai/arnavj007-24/huggingface/runs/718nwcab)

## Uses

### Direct Use

The model can be used directly to generate solutions to JavaScript programming tasks, produce instructional code snippets, and answer technical questions about JavaScript.

### Downstream Use

The model can be further fine-tuned for specific programming domains, other programming languages, or other instructional content generation tasks.

### Out-of-Scope Use

This model is not suitable for:

- Non-technical, general-purpose text generation
- Applications requiring real-time interaction with external systems
- Generating solutions for non-JavaScript programming tasks without additional fine-tuning

## Bias, Risks, and Limitations

### Recommendations

- Validate generated code for correctness and security before use.
- Be aware that biases or inaccuracies in the training data can propagate into model outputs.
- Do not deploy the model in sensitive or safety-critical applications without thorough testing.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("arnavj007/gemma-js-instruct-finetune")
model = AutoModelForCausalLM.from_pretrained(
    "arnavj007/gemma-js-instruct-finetune",
    device_map="auto",  # place the model on a GPU if one is available (requires accelerate)
)

def get_completion(query: str) -> str:
    # Gemma's chat format puts the role on its own line inside
    # <start_of_turn>/<end_of_turn> markers.
    prompt = f"<start_of_turn>user\n{query}<end_of_turn>\n<start_of_turn>model\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=1000)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

response = get_completion("Create a function in JavaScript to calculate the factorial of a number.")
print(response)
```
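
On memory-constrained GPUs such as the free-tier T4 the model was trained on, the weights can also be loaded in 4-bit. A minimal sketch, assuming `bitsandbytes` is installed; the quantization settings below are illustrative defaults, not values published with this card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization keeps memory usage low enough for a 16 GB T4;
# float16 compute is used because the T4 has no bfloat16 support.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "arnavj007/gemma-js-instruct-finetune",
    quantization_config=bnb_config,
    device_map="auto",
)
```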

## Training Details

### Training Data

The training dataset consisted of 500 JavaScript instructions paired with reference outputs, covering tasks such as writing code snippets, implementing algorithms, and handling errors.

Dataset: [Evol-Instruct-JS-Code-500-v1](https://huggingface.co/datasets/pyto-p/Evol-Instruct-JS-Code-500-v1)
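
The dataset can be pulled straight from the Hub. A minimal sketch using the `datasets` library, assuming the dataset exposes a `train` split; the column names should be checked against the dataset card before building prompts:

```python
from datasets import load_dataset

dataset = load_dataset("pyto-p/Evol-Instruct-JS-Code-500-v1", split="train")

# Inspect the schema first; do not assume particular field names.
print(dataset.column_names)
print(dataset[0])
```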

### Training Procedure

#### Preprocessing

- Instructions and outputs were formatted with a standardized prompt-response template (see the sketch after this list).
- Data was tokenized with the Hugging Face tokenizer for `gemma-2b-it`.
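
The exact template is not published with this card. The sketch below shows one plausible Gemma-style formatting function; the `instruction` and `output` field names are assumptions about the dataset schema, not confirmed values:

```python
def format_example(example: dict) -> str:
    # Wrap each instruction/response pair in Gemma's turn markers so the
    # fine-tuned model sees the same structure it will be prompted with.
    # "instruction" and "output" are assumed column names.
    return (
        f"<start_of_turn>user\n{example['instruction']}<end_of_turn>\n"
        f"<start_of_turn>model\n{example['output']}<end_of_turn>"
    )
```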

#### Training Hyperparameters

- **Training regime:** QLoRA (Quantized Low-Rank Adaptation)
- **Batch size:** 1 per device
- **Gradient accumulation steps:** 4
- **Learning rate:** 2e-4
- **Training steps:** 100
- **Optimizer:** Paged AdamW (8-bit)
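
A minimal sketch of how these hyperparameters map onto a QLoRA setup with `transformers`, `peft`, and `bitsandbytes`. The LoRA rank, alpha, and target modules are assumptions, since they are not listed above:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b-it",
    quantization_config=bnb_config,
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)

# Attach low-rank adapters; r, lora_alpha, and target_modules are assumed values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # should report a few percent of the full model

# Hyperparameters from the list above.
training_args = TrainingArguments(
    output_dir="gemma-js-instruct-finetune",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    max_steps=100,
    optim="paged_adamw_8bit",
)
```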

### Speeds, Sizes, Times

- Training runtime: ~1,435 seconds (~24 minutes)
- Trainable parameters: ~3% of the model (~78M)

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The test dataset consisted of 100 JavaScript instructions held out from the training set.

#### Metrics

Evaluation was qualitative, focusing on:

- Quality of generated code snippets
- Ability to handle complex prompts with multiple sub-tasks

### Results

The fine-tuned model showed a clear improvement in handling long prompts and generating structured code. It produced complete solutions for tasks such as API creation with advanced features (e.g., caching and error handling).

#### Summary

Fine-tuning with QLoRA produced robust improvements, making the model capable of generating detailed, well-structured instructional responses.

## Environmental Impact

- **Hardware Type:** NVIDIA Tesla T4 GPU (free-tier Colab)
- **Hours used:** ~0.4 hours
- **Carbon Emitted:** Minimal (estimated with the [ML Impact Calculator](https://mlco2.github.io/impact#compute))

## Technical Specifications

### Model Architecture and Objective

The model retains the decoder-only transformer architecture of `gemma-2b-it` and is trained with a causal language modeling objective.

### Compute Infrastructure

- **Hardware:** NVIDIA Tesla T4
- **Software:**
  - Transformers: 4.38.2
  - PEFT: 0.8.2
  - Accelerate: 0.27.1
  - BitsAndBytes: 0.42.0

## Citation

**BibTeX:**

```bibtex
@misc{Jain2024gemmajs,
  author       = {Arnav Jain and collaborators},
  title        = {gemma-js-instruct-finetune},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/arnavj007/gemma-js-instruct-finetune}}
}
```

## More Information

For questions or feedback, contact [Arnav Jain](https://huggingface.co/arnavj007).