---
license: apache-2.0
---

# Instruct-Tuned LLaMA-7B Model Card

## Model Description

The Instruct-Tuned LLaMA-7B is a 7-billion-parameter language model based on the LLaMA-2 architecture, fine-tuned to follow text instructions and generate coherent responses across a wide range of tasks. It is designed to provide accurate and contextually relevant responses to given prompts.

## Intended Uses

The model is intended for generating responses from input instructions and contexts. It can be applied to a variety of natural language processing tasks such as text completion, question answering, and summarization. Its ability to combine instructions with additional context makes it particularly suitable for complex prompts.

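As a quick illustration of this intended use, below is a minimal inference sketch with the Transformers library, loading the model in 4-bit (matching the quantization noted under Training Parameters) and generating a response to an Alpaca-style instruction prompt. The repository id and the exact prompt template are placeholders and assumptions; adjust them to match the published checkpoint.

```python
# Minimal usage sketch. The repository id is a placeholder, and the
# Alpaca-style prompt template is an assumption about the fine-tuning format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "your-username/instruct-tuned-llama-7b"  # placeholder repo id

# Load in 4-bit to mirror the quantization noted under Training Parameters.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarize the benefits of instruction tuning.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Print only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
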
## Limitations

- **Bias**: Like any large language model, the Instruct-Tuned LLaMA-7B may inadvertently reflect biases present in the training data. Be cautious when using the model in sensitive applications, and perform a bias analysis before deployment.

- **Context Sensitivity**: While the model can follow instructions and contexts, its responses are based on patterns in the training data and might not always capture nuanced or subtle instructions accurately.

- **Limited Training Data**: The model has been fine-tuned on a specific dataset and may not perform optimally on tasks significantly different from its training data.

## Training Parameters

- **Model Architecture**: LLaMA-2 with 7 billion parameters.
- **Quantization**: 4-bit quantization.
- **Attention Mechanism**: Flash Attention (per the Flash Attention paper).
- **Training Frameworks**: Hugging Face Transformers, PEFT, and TRL (see the configuration sketch below).
- **Optimizer**: Paged AdamW (32-bit).
- **Training Batch Size**: 4 with Flash Attention, 1 without.
- **Learning Rate**: 2e-4 with a constant schedule.
- **Gradient Accumulation**: Every 2 steps.
- **Max Sequence Length**: 2048 tokens.

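The sketch below shows how these hyperparameters could fit together in a QLoRA-style setup with Transformers, PEFT, and TRL (circa TRL 0.7). It is not the exact training script behind this checkpoint: the LoRA settings, dataset formatting, epoch count, and output path are illustrative assumptions.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-7b-hf"
use_flash_attention = True  # batch size 4 with Flash Attention, 1 without

# 4-bit quantization, as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2" if use_flash_attention else "eager",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# LoRA adapter settings are assumptions; the card does not specify them.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

args = TrainingArguments(
    output_dir="instruct-tuned-llama-7b",
    per_device_train_batch_size=4 if use_flash_attention else 1,
    gradient_accumulation_steps=2,   # accumulate every 2 steps
    learning_rate=2e-4,
    lr_scheduler_type="constant",
    optim="paged_adamw_32bit",       # Paged AdamW 32-bit
    num_train_epochs=1,
    logging_steps=10,
)

# Collapse each example into a single "text" field (Alpaca-style; see the
# dataset section below for the full template).
dataset = load_dataset("c-s-ale/alpaca-gpt4-data", split="train")
dataset = dataset.map(
    lambda ex: {
        "text": f"### Instruction:\n{ex['instruction']}\n\n"
                f"### Input:\n{ex['input']}\n\n### Response:\n{ex['output']}"
    }
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=2048,
    tokenizer=tokenizer,
    args=args,
)
trainer.train()
```

With gradient accumulation over 2 steps, the effective batch size works out to 8 with Flash Attention (4 × 2) and 2 without (1 × 2).
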
## Datasets Used

The model was fine-tuned on a subset of the Alpaca-GPT-4 dataset, containing prompts, instructions, and corresponding responses. The dataset was preprocessed to ensure reasonable training times without sacrificing quality.

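Below is a sketch of how the data could be loaded and turned into Alpaca-style prompts. The template, the field names (`instruction`, `input`, `output`), and the subset size are assumptions rather than the exact preprocessing used for this checkpoint.

```python
from datasets import load_dataset

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n{output}"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{output}"
)

def to_text(example):
    # Choose the template based on whether the example carries an input field.
    template = PROMPT_WITH_INPUT if example.get("input") else PROMPT_NO_INPUT
    return {"text": template.format(**example)}

dataset = load_dataset("c-s-ale/alpaca-gpt4-data", split="train")
dataset = dataset.shuffle(seed=42).select(range(5000))  # illustrative subset size
dataset = dataset.map(to_text)
print(dataset[0]["text"][:300])
```
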
## Evaluation Results

The Instruct-Tuned LLaMA-7B was evaluated on a range of prompts from the Alpaca-GPT-4 dataset. Compared with the base LLaMA-2 model, it generated noticeably more coherent and contextually relevant responses, and its outputs aligned well with the intended meaning of the prompts.

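One way to reproduce this kind of side-by-side comparison is sketched below, assuming the fine-tuned weights are published as a PEFT adapter on top of LLaMA-2-7B (the adapter repository id is a placeholder).

```python
# Generate with the base LLaMA-2 model and with the fine-tuned adapter on the
# same Alpaca-GPT-4 instructions, then compare the outputs side by side.
import torch
from datasets import load_dataset
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "your-username/instruct-tuned-llama-7b"  # placeholder adapter repo id

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")

prompts = load_dataset("c-s-ale/alpaca-gpt4-data", split="train").select(range(3))

def generate(model, instruction):
    prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

base_outputs = [generate(base, ex["instruction"]) for ex in prompts]

# Attach the fine-tuned adapter on top of the same base weights.
tuned = PeftModel.from_pretrained(base, adapter_id)
tuned_outputs = [generate(tuned, ex["instruction"]) for ex in prompts]

for ex, b, t in zip(prompts, base_outputs, tuned_outputs):
    print("INSTRUCTION:", ex["instruction"], "\nBASE:", b, "\nTUNED:", t, "\n")
```
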
## Model Card Attribution

This model card was authored by Chris Alexiuk and is based on the work presented in the [GitHub Repository](https://github.com/AI-Maker-Space/Fine-tuning-LLM-Resources#instruct-tuning-openlms-openllama-on-the-dolly-15k-dataset-notebooks). The fine-tuning data is available via the [Hugging Face dataset card](https://huggingface.co/datasets/c-s-ale/alpaca-gpt4-data).

For more information, sweet tutorials, or collaborations, check out [AI Makerspace](https://www.linkedin.com/company/ai-maker-space).

---