SanjanaCodes committed on
Commit
840fbfa
Parent: 48e6e5f

Update README.md

Files changed (1): README.md +36 -49
README.md CHANGED
@@ -1,52 +1,39 @@
- Llama-3.1-8B-Instruct-Secure
- Repository: SanjanaCodes/Llama-3.1-8B-Instruct-Secure
- License: Add License Here
- Languages: English (or specify other supported languages)
- Base Model: Llama-3.1-8B (or specify if different)
- Library Name: transformers, PyTorch (Add library used)
- Pipeline Tag: text-generation
-
- Model Description
- The Llama-3.1-8B-Instruct-Secure is a fine-tuned variant of the Llama-3.1-8B model designed to address LLM security vulnerabilities while maintaining strong performance for instruction-based tasks. It is optimized to handle:
-
- Secure Prompt Handling: Resistant to common jailbreak and adversarial attacks.
- Instruction Following: Retains instruction-based generation accuracy.
- Safety and Robustness: Improved safeguards against harmful or unsafe outputs.
- Key Features:
- Fine-tuned for secure instruction-based generation tasks.
- Includes defense mechanisms against adversarial and jailbreaking prompts.
- Pre-trained on a mixture of secure and adversarial datasets to generalize against threats.
- Usage
- Installation
- bash
- pip install transformers torch
- Example
- python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_name = "SanjanaCodes/Llama-3.1-8B-Instruct-Secure"
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name)

- # Example Input
- input_text = "Explain the importance of cybersecurity in simple terms."
- inputs = tokenizer(input_text, return_tensors="pt")
-
- # Generate Response
- output = model.generate(**inputs, max_length=150)
- print(tokenizer.decode(output[0], skip_special_tokens=True))
- Training Details
- Dataset
- Fine-tuned on a curated dataset with:
- Instruction-following data.
- Security-focused prompts.
- Adversarial prompts for robustness.
- Training Procedure
- Framework: PyTorch
- Hardware: GPU-enabled nodes
- Optimization Techniques:
- Mixed Precision Training
- Gradient Checkpointing
- Evaluation Metrics: Attack Success Rate (ASR), Robustness Score
 
+ ---
+ language: en
+ tags:
+ - causal-lm
+ - secure-model
+ - merged-lora
+ license: apache-2.0
+ ---
+
+ # Llama-3.1-8b-Instruct-Secure
+
+ This model is fine-tuned with LoRA adapters for secure behavior and a low Attack Success Rate (ASR) on adversarial prompts.
+
+ ## Model Details
+
+ - **Base Model**: Llama-3.1-8b-Instruct
+ - **Fine-tuning**: LoRA, with the adapters merged into the base weights (see the sketch below)
+ - **Purpose**: Secure language model with defenses against jailbreak prompts.
+
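+ Since the tags mark this as a merged-LoRA release, here is a minimal sketch of how such a merge is typically produced with the `peft` library; the adapter path below is a placeholder, not a published artifact:
+
+ ```python
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM
+
+ # Load the base model, attach the trained LoRA adapter, and fold the
+ # adapter weights into the base weights so the result loads as a plain
+ # causal LM with no peft dependency.
+ base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
+ model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder path
+ model = model.merge_and_unload()
+ ```
+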
+ ## Training Details
+
+ - **Dataset**: Custom synthetic data
+ - **Framework**: PyTorch
+ - **Sharding**: The model is saved in 100MB shards so that no single checkpoint file is too large to upload or download reliably (see the sketch after this list).
+
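+ Continuing from the merge sketch above, shards of this size are typically produced at save time; the output directory here is a placeholder:
+
+ ```python
+ # save_pretrained splits the checkpoint into files of at most
+ # max_shard_size and writes an index mapping weights to shards.
+ model.save_pretrained("Llama-3.1-8b-Instruct-Secure", max_shard_size="100MB")
+ ```
+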
+ ## Usage
+
+ Load the model and tokenizer as follows:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_name = "SanjanaCodes/Llama-3.1-8b-Instruct-Secure"
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name)

+ inputs = tokenizer("Your input prompt here", return_tensors="pt")
+ outputs = model.generate(**inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
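+
+ For instruction-style prompts, a minimal sketch using the tokenizer's chat template; the prompt is borrowed from the earlier example and the generation length is illustrative, not a tuned value:
+
+ ```python
+ messages = [{"role": "user", "content": "Explain the importance of cybersecurity in simple terms."}]
+ input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
+ outputs = model.generate(input_ids, max_new_tokens=150)
+ # Decode only the newly generated tokens, skipping the echoed prompt.
+ print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```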