---
metrics:
- bertscore
base_model:
- LLM-PBE/Llama3.1-8b-instruct-LLMPC-Blue-Team
---
# Model Card: LLM-PBE-FineTuned-FakeData

## Model Details

- Model Name: LLM-PBE-FineTuned-DynamicData
- Creator: SanjanaCodes
- Language: English

## Description

This model is a fine-tuned LLM trained on synthetic (fake) data for research purposes. It is designed to help researchers understand model behavior and the impact of fine-tuning with controlled, artificial datasets. It should not be used for real-world applications because of its limited real-world relevance.
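Since the base model is a Llama-3.1-8B instruct variant, loading the checkpoint with Hugging Face `transformers` might look like the sketch below. The repo id `SanjanaCodes/LLM-PBE-FineTuned-DynamicData` and the chat-template fragment are assumptions inferred from this card, not confirmed facts; for the real prompt format, prefer `tokenizer.apply_chat_template`, which reads the template shipped with the checkpoint.

```python
def build_prompt(instruction: str) -> str:
    """Wrap one user turn in a minimal Llama-3.1-style chat template.

    Assumed format; tokenizer.apply_chat_template is the safer path.
    """
    return (
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{instruction}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


def generate_reply(instruction: str,
                   repo_id: str = "SanjanaCodes/LLM-PBE-FineTuned-DynamicData") -> str:
    """Load the fine-tuned checkpoint and generate a single reply.

    The default repo_id is a guess based on this card's creator and
    model name; weights are downloaded on the first call.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer  # heavy import kept local

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```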

## Intended Use

- Research: fine-tuning experiments and synthetic-data evaluation.
- Educational: suitable for controlled testing and benchmarking.
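The metadata above lists `bertscore` as the evaluation metric. A minimal scoring sketch with the Hugging Face `evaluate` library follows; the paired prediction/reference texts are placeholders, and the `mean_f1` helper is my own addition rather than part of this card.

```python
def score_outputs(predictions: list[str], references: list[str]) -> dict:
    """Compute BERTScore precision/recall/F1 lists for paired texts.

    Requires the `evaluate` and `bert-score` packages; a scoring model
    is downloaded on first use.
    """
    import evaluate  # heavy import kept local

    bertscore = evaluate.load("bertscore")
    return bertscore.compute(predictions=predictions, references=references, lang="en")


def mean_f1(scores: dict) -> float:
    """Average the per-example F1 values returned by score_outputs()."""
    f1 = scores["f1"]
    return sum(f1) / len(f1)
```

For example, `mean_f1(score_outputs(["generated text"], ["reference text"]))` collapses the per-example scores into one corpus-level F1.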

## Limitations

- Performance: may lack contextual accuracy and depth outside synthetic-data contexts.
- Generalization: best suited to synthetic-data scenarios rather than practical applications.

## Acknowledgments

Trained at the NYU Tandon DICE Lab under Professor Chinmay Hegde and Niv Cohen.