---
license: mit
language:
- en
metrics:
- bertscore
base_model:
- LLM-PBE/Llama3.1-8b-instruct-LLMPC-Blue-Team
---
# Model Card: LLM-PBE-FineTuned-FakeData

## Model Details
- Model Name: LLM-PBE-FineTuned-FakeData
- Creator: SanjanaCodes
- Language: English

## Description
This model is a fine-tuned LLM trained on synthetic (fake) data for research purposes. It is intended for studying model behavior and the effects of fine-tuning with controlled, artificial datasets. Because the training data has limited real-world relevance, this model should not be used in real-world applications.

## Intended Use
- Research: Fine-tuning experiments, synthetic data evaluation.
- Educational: Suitable for controlled testing and benchmarking.
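For research use, the model can be loaded like any other Hugging Face Hub checkpoint. The sketch below is a minimal example, assuming the repository id is `SanjanaCodes/LLM-PBE-FineTuned-FakeData` (inferred from the creator and model name; check the actual Hub path before use). The `transformers` import is deferred into the function so the sketch can be inspected without the dependency installed.

```python
# Hypothetical repo id -- verify against the actual Hub listing.
MODEL_ID = "SanjanaCodes/LLM-PBE-FineTuned-FakeData"

def load_model(model_id: str = MODEL_ID):
    """Download the tokenizer and weights from the Hugging Face Hub.

    Import is deferred so this file can be read/imported without
    transformers installed; loading an 8B-parameter model requires
    substantial RAM or GPU memory.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model
```

For synthetic-data experiments, the returned `model` and `tokenizer` can then be passed to a standard generation or evaluation loop.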
  
## Limitations
- Performance: May lack contextual accuracy and depth outside synthetic data contexts.
- Generalization: Best suited for synthetic data scenarios rather than practical applications.

## Acknowledgments
Trained at the NYU Tandon DICE Lab under the supervision of Professor Chinmay Hegde and Niv Cohen.