# Model Card for LDCC-Instruct-Llama-2-ko-13B-v4.2.8

Developed by: Wonchul Kim (Lotte Data Communication AI Technical Team)
## Hardware and Software
- Hardware: We trained the model on a single node with 8x NVIDIA A100 GPUs.
- Training Factors: We fine-tuned the model using the DeepSpeed library together with the Hugging Face Trainer and Hugging Face Accelerate (a minimal sketch of this setup follows below).
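
The exact training configuration is not published in this card, but the sketch below illustrates the kind of setup the bullets describe: the Hugging Face Trainer driving a DeepSpeed-backed fine-tune of the base model. The dataset, hyperparameters, output path, and the DeepSpeed config file name (`ds_zero3_config.json`) are assumptions for illustration, not the authors' actual values.

```python
# Minimal sketch of a Trainer + DeepSpeed fine-tune (illustrative, not the official recipe).
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "beomi/llama-2-koen-13b"  # base model listed in this card
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Tiny placeholder dataset using the prompt template described below;
# actual training used much larger public Korean/English instruction data.
examples = ["### Prompt:\n안녕하세요?\n### Answer:\n안녕하세요! 무엇을 도와드릴까요?"]
dataset = Dataset.from_dict({"text": examples}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="ldcc-instruct-13b",     # assumed output path
    per_device_train_batch_size=2,      # illustrative hyperparameters
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    bf16=True,
    deepspeed="ds_zero3_config.json",   # hypothetical DeepSpeed ZeRO config file
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice a script like this would be started with the `deepspeed` or `accelerate launch` command so that the DeepSpeed engine is initialized across all eight GPUs.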
Base Model: beomi/llama-2-koen-13b
## Training Data
The LDCC-Instruct-Llama-2-ko-13B model was trained on publicly accessible Korean and English data sources. For fine-tuning, we used additional public data, which we processed and refined.
We did not incorporate any client data owned by Lotte Data Communication.
## Prompt Template
```
### Prompt:
{instruction}
### Answer:
{output}
```
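
As a usage illustration, the sketch below loads the model with the `transformers` library and fills in the template above. The Hub repository ID (`LDCC/LDCC-Instruct-Llama-2-ko-13B`) and the generation settings are assumptions and may need to be adjusted for your environment.

```python
# Minimal inference sketch using the prompt template above (illustrative settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LDCC/LDCC-Instruct-Llama-2-ko-13B"  # assumed Hub ID; adjust to the actual repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 13B model on a single GPU
    device_map="auto",
)

# The instruction goes under "### Prompt:" and the model completes the text after "### Answer:".
instruction = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
prompt = f"### Prompt:\n{instruction}\n### Answer:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Print only the newly generated answer, skipping the prompt tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```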