---
license: cc-by-nc-4.0
language:
- ko
---

# Model Card for LDCC-Instruct-Llama-2-ko-13B-v4.2.8

## Developed by : Wonchul Kim ([Lotte Data Communication](https://www.ldcc.co.kr) AI Technical Team)

## Hardware and Software

* **Hardware**: We trained our model on a single node with 8 NVIDIA A100 GPUs.
* **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index).
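
The exact training configuration is not published in this card. As an illustrative sketch only, a minimal DeepSpeed ZeRO configuration of the kind passed to the HuggingFace Trainer might look like this (every value below is an assumption for demonstration, not the settings actually used):

```python
# Illustrative only: a minimal DeepSpeed ZeRO-2 config of the kind that can be
# handed to HuggingFace TrainingArguments(deepspeed=...). All values here are
# assumptions for demonstration, not the card's actual training recipe.
deepspeed_config = {
    "train_micro_batch_size_per_gpu": 4,   # assumed per-GPU batch size
    "gradient_accumulation_steps": 8,      # assumed; effective batch = 4 * 8 * 8 GPUs
    "bf16": {"enabled": True},             # mixed precision is typical on A100s
    "zero_optimization": {
        "stage": 2,                        # shard optimizer states and gradients
        "overlap_comm": True,              # overlap gradient reduction with compute
        "contiguous_gradients": True,      # reduce memory fragmentation
    },
}

# With transformers installed, the dict would be wired in roughly as:
# from transformers import TrainingArguments
# args = TrainingArguments(output_dir="out", deepspeed=deepspeed_config, bf16=True)
```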
## Base Model : [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)

### Training Data

The LDCC-Instruct-Llama-2-ko-13B model was trained on publicly available Korean and English data sources. For fine-tuning, we used additional public datasets, which we processed and refined.

We did not incorporate any client data owned by Lotte Data Communication.

## Prompt Template

```
### Prompt:
{instruction}

### Answer:
{output}
```
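
To query the model, wrap the instruction in the template above and let the model continue after `### Answer:`. A minimal helper sketch follows; the function name, the example instruction, and the repository id are our assumptions, not part of the card:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the prompt template documented above.

    The helper name is ours; the template itself comes from the model card.
    """
    return f"### Prompt:\n{instruction}\n\n### Answer:\n"


# Example instruction in Korean: "What is the capital of South Korea?"
prompt = build_prompt("대한민국의 수도는 어디인가요?")

# With transformers installed, generation might look roughly like this
# (repository id assumed):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("LDCC/LDCC-Instruct-Llama-2-ko-13B")
# model = AutoModelForCausalLM.from_pretrained("LDCC/LDCC-Instruct-Llama-2-ko-13B")
# ids = tok(prompt, return_tensors="pt")
# out = model.generate(**ids, max_new_tokens=256)
# print(tok.decode(out[0], skip_special_tokens=True))
```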