
The license is cc-by-nc-sa-4.0.

This model was developed by the LLM research consortium of (์ฃผ)๋ฏธ๋””์–ด๊ทธ๋ฃน์‚ฌ๋žŒ๊ณผ์ˆฒ (MediaGroup Saram-gwa-Soop Inc.) and (์ฃผ)๋งˆ์ปค (Markr Inc.).

๐ŸŒ™Dear_My_best_Friends-v2-13B๐ŸŒ™


The main image was generated using Playground AI.

Model Details

Model Developers Seungyoo Lee (DopeorNope)

Input Models input text only.

Output Models generate text only.

Model Architecture
Dear_My_best_Friends-13B is an auto-regressive 13B language model based on the LLaMA2 transformer architecture.

Base Model: DopeorNope/Dear_My_best_Friend-SFT-v2-13B (not uploaded yet)

COKAL_pre_DPO_Test_v3-13b is the SFT model used as the starting point for DPO training.
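Since the card states that an SFT model was further trained with DPO, a minimal sketch of the standard DPO objective may be helpful. This is not the author's training code; it only illustrates the per-pair loss that DPO optimizes, comparing the policy against a frozen reference (SFT) model:

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Standard DPO loss for one (chosen, rejected) preference pair:
    -log sigmoid(beta * [(logp_c - logp_c_ref) - (logp_r - logp_r_ref)]).
    Log-probabilities are summed over the response tokens."""
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # Numerically plain logistic loss on the reward margin.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy has not moved from the reference, the margin is zero and the loss equals log 2; as the policy assigns relatively more probability to the chosen response, the loss decreases.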

Training Dataset

This dataset was constructed by DopeorNope, who directly collected and reorganized the data, drawing on insights from "lvwerra/stack-exchange-paired" to create a paired dataset. (That is, stack-exchange-paired itself was not used; it only served as inspiration.)

This dataset is based on "HumanF-MarkrAI's private data" and was processed with a Near Dedup algorithm to remove items with a Jaccard similarity of 0.8 or higher. In addition, inconsistent inputs were cleaned and corrected. I also applied a new method (currently a test version, to be shared soon).
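The dedup step above can be sketched as follows. This is a simplified greedy illustration of Jaccard-based near-deduplication at the 0.8 threshold (the actual Near Dedup pipeline likely uses MinHash for scalability); the shingle size and helper names are assumptions for illustration:

```python
def shingles(text: str, n: int = 3) -> set:
    """Character n-gram shingles of a text (n=3 is an illustrative choice)."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A intersect B| / |A union B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def near_dedup(texts, threshold: float = 0.8):
    """Greedily keep a text only if no already-kept text is a near-duplicate,
    i.e. has Jaccard similarity >= threshold with it."""
    kept, kept_shingles = [], []
    for t in texts:
        s = shingles(t)
        if all(jaccard(s, ks) < threshold for ks in kept_shingles):
            kept.append(t)
            kept_shingles.append(s)
    return kept
```

A pair of texts that differ by only a character or two will share almost all shingles and be collapsed, while genuinely different items survive.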

Training
I developed the model in an environment with four RTX 3090 GPUs running Ubuntu 18.04. When uploading the model directly to a repository from a Linux server, there may be an issue that makes the model appear to have more parameters than it does; however, this model is based on a 13B architecture.

Implementation Code


```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Full repository id on the Hugging Face Hub.
repo = "DopeorNope/Dear_My_best_Friends-v2-13B"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,  # load weights in fp16
    device_map="auto",          # spread layers across available GPUs
)
model_tokenizer = AutoTokenizer.from_pretrained(repo)
```

Acknowledgement

์ด ๋ชจ๋ธ์€ ๊ณผํ•™๊ธฐ์ˆ ์ •๋ณดํ†ต์‹ ๋ถ€ยท๊ด‘์ฃผ๊ด‘์—ญ์‹œ๊ฐ€ ๊ณต๋™ ์ง€์›ํ•œ '์ธ๊ณต์ง€๋Šฅ ์ค‘์‹ฌ ์‚ฐ์—…์œตํ•ฉ ์ง‘์ ๋‹จ์ง€ ์กฐ์„ฑ์‚ฌ์—…'์œผ๋กœ ์ง€์›์„ ๋ฐ›์•„ ์ˆ˜ํ–‰๋œ ์—ฐ๊ตฌ ๊ฒฐ๊ณผ์ž…๋‹ˆ๋‹ค.

This model was supported by Artificial intelligence industrial convergence cluster development project funded by the Ministry of Science and ICT(MSIT, Korea)&Gwangju Metropolitan City.

