---
library_name: transformers
tags: []
---

## Model Card for SudaGom Project (Gemma-Sprint)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66544bef83fac8ccdc291478/0TXMoy_DC7jl9ukvUU0lh.png)

### Model Details

#### Model Description

This model is a child-friendly enhancement of Gemma2-ko 9B, designed to generate natural and engaging multi-turn conversations tailored for children. It incorporates fine-tuning and reinforcement learning to optimize conversation flow and to keep interactions safe, inclusive, and developmentally appropriate.

#### Developed by:
Minjeong Kang, Bodam Kim

#### Model type:
Conversational AI

#### Language(s) (NLP):
Korean (ko)

#### License:
Subject to the licenses of the upstream models:
- google/gemma-2-9b-it
- rtzr/ko-gemma-2-9b-it

#### Finetuned from model:
- rtzr/ko-gemma-2-9b-it

### Model Sources

### Uses

#### Direct Use:
This model is intended for direct interaction in applications that engage with children, facilitating conversations that are contextually and emotionally aware.

#### Downstream Use:
The model can be integrated into educational software, virtual assistants for children, or any platform where child-safe interaction is crucial.

#### Out-of-Scope Use
The model is not intended for contexts involving adult themes or for any use outside child-friendly applications.

### Bias, Risks, and Limitations

The model may inadvertently generate responses that are not fully aligned with every developmental stage or cultural context. Continuous monitoring and updating are recommended to mitigate potential biases.

#### Recommendations

Users should be aware of the model's linguistic limitations and ensure that children's interactions are supervised to maintain a safe and positive experience.

### Python code with AutoModel

```python
import torch
from IPython.display import Markdown, display
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

modelName = "rtzr/ko-gemma-2-9b-it"

bnbConfig = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(modelName)
model = AutoModelForCausalLM.from_pretrained(
    modelName,
    device_map="auto",
    quantization_config=bnbConfig,
)

system = "당신은 소은의 엄마로, 소은과의 대화에서 질문을 하고 대답을 유도하는 역할을 합니다. 현재 소은이는 초등학교 1학년이고, 방금 학교를 다녀와 친절한 엄마와 대화를 하는 중입니다. 엄마다운 감정적 반응 표현을 사용하세요. 어떤 대화를 주제로 대화를 이어나갈지는 소은의 엄마가 정하고, 소은은 엄마의 질문에 대한 대답을 하는 역할을 합니다."

conversation = [
    "엄마: 너 왜 그래? 왜 그렇게 표정이 안좋아?",
    "소은: 아니 민지가 나보고 분거지래.",
    "엄마: 분거지? 걔가 너한테 그런 말을 왜 해?",
    "소은: 민지가 그러는데 우리 집이 거지래. 가난해서 거지 아파트에 사는거래. 우리 집 진짜 거지야?"
]

# Combine the whole conversation into a single prompt
user = "\n".join(conversation)
prompt = f"System: {system} \n User: {user} \n 엄마: "

inputs = tokenizer(prompt, return_tensors='pt', padding=True, truncation=True, max_length=512)  # cap the prompt length
input_ids = inputs['input_ids'].to(model.device)  # move input_ids to the same device as the model

outputs = model.generate(input_ids=input_ids, max_new_tokens=256, num_return_sequences=1)

# Decode the generated tensor back to text
decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded_output)

# Display only the model's reply, i.e. everything after the final "엄마: " marker in the prompt
display(Markdown(decoded_output.split("엄마: ")[-1]))
```

```python
>>> 엄마: "아, 소은아. 민지가 그런 말을 했구나. 엄마 듣고 속상했어. 왜 그런 말을 했는지 엄마도 궁금해. 우리 집은 우리에게는 정말 특별한 곳이야. 왜냐하면 우리가 함께 행복하게 살고 있잖아. 우리 집은 돈으로 살 수 없는 사랑과 행복으로 가득 차 있잖아. 소은이가 어떻게 생각하는지 엄마에게 말해줄래?"
```
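As a possible alternative to hand-building the prompt string, the same conversation can be formatted with the tokenizer's built-in chat template. The sketch below is illustrative and reuses the `system`, `user`, `tokenizer`, and `model` variables from the example above; it assumes a Gemma-style chat template, which typically does not accept a separate system role, so the persona instructions are folded into the first user turn.

```python
# Illustrative sketch: build the prompt with the tokenizer's chat template instead of
# manual f-string formatting. Assumes the variables defined in the example above.
messages = [
    # Gemma-style templates generally have no "system" role,
    # so the persona instructions are prepended to the first user turn.
    {"role": "user", "content": system + "\n" + user},
]

chat_input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the model-turn header so generation continues as the assistant
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    chat_outputs = model.generate(input_ids=chat_input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, not the echoed prompt
reply = tokenizer.decode(chat_outputs[0][chat_input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```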
### Training Details

#### Training Data

The data includes children's conversation datasets, anonymized and classified by developmental stage, to ensure a diverse and representative sample. To implement the service persona, the speaker's gender and age were annotated during the data preprocessing phase. In the "Korean SNS Multi-turn Conversation Data," internet slang such as "레게노," which appears mainly on social media and rarely in actual spoken language, was removed.

#### Training Procedure

- **Preprocessing**: Text data was cleaned and formatted to remove inappropriate content and personal data.
- **Model Fine-tuning**: Conducted on the cleaned dataset to tailor the model's responses to children's linguistic needs (an illustrative sketch of such a setup appears at the end of this card).
- **Reinforcement Learning**: Applied to refine the flow and appropriateness of conversations.

#### Training Hyperparameters

- **Training regime**: Learning rate, batch size, and epoch count were tuned for conversational understanding and safety; the exact values are not published in this card.

### Evaluation

#### Testing Data

- Various child-centric scenarios were constructed to test the model's performance across different conversation turns and topics.

#### Factors

- Age appropriateness, engagement level, and safety were the primary evaluation factors.

#### Metrics

- Accuracy of context understanding, appropriateness of language, and user engagement rates.

### Model Architecture and Objective

The model uses a transformer-based architecture optimized for generating conversational text suitable for children.

### Citation

If you use this model in academic or industry-related projects, please cite it as follows:

```bibtex
@misc{gemma2_ko_child_friendly,
  title={Enhanced Child-Friendly Gemma2-ko-it 9B Model},
  author={Minjeong Kang and Bodam Kim},
  year={2024}
}
```
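The fine-tuning procedure above is described only at a high level. The following is a minimal, illustrative sketch of a supervised fine-tuning setup on top of `rtzr/ko-gemma-2-9b-it`, assuming the Hugging Face `datasets`, `peft`, and `trl` libraries; the dataset file, LoRA settings, and every hyperparameter value are placeholders, not the configuration actually used to train this model.

```python
# Illustrative sketch only: a LoRA supervised fine-tuning setup with peft + trl.
# The dataset file and every value below are placeholders, not the actual
# configuration used to train this model.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

base_model = "rtzr/ko-gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Hypothetical JSONL file of preprocessed, anonymized conversations with a "text" column.
dataset = load_dataset("json", data_files="child_conversations.jsonl", split="train")

# LoRA adapters on the attention projections keep the number of trainable parameters small.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=lora_config,
    args=SFTConfig(
        output_dir="sudagom-sft",
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
        bf16=True,
    ),
)
trainer.train()
```

The resulting LoRA adapter could then be merged into the base weights or loaded alongside them at inference time; reinforcement-learning-based refinement would be a separate, subsequent stage and is not shown here.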