# Model Card for SudaGom Project (Gemma-Sprint)
## Model Details

### Model Description
This model is a child-friendly enhancement of Gemma2-ko 9B, designed to generate natural and engaging multi-turn conversations tailored to children. It incorporates fine-tuning and reinforcement learning to optimize conversation flow and to ensure safe, inclusive, and developmentally appropriate interactions.
- **Developed by:** Minjeong Kang, Bodam Kim
- **Model type:** Conversational AI
- **Language(s) (NLP):** Korean (ko)
- **License:** Gemma Terms of Use, inherited from the base models google/gemma-2-9b-it and rtzr/ko-gemma-2-9b-it
- **Finetuned from model:** rtzr/ko-gemma-2-9b-it
## Uses

### Direct Use
This model is intended for direct interaction in applications requiring engagement with children, facilitating conversations that are contextually and emotionally aware.
### Downstream Use
The model can be integrated into educational software, virtual assistants for children, or any platform where child-safe interaction is crucial.
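For such integrations, a thin wrapper with an explicit safety check is one common pattern. The sketch below is a minimal, hypothetical example: the `chat` and `is_child_safe` helpers, the blocked-word list, and the fallback reply are illustrative assumptions rather than part of the released model, and a real deployment would use a proper safety classifier instead of a keyword screen.

```python
# Hypothetical sketch of a child-safe chat wrapper around the fine-tuned model.
BLOCKED_WORDS = ["violence", "gambling"]  # illustrative placeholder list

def is_child_safe(text: str) -> bool:
    """Very simple keyword screen over the generated reply."""
    return not any(word in text.lower() for word in BLOCKED_WORDS)

def chat(model, tokenizer, system: str, history: list[str], max_new_tokens: int = 256) -> str:
    """Generate the next 'Mom' turn for a child-facing application, given prior turns."""
    prompt = f"System: {system}\nUser: " + "\n".join(history) + "\nMom: "
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    reply = tokenizer.decode(output_ids[0], skip_special_tokens=True).split("Mom:")[-1].strip()
    # Fall back to a neutral redirection if the reply fails the screen.
    return reply if is_child_safe(reply) else "Let's talk about something else, sweetie."
```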
### Out-of-Scope Use

The model is not intended for contexts involving adult themes or for any use outside child-friendly applications.
## Bias, Risks, and Limitations
The model may inadvertently generate responses that are not fully aligned with all individual developmental stages and cultural contexts. Continuous monitoring and updating are recommended to mitigate potential biases.
### Recommendations
Users should be aware of the linguistic limitations and ensure that children's interactions are supervised to maintain a safe and positive experience.
## Python code with AutoModel

```python
from IPython.display import Markdown
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
modelName = "rtzr/ko-gemma-2-9b-it"
bnbConfig = BitsAndBytesConfig(
load_in_4bit = True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(modelName)
model = AutoModelForCausalLM.from_pretrained(
modelName,
device_map = "auto",
quantization_config=bnbConfig
)
system = "λΉμ μ μμμ μλ§λ‘, μμκ³Όμ λνμμ μ§λ¬Έμ νκ³ λλ΅μ μ λνλ μν μ ν©λλ€. νμ¬ μμμ΄λ μ΄λ±νκ΅ 1νλ
μ΄κ³ , λ°©κΈ νκ΅λ₯Ό λ€λ
μ μΉμ ν μλ§μ λνλ₯Ό νλ μ€μ
λλ€. μλ§λ€μ΄ κ°μ μ λ°μ ννμ μ¬μ©νμΈμ. μ΄λ€ λνλ₯Ό μ£Όμ λ‘ λνλ₯Ό μ΄μ΄λκ°μ§λ μμμ μλ§κ° μ νκ³ , μμμ μλ§μ μ§λ¬Έμ λν λλ΅μ νλ μν μ ν©λλ€."
# Example multi-turn conversation between the child and the mom
conversation = [
    "Mom: What's wrong? Why the long face?",
    "Child: Minji said I'm a beggar.",
    "Mom: A beggar? Why would she say something like that to you?",
    "Child: Minji says our house is a beggar's house. She says we're poor and live in a beggar apartment. Is our house really poor?"
]
# Combine the whole conversation into a single prompt
user = "\n".join(conversation)
prompt = f"System: {system} \n User: {user} \n Mom: "
inputs = tokenizer(prompt, return_tensors='pt', padding=True, truncation=True, max_length=512)  # cap the prompt length
input_ids = inputs['input_ids'].to(model.device)  # move input_ids to the same device as the model
outputs = model.generate(input_ids=input_ids, max_new_tokens=256, num_return_sequences=1)  # generate the mom's reply
# Convert the generated tokens back to natural language
decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded_output)  # print the full decoded output (prompt + reply)
# The decoded text still contains the prompt, so keep only the reply after the final "Mom:" marker
Markdown(decoded_output.split("Mom:")[-1])
```

```
>>> Mom: "Oh, sweetie. So Minji said that. Mommy felt sad hearing it, and I wonder why she would say such a thing. Our home is a really special place for us, because we live in it happily together. It is filled with love and happiness that money can't buy. Will you tell mommy how you feel about it?"
```
## Training Details

### Training Data
Data includes children's conversation datasets, anonymized and classified by developmental stage, ensuring a diverse and representative sample. To implement the service's persona, the speaker's gender and age were specified during the data preprocessing phase. In the "Korean SNS Multi-turn Conversation Data", words such as "레게노" (regeno), which are used mainly on social media and rarely in actual spoken language, were removed.
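As a rough illustration of this preprocessing step, the sketch below tags speaker persona and strips SNS-only slang from each turn; the dialogue schema, field names, and slang list are assumptions made for illustration, not the project's actual pipeline.

```python
# Hypothetical preprocessing sketch: tag speaker persona and drop SNS-only slang.
SNS_ONLY_SLANG = ["레게노"]  # illustrative; extend with other text-only slang terms

def preprocess_dialogue(dialogue: dict) -> dict:
    """Assumed schema: {"speaker_age": int, "speaker_gender": str, "turns": [str, ...]}."""
    cleaned_turns = []
    for turn in dialogue["turns"]:
        for slang in SNS_ONLY_SLANG:
            turn = turn.replace(slang, "")  # remove words rarely used in spoken language
        cleaned_turns.append(turn.strip())
    # Persona metadata (gender, age) is kept alongside the text so fine-tuning
    # prompts can condition on the intended child persona.
    return {
        "persona": f"{dialogue['speaker_gender']}, age {dialogue['speaker_age']}",
        "turns": cleaned_turns,
    }
```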
### Training Procedure
- Preprocessing: Text data was cleaned and formatted to remove any inappropriate content and personal data.
- Model Fine-tuning: Conducted on the cleaned dataset to tailor the model's responses to children's linguistic needs.
- Reinforcement Learning: Implemented to refine the flow and appropriateness of conversations.
### Training Hyperparameters

- Training regime: learning rate, batch size, and epoch count were tuned for conversational understanding and safety (a generic setup is sketched below).
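The card does not list exact values. The sketch below only illustrates the general shape of such a run, assuming parameter-efficient (QLoRA-style) supervised fine-tuning with the `transformers`, `peft`, and `datasets` libraries; this setup is not confirmed by the card, the reinforcement-learning stage is not shown, and every hyperparameter value and file name (e.g. `child_dialogues.jsonl`) is a placeholder.

```python
# Hypothetical QLoRA fine-tuning sketch; all values below are placeholders.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "rtzr/ko-gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16
    ),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Assumes a JSONL file of preprocessed child-mom dialogues with a "text" field per example.
dataset = load_dataset("json", data_files="child_dialogues.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="sudagom-sft",
        per_device_train_batch_size=2,   # placeholder batch size
        gradient_accumulation_steps=8,
        learning_rate=2e-4,              # placeholder learning rate
        num_train_epochs=3,              # placeholder epoch count
        bf16=True,
        logging_steps=50,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # supervised fine-tuning only; the RL refinement stage is not shown here
```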
## Evaluation

### Testing Data
- Various child-centric scenarios were constructed to test the model's performance across different conversation turns and topics.
### Factors

- Age appropriateness, engagement level, and safety were the primary evaluation factors.
### Metrics
- Accuracy of context understanding, appropriateness of language, and user engagement rates.
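A small sketch of how such scenario-based checks could be scripted is shown below; the scenario prompts, banned-term screen, and result fields are illustrative assumptions, and the factors above (age appropriateness, engagement) still require human rating of the recorded replies.

```python
# Hypothetical scenario-based evaluation sketch; scenarios and checks are illustrative.
SCENARIOS = [
    {"topic": "school", "prompt": "Child: A friend teased me at school today."},
    {"topic": "homework", "prompt": "Child: I don't want to do my homework."},
]
BANNED_TERMS = ["stupid"]  # placeholder automatic safety screen

def evaluate_scenarios(model, tokenizer, system_prompt: str) -> list[dict]:
    """Run each scenario and record the model's reply plus a simple safety flag."""
    results = []
    for scenario in SCENARIOS:
        prompt = f"System: {system_prompt}\nUser: {scenario['prompt']}\nMom: "
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        output_ids = model.generate(**inputs, max_new_tokens=128)
        reply = tokenizer.decode(output_ids[0], skip_special_tokens=True).split("Mom:")[-1].strip()
        results.append({
            "topic": scenario["topic"],
            "reply": reply,
            # Age appropriateness and engagement still need human rating of `reply`.
            "flagged": any(term in reply.lower() for term in BANNED_TERMS),
        })
    return results
```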
## Model Architecture and Objective
The model utilizes a transformer-based architecture optimized for generating conversational text that is suitable for children.
## Citation

If you use this model in academic or industry projects, please cite it as follows:
```bibtex
@misc{gemma2_ko_child_friendly,
  title={Enhanced Child-Friendly Gemma2-ko-it 9B Model},
  author={Kang, Minjeong and Kim, Bodam},
  year={2024}
}
```