Collaborative Domain Knowledge Tutor (Qwen2.5-7B-Instruct)

A fine-tuned Qwen2.5-7B-Instruct model specialized in collaborative problem-solving and domain knowledge tutoring, trained on 1,781 synthetic examples implementing the Usefulness, Inspiration, and Trust framework.

🎯 Model Description

This model has been fine-tuned to excel in collaborative learning scenarios, providing:

  • Step-by-step guidance for complex problems
  • Critical thinking stimulation and engagement
  • Real-world application of academic concepts
  • Collaborative planning and conflict resolution
  • Motivational support for learners

๐Ÿ—๏ธ Architecture

  • Base Model: Qwen/Qwen2.5-7B-Instruct
  • Training Method: LoRA (Low-Rank Adaptation); an illustrative configuration sketch follows this list
  • Training Data: 1,781 synthetic collaborative learning examples
  • Framework: 11-aspect evaluation (Usefulness, Inspiration, Trust)
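
The exact adapter hyperparameters are not published in this card. As a rough sketch only, a representative PEFT configuration for a 7B Qwen model might look like the following; the rank, alpha, dropout, and target modules are illustrative assumptions, not the values used in training:

from peft import LoraConfig

# Hypothetical adapter settings -- the card does not state the actual values used
lora_config = LoraConfig(
    r=16,                    # assumed LoRA rank
    lora_alpha=32,           # assumed scaling factor
    lora_dropout=0.05,       # assumed dropout on adapter layers
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections typical for Qwen2.5
    task_type="CAUSAL_LM",
)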

📊 Performance Improvements

Metric                         Base Model   Fine-tuned   Improvement
Usefulness                     0.147        0.400        +173.0%
Inspiration                    0.125        0.144        +14.8%
Trust                          0.067        0.043        -35.3%*
Overall Satisfaction           0.126        0.224        +78.1%
Collaborative Learning Score   0.108        0.180        +66.7%

*Note: the Trust decrease is a known limitation that is being addressed in future iterations.
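
For reference, the Improvement column is the relative change between the two scores. Recomputing it from the rounded values shown above gives slightly different percentages (e.g., +172.1% for Usefulness), so the card's figures were presumably derived from unrounded scores:

# Relative change between base and fine-tuned scores, from the rounded table values
def improvement(base: float, tuned: float) -> float:
    return (tuned - base) / base * 100

print(f"Usefulness: {improvement(0.147, 0.400):+.1f}%")  # +172.1%
print(f"Trust:      {improvement(0.067, 0.043):+.1f}%")  # -35.8%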

🚀 Usage

Load and Use the Model

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # place weights on GPU when available (requires accelerate)
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Load LoRA adapters
model = PeftModel.from_pretrained(base_model, "christinekwon/collaborative-tutor-qwen25")

# Generate response
prompt = "Student: I'm struggling with this algebra problem. Can you help me work through it step by step?
Tutor:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=300)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
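
Qwen2.5-7B-Instruct is a chat model, so you will usually get better results by formatting the conversation with the tokenizer's chat template instead of a raw "Student:/Tutor:" string. A sketch (the system prompt here is illustrative, not the one used in training):

# Format the conversation with the model's built-in chat template
messages = [
    {"role": "system", "content": "You are a collaborative tutor who guides students step by step."},
    {"role": "user", "content": "I'm struggling with this algebra problem. Can you help me work through it step by step?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn marker
    return_tensors="pt",
).to(base_model.device)
outputs = model.generate(input_ids, max_new_tokens=300)
# Decode only the newly generated tokens, not the echoed prompt
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)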

Example Prompts

  • Academic Support: "I don't understand how photosynthesis works. Can you explain it?"
  • Collaborative Planning: "We need to work together on this group project. How should we approach this?"
  • Creative Problem Solving: "We're stuck on this creative project. How can we think outside the box?"
  • Motivational Support: "I'm feeling discouraged about my progress. Can you help me stay motivated?"

🔬 Training Details

  • Training Data: 1,781 synthetic dialogue examples
  • Data Generation: Gemini 2.5 Pro API
  • Training Framework: PyTorch + Transformers
  • Optimization: LoRA with 4-bit quantization (an illustrative loading sketch follows this list)
  • Training Time: ~2 hours on Colab A100 GPU
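
The card states LoRA with 4-bit quantization (QLoRA-style training) but does not publish the exact setup. A minimal sketch of how such a run is typically prepared; every specific value below is an assumption:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Assumed NF4 4-bit settings, as commonly used for QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)  # cast norms to fp32, enable checkpointing hooks
lora_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # assumed values
model = get_peft_model(base, lora_config)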

📚 Research Context

This model implements the collaborative learning framework from our research on:

  • 11 User-Perceived Aspects of educational interactions
  • Usefulness, Inspiration, and Trust as primary outcomes
  • Data-driven weights learned via Random Forest (R² = 0.841); see the sketch after this list
  • Critical thinking as the highest-weighted aspect (0.157)
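
One plausible reading of the weight-learning step, sketched with scikit-learn: regress overall satisfaction on the 11 aspect scores with a Random Forest and take the normalized feature importances as aspect weights. The data, aspect names, and exact procedure are not given in this card, so everything below is an assumption, with random data standing in for the real ratings:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X: one row per rated interaction, one column per aspect (11 aspects);
# y: overall satisfaction. Synthetic placeholder data, not the study's ratings.
rng = np.random.default_rng(0)
X = rng.random((500, 11))
y = X @ rng.random(11) + 0.05 * rng.standard_normal(500)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
weights = rf.feature_importances_  # non-negative, sums to 1
print(weights.round(3))            # the card reports critical thinking as the top weight (0.157)
print("R^2:", rf.score(X, y))      # the card reports R^2 = 0.841 on its data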

โš ๏ธ Limitations

  • Trust scores decrease relative to the base model (-35.3% in the evaluation above)
  • Peer teaching capabilities need improvement
  • Consistency across different domains varies
  • Loading the unquantized 7B base model plus adapters requires substantial GPU memory

🔮 Future Work

  • Address trust and consistency issues
  • Expand training data with more diverse scenarios
  • Improve peer-to-peer learning capabilities
  • Optimize for lower memory requirements

📄 Citation

If you use this model in your research, please cite:

@misc{collaborative-tutor-qwen25,
  title={Collaborative Domain Knowledge Tutor: Fine-tuned Qwen2.5-7B-Instruct for Collaborative Learning},
  year={2025},
  url={https://huggingface.co/christinekwon/collaborative-tutor-qwen25}
}

📞 Contact

For questions, feedback, or collaboration opportunities, please reach out through the Hugging Face Hub.


This model is designed for educational and research purposes. Please use responsibly and in accordance with applicable guidelines.
