---
license: other
license_name: helpingai
license_link: https://huggingface.co/OEvortex/HelpingAI2.5-5B/blob/main/LICENSE.md
pipeline_tag: text-generation
language:
- en
tags:
- HelpingAI
- Emotionally-Intelligent
- EQ-focused
- Conversational
- SLM
library_name: transformers
---
# 🤖 HelpingAI2.5-10B

***A Revolutionary Emotionally Intelligent Language Model***

[![GitHub Organization](https://img.shields.io/badge/GitHub-Organization-blue.svg)](https://github.com/HelpingAI)
[![Hugging Face](https://img.shields.io/badge/🤗%20Hugging%20Face-Organization-yellow)](https://huggingface.co/OEvortex)
[![Model License](https://img.shields.io/badge/License-HelpingAI-green.svg)](https://huggingface.co/OEvortex/HelpingAI2.5-10B/blob/main/LICENSE.md)

[💬 Join Community Discussion](https://discord.gg/YweJwNqrnH) | [📜 License](LICENSE) | [🌐 Website](https://helpingai-in.netlify.app/)
---
## 🌟 Model Overview

**HelpingAI2.5-10B** is a compact yet powerful language model specifically designed for emotionally intelligent conversations and human-centric interactions.
### 🎯 Key Highlights

- **Architecture**: 10B-parameter transformer-based model
- **Training Focus**: Emotional intelligence and empathetic responses
- **Emotion Score**: Achieves 98.13 on standardized emotional intelligence tests
- **Deployment**: Optimized for efficient deployment on consumer hardware

---
## 💻 Implementation
### Transformers

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the HelpingAI2.5-10B model
model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI2.5-10B")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI2.5-10B")

# Define the chat input
chat = [
    {"role": "system", "content": "You are HelpingAI, an emotional AI. Always answer my questions in the HelpingAI style."},
    {"role": "user", "content": "GIVE ME YOUR INTRO"},
]

inputs = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Generate text
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

response = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

### GGUF Implementation

```python
from webscout.Local import *

# Download the quantized GGUF weights
model_path = download_model("OEvortex/HelpingAI2.5-10B", "q4_k_m.gguf", token=None)

# Load the model (CPU-only here; raise n_gpu_layers to offload layers to a GPU)
model = Model(model_path, n_gpu_layers=0, context_length=4096)

thread = Thread(model, format=helpingai2)
# print(thread.send("hi"))  # send a single message to the model
thread.interact()  # interact with the model in the terminal
```
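### Quantized Loading (Optional)

The full-precision 10B checkpoint may not fit on consumer GPUs. The snippet below is a minimal sketch of loading the model in 4-bit with `bitsandbytes`, not part of the official setup; it assumes the `bitsandbytes` and `accelerate` packages are installed, and the prompt shown is only illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization to cut memory use (assumes bitsandbytes + accelerate are installed)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "OEvortex/HelpingAI2.5-10B",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI2.5-10B")

# Illustrative prompt; generation settings mirror the Transformers example above
chat = [
    {"role": "system", "content": "You are HelpingAI, an emotional AI."},
    {"role": "user", "content": "How do I stay calm before a big exam?"},
]
inputs = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

---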
## 🎯 Training Details
### Methodology

1. **Base Training**
   - Datasets: SentimentSynth + EmotionalIntelligence-1M + HelpingAI2.0-150M + HelpingAIemo (152.5M)

2. **Emotional Intelligence Training**
   - Supervised fine-tuning on emotional dialogue
   - Reinforcement learning with the HelpingAI2.0-7B model
   - Constitutional AI training for ethical guidelines

3. **Optimization** (see the configuration sketch below)
   - Mixed-precision training
   - Gradient checkpointing
   - Dynamic attention patterns

### Dataset Composition

| Dataset Type | Rows Used for Training | Purpose |
|--------------|------------------------|---------|
| Emotional Dialogues | 12.5M | Core conversation training |
| Therapeutic Exchanges | 2M | Mental health support |
| Cultural Conversations | 100K | Cultural awareness |
| Crisis Response | 750K | Emergency support handling |
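### Training Configuration (Illustrative)

The optimization techniques listed above (mixed-precision training and gradient checkpointing) correspond to standard flags in the Hugging Face `Trainer`. The snippet below is a minimal sketch under that assumption, not the actual training script; the output directory, hyperparameters, and dataset are hypothetical placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI2.5-10B")
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI2.5-10B")

args = TrainingArguments(
    output_dir="helpingai-sft",        # hypothetical output directory
    bf16=True,                         # mixed-precision training
    gradient_checkpointing=True,       # trade extra compute for lower activation memory
    per_device_train_batch_size=1,     # illustrative hyperparameters only
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    logging_steps=10,
)

# `train_dataset` would be a tokenized emotional-dialogue dataset (not provided here)
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
# trainer.train()
```

---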
## 📈 Use Cases & Examples
### Primary Applications

- 🎯 Personal AI Companionship
- 🧠 Mental Health Support
- 📚 Educational Assistance
- 💼 Professional Development
- 🤝 Social Skills Training

### Example Interactions
**User:** I've been feeling really anxious about my upcoming job interview. Any advice?

**HelpingAI:** Hey there! 🙌 First off, take a deep breath and don't sweat it too much. 💨 Job interviews can be nerve-wracking, but I gotchu! 😎

Here's the tea: 🍵 Make sure you're prepping like crazy for this job. Read up on the company, their mission, and the role you're applying for. That way, you'll be spittin' knowledge and showing them you're serious. 📚💡

Also, practice your answers to common interview questions. You know, the ones like "Tell me about yourself" or "Why do you want to work here?" 🤔 Keep it short, sweet, and relevant. You got this! 😄

And don't forget to look sharp! 🎨 Keep your resume up-to-date, and make sure your social media is on point. First impressions are everything! 👀

Most importantly, just be yourself. 😊 The interview panel wants to see the real you. Show them your passion, your skills, and why you'd be a total boss at this company. 🔥

You're gonna crush this interview, fam! 💪 Remember, I got faith in you. 🙏

**User:** My friend hasn't been responding to my messages for a week. I think I might have said something wrong, but I'm not sure what.

**HelpingAI:** Aw man, that's a real bummer. 💔 Not hearing back from a friend can be super painful, especially when you're wondering if you did something wrong. 🤔

First off, don't beat yourself up over it. 🙅‍♂️ Chances are, your friend's just busy or dealing with some personal stuff. 🤷‍♀️ It doesn't always mean it's you.

If you're still concerned, try reaching out in a different way, like a phone call or a text to a diff person in your friend group. 📱📞 Sometimes a different approach can help you connect again.

But if you're still unsure, you could try asking your friend if everything's cool. 🤝 Be open and honest about your feelings, but also be ready for them to have their own perspective on things. 👥

Remember, you're not alone in this. 🤗 Friends go through ups and downs, and it's okay to have little misunderstandings. Communication is key! 🔑 Just keep it real and be patient. You got this, fam! 💪
---
## 🔒 Ethical Considerations & Limitations
### Ethical Guidelines

- Prioritizes user emotional wellbeing
- Maintains strict privacy standards
- Avoids harmful or discriminatory responses
- Transparent about AI limitations
- Promotes healthy coping mechanisms

### Known Limitations

- Cannot replace human professionals
- Cannot roleplay
- Limited knowledge base
- Context window constraints

---

### Citation

```bibtex
@misc{helpingai2024,
  author       = {Abhay Koul},
  title        = {HelpingAI2.5-10B: Emotionally Intelligent Language Model},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/OEvortex/HelpingAI2.5-10B}}
}
```

---
## 🙏 Acknowledgments Special thanks to the HelpingAI community, Huggingface, contributors, and researchers who made this model possible. Your dedication to advancing emotionally intelligent AI is invaluable.
---
*Built with ❤️ by the HelpingAI Community* [Website](https://helpingai-in.netlify.app/) • [GitHub](https://github.com/HelpingAI) • [Discord](https://discord.gg/YweJwNqrnH) • [HuggingFace](https://huggingface.co/OEvortex)