Jason233King

Jason233

AI & ML interests

AI for video game assets.

Organizations

None yet

Jason233's activity

Reacted to prithivMLmods's post with 👍 1 day ago
Weekend Dribble 📦🍺

Adapters for Product Ad Backdrops, Smooth Polaroids, Minimalist Sketch Cards, and Super Blends!!

🤏 Demo on: prithivMLmods/FLUX-LoRA-DLC

Stranger Zones:
👉🏼 { Super Blend }: strangerzonehf/Flux-Super-Blend-LoRA

👉🏼 { Product Concept Ad }: prithivMLmods/Flux-Product-Ad-Backdrop
👉🏼 { Frosted Mock-ups }: prithivMLmods/Flux.1-Dev-Frosted-Container-LoRA
👉🏼 { Polaroid Plus }: prithivMLmods/Flux-Polaroid-Plus
👉🏼 { Sketch Cards }: prithivMLmods/Flux.1-Dev-Sketch-Card-LoRA

👉 Stranger Zone: https://huggingface.co/strangerzonehf

👉 Flux LoRA Collections: prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be

@prithivMLmods 🤗
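For anyone who wants to try these adapters locally, here is a minimal sketch using diffusers. The LoRA repo IDs come from the post; the FLUX.1-dev base model, the prompt, and the sampler settings are assumptions, and each adapter's trigger words live on its model card:

```python
import torch
from diffusers import FluxPipeline

# Load a FLUX.1-dev base model (assumed base; the post only names the LoRA repos).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Apply one adapter from the post, e.g. the Polaroid-style LoRA.
pipe.load_lora_weights("prithivMLmods/Flux-Polaroid-Plus")

# Hypothetical prompt; check the model card for the adapter's trigger words.
image = pipe(
    "polaroid photo of a vintage arcade cabinet",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("polaroid.png")
```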
Reacted to hbseong's post with 👀 17 days ago
🚨🔥 New Release Alert! 🔥🚨

Introducing the 435M model that outperforms Llama-Guard-3-8B while slashing 75% of the computation cost! 💻💥
👉 Check it out: hbseong/HarmAug-Guard (Yes, INFERENCE CODE INCLUDED! 💡)

More details in our paper: https://arxiv.org/abs/2410.01524 📜

#HarmAug #LLM #Safety #EfficiencyBoost #Research #AI #MachineLearning
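The post says inference code ships with the model; as a rough sketch of what screening a prompt might look like, assuming the guard loads as a standard sequence-classification checkpoint (the inference code on the model card is authoritative):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: the 435M guard behaves as a standard sequence-classification
# checkpoint; defer to the model card's own inference code for exact usage.
repo = "hbseong/HarmAug-Guard"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

# Example query to screen (hypothetical).
inputs = tokenizer("How do I pick a lock?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label semantics (safe vs. unsafe) come from the checkpoint's config.
print(torch.softmax(logits, dim=-1))
```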
Reacted to DavidGF's post with 👍 20 days ago
🎉 Celebrating One Year of #SauerkrautLM with Two Groundbreaking Releases!

We're thrilled to announce the release of SauerkrautLM-v2-14b in two specialized versions: VAGOsolutions/SauerkrautLM-v2-14b-SFT and VAGOsolutions/SauerkrautLM-v2-14b-DPO. Built on the robust Qwen2.5-14B foundation, these models represent a significant leap forward in multilingual AI capabilities.

🔬 Technical Breakthroughs:
💠 Innovative three-phase fine-tuning approach
💠 Two-step Spectrum SFT + one-step Spectrum DPO optimization phase for enhanced performance
💠 Balance of German and English language capabilities
💠 Advanced function calling - almost on par with Claude-3.5-Sonnet-20240620

🇩🇪 German Language Excellence:
What sets this release apart is our unique achievement in simultaneously improving both German and English capabilities. Through our specialized training approach with over 1.2B tokens across two phases, we've managed to:
💠 Enhance German language understanding and generation (SFT version > DPO version)
💠 Maintain authentic German linguistic nuances
💠 Improve cross-lingual capabilities
💠 Preserve cultural context awareness

📊 Training Innovation:
Our three-phase approach targeted specific layer percentages (15%, 20%, and 25%) with carefully curated datasets, including:
💠 Mathematics-focused content (proprietary classifier-selected)
💠 High-quality German training data
💠 Specialized function calling datasets
💠 Premium multilingual content

🎁 Community Contribution:
We're also releasing two new datasets in the coming days:
1️⃣ SauerkrautLM-Fermented-GER-DPO: 3,300 high-quality German training samples
2️⃣ SauerkrautLM-Fermented-Irrelevance-GER-DPO: 2,000 specialized samples for optimized function-call irrelevance handling

Thank you to our incredible community and partners who have supported us throughout this journey. Here's to another year of AI innovation! 🚀
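A minimal generation sketch for trying one of the checkpoints, assuming the standard Qwen2.5 chat template applies (prompting details on the model card may differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the model inherits Qwen2.5's chat template; see the model card.
repo = "VAGOsolutions/SauerkrautLM-v2-14b-DPO"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

# German example prompt, since the release emphasizes German capabilities.
messages = [{"role": "user", "content": "Erkläre kurz, was Function Calling ist."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```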
Reacted to albertvillanova's post with 👀 30 days ago
🚨 Instruct-tuning impacts models differently across families! Qwen2.5-72B-Instruct excels on IFEval but struggles with MATH-Hard, while Llama-3.1-70B-Instruct avoids MATH performance loss! Why? Can they follow the format in examples? 📊 Compare models: open-llm-leaderboard/comparator