DZ-TDPO: Non-Destructive Temporal Alignment for Mutable State Tracking in Long-Context Dialogue
Abstract
The DZ-TDPO framework improves long-context dialogue systems by combining dynamic KL constraints with a temporal attention bias to resolve conflicts between evolving user intents and historical context, achieving high win rates and zero-shot generalization.
Long-context dialogue systems suffer from State Inertia, where static constraints prevent models from resolving conflicts between evolving user intents and established historical context. To address this, we propose DZ-TDPO, a non-destructive alignment framework that combines conflict-aware dynamic KL constraints with a calibrated temporal attention bias. Experiments on the Multi-Session Chat (MSC) dataset demonstrate that DZ-TDPO achieves state-of-the-art win rates (55.4% on Phi-3.5) while maintaining robust zero-shot generalization. Our scaling analysis reveals a "Capacity-Stability Trade-off": while smaller models incur an "alignment tax" (a perplexity surge) to overcome historical inertia, the larger Qwen2.5-7B model achieves a 50.8% win rate with negligible perplexity overhead. This confirms that State Inertia can be alleviated via precise attention regulation rather than destructive weight updates, preserving general capabilities (MMLU) across model scales. Code and data are available at: https://github.com/lyj20071013/DZ-TDPO
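The abstract names the mechanism but not its form, so here is a minimal, hedged PyTorch sketch of one way a conflict-aware dynamic KL constraint could be realized: scaling the DPO β coefficient per example by a conflict score in [0, 1]. The function name, the source of the conflict score, and the `beta_min`/`beta_max` schedule are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dynamic_kl_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                        ref_chosen_logps, ref_rejected_logps,
                        conflict_scores, beta_min=0.05, beta_max=0.5):
    """Illustrative DPO loss with a per-example KL strength.

    All arguments are 1-D tensors of shape (batch,); `conflict_scores`
    lies in [0, 1]. High conflict between the new user intent and the
    established history relaxes the constraint (beta -> beta_min),
    letting the policy depart from the reference model; low conflict
    keeps it tightly anchored (beta -> beta_max).
    """
    beta = beta_max - (beta_max - beta_min) * conflict_scores

    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps

    # Standard DPO logit, but with an example-dependent beta.
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```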
Community
🔥 Solving "State Inertia" in Long-Context LLMs!
We introduce DZ-TDPO, a non-destructive alignment framework.
Problem: Standard DPO incurs an "Alignment Tax" (PPL explodes to >100) when updating user states in long contexts.
Solution: Dynamic KL Constraints + Dual-Zone Temporal Attention (a rough sketch of the attention bias appears below this post).
Result: SOTA 55.4% win rate on the MSC dataset with zero PPL degradation (PPL ~26.0).
🚀 Code & SOTA Model (Phi-3.5) are released! Check the Linked Models section.
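For the "Dual-Zone Temporal Attention" half, here is the rough sketch referenced above. It assumes the dual-zone split amounts to an additive attention-logit bias that treats keys before a session boundary (the historical zone) differently from keys after it (the recent zone); the boundary and bias magnitudes below are illustrative placeholders, not the calibrated values from the paper.

```python
import torch

def dual_zone_temporal_bias(seq_len, boundary, recent_bias=0.0,
                            history_bias=-2.0, device="cpu"):
    """Illustrative additive attention bias of shape (seq_len, seq_len).

    Keys at positions < `boundary` (historical zone) receive `history_bias`;
    keys at positions >= `boundary` (recent zone) receive `recent_bias`.
    Added to attention logits before softmax, the bias softly steers queries
    toward the current session without erasing access to history.
    """
    key_pos = torch.arange(seq_len, device=device)
    per_key = torch.where(
        key_pos < boundary,
        torch.tensor(float(history_bias), device=device),
        torch.tensor(float(recent_bias), device=device),
    )
    # Broadcast the per-key bias over every query position.
    return per_key.unsqueeze(0).expand(seq_len, seq_len)

# Example: treat the last 256 tokens of a 4096-token context as the recent zone.
# scores = (q @ k.transpose(-2, -1)) / d_head**0.5
# attn = (scores + dual_zone_temporal_bias(4096, boundary=4096 - 256)).softmax(dim=-1)
```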
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Adaptive Focus Memory for Language Models (2025)
- DMA: Online RAG Alignment with Human Feedback (2025)
- Behavior-Equivalent Token: Single-Token Replacement for Long Prompts in LLMs (2025)
- Reward Forcing: Efficient Streaming Video Generation with Rewarded Distribution Matching Distillation (2025)
- ST-PPO: Stabilized Off-Policy Proximal Policy Optimization for Multi-Turn Agents Training (2025)
- Token-Level Inference-Time Alignment for Vision-Language Models (2025)
- Context-aware Fairness Evaluation and Mitigation in LLMs (2025)
Models citing this paper 1