🎥 Pack and Force Your Memory: Long-form and Consistent Video Generation

¹ShanghaiTech University   ²Tencent Hunyuan   ³Nanjing University
* Work done during an internship at Tencent Hunyuan.  † Project leader.  ‡ Corresponding author.

Abstract

Long-form video generation presents a dual challenge: models must capture long-range dependencies while preventing the error accumulation inherent in autoregressive decoding. To address both challenges, we make two contributions. First, for dynamic context modeling, we propose MemoryPack, a learnable context-retrieval mechanism that leverages both textual and image information as global guidance to jointly model short- and long-term dependencies, achieving minute-level temporal consistency. This design scales gracefully with video length while maintaining linear complexity and computational efficiency. Second, to mitigate error accumulation, we introduce Direct Forcing, an efficient single-step approximation strategy that improves training–inference alignment and thereby curtails error propagation during inference. Together, MemoryPack and Direct Forcing substantially enhance the contextual consistency and reliability of long-form video generation, advancing the practical usability of autoregressive video models.
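
As a rough illustration of the two mechanisms described above, the sketch below shows one way the ideas could be wired together in PyTorch: a fixed-size learnable memory that packs the frame history and is queried with text and image guidance (the MemoryPack idea), and a training loop in which each frame's context comes from the model's own one-step prediction rather than the ground truth (the Direct Forcing idea). All class names, tensor shapes, and design details here are assumptions for exposition, not the authors' released implementation.

# Illustrative sketch only: names, shapes, and mechanisms below are assumptions,
# not the paper's actual architecture or training code.
import torch
import torch.nn as nn


class MemoryPack(nn.Module):
    """Toy context retrieval: a fixed set of learnable slots "packs" the growing
    frame history, and text/image guidance queries the packed memory. Because the
    slot count is fixed, the cost of packing grows linearly with video length."""

    def __init__(self, dim=256, num_slots=32, heads=4):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.pack = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.retrieve = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, history, text_emb, image_emb):
        # history: (B, T, D) latents of frames generated so far
        # text_emb, image_emb: (B, D) global guidance vectors (assumed shapes)
        slots = self.slots.unsqueeze(0).expand(history.size(0), -1, -1)
        packed, _ = self.pack(slots, history, history)      # (B, S, D)
        guide = torch.stack([text_emb, image_emb], dim=1)   # (B, 2, D)
        ctx, _ = self.retrieve(guide, packed, packed)       # (B, 2, D)
        return ctx.mean(dim=1)                              # (B, D)


class TinyFramePredictor(nn.Module):
    """Stand-in autoregressive generator: predicts the next frame latent from
    the previous frame latent and the retrieved memory context."""

    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, prev_frame, context):
        return self.net(torch.cat([prev_frame, context], dim=-1))


def direct_forcing_loss(predictor, memory, frames, text_emb, image_emb):
    """Single-step approximation in the spirit of Direct Forcing: the context for
    frame t is built from the model's own (detached) prediction of frame t-1
    rather than the ground truth, so training sees the same kind of imperfect
    context as inference, which helps curb error accumulation."""
    loss, history = 0.0, frames[:, :1]                      # seed with frame 0
    for t in range(1, frames.size(1)):
        ctx = memory(history, text_emb, image_emb)
        pred = predictor(history[:, -1], ctx)
        loss = loss + (pred - frames[:, t]).pow(2).mean()
        history = torch.cat([history, pred.detach().unsqueeze(1)], dim=1)
    return loss / (frames.size(1) - 1)


# Minimal usage example with random tensors standing in for real video latents.
if __name__ == "__main__":
    b, t, d = 2, 8, 256
    frames = torch.randn(b, t, d)
    text_emb, image_emb = torch.randn(b, d), torch.randn(b, d)
    memory, predictor = MemoryPack(d), TinyFramePredictor(d)
    loss = direct_forcing_loss(predictor, memory, frames, text_emb, image_emb)
    loss.backward()
    print(f"toy direct-forcing loss: {loss.item():.4f}")

The key property this sketch tries to convey is that the packed memory has a fixed number of slots, so the attention cost stays linear in the number of generated frames, while the self-predicted context during training mirrors the imperfect context the model will face at inference time.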

Framework

Framework illustration

Temporally Consistent Video Generation

10s Videos

30s Videos

60s Videos

Citation

@misc{wu2025packforcememorylongform,
      title={Pack and Force Your Memory: Long-form and Consistent Video Generation}, 
      author={Xiaofei Wu and Guozhen Zhang and Zhiyong Xu and Yuan Zhou and Qinglin Lu and Xuming He},
      year={2025},
      eprint={2510.01784},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.01784}, 
}
To ensure fairness in the evaluation, all conditioning images shown above are taken from VBench.