🎥 Pack and Force Your Memory: Long-form and Consistent Video Generation

¹ShanghaiTech University  ²Tencent Hunyuan  ³Nanjing University
* Work done during an internship at Tencent Hunyuan  † Project leader  ‡ Corresponding author

Abstract

Long-form video generation presents a dual challenge: models must capture long-range dependencies while preventing the error accumulation inherent in autoregressive decoding. To address these challenges, we make two contributions. First, for dynamic context modeling, we propose MemoryPack, a learnable context-retrieval mechanism that leverages both textual and image information as global guidance to jointly model short- and long-term dependencies, achieving minute-level temporal consistency. This design scales gracefully with video length, preserves computational efficiency, and maintains linear complexity. Second, to mitigate error accumulation, we introduce Direct Forcing, an efficient single-step approximation strategy that improves training–inference alignment and thereby curtails error propagation during inference. Together, MemoryPack and Direct Forcing substantially enhance the context consistency and reliability of long-form video generation, advancing the practical usability of autoregressive video models.
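
The following is a minimal, hypothetical PyTorch sketch of the MemoryPack idea described above: a fixed-size learnable memory is packed from past-frame features together with text/image guidance tokens, and the chunk currently being generated retrieves context from that memory, so per-chunk cost stays constant as the video grows. All module names, shapes, and hyperparameters here are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn

class MemoryPackSketch(nn.Module):
    """Illustrative sketch only: pack past-frame features and global
    text/image guidance into a fixed number of learnable memory slots,
    then let the new chunk's tokens retrieve context from those slots."""
    def __init__(self, dim=512, mem_slots=64, heads=8):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(mem_slots, dim))  # learnable memory slots
        self.pack = nn.MultiheadAttention(dim, heads, batch_first=True)      # slots attend to history
        self.retrieve = nn.MultiheadAttention(dim, heads, batch_first=True)  # new chunk reads slots

    def forward(self, past_feats, guidance, query_feats):
        # past_feats:  (B, T_past, D) features of previously generated frames
        # guidance:    (B, G, D) text/image global guidance tokens
        # query_feats: (B, T_new, D) tokens of the chunk being generated
        B = past_feats.size(0)
        mem = self.memory.unsqueeze(0).expand(B, -1, -1)
        history = torch.cat([past_feats, guidance], dim=1)
        # pack: compress history + guidance into the fixed-size memory
        packed, _ = self.pack(mem, history, history)
        # retrieve: condition the new chunk on the packed memory
        ctx, _ = self.retrieve(query_feats, packed, packed)
        return query_feats + ctx  # context-conditioned tokens for the video decoder

# toy usage with made-up shapes
# m = MemoryPackSketch()
# out = m(torch.randn(2, 128, 512), torch.randn(2, 16, 512), torch.randn(2, 32, 512))

Because the memory has a fixed number of slots, packing a longer history does not grow the retrieval cost for the next chunk, which is one way to keep the overall complexity linear in video length, as the abstract claims.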

Framework

Framework illustration

Temporal Consistency Video Generation

10s Videos

30s Videos

60s Videos

Citation

@article{wu2025pack,
    title={Pack and Force Your Memory: Long-form and Consistent Video Generation},
    author={Wu, Xiaofei and Zhang, Guozhen and Xu, Zhiyong and Zhou, Yuan and Lu, Qinglin and He, Xuming},
    journal={arXiv preprint arXiv:2510.01784},
    year={2025}
}
                        
To ensure fairness in the evaluation, all of the above conditioning images are taken from VBench.