Deblurring via Video Diffusion Models

Authors: Yan Wang, Haoyang Lone
Conference: ICIC 2024 Posters, Tianjin, China, August 5-8, 2024
Pages: 509-520
Keywords: Computer vision, Video deblurring, Diffusion model

Abstract

Video deblurring poses a significant challenge due to the intricate nature of blur, which often arises from a confluence of factors such as camera shake, object motion, and variations in depth. Diffusion models and video diffusion models have achieved remarkable results in image and video generation, respectively; in particular, Diffusion Probabilistic Models (DPMs) have been successfully applied to image deblurring, suggesting considerable potential for video diffusion models in video deblurring. However, because diffusion models demand large amounts of data and long training times, their practicality for video deblurring remains uncertain. To investigate this feasibility, this paper proposes a diffusion model tailored to video deblurring. Its architecture and part of its parameters are inherited from a pre-trained text-to-video diffusion model, and a two-stage training process enables it to deblur video with relatively few trainable parameters and a modest amount of training data. The proposed model is compared with baseline models and achieves state-of-the-art results.
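For illustration, the sketch below shows one common way a conditional video diffusion model can be trained for deblurring: a sharp clip is noised under a DDPM schedule and a denoiser predicts that noise while being conditioned on the blurry clip via channel-wise concatenation. The TinyVideoDenoiser, the concatenation-based conditioning, and all hyperparameters here are illustrative assumptions, not the paper's actual architecture or its two-stage training procedure.

```python
import torch
import torch.nn as nn

# Hypothetical denoiser: a tiny 3D conv network standing in for the
# pre-trained text-to-video U-Net described in the paper. It predicts
# the noise added to the sharp clip, conditioned on the blurry clip
# by channel-wise concatenation (one common conditioning scheme; the
# paper's exact mechanism may differ).
class TinyVideoDenoiser(nn.Module):
    def __init__(self, ch=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2 * ch, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv3d(hidden, ch, 3, padding=1),
        )

    def forward(self, noisy_sharp, blurry, t):
        # t is unused in this toy model; a real model embeds the timestep.
        x = torch.cat([noisy_sharp, blurry], dim=1)  # condition on blur
        return self.net(x)

T_STEPS = 1000
betas = torch.linspace(1e-4, 0.02, T_STEPS)       # linear DDPM schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)    # cumulative alpha_bar_t

def training_step(model, sharp, blurry):
    """One DDPM-style step: noise the sharp clip, predict the noise."""
    b = sharp.shape[0]
    t = torch.randint(0, T_STEPS, (b,))
    a_bar = alphas_bar[t].view(b, 1, 1, 1, 1)
    noise = torch.randn_like(sharp)
    noisy = a_bar.sqrt() * sharp + (1 - a_bar).sqrt() * noise
    pred = model(noisy, blurry, t)
    return nn.functional.mse_loss(pred, noise)

model = TinyVideoDenoiser()
sharp = torch.randn(2, 3, 8, 64, 64)    # (batch, channels, frames, H, W)
blurry = torch.randn(2, 3, 8, 64, 64)
loss = training_step(model, sharp, blurry)
loss.backward()
print(loss.item())
```

In a fine-tuning setup like the one the abstract describes, the denoiser's backbone weights would come from the pre-trained text-to-video model, with only the newly added conditioning layers (and, in a second stage, selected backbone layers) left trainable, which is what keeps the trainable-parameter and data budget small.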