Gradient-free Decoder Inversion in Latent Diffusion Models

Seongmin Hong¹, Suh Yoon Jeon¹, Kyeonghyun Lee¹, Ernest K. Ryu², Se Young Chun¹,³

¹Dept. of Electrical and Computer Engineering, ³INMC & IPAI, Seoul National University
²Dept. of Mathematics, University of California, Los Angeles

Accepted to NeurIPS 2024

[arXiv] [video] [github] [bibTeX]




Main Idea

[Main-idea animation]






Main Contribution

[Main-contribution animation]






Convergence Analysis


[Figure: convergence analysis]

Our method provably converges under a mild cocoercivity assumption that is satisfied by recent LDMs. Moreover, with momentum (the inertial Krasnoselskii-Mann (KM) iteration), it also provably converges. Both update rules are sketched below.
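To make the analysis concrete, the LaTeX sketch below writes out the two update rules. The residual F(z) = E(D(z)) - E(x), where D is the decoder, E the encoder, and x the target image, together with the symbols ρ, λ, and β_k, are our notational assumptions for illustration; they are one plausible reading of the forward-step setup, not verbatim definitions from the paper.

% Notation (assumed for illustration): D = decoder, E = encoder, x = target image.
% One natural gradient-free residual is F(z) = E(D(z)) - E(x); at a zero z* of F,
% E(D(z*)) = E(x), i.e., D(z*) matches x up to encoder accuracy.

% Forward step method with step size \rho:
\[
  z_{k+1} = z_k - \rho \, F(z_k).
\]

% Inertial KM iteration: a momentum (inertia) step followed by a forward step,
% with inertia \beta_k and step size \lambda:
\[
  y_k = z_k + \beta_k \, (z_k - z_{k-1}), \qquad
  z_{k+1} = y_k - \lambda \, F(y_k).
\]

% Cocoercivity of F, i.e., \langle F(u) - F(v), u - v \rangle \ge \gamma \|F(u) - F(v)\|^2
% for some \gamma > 0, is the mild assumption under which both iterations converge.

The appeal of the inertial step is that it adds momentum using only the stored previous iterate, so each iteration still costs one decoder and one encoder evaluation.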




Results


[Figure: results]

Our gradient-free decoder inversion:

- significantly reduces computation time and memory usage compared to prior gradient-based methods,
- achieves error levels comparable to gradient-based inversion, and
- enables efficient computation in applications such as noise-space watermarking.





Abstract

In latent diffusion models (LDMs), the denoising diffusion process takes place efficiently in a latent space whose dimension is lower than that of the pixel space. A decoder is typically used to transform representations in the latent space into the pixel space. While the decoder is assumed to have an encoder as an accurate inverse, an exact encoder-decoder pair rarely exists in practice, even though applications often require a precise inversion of the decoder. Prior works on decoder inversion in LDMs employed gradient descent, inspired by inversion methods for generative adversarial networks. However, gradient-based methods require more GPU memory and longer computation time as the latent space grows. For example, recent video LDMs can generate more than 16 frames, but a GPU with 24 GB of memory can only perform gradient-based decoder inversion for 4 frames. Here, we propose an efficient gradient-free decoder inversion for LDMs, which can be applied to diverse latent models. The theoretical convergence of our proposed inversion is investigated not only for the forward step method but also for the inertial Krasnoselskii-Mann (KM) iterations, under a mild cocoercivity assumption that is satisfied by recent LDMs. Our proposed gradient-free method with the Adam optimizer and learning rate scheduling significantly reduces computation time and memory usage compared to prior gradient-based methods, and enables efficient computation in applications such as noise-space watermarking while achieving comparable error levels.
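To make the recipe concrete, here is a minimal PyTorch sketch under stated assumptions: the encoder-based residual E(D(z)) - E(x) is treated as a surrogate gradient and handed to Adam with a cosine learning rate schedule. The residual form, the hyperparameters, and the frozen `decoder`/`encoder` callables are illustrative assumptions, not the paper's exact implementation.

import torch

def gradient_free_decoder_inversion(decoder, encoder, x, num_steps=100, lr=1e-2):
    """Find z with decoder(z) ~= x without backpropagating through the decoder.

    Hypothetical sketch: the latent residual encoder(decoder(z)) - encoder(x)
    is used as a surrogate gradient (the forward-step direction) for Adam.
    `decoder` and `encoder` are assumed to be frozen callables on tensors.
    """
    with torch.no_grad():
        target = encoder(x)                  # E(x), the latent-space anchor
    z = target.clone().requires_grad_(True)  # initialize at the encoder estimate

    opt = torch.optim.Adam([z], lr=lr)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=num_steps)

    for _ in range(num_steps):
        with torch.no_grad():                # no autograd graph: gradient-free
            residual = encoder(decoder(z)) - target   # F(z) = E(D(z)) - E(x)
        opt.zero_grad(set_to_none=True)
        z.grad = residual                    # hand the surrogate direction to Adam
        opt.step()
        sched.step()
    return z.detach()

For a diffusers-style VAE one would first wrap `vae.encode`/`vae.decode` so they return plain tensors before passing them in; because no computation graph is ever built, memory usage stays close to that of a single forward pass.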






BibTeX

 
@misc{hong2024gradient,
      title={Gradient-free Decoder Inversion in Latent Diffusion Models}, 
      author={Seongmin Hong and Suh Yoon Jeon and Kyeonghyun Lee and Ernest K. Ryu and Se Young Chun},
      year={2024},
      eprint={2409.18442},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2409.18442}, 
}