A Deep-Unfolded Reference-Based RPCA Network For Video Foreground-Background Separation

Huynh Van Luong, Boris Joukovsky, Yonina Eldar, Nikolaos Deligiannis

Research output: Chapter in Book/Report/Conference proceeding › Conference paper



Deep unfolded neural networks are designed by unrolling the iterations of optimization algorithms. They can be shown to achieve faster convergence and higher accuracy than their optimization counterparts. This paper proposes a new deep-unfolding-based network design for the problem of Robust Principal Component Analysis (RPCA), with application to video foreground-background separation. Unlike existing designs, our approach focuses on modeling the temporal correlation between the sparse representations of consecutive video frames. To this end, we unfold an iterative algorithm for solving reweighted l1-l1 minimization; this unfolding leads to a different proximal operator (that is, a different activation function) adaptively learned per neuron. Experiments on the moving MNIST dataset show that the proposed network outperforms a recently proposed state-of-the-art RPCA network in the task of video foreground-background separation.
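To illustrate the unfolding idea described above, the sketch below unrolls a plain ISTA-style solver for l1-regularized least squares into a fixed number of "layers". This is a deliberately simplified, hypothetical example (plain l1 instead of the paper's reweighted l1-l1 objective, and no reference frame): each unrolled iteration applies a gradient step followed by a soft-thresholding proximal operator, and the per-layer thresholds `theta[k]` stand in for the parameters that a deep-unfolded network would learn, which is what allows the proximal operator (activation) to adapt per layer.

```python
# Hypothetical sketch: deep unfolding of ISTA for
#   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
# Each unrolled iteration = one network "layer"; the per-layer
# thresholds theta[k] would be learnable parameters in practice.

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1, applied element-wise.
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def unfolded_ista(A, b, theta, step):
    """Run len(theta) unrolled ISTA layers with per-layer thresholds."""
    At = transpose(A)
    x = [0.0] * len(A[0])
    for t in theta:  # one loop pass == one unrolled layer
        residual = [ri - bi for ri, bi in zip(matvec(A, x), b)]
        grad = matvec(At, residual)
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, grad)], t)
    return x

# Toy usage: recover a sparse vector from 2 linear measurements.
A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]
b = [2.0, 0.0]
x_hat = unfolded_ista(A, b, theta=[0.1] * 50, step=0.5)
# x_hat is sparse: only the first coordinate remains active.
```

In the reference-based network of the paper, the per-layer proximal operator additionally depends on the sparse code of the previous frame (the l1-l1 term), rather than being a fixed soft-thresholding as in this simplified sketch.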
Original language: English
Title of host publication: 28th European Signal Processing Conference
Number of pages: 5
Publication status: Published - 2 Oct 2020
Event: 28th European Signal Processing Conference - Amsterdam, Netherlands
Duration: 24 Aug 2020 - 28 Aug 2020
Conference number: 28


Conference: 28th European Signal Processing Conference
Abbreviated title: EUSIPCO 2020

