Description
Invited Talk at the mini-symposium "MS112 Algorithm Unrolling: Bridging the Gap between Theory and Practice - Part II of II".

Abstract: Deep unfolding methods (a.k.a. algorithm unrolling) design deep neural networks (DNNs) as learned variants of iterative algorithms for various signal processing tasks, yielding models that are interpretable by design. Likewise, deep unfolding recurrent neural networks (RNNs) are obtained by unrolling iterative algorithms that are applied sequentially in time. In this presentation, we elaborate on a generic deep unfolding RNN architecture coined reweighted-RNN, originally designed for the task of video reconstruction. We investigate theoretical aspects of reweighted-RNN and similar models in terms of their generalization ability. Specifically, we derive generalization error bounds (GEBs) via Rademacher complexity analysis; to our knowledge, these are the first generalization bounds proposed for deep unfolding RNNs. We conduct a series of experiments to empirically evaluate the performance and the GEBs of reweighted-RNN, in both regression (video reconstruction and super-resolution) and classification (language modelling) settings. Our results indicate that reweighted-RNN outperforms traditional RNNs in both settings, and the experiments allow us to relate the empirical generalization error to the theoretical bounds. Furthermore, we show that reweighted-RNN achieves tight theoretical error bounds with minimal decrease in accuracy when trained with explicit regularization and weight constraints.
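The core idea behind deep unfolding described in the abstract can be sketched with a minimal example: unrolling the classical ISTA iteration for sparse recovery, where each "layer" of the network mirrors one iteration of the algorithm. This is an illustrative assumption, not the reweighted-RNN of the talk (which unrolls a reweighted $\ell_1$-$\ell_1$ minimization with learned, layer-dependent weights); here the weights are fixed to their analytic values, whereas a learned model would train `W1`, `W2`, and the threshold per layer.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm (element-wise shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(y, A, num_layers=5, lam=0.1):
    """Forward pass of a toy unrolled ISTA 'network' (illustrative sketch).

    Each layer computes x <- soft_threshold(W2 @ x + W1 @ y, lam / L),
    i.e. one ISTA iteration for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    In a deep unfolding model, W1, W2, and the threshold would be
    trainable per layer instead of fixed as below.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-fit gradient
    W1 = A.T / L                             # "input" weight
    W2 = np.eye(A.shape[1]) - (A.T @ A) / L  # "recurrent"/state weight
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):
        x = soft_threshold(W2 @ x + W1 @ y, lam / L)
    return x
```

For example, measuring a sparse vector through a random matrix and running the unrolled network recovers a sparse estimate whose residual shrinks with depth; replacing the fixed weights with trained ones is what turns this iteration into a deep unfolding network.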
Period | 22 Sep 2022 |
---|---|
Event title | Conference on Mathematics of Data Science 2022 |
Event type | Conference |
Location | San Diego, California, United States |
Degree of Recognition | International |
Related content

Research output

- Interpretable Deep Recurrent Neural Networks via Unfolding Reweighted $\ell_1$-$\ell_1$ Minimization: Architecture Design and Generalization Analysis
  Research output: Contribution to journal › Article
- Generalization Error Bounds for Deep Unfolding RNNs
  Research output: Chapter in Book/Report/Conference proceeding › Conference paper › Research
- Designing Interpretable Recurrent Neural Networks for Video Reconstruction Via Deep Unfolding
  Research output: Contribution to journal › Article › peer-review

Projects

- Interpretable and Explainable Deep Learning for Video Processing
  Project: Fundamental