Learned Multimodal Convolutional Sparse Coding for Guided Image Super-Resolution

Research output: Chapter in Book/Report/Conference proceeding › Conference paper

10 Citations (Scopus)

Abstract

The success of deep learning in various tasks, including solving inverse problems, has triggered the need for designing deep neural networks that incorporate domain knowledge. In this paper, we design a multimodal deep learning architecture for guided image super-resolution, that is, the problem of super-resolving a low-resolution image with the aid of a high-resolution image of another modality. The proposed architecture is based on a novel deep learning model, obtained by unfolding a proximal method that solves the problem of convolutional sparse coding with side information. We apply the proposed architecture to super-resolve near-infrared images using RGB images as side information. Experimental results show average PSNR gains of up to 2.85 dB over state-of-the-art multimodal deep learning and sparse coding models.
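The core idea of the abstract, unfolding a proximal algorithm into network layers, can be illustrated with a minimal ISTA-style sketch. This is not the paper's actual (convolutional, learned) model; the dictionary `D`, the side-information coupling matrix `G`, and all parameter names are hypothetical, and the soft-thresholding step stands in for the proximal operator that a learned unfolded network would parameterize per layer.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm (shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(y, s, D, G, theta=0.1, n_layers=5):
    """Sketch of an unfolded proximal method for sparse coding with
    side information. Each loop iteration corresponds to one network
    layer; in a learned model, D, G, and theta would be trainable.

    y : observed (low-resolution) signal
    s : sparse code of the side-information (guidance) modality
    D : dictionary for the target modality   (hypothetical)
    G : coupling from the side-information code (hypothetical)
    """
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / L, Lipschitz step size
    z = np.zeros(D.shape[1])
    for _ in range(n_layers):
        grad = D.T @ (D @ z - y)            # gradient of 0.5 * ||Dz - y||^2
        z = soft_threshold(z - step * grad + G @ s, step * theta)
    return z
```

With `G` set to zero this reduces to plain unfolded ISTA; the side-information term biases each layer's estimate toward structure shared with the guidance modality.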
Original language: English
Title of host publication: 2019 IEEE International Conference on Image Processing, ICIP 2019 - Proceedings
Publisher: IEEE
Pages: 2891-2895
Number of pages: 5
ISBN (Electronic): 9781538662496
DOIs
Publication status: Published - Sep 2019
Event: IEEE International Conference on Image Processing 2019 - Taipei, Taiwan, Province of China
Duration: 22 Sep 2019 - 25 Sep 2019

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
Volume: 2019-September
ISSN (Print): 1522-4880

Conference

Conference: IEEE International Conference on Image Processing 2019
Abbreviated title: ICIP
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 22/09/19 - 25/09/19

Keywords

  • Guided image super-resolution
  • convolutional sparse coding
  • multimodal deep neural networks

