A plethora of new sensing devices continues to hit the market. The sensor types are highly diverse, ranging from small single-modality devices to complex sensors producing large volumes of multi-sensorial, high-dimensional datasets. Moreover, these sensors increasingly operate in distributed settings. Individual visual sensors, or arrays of them, allow sensing the 3D world in an ever more complete fashion through a rich variety of multimodal devices such as RGB cameras, depth cameras, plenoptic cameras, thermal cameras, and LIDAR, to name a few. In parallel, artificial intelligence and the associated deep learning technologies have reached a level of maturity, supported by growing computational power and the availability of specialized hardware.

In this project we intend to merge these two worlds by designing advanced signal processing systems that combine classical, signal-model-driven approaches with interpretable learning-based technologies to boost the performance of signal and data processing across many applications. The main application domains targeted include: health; Industry 4.0; environmental, city & earth observation (including meteorology and climatology); and immersive & social media.