Sample-Level Weighting for Multi-Task Learning with Auxiliary Tasks

Emilie Grégoire, Muhammad Hafeez Chaudhary, Sam Verboven

Research output: Working paper › Preprint


Abstract

Multi-task learning (MTL) can improve the generalization performance of neural networks by sharing representations with related tasks. Nonetheless, MTL can also degrade performance through harmful interference between tasks. Recent work has pursued task-specific loss weighting as a solution to this interference. However, existing algorithms treat tasks as atomic, and thus cannot explicitly separate harmful from helpful signals at a finer granularity than the task level. To address this, we propose SLGrad, a sample-level weighting algorithm for multi-task learning with auxiliary tasks. Through sample-specific task weights, SLGrad reshapes the task distributions during training to eliminate harmful auxiliary signals and augment useful task signals. Substantial generalization performance gains are observed on (semi-)synthetic datasets and common supervised multi-task problems.
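To make the sample-level weighting idea concrete, below is a minimal PyTorch sketch. It is not the authors' SLGrad update rule: the weighting heuristic shown here (clamped cosine similarity between each auxiliary sample's gradient and the main-task batch gradient) and all names (`weighted_mtl_step`, `main_loss_fn`, `aux_loss_fn`, the two-head model) are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of sample-level task weighting in MTL (NOT SLGrad itself).
# Assumption: weights come from the cosine similarity between each auxiliary
# sample's gradient and the main-task gradient, so conflicting samples are dropped.
import torch

def flat_grad(loss, params):
    """Flatten the gradient of `loss` w.r.t. `params` into a single vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def weighted_mtl_step(model, optimizer, x, y_main, y_aux, main_loss_fn, aux_loss_fn):
    params = [p for p in model.parameters() if p.requires_grad]
    main_pred, aux_pred = model(x)            # assumed shared trunk with two heads
    main_loss = main_loss_fn(main_pred, y_main)
    g_main = flat_grad(main_loss, params)

    # Per-sample auxiliary losses (aux_loss_fn assumed to use reduction='none').
    aux_losses = aux_loss_fn(aux_pred, y_aux)  # shape: (batch,)

    # One backward pass per sample: O(batch) cost, acceptable for a sketch only.
    weights = []
    for loss_i in aux_losses:
        g_i = flat_grad(loss_i, params)
        cos = torch.dot(g_i, g_main) / (g_i.norm() * g_main.norm() + 1e-12)
        weights.append(torch.clamp(cos, min=0.0))  # zero out conflicting samples
    weights = torch.stack(weights).detach()

    total = main_loss + (weights * aux_losses).mean()
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```

Clamping negative similarities to zero removes auxiliary samples whose gradients point against the main task, which mirrors the abstract's goal of eliminating harmful auxiliary signals while keeping useful ones; the actual per-sample weighting rule used by SLGrad is defined in the paper.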
Original language: English
Publisher: ArXiv
Publication status: Published - 7 Jun 2023

Keywords

  • Multi-task learning
  • Machine learning
  • Deep learning
  • Dynamic weighting algorithms

