Robust speaker localization for real-world robots

Georgios Athanasopoulos, Werner Verhelst, Hichem Sahli

Research output: Contribution to journal › Article › peer-review



Autonomous human–robot interaction ultimately requires an artificial audition module that allows the robot to process and interpret a combination of verbal and non-verbal auditory inputs. A key component of such a module is acoustic source localization, which not only enables the robot to simultaneously localize multiple persons and auditory events of interest in the environment, but also provides input to auditory tasks such as speech enhancement and speech recognition. The use of microphone arrays on robots is an efficient and commonly applied approach to the localization problem. In this paper, moving away from simulated environments, we examine acoustic localization under real-world conditions and limitations. Our approach proposes a series of enhancements, taking into account the imperfect frequency response of the array microphones and addressing the influence of the robot's shape and surface material. Motivated by the importance of the signal's phase information, we introduce a novel pre-processing step for enhancing the acoustic localization. Results show that the proposed approach improves the localization performance in jointly noisy and reverberant conditions and allows a humanoid robot to locate multiple speakers in a real-world environment.
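The abstract does not spell out the paper's pre-processing step, but its emphasis on the signal's phase information points to the standard phase-based family of localization methods: estimating the time difference of arrival (TDOA) between microphone pairs with the generalized cross-correlation with phase transform (GCC-PHAT), where the cross-spectrum is whitened to unit magnitude so that only phase drives the correlation peak. The following is a minimal, self-contained sketch of that baseline technique (naive DFT, two microphones, toy impulse signals); it illustrates GCC-PHAT in general and is not the authors' specific algorithm:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (fine for short toy signals)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def gcc_phat(sig, ref):
    """Estimate the delay (in samples) of `sig` relative to `ref`.

    GCC-PHAT: the cross-spectrum is normalized to unit magnitude (PHAT
    weighting), so the inverse transform's peak location depends only on
    the phase difference between the two channels -- the property that
    makes the method comparatively robust to spectral coloration.
    """
    N = len(sig)
    S, R = dft(sig), dft(ref)
    cross = [s * r.conjugate() for s, r in zip(S, R)]
    # PHAT weighting: keep only the phase of each cross-spectrum bin
    phat = [c / max(abs(c), 1e-12) for c in cross]
    cc = [v.real for v in idft(phat)]
    # Peak index of the circular correlation, mapped to a signed lag
    k = max(range(N), key=lambda i: cc[i])
    return k if k < N // 2 else k - N

# Toy example: the second channel is the first delayed by 5 samples,
# e.g. the extra acoustic path length to a more distant microphone.
ref = [0.0] * 64
sig = [0.0] * 64
for i in (10, 20, 30):
    ref[i] = 1.0
    sig[i + 5] = 1.0
delay = gcc_phat(sig, ref)  # -> 5
```

Given the estimated lag, the bearing to the source follows from elementary geometry: with microphone spacing `d`, sampling rate `fs`, and speed of sound `c`, the direction of arrival satisfies `sin(theta) = delay * c / (fs * d)` under a far-field assumption. A real implementation would use an FFT and interpolate around the correlation peak for sub-sample accuracy.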
Original language: English
Pages (from-to): 129-153
Number of pages: 25
Journal: Computer Speech & Language
Issue number: 1
Publication status: Published - 2 Apr 2015


