Speech-interfaced systems for Human-Machine Interaction (HMI)
Description: The interaction between humans and machines is a holy grail of science and technology, and speech is one of the most natural media, used for this purpose for more than 40 years. In the last decade, thanks to the advent of modern Deep Learning techniques, the reliability of Automatic Speech Recognition (ASR) for HMI has improved tremendously. However, many challenges still need to be faced by the scientific community, mostly because such systems are required to work under harsh acoustic conditions, characterized by multiple overlapping speakers, different types of noise, reverberation, and unknown microphone positions. The present research focuses on developing innovative data-driven solutions for enhancing the quality of acquired speech signals in order to improve the performance of ASR systems in such difficult acoustic scenarios.
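As a simple illustration of the kind of speech-enhancement front end this line of research improves upon (a classic baseline, not the project's actual data-driven method), single-channel spectral subtraction estimates a noise spectrum from speech-free frames and subtracts it before resynthesis. A minimal NumPy sketch, with all parameter choices (frame length, floor factor) being illustrative assumptions:

```python
import numpy as np

def spectral_subtraction(noisy, noise_frames=10, frame_len=256, hop=128):
    """Enhance a noisy signal by subtracting an estimated noise magnitude
    spectrum (classic single-channel baseline; parameters are illustrative)."""
    window = np.hanning(frame_len)
    # Slice the signal into overlapping windowed frames
    n_frames = 1 + (len(noisy) - frame_len) // hop
    frames = np.stack([noisy[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    # Estimate the noise magnitude from the first frames,
    # assumed here to contain no speech
    noise_mag = mag[:noise_frames].mean(axis=0)
    # Subtract the estimate, flooring at a small fraction of it
    # to limit musical-noise artifacts
    enhanced_mag = np.maximum(mag - noise_mag, 0.1 * noise_mag)
    enhanced_spec = enhanced_mag * np.exp(1j * phase)
    out_frames = np.fft.irfft(enhanced_spec, n=frame_len, axis=1)
    # Overlap-add resynthesis with the noisy phase
    out = np.zeros(len(noisy))
    for i, frame in enumerate(out_frames):
        out[i * hop:i * hop + frame_len] += frame * window
    return out
```

Modern deep-learning approaches of the kind pursued here replace the fixed noise estimate with time-frequency masks or waveforms predicted by neural networks, which cope far better with overlapping speakers and reverberation.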
Laboratory: Computational Audio Processing LAB, DII - UnivPM
Contact Person: Stefano Squartini
Collaborations:
Projects: AGEVOLA (Fondazione CARITRO 2019), MOHMI – MIRACLE (Piattaforma Domotica 2020)
People: Stefano Squartini, Emanuele Principi, Samuele Cornell |