Paper
Visual Context-driven Audio Feature Enhancement for Robust End-to-End Audio-Visual Speech Recognition
Joanna Hong, Minsu Kim, Daehun Yoo, Yong Man Ro
Interspeech
2022
In practice, the audio fed to a speech recognizer is usually contaminated by various kinds of noise, which is a major obstacle to deploying speech recognition models in the real world. This paper tackles that setting by feeding the speaker's talking face alongside the audio, using the lip information in the video so that recognition remains stable even under severe noise.
This paper focuses on designing a noise-robust end-to-end Audio-Visual Speech Recognition (AVSR) system. To this end, we propose the Visual Context-driven Audio Feature Enhancement module (V-CAFE), which enhances the noisy input audio with the help of audio-visual correspondence. The proposed V-CAFE is designed to capture the transition of lip movements, namely the visual context, and to generate a noise reduction mask conditioned on that visual context. Through this context-dependent modeling, the ambiguity in viseme-to-phoneme mapping can be resolved for mask generation. The noisy representations are masked out with the noise reduction mask, yielding enhanced audio features. The enhanced audio features are then fused with the visual features and passed to an encoder-decoder model composed of a Conformer and a Transformer for speech recognition. We show that the proposed end-to-end AVSR with V-CAFE further improves the noise-robustness of AVSR. The effectiveness of the proposed method is evaluated on noisy speech recognition and overlapped speech recognition experiments using the two largest audio-visual datasets, LRS2 and LRS3.
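The core mechanism, visual-context-driven masking of noisy audio features, can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea (cross-attention from visual queries to audio, followed by a sigmoid mask), not the paper's actual V-CAFE implementation; all dimensions and weight matrices here are hypothetical stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d = 10, 8  # illustrative: number of frames, feature dimension
audio = rng.normal(size=(T, d))   # noisy audio features (stand-in)
visual = rng.normal(size=(T, d))  # lip-movement (visual context) features (stand-in)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical projection weights standing in for learned parameters.
Wq, Wk, Wv, Wm = (rng.normal(size=(d, d)) for _ in range(4))

# Visual context queries attend over the noisy audio sequence...
Q, K, V = visual @ Wq, audio @ Wk, audio @ Wv
attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (T, T) attention weights
context = attn @ V                             # visually-attended audio summary

# ...and the attended representation drives a per-dimension
# noise-reduction mask in [0, 1].
mask = sigmoid(context @ Wm)
enhanced = audio * mask  # enhanced audio features, same shape as the input

print(enhanced.shape)  # (10, 8)
```

In the paper, the enhanced audio features produced by this kind of masking are then fused with the visual features before the Conformer encoder; here the mask is purely elementwise for clarity.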