Anfeng Xu, Kevin Huang, Tiantian Feng, Helen Tager-Flusberg, and Shrikanth Narayanan. Audio-Visual Child-Adult Speaker Classification in Dyadic Interactions. In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8090–8094, April 2024.


Abstract

Interactions involving children span a wide range of important domains from learning to clinical diagnostic and therapeutic contexts. Automated analyses of such interactions are motivated by the need to seek accurate insights and offer scale and robustness across diverse and wide-ranging conditions. Identifying the speech segments belonging to the child is a critical step in such modeling. Conventional child-adult speaker classification typically relies on audio modeling approaches, overlooking visual signals that convey speech articulation information, such as lip motion. Building on the foundation of an audio-only child-adult speaker classification pipeline, we propose incorporating visual cues through active speaker detection and visual processing models. Our framework involves video preprocessing, utterance-level child-adult speaker detection, and late fusion of modality-specific predictions. We demonstrate from extensive experiments that a visually aided classification pipeline enhances the accuracy and robustness of the classification. We show relative improvements of 2.38% and 3.97% in F1 macro score when one face and two faces are visible, respectively.
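The late fusion step described above can be sketched as follows. This is a minimal illustration under assumed inputs, not the authors' implementation: the fusion weight `alpha` and the shape of the modality-specific probability outputs are assumptions for the sake of the example.

```python
import numpy as np

def late_fusion(audio_probs, visual_probs, alpha=0.5):
    """Weighted late fusion of per-utterance child/adult probabilities.

    audio_probs, visual_probs: arrays of shape (n_utterances, 2), where each
    row is [P(child), P(adult)] from a modality-specific classifier.
    alpha: weight on the audio modality (a hypothetical hyperparameter).
    Returns the fused label per utterance: 0 = child, 1 = adult.
    """
    fused = alpha * np.asarray(audio_probs) + (1 - alpha) * np.asarray(visual_probs)
    return fused.argmax(axis=1)

# Example: the modalities disagree on the second utterance; the fused
# scores resolve it in favor of the visually supported "child" label.
audio = [[0.9, 0.1], [0.4, 0.6]]
visual = [[0.8, 0.2], [0.7, 0.3]]
print(late_fusion(audio, visual))  # [0 0]
```

A simple weighted average like this lets each modality's prediction contribute at the utterance level without retraining either model, which matches the paper's pipeline structure of modality-specific predictions followed by fusion.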

BibTeX Entry

@INPROCEEDINGS{10447515,
  author={Xu, Anfeng and Huang, Kevin and Feng, Tiantian and Tager-Flusberg, Helen and Narayanan, Shrikanth},
  booktitle={ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={Audio-Visual Child-Adult Speaker Classification in Dyadic Interactions},
  year={2024},
  pages={8090-8094},
  abstract={Interactions involving children span a wide range of important domains from learning to clinical diagnostic and therapeutic contexts. Automated analyses of such interactions are motivated by the need to seek accurate insights and offer scale and robustness across diverse and wide-ranging conditions. Identifying the speech segments belonging to the child is a critical step in such modeling. Conventional child-adult speaker classification typically relies on audio modeling approaches, overlooking visual signals that convey speech articulation information, such as lip motion. Building on the foundation of an audio-only child-adult speaker classification pipeline, we propose incorporating visual cues through active speaker detection and visual processing models. Our framework involves video preprocessing, utterance-level child-adult speaker detection, and late fusion of modality-specific predictions. We demonstrate from extensive experiments that a visually aided classification pipeline enhances the accuracy and robustness of the classification. We show relative improvements of 2.38% and 3.97% in F1 macro score when one face and two faces are visible, respectively.},
  keywords={Voice activity detection;Visualization;Motion segmentation;Lips;Pipelines;Buildings;Signal processing;speaker classification;child speech;audiovisual;deep learning;autism},
  doi={10.1109/ICASSP48485.2024.10447515},
  ISSN={2379-190X},
  link={https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10447515},
  month={April}
}

Generated by bib2html.pl (written by Patrick Riley) on Fri Mar 22, 2024 09:15:39