Chi-Chun Lee, Jangwon Kim, Angeliki Metallinou, Carlos Busso, Sungbok Lee, and Shrikanth S. Narayanan. Speech in Affective Computing, pp. 170–183, Handbook of Affective Computing, Oxford University Press, 2015.


Abstract

This chapter is from The Oxford Handbook of Affective Computing, edited by Rafael Calvo, Sidney K. D'Mello, Jonathan Gratch, and Arvid Kappas. Speech is a key communication modality for humans to encode emotion. In this chapter, we address three main aspects of speech in affective computing: emotional speech production, acoustic feature extraction for emotion analysis, and the design of a speech-based emotion recognizer. Specifically, we discuss the current understanding of the interplay of the vocal organs during expressive speech production, the extraction of informative acoustic features from speech waveforms, and the engineering design of automatic emotion recognizers using acoustic speech features. The latter includes a discussion of emotion labeling for generating ground-truth references, acoustic feature normalization for controlling signal variability, and the choice of computational frameworks for emotion recognition. Finally, we present some open challenges and applications of a robust emotion recognizer.

BibTeX Entry

@inbook{Lee2014SpeechinAffectiveComputing,
 author = {Lee, Chi-Chun and Kim, Jangwon and Metallinou, Angeliki and Busso, Carlos and Lee, Sungbok and Narayanan, Shrikanth S.},
 bib2html_rescat = {emotion},
 chapter = {12},
 doi = {10.1093/oxfordhb/9780199942237.013.021},
 pages = {170--183},
 publisher = {Oxford University Press},
 series = {Handbook of Affective Computing},
 link = {http://sail.usc.edu/publications/files/Lee-OUPchapter2015.pdf},
 title = {Speech in Affective Computing},
 url = {https://doi.org/10.1093/oxfordhb/9780199942237.013.021},
 eprint = {https://academic.oup.com/book/0/chapter/212011660/chapter-ag-pdf/44596651/book\_28057\_section\_212011660.ag.pdf},
 abstract = {This chapter is from The Oxford Handbook of Affective Computing, edited by Rafael Calvo, Sidney K. D'Mello, Jonathan Gratch, and Arvid Kappas. Speech is a key communication modality for humans to encode emotion. In this chapter, we address three main aspects of speech in affective computing: emotional speech production, acoustic feature extraction for emotion analysis, and the design of a speech-based emotion recognizer. Specifically, we discuss the current understanding of the interplay of the vocal organs during expressive speech production, the extraction of informative acoustic features from speech waveforms, and the engineering design of automatic emotion recognizers using acoustic speech features. The latter includes a discussion of emotion labeling for generating ground-truth references, acoustic feature normalization for controlling signal variability, and the choice of computational frameworks for emotion recognition. Finally, we present some open challenges and applications of a robust emotion recognizer.},
 year = {2015}
}

Generated by bib2html.pl (written by Patrick Riley) on Fri Jan 06, 2023 11:57:50