University of Southern California
  The Interactive Emotional Dyadic Motion Capture (IEMOCAP) Database


Publications

Journals


  • Angeliki Metallinou, Martin Woellmer, Athanasios Katsamanis, Florian Eyben, Bjoern Schuller and Shrikanth Narayanan, "Context-Sensitive Learning for Enhanced Audiovisual Emotion Classification", IEEE Transactions on Affective Computing (TAC), accepted for publication, 2012
  • Chi-Chun Lee, Emily Mower, Carlos Busso, Sungbok Lee and Shrikanth S. Narayanan, "Emotion recognition using a hierarchical binary decision tree approach", Speech Communication, 2011
  • Emily Mower, Maja J. Mataric and Shrikanth S. Narayanan, "A Framework for Automatic Human Emotion Classification Using Emotional Profiles", IEEE Transactions on Audio, Speech and Language Processing, vol. 19, no. 5, pp. 1057-1070, May 2011
  • C. Busso, M. Bulut, C.C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J.N. Chang, S. Lee, and S.S. Narayanan, "IEMOCAP: Interactive emotional dyadic motion capture database," Journal of Language Resources and Evaluation, vol. 42, no. 4, pp. 335-359, December 2008

 

Conferences


  • Daniel Bone, Chi-Chun Lee, and Shrikanth Narayanan, "A Robust Unsupervised Arousal Rating Framework using Prosody with Cross-Corpora Evaluation", In Proceedings of Interspeech, 2012
  • Soroosh Mariooryad and Carlos Busso, "Factorizing speaker, lexical and emotional variabilities observed in facial expressions", In IEEE International Conference on Image Processing (ICIP), 2012
  • Angeliki Metallinou, Athanasios Katsamanis and Shrikanth Narayanan, "A Hierarchical Framework for Modeling Multimodality and Emotional Evolution in Affective Dialogs", In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2012
  • Martin Woellmer, Angeliki Metallinou, Athanasios Katsamanis, Bjoern Schuller and Shrikanth Narayanan, "Analyzing the Memory of BLSTM Neural Networks for Enhanced Emotion Classification in Dyadic Spoken Interactions", In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2012
  • Ming Li, Angeliki Metallinou, Daniel Bone and Shrikanth Narayanan, "Speaker States Recognition using Latent Factor Analysis Based Eigenchannel Factor Vector Modeling", In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2012
  • Tauhidur Rahman and Carlos Busso, "A personalized emotion recognition system using an unsupervised feature adaptation scheme", In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2012
  • Emily Mower and Shrikanth S. Narayanan, "A Hierarchical Static-Dynamic Framework for Emotion Classification", In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2011
  • Martin Woellmer, Angeliki Metallinou, Florian Eyben, Bjoern Schuller and Shrikanth Narayanan, "Context-Sensitive Multimodal Emotion Recognition from Speech and Facial Expression using Bidirectional LSTM Modeling", In Proceedings of Interspeech, 2010
  • Chi-Chun Lee and Shrikanth S. Narayanan, "Predicting interruptions in dyadic spoken interactions", In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2010
  • Angeliki Metallinou, Sungbok Lee and Shrikanth S. Narayanan, "Decision level combination of multiple modalities for recognition and analysis of emotional expression", In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2010
  • Angeliki Metallinou, Carlos Busso, Sungbok Lee and Shrikanth S. Narayanan, "Visual emotion recognition using compact facial representations and viseme information", In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2010
  • Emily Mower, Maja J Mataric, and Shrikanth S. Narayanan, "Robust Representations for Out-of-Domain Emotions Using Emotion Profiles", IEEE Workshop on Spoken Language Technology (SLT), 2010
  • Emily Mower, Kyu Jeong Han, Sungbok Lee and Shrikanth S. Narayanan, "A Cluster-Profile Representation of Emotion Using Agglomerative Hierarchical Clustering", In Proceedings of Interspeech, 2010
  • Dongrui Wu, Thomas Parsons, Emily Mower and Shrikanth S. Narayanan, "Speech Emotion Estimation in 3D Space", In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2010
  • Emily Mower, Angeliki Metallinou, Chi-Chun Lee, Abe Kazemzadeh, Carlos Busso, Sungbok Lee and Shrikanth S. Narayanan, "Interpreting ambiguous emotional expressions," In Proceedings of the International Conference on Affective Computing and Intelligent Interaction (ACII), 2009
  • Emily Mower, Maja J. Mataric and Shrikanth S. Narayanan, "Evaluating evaluators: A case study in understanding the benefits and pitfalls of multi-evaluator modeling," In Proceedings of Interspeech, Brighton, UK, September 2009
  • Chi-Chun Lee, Carlos Busso, Sungbok Lee and Shrikanth S. Narayanan, "Modeling mutual influence of interlocutor emotion states in dyadic spoken interactions", In Proceedings of Interspeech, Brighton, UK, September 2009
  • Angeliki Metallinou, Sungbok Lee and Shrikanth S. Narayanan, "Audio-visual emotion recognition using Gaussian mixture models for face and voice," In Proceedings of the IEEE International Symposium on Multimedia (ISM), Berkeley, USA, December 2008
  • Chi-Chun Lee, Sungbok Lee and Shrikanth S. Narayanan, "An analysis of multimodal cues of interruption in dyadic spoken interactions," In Proceedings of Interspeech, Brisbane, Australia, Sep 2008.
  • Carlos Busso and Shrikanth Narayanan. "The expression and perception of emotions: Comparing assessments of self versus others," In Proceedings of Interspeech, Brisbane, Australia, Sep 2008.
  • Carlos Busso and Shrikanth Narayanan. "Scripted dialogs versus improvisation: Lessons learned about emotional elicitation techniques from the IEMOCAP database," In Proceedings of Interspeech, Brisbane, Australia, Sep 2008.
  • Carlos Busso and Shrikanth S. Narayanan, "Recording audio-visual emotional databases from actors: a closer look," in Second International Workshop on Emotion: Corpora for Research on Emotion and Affect, International Conference on Language Resources and Evaluation (LREC 2008), Marrakech, Morocco, May 2008.

 

 

IMSC | SIPI | EE-Systems | University of Southern California

(c) 2004 Speech Analysis & Interpretation Laboratory

3710 S. McClintock Ave, RTH 320
Los Angeles, CA 90089, U.S.A.