We are currently releasing the IEMOCAP data. It contains data from 10 actors, male and female, recorded during affective dyadic interactions. The database contains both improvised and scripted sessions and is described in detail in:
C. Busso, M. Bulut, C.C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J.N. Chang, S. Lee, and S.S. Narayanan, "IEMOCAP: Interactive emotional dyadic motion capture database," Journal of Language Resources and Evaluation, vol. 42, no. 4, pp. 335-359, December 2008.
In total, we are releasing approximately 12 hours of audiovisual data. For each improvised and scripted recording, we provide detailed audiovisual and text information, consisting of the audio and video of both interlocutors; the motion capture data of the face, head, and hand of one of the interlocutors in each recording; and the text transcriptions of the conversation with their word-level, phone-level, and syllable-level alignments. In addition, for each utterance in the recordings, we provide both categorical and dimensional emotion labels from multiple annotators.
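As a minimal sketch of how the per-utterance labels might be consumed, the snippet below parses one annotation line. The line format shown (a time interval, an utterance ID, a categorical label, and three dimensional scores) is an assumption for illustration only; adapt the pattern to the annotation files in the actual release.

```python
import re

# ASSUMED line format, for illustration only -- check the release files:
# [start - end] <tab> utterance_id <tab> category <tab> [val, act, dom]
LINE_RE = re.compile(
    r"\[(?P<start>[\d.]+) - (?P<end>[\d.]+)\]\s+"
    r"(?P<utt>\S+)\s+(?P<label>\w+)\s+"
    r"\[(?P<val>[\d.]+), (?P<act>[\d.]+), (?P<dom>[\d.]+)\]"
)

def parse_annotation(line):
    """Return a dict of fields from one annotation line, or None if it
    does not match the assumed format."""
    m = LINE_RE.match(line.strip())
    if m is None:
        return None
    d = m.groupdict()
    return {
        "start": float(d["start"]),
        "end": float(d["end"]),
        "utterance": d["utt"],
        "category": d["label"],  # categorical label, e.g. "neu"
        # dimensional label: (valence, activation, dominance)
        "dimensions": (float(d["val"]), float(d["act"]), float(d["dom"])),
    }
```

Running this over each line of an annotation file, and skipping `None` results, would yield one record per labeled utterance.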
The previous limited IEMOCAP release, which contains data from only two actors, is also available upon request.
To obtain the IEMOCAP data, you just need to fill out the electronic release form below. If you have any questions, please contact:
Please read the license before submitting your request.
We are very interested in your feedback in order to improve the current release. For any comments or suggestions please contact us at the above e-mail address.
(c) 2004 Speech Analysis & Interpretation
3710 S. McClintock Ave, RTH 320
Los Angeles, CA 90089, U.S.A.