University of Southern California
  Electromagnetic Articulography (EMA) Database


General Information

  • Keywords: Emotional, Articulatory, Acoustic, Acted
  • Language: American English
  • 3 speakers: 1 male and 2 females
  • 14 sentences for the male speaker / 10 sentences for the female speakers

    The list of sentences (the female speakers read sentences 5-14):
    1. I don't know how she could miss this opportunity.
    2. Toby and George stole the game.
    3. Hold your breath and combine all the ingredients in a large bowl.
    4. They vetoed his proposal instantly.
    5. Your grandmother is on the phone.
    6. Don't compare me to your father.
    7. I hear the echo of voices and the sound of shoes.
    8. That dress looks like it comes from Asia.
    9. They think the company and I will have a long future.
    10. The doctor made the scar. Foam antiseptic didn't help.
    11. That made being deaf tantamount to isolation.
    12. The doctor made the scar foam with antiseptic.
    13. I am talking about the same picture you showed me.
    14. It's hard being very deaf. Tantamount to isolation.

  • 4 acted emotions: anger, happiness, sadness and neutrality
  • Articulatory movement data (MATLAB file format) + speech waveforms (WAV format)
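As a rough sketch of working with the two file formats above, the snippet below reads one utterance's articulatory trajectories from a MATLAB file and its speech waveform from a WAV file using SciPy. The variable name "ema_data" inside the .mat file, the channel count, and the file names are assumptions for illustration, not documented conventions of this database; a synthetic stand-in file is written first so the example is self-contained.

```python
import numpy as np
from scipy.io import loadmat, savemat, wavfile

def load_utterance(mat_path, wav_path):
    """Return (articulatory trajectories, sample rate, speech samples)."""
    mat = loadmat(mat_path)
    ema = mat["ema_data"]            # assumed key: frames x channels array
    rate, speech = wavfile.read(wav_path)
    return ema, rate, speech

# Demonstration with synthetic stand-in files (shapes are illustrative):
savemat("demo.mat", {"ema_data": np.zeros((100, 6))})
wavfile.write("demo.wav", 16000, np.zeros(16000, dtype=np.int16))

ema, rate, speech = load_utterance("demo.mat", "demo.wav")
print(ema.shape, rate, len(speech))  # (100, 6) 16000 16000
```

The actual variable names and channel layout in the released .mat files should be checked with `loadmat(path).keys()` before relying on any particular key.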

Evaluations

  • Emotional Content Ratings:
    • Each utterance was evaluated by at least 3 human listeners.
    • Each categorical emotion (anger, happiness, sadness, and neutrality) is rated numerically (0 to 4) for each utterance.
    • Utterances that receive both high ratings (3 or 4) for their target emotion and low ratings (0 or 1) for all other emotions are designated "Best Emotion Utterances"; they are listed in best_{abe,joy,lau}_files.txt.
  • VAD (valence, activation, and dominance) Ratings:
    • 18 evaluators in total, from a variety of language and cultural backgrounds.
    • Each perceptual dimension is rated numerically for each utterance.
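The "Best Emotion Utterances" criterion described above can be sketched as a small predicate: an utterance qualifies when its target emotion is rated high (3 or 4) and every other emotion is rated low (0 or 1). How ratings from multiple listeners are aggregated before applying the thresholds is an assumption here; the sketch takes a single per-emotion rating on the 0-4 scale.

```python
# The four categorical emotions rated for each utterance.
EMOTIONS = ("anger", "happiness", "sadness", "neutrality")

def is_best_emotion(ratings, target):
    """ratings: dict mapping each emotion to a rating on the 0-4 scale.

    Returns True when the target emotion is rated 3 or 4 and all
    other emotions are rated 0 or 1 (the selection rule above).
    """
    return ratings[target] >= 3 and all(
        ratings[e] <= 1 for e in EMOTIONS if e != target
    )

# Example: clearly angry utterance passes; ambiguous one does not.
print(is_best_emotion(
    {"anger": 4, "happiness": 0, "sadness": 1, "neutrality": 0}, "anger"))  # True
print(is_best_emotion(
    {"anger": 3, "happiness": 2, "sadness": 0, "neutrality": 0}, "anger"))  # False
```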

Detailed Description

  • The details of data collection and post-processing (noise reduction, smoothing, alignment, etc.) are explained in the paper below.

  • Sungbok Lee, Serdar Yildirim, Abe Kazemzadeh, and Shrikanth S. Narayanan, "An articulatory study of emotional speech production," in Proceedings of InterSpeech, pp. 497-500, 2005.


SAIL | SIPI | EE-Systems | University of Southern California

2010 Signal Analysis and Interpretation Laboratory

3710 S. McClintock Ave, RTH 320
Los Angeles, CA 90089, U.S.A