2020

  1. Martinez, V. R., Somandepalli, K., Uhls, Y., & Narayanan, S. (2020). Joint Estimation and Analysis of Risk Behavior Ratings in Movie Scripts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)
  2. Ramakrishna, A., & Narayanan, S. (2020). Sentence level estimation of psycholinguistic norms using joint multidimensional annotations. In Interspeech 2020
  3. Chiu, M.-C., Feng, T., Ren, X., & Narayanan, S. (2020). Screenplay Quality Assessment: Can we predict who gets nominated? In 1st Joint Workshop on Narrative Understanding, Storylines, and Events (NUSE), ACL 2020

2019

  1. Somandepalli, K., Kumar, N., Travadi, R., & Narayanan, S. (2019). Multimodal Representation Learning using Deep Multiset Canonical Correlation. arXiv preprint arXiv:1904.01775.
  2. Somandepalli, K., & Narayanan, S. (2019). Reinforcing Self-expressive Representation with Constraint Propagation for Face Clustering in Movies. In 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4065–4069). IEEE.
  3. Hebbar, R., Somandepalli, K., & Narayanan, S. (2019). Robust Speech Activity Detection in Movie Audio: Data Resources and Experimental Evaluation. In 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4105–4109). IEEE.
  4. Sharma, R., Somandepalli, K., & Narayanan, S. (2019). Toward Visual Voice Activity Detection for Unconstrained Videos. In 2019 IEEE International Conference on Image Processing (ICIP) (pp. 2991–2995). IEEE.
  5. Martinez, V. R., Somandepalli, K., Singla, K., Ramakrishna, A., Uhls, Y. T., & Narayanan, S. (2019). Violence Rating Prediction from Movie Scripts. In Proceedings of the AAAI Conference on Artificial Intelligence. (pdf)

2018

  1. Somandepalli, K., Martinez, V., Kumar, N., & Narayanan, S. (2018). Multimodal Representation of Advertisements Using Segment-level Autoencoders. In Proceedings of the 2018 International Conference on Multimodal Interaction (pp. 418–422). ACM. (pdf)
  2. Hebbar, R., Somandepalli, K., & Narayanan, S. (2018). Improving Gender Identification in Movie Audio Using Cross-Domain Data. Proc. Interspeech 2018, 282–286. (pdf)
  3. Somandepalli, K., Kumar, N., Guha, T., & Narayanan, S. S. (2018). Unsupervised Discovery of Character Dictionaries in Animation Movies. IEEE Transactions on Multimedia, 20(3), 539–551. (pdf)

2017

  1. Baruah, S., Gupta, R., & Narayanan, S. (2017). A knowledge transfer and boosting approach to the prediction of affect in movies. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 2876–2880). IEEE. (pdf)
  2. Huang, C.-W., & Narayanan, S. (2017). Characterizing Types of Convolution in Deep Convolutional Recurrent Neural Networks for Robust Speech Emotion Recognition. arXiv preprint arXiv:1706.02901. (pdf)
  3. Ramakrishna, A., Martínez, V. R., Malandrakis, N., Singla, K., & Narayanan, S. (2017). Linguistic analysis of differences in portrayal of movie characters. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (Vol. 1, pp. 1669–1678). (pdf)
  4. Somandepalli, K. (2017). Predicting race from face for movie data.

2016

  1. Kumar, N., Guha, T., Huang, C.-W., Vaz, C., & Narayanan, S. S. (2016). Novel affective features for multiscale prediction of emotion in music. In 2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP) (pp. 1–5). IEEE. (pdf)
  2. Goyal, A., Kumar, N., Guha, T., & Narayanan, S. S. (2016). A multimodal mixture-of-experts model for dynamic emotion prediction in movies. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 2822–2826). IEEE. (pdf)
  3. Kumar, N., Nasir, M., Georgiou, P. G., & Narayanan, S. S. (2016). Robust Multichannel Gender Classification from Speech in Movie Audio. In INTERSPEECH (pp. 2233–2237). (pdf)
  4. Tadimari, A., Kumar, N., Guha, T., & Narayanan, S. S. (2016). Opening big in box office? Trailer content can help. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 2777–2781). IEEE. (pdf)
  5. Somandepalli, K., Gupta, R., Nasir, M., Booth, B. M., Lee, S., & Narayanan, S. S. (2016). Online affect tracking with multimodal Kalman filters. In Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge (pp. 59–66). ACM. (pdf)

2015

  1. Ramakrishna, A., Malandrakis, N., Staruk, E., & Narayanan, S. (2015). A quantitative analysis of gender differences in movies using psycholinguistic normatives. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (pp. 1996–2001). (pdf)
  2. Gupta, R., Kumar, N., & Narayanan, S. (2015). Affect prediction in music using boosted ensemble of filters. In 2015 23rd European Signal Processing Conference (EUSIPCO) (pp. 11–15). IEEE. (pdf)
  3. Guha, T., Kumar, N., Narayanan, S. S., & Smith, S. L. (2015). Computationally deconstructing movie narratives: an informatics approach. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 2264–2268). IEEE. (pdf)
  4. Guha, T., Huang, C.-W., Kumar, N., Zhu, Y., & Narayanan, S. S. (2015). Gender representation in cinematic content: A multimodal approach. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction (pp. 31–34). ACM. (pdf)

2013

  1. Kim, S., Georgiou, P. G., & Narayanan, S. (2013). Annotation and classification of political advertisements. In INTERSPEECH (pp. 1092–1096).
  2. Kim, S., Georgiou, P., & Narayanan, S. (2013). On-line genre classification of TV programs using audio content. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 798–802). IEEE. (pdf)
  3. Malandrakis, N., Potamianos, A., & Narayanan, S. (2013). Continuous models of affect from text using n-grams. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 8500–8504). IEEE.

2012

  1. Malandrakis, N. (2012). Affect extraction using aural, visual and linguistic features from multimedia documents. (pdf)

2011

  1. Tsiartas, A., Ghosh, P., Georgiou, P. G., & Narayanan, S. (2011). Bilingual audio-subtitle extraction using automatic segmentation of movie audio. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 5624–5627). IEEE. (pdf)
  2. Unal, E., Chew, E., Georgiou, P. G., & Narayanan, S. (2011). A Perplexity Based Cover Song Matching System for Short Length Queries. In ISMIR (pp. 43–48). (pdf)
  3. Malandrakis, N., Potamianos, A., Iosif, E., & Narayanan, S. (2011). EmotiWord: Affective lexicon creation with application to interaction and multimedia data. In International Workshop on Computational Intelligence for Multimedia Understanding (pp. 30–41). Springer.

2009

  1. Shah, D., Han, K. J., & Narayanan, S. S. (2009). A low-complexity dynamic face-voice feature fusion approach to multimodal person recognition. In 2009 11th IEEE International Symposium on Multimedia (ISM '09) (pp. 24–31). IEEE. (pdf)
  2. Tsiartas, A., Ghosh, P. K., Georgiou, P. G., & Narayanan, S. S. (2009). Context-driven automatic bilingual movie subtitle alignment. In Tenth Annual Conference of the International Speech Communication Association. (pdf)

2006

  1. Sundaram, S., & Narayanan, S. (2006). An attribute-based approach to audio description applied to segmenting vocal sections in popular music songs. In 2006 IEEE 8th Workshop on Multimedia Signal Processing (pp. 103–107). IEEE. (pdf)

2003

  1. Shih, H.-H., Narayanan, S. S., & Kuo, C.-C. J. (2003). A statistical multidimensional humming transcription using phone level hidden Markov models for query by humming systems. In 2003 International Conference on Multimedia and Expo (ICME '03) (Vol. 1, pp. I–61). IEEE. (pdf)

2002

  1. Li, Y., Narayanan, S., & Kuo, C.-C. J. (2002). Identification of speakers in movie dialogs using audiovisual cues. In 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (Vol. 2, pp. II–2093). IEEE. (pdf)
  2. Shih, H.-H., Narayanan, S. S., & Kuo, C.-C. J. (2002). An HMM-based approach to humming transcription. In ICME (1) (pp. 337–340). (pdf)

2001

  1. Li, Y., Narayanan, S. S., Ming, W. H., & Kuo, C.-C. J. (2001). Automatic movie index generation based on multimodal information. In Internet Multimedia Management Systems II (Vol. 4519, pp. 42–54). International Society for Optics and Photonics. (pdf)
  2. Shih, H.-H., Narayanan, S. S., & Kuo, C.-C. J. (2001). Automatic main melody extraction from MIDI files with a modified Lempel-Ziv algorithm. In Proceedings of the 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing (pp. 9–12). IEEE. (pdf)
  3. Shih, H.-H., Narayanan, S. S., & Kuo, C.-C. J. (2001). A dictionary approach to repetitive pattern finding in music (p. 72). IEEE. (pdf)