Che-Wei Huang and Shrikanth Narayanan. Characterizing Types of Convolution in Deep Convolutional Recurrent Neural Networks for Robust Speech Emotion Recognition. IEEE Transactions on Affective Computing, 2018.
details
Abeer Alwan, Shrikanth S. Narayanan, B. Strope, and A. Shen. Speech production and perception models and their applications to synthesis, recognition, and coding. In Chollet, DiBenedetto, Esposito, and Marinaro, editors, Speech Processing, Recognition, and Artificial Neural Networks, pp. 138–161, Springer-Verlag, London, mar 1999.
details
doi
pdf
Abeer Alwan, Shrikanth S. Narayanan, B. Strope, and A. Shen. Speech production and perception models and their applications to synthesis, recognition, and coding. In Proceedings of the International Symposium on Signals, Systems, and Electronics (ISSSE), pp. 367–372, Monterey, CA, oct 1995.
details
doi
pdf
Sankaranarayanan Ananthakrishnan and Shrikanth S. Narayanan. Improved Speech Recognition Using Acoustic and Lexical Correlates of Pitch Accent in a N-best Rescoring Framework. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 873–876, vol. 4, Honolulu, Hawaii, April 2007.
details
doi
pdf
Sankaranarayanan Ananthakrishnan and Shrikanth S. Narayanan. Fine-grained Pitch Accent and Boundary Tone Labeling with Parametric F0 Features. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 4545–4548, Las Vegas, Nevada, April 2008.
details
doi
pdf
Sankaranarayanan Ananthakrishnan and Shrikanth S. Narayanan. A novel algorithm for unsupervised prosodic language model adaptation. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 4181–4184, Las Vegas, Nevada, apr 2008.
details
doi
pdf
Sankaranarayanan Ananthakrishnan and Shrikanth S. Narayanan. An automatic prosody recognizer using a coupled multi-stream acoustic model and a syntactic-prosodic language model. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 269–272 [Best Student Paper Award], Philadelphia, PA, mar 2005.
details
doi
pdf
Kartik Audhkhasi and Shrikanth S. Narayanan. Data-dependent evaluator modeling and its application to emotional valence classification from speech. In Proceedings of InterSpeech, Makuhari, Japan, sep 2010.
details
pdf
Kartik Audhkhasi, Panayiotis Georgiou, and Shrikanth S. Narayanan. Accurate Transcription of Broadcast News Speech Using Multiple Noisy Transcribers and Unsupervised Reliability Metrics. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), may 2011.
details
doi
pdf
Kartik Audhkhasi and Shrikanth S. Narayanan. Emotion Classification from Speech Using Evaluator Reliability-Weighted Combination of Ranked Lists. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4956–4959, may 2011.
details
doi
pdf
Kartik Audhkhasi and Shrikanth S. Narayanan. A Globally-Variant Locally-Constant Model for Fusion of Labels from Multiple Diverse Experts Without Using Reference Labels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(4):769–783, apr 2013.
details
doi
pdf
Matthew P. Black and Shrikanth S. Narayanan. Improvements in predicting children's overall reading ability by modeling variability in evaluators' subjective judgments. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), mar 2012.
details
doi
pdf
Brandon Booth, Asem Ali, Ian Bennett, and Shrikanth Narayanan. Toward Active and Unobtrusive Engagement Assessment of Distance Learners. In Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction, October 2017.
details
Erik Bresch and Shrikanth S. Narayanan. Region segmentation in the frequency domain applied to upper airway real-time magnetic resonance images. IEEE Transactions on Medical Imaging, 28(3):323–338, mar 2009.
details
doi
pdf
Murtaza Bulut and Shrikanth S. Narayanan. Speech Synthesis Systems in Ambient Intelligence Environments. In Hamid Aghajan, Juan Carlos Augusto, and Ramón López-Cózar Delgado, editors, Human-centric interfaces for ambient intelligence, pp. 255–277, Elsevier, 2009.
details
Murtaza Bulut and Shrikanth S. Narayanan. On the robustness of overall F0-only modifications to the perception of emotions in speech. Journal of the Acoustical Society of America, 123(6):4547–4558, jun 2008.
details
doi
pdf
Murtaza Bulut, Sungbok Lee, and Shrikanth S. Narayanan. A statistical approach for modeling prosody features using POS tags for emotional speech synthesis. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 1237–1240, Honolulu, Hawaii, apr 2007.
details
doi
pdf
Murtaza Bulut, Sungbok Lee, and Shrikanth S. Narayanan. Recognition for synthesis: Automatic parameter selection for resynthesis of emotional speech from neutral speech. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 4629–4632, Las Vegas, Nevada, apr 2008.
details
doi
pdf
Carlos Busso and Shrikanth S. Narayanan. Interrelation between speech and facial gestures in emotional utterances: A single subject study. IEEE Transactions on Audio, Speech, and Language Processing, 15(8):2331–2347, nov 2007.
details
doi
pdf
Carlos Busso, Murtaza Bulut, and Shrikanth S. Narayanan. Toward effective automatic recognition systems of emotion in speech. In S. Marsella and J. Gratch, editors, Social emotions in nature and artifact: emotions in human and human-computer interaction, pp. XX–XX, Oxford University Press, New York, NY, USA, 2012.
details
doi
pdf
Carlos Busso, Panayiotis Georgiou, and Shrikanth S. Narayanan. Real-time Monitoring of Participants' Interaction in a Meeting Using Audio-Visual Sensors. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 685–688, Honolulu, Hawaii, April 2007.
details
doi
pdf
Carlos Busso and Shrikanth S. Narayanan. Recording audio-visual emotional databases from actors: A closer look. In Proceedings of the International Conference on Language Resources and Evaluation (LREC), pp. 17–22, Marrakech, Morocco, may 2008.
details
pdf
Carlos Busso and Shrikanth S. Narayanan. Joint analysis of the emotional fingerprint in the face and speech: A single subject study. In Proceedings of the IEEE International Workshop on Multimedia Signal Processing (MMSP), pp. 43–47, Chania, Greece, oct 2007.
details
doi
pdf
Carlos Busso and Shrikanth S. Narayanan. The expression and perception of emotions: Comparing assessments of self versus others. In Proceedings of InterSpeech, pp. 257–260, Brisbane, Australia, sep 2008.
details
pdf
Carlos Busso and Shrikanth S. Narayanan. Scripted dialogs versus improvisation: Lessons learned about emotional elicitation techniques from the IEMOCAP database. In Proceedings of InterSpeech, pp. 1670–1673, Brisbane, Australia, sep 2008.
details
pdf
Carlos Busso and Shrikanth S. Narayanan. Interplay between linguistic and affective goals in facial expression during emotional utterances. In Proceedings of the International Seminar on Speech Production (ISSP), pp. 549–556, Ubatuba, Brazil, dec 2006.
details
pdf
Dogan Can and Shrikanth S. Narayanan. On the Computation of Document Frequency Statistics from Spoken Corpora using Factor Automata. In Proceedings of InterSpeech, aug 2013.
details
pdf
Dogan Can and Shrikanth S. Narayanan. A Dynamic Programming Algorithm for Computing N-gram Posteriors from Lattices. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 2388–2397, Association for Computational Linguistics, sep 2015.
details
doi
pdf
Theodora Chaspari, Emily Mower Provost, and Shrikanth S. Narayanan. Analyzing the structure of parent-moderated narratives from children with ASD using an entity-based approach. In Proceedings of InterSpeech, aug 2013.
details
pdf
Theodora Chaspari, Andreas Tsiartas, Panagiotis Tsilifis, and Shrikanth S. Narayanan. Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations. IEEE Transactions on Signal Processing, 64:3077–3092, jun 2016.
details
doi
pdf
Selina Chu, Shrikanth S. Narayanan, and C.-C. Jay Kuo. A semi-supervised learning approach to online audio background detection. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 1629–1632, Taipei, Taiwan, apr 2009.
details
doi
pdf
Selina Chu, Shrikanth S. Narayanan, and C.-C. Jay Kuo. Unstructured Environmental Audio: Representation, Classification and Modeling, pp. 1–21, Machine Audition: Principles, Algorithms and Systems, Information Science Reference (IGI Global), Hershey, PA, 2010.
details
pdf
Selina Chu, Shrikanth S. Narayanan, and C.-C. Jay Kuo. Efficient Rotation Invariant Retrieval of Shapes Using Dynamic Time Warping With Applications in Medical Databases. In Proceedings of the IEEE International Symposium on Computer-Based Medical Systems (CBMS), pp. 673–678, Salt Lake City, UT, June 2006.
details
pdf
Selina Chu, Shrikanth S. Narayanan, and C.-C. Jay Kuo. Content analysis for acoustic environment classification in mobile robots. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI) Fall Symposium, Arlington, VA, oct 2006.
details
pdf
Selina Chu, Shrikanth S. Narayanan, and C.-C. Jay Kuo. Environmental sound recognition using MP-based features. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 1–4, Las Vegas, Nevada, apr 2008.
details
doi
Alireza A. Dibazar and Shrikanth S. Narayanan. A system for automatic recognition of pathological speech. In Proceedings of the Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, nov 2002.
details
pdf
Giuseppe Di Fabbrizio, P. Ruscitti, Shrikanth S. Narayanan, and Candace Kamm. Extending computer telephony and IP telephony standards for voice-enabled services in a multi-modal user interface environment. In Proceedings of Interactive Dialogue in Multi-Modal Systems (IDS), pp. 9–12, Kloster Irsee, Germany, jun 1999.
details
Giuseppe Di Fabbrizio and Shrikanth S. Narayanan. Web-based monitoring, logging and reporting tools for multiservice, multimodal systems. In Proceedings of InterSpeech, pp. 1041–1044, Beijing, China, oct 2000.
details
Shadi Ganjavi, Panayiotis Georgiou, and Shrikanth S. Narayanan. A transcription scheme for languages employing the Arabic script motivated by speech processing applications. In Proceedings of the International Conference on Computational Linguistics, Geneva, Switzerland, aug 2004.
details
pdf
Matteo Gerosa and Shrikanth S. Narayanan. Investigating automatic assessment of reading comprehension in young children. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 5057–5060, Las Vegas, Nevada, apr 2008.
details
doi
pdf
Prasanta Kumar Ghosh and Shrikanth S. Narayanan. Automatic speech recognition using articulatory features from subject-independent acoustic-to-articulatory inversion. J. Acoust. Soc. Am. Express Letters, 130(4):EL251–EL257, sep 2011.
details
doi
pdf
Prasanta Kumar Ghosh and Shrikanth S. Narayanan. Information theoretic analysis of direct and estimated articulatory features for phonetic discrimination. In Proc. International Seminar on Speech Production (ISSP'11), jun 2011.
details
pdf
Prasanta Kumar Ghosh and Shrikanth S. Narayanan. On smoothing articulatory trajectories obtained from Gaussian Mixture Model based acoustic-to-articulatory inversion. J. Acoust. Soc. Am. Express Letters, 134(2):EL258–EL264, aug 2013.
details
doi
pdf
James Gibson, Maarten Van Segbroeck, and Shrikanth Narayanan. Comparing Time-Frequency Representations for Directional Derivative Features. In Proceedings of Interspeech, sep 2014.
details
pdf
James Gibson and Shrikanth Narayanan. Learning Multiple Concepts with Incremental Diverse Density. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), may 2014.
details
doi
pdf
Michael Grimm, Kristian Kroschel, and Shrikanth S. Narayanan. Support vector regression for automatic recognition of spontaneous emotions in speech. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 1085–1088, Honolulu, Hawaii, apr 2007.
details
doi
pdf
Rahul Gupta, Kartik Audhkhasi, and Shrikanth S. Narayanan. A Mixture of Experts Approach Towards Intelligibility Classification of Pathological Speech. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1986–1990, April 2015.
details
doi
pdf
Rahul Gupta, Saurabh Sahu, Carol Espy-Wilson, and Shrikanth Narayanan. An Affect Prediction Approach through Depression Severity Parameter Incorporation in Neural Networks. In Proceedings of Interspeech, August 2017.
details
Rahul Gupta, Saurabh Sahu, Carol Espy-Wilson, and Shrikanth Narayanan. Semi-supervised and Transfer Learning Approaches for Low Resource Sentiment Classification. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), April 2018.
details
Prashanth Gurunath Shivakumar, Alexandros Potamianos, Sungbok Lee, and Shrikanth Narayanan. Improving Speech Recognition for Children using Acoustic Adaptation and Pronunciation Modeling. In Proceedings of Workshop on Child, Computer and Interaction (WOCCI 2014), sep 2014.
details
pdf
Prashanth Gurunath Shivakumar, Ming Li, Vedant Dhandhania, and Shrikanth Narayanan. Simplified and Supervised i-vector Modeling for Speaker Age Regression. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), may 2014.
details
doi
pdf
Kyu Jeong Han and Shrikanth S. Narayanan. Improved Speaker Diarization of Meeting Speech With Recurrent Selection of Representative Speech Segments and Participant Interaction Pattern Modeling. In Proceedings of InterSpeech, pp. 1067–1070, Brighton, UK, September 2009.
details
pdf
Kyu Jeong Han, Samuel Kim, and Shrikanth S. Narayanan. Strategies to improve the robustness of agglomerative hierarchical clustering under data source variation for speaker diarization. IEEE Transactions on Audio, Speech, and Language Processing, 16(8):1590–1601, nov 2008.
details
doi
pdf
Kyu Jeong Han and Shrikanth S. Narayanan. An Improved Cluster Model Selection Method for Agglomerative Hierarchical Speaker Clustering using Incremental Gaussian Mixture Models. In Proceedings of InterSpeech, Makuhari, Japan, sep 2010.
details
pdf
Kyu Jeong Han and Shrikanth S. Narayanan. Signature cluster model selection for incremental Gaussian mixture cluster modeling in agglomerative hierarchical speaker clustering. In Proceedings of InterSpeech, Brighton, UK, sep 2009.
details
pdf
Kyu Jeong Han and Shrikanth S. Narayanan. Agglomerative hierarchical speaker clustering using incremental Gaussian mixture cluster modeling. In Proceedings of InterSpeech, pp. 20–23, Brisbane, Australia, sep 2008.
details
pdf
Kyu Jeong Han and Shrikanth S. Narayanan. A novel inter-cluster distance measure combining GLR and ICR for improved agglomerative hierarchical speaker clustering. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 4373–4376, Las Vegas, Nevada, apr 2008.
details
doi
pdf
Kyu Jeong Han, Samuel Kim, and Shrikanth S. Narayanan. Robust Speaker Clustering Strategies to Data Source Variation for Improved Speaker Diarization. In Proceedings of the IEEE Automatic Speech Recognition and Understanding (ASRU) Workshop, pp. 262–267, Kyoto, Japan, December 2007.
details
doi
pdf
Kyu Jeong Han and Shrikanth S. Narayanan. A robust stopping criterion for agglomerative hierarchical clustering in a speaker diarization system. In Proceedings of InterSpeech, pp. 1853–1856, Antwerp, Belgium, aug 2007.
details
pdf
Tad Hirsch, Kritzia Merced, Shrikanth Narayanan, Zac E. Imel, and David C. Atkins. Designing Contestability: Interaction Design, Machine Learning, and Mental Health. In Proceedings of the 2017 Conference on Designing Interactive Systems, pp. 95–99, DIS '17, ACM, New York, NY, USA, June 2017.
details
doi
pdf
Che-Wei Huang and Shrikanth S. Narayanan. Comparison of Feature-level and Kernel-level Data Fusion Methods in Multi-Sensory Fall Detection. In Proceedings of IEEE Workshop on Multimedia Signal Processing, pp. 1–6, 2016.
details
doi
pdf
Che-Wei Huang and Shrikanth Narayanan. Characterizing Types of Convolution in Deep Convolutional Recurrent Neural Networks for Robust Speech Emotion Recognition. In Proceedings of the IEEE International Conference on Multimedia & Expo, 2017.
details
pdf
Che-Wei Huang and Shrikanth S. Narayanan. Deep Convolutional Recurrent Neural Network with Attention Mechanism for Robust Speech Emotion Recognition. In Proceedings of the IEEE International Conference on Multimedia & Expo (ICME), jul 2017.
details
Che-Wei Huang and Shrikanth Narayanan. Shaking Acoustic Spectral Sub-bands Can Better Regularize Learning in Affective Computing. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), April 2018.
details
Ozlem Kalinli, Shiva Sundaram, and Shrikanth S. Narayanan. Saliency-driven unstructured acoustic scene classification using latent perceptual indexing. In Proceedings of the IEEE International Workshop on Multimedia Signal Processing (MMSP), Rio de Janeiro, Brazil, oct 2009.
details
doi
pdf
Ozlem Kalinli and Shrikanth S. Narayanan. Prominence detection using auditory attention cues and task-dependent high level information. IEEE Transactions on Audio, Speech, and Language Processing, 17(5):1009–1024, jul 2009.
details
doi
pdf
Ozlem Kalinli and Shrikanth S. Narayanan. Early Auditory Processing Inspired Features for Robust Automatic Speech Recognition. In Proceedings of European Signal Processing Conference (EUSIPCO), pp. 2385–2389, Poznan, Poland, September 2007.
details
pdf
Ozlem Kalinli and Shrikanth S. Narayanan. A saliency-based auditory attention model with applications to unsupervised prominent syllable detection in speech. In Proceedings of InterSpeech, pp. 1941–1944, Antwerp, Belgium, aug 2007.
details
pdf
Ozlem Kalinli and Shrikanth S. Narayanan. A top-down auditory attention model for learning task dependent influences on prominence detection in speech. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 3981–3984, Las Vegas, Nevada, apr 2008.
details
doi
pdf
Ozlem Kalinli and Shrikanth S. Narayanan. Combining task-dependent information with auditory attention cues for prominence detection in speech. In Proceedings of InterSpeech, pp. 1064–1067, Brisbane, Australia, sep 2008.
details
doi
pdf
Jiun-Yu Kao, Antonio Ortega, and Shrikanth Narayanan. Graph-based Approach for Motion Capture Data Representation and Analysis. In Proceedings of IEEE International Conference on Image Processing, oct 2014.
details
doi
pdf
Tsuneo Kato, Sungbok Lee, and Shrikanth S. Narayanan. An analysis of articulatory-acoustic data based on articulatory strokes. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 4493–4496, Taipei, Taiwan, apr 2009.
details
doi
pdf
Abe Kazemzadeh. Toward a Computational Approach for Natural Language Description of Emotions. In Proceedings of Affective Computing and Intelligent Interaction (ACII), pp. 216–223, Lecture Notes in Computer Science, Springer, October 2011.
details
doi
pdf
Abe Kazemzadeh, Sungbok Lee, and Shrikanth S. Narayanan. Using model trees for evaluating dialog error conditions based on acoustic speech information. In Proceedings of the International Workshop on Human-Centered Multimedia (HCM), Santa Barbara, CA, oct 2006.
details
doi
pdf
Abe Kazemzadeh, Sungbok Lee, and Shrikanth S. Narayanan. Acoustic correlates of user response to error in human-computer dialogues. In Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pp. 215–220, St. Thomas, U.S. Virgin Islands, dec 2003.
details
doi
pdf
Jangwon Kim, Sungbok Lee, and Shrikanth S. Narayanan. A Study of Interplay between Articulatory Movement and Prosodic Characteristics in Emotional Speech Production. In Proceedings of InterSpeech, Makuhari, Japan, sep 2010.
details
pdf
Jangwon Kim, Sungbok Lee, and Shrikanth S. Narayanan. Detailed Study of Articulatory Kinematics of Critical Articulators and Non‐critical Articulators of Emotional Speech. In Proceedings of the Meeting of the Acoustical Society of America, nov 2011.
details
doi
pdf
Jangwon Kim, Sungbok Lee, and Shrikanth Narayanan. Estimation of the movement trajectories of non-crucial articulators based on the detection of crucial moments and physiological constraints. In Proceedings of Interspeech, sep 2014.
details
pdf
Samuel Kim and Shrikanth S. Narayanan. Dynamic chroma feature vectors with applications to cover song identification. In Proceedings of the International Workshop on Multimedia Signal Processing (MMSP), pp. 984–987, Cairns, Australia, oct 2008.
details
doi
pdf
Samuel Kim, Erdem Unal, and Shrikanth S. Narayanan. Music fingerprint extraction for classical music cover song identification. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), pp. 1261–1264, Hannover, Germany, jun 2008.
details
doi
pdf
Samuel Kim, Panayiotis Georgiou, and Shrikanth S. Narayanan. A robust harmony structure modeling scheme for classical music opus identification. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 1961–1964, Taipei, Taiwan, apr 2009.
details
doi
pdf
Yoon-Chul Kim, Shrikanth S. Narayanan, and Krishna S. Nayak. Accelerated 3D MRI of vocal tract shaping using compressed sensing and parallel imaging. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 389–392, Taipei, Taiwan, apr 2009.
details
doi
pdf
Naveen Kumar, Qun Feng Tan, and Shrikanth S. Narayanan. Object Classification in Sidescan Sonar Images with Sparse Representation Techniques. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), mar 2012.
details
doi
pdf
Naveen Kumar and Shrikanth Narayanan. Hull Detection Based on Largest Empty Sector Angle with application to analysis of real time MR images. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), may 2014.
details
doi
pdf
Naveen Kumar and Shrikanth S. Narayanan. A discriminative reliability-aware classification model with applications to intelligibility classification in pathological speech. In Proceedings of Interspeech, sep 2015.
details
pdf
Manoj Kumar, Daniel Bone, Kelly McWilliams, Shanna Williams, Thomas Lyon, and Shrikanth Narayanan. Multi-scale Context Adaptation for Improving Child Automatic Speech Recognition in Child-Adult Spoken Interactions. In Proceedings of Interspeech, August 2017.
details
pdf
Soonil Kwon and Shrikanth S. Narayanan. A Study of Generic Models for Unsupervised On-line Speaker Indexing. In Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pp. 423–428, St. Thomas, U.S. Virgin Islands, December 2003.
details
pdf
Soonil Kwon and Shrikanth S. Narayanan. A method for on-line speaker indexing using generic reference models. In Proceedings of InterSpeech, pp. 2653–2656, Geneva, Switzerland, sep 2003.
details
pdf
Soonil Kwon and Shrikanth S. Narayanan. Speaker model quantization for unsupervised speaker indexing. In Proceedings of InterSpeech, pp. 1517–1520, Jeju Island, Korea, oct 2004.
details
pdf
Adam Lammert, Michael I. Proctor, and Shrikanth S. Narayanan. Morphological Variation in the Adult Vocal Tract: A Study Using rtMRI. In Proc. 9th Intl. Seminar on Speech Production (ISSP'11), Montreal, Canada, jun 2011 [2011 InterSpeech Speaker State Challenge Award Winner].
details
doi
pdf
Chul Min Lee and Shrikanth S. Narayanan. Toward detecting emotions in spoken dialogs. IEEE Transactions on Speech and Audio Processing, 13(2):293–303 [IEEE Signal Processing Society Best Paper Award 2009], mar 2005.
details
doi
pdf
Chi-Chun Lee and Shrikanth S. Narayanan. Predicting interruptions in dyadic spoken interactions. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Dallas, TX, mar 2010.
details
doi
pdf
Sungbok Lee and Shrikanth S. Narayanan. Vocal tract contour analysis of emotional speech by the functional data curve representation. In Proceedings of InterSpeech, Makuhari, Japan, sep 2010.
details
pdf
Chul Min Lee and Shrikanth S. Narayanan. Emotion Recognition Using a Data-driven Fuzzy Inference System. In Proceedings of InterSpeech, pp. 157–160, Geneva, Switzerland, September 2003.
details
pdf
Sungbok Lee, Tsuneo Kato, and Shrikanth S. Narayanan. Relation between geometry and kinematics of articulatory trajectory associated with emotional speech production. In Proceedings of InterSpeech, pp. 2290–2293, Brisbane, Australia, sep 2008.
details
pdf
Sungbok Lee, Erik Bresch, and Shrikanth S. Narayanan. An exploratory study of emotional speech production using functional data analysis techniques. In Proceedings of the International Seminar on Speech Production (ISSP), pp. 11–17, Ubatuba, Brazil, dec 2006.
details
pdf
Ming Li and Shrikanth S. Narayanan. Robust ECG Biometrics by Fusing Temporal and Cepstral Information. In Proceedings of 20th International Conference on Pattern Recognition (ICPR), aug 2010.
details
doi
pdf
Ming Li and Shrikanth S. Narayanan. Robust Talking Face Video Verification Using Joint Factor Analysis and Sparse Representation on GMM Mean Shifted Supervectors. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), may 2011.
details
doi
pdf
Ming Li and Shrikanth Narayanan. Simplified Supervised I-vector Modeling with application to Robust and Efficient Language Identification and Speaker Verification. Computer, Speech, and Language, 2014.
details
doi
pdf
Ying Li, Shrikanth S. Narayanan, and C.-C. Jay Kuo. Audiovisual-based adaptive speaker identification. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 565–568, Hong Kong, apr 2003.
details
doi
pdf
Ying Li, Shrikanth S. Narayanan, Wei Ming, and C.-C. Jay Kuo. Automatic movie index generation based on multimodal information. In Proceedings of the International Symposium on The Convergence of Information Technologies and Communications (ITCom), pp. 42–53, Denver, CO, aug 2001.
details
doi
pdf
Ying Li, Shrikanth S. Narayanan, and C.-C. Jay Kuo. Identification of speakers in movie dialogs using audiovisual cues. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 2093–2096, Orlando, FL, may 2002.
details
doi
pdf
Chuping Liu, Qian-Jie Fu, and Shrikanth S. Narayanan. Smooth GMM based multi-talker spectral conversion for spectrally degraded speech. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 141–144, Toulouse, France, may 2006.
details
doi
pdf
Qinyi Luo, Rahul Gupta, and Shrikanth Narayanan. Transfer Learning between Concepts for Human Behavior Modeling: An Application to Sincerity and Deception Prediction. In Proceedings of Interspeech, August 2017.
details
Nikolaos Malandrakis, Elias Iosif, Vassiliki Prokopi, Alexandros Potamianos, and Shrikanth S. Narayanan. DeepPurple: Lexical, String and Affective Feature Fusion for Sentence-Level Semantic Similarity Estimation. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pp. 103–108, Association for Computational Linguistics, jun 2013.
details
pdf
Nikolaos Malandrakis, Abe Kazemzadeh, Alexandros Potamianos, and Shrikanth S. Narayanan. SAIL: A hybrid approach to sentiment analysis. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pp. 438–442, Association for Computational Linguistics, jun 2013.
details
pdf
Nikolaos Malandrakis, Ondrej Glembek, and Shrikanth Narayanan. Extracting Situation Frames from non-English Speech: Evaluation Framework and Pilot Results. In Proceedings of Interspeech, August 2017.
details
Donal McErlean and Shrikanth S. Narayanan. Distribution detection and tracking in sensor networks. In Proceedings of the Asilomar Conference on Signals, Systems and Computers, pp. 1174–1178, Pacific Grove, CA, nov 2002.
details
pdf
Betty McMicken, Frederico Salles, Shelley Von Berg, Margaret Vento-Wilson, Kelly Rogers, Asterios Toutios, and Shrikanth Narayanan. Bilabial Substitution Patterns during Consonant Production in a Case of Congenital Aglossia. Journal of Communication Disorders, Deaf Studies and Hearing Aids, 5(2):175, September 2017.
details
pdf
Chartchai Meesookho and Shrikanth S. Narayanan. Distributed range difference based target localization in sensor network. In Proceedings of the Asilomar Conference on Signals, Systems and Computers, pp. 205–209, Pacific Grove, CA, oct 2005.
details
doi
pdf
Angeliki Metallinou, Sungbok Lee, and Shrikanth S. Narayanan. Decision level combination of multiple modalities for recognition and analysis of emotional expression. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Dallas, Texas, mar 2010.
details
doi
pdf
Angeliki Metallinou and Shrikanth S. Narayanan. Annotation and Processing of Continuous Emotional Attributes: Challenges and Opportunities. In 2nd International Workshop on Emotion Representation, Analysis and Synthesis in Continuous Time and Space (EmoSPACE 2013), apr 2013.
details
doi
pdf
Emily Mower and Shrikanth S. Narayanan. A Hierarchical Static-Dynamic Framework for Emotion Classification. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), may 2011.
details
doi
pdf
Emily Mower Provost and Shrikanth S. Narayanan. Simplifying Emotion Classification Through Emotion Distillation. In Proceedings of APSIPA Annual Summit and Conference, dec 2012.
details
pdf
Emily Mower Provost, Irene Zhu, and Shrikanth S. Narayanan. Using Emotional Noise to Uncloud Audio-Visual Emotion Perception. In IEEE International Conference on Multimedia & Expo (ICME), pp. 1–6, July 2013.
details
doi
pdf
Emily Mower, Maja J. Mataric, and Shrikanth S. Narayanan. Selection of emotionally salient audio-visual features for modeling human evaluations of synthetic character emotion displays. In Proceedings of the IEEE International Symposium on Multimedia (ISM), pp. 190–195, Berkeley, CA, dec 2008.
details
doi
pdf
Emily Mower, Sungbok Lee, Maja J. Mataric, and Shrikanth S. Narayanan. Human perception of synthetic character emotions in the presence of conflicting and congruent vocal and facial expressions. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 2201–2204, Las Vegas, Nevada, apr 2008.
details
doi
pdf
Emily Mower, David Feil-Seifer, Maja J. Mataric, and Shrikanth S. Narayanan. Investigating Implicit Cues for User State Estimation in Human-robot Interaction Using Physiological Measurements. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 1125–1130, Jeju Island, Korea, August 2007.
details
doi
pdf
Thomas J. Murray, IV, Panayiotis Georgiou, and Shrikanth S. Narayanan. Knowledge as a constraint on uncertainty for unsupervised classification: A study in part-of-speech tagging. In Proceedings of the International Conference on Machine Learning (ICML), Helsinki, Finland, jul 2008.
details
pdf
Shrikanth S. Narayanan and Alexandros Potamianos. Creating conversational interfaces for children. IEEE Transactions on Speech and Audio Processing, 10(2):65–78 [IEEE Signal Processing Society Best Paper Award Winner, 2005], feb 2002.
details
pdf
Shrikanth S. Narayanan and Abeer Alwan. Strange attractors and chaotic dynamics in the production of voiced and voiceless fricatives. In Proceedings of InterSpeech, pp. 77–80, Berlin, Germany, sep 1993.
details
Shrikanth S. Narayanan, H. Shahri, D. Youtkus, and M. Luo. Fast and efficient motion compensation techniques using subband analysis. In Proceedings of the IEEE International Conference on Image Processing (ICIP), pp. 265–269, vol. 3, Philadelphia, PA, nov 1994.
details
doi
Shrikanth S. Narayanan and Abeer Alwan. Parametric hybrid source models for voiced and voiceless fricative consonants. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 377–380, Atlanta, GA, may 1996.
details
doi
pdf
Shrikanth S. Narayanan and Abeer Alwan. Imaging applications in speech production research. In Proceedings of the Society of Photo-Optical Instrumentation Engineers (SPIE) Medical Imaging, pp. 120–131, vol. 2079, Newport Beach, CA, feb 1996.
details
doi
pdf
Shrikanth S. Narayanan, Abeer Alwan, and Katherine Haker. An articulatory study of liquid approximants in American English. In Proceedings of the International Congress of Phonetic Sciences (ICPhS), pp. 576–579, vol. 3, Stockholm, Sweden, aug 1995.
details
Shrikanth S. Narayanan. Towards modeling user behavior in human-machine interactions: Effect of errors and emotions. In Proceedings of the ISLE Workshop on Multimodal Dialog Tagging, Edinburgh, UK, dec 2002.
details
pdf
Shrikanth S. Narayanan and R. Mortensen. Nonlinear filtering and smoothing for noisy alternating renewal process signals. In Proceedings of the IEEE American Control Conference, pp. 225–228, Boston, MA, jun 1991.
details
Shrikanth S. Narayanan and Dagen Wang. Speech Rate Estimation Via Temporal Correlation and Selected Sub-band Correlation. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 413–416, vol. 1, Philadelphia, PA, March 2005.
details
doi
pdf
Shrikanth S. Narayanan and Abigail Kaun. Acoustic modeling of Tamil retroflex liquids. In Proceedings of the International Congress of Phonetic Sciences (ICPhS), pp. 2097–2100, San Francisco, CA, aug 1999.
details
pdf
Nimisha Patil, Timothy Greer, Reed Blaylock, and Shrikanth Narayanan. Comparison of Basic Beatboxing Articulations between Expert and Novice Artists using Real-Time Magnetic Resonance Imaging. In Proceedings of Interspeech, August 2017.
details
pdf
Alexandros Potamianos and Shrikanth S. Narayanan. Spoken dialog systems for children. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 197–200, vol. 1, Seattle, WA, may 1998.
details
pdf
Vikram Ramanarayanan, Maarten Van Segbroeck, and Shrikanth S. Narayanan. On the nature of data-driven primitive representations of speech articulation. In ISCA Workshop on Speech Production in Automatic Speech Recognition (SPASR), aug 2013.
details
pdf
Vikram Ramanarayanan, Maarten Van Segbroeck, and Shrikanth S. Narayanan. Directly data-derived articulatory gesture-like representations retain discriminatory information about phone categories. Computer, Speech, and Language, mar 2015.
details
doi
pdf
Vivek Rangarajan and Shrikanth S. Narayanan. Detection of non-native named entities using prosodic features for improved speech recognition and translation. In Proceedings of the International Speech Communication Association (ISCA) Multiling Workshop, Stellenbosch, South Africa, apr 2006.
details
pdf
Vivek Rangarajan and Shrikanth S. Narayanan. Analysis of disfluent repetitions in spontaneous speech recognition. In Proceedings of the European Signal Processing Conference (EUSIPCO), Florence, Italy, sep 2006.
details
pdf
Vivek Rangarajan, Srinivas Bangalore, and Shrikanth S. Narayanan. Modeling the intonation of discourse segments for improved online dialog act tagging. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 5033–5036, Las Vegas, Nevada, apr 2008.
details
doi
pdf
Abhinav Sethy and Shrikanth S. Narayanan. Split-lexicon based hierarchical recognition of speech using syllable and word level acoustic units. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 772–775, Hong Kong, apr 2003.
details
doi
pdf
Abhinav Sethy, Bhuvana Ramabhadran, and Shrikanth S. Narayanan. Improvements in English ASR for the Malach project using syllable-centric models. In Proceedings of the IEEE workshop on Automatic Speech Recognition and Understanding (ASRU), pp. 129–134, St. Thomas, U.S. Virgin Islands, dec 2003.
details
doi
pdf
Abhinav Sethy and Shrikanth S. Narayanan. Measuring convergence in language model estimation using relative entropy. In Proceedings of InterSpeech, pp. 1057–1060, Jeju Island, Korea, oct 2004.
details
pdf
Abhinav Sethy, Shrikanth S. Narayanan, and Bhuvana Ramabhadran. Data driven approach for language model adaptation using stepwise relative entropy minimization. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 177–180, Honolulu, Hawaii, apr 2007.
details
pdf
Abhinav Sethy, Shrikanth S. Narayanan, and S. Parthasarathy. A syllable based approach for improved recognition of spoken names. In Proceedings of the International Speech Communication Association (ISCA) Pronunciation Modeling and Lexicon Adaptation Workshop, pp. 1–4, Estes Park, Colorado, sep 2002.
details
pdf
Abhinav Sethy, Panayiotis Georgiou, and Shrikanth S. Narayanan. Selecting relevant text subsets from web-data for building topic specific language models. In Proceedings of the Human Language Technologies (HLT) Conference, pp. 145–148, New York City, New York, jun 2006.
details
pdf
Dhaval Shah, Kyu Jeong Han, and Shrikanth S. Narayanan. A low-complexity dynamic face-voice feature fusion approach to multimodal person recognition. In Proceedings of the IEEE International Symposium on Multimedia (ISM), San Diego, California, dec 2009.
details
doi
pdf
Hsuan-Huei Shih, Shrikanth S. Narayanan, and C.-C. Jay Kuo. Multidimensional humming transcription using a statistical approach for query by humming systems. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 385–388, Hong Kong, apr 2003.
details
doi
pdf
Hsuan-Huei Shih, Shrikanth S. Narayanan, and C.-C. Jay Kuo. A Statistical Multidimensional Humming Transcription Using Phone Level Hidden Markov Models for Query by Humming Systems. In Proceedings of the IEEE International Conference on Multimedia & Expo (ICME), pp. 61–64, vol. 1, Baltimore, MD, July 2003.
details
doi
pdf
Hsuan-Huei Shih, Shrikanth S. Narayanan, and C.-C. Jay Kuo. Automatic main melody extraction from MIDI files with a modified Lempel-Ziv Algorithm. In Proceedings of the International Symposium on Intelligent Multimedia, Video and Speech Processing, pp. 9–12, Kowloon Shangri-La, Hong Kong, may 2001.
details
doi
pdf
Hsuan-Huei Shih, Shrikanth S. Narayanan, and C.-C. Jay Kuo. Music indexing with extracted main melody by using modified Lempel-Ziv algorithm. In Proceedings of the International Symposium on The Convergence of Information Technologies and Communications (ITCom), pp. 124–135, Denver, CO, aug 2001.
details
doi
pdf
JongHo Shin, Panayiotis Georgiou, and Shrikanth S. Narayanan. Analyzing the Multimodal Behaviors of Users of a Speech-to-Speech Translation Device by Using Concept Matching Scores. In Proceedings of the IEEE International Workshop on Multimedia Signal Processing (MMSP), pp. 259–263, Chania, Greece, October 2007.
details
doi
pdf
JongHo Shin, Panayiotis Georgiou, and Shrikanth S. Narayanan. Enabling Effective Design of Multimodal Interfaces for Speech-to-Speech Translation System: An Empirical Study of Longitudinal User Behaviors over Time and User Strategies for Coping with Errors. Computer, Speech, and Language, 27(2):554–571, feb 2013.
details
doi
pdf
Jorge Silva and Shrikanth S. Narayanan. Universal consistency of data-driven partitions for divergence estimation. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), pp. 2021–2025, Nice, France, jun 2007.
details
doi
pdf
Jorge Silva and Shrikanth S. Narayanan. Histogram-based estimation for the divergence revisited. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), pp. 468–472, Seoul, Korea, jun 2009.
details
doi
pdf
Jorge Silva and Shrikanth S. Narayanan. Upper bound Kullback-Leibler divergence for hidden Markov models with application as discrimination measure for speech recognition. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), pp. 2299–2303, Seattle, WA, jul 2006.
details
doi
pdf
Jorge Silva and Shrikanth S. Narayanan. Minimum probability of error signal representation. In Proceedings of the IEEE Machine Learning for Signal Processing (MLSP) Workshop, pp. 348–353, Thessaloniki, Greece, aug 2007.
details
doi
pdf
Jorge Silva and Shrikanth S. Narayanan. A Statistical Discrimination Measure for Hidden Markov Models Based on Divergence. In Proceedings of InterSpeech, pp. 657–660, Jeju Island, Korea, October 2004.
details
pdf
Jorge Silva and Shrikanth S. Narayanan. Optimal wavelet packets decomposition based on a rate-distortion optimality criterion. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 817–820, Honolulu, Hawaii, apr 2007.
details
doi
pdf
Jorge Silva and Shrikanth S. Narayanan. Average divergence distance as a statistical discrimination measure for hidden Markov models. IEEE Transactions on Audio, Speech, and Language Processing, 14(3):890–906, may 2006.
details
doi
pdf
Jorge Silva and Shrikanth S. Narayanan. A Near-Optimal (Minimax) Tree-Structured Partition for Mutual Information Estimation. In Proceedings of The IEEE International Symposium on Information Theory (ISIT), jun 2010.
details
doi
pdf
Jorge Silva and Shrikanth S. Narayanan. Non-Product Data-Dependent Partitions for Mutual Information Estimation: Strong Consistency and Applications. IEEE Transactions on Signal Processing, 58(7):3497–3511, jul 2010.
details
doi
pdf
Jorge Silva and Shrikanth S. Narayanan. On Data-Driven Histogram-Based Estimation for Mutual Information. In Proceedings of The IEEE International Symposium on Information Theory (ISIT), jun 2010.
details
doi
pdf
Jorge Silva and Shrikanth S. Narayanan. Nearly Optimal Estimation of Mutual Information based on a Complexity Regularized Tree-Structured Partition. IEEE Transactions on Information Theory, 58(3):1940 – 1952, mar 2012.
details
doi
pdf
Justin D Smith, Cady Berkel, Neil Jordan, David C Atkins, Shrikanth S Narayanan, Carlos Gallo, Kevin J Grimm, Thomas J Dishion, Anne M Mauricio, Jenna Rudo-Stern, Mariah K Meachum, Emily Winslow, and Meg M Bruening. An individually tailored family-centered intervention for pediatric obesity in primary care: Study protocol of a randomized type II hybrid implementation–effectiveness trial (Raising Healthy Children study). Implementation Science, 13(11):1–15, January 2018.
details
doi
pdf
Krishna Somandepalli, Asterios Toutios, and Shrikanth Narayanan. Semantic Edge Detection for Tracking Vocal Tract Air-tissue Boundaries in Real-time Magnetic Resonance Images. In Proceedings of Interspeech, August 2017.
details
Naveen Srinivasamurthy, Antonio Ortega, and Shrikanth S. Narayanan. Enhanced standard compliant distributed speech recognition (AURORA encoder) using rate allocation. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 485–488, Montreal, Canada, may 2004.
details
doi
pdf
Naveen Srinivasamurthy, Shrikanth S. Narayanan, and Antonio Ortega. Use of model transformations for distributed speech recognition. In Proceedings of the International Speech Communication Association (ISCA) Workshop on Adaptation Methods for Speech Recognition, pp. 113–116, Sophia Antipolis, France, aug 2001.
details
pdf
S. Subramanyam and Shrikanth S. Narayanan. Loading Effects on Indian Musical Drums: An Acoustic Analysis. In Proceedings of the Materials Research Society, San Francisco, CA, November 1993.
details
Shiva Sundaram and Shrikanth S. Narayanan. An attribute-based approach to audio description applied to segmenting vocal sections in popular music songs. In Proceedings of the International Workshop on Multimedia Signal Processing (MMSP), pp. 103–107, Victoria, Canada, oct 2006.
details
doi
pdf
Shiva Sundaram and Shrikanth S. Narayanan. Vector-based representation and clustering of audio using onomatopoeia words. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI) Fall Symposium, Arlington, VA, oct 2006.
details
pdf
Shiva Sundaram and Shrikanth S. Narayanan. A divide-and-conquer approach to latent perceptual indexing of audio for large web 2.0 applications. In Proceedings of the International Conference on Multimedia & Expo (ICME), pp. 466–469, Cancun, Mexico, jun 2009.
details
doi
pdf
Shiva Sundaram and Shrikanth S. Narayanan. Experiments in automatic genre classification of full-length music tracks using audio activity rate. In Proceedings of the IEEE International Workshop on Multimedia Signal Processing (MMSP), pp. 98–102, Chania, Greece, oct 2007.
details
doi
pdf
Shiva Sundaram and Shrikanth S. Narayanan. Audio retrieval by latent perceptual indexing. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 49–52, Las Vegas, Nevada, apr 2008.
details
doi
pdf
Shiva Sundaram and Shrikanth S. Narayanan. Classification of sound clips by two schemes: using onomatopoeia and semantic labels. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), pp. 1341–1344, Hannover, Germany, jun 2008.
details
doi
pdf
Shiva Sundaram and Shrikanth S. Narayanan. Analysis of audio clustering using word descriptions. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 769–772, Honolulu, Hawaii, apr 2007.
details
doi
pdf
Shiva Sundaram and Shrikanth S. Narayanan. Discriminating Two Types of Noise Sources Using Cortical Representation and Dimension Reduction Technique. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 213–216, vol. 1, Honolulu, Hawaii, April 2007.
details
doi
pdf
Shiva Sundaram and Shrikanth S. Narayanan. Spoken language synthesis: Experiments in synthesis of spontaneous monologues. In Proceedings of the IEEE Speech Synthesis Workshop, pp. 203–206, Santa Monica, CA, sep 2002.
details
doi
pdf
Shiva Sundaram and Shrikanth S. Narayanan. An empirical text transformation method for spontaneous speech synthesizers. In Proceedings of InterSpeech, pp. 1221–1224, Geneva, Switzerland, sep 2003.
details
pdf
Qun Feng Tan and Shrikanth S. Narayanan. Novel Variations of Group Sparse Regularization Techniques with Applications to Noise Robust Automatic Speech Recognition. IEEE Transactions on Audio, Speech and Language Processing, 20(4):1337–1346, may 2012.
details
doi
pdf
Qun Feng Tan and Shrikanth S. Narayanan. Combining Window Predictions Efficiently: A New Imputation Approach for Noise Robust Automatic Speech Recognition. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), may 2013.
details
doi
pdf
Joseph Tepperman and Shrikanth S. Narayanan. Hidden-articulator Markov models for pronunciation evaluation. In Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pp. 174–179, San Juan, Puerto Rico, nov 2005.
details
doi
pdf
Joseph Tepperman and Shrikanth S. Narayanan. Automatic syllable stress detection using prosodic features for pronunciation evaluation of language learners. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 937–940, Philadelphia, PA, mar 2005.
details
doi
pdf
Joseph Tepperman and Shrikanth S. Narayanan. Using articulatory representations to detect segmental errors in nonnative pronunciation. IEEE Transactions on Audio, Speech, and Language Processing, 16(1):8–22, jan 2008.
details
doi
pdf
Samuel Thomas, George Saon, Maarten Van Segbroeck, and Shrikanth S. Narayanan. Improvements to the IBM speech activity detection system for the DARPA RATS program. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), apr 2015.
details
doi
pdf
Asterios Toutios and Shrikanth S. Narayanan. Factor Analysis Of Vocal-tract Outlines Derived From Real-time Magnetic Resonance Imaging Data. In Proceedings of International Congress of Phonetic Sciences (ICPhS 2015), aug 2015.
details
pdf
Asterios Toutios and Shrikanth S. Narayanan. Advances in Real-time Magnetic Resonance Imaging of the Vocal Tract for Speech Science and Technology Research. APSIPA Transactions on Signal and Information Processing, 5:e6, Cambridge Univ Press, 2016.
details
doi
pdf
Ruchir Travadi, Maarten Van Segbroeck, and Shrikanth Narayanan. Modified-prior i-Vector Estimation for Language Identification of Short Duration Utterances. In Proceedings of Interspeech, sep 2014.
details
pdf
Johannes Töger, Tanner Sorensen, Krishna Somandepalli, Asterios Toutios, Sajan Goud Lingala, Shrikanth Narayanan, and Krishna Nayak. Test–retest repeatability of human speech biomarkers from static and real-time dynamic magnetic resonance imaging. The Journal of the Acoustical Society of America, 141(5):3323–3336, May 2017.
details
doi
pdf
Erdem Unal, Shrikanth S. Narayanan, and Elaine Chew. A statistical approach to retrieval under user-dependent uncertainty in query-by-humming systems. In Proceedings of the ACM SIGMM International Workshop on Multimedia Information Retrieval (MIR), pp. 113–118, New York City, NY, oct 2004.
details
doi
pdf
Maarten Van Segbroeck and Shrikanth S. Narayanan. A robust frontend for ASR: combining denoising, noise masking and feature normalization. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), may 2013.
details
doi
pdf
Maarten Van Segbroeck, Ruchir Travadi, and Shrikanth Narayanan. UBM Fused Total Variability Modeling for Language Identification. In Proceedings of Interspeech, pp. 3027–3031, September 2014.
details
pdf
Maarten Van Segbroeck, Allison Knoll, Pat Levitt, and Shrikanth Narayanan. MUPET - Mouse Ultrasonic Profile ExTraction: A signal processing tool for rapid and unsupervised analysis of ultrasonic vocalizations. Neuron, 94:465–485, March 2017.
details
doi
pdf
Colin Vaz, Andreas Tsiartas, and Shrikanth Narayanan. Energy-Constrained Minimum Variance Response Filter for Robust Vowel Spectral Estimation. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), may 2014.
details
doi
pdf
Colin Vaz, Dimitrios B Dimitriadis, and Shrikanth Narayanan. Enhancing Audio Source Separability Using Spectro-Temporal Regularization with NMF. In Proceedings of Interspeech, sep 2014.
details
pdf
Colin Vaz, Dimitrios B Dimitriadis, Samuel Thomas, and Shrikanth S. Narayanan. CNMF-Based Acoustic Features for Noise-robust ASR. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5735–5739, March 2016.
details
doi
pdf
Marilyn Walker, Jeanne Fromer, and Shrikanth S. Narayanan. Learning optimal dialogue strategies: A case study of a spoken dialogue agent for email. In Proceedings of the International Committee on Computational Linguistics and the Association for Computational Linguistics (COLING/ACL), pp. 1345–1351, Montreal, Canada, aug 1998.
details
doi
pdf
Dagen Wang and Shrikanth S. Narayanan. A multi-pass linear fold algorithm for sentence boundary detection using prosodic cues. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 525–528, Montreal, Canada, may 2004.
details
doi
pdf
Dagen Wang and Shrikanth S. Narayanan. A confidence-score based unsupervised MAP adaptation for speech recognition. In Proceedings of the Asilomar Conference on Signals, Systems and Computers, pp. 222–226, Pacific Grove, CA, nov 2002.
details
doi
pdf
Dagen Wang and Shrikanth S. Narayanan. Piecewise linear stylization of pitch via wavelet analysis. In Proceedings of InterSpeech, pp. 3277–3280, Lisbon, Portugal, oct 2005.
details
pdf
Dagen Wang and Shrikanth S. Narayanan. An unsupervised quantitative measure for word prominence in spontaneous speech. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 377–380, Philadelphia, PA, mar 2005.
details
doi
pdf
Martin Wöllmer, Angeliki Metallinou, Florian Eyben, Björn Schuller, and Shrikanth S. Narayanan. Context-Sensitive Multimodal Emotion Recognition from Speech and Facial Expression Using Bidirectional LSTM Modeling. In Proceedings of InterSpeech, pp. 2362–2365, Makuhari, Japan, September 2010.
details
pdf
Zhaojun Yang and Shrikanth S. Narayanan. Modeling mutual influence of multimodal behavior in affective dyadic interactions. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), apr 2015.
details
doi
pdf
Zhaojun Yang and Shrikanth S. Narayanan. Lightly-Supervised Utterance-Level Emotion Identification Using Latent Topic Modeling of Multimodal Words. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), mar 2016.
details
doi
pdf
Zhaojun Yang, Boqing Gong, and Shrikanth Narayanan. Weighted Geodesic Flow Kernel for Interpersonal Mutual Influence Modeling and Emotion Recognition in Dyadic Interactions. In Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction, October 2017.
details
Serdar Yildirim and Shrikanth S. Narayanan. An information-theoretic analysis of developmental changes in speech. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. I-480–I-483, Hong Kong, apr 2003.
details
doi
pdf
Serdar Yildirim and Shrikanth S. Narayanan. Recognizing child’s emotional state in problem-solving child-machine interactions. In Proceedings of the Workshop on Child, Computer and Interaction, Cambridge, MA, nov 2009.
details
doi
pdf
Serdar Yildirim and Shrikanth S. Narayanan. Automatic detection of disfluency boundaries in spontaneous speech of children using audio-visual information. IEEE Transactions on Audio, Speech, and Language Processing, 17(1):2–12, jan 2009.
details
doi
pdf