Haoqi Li, Brian Baucom, Shrikanth Narayanan, and Panayiotis Georgiou. Unsupervised Speech Representation Learning for Behavior Modeling using Triplet Enhanced Contextualized Networks. Computer Speech & Language, Article 101226, Elsevier, 2021.

Abstract

Speech encodes a wealth of information related to human behavior and has been used in a variety of automated behavior recognition tasks. However, extracting behavioral information from speech remains challenging, in part because of inadequate training data resources stemming from the often low occurrence frequencies of specific behavioral patterns. Moreover, supervised behavioral modeling typically relies on domain-specific construct definitions and corresponding manually-annotated data, making generalization across domains challenging. In this paper, we exploit the stationary properties of human behavior within an interaction and present a representation learning method to capture behavioral information from speech in an unsupervised way. We hypothesize that nearby segments of speech share the same behavioral context and hence map onto similar underlying behavioral representations. We present an encoder-decoder based Deep Contextualized Network (DCN) as well as a Triplet-Enhanced DCN (TE-DCN) framework to capture the behavioral context and derive a manifold representation in which speech frames with similar behaviors lie closer together while frames of different behaviors maintain larger distances. The models are trained on movie audio data and validated on diverse domains, including a couples therapy corpus and other publicly collected data (e.g., stand-up comedy). With encouraging results, our proposed framework shows the feasibility of unsupervised learning for cross-domain behavioral modeling.
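
The paper itself does not include code, but the triplet idea in the abstract can be illustrated compactly: two nearby segments of the same interaction are treated as an anchor-positive pair (assumed to share behavioral context), while a segment from a different interaction serves as the negative. The PyTorch sketch below is a rough illustration only; the encoder architecture, feature dimensions, margin, and sampling scheme are assumptions for demonstration, not the authors' TE-DCN configuration.

    # Illustrative sketch only: architecture, dimensions, and margin are
    # assumed values, not the configuration used in the paper.
    import torch
    import torch.nn as nn

    class SegmentEncoder(nn.Module):
        """Maps a frame-level acoustic feature sequence to a fixed-size embedding."""
        def __init__(self, feat_dim=40, hidden_dim=128, emb_dim=64):
            super().__init__()
            self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
            self.proj = nn.Linear(hidden_dim, emb_dim)

        def forward(self, x):                # x: (batch, frames, feat_dim)
            _, h = self.rnn(x)               # h: (1, batch, hidden_dim)
            return self.proj(h.squeeze(0))   # (batch, emb_dim)

    encoder = SegmentEncoder()
    triplet_loss = nn.TripletMarginLoss(margin=1.0)  # margin is an assumed value

    # Anchor and positive: nearby segments of the same interaction (assumed to
    # share behavioral context); negative: a segment from another interaction.
    anchor   = encoder(torch.randn(8, 100, 40))
    positive = encoder(torch.randn(8, 100, 40))
    negative = encoder(torch.randn(8, 100, 40))
    loss = triplet_loss(anchor, positive, negative)
    loss.backward()

Minimizing this loss pulls embeddings of behaviorally similar segments together and pushes dissimilar ones apart, which is the manifold property the abstract describes.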

BibTeX Entry

@article{li2021unsupervised,
  title={Unsupervised Speech Representation Learning for Behavior Modeling using Triplet Enhanced Contextualized Networks},
  author={Li, Haoqi and Baucom, Brian and Narayanan, Shrikanth and Georgiou, Panayiotis},
  journal={Computer Speech \& Language},
  pages={101226},
  year={2021},
  issn={0885-2308},
  doi={10.1016/j.csl.2021.101226},
  url={https://www.sciencedirect.com/science/article/pii/S0885230821000334},
  keywords={Behavior modeling, Unsupervised representation learning, Context information, Metric learning},
  publisher={Elsevier}
}
