Prashanth Gurunath Shivakumar, Panayiotis Georgiou, and Shrikanth S. Narayanan. Confusion2Vec 2.0: Enriching ambiguous spoken language representations with subwords. PLoS ONE, 17(3):1–20, Public Library of Science, March 2022.
Word vector representations enable machines to encode human language for spoken language understanding and processing. Confusion2vec, motivated by human speech production and perception, is a word vector representation which encodes ambiguities present in human spoken language in addition to semantic and syntactic information. Confusion2vec provides a robust spoken language representation by considering inherent human language ambiguities. In this paper, we propose a novel word vector space estimation by unsupervised learning on lattices output by an automatic speech recognition (ASR) system. We encode each word in the Confusion2vec vector space by its constituent subword character n-grams. We show that the subword encoding helps better represent the acoustic perceptual ambiguities in human spoken language via information modeled on lattice-structured ASR output. The usefulness of the proposed Confusion2vec representation is evaluated using analogy and word similarity tasks designed for assessing semantic, syntactic and acoustic word relations. We also show the benefits of subword modeling for acoustic ambiguity representation on the task of spoken language intent detection. The results significantly outperform existing word vector representations when evaluated on erroneous ASR outputs, providing improvements of up to 13.12% relative to the previous state-of-the-art in intent detection on the ATIS benchmark dataset. We demonstrate that Confusion2vec subword modeling eliminates the need for retraining/adapting the natural language understanding models on ASR transcripts.
@article{Shivakumar-PLOSOne-Confusion2Vec2,
  title = {Confusion2Vec 2.0: Enriching ambiguous spoken language representations with subwords},
  author = {Shivakumar, Prashanth Gurunath and Georgiou, Panayiotis and Narayanan, Shrikanth S.},
  journal = {PLoS ONE},
  publisher = {Public Library of Science},
  year = {2022},
  month = {03},
  volume = {17},
  number = {3},
  pages = {1-20},
  doi = {10.1371/journal.pone.0264488},
  url = {https://doi.org/10.1371/journal.pone.0264488},
  link = {http://sail.usc.edu/publications/files/Shivakumar-PLOSOne2022.pdf},
  bib2html_rescat = {speechlinks},
  abstract = {Word vector representations enable machines to encode human language for spoken language understanding and processing. Confusion2vec, motivated by human speech production and perception, is a word vector representation which encodes ambiguities present in human spoken language in addition to semantic and syntactic information. Confusion2vec provides a robust spoken language representation by considering inherent human language ambiguities. In this paper, we propose a novel word vector space estimation by unsupervised learning on lattices output by an automatic speech recognition (ASR) system. We encode each word in the Confusion2vec vector space by its constituent subword character n-grams. We show that the subword encoding helps better represent the acoustic perceptual ambiguities in human spoken language via information modeled on lattice-structured ASR output. The usefulness of the proposed Confusion2vec representation is evaluated using analogy and word similarity tasks designed for assessing semantic, syntactic and acoustic word relations. We also show the benefits of subword modeling for acoustic ambiguity representation on the task of spoken language intent detection. The results significantly outperform existing word vector representations when evaluated on erroneous ASR outputs, providing improvements of up to 13.12% relative to the previous state-of-the-art in intent detection on the ATIS benchmark dataset. We demonstrate that Confusion2vec subword modeling eliminates the need for retraining/adapting the natural language understanding models on ASR transcripts.}
}
Generated by bib2html.pl (written by Patrick Riley) on Wed Feb 19, 2025 09:51:49