Shanti Stewart, Kleanthis Avramidis, Tiantian Feng, and Shrikanth Narayanan. Emotion-Aligned Contrastive Learning Between Images and Music. In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8135–8139, April 2024.


Abstract

Traditional music search engines rely on retrieval methods that match natural language queries with music metadata. There have been increasing efforts to expand retrieval methods to consider the audio characteristics of music itself, using queries of various modalities including text, video, and speech. While most approaches aim to match general music semantics to the input queries, only a few focus on affective qualities. In this work, we address the task of retrieving emotionally-relevant music from image queries by learning an affective alignment between images and music audio. Our approach focuses on learning an emotion-aligned joint embedding space between images and music. This embedding space is learned via emotion-supervised contrastive learning, using an adapted cross-modal version of the SupCon loss. We evaluate the joint embeddings through cross-modal retrieval tasks (image-to-music and music-to-image) based on emotion labels. Furthermore, we investigate the generalizability of the learned music embeddings via automatic music tagging. Our experiments show that the proposed approach successfully aligns images and music, and that the learned embedding space is effective for cross-modal retrieval applications.
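The emotion-supervised, cross-modal SupCon objective described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function name, the temperature value, and the use of NumPy are assumptions. Each anchor embedding (e.g. an image) is pulled toward all candidate embeddings from the other modality (e.g. music clips) that share its emotion label, and pushed away from the rest.

```python
import numpy as np

def cross_modal_supcon(anchors, candidates, anchor_labels, cand_labels, tau=0.1):
    """SupCon-style cross-modal contrastive loss (illustrative sketch).

    anchors:      (N, d) embeddings from one modality (e.g. images)
    candidates:   (M, d) embeddings from the other modality (e.g. music)
    anchor_labels, cand_labels: integer emotion labels per embedding
    tau:          softmax temperature (hypothetical default)
    """
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)

    logits = a @ c.T / tau                              # (N, M) similarity logits
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # positives: cross-modal pairs sharing the same emotion label
    pos_mask = anchor_labels[:, None] == cand_labels[None, :]

    # average log-probability over each anchor's positive set, then negate
    per_anchor = (log_prob * pos_mask).sum(axis=1) / np.maximum(pos_mask.sum(axis=1), 1)
    return -per_anchor.mean()
```

In the paper's setting this loss would be applied symmetrically (image-to-music and music-to-image) so that both encoders map into the shared emotion-aligned embedding space.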

BibTeX Entry

@INPROCEEDINGS{10447276,
  author={Stewart, Shanti and Avramidis, Kleanthis and Feng, Tiantian and Narayanan, Shrikanth},
  booktitle={ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={Emotion-Aligned Contrastive Learning Between Images and Music},
  year={2024},
  month={April},
  pages={8135--8139},
  doi={10.1109/ICASSP48485.2024.10447276},
  issn={2379-190X},
  url={https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10447276},
  abstract={Traditional music search engines rely on retrieval methods that match natural language queries with music metadata. There have been increasing efforts to expand retrieval methods to consider the audio characteristics of music itself, using queries of various modalities including text, video, and speech. While most approaches aim to match general music semantics to the input queries, only a few focus on affective qualities. In this work, we address the task of retrieving emotionally-relevant music from image queries by learning an affective alignment between images and music audio. Our approach focuses on learning an emotion-aligned joint embedding space between images and music. This embedding space is learned via emotion-supervised contrastive learning, using an adapted cross-modal version of the SupCon loss. We evaluate the joint embeddings through cross-modal retrieval tasks (image-to-music and music-to-image) based on emotion labels. Furthermore, we investigate the generalizability of the learned music embeddings via automatic music tagging. Our experiments show that the proposed approach successfully aligns images and music, and that the learned embedding space is effective for cross-modal retrieval applications.},
  keywords={Training;Semantics;Pipelines;Natural languages;Self-supervised learning;Tagging;Signal processing;Multimodal Learning;Contrastive Learning;Cross-Modal Retrieval;Music Information Retrieval}
}
