Acoustic variables in the communication of composer emotional intent.

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    Music emotion recognition algorithms seek to automatically classify analysed music in terms of the emotion it expresses. Typically, these approaches utilise low-level acoustic features extracted from the digital music waveform. Research in this area concentrates on the perception of expressed emotion from the listener's perspective. This approach has been criticised as limited in its ability to unpick the many facets of emotional communication between composer and listener (Miell, MacDonald & Hargreaves, 2005), as defined, for example, in the lens model of Juslin (2001). The use of acoustic analysis and classification processes can be expanded to include further aspects of the musical communication model. This has the potential to shed light on how the composer conveys emotion, and how this is reflected in the acoustic characteristics of the music.
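    As an illustration only (not part of the published work), the low-level acoustic features the abstract refers to are typically computed frame by frame from the raw waveform. A minimal sketch of two common examples, frame-wise RMS energy and zero-crossing rate, using NumPy:

    ```python
    import numpy as np

    def frame_features(signal, frame_len=1024, hop=512):
        """Compute per-frame RMS energy and zero-crossing rate,
        two common low-level acoustic features used in music
        emotion recognition pipelines."""
        feats = []
        for start in range(0, len(signal) - frame_len + 1, hop):
            frame = signal[start:start + frame_len]
            # RMS energy: loudness-related feature
            rms = np.sqrt(np.mean(frame ** 2))
            # Zero-crossing rate: fraction of adjacent samples
            # whose sign changes; crudely tracks noisiness/brightness
            zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
            feats.append((rms, zcr))
        return feats

    # Example: one second of a 440 Hz sine sampled at 22050 Hz
    sr = 22050
    t = np.arange(sr) / sr
    signal = np.sin(2 * np.pi * 440 * t)
    feats = frame_features(signal)
    ```

    Real systems extract many more such descriptors (spectral centroid, MFCCs, etc.) and feed them to a classifier; the frame length and hop size here are arbitrary illustrative values.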
    Original language: English
    Title of host publication: Proceedings of ICMPC-ESCOM, Thessaloniki, Greece, 2012
    Editors: E. Cambouropoulos, C. Tsougras, P. Mavromatis, K. Pastiadis
    Pages: 536
    Number of pages: 1
    Publication status: Published - 23 Jul 2012

    Keywords

    • emotion
    • acoustic analysis
    • composition


    Cite this

    Knox, D., & Cassidy, G. (2012). Acoustic variables in the communication of composer emotional intent. In E. Cambouropoulos, C. Tsougras, P. Mavromatis, & K. Pastiadis (Eds.), Proceedings of ICMPC-ESCOM, Thessaloniki, Greece, 2012 (p. 536).