Abstract
Music emotion recognition algorithms seek to automatically
classify music in terms of the emotion it expresses.
Typically, these approaches utilise low-level acoustical
features extracted from the digital music waveform. Research in
this area concentrates on the perception of expressed emotion
from the listener's perspective. This approach has been
criticised as limited in unpicking the many facets of emotional
communication between composer and listener (Miell, MacDonald
& Hargreaves 2005), as described, for example, in the lens
model of Juslin (2001). Acoustical analysis and classification
processes can be expanded to encompass aspects of this musical
communication model, with the potential to shed light on how
the composer conveys emotion and how this is reflected in the
acoustical characteristics of the music.
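A minimal sketch of the typical pipeline the abstract describes, i.e. low-level acoustic features fed to a supervised classifier, assuming the librosa and scikit-learn libraries. The file names, emotion labels, feature set, and SVM choice are illustrative assumptions, not drawn from the paper itself.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def low_level_features(path):
    """Summarise a waveform as a fixed-length low-level feature vector."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # brightness
    rms = librosa.feature.rms(y=y)                            # loudness
    frames = np.vstack([mfcc, centroid, rms])
    # Mean and standard deviation over time give one vector per track.
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])

# Hypothetical training data: (audio file, expressed-emotion label) pairs.
tracks = [("calm_piece.wav", "calm"), ("tense_piece.wav", "tense")]
X = np.array([low_level_features(path) for path, _ in tracks])
labels = [label for _, label in tracks]

clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict([low_level_features("new_piece.wav")]))
```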
Original language | English
---|---
Title of host publication | Proceedings of ICMPC-ESCOM, Thessaloniki, Greece, 2012
Editors | E. Cambouropoulos, C. Tsougras, P. Mavromatis, K. Pastiadis
Publisher | Aristotle University of Thessaloniki
Pages | 536
Number of pages | 1
Publication status | Published - 23 Jul 2012
Keywords
- emotion
- acoustic analysis
- composition