Emotion classification algorithms rely on a feature extraction process to create numerical representations of raw music information. The result is a feature vector that characterises the audio signal within a specific context or application. Frequency-domain analysis is the primary method of creating these features, which range from low-level acoustic parameters to high-level structural representations. Previous studies in music emotion classification have concentrated on feature sets for classical, film and folk music, with little attention given to western contemporary music. This may be due to the constraints of the composition and production techniques associated with this type of content, which make it difficult to extract meaningful features. Despite these issues, the ubiquity of the genre shows a clear need for work in this area. Furthermore, if an emotion classification system is to have 'real-world' value, it must be able to deal with this type of popular content.
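As an illustration of the kind of low-level, frequency-domain features the abstract refers to, the sketch below computes a minimal per-frame feature vector (spectral centroid, spectral rolloff and RMS energy) from a raw audio frame using NumPy. This is an assumed, generic example of frequency-domain feature extraction, not the feature set used in the paper itself.

```python
import numpy as np

def extract_features(signal, sample_rate):
    """Build a small low-level feature vector for one audio frame.

    Features (all assumed/illustrative, not the paper's own set):
      - spectral centroid: magnitude-weighted mean frequency (Hz)
      - spectral rolloff:  frequency below which 85% of the
                           cumulative magnitude lies (Hz)
      - RMS energy:        root-mean-square of the time-domain frame
    """
    # Window the frame and move to the frequency domain
    windowed = signal * np.hanning(len(signal))
    mags = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)

    # Spectral centroid: magnitude-weighted mean of the bin frequencies
    centroid = np.sum(freqs * mags) / np.sum(mags)

    # Spectral rolloff at 85% of the cumulative magnitude
    cumulative = np.cumsum(mags)
    rolloff = freqs[np.searchsorted(cumulative, 0.85 * cumulative[-1])]

    # RMS energy of the raw (unwindowed) frame
    rms = np.sqrt(np.mean(signal ** 2))

    return np.array([centroid, rolloff, rms])

# Example: one second of a pure 440 Hz tone at an 8 kHz sample rate;
# its spectral centroid should sit near 440 Hz.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
features = extract_features(tone, sr)
```

In a full system, such per-frame vectors would be aggregated over a track (e.g. by mean and variance) before being passed to a classifier.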
Publication status: Published - 1 Aug 2010
- music information retrieval
- music emotion classification
- contemporary music