Music Based Mood Detection

dc.contributor.author Ishraq, Shahid
dc.contributor.author Kamal, Zia Uddin
dc.date.accessioned 2021-10-06T05:25:06Z
dc.date.available 2021-10-06T05:25:06Z
dc.date.issued 2017-11-15
dc.identifier.citation Aucouturier, J.-J., & Pachet, F. (2002). Music Similarity Measures: What's The Use?
Audio Music Mood Classification Results. (n.d.). Retrieved 20 January 2010, from Audio Music Mood Classification Results - MIREX 2008: http://www.music-ir.org/mirex/2008/index.php/Audio_Music_Mood_Classification_Results#MIREX_2008_Audio_Mood_Classification_Run_Times
de Cheveigné, A., & Kawahara, H. (2002). YIN, a fundamental frequency estimator for speech and music. Acoustical Society of America.
Euclidean distance. (n.d.). Retrieved 22 January 2010, from Wikipedia, the free encyclopedia: http://en.wikipedia.org/wiki/Euclidean_distance
jAudio. (n.d.). Retrieved 19 January 2010, from jAudio: http://jmir.sourceforge.net/jAudio.html
Laar, B. v. (2005). Emotion detection in music, a survey.
Lartillot, O. (2008). MIRtoolbox 1.1 User's Manual. Jyväskylä, Finland.
Lartillot, O., Toiviainen, P., & Eerola, T. (n.d.). Department of Music: MIRtoolbox. Retrieved 20 January 2010, from Jyväskylä University: https://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox
Lu, L., Liu, D., & Zhang, H.-J. (2006). Automatic Mood Detection and Tracking of Music Audio Signals. IEEE Transactions on Audio, Speech and Language Processing, 14(1).
Marsyas. (n.d.). Retrieved 20 January 2010, from About: http://marsyas.info/about/overview
McKay, C. (n.d.). jAudio: Towards a standardized extensible audio music feature extraction system.
Meyers, O. C. (2007). A Mood-Based Music Classification and Exploration System.
Overview. (n.d.). Retrieved 20 January 2010, from Marsyas: http://marsyas.info/about/overview
Paiva, R. P. (2006). Melody Detection in Polyphonic Audio.
Pampalk, E. (2005). Tutorial: Music Similarity. ISMIR.
Pauws, S., & Eggen, B. (2002). PATS: Realization and User Evaluation of an Automatic Playlist Generator.
Ribeiro, B. (2009). Pattern Recognition Techniques slides.
Scherrer, B. (2007). Gaussian Mixture Model Classifier. en_US
dc.identifier.uri http://hdl.handle.net/123456789/1097
dc.description Supervised by Hasan Mahmud, Assistant Professor, Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur-1704, Bangladesh. en_US
dc.description.abstract Music mood describes the inherent emotional expression of a music clip. It is helpful in music understanding, music retrieval, and other music-related applications. In this paper, a hierarchical framework is presented to automate the task of mood detection from acoustic music data, following music-psychological theories from Western cultures. The hierarchical framework has the advantage of emphasizing the most suitable features in different detection tasks. Three feature sets, covering intensity, timbre, and rhythm, are extracted to represent the characteristics of a music clip. The intensity feature set is represented by the energy in each sub-band; the timbre feature set is composed of spectral shape features and spectral contrast features; and the rhythm feature set captures three aspects closely related to an individual's mood response: rhythm strength, rhythm regularity, and tempo. Furthermore, since mood usually changes over an entire piece of classical music, the approach is extended from mood detection to mood tracking by dividing a piece into several independent segments, each containing a homogeneous emotional expression. Preliminary evaluations indicate that the proposed algorithms produce satisfactory results. On our testing database of 800 representative music clips, the average accuracy of mood detection reaches 86.3%, and on average 84.1% of the mood boundaries in nine test music pieces are recalled. A method is also proposed for detecting the emotions of song lyrics based on an affective lexicon. The lexicon is composed of words translated from ANEW together with words selected by other means. For each lyric sentence, emotion units, each based on an emotion word in the lexicon, are identified, and the influence of modifiers and tenses on these units is taken into account. The emotion of a sentence is calculated from its emotion units. To identify the prominent emotions of a lyric, a fuzzy clustering method is used to group the lyric's sentences according to their emotions. The emotion of a cluster is computed from the emotions of its sentences, taking each sentence's individual weight into account. Clusters are weighted according to the weights and confidences of their sentences, and the singing speed of each sentence is used to adjust the cluster weights. Finally, the emotion of the cluster with the highest weight is selected from the prominent emotions as the main emotion of the lyric. The performance of this approach is evaluated through an emotion-classification experiment on 400 song lyrics. The main idea of this work is to combine these two components, audio-based mood detection and lyric-based emotion detection; illustrative sketches of both follow this record. en_US
dc.language.iso en en_US
dc.publisher Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur-1704, Bangladesh en_US
dc.title Music Based Mood Detection en_US
dc.type Thesis en_US
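
The intensity feature set described in the abstract is built from per-sub-band energy. The following is a minimal sketch of that idea only, not the thesis implementation: the sub-band edges, frame length, hop size, and the function name subband_energy_features are illustrative assumptions, and the short-time spectrum is computed with plain NumPy rather than the MIR toolboxes cited above.

```python
# Minimal sketch of sub-band energy ("intensity") features.
# Sub-band edges, frame length, and hop size are illustrative assumptions.
import numpy as np

def subband_energy_features(signal, sr, band_edges_hz=(0, 200, 800, 3200, 11025),
                            frame_len=1024, hop=512):
    """Return an array of shape (n_frames, n_bands) of per-band spectral energies."""
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        band_energy = [power[(freqs >= lo) & (freqs < hi)].sum()
                       for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:])]
        feats.append(band_energy)
    return np.asarray(feats)

if __name__ == "__main__":
    sr = 22050
    t = np.linspace(0, 2.0, int(2.0 * sr), endpoint=False)
    # Synthetic clip (a low tone plus a brighter tone) standing in for a music excerpt.
    clip = 0.6 * np.sin(2 * np.pi * 110 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)
    features = subband_energy_features(clip, sr)
    print(features.shape)          # (n_frames, 4)
    print(features.mean(axis=0))   # average energy per sub-band over the clip
```

In a hierarchical scheme of the kind the abstract describes, intensity features like these would typically drive the coarse first-level decision, with timbre and rhythm features refining the result; the exact split used in the thesis is not reproduced here.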
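For the lyric side, the abstract describes grouping sentence-level emotions with fuzzy clustering and reading the main emotion off the heaviest cluster. The sketch below illustrates that flow under stated assumptions: each sentence is reduced to a hypothetical (valence, arousal) pair as if looked up in an ANEW-style lexicon, a plain NumPy fuzzy c-means stands in for whichever clustering the thesis uses, and the cluster weighting omits the modifier, tense, confidence, and singing-speed adjustments mentioned in the abstract.

```python
# Illustrative sketch, not the thesis code: fuzzy c-means over hypothetical
# per-sentence (valence, arousal) scores, then pick the heaviest cluster.
import numpy as np

def fuzzy_cmeans(points, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain NumPy fuzzy c-means; returns (cluster centers, membership matrix)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))        # standard fuzzy membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

if __name__ == "__main__":
    # Hypothetical per-sentence (valence, arousal) scores from a lexicon lookup.
    sentences = np.array([[0.80, 0.60], [0.70, 0.70], [0.75, 0.65],   # upbeat lines
                          [-0.60, -0.40], [-0.50, -0.50]])            # sadder lines
    centers, u = fuzzy_cmeans(sentences, n_clusters=2)
    cluster_weight = u.sum(axis=0)                # crude stand-in for the weighted
    main = centers[cluster_weight.argmax()]       # cluster scoring in the abstract
    print("main emotion (valence, arousal):", main)
```

Summing memberships is only a rough stand-in for the sentence-weight, confidence, and singing-speed adjustments the abstract describes; those would replace the simple column sum above.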

