dc.contributor.author | Saad, Fardin | |
dc.contributor.author | Shaheen, Md. Al-Amin | |
dc.date.accessioned | 2020-10-28T09:19:21Z | |
dc.date.available | 2020-10-28T09:19:21Z | |
dc.date.issued | 2019-11-15 | |
dc.identifier.citation | [1] Rajoo, Rajesvary, and Ching Chee Aun. "Influences of languages in speech emotion recognition: A comparative study using Malay, English and Mandarin languages." 2016 IEEE Symposium on Computer Applications & Industrial Electronics (ISCAIE). IEEE, 2016.
[2] Sudhakar, Rode Snehal, and Manjare Chandraprabha Anil. "Analysis of speech features for emotion detection: a review." 2015 International Conference on Computing Communication Control and Automation. IEEE, 2015.
[3] Noroozi, Fatemeh, Marina Marjanovic, Angelina Njegus, Sergio Escalera, and Gholamreza Anbarjafari. "A Study of Language and Classifier-independent Feature Analysis for Vocal Emotion Recognition." arXiv preprint arXiv:1811.08935 (2018).
[4] Kandali, Aditya Bihar, Aurobinda Routray, and Tapan Kumar Basu. "Emotion recognition from Assamese speeches using MFCC features and GMM classifier." TENCON 2008 - 2008 IEEE Region 10 Conference. IEEE, 2008.
[5] M. El Ayadi, M. S. Kamel and F. Karray, "Survey on speech emotion recognition: Features, classification schemes, and databases", Pattern Recognition, vol. 44, no. 3, pp. 572–587, March 2011.
[6] R. W. Picard, "Affective computing. Technical Report 321", MIT Media Laboratory Perceptual Computing Section, Cambridge, MA, USA, November 1995.
[7] M. D. Pell, S. Paulmann, C. Dara, A. Alasseri, S. A. Kotz, "Factors in the recognition of vocally expressed emotions: A comparison of four languages", Journal of Phonetics, vol. 37, no. 4, pp. 417–435, October 2009.
[8] M. Gjoreski, H. Gjoreski, A. Kulakov, "Machine Learning Approach for Emotion Recognition in Speech", Informatica, vol. 38, pp. 377–384, December 2014.
[9] Schuller, Björn, Gerhard Rigoll, and Manfred Lang. "Hidden Markov model-based speech emotion recognition." 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03). Vol. 2. IEEE, 2003.
[10] Dupuis, Kate, and M. Kathleen Pichora-Fuller. "Recognition of emotional speech for younger and older talkers: Behavioural findings from the Toronto Emotional Speech Set." Canadian Acoustics 39.3 (2011): 182-183.
[11] W. Hess, Pitch Determination of Speech Signals: Algorithms and Devices. Springer Science & Business Media, 2012, vol. 3.
[12] S.-H. Lee, T.-Y. Hsiao, and G.-S. Lee, "Audio–vocal responses of vocal fundamental frequency and formant during sustained vowel vocalizations in different noises," Hearing Research, vol. 324, pp. 1–6, 2015.
[13] E. Globerson, N. Amir, O. Golan, L. Kishon-Rabin, and M. Lavidor, "Psychoacoustic abilities as predictors of vocal emotion recognition," Attention, Perception, & Psychophysics, vol. 75, no. 8, pp. 1799–1810, 2013.
[14] J. Harrington, Phonetic Analysis of Speech Corpora. John Wiley & Sons, 2010.
[15] G. Chronaki, J. A. Hadwin, M. Garner, P. Maurage, and E. J. Sonuga-Barke, "The development of emotion recognition from facial expressions and non-linguistic vocalizations during childhood," British Journal of Developmental Psychology, vol. 33, no. 2, pp. 218–236, 2015.
[16] R. Allgood and P. Heaton, "Developmental change and cross-domain links in vocal and musical emotion recognition performance in childhood," British Journal of Developmental Psychology, vol. 33, no. 3, pp. 398–403, 2015.
[17] Y. Pan, P. Shen, and L. Shen, "Speech emotion recognition using support vector machine," International Journal of Smart Home, vol. 6, no. 2, pp. 101–108, 2012.
[18] P. Laukka, D. Neiberg, and H. A. Elfenbein, "Evidence for cultural dialects in vocal emotion expression: Acoustic classification within and across five nations." Emotion, vol. 14, no. 3, p. 445, 2014.
[19] Amer, Mohamed R., et al. "Emotion detection in speech using deep networks." 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014.
[20] M. R. Amer, B. Siddiquie, S. Khan, A. Divakaran, and H. Sawhney, "Multimodal fusion using dynamic hybrid models," in WACV, 2014.
[21] Yelin Kim, Honglak Lee, and Emily Mower Provost, "Deep learning for robust feature generation in audiovisual emotion recognition," in ICASSP, 2013.
[22] Bhatti, Muhammad Waqas, Yongjin Wang, and Ling Guan. "A neural network approach for human emotion recognition in speech." 2004 IEEE International Symposium on Circuits and Systems (IEEE Cat. No. 04CH37512). Vol. 2. IEEE, 2004.
[23] Shaukat, Arslan, and Ke Chen. "Exploring language-independent emotional acoustic features via feature selection." arXiv preprint arXiv:1009.0117 (2010).
[24] S. G. Koolagudi, S. Devliyal, B. Chawla, A. Barthwal, "Recognition of Emotions from Speech using Excitation Source Features", Procedia Engineering, vol. 38, pp. 3409–3417, 2012.
[25] M. Gjoreski, H. Gjoreski, A. Kulakov, "Machine Learning Approach for Emotion Recognition in Speech", Informatica, vol. 38, pp. 377–384, December 2014.
[26] P. Shen, Z. Changjun, X. Chen, "Automatic Speech Emotion Recognition Using Support Vector Machine", International Conference on Electronic & Mechanical Engineering and Information Technology, vol. 2, pp. 621–625, August 2011. | en_US |
dc.identifier.uri | http://hdl.handle.net/123456789/610 | |
dc.description | Supervised by Prof. Dr. Md Kamrul Hasan | en_US |
dc.description.abstract | Emotion recognition plays a major role in affective computing and adds value to machine intelligence. While the emotional state of a person can be expressed in different ways, such as facial expressions, gestures, movements and postures, recognition of emotion from speech has gathered more interest than the others. However, after years of research, recognizing the emotional state of individuals from their speech as accurately as possible still remains a challenging task. This motivates a study of the factors that influence Speech Emotion Recognition (SER), such as gender, culture, dialect, education, social status and age. The aim of this study is to investigate whether an SER system can identify the emotional state of a person regardless of the language used. To investigate the influence of language on SER, we explored how spoken expressions of six selected emotions (happiness, anger, sadness, neutral, fear & disgust) varied in two languages of interest: English and Bangla. In addition, the perceptual outcomes were studied to identify any advantage in speech emotion expressions produced by native speakers and also by bilingual speakers. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Department of Computer Science and Engineering, Islamic University of Technology, Gazipur, Bangladesh | en_US |
dc.title | Is Speech Emotion Recognition Language-independent? A Comparative Analysis of Speech Emotion Recognition using English and Bangla Languages | en_US |
dc.type | Thesis | en_US |