Volume 3
Issue 3
Year 2015
Mekhala Sridevi Sameera, A Satish Kumar, Kotte Sandeep
This paper addresses an important and fast-developing application area: the remote monitoring of elderly or ill people. Owing to the growing aged population, Human-Computer Intelligent Interaction (HCII) systems that help people live independently are regarded as useful tools, and in this context recognizing a person's emotional state and giving suitable feedback can play a crucial role. The purpose of a speech emotion recognition system is to automatically classify a speaker's utterances into seven emotional states: anger, boredom, disgust, fear, happiness, sadness and neutral. Emotions are classified separately for male and female speakers, since male and female voices occupy substantially different pitch ranges; this improves the interaction between humans and computers and thus enables human-computer intelligent interaction. The system is composed of two subsystems: 1) gender recognition (GR) and 2) emotion recognition (ER). As the reported numerical results show, it distinguishes a single emotion from all other possible ones. A speech-based emotion recognition system consists of four principal parts: feature extraction, feature selection, the database and classification. Current research focuses on finding powerful combinations of classifiers that increase classification accuracy in real-life speech emotion recognition applications. From the acoustic signal, this work computes pitch, short-time energy, zero crossing rate and Mel-frequency cepstral coefficients (MFCCs) and correlates them with the speaker's emotion; we also define these features and the corresponding feature extraction methods. Finally, the paper demonstrates how emotions can be distinguished from these features (or combinations of them) by testing on the Berlin emotion database.
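As an illustrative sketch of the feature extraction step, the snippet below computes frame-level short-time energy, zero crossing rate and an autocorrelation-based pitch estimate in plain NumPy. The frame length, hop size and pitch search range are assumptions rather than values taken from the paper, and the MFCCs would in practice come from an existing implementation (e.g. librosa.feature.mfcc).

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])

def short_time_energy(frames):
    """Mean squared amplitude per frame."""
    return np.mean(frames ** 2, axis=1)

def zero_crossing_rate(frames):
    """Fraction of consecutive samples whose sign changes, per frame."""
    return np.mean(np.diff(np.sign(frames), axis=1) != 0, axis=1)

def pitch_autocorr(frames, sr=16000, fmin=50.0, fmax=400.0):
    """Crude per-frame pitch estimate from the strongest autocorrelation peak."""
    lo, hi = int(sr / fmax), int(sr / fmin)
    pitches = []
    for f in frames:
        f = f - f.mean()
        ac = np.correlate(f, f, mode="full")[len(f) - 1:]   # lags 0..N-1
        lag = lo + int(np.argmax(ac[lo:hi]))
        pitches.append(sr / lag if ac[lag] > 0 else 0.0)
    return np.array(pitches)

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    x = np.sin(2 * np.pi * 220 * t)                  # 1 s synthetic 220 Hz tone
    frames = frame_signal(x)
    print(short_time_energy(frames)[:3])             # ~0.5 for a unit sine
    print(zero_crossing_rate(frames)[:3])
    print(pitch_autocorr(frames, sr)[:3])            # ~220 Hz
```

In a full system, per-utterance statistics (mean, variance, range) of these frame-level contours, together with the MFCCs, would form the feature vector passed on to feature selection and classification.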
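The two-stage GR/ER structure with one-vs-all emotion classification might then look like the following sketch. The abstract does not specify the classifier, so the linear SVM, the 20-dimensional features and the random training data here are stand-in assumptions purely for illustration.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

EMOTIONS = ["anger", "boredom", "disgust", "fear", "happiness", "sadness", "neutral"]
rng = np.random.default_rng(0)

# Placeholder data standing in for per-utterance feature vectors
# (statistics of pitch, energy, ZCR, MFCCs) with emotion labels.
X = {g: rng.normal(size=(70, 20)) for g in ("male", "female")}
y = {g: rng.integers(0, len(EMOTIONS), size=70) for g in ("male", "female")}

# One classifier bank per gender, each trained one-vs-rest so that every
# emotion is separated from all the others, as the abstract describes.
models = {g: OneVsRestClassifier(SVC(kernel="linear")).fit(X[g], y[g])
          for g in ("male", "female")}

def predict_emotion(features, gender):
    """Stage 2: route the utterance to the gender-specific classifier bank
    (stage 1, gender recognition, is assumed to have run upstream)."""
    return EMOTIONS[int(models[gender].predict(features.reshape(1, -1))[0])]

print(predict_emotion(rng.normal(size=20), "female"))
```

Splitting by gender before classification removes the pitch-range overlap between male and female voices that would otherwise confuse a single shared model.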
Computer Science and Engineering, Dhanekula Institute of Engineering & Technology, Vijayawada, India