Show simple item record

dc.contributor.author  Martinez Arroyo, Miriam
dc.contributor.author  Bello Ambario, Vicente
dc.contributor.author  Montero Valverde, Jose Antonio
dc.contributor.author  De La Cruz Gamez, Eduardo
dc.contributor.author  Hernández Hernández, Mario
dc.contributor.author  Hernández-Hernández, José Luis
dc.creator  MARTINEZ ARROYO, MIRIAM; 68990
dc.creator  BELLO AMBARIO, VICENTE; 789865
dc.creator  MONTERO VALVERDE, JOSE ANTONIO; 253634
dc.creator  DE LA CRUZ GAMEZ, EDUARDO; 296447
dc.creator  Hernández Hernández, Mario;#0000-0001-8330-4779
dc.creator  Hernández-Hernández, José Luis;#0000-0003-0231-2019
dc.date.accessioned  2023-03-21T20:04:34Z
dc.date.available  2023-03-21T20:04:34Z
dc.date.issued  2021-10
dc.identifier.issn  https://doi.org/10.1007/978-3-030-88262-4_4
dc.identifier.uri  http://ri.uagro.mx/handle/uagro/3504
dc.description.abstract  The recognition and classification of human emotions through voice analysis is a very interesting research area because of its wide variety of applications: telecommunications, learning, human-computer interfaces, entertainment, etc. In this investigation, a methodology is proposed for recognizing emotions by analyzing voice segments. The methodology is mainly based on the fast Fourier transform (FFT) and Pearson's correlation coefficients. The tone (pitch), the fundamental frequency (F0), the strength of the voice signal (energy), and the speech rate have been identified as important indicators of emotion in the voice. The system consists of a graphical interface that allows user interaction through a microphone integrated into the computer and automatically processes the acquired data. Human beings naturally let their voices flow in multiple ways to communicate and to convey emotional states through them. Various investigations use the Berlin database, which is free and has been used by many researchers. However, an emotional corpus of Spanish phrases had to be created to obtain clearer test results. The corpus contains 16 phrases per emotion recorded by 11 users (9 women and 2 men), for a total of 880 audio samples. The following basic emotions were considered: disgust, anger, happiness, fear, and neutral. The results obtained indicate that the emotion recognition algorithm achieves 80% effectiveness.
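Note: the abstract describes an FFT-based feature pipeline (pitch/F0, energy, speech rate) combined with Pearson's correlation coefficients. The Python sketch below is only a hypothetical illustration of that kind of pipeline, not the authors' code; the function names, frame sizes, and the 50-400 Hz pitch search band are assumptions.

import numpy as np

def extract_features(signal, sample_rate, frame_size=1024, hop=512):
    # Frame-level features in the spirit of the abstract: short-time energy
    # and a crude FFT-based fundamental-frequency (F0) estimate per frame.
    window = np.hanning(frame_size)
    feats = []
    for start in range(0, len(signal) - frame_size, hop):
        frame = signal[start:start + frame_size] * window
        spectrum = np.abs(np.fft.rfft(frame))                # FFT magnitude
        freqs = np.fft.rfftfreq(frame_size, d=1.0 / sample_rate)
        energy = float(np.sum(frame ** 2))                   # signal strength
        band = (freqs >= 50) & (freqs <= 400)                # assumed pitch range
        f0 = float(freqs[band][np.argmax(spectrum[band])])   # strongest peak as F0
        feats.append([energy, f0])
    return np.asarray(feats)

def pearson_score(track_a, track_b):
    # Pearson correlation between two equal-length feature tracks, e.g. the
    # F0 contour of a test utterance against a stored emotion template
    # (a plausible use of the correlation coefficients mentioned above).
    n = min(len(track_a), len(track_b))
    return float(np.corrcoef(track_a[:n], track_b[:n])[0, 1])

A simple classifier built on this sketch could assign to an utterance the emotion whose stored template maximizes pearson_score.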
dc.format  pdf
dc.language.iso  eng
dc.publisher  Communications in Computer and Information Science
dc.rights.uri  http://creativecommons.org/licenses/by-nc-nd/4.0
dc.subject  Emotional state
dc.subject  Parameterization
dc.subject  Statistical models
dc.subject  Pattern recognition
dc.subject.classification  INGENIERÍA Y TECNOLOGÍA::CIENCIAS TECNOLÓGICAS::TECNOLOGÍA DE LAS TELECOMUNICACIONES::ONDAS ELECTROMAGNÉTICAS
dc.title  Emotional Corpus, Feature Extraction and Emotion Classification Using the Parameterized Voice Signal
dc.type  Artículo
dc.type.conacyt  article
dc.rights.acces  openAccess
dc.audience  generalPublic
dc.identificator  7||33||3325||220204
dc.format.digitalOrigin  Born digital
dc.thesis.degreelevel  Doctorado
dc.thesis.degreename  Doctorado en Innovación y Cultura Digital
dc.thesis.degreegrantor  Universidad Autónoma de Guerrero
dc.thesis.degreedepartment  Facultad de Ingeniería
dc.thesis.degreediscipline  Ingeniería y Tecnología
dc.identifier.cvuagro  11228


Files in this item


This item appears in the following collection(s)


Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by-nc-nd/4.0