Spoken affect classification using neural networks

Donn Morrison, Ruili Wang, Liyanage C. De Silva

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review

8 Citations (Scopus)

Abstract

This paper aims to build an affect recognition system by analysing acoustic speech signals. A database of 391 authentic emotional utterances was collected from 11 speakers. Two emotions, angry and neutral, were considered. Features relating to pitch, energy and rhythm were extracted and used as feature vectors for a neural network. Forward selection was employed to prune redundant and harmful inputs. Initial results show a classification rate of 86.1%.
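The following is a minimal sketch, not the authors' implementation, of the general approach the abstract describes: forward selection of acoustic features feeding a small neural network classifier for a two-class (angry vs. neutral) problem. The feature count, network size, and all hyperparameters below are illustrative assumptions, and the input matrix is synthetic placeholder data standing in for the pitch, energy and rhythm features.

```python
# Sketch: forward feature selection + neural network classifier
# (assumed setup; synthetic data stands in for extracted acoustic features)
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(391, 12))    # 391 utterances, 12 acoustic features (assumed count)
y = rng.integers(0, 2, size=391)  # 0 = neutral, 1 = angry

base_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)

# Forward selection: greedily add the feature that most improves
# cross-validated accuracy, pruning redundant or harmful inputs.
selector = SequentialFeatureSelector(
    base_net, n_features_to_select=6, direction="forward", cv=5
)

model = make_pipeline(StandardScaler(), selector, base_net)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")
```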

Original language: English
Title of host publication: 2005 IEEE International Conference on Granular Computing
Pages: 583-586
Number of pages: 4
DOIs
Publication status: Published - 2005
Externally published: Yes
Event: 2005 IEEE International Conference on Granular Computing - Beijing, China
Duration: 25 Jul 2005 - 27 Jul 2005

Publication series

Name: 2005 IEEE International Conference on Granular Computing
Volume: 2005

Conference

Conference: 2005 IEEE International Conference on Granular Computing
Country/Territory: China
City: Beijing
Period: 25/07/05 - 27/07/05

ASJC Scopus subject areas

  • General Engineering
