Show simple item record

dc.contributor.advisor: Jotwani, Naresh D.
dc.contributor.author: Singh, Archana
dc.date.accessioned: 2017-06-10T14:37:02Z
dc.date.available: 2017-06-10T14:37:02Z
dc.date.issued: 2006
dc.identifier.citation: Singh, Archana (2006). Speech driven facial animation system. Dhirubhai Ambani Institute of Information and Communication Technology, x, 45 p. (Acc.No: T00084)
dc.identifier.uri: http://drsr.daiict.ac.in/handle/123456789/121
dc.description.abstract: This thesis addresses the problem of synthesizing an animated face driven by a new audio sequence, one not present in the previously recorded database. The main focus of the thesis is on exploring an efficient mapping of features from the speech domain to the video domain. The mapping algorithms consist of two parts: building a model that fits the training data set, and predicting the visual motion for novel audio stimuli. The motivation was to construct a direct mapping from low-level acoustic signals to visual frames. Unlike previous efforts at higher acoustic levels (phonemes or words), the current approach skips the audio recognition phase, in which it is difficult to obtain high recognition accuracy due to speaker and language variability.
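The two-part mapping the abstract describes (fit a model on paired audio/visual training data, then predict visual motion for novel audio) can be illustrated with a vector-quantization approach, one of the techniques listed in the subject keywords. The sketch below is a minimal toy illustration, not the thesis's actual method: the feature dimensions, codebook size, and randomly generated data are all assumptions for demonstration.

```python
import numpy as np

# Hypothetical toy data: paired audio and visual feature vectors per frame.
rng = np.random.default_rng(0)
audio_train = rng.normal(size=(200, 13))   # e.g. MFCC-like audio features (assumed)
visual_train = rng.normal(size=(200, 6))   # e.g. mouth-shape parameters (assumed)

def build_codebook(audio, visual, k=8, iters=20):
    """Training step: k-means VQ on audio features; each audio code is
    associated with the mean visual vector of the frames assigned to it."""
    centers = audio[rng.choice(len(audio), k, replace=False)]
    for _ in range(iters):
        # Assign each audio frame to its nearest codebook center.
        d = np.linalg.norm(audio[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned frames.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = audio[labels == j].mean(axis=0)
    vis_codes = np.stack([
        visual[labels == j].mean(axis=0) if np.any(labels == j)
        else visual.mean(axis=0)
        for j in range(k)
    ])
    return centers, vis_codes

def predict_visual(audio_frames, centers, vis_codes):
    """Prediction step: map novel audio frames to visual parameters by
    nearest-codeword lookup, skipping any phoneme/word recognition phase."""
    d = np.linalg.norm(audio_frames[:, None] - centers[None], axis=2)
    return vis_codes[d.argmin(axis=1)]

centers, vis_codes = build_codebook(audio_train, visual_train)
pred = predict_visual(rng.normal(size=(5, 13)), centers, vis_codes)
print(pred.shape)  # (5, 6): one visual parameter vector per novel audio frame
```

A real system would smooth the predicted visual trajectory across frames to handle co-articulation, which per-frame lookup alone cannot capture.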
dc.publisher: Dhirubhai Ambani Institute of Information and Communication Technology
dc.subject: Co-articulation
dc.subject: Facial animation system
dc.subject: Frame
dc.subject: Gaussian mixture model
dc.subject: Hidden Markov model
dc.subject: Speech recognition
dc.subject: Vector quantization
dc.subject: Viseme
dc.subject: Viterbi algorithm
dc.classification.ddc: 621.3994 SIN
dc.title: Speech driven facial animation system
dc.type: Dissertation
dc.degree: M. Tech
dc.student.id: 200411023
dc.accession.number: T00084


