Please use this identifier to cite or link to this item: http://drsr.daiict.ac.in//handle/123456789/1136
Full metadata record
DC Field: Value
dc.contributor.advisor: Patil, Hemant A.
dc.contributor.advisor: Sailor, Hardik B.
dc.contributor.author: Chaturvedi, Shreya Sanjay
dc.date.accessioned: 2024-08-22T05:21:08Z
dc.date.available: 2024-08-22T05:21:08Z
dc.date.issued: 2022
dc.identifier.citation: Chaturvedi, Shreya Sanjay (2022). Self-Supervised Speech Representation for Speech Recognition. Dhirubhai Ambani Institute of Information and Communication Technology. xi, 81 p. (Acc. # T01056).
dc.identifier.uri: http://drsr.daiict.ac.in//handle/123456789/1136
dc.description.abstract: Voice Assistants (VAs) are nowadays an integral part of human life. Low-resource applications of VAs, such as regional languages, children's speech, and medical conversations, are among the key challenges faced during their development. From a broader perspective, a VA consists of three parts: Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS). This thesis focuses on the ASR part, and in particular on the optimization of low-resource ASR for children's speech. Initially, a data augmentation technique was proposed to improve the performance of an isolated hybrid DNN-HMM ASR system for children's speech. To this end, a CycleGAN-based augmentation technique was used, in which children-to-children voice conversion is performed. For this conversion of speaker characteristics, the speech signals were categorized into two classes based on a fundamental frequency (F0) threshold (a minimal code sketch of such a grouping follows this metadata record). A detailed experimental analysis of various augmentation techniques, such as SpecAugment, speed perturbation, and volume perturbation, was also carried out with respect to ASR performance. Further, to optimize low-resource ASR, self-supervised learning with wav2vec 2.0 was explored. It is a semi-supervised approach in which pre-training is performed on unlabelled data and fine-tuning on labelled data. In addition, Noisy Student-Teacher (NST) learning was fused with the self-supervised learning technique. The key achievement of this work was the efficient use of unlabelled data: even though the process involves iterative training, redundant training was negligible. The pseudo-labelled data were filtered before being used for fine-tuning, and a Language Model (LM) was applied after Acoustic Model (AM) decoding to further optimize performance. Additional work was done on replay Spoofed Speech Detection (SSD), where the significance of the Delay-and-Sum (DAS) beamformer was investigated against the state-of-the-art Minimum Variance Distortionless Response (MVDR) beamforming technique for replay SSD.
dc.publisher: Dhirubhai Ambani Institute of Information and Communication Technology
dc.subject: Automatic Speech Recognition
dc.subject: Data Augmentation
dc.subject: Self Supervised Learning
dc.subject: Noisy Student Teacher Learning
dc.subject: Replay Spoof Speech Detection
dc.classification.ddc: 006.454 CHA
dc.title: Self-Supervised Speech Representation for Speech Recognition
dc.type: Dissertation
dc.degree: M. Tech (EC)
dc.student.id: 202015004
dc.accession.number: T01056
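
Illustrative sketch (an assumption, not code from the dissertation): the abstract states that, before the CycleGAN-based children-to-children voice conversion, speech signals were grouped into two classes by a fundamental frequency (F0) threshold. The short Python sketch below shows one plausible way to perform such a split using librosa's pYIN pitch tracker; the 300 Hz threshold, the function names, and the file-list interface are hypothetical choices made only for illustration.

import numpy as np
import librosa

F0_THRESHOLD_HZ = 300.0  # assumed illustrative threshold; the abstract does not state the value used

def mean_f0(wav_path, sr=16000):
    """Estimate the mean voiced F0 of one utterance with librosa's pYIN tracker."""
    y, _ = librosa.load(wav_path, sr=sr)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    voiced = f0[voiced_flag]  # unvoiced frames are NaN and carry no pitch information
    return float(np.nanmean(voiced)) if voiced.size else 0.0

def split_by_f0(wav_paths):
    """Assign each utterance to a low-F0 or high-F0 class before voice conversion."""
    low_f0, high_f0 = [], []
    for path in wav_paths:
        (high_f0 if mean_f0(path) >= F0_THRESHOLD_HZ else low_f0).append(path)
    return low_f0, high_f0

In such a setup, the two resulting lists would serve as the source and target groups for the CycleGAN-based voice conversion described in the abstract.
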
Appears in Collections: M Tech (EC) Dissertations

Files in This Item:
File            Size     Format
202015004.pdf   1.94 MB  Adobe PDF

