
    Self-Supervised Speech Representation for Speech Recognition

    File
    202015004.pdf (1.896 MB)
    Date
    2022
    Author
    Chaturvedi, Shreya Sanjay
    Abstract
    Voice Assistants (VAs) are nowadays an integral part of human life. Low-resource applications of VAs, such as regional languages, children's speech, and medical conversations, are key challenges in their development. Broadly, a VA consists of three parts: Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and a Text-to-Speech (TTS) model. This thesis focuses on one of them, ASR; in particular, it targets the optimization of low-resource ASR for children's speech.

    Initially, a data augmentation technique was proposed to improve the performance of an isolated hybrid DNN-HMM ASR system for children's speech. A CycleGAN-based augmentation technique was used, in which child-to-child voice conversion is performed. For the conversion of speaker characteristics, the speech signals were categorized into two classes based on a fundamental-frequency threshold. A detailed experimental analysis of various augmentation methods, such as SpecAugment, speed perturbation, and volume perturbation, was also carried out with respect to ASR performance.

    Further, to optimize low-resource ASR, self-supervised learning with wav2vec 2.0 was explored. This is a semi-supervised approach in which pre-training is performed on unlabelled data and the model is then fine-tuned on labelled data. In addition, Noisy Student-Teacher (NST) learning was fused with the self-supervised technique. The key achievement of this work was the efficient use of unlabelled data: even though the process involves iterative training, redundant training was negligible. Pseudo-labelled data were filtered before being used for fine-tuning, and after Acoustic Model (AM) decoding a Language Model (LM) was also applied to improve performance.

    Additional work was done on replay Spoofed Speech Detection (SSD), where the significance of the Delay-and-Sum (DAS) beamformer was investigated against the state-of-the-art Minimum Variance Distortionless Response (MVDR) beamforming technique.
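
    The abstract states that, for the CycleGAN-based voice conversion, utterances were split into two classes by a fundamental-frequency (F0) threshold. The sketch below illustrates one way such a split could be computed; the 300 Hz threshold, the directory layout, and the use of librosa's pYIN estimator are illustrative assumptions, not details taken from the thesis.

        # Illustrative sketch only: group children's utterances into "low-F0" and
        # "high-F0" classes before CycleGAN-based child-to-child voice conversion.
        # The 300 Hz threshold and file layout are assumptions, not thesis values.
        import glob
        import numpy as np
        import librosa

        F0_THRESHOLD_HZ = 300.0  # assumed split point for children's speech

        def mean_f0(wav_path, sr=16000):
            """Estimate the mean fundamental frequency of an utterance with pYIN."""
            y, sr = librosa.load(wav_path, sr=sr)
            f0, voiced_flag, voiced_prob = librosa.pyin(
                y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
            )
            return float(np.nanmean(f0))  # NaN frames are unvoiced; ignore them

        low_f0, high_f0 = [], []
        for path in glob.glob("children_speech/*.wav"):  # assumed directory layout
            (low_f0 if mean_f0(path) < F0_THRESHOLD_HZ else high_f0).append(path)

        # The two lists would then act as the source and target domains for
        # CycleGAN training (child-to-child voice conversion for data augmentation).
        print(len(low_f0), "low-F0 utterances,", len(high_f0), "high-F0 utterances")

    The split only needs a rough per-utterance F0 estimate, so any pitch tracker could stand in for pYIN here.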
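    The abstract also notes that pseudo-labelled data were filtered before fine-tuning in the Noisy Student-Teacher setup. A minimal sketch of such a filter follows; the data structure, the per-token confidences, and the 0.9 threshold are assumptions for illustration, not the thesis's actual criterion.

        # Illustrative sketch of a pseudo-label filter for Noisy Student-Teacher (NST)
        # training: keep only teacher transcriptions whose average per-token confidence
        # clears a threshold before they are used to fine-tune the student model.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class PseudoLabel:
            audio_path: str
            transcript: str
            token_confidences: List[float]  # per-token posteriors from the teacher ASR

        def filter_pseudo_labels(items, min_avg_conf=0.9):
            """Return the subset of pseudo-labelled utterances considered reliable."""
            kept = []
            for item in items:
                if not item.token_confidences:
                    continue  # nothing decoded; discard
                avg_conf = sum(item.token_confidences) / len(item.token_confidences)
                if avg_conf >= min_avg_conf:
                    kept.append(item)
            return kept

        # Example: only the first utterance survives the filter.
        batch = [
            PseudoLabel("utt1.wav", "hello world", [0.97, 0.95]),
            PseudoLabel("utt2.wav", "noisy guess", [0.42, 0.61]),
        ]
        print([p.audio_path for p in filter_pseudo_labels(batch)])
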
    URI
    http://drsr.daiict.ac.in//handle/123456789/1136
    Collections
    • M Tech (EC) Dissertations [17]
