
    Image Captioning Using Visual And Semantic Attention Mechanism

    View/Open
    201911012_Final MTT - Manish Khare.pdf (11.76Mb)
    Date
    2021
    Author
    Patel, Abhikumar
    Abstract
    Image captioning is the task of generating a natural-language description of an image. It has many applications across fields, including image indexing for content-based image retrieval, self-driving cars, assistance for visually impaired persons, and smart surveillance systems. It connects two major research communities: computer vision and natural language processing. The main challenges in image captioning are recognizing the important objects in an image, their attributes, and the visual relationships between them, and then generating a syntactically and semantically correct sentence. Currently, most image-captioning architectures are based on the encoder-decoder model, in which the image is first encoded with a CNN to obtain an abstract representation and then decoded with an RNN to produce the caption. I selected a base paper that applies visual attention over the image, attending to the most relevant region while generating each word of the caption. However, that work misses one important factor when generating the caption: the visual relationships between the objects present in the image. I therefore added a relationship-detector module to the model so that it considers these relationships. Combining this module with the existing Show, Attend and Tell model yields captions that account for the relationships between objects, which ultimately improves caption quality. I performed experiments on publicly available standard datasets: Flickr8k, Flickr30k, and MSCOCO.
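
    The abstract's encoder-decoder pipeline hinges on the soft visual-attention step from Show, Attend and Tell: at each decoding step, the RNN state scores every CNN region and a weighted context vector feeds the next-word prediction. Below is a minimal PyTorch sketch of that step, under stated assumptions: the class name SoftVisualAttention, the dimensions, and the 196-region (14x14) feature grid are illustrative choices, not the dissertation's actual code, and the relationship-detector module is not shown.

    import torch
    import torch.nn as nn

    class SoftVisualAttention(nn.Module):
        # Soft (deterministic) attention over CNN region features,
        # in the style of Show, Attend and Tell.
        def __init__(self, feat_dim, hidden_dim, attn_dim):
            super().__init__()
            self.feat_proj = nn.Linear(feat_dim, attn_dim)      # project image regions
            self.hidden_proj = nn.Linear(hidden_dim, attn_dim)  # project decoder state
            self.score = nn.Linear(attn_dim, 1)                 # scalar score per region

        def forward(self, features, hidden):
            # features: (batch, num_regions, feat_dim), e.g. a 14x14 = 196 CNN grid
            # hidden:   (batch, hidden_dim), current RNN decoder hidden state
            scores = self.score(torch.tanh(
                self.feat_proj(features) + self.hidden_proj(hidden).unsqueeze(1)
            )).squeeze(-1)                         # (batch, num_regions)
            alpha = torch.softmax(scores, dim=1)   # attention weights over regions
            context = (alpha.unsqueeze(-1) * features).sum(dim=1)  # weighted context
            return context, alpha

    # Usage with hypothetical sizes: one decoding step for a batch of 2 images.
    attn = SoftVisualAttention(feat_dim=512, hidden_dim=256, attn_dim=128)
    feats = torch.randn(2, 196, 512)   # CNN encoder output: 196 regions per image
    h = torch.randn(2, 256)            # RNN decoder hidden state
    context, alpha = attn(feats, h)    # context feeds the RNN to predict the next word

    In the dissertation's extension, the relationship-detector output would be fused with this visual context before word prediction; how exactly the two signals are combined is not specified in the abstract.
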
    URI
    http://drsr.daiict.ac.in//handle/123456789/1000
    Collections
    • M Tech Dissertations [923]
