Please use this identifier to cite or link to this item: http://drsr.daiict.ac.in//handle/123456789/1098
Title: Cross-domain Person Re-identification
Authors: Bhilare, Shruti
Hati, Avik
Shah, Raj H.
Keywords: Reidentification
Computer vision community
Low-resolution images
Omni-scale Network
Issue Date: 2022
Publisher: Dhirubhai Ambani Institute of Information and Communication Technology
Citation: Shah, Raj H. (2022). Cross-domain Person Re-identification. Dhirubhai Ambani Institute of Information and Communication Technology. viii, 33 p. (Acc. # T01018).
Abstract: The problem of person re-identification has received considerable attention in the computer vision community. The task is to recognize the same individual across images with different backgrounds captured by multiple cameras. It involves complexities such as different people wearing similar outfits or the same person wearing different outfits, varying illumination, low-resolution images, inaccurate bounding boxes, and occlusion. Progress in the field has been driven by the growing demand for automated surveillance systems. The problem is often formulated as a retrieval task: given a query image and a gallery of images taken from different cameras, possibly at different locations, the system aims to find the gallery images containing the same person. In this thesis, we use ResNet50 and the Omni-scale Network (OSNet) for feature extraction and different loss functions such as softmax loss, triplet loss, quadruplet loss, and KL-divergence to train and evaluate models for cross-domain person re-identification (re-id). We observe that a multi-source training strategy boosts the performance of such cross-domain re-id systems. We also show that re-ranking significantly improves the performance of both same-domain and cross-domain person re-id.
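To make the loss formulation concrete, the following is a minimal PyTorch sketch of the triplet loss mentioned in the abstract. The margin, embedding dimension, and batch size are illustrative assumptions, not values taken from the thesis.

    import torch
    import torch.nn as nn

    # Triplet loss pulls an anchor embedding toward a positive (same identity)
    # and pushes it away from a negative (different identity) by a margin.
    criterion = nn.TripletMarginLoss(margin=0.3)  # margin is an assumed value

    # Illustrative random embeddings; in a re-id pipeline these would come
    # from a backbone such as ResNet50 or OSNet.
    anchor = torch.randn(32, 512, requires_grad=True)  # query embeddings
    positive = torch.randn(32, 512)  # same-identity gallery embeddings
    negative = torch.randn(32, 512)  # different-identity gallery embeddings

    loss = criterion(anchor, positive, negative)
    loss.backward()  # gradients flow back toward the embedding network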
URI: http://drsr.daiict.ac.in//handle/123456789/1098
Appears in Collections:M Tech Dissertations

Files in This Item:
File           Size      Format
202011028.pdf  10.75 MB  Adobe PDF

