Video captioning
Abstract
In recent years, models for the video captioning task have improved substantially. Despite this progress, the task is still constrained by hardware limits. Video captioning models take a sequence of images and a caption as input, which makes video captioning one of the most memory- and computation-intensive tasks. In this project work, we study how many frames from a video are actually required to reach the desired performance. We also propose embedding a video summarization model with the captioning model to select frames dynamically, which reduces the number of required frames without losing the spatio-temporal information of the video.
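The dynamic frame selection described above can be sketched as a simple top-k filter: a summarization model scores each frame's importance, and only the k highest-scoring frames are kept, in their original temporal order. The scorer and the toy scores below are illustrative assumptions, not the dissertation's actual model:

```python
def select_frames(num_frames, scores, k):
    """Keep the k highest-scoring frames, preserving temporal order.

    `scores` stands in for the per-frame importance values a video
    summarization model would produce (hypothetical here).
    """
    if k >= num_frames:
        return list(range(num_frames))
    # Indices of the k largest scores...
    top = sorted(range(num_frames), key=lambda i: scores[i], reverse=True)[:k]
    # ...restored to temporal order so the clip's dynamics are preserved.
    return sorted(top)

# Toy example: 8 frames with made-up importance scores.
scores = [0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3, 0.6]
keep = select_frames(len(scores), scores, k=4)
# keep == [1, 3, 5, 7]
```

Only the selected frames would then be fed to the captioning model, cutting memory and compute roughly in proportion to the frames dropped.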
Collections
- M Tech Dissertations [923]