Full metadata record
DC Field | Value | Language
dc.contributor.author | Pal, Amisha | -
dc.contributor.author | Khan, Nafis Uddin [Guided by] | -
dc.contributor.author | Gupta, Pradeep Kumar [Guided by] | -
dc.description | Enrolment No. 191454 | en_US
dc.description.abstract | In this project we create and examine an image caption generator built using a CNN and an LSTM. Image captioning is the process of generating a human-readable textual description, or sentence, that explains the content of an input image. It has become a popular topic because of its many applications, such as helping blind or visually impaired people access and understand the images they encounter on the internet. Software developers also face scenarios where an image alone is not sufficient to build more interactive, intelligent, and accessible software; extra content, clarification, or alternative text is needed in these situations to provide a more accessible experience. Since a great number of images on the internet remain undescribed, and describing them manually is impossible, we can draw on deep learning, image processing, and natural language processing to give a computer the power to describe images on its own. In the proposed model we build a two-stage model using deep neural algorithms (Convolutional Neural Networks and Long Short-Term Memory). | en_US
dc.publisher | Jaypee University of Information Technology, Solan, H.P. | en_US
dc.subject | Image captioning | en_US
dc.subject | Convolutional neural network | en_US
dc.title | Image Captioning using CNN and LSTM | en_US
dc.type | Project Report | en_US
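The abstract describes a two-stage pipeline: a CNN encoder that reduces the image to a feature vector, and an LSTM decoder that emits the caption one word at a time. The sketch below is purely illustrative and is not the report's actual code; all names (`cnn_encode`, `lstm_step`, `generate_caption`), the toy vocabulary, and the random parameters are assumptions introduced to show the shape of such a pipeline with greedy decoding.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_encode(image, W_enc):
    """Stand-in for a CNN encoder: global-average-pool the image, then project."""
    pooled = image.mean(axis=(0, 1))       # (channels,)
    return np.tanh(W_enc @ pooled)         # (feat_dim,)

def lstm_step(x, h, c, P):
    """One LSTM cell step; gate parameters are packed into P['W'] and P['b']."""
    z = P["W"] @ np.concatenate([x, h]) + P["b"]
    i, f, o, g = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sig(f) * c + sig(i) * np.tanh(g)   # update cell state
    h = sig(o) * np.tanh(c)                # new hidden state
    return h, c

def generate_caption(image, params, vocab, max_len=5):
    """Seed the LSTM state with the image features, then decode greedily."""
    feat = cnn_encode(image, params["W_enc"])
    h, c = feat.copy(), np.zeros_like(feat)
    x = params["E"][0]                     # embedding of the <start> token
    words = []
    for _ in range(max_len):
        h, c = lstm_step(x, h, c, params["lstm"])
        idx = int(np.argmax(params["W_out"] @ h))  # greedy word choice
        words.append(vocab[idx])
        x = params["E"][idx]               # feed the chosen word back in
    return words

# Toy setup: 8-dim features/hidden state, 3-channel image, 6-word vocabulary.
vocab = ["<start>", "a", "dog", "runs", "on", "grass"]
params = {
    "W_enc": rng.standard_normal((8, 3)),
    "lstm": {"W": rng.standard_normal((32, 16)), "b": np.zeros(32)},
    "E": rng.standard_normal((6, 8)),      # word embeddings
    "W_out": rng.standard_normal((6, 8)),  # hidden state -> vocabulary logits
}
image = rng.standard_normal((4, 4, 3))     # toy "image"
caption = generate_caption(image, params, vocab)
print(caption)
```

With random, untrained weights the output is gibberish drawn from the toy vocabulary; in the report's setting the CNN would be a trained convolutional network and the weights would be learned on an image-caption dataset.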
Appears in Collections: B.Tech. Project Reports

Files in This Item:
File | Description | Size | Format
Image Captioning using CNN and LSTM.pdf | | 3.03 MB | Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.