A NOVEL HYBRID MODEL FOR AUTOMATIC IMAGE CAPTIONING

Mariam Abd Elmohsen Mohamed Ramadan Mohamed Hafez

Abstract


Automatic image captioning requires linking an image's visual features to the words that describe it. A set of deep learning models was tested on various datasets to solve this problem. The work was divided into two phases. The first phase extracted features and determined classes: several models were evaluated on different datasets (ImageNet, MS-COCO), and the combination of the AlexNet network with a multi-class SVM performed best, reaching an accuracy of 84.25%. The second phase generated captions from the features and classes produced by the first phase: among the models tested, the LSTM proved the best. Combining the two phases yielded a hybrid model of AlexNet, multi-class SVM, and LSTM, with an accuracy of 88.4%. Tested on the complete MS-COCO dataset, the model reached an accuracy of 90.7% and was shown to reduce image-processing time while maintaining high accuracy compared to previous models.
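The two-phase pipeline described above (feature extraction, class prediction, then sequential caption decoding) can be sketched in miniature. This is a minimal NumPy illustration of the data flow only, not the thesis implementation: the feature extractor is a random-projection stub standing in for AlexNet's 4096-d features, the classifier is a plain linear argmax standing in for the multi-class SVM, and only a single LSTM cell step of the caption decoder is shown. All dimensions and parameter names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Phase 1a (stub): stand-in for AlexNet feature extraction.
# A real system would use the CNN's penultimate-layer activations.
def extract_features(image, dim=16):
    W = rng.standard_normal((image.size, dim))
    return image.reshape(-1) @ W

# Phase 1b (stub): linear one-vs-rest scoring as a stand-in for the
# multi-class SVM -- score each class and take the argmax.
def classify(features, W_cls, b_cls):
    return int(np.argmax(W_cls @ features + b_cls))

# Phase 2 (stub): one LSTM cell step. The real decoder runs such steps
# over time, conditioned on the image features and predicted class,
# emitting one caption word per step.
def lstm_step(x, h, c, params):
    H = h.size
    z = params["W"] @ np.concatenate([x, h]) + params["b"]
    i, f, o, g = (sigmoid(z[:H]), sigmoid(z[H:2 * H]),
                  sigmoid(z[2 * H:3 * H]), np.tanh(z[3 * H:]))
    c_new = f * c + i * g          # updated cell state
    h_new = o * np.tanh(c_new)     # updated hidden state
    return h_new, c_new

# Toy end-to-end pass: image -> features -> class -> one decoder step.
img = rng.standard_normal((8, 8))
feats = extract_features(img)                       # 16-d feature vector
W_cls = rng.standard_normal((3, 16))
b_cls = rng.standard_normal(3)
cls = classify(feats, W_cls, b_cls)                 # class index in {0, 1, 2}
params = {"W": rng.standard_normal((32, 24)), "b": np.zeros(32)}
h, c = lstm_step(feats, np.zeros(8), np.zeros(8), params)
```

The sketch shows only how the stages plug together; training the SVM margins and the LSTM weights is the substance of the thesis and is omitted here.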


Other data

Title A NOVEL HYBRID MODEL FOR AUTOMATIC IMAGE CAPTIONING
Other Titles نموذج هجين جديد للتوضيح التلقائى للصور (A Novel Hybrid Model for Automatic Image Captioning)
Authors Mariam Abd Elmohsen Mohamed Ramadan Mohamed Hafez
Issue Date 2021

Attached Files

File: BB9583.pdf (740.03 kB, Adobe PDF)


Items in Ain Shams Scholar are protected by copyright, with all rights reserved, unless otherwise indicated.