Author: HEJAZI, HANI DAOUD
Date accessioned: 2022-04-26
Date available: 2022-04-26
Date issued: 2022-02
Identifier: 20181482
URI: https://bspace.buid.ac.ae/handle/1234/1995

Abstract: Image captioning has been a major research concern over the last decade, with most efforts aimed at English captioning. Because little work has been done for Arabic, relying on translation as an alternative to generating Arabic captions directly accumulates errors from both translation and caption prediction. When working with Arabic datasets, preprocessing is crucial, and handling Arabic morphological features such as nunation requires additional steps. We tested 32 combinations of variables that affect caption generation, covering preprocessing, deep learning techniques (LSTM and GRU), dropout, and feature extraction (Inception V3 and VGG16). Our results on the only publicly available Arabic dataset outperform the previous best, with BLEU-1 = 36.5, BLEU-2 = 21.4, BLEU-3 = 12, and BLEU-4 = 6.6. This study demonstrates that Arabic preprocessing and VGG16 image feature extraction enhance Arabic caption quality, while using dropout, or LSTM instead of GRU, made no measurable difference.

Language: en
Keywords: NLP; LSTM; VGG16; Inception V3; deep learning; Arabic image captioning; Arabic text preprocessing; Arabic morphological features; deep learning techniques; Arabic dataset
Title: Arabic Image Captioning (AIC): Utilizing Deep Learning and Main Factors Comparison and Prioritization
Type: Dissertation
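The BLEU-1 through BLEU-4 scores reported in the abstract can be reproduced with NLTK's `corpus_bleu`, using the standard cumulative n-gram weights. This is a minimal sketch, not the dissertation's evaluation code: the tokenized captions below are placeholder examples, whereas the study scored model predictions against reference Arabic captions.

```python
# Sketch of corpus-level BLEU-1..4 scoring with NLTK.
# References: for each image, a list of reference captions (token lists).
# Hypotheses: one predicted caption (token list) per image.
from nltk.translate.bleu_score import corpus_bleu

references = [[["a", "man", "rides", "a", "horse"]]]  # placeholder tokens
hypotheses = [["a", "man", "rides", "a", "horse"]]

# Cumulative weights: BLEU-n averages the log precisions of 1..n-grams.
weights_by_n = {
    1: (1.0, 0, 0, 0),
    2: (0.5, 0.5, 0, 0),
    3: (1 / 3, 1 / 3, 1 / 3, 0),
    4: (0.25, 0.25, 0.25, 0.25),
}
for n, w in weights_by_n.items():
    score = corpus_bleu(references, hypotheses, weights=w)
    print(f"BLEU-{n}: {score:.3f}")
```

With an identical hypothesis and reference, every n-gram precision is 1.0 and the brevity penalty is 1, so all four scores come out as 1.000; real caption predictions produce the fractional values reported above.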