Comparative Study of Deep Learning Models for Unimodal & Multimodal Disaster Data for Effective Disaster Management

Date
2021-07
Publisher
The British University in Dubai (BUiD)
Abstract
Multimodal text and image data in social media posts hold valuable information that can be used during crisis events, including requests for help, rescue efforts, warnings, reports of infrastructure damage, missing people, injured or dead individuals, volunteers, and donations. Many studies of how useful social media data can be to emergency services focus only on the text modality, a single classification task, and small, home-grown datasets. In this study, a multimodal deep learning system for automatic classification of disaster tweets was developed. Two classification tasks were tackled: informativeness and humanitarian category. An extensive comparison was conducted between unimodal text-only, unimodal image-only, and multimodal deep learning models across three representative disaster datasets (CrisisMMD, CrisisNLP, and CrisisLex26). Convolutional neural networks were used to define the deep learning architectures. Experiments across the multiple settings and datasets show that multimodal models outperform their unimodal counterparts. It was also found that mapping between the diverse humanitarian categories and consolidating smaller datasets with larger ones significantly improves model performance compared to training on the individual datasets. The consolidated dataset can serve as a new baseline multimodal dataset for further research.
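The record does not include the thesis code; as a rough illustration of the kind of architecture the abstract describes (a CNN text branch and a CNN image branch fused for classification), the following is a minimal PyTorch sketch. All layer sizes, class names, and the binary informativeness setup are illustrative assumptions, not the author's implementation.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """1D-CNN text branch over word embeddings (hyperparameters are illustrative)."""
    def __init__(self, vocab_size=20000, embed_dim=128, num_filters=100, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.out_dim = num_filters * len(kernel_sizes)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # Max-pool each convolution's feature map over the sequence dimension.
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return torch.cat(pooled, dim=1)            # (batch, out_dim)

class ImageCNN(nn.Module):
    """Small 2D-CNN image branch; a pretrained backbone could be substituted."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, out_dim)
        self.out_dim = out_dim

    def forward(self, images):                     # (batch, 3, H, W)
        x = self.features(images).flatten(1)       # (batch, 64)
        return torch.relu(self.fc(x))

class MultimodalClassifier(nn.Module):
    """Fusion by concatenating text and image features before a linear classifier."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.text = TextCNN()
        self.image = ImageCNN()
        self.classifier = nn.Linear(self.text.out_dim + self.image.out_dim, num_classes)

    def forward(self, token_ids, images):
        fused = torch.cat([self.text(token_ids), self.image(images)], dim=1)
        return self.classifier(fused)              # logits over classes

# Example: binary "informative vs. not informative" task on dummy inputs.
model = MultimodalClassifier(num_classes=2)
logits = model(torch.randint(0, 20000, (4, 30)), torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```

The same fused architecture would apply to the humanitarian-category task by setting `num_classes` to the number of mapped categories.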
Keywords
deep learning models, disaster management, multimodal data