Document Type



Doctor of Philosophy (PhD)


Computer Science

First Advisor's Name

Shu-Ching Chen

First Advisor's Committee Title

Committee chair

Second Advisor's Name

Sitharama S. Iyengar

Second Advisor's Committee Title

Committee member

Third Advisor's Name

Jainendra K. Navlakha

Third Advisor's Committee Title

Committee member

Fourth Advisor's Name

Xudong He

Fourth Advisor's Committee Title

Committee member

Fifth Advisor's Name

Keqi Zhang

Fifth Advisor's Committee Title

Committee member


Keywords

Data Analytics, Data Science, Multimedia Data Mining, Machine Learning, Multimodal Deep Learning, Transfer Learning, Genetic Algorithms, Evolutionary Algorithms, Neural Networks

Date of Defense



Abstract

Advances in technology have led to the rapid accumulation of a zettabyte of "new" data every two years. This huge amount of data has a powerful impact on various areas of science and engineering and generates enormous research opportunities, which calls for the design and development of advanced approaches in data analytics. Given such demands, data science has become an emerging hot topic in both industry and academia, with applications ranging from basic business solutions, technological innovations, and multidisciplinary research to political decisions, urban planning, and policymaking. Within the scope of this dissertation, a multimodal data analytics and fusion framework is proposed for data-driven knowledge discovery and cross-modality semantic concept detection. The proposed framework can explore useful knowledge hidden in different formats of data and incorporate representation learning from data in multiple modalities, especially for disaster information management. First, a Feature Affinity-based Multiple Correspondence Analysis (FA-MCA) method is presented to analyze the correlations among low-level features from different feature sets, and an MCA-based Neural Network (MCA-NN) is proposed to capture the high-level features from individual FA-MCA models and seamlessly integrate the semantic data representations for video concept detection. Next, a genetic algorithm-based approach is presented for deep neural network selection. Furthermore, the improved genetic algorithm is integrated with deep neural networks to generate populations for producing optimal deep representation learning models. Then, a multimodal deep representation learning framework is proposed to incorporate the semantic representations from data in multiple modalities efficiently. Finally, fusion strategies are applied to accommodate multiple modalities. In this framework, cross-modal mapping strategies are also proposed to organize the features in a better structure to improve the overall performance.
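To make the genetic algorithm-based network selection idea in the abstract concrete, the sketch below shows a minimal, generic genetic algorithm over a toy architecture search space. It is an illustration only, not the dissertation's actual method: the search space (number of layers and units per layer), the selection/crossover/mutation operators, and the `fitness` function are all hypothetical stand-ins; a real system would train each candidate network and use its validation accuracy as the fitness.

```python
import random

random.seed(0)

# Hypothetical, simplified search space: each genome encodes
# (number of layers, units per layer) for a candidate network.
LAYER_CHOICES = [1, 2, 3, 4]
UNIT_CHOICES = [16, 32, 64, 128]

def random_genome():
    return (random.choice(LAYER_CHOICES), random.choice(UNIT_CHOICES))

def fitness(genome):
    # Stand-in for validation accuracy: a real system would train the
    # candidate architecture and evaluate it on held-out data. This toy
    # surrogate peaks at 3 layers of 64 units.
    layers, units = genome
    return -(layers - 3) ** 2 - ((units - 64) / 64) ** 2

def crossover(a, b):
    # One-point crossover: layers from one parent, units from the other.
    return (a[0], b[1])

def mutate(genome, rate=0.2):
    # Independently resample each gene with the given probability.
    layers, units = genome
    if random.random() < rate:
        layers = random.choice(LAYER_CHOICES)
    if random.random() < rate:
        units = random.choice(UNIT_CHOICES)
    return (layers, units)

def evolve(pop_size=20, generations=30):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        # Truncation selection: the top half survives (elitist), and the
        # bottom half is replaced by mutated offspring of random survivors.
        parents = population[: pop_size // 2]
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return max(population, key=fitness)

best_genome = evolve()
print("best architecture (layers, units):", best_genome)
```

Because selection is elitist, the best genome found so far is never lost between generations; the main cost in a real neural-architecture setting is that every fitness evaluation requires training a network, which is why population sizes and generation counts are kept small in practice.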







Rights Statement


In Copyright. URI:
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).