Information Fusion for Multi-Source Data Classification






Multi-source data, whether from different sensors or from disparate features extracted from the same sensor, are often valuable for data analysis because of their potential to provide complementary information. Effective fusion of information from such multi-source data is critical for enhanced and robust interpretation of the underlying classification problem. Nevertheless, multi-source data also pose unique challenges for data processing, e.g., high-dimensional features, lack of a compact representation, and an insufficient quantity of labeled data. To make the best use of multi-source data and to address the above challenges, in this research we develop and validate data fusion algorithms on multiple datasets in two active research areas: remote sensing and brain machine interface (BMI).

We develop a mixture-of-kernels approach for data fusion and demonstrate its efficacy at fusing multi-source data in the kernel space. In the proposed approach, each source of data is represented by a dedicated kernel -- one can then learn a classifier (or an ``optimal'' feature subspace) by optimizing the kernel parameters for maximum discriminative potential. A directly related benefit is that this learning framework provides a natural and automated mechanism to learn the weight distribution in the weighted mixture of kernels, which is strongly indicative of the strengths and weaknesses of the various sources in the underlying multi-source data analysis problem. We illustrate this property and apply it to infer the relative importance of different sources of information in a BMI application. Additionally, to reduce the labor of labeling a large quantity of samples in real-world remote sensing applications, an ensemble-based multiple kernel active learning framework is proposed to effectively select important unlabeled samples from multi-source data for classification. We also propose a multi-source feature extraction method based on a composite kernel mapping, which projects the multi-source data onto a lower-dimensional subspace for effective feature fusion.
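The per-source kernel and learned-weight idea can be illustrated with a minimal sketch. The weight-learning rule below uses a simple kernel-target-alignment heuristic as a stand-in; the actual optimization in this work is not specified in the abstract, and the function names (`rbf_kernel`, `alignment`, `mixture_of_kernels`) are hypothetical.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of an RBF kernel over the rows of X
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def alignment(K, y):
    # Alignment between K and the "ideal" label kernel y y^T:
    # high alignment means the kernel separates the classes well.
    Ky = np.outer(y, y)
    return np.sum(K * Ky) / (np.linalg.norm(K) * np.linalg.norm(Ky))

def mixture_of_kernels(sources, y):
    # One dedicated kernel per source; mixture weights are the
    # normalized (non-negative) alignments, so a weight directly
    # reflects that source's discriminative strength.
    Ks = [rbf_kernel(X) for X in sources]
    a = np.array([max(alignment(K, y), 0.0) for K in Ks])
    w = a / a.sum()
    K = sum(wi * Ki for wi, Ki in zip(w, Ks))
    return K, w
```

The resulting weight vector `w` is what one would inspect to rank source importance, e.g., to compare channels or feature families in the BMI application.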

Finally, to represent multi-source data in a compact and robust manner, we propose a joint sparse representation model with adaptive locality weights for classification. By adapting the penalty on individual atoms in the dictionary, we show that one can achieve better signal representation and reduce estimation errors. Further, we develop a kernel variant of the proposed fusion framework that is conceptually aligned with the mixture-of-kernels approach developed previously.
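The per-atom adaptive penalty can be sketched as a locality-weighted sparse coding step: atoms far from the test sample receive a larger l1 penalty, so the code concentrates on nearby atoms. This is a generic ISTA-based sketch under that assumption, not the dissertation's exact model; the function name and parameters are hypothetical.

```python
import numpy as np

def locality_weighted_sparse_code(D, x, lam=0.1, n_iter=500):
    # D: (d, n_atoms) dictionary with unit-norm columns; x: (d,) sample.
    # Locality weights: each atom's l1 penalty scales with its
    # distance to x, so distant atoms are penalized more heavily.
    w = np.linalg.norm(D - x[:, None], axis=0)
    w = w / (w.max() + 1e-12)                  # scale weights to [0, 1]
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz const. of gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):                    # ISTA with per-atom thresholds
        g = D.T @ (D @ a - x)                  # gradient of 0.5*||Da - x||^2
        z = a - g / L
        t = lam * w / L                        # adaptive soft-threshold levels
        a = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    return a
```

For classification, one would typically compare class-wise reconstruction residuals of `x` using the coefficients restricted to each class's atoms.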



Data fusion, Hyperspectral imaging, Classification, Brain computer interface (BCI), Electroencephalography (EEG)


Portions of this document appear in: Zhang, Yuhang, Hsiuhan Lexie Yang, Saurabh Prasad, Edoardo Pasolli, Jinha Jung, and Melba Crawford. "Ensemble multiple kernel active learning for classification of multisource remote sensing data." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 8, no. 2 (2015): 845-858; and in: Zhang, Yuhang, and Saurabh Prasad. "Locality preserving composite kernel feature extraction for multi-source geospatial image analysis." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 8, no. 3 (2015): 1385-1392.