Recently, Synthetic Aperture Radar (SAR) data have attracted considerable interest for remote sensing applications, especially land cover classification. SAR imaging is independent of solar illumination and weather conditions; it is unaffected by rain, fog, hail, smoke, and, most importantly, clouds. It can even penetrate some of the Earth's surface materials and return information about subsurface features. However, SAR images are difficult to interpret because of their special characteristics: the geometry and spectral range of SAR differ from those of optical imagery, and the presence of speckle makes SAR images visually hard to interpret. Consequently, optical data can be fused with SAR data to improve land cover classification. In addition, Light Detection and Ranging (LiDAR) data provide accurate height information for objects on the Earth's surface, which has made LiDAR increasingly popular in terrain and land surveying. Given the limitations and benefits of these three remote-sensing sensors, their fusion can improve land-cover classification. For this purpose, data fusion techniques are required. In recent years, significant attention has focused on multisensor data fusion for remote sensing applications and, more specifically, for land cover mapping. Data fusion techniques combine information from multiple sources, offering potential advantages over a single sensor in terms of classification accuracy; in most cases, data fusion has provided higher accuracy than single sensors. Furthermore, fusing sensors with inherent differences, such as SAR, optical, and LiDAR data, requires higher-level fusion strategies. Compared with pixel- and feature-level fusion, decision-level fusion offers the ability to combine different data types from different sensors, independence from errors in the data registration step, and accurate fusion results.
This paper presents a method based on the simultaneous use of radar, multispectral, and LiDAR data for the classification of urban areas. First, different feature extraction strategies are applied to all three data sets; then a feature selection method based on Ant Colony Optimization (ACO) is applied to select an optimized feature subset. Three classifiers, Maximum Likelihood (ML), Support Vector Machine (SVM), and K-Nearest Neighbor (KNN), are applied to the optimized feature space. Finally, a decision fusion method based on Weighted Majority Voting (WMV) produces the final decision. A co-registered TerraSAR-X, WorldView-2, and LiDAR data set from San Francisco, USA, was available to examine the effectiveness of the proposed method. The results show that the simultaneous use of radar, optical, and LiDAR data improves the accuracy of some classes more than others, and that the fusion results improve on the classification of each data set individually to varying degrees. Overall, the results show that the use of multisensor imagery is worthwhile and that classification accuracy is significantly increased by such data sets. Several practical issues remain for future studies. Note that only a decision ensemble system was explored in this study. Additional research is needed in areas such as more powerful feature spaces for each data set, further processing of the LiDAR DSM, and novel fusion strategies such as rule-based and object-based approaches.
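The decision-level fusion step described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes each classifier (ML, SVM, KNN) outputs an integer class label per pixel, and that each classifier's vote is weighted by a scalar (e.g., its overall accuracy); the label arrays and weight values below are hypothetical.

```python
import numpy as np

def weighted_majority_vote(predictions, weights):
    """Fuse per-classifier label maps by weighted majority voting.

    predictions: (n_classifiers, n_samples) array of integer class labels
    weights:     (n_classifiers,) array, e.g. each classifier's overall accuracy
    Returns the fused (n_samples,) label array.
    """
    predictions = np.asarray(predictions)
    weights = np.asarray(weights, dtype=float)
    n_classes = predictions.max() + 1
    n_samples = predictions.shape[1]
    # Accumulate each classifier's weight into the bin of the class it voted for,
    # then pick the class with the largest weighted vote per sample.
    scores = np.zeros((n_samples, n_classes))
    for clf_preds, w in zip(predictions, weights):
        scores[np.arange(n_samples), clf_preds] += w
    return scores.argmax(axis=1)

# Hypothetical example: three classifiers voting on four pixels.
preds = [[0, 1, 2, 1],   # ML labels (illustrative)
         [0, 2, 2, 1],   # SVM labels (illustrative)
         [1, 2, 2, 0]]   # KNN labels (illustrative)
weights = [0.80, 0.90, 0.85]  # assumed per-classifier accuracies
fused = weighted_majority_vote(preds, weights)
# fused -> array([0, 2, 2, 1])
```

Weighting votes by classifier accuracy lets a stronger classifier break ties in its favor; with equal weights the scheme reduces to plain majority voting.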