Abstract:
Owing to significant geometric differences and nonlinear intensity differences, multimodal image matching remains a challenging problem. To address this issue, a novel matching method based on local self-similarity structural features is proposed for multimodal images. First, a nonlinear diffusion filter is applied to smooth the images, phase congruency (PC) maps are computed, and feature points are extracted from the PC maps with the Harris detector. Next, the PC model is extended to build a PC orientation map, which is combined with the theory of self-similarity to design a local self-similarity structural feature descriptor for multimodal images. Finally, the Euclidean distance is used as the matching measure to identify corresponding points. Experiments on a variety of real multimodal image pairs demonstrate that the proposed method outperforms traditional methods in both the number of correct matches and registration precision.
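The sketch below illustrates, under stated assumptions, three generic building blocks of the pipeline summarized above: nonlinear diffusion smoothing (shown here as a standard Perona-Malik filter), Harris keypoint extraction on a feature map such as a PC map, and mutual nearest-neighbour matching under Euclidean distance. The PC computation and the proposed self-similarity structural descriptor are specific to the paper and are not reproduced here; all function names and parameter values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of generic pipeline steps; assumptions, not the paper's code.
import numpy as np
import cv2


def nonlinear_diffusion(img, n_iter=10, kappa=15.0, step=0.2):
    """Perona-Malik diffusion: smooths homogeneous regions while preserving edges."""
    u = img.astype(np.float32)
    for _ in range(n_iter):
        # Finite differences toward the four neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Exponential edge-stopping (conduction) coefficients.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u = u + step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u


def harris_keypoints(feature_map, max_pts=500, block=3, ksize=3, k=0.04):
    """Harris response on a feature map (e.g. a PC map); returns (row, col) points."""
    resp = cv2.cornerHarris(feature_map.astype(np.float32), block, ksize, k)
    idx = np.argsort(resp.ravel())[::-1][:max_pts]  # strongest responses first
    return np.column_stack(np.unravel_index(idx, resp.shape))


def match_euclidean(desc1, desc2):
    """Mutual nearest-neighbour matching of descriptors under Euclidean distance."""
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nn12 = d.argmin(axis=1)  # best match in set 2 for each descriptor in set 1
    nn21 = d.argmin(axis=0)  # best match in set 1 for each descriptor in set 2
    keep = nn21[nn12] == np.arange(len(desc1))  # keep only mutually consistent pairs
    return np.column_stack([np.nonzero(keep)[0], nn12[keep]])
```

In practice, the descriptors passed to the matching step would be the local self-similarity structural descriptors built from the PC orientation map, as described in the abstract.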