
Registration of Multisensor Images through a Conditional Generative Adversarial Network and a Correlation-Type Similarity Measure

Maggiolo L.; Solarna D.; Moser G.; Serpico S. B.
2022-01-01

Abstract

The automatic registration of multisensor remote sensing images is a highly challenging task due to the inherently different physical, statistical, and textural characteristics of the input data. Information-theoretic measures are often used because they favor the comparison of local intensity distributions in the images. In this paper, a novel method based on the combination of a deep learning architecture and a correlation-type area-based functional is proposed for the registration of a multisensor pair of images, consisting of an optical image and a synthetic aperture radar (SAR) image. The method makes use of a conditional generative adversarial network (cGAN) to address image-to-image translation across the optical and SAR data sources. Then, once the optical and SAR data are brought to a common domain, an area-based ℓ2 similarity measure is used together with the COBYLA constrained maximization algorithm for registration purposes. While correlation-type functionals are usually ineffective for multisensor registration, exploiting the image-to-image translation capabilities of cGAN architectures moves the complexity of the comparison to the domain adaptation step, thus enabling the use of a simple ℓ2 similarity measure, favoring high computational efficiency, and opening up the possibility of processing a large amount of data at runtime. Experiments with multispectral and panchromatic optical data combined with SAR images suggest the effectiveness of this strategy and the capability of the proposed method to achieve more accurate registration than state-of-the-art approaches.
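The optimization step described in the abstract — maximizing an area-based ℓ2 similarity between the translated image and the SAR image via COBYLA — can be sketched for the simplified case of a pure translation. This is a minimal illustration, not the paper's implementation: the function names, the Gaussian test images, the shift-only transform model, and the search-window bound are all assumptions; SciPy's COBYLA is used with inequality constraints on the shift magnitude, and maximizing the (negative-distance) similarity is carried out by minimizing the ℓ2 distance.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import shift as nd_shift


def l2_similarity(params, fixed, moving):
    """Negative L2 distance between 'fixed' and 'moving' warped by (dy, dx).

    Higher is better; a perfect alignment gives 0.
    """
    dy, dx = params
    warped = nd_shift(moving, (dy, dx), order=1, mode='nearest')
    return -np.sum((fixed - warped) ** 2)


def register_translation(fixed, moving, max_shift=10.0):
    """Estimate the (dy, dx) shift aligning 'moving' to 'fixed'.

    Maximizes the L2 similarity (i.e., minimizes its negative) with COBYLA,
    constraining each shift component to a +/- max_shift search window.
    """
    constraints = [
        {'type': 'ineq', 'fun': lambda p: max_shift - abs(p[0])},
        {'type': 'ineq', 'fun': lambda p: max_shift - abs(p[1])},
    ]
    result = minimize(
        lambda p: -l2_similarity(p, fixed, moving),
        x0=np.zeros(2),
        method='COBYLA',
        constraints=constraints,
    )
    return result.x
```

In the full method, `moving` would be the cGAN-translated image brought into the SAR domain (or vice versa), and the transform would include more parameters than a pure shift; the derivative-free nature of COBYLA is what makes it convenient here, since the warped-image similarity has no closed-form gradient in the transform parameters.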
Files in this product:

22.rs.luca.compressed.pdf
  Access: open access
  Description: Journal article
  Type: Publisher's version
  Size: 7.64 MB
  Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1093202
Citations
  • Scopus: 5
  • Web of Science: 5