Despite the advantages of all-weather, all-day high-resolution imaging, synthetic aperture radar (SAR) images are far less viewed and used by the general public because human vision is not adapted to microwave scattering phenomena. However, expert interpreters can be trained by comparing SAR and optical images side by side to learn the mapping rules from SAR to optical. This paper attempts to develop machine intelligence that is trainable with large volumes of co-registered SAR and optical images to translate SAR images into their optical counterparts for assisted SAR image interpretation. Reciprocal SAR-optical image translation is a challenging task because it is a raw-data translation between two physically very different sensing modalities. Inspired by recent progress in image translation studies in computer vision, this paper tackles the problem of SAR-optical reciprocal translation with an adversarial network scheme in which cascaded residual connections and a hybrid L1-GAN loss are employed. The network is trained and tested on both spaceborne Gaofen-3 (GF-3) and airborne Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) images. Results are presented for datasets of different resolutions and polarizations and compared with other state-of-the-art methods. The Fréchet inception distance (FID) is used to quantitatively evaluate the translation performance. The possibility of unsupervised learning with unpaired/unregistered SAR and optical images is also explored. Results show that the proposed translation network works well in many scenarios and could potentially be used for assisted SAR interpretation.
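The abstract names cascaded residual connections but does not describe the generator in detail. A minimal sketch of a residual block that could be cascaded inside such a generator is given below, in PyTorch; the channel width (64) and the depth of the cascade (9) are illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity skip connection: output = x + F(x)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The skip connection lets each block learn only a residual correction,
        # which eases optimization when many blocks are cascaded.
        return x + self.body(x)

# Hypothetical cascade of residual blocks (depth and width are assumptions):
cascade = nn.Sequential(*[ResidualBlock(64) for _ in range(9)])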
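The abstract also does not spell out the hybrid L1-GAN objective. A plausible formulation, assuming the common pix2pix-style conditional GAN objective with a weighting hyperparameter \(\lambda\) (not stated in the abstract), is

\[
G^{*} = \arg\min_{G}\max_{D}\; \mathcal{L}_{\mathrm{GAN}}(G, D) + \lambda\, \mathcal{L}_{L1}(G),
\]
\[
\mathcal{L}_{\mathrm{GAN}}(G, D) = \mathbb{E}_{x,y}\big[\log D(x, y)\big] + \mathbb{E}_{x}\big[\log\big(1 - D(x, G(x))\big)\big],
\qquad
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}\big[\lVert y - G(x) \rVert_{1}\big],
\]

where \(x\) is the input SAR image and \(y\) the co-registered optical image (and symmetrically for the optical-to-SAR direction). The L1 term rewards pixel-level fidelity to the registered target, while the adversarial term pushes outputs toward the target image distribution.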
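For reference, FID is a standard metric that compares the Gaussian statistics of Inception-v3 features extracted from real and generated images:

\[
\mathrm{FID} = \lVert \mu_{r} - \mu_{g} \rVert_{2}^{2} + \operatorname{Tr}\!\big(\Sigma_{r} + \Sigma_{g} - 2\,(\Sigma_{r}\Sigma_{g})^{1/2}\big),
\]

where \((\mu_{r}, \Sigma_{r})\) and \((\mu_{g}, \Sigma_{g})\) are the mean and covariance of the features of the real and translated images, respectively; lower values indicate that the translated images are statistically closer to the real ones.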