
MAGO approach for semantic segmentation: the case study of UAVid benchmark dataset

S. Gagliolo;D. Sguerso
2021

Abstract

This work presents a semantic segmentation strategy implemented in the workflow of the tool MAGO (standing for “Adaptive Mesh for Orthophoto Generation”), exploiting both the 3D geometry and the colour information derived from the point cloud of the scene. Moreover, the 2D source imagery, previously used to obtain the photogrammetric point cloud, is also employed to enhance the procedure with the recognition of moving objects by comparing successive epochs. The analysed context is an urban scene from the UAVid dataset proposed for the ISPRS benchmark. In particular, the so-called “seq18”, a set of high-resolution oblique images taken by a UAV (Unmanned Aerial Vehicle), has been used to test the semantic segmentation. The workflow includes the production of two Digital Surface Models (DSMs), containing the geometric and the radiometric information respectively, and their processing by means of the Harris corner detector to characterise image variability. Then, starting from the source geometry and colour information and combining them with their variability maps, a preliminary classification is performed. Further criteria allow the segmentation of the humans and cars present in the scene. In particular, static objects are identified according to the content of the neighbouring pixels within a given kernel, while moving elements are recognised by comparing the projected images belonging to different epochs. The preliminary results show some criticalities that require further attention and improvement. In particular, the strategy could be enriched by extracting more information from the source 2D images, which at present are used directly only for the comparison of consecutive epochs.
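The abstract does not detail how the Harris corner detector is applied to the DSMs. As an illustrative sketch only (not the authors' implementation), the standard Harris response can be computed on a DSM-like height array with NumPy and SciPy: high positive values mark corner-like variability, strongly negative values mark edges, and near-zero values flat regions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(img, sigma=1.0, k=0.05):
    """Harris corner response map of a 2D array.

    High positive values indicate corners, strongly negative
    values edges, and near-zero values flat (low-variability) areas.
    """
    img = img.astype(float)
    ix = sobel(img, axis=1)          # horizontal gradient
    iy = sobel(img, axis=0)          # vertical gradient
    # structure tensor components, smoothed over a Gaussian window
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2

# toy geometric DSM: a flat plane with a raised block (building-like step)
dsm = np.zeros((20, 20))
dsm[5:15, 5:15] = 10.0
r = harris_response(dsm)
```

In this toy case the block corners yield a positive response while the flat background stays at zero, which is the kind of variability map the classification step could combine with the source geometry and colour.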
Files in this item:
File: isprs-archives-XLIII-B2-2021-353-2021.pdf
Access: open access
Description: Conference proceedings contribution
Type: Publisher's version
Size: 1.65 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1049710