Dimitri Lallement

After obtaining his engineering degree in Computer Science and Applied Mathematics from ENSIMAG, Grenoble (France), Dimitri Lallement joined the Earth Observation Lab of the French Space Agency (CNES) in 2020. He currently works there as an image processing engineer. His primary research interests include Digital Terrain Model (DTM) extraction, 3D reconstruction, and multimodal change detection.


Sessions

07-18
14:30
30min
PICANTEO: a multi-modal open source change detection pipeline for reliable uncertainty-aware disaster response
Dimitri Lallement

When a natural disaster strikes, the International Charter: Space and Major Disasters [1] can be activated to deliver images of affected areas within hours or days. Currently, most rapid mapping services rely on the error-prone and time-consuming work of photo-interpretation experts, who manually label impacted structures in order to provide local authorities and emergency services with maps that locate and quantify damage. This kind of emergency mapping is known as "Rapid Mapping" in the Copernicus Emergency Management Service, and differs from "Risk and Recovery", which aims to provide a more detailed assessment of destruction and reconstruction for post-disaster monitoring purposes over a time span of several weeks to months.

To support human annotators in this task, CNES (the French space agency) is developing a change detection software tool [2] that takes advantage of the fusion of very high resolution 2D and 3D satellite data (resolution below 70 cm). This tool, called PICANTEO for ‘Photogrammetry and Imagery Change Analysis based on Neural network Toolbox for Earth Observation’, aims to identify destroyed buildings and deliver accurate damage maps to annotation teams as quickly as possible. To promote its use, PICANTEO is distributed as open source on GitHub (github.com/CNES/picanteo).

This tool features a 2D change detection pipeline based on state-of-the-art deep neural networks that detect semantic changes between satellite images, and optionally a 3D change detection pipeline that confirms and completes those results through data fusion. This paper will explain in detail how PICANTEO integrates 2D uncertainty estimation for semantic segmentation and ambiguity estimation for 3D reconstruction to filter out false detections and improve the reliability of multimodal change detection results.
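To make the filtering idea concrete, the minimal sketch below keeps a pixel as a change only when the network is both confident and its estimated uncertainty is low. This is an illustration, not PICANTEO's actual code: the function name and both thresholds are made up for the example.

```python
import numpy as np

def filter_changes(change_prob, uncertainty, prob_thresh=0.5, unc_thresh=0.3):
    """Keep a pixel as 'changed' only when the change probability is high
    AND the estimated uncertainty is low. Thresholds are illustrative."""
    return (change_prob > prob_thresh) & (uncertainty < unc_thresh)

# Toy 2x2 example: the top-right pixel has a high change probability
# but also high uncertainty, so it is rejected as a likely false alarm.
prob = np.array([[0.9, 0.8], [0.2, 0.95]])
unc  = np.array([[0.1, 0.6], [0.1, 0.2]])
mask = filter_changes(prob, unc)
```

In practice the uncertainty map would come from the segmentation network itself (see the approaches discussed below), and the surviving mask is what gets fused with the 3D evidence.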

Given that the 2D modality is always available after a disaster in the "Rapid Mapping" case, priority is given to the robustness and transferability of the models, supported by reliable estimates of their uncertainty. This involves integrating advanced data augmentation techniques to create models that are resilient to a wide range of potentially degraded acquisition conditions (off-nadir viewing, heavy cloud cover, etc.) while ensuring transferability across a selection of sensors. In addition to demonstrating the impact of incorporating uncertainty into the 2D change detection pipeline, this paper also provides an overview of several baseline approaches for estimating uncertainty with modern deep neural network architectures. This overview includes a unified framework for comparing and evaluating these approaches in terms of model calibration and the significance of the derived uncertainty indicators, as well as a qualitative visualization of the different uncertainty maps.
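A standard way to compare uncertainty estimation approaches in terms of model calibration is the Expected Calibration Error (ECE): predictions are binned by confidence, and the gap between mean confidence and observed accuracy is averaged across bins. The sketch below is a generic implementation of that metric under a 10-bin assumption, not PICANTEO's evaluation framework.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: population-weighted average of |accuracy - confidence| per
    confidence bin. Lower is better; 0 means perfectly calibrated."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()    # observed accuracy in the bin
            conf = confidences[in_bin].mean()  # mean predicted confidence
            ece += in_bin.mean() * abs(acc - conf)
    return ece

# Toy check: four predictions at 95% confidence, all correct.
# The model is slightly underconfident, giving an ECE of 0.05.
ece = expected_calibration_error([0.95] * 4, [True] * 4)
```

The same interface works for any of the uncertainty approaches being compared (softmax confidence, MC dropout, ensembles, ...), since each ultimately produces a per-pixel confidence score.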

Although the availability of very high-resolution stereo acquisitions is currently fairly limited globally, and therefore better suited to programmed acquisitions in the "Risk and Recovery" case, new satellite constellations such as the CO3D mission [3] will greatly increase the accessibility of the 3D modality. This work can therefore also be seen as a prelude to the arrival of this new data source, in anticipation of future change detection approaches taking advantage of this additional feature. PICANTEO includes an optional 3D module to extend the 2D pipeline. It relies on the production of a Digital Surface Model (DSM) from a stereo acquisition using the open-source photogrammetry tool CARS [4]. This processing is applied to both the pre- and post-disaster acquisitions, and CARS incorporates a geometric refinement module that enables the DSMs to be registered to each other. The next step uses Bulldozer [5] to extract the Digital Terrain Models (DTMs) from the DSMs, in order to derive a Digital Height Model (DHM), or nDSM, that captures the above-ground features (buildings, vegetation) at each date. The point of computing 3D change on DHMs rather than DSMs is to avoid taking into account 3D changes that interfere with building change detection, such as vibrations in the DSM or landslides. The xDEM tool is then used to compare the DHMs at the two dates and co-register them to correct any residual bias. Finally, the 3D pipeline incorporates filtering methods, in particular the integration of the ambiguity concept [6], which avoids interpreting correlation errors from the DSM generation step (water or shadow areas, for example) as 3D changes.
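The 3D steps above reduce to simple raster arithmetic once the DSMs, DTMs, and ambiguity maps are produced: compute the DHM at each date, difference them, and keep only large, reliable elevation changes. The sketch below assumes co-registered arrays; the function name, thresholds, and ambiguity scale (low score = reliable match) are illustrative, not PICANTEO's.

```python
import numpy as np

def change_3d(dsm_pre, dtm_pre, dsm_post, dtm_post, ambiguity,
              height_thresh=2.0, ambiguity_thresh=0.5):
    """Sketch of the 3D step: DHM (nDSM) = DSM - DTM at each date, then a
    3D change is flagged where the DHM difference exceeds a height
    threshold and the stereo ambiguity score stays low (reliable match)."""
    dhm_pre = dsm_pre - dtm_pre      # above-ground height before the event
    dhm_post = dsm_post - dtm_post   # above-ground height after the event
    elevation_change = dhm_post - dhm_pre
    reliable = ambiguity < ambiguity_thresh
    return (np.abs(elevation_change) > height_thresh) & reliable

# Toy 1x2 scene: the first pixel is a 10 m building that lost 8 m of
# height (collapse); the second pixel is unchanged ground.
mask3d = change_3d(dsm_pre=np.array([[110.0, 100.0]]),
                   dtm_pre=np.array([[100.0, 100.0]]),
                   dsm_post=np.array([[102.0, 100.0]]),
                   dtm_post=np.array([[100.0, 100.0]]),
                   ambiguity=np.array([[0.1, 0.1]]))
```

Working on DHMs rather than raw DSMs means a landslide that lowers both the terrain and the surface by the same amount cancels out in the subtraction, leaving only genuine above-ground (building) change.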

In order to simplify the user experience, a visualization portal has been implemented to enable navigation within the studied scene. Users can use this dashboard to easily browse the different layers mentioned above: 2D changes, building footprints, 2D uncertainties, 3D changes, DHM, 3D ambiguity, fused 2D/3D changes, etc. The dashboard is made up of three side-by-side panels: one for viewing the 2D layers before the disaster, a second for the 2D post-event layers, and, if 3D data are available, a third displaying the 3D layers. This dashboard allows users to easily navigate and assess damage after the disaster.

To evaluate the performance of our method in the context of natural disaster response, this paper will include a recent activation of the International Charter: Space and Major Disasters for which the damage has been annotated.

State of software
SA01