DoRIAH

Project Details

Funding

FFG

Grant Number

880883

Duration

2020/12/01-2023/11/30

Contact

Sebastian Zambanini

Persons

Sebastian Zambanini
Robert Sablatnig

Domain-adaptive Remote sensing Image Analysis with Human-in-the-loop

Analyzing remote sensing images on a large scale requires balancing two competing constraints: the accuracy of the results and the time it takes to process the images. Human analysts usually provide highly accurate results, but employing them is often not feasible in large-scale scenarios due to the sheer amount of image data to be processed. Consequently, fully automatic image analysis approaches are widely considered, but they often lack the accuracy needed for the specific problem domain. Therefore, to generalize well across different domains, it is essential to combine modern image analysis methods with human supervision to ease the domain transfer.
As a solution, the DoRIAH project (Domain-adaptive Remote sensing Image Analysis with Human-in-the-loop) investigates the analysis of remote sensing images from a human-in-the-loop perspective. Its goal is to enable the semi-automatic detection of various small objects in remote sensing images of any kind, from historical aerial images to modern-day satellite images, which is a common goal in many different application domains: for instance, detecting bomb craters in aerial images from WW2 is a major task for estimating the risk posed by UneXploded Ordnance (UXO). In modern-day images, the detection of vehicles provides a rich information source for traffic monitoring or parking lot analysis.
The unified approach of DoRIAH involves two basic steps: (1) georeferencing and 3D reconstruction from remote sensing imagery and (2) interactive detection of objects of interest. Both steps will be equipped with feedback loops to bring human cognitive abilities into the process. While human feedback tells the system about the accuracy and correctness of its results, visual feedback of (improved) system results allows for meaningful interpretation on the user side.
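The feedback loop described above can be sketched in code. The following is a minimal, illustrative sketch of the human-in-the-loop idea only: an automatic detector proposes candidates, a human reviews them, and the detector adapts from that feedback. All names here (ThresholdDetector, human_in_the_loop, the toy "image" format) are assumptions for illustration, not part of the actual DoRIAH system.

```python
class ThresholdDetector:
    """Toy detector: an 'image' is a list of (object_id, score) pairs,
    and anything scoring at or above the threshold becomes a candidate."""

    def __init__(self, threshold):
        self.threshold = threshold

    def detect(self, image):
        return [obj for obj, score in image if score >= self.threshold]

    def update(self, feedback):
        # Crude adaptation rule: raise the threshold after false positives,
        # lower it slightly when the analyst confirmed everything.
        rejected = [obj for obj, ok in feedback.items() if not ok]
        self.threshold += 0.05 if rejected else -0.05


def human_in_the_loop(images, detector, review, rounds=3):
    """Alternate automatic detection with human review for a few rounds."""
    confirmed = []
    for _ in range(rounds):
        # Automatic step: collect candidate detections from all images.
        candidates = [c for img in images for c in detector.detect(img)]
        # Human feedback: the analyst accepts or rejects each candidate.
        feedback = {c: review(c) for c in candidates}
        confirmed = [c for c, ok in feedback.items() if ok]
        # Feedback loop: adapt the detector from the analyst's corrections.
        detector.update(feedback)
    return confirmed


# Example: one "aerial image" with two true craters and one shadow that
# the detector initially mistakes for a crater.
image = [("crater1", 0.9), ("shadow", 0.55), ("crater2", 0.7)]
result = human_in_the_loop([image], ThresholdDetector(0.5),
                           review=lambda obj: obj.startswith("crater"))
print(result)  # the two confirmed craters
```

In a real system the detector would be a learned model and the review step an interactive annotation interface; the point of the sketch is only the alternation between automatic detection and human correction.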

Project Consortium and Partners: