PhD Position: Active Learning for Object Detection in Remote Sensing Images

Background
Analyzing remote sensing images on a large scale requires balancing two major constraints: the accuracy of the results and the time it takes to process the images. Human analysts usually provide highly accurate results, but relying on them is often not feasible in large-scale scenarios due to the sheer amount of image data to be processed. Consequently, fully automatic image analysis approaches are widely considered, but they often lack the accuracy needed for the specific problem domain. Therefore, to generalize well across different domains, it is essential to combine modern image analysis methods with human supervision in order to ease the domain transfer.
The recently granted DoRIAH project (Domain-adaptive Remote sensing Image Analysis with Human-in-the-loop) aims to investigate the analysis of remote sensing images from a human-in-the-loop perspective. Its goal is to enable the semi-automatic detection of various small objects in remote sensing images of any kind, from historical aerial images to modern-day satellite images, a task common to many application domains: for instance, detecting bomb craters in aerial images from WW2 is a major task for estimating the risk posed by UneXploded Ordnance (UXO). In modern-day images, the detection of vehicles provides a rich information source for traffic monitoring or parking lot analysis.

PhD Topic
We offer a full PhD position for 3 years, in which you will explore ways to combine deep-learning-based object detection methods with human interaction. You will work in a project team of visual computing and photogrammetry scientists, together with a company working in the field of aerial image analysis.

Requirements:

  • Applicants should hold a master’s degree in Computer Science, Mathematics, or a related field
  • Applicants should have in-depth knowledge of Machine Learning, in particular Deep Learning
  • Experience with Computer Vision, Remote Sensing, and Python Deep Learning frameworks is advantageous

Contact: robert.sablatnig@tuwien.ac.at, sebastian.zambanini@tuwien.ac.at