Status: available
Supervisors: Daniel Helm, Martin Kampel
Start: as soon as possible
Automatic video analysis in large historical film collections is a challenging task for several reasons. One major challenge is the quality of historical films, e.g., over- and underexposure and blur. Furthermore, the films contain scratches, cracks, or press cuts and are affected by mold, dust, or wet splices. Manually creating a dataset with meaningful ground-truth labels for developing and evaluating video analysis tasks is very time-consuming and non-trivial. Modern techniques such as conditional GANs (cGANs) or CycleGANs demonstrate impressive results in generating synthetic datasets as well as in transferring image styles from a source domain to a target domain.
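To make the notion of "traditional" deformation concrete, the following is a minimal sketch of film-damage synthesis on a grayscale image, producing an (original, deformed, mask) triple. All function and parameter names are illustrative, not part of any existing pipeline; real scratch, dust, and blur models would be considerably more elaborate.

```python
import numpy as np

def degrade(img, n_scratches=3, overexpose=0.3, seed=0):
    """Apply simple synthetic film damage to a float image in [0, 1].

    Returns the deformed image and a binary mask of damaged pixels.
    """
    rng = np.random.default_rng(seed)
    out = np.clip(img + overexpose, 0.0, 1.0)   # global overexposure
    mask = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for _ in range(n_scratches):                # bright vertical scratches
        x = rng.integers(0, w)
        out[:, x] = 1.0
        mask[:, x] = True
    # crude horizontal 3-tap box blur to mimic loss of sharpness
    out = (np.roll(out, -1, axis=1) + out + np.roll(out, 1, axis=1)) / 3.0
    return out, mask

img = np.full((8, 8), 0.5)
deformed, mask = degrade(img)
```

The returned mask directly provides the per-pixel ground truth that would otherwise have to be annotated by hand.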
The goal of this practical work is to explore state-of-the-art deep-learning-based as well as traditional deformation strategies in order to create a photorealistic historical image dataset based on a selected benchmark dataset (e.g., MS COCO). The work comprises the following tasks:
- Literature review – getting to know the algorithms (papers, GitHub repos, …)
- Exploration of state-of-the-art approaches (deep-learning-based vs. traditional vs. combined)
- Creation of a usable dataset (original image + deformed image + mask)
- Implementation of an own solution to create photorealistic image samples
- How can we measure the quality of a deformed image?
- Sensitivity threshold for selecting the degree of deformation
- Evaluation (qualitative vs. quantitative)
- Readable and documented source code
- Usable software package
- Final report + presentation
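One simple quantitative answer to the question of how to measure the quality of a deformed image is the peak signal-to-noise ratio (PSNR) between the original and the deformed image; a threshold on it could serve as a starting point for the sensitivity criterion listed above. This is only a sketch of one possible metric, not a prescribed choice; perceptual measures such as SSIM would likely complement it.

```python
import numpy as np

def psnr(original, deformed, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means less deformation."""
    mse = np.mean((original - deformed) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.full((4, 4), 0.5)
b = a.copy()
b[0, 0] = 1.0  # single damaged pixel
print(round(psnr(a, b), 2))  # → 18.06
```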
Basic knowledge of computer vision. Experience in Python. Interest in deep learning (TensorFlow, Keras, PyTorch) and machine learning.
This work is part of the project “Visual History of the Holocaust”.
The practical course is part of an ongoing research project. A “Forschungsbeihilfe” (research grant) is available for the selected student.