Generative Adversarial Nets (GANs) for the Enhancement of Ancient Manuscripts

Status: available
Supervisor: Simon Brenner

Problem Statement

Multispectral Imaging (MSI) has proven to be a powerful tool for recovering degraded texts in historic manuscripts. In this context, the goal of image enhancement methods is to take the raw multispectral images as input and produce an output image with maximal visibility/legibility of the degraded text. Fig. 1 shows an example in which a second layer of text could be visualized.

Fig. 1: Multispectral Imaging: In this example, a palimpsest (a text that has been erased so the parchment could be reused) is revealed.
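To make the input/output setting concrete: a capture session typically yields one grayscale image per spectral band, and an enhancement method maps this stack to a single image in which the target text is as legible as possible. The sketch below uses PCA only as a simple classical baseline for such a mapping; the file names and the number of bands are hypothetical assumptions, not part of this topic's data.

```python
# A minimal, assumed setup: one grayscale image per spectral band, stacked into
# an (H, W, N) cube, then projected onto its first principal components as a
# simple classical enhancement baseline. File names and band count are hypothetical.
import numpy as np
import imageio.v3 as iio

band_files = [f"page_042_band_{i:02d}.png" for i in range(12)]  # hypothetical paths
bands = np.stack([iio.imread(f).astype(np.float32) for f in band_files], axis=-1)
h, w, n_bands = bands.shape

# Treat each pixel as an n_bands-dimensional spectrum and decorrelate the bands.
pixels = bands.reshape(-1, n_bands)
pixels -= pixels.mean(axis=0, keepdims=True)
_, _, vt = np.linalg.svd(pixels, full_matrices=False)   # rows of vt = principal axes
false_color = (pixels @ vt[:3].T).reshape(h, w, 3)      # first 3 components as RGB

# Rescale each channel to [0, 255]; later components often separate faint text
# from the parchment background better than any single raw band.
lo = false_color.min(axis=(0, 1), keepdims=True)
hi = false_color.max(axis=(0, 1), keepdims=True)
out = (false_color - lo) / (hi - lo + 1e-8) * 255.0
iio.imwrite("page_042_pca_falsecolor.png", out.astype(np.uint8))
```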

Goal

Image-to-image translation is a prominent application of GANs. It even works without paired training data: https://arxiv.org/abs/1703.10593. This approach can, for example, be used to turn a photo of yourself into a Van Gogh painting. The question is: can a GAN be used for manuscript enhancement? Possible image translation scenarios (a minimal training sketch follows Fig. 2):

  • Multispectral layers => readable document (the standard problem, see Fig. 2); the most challenging scenario.
  • False-color image (as in Fig. 1, right) => “naturally colored” image
  • Intact manuscript => degraded manuscript (generate “synthetic training” data for other enhancement approaches)
Fig. 2: Multispectral layers to something readable.
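As a starting point for the first scenario, the sketch below shows the core of an unpaired (CycleGAN-style) generator update: one generator maps a stack of multispectral layers to a readable page, a second generator maps back, and a cycle-consistency loss ties the two together. The tiny networks, band count, and loss weight are placeholder assumptions rather than the architecture from the linked paper, and the discriminator update is omitted for brevity.

```python
# A sketch of a CycleGAN-style generator update for scenario 1, assuming
# domain A = multispectral stacks and domain B = readable document images.
import torch
import torch.nn as nn

N_BANDS = 12  # assumed number of spectral layers per page

def small_net(in_ch, out_ch):
    # Placeholder body; a real implementation would use a ResNet/U-Net
    # generator and a PatchGAN discriminator.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )

G = small_net(N_BANDS, 1)   # multispectral stack -> readable grayscale page
F = small_net(1, N_BANDS)   # readable page -> multispectral stack (for the cycle)
D_B = nn.Sequential(small_net(1, 1), nn.AdaptiveAvgPool2d(1), nn.Flatten())

adv_loss = nn.MSELoss()     # least-squares GAN objective
cyc_loss = nn.L1Loss()
opt = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)

def generator_step(msi):
    """One simplified generator update on an unpaired batch of spectral stacks."""
    fake_readable = G(msi)
    pred = D_B(fake_readable)
    loss_adv = adv_loss(pred, torch.ones_like(pred))   # fool the domain-B critic
    loss_cyc = cyc_loss(F(fake_readable), msi)         # translating back recovers input
    loss = loss_adv + 10.0 * loss_cyc                  # assumed cycle weight of 10
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Smoke test with random tensors standing in for a real data loader:
print(generator_step(torch.randn(2, N_BANDS, 64, 64)))
```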


Workflow

  • Literature Review
  • Implementation
  • Evaluation
  • Written Thesis and final presentation

  If you are interested in one of the topics, write to sbrenner@cvl.tuwien.ac.at