Supervisor: Roman Pflugfelder
Visual tracking is a foundational building block in almost every application, from cell tracking in bioimaging, to object perception in robotic systems, to localisation in user-centric computing. Visual tracking is fascinating because it is unsolved; it is hard because it combines several difficult problems such as visual representation, adaptation, and the inference of the object's state; and, last but not least, it actually works, as research has progressed significantly over recent years.
A current trend is tracking unknown objects in unknown scenes, the most general case of tracking, which is urgently needed in self-driving cars and service robots. Since 2015, this trend has triggered a new view on learning to track by following connectionism again. Very recently, several new deep learning approaches have appeared which, similar to categorical object recognition, treat tracking as a supervised offline learning problem to which deep learning can be applied.
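The core idea behind fully-convolutional Siamese trackers of this kind can be illustrated with a minimal sketch: features of an exemplar (template) patch are cross-correlated with features of a larger search region, and the peak of the resulting response map locates the object. The NumPy code below is only a toy illustration of this matching step using normalised cross-correlation on raw pixel values, not the actual learned deep features of the referenced trackers.

```python
import numpy as np

def ncc_response(template, search):
    """Slide the template over the search region and compute a
    normalised cross-correlation (cosine similarity) response map."""
    th, tw = template.shape
    sh, sw = search.shape
    t = template / np.linalg.norm(template)
    response = np.zeros((sh - th + 1, sw - tw + 1))
    for y in range(response.shape[0]):
        for x in range(response.shape[1]):
            w = search[y:y + th, x:x + tw]
            response[y, x] = np.sum(w * t) / np.linalg.norm(w)
    return response

# Toy example: a 3x3 template cut out of an 8x8 search region.
rng = np.random.default_rng(0)
search = rng.standard_normal((8, 8))
template = search[2:5, 3:6].copy()  # the object's appearance
response = ncc_response(template, search)
peak = np.unravel_index(np.argmax(response), response.shape)
print(peak)  # peak recovers the object's top-left position (2, 3)
```

In the deep trackers of the references, the sliding-window correlation is implemented as a single convolution over learned feature maps, which is what makes them fast enough for real-time tracking.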
The task is to implement and compare existing trackers that follow this deep learning approach.
The thesis can be combined with a preceding Informatik Praktikum.
- Review literature since 2015
- Create training and validation dataset based on ILSVRC’16 (www.image-net.org)
- Implement training and test algorithms
- Test results on VOT benchmarks (www.votchallenge.net)
- Optional: Improve algorithms for better results
- Written report/thesis and final presentation
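The VOT benchmarks mentioned above measure accuracy via the region overlap (intersection over union) between the predicted and ground-truth regions. A minimal sketch for axis-aligned bounding boxes follows (VOT itself also supports rotated rectangles, which this illustration does not cover):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # Overlap extents along each axis, clipped at zero.
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 2, 2)))  # 1/7, about 0.1429
```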
- Basic knowledge in computer vision
- Basic experience in Matlab, C++, Python
- Interest in Machine Learning, maths, statistics
- Interest in GPU programming
- D. Held, S. Thrun, and S. Savarese. Learning to track at 100 FPS with deep regression networks. In ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I, pages 749–765. Springer, 2016.
- L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. S. Torr. Fully-convolutional siamese networks for object tracking. In ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8–10 and 15–16, 2016, Proceedings, Part II, pages 850–865. Springer, 2016.