Learned Image Compression
Accounting for more than 85% of global internet traffic, image and video data make up the majority of information transmitted over the internet. In particular, the growing market for streaming services, together with rising video quality, makes highly efficient image and video compression methods a necessity for keeping communication networks stable.
Traditional compression methods typically rely on intricate hand-designed pipelines. Improvements to these methods have shown diminishing returns in recent years, and a smooth user experience can only be achieved through special-purpose decoding hardware baked into modern GPUs and CPUs. On the one hand, this allows complex algorithms to run in real time; on the other hand, it hinders innovation, since these algorithms can only be updated by putting new hardware into the hands of consumers. In contrast, machine learning hardware in modern CPUs and GPUs provides an efficient way to solve a multitude of problems that require parallel processing, without designing purpose-built circuitry.
AIStream aims to tackle image compression by developing machine-learning-based compression algorithms that learn directly from data. This allows easy adaptation to specific image and video subdomains such as stereo images, medical images, and others. Further, a new and improved model tailored to one specific type of content can be put in the hands of consumers at the push of a button. No new hardware required. That is innovation. During the course of the project we will develop working prototypes for four image and/or video subdomains to demonstrate the flexibility and potential of this approach.
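To make the learned-compression idea concrete, here is a minimal sketch of the usual pipeline: an encoder maps an image block to a compact latent, the latent is quantized to integers, and a decoder reconstructs the block; training such a model minimizes a rate-distortion objective D + λ·R. The linear transforms, helper names, and entropy-based rate proxy below are illustrative assumptions for this sketch, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(block, W_enc):
    # "analysis transform": project the block to a small latent vector
    return block.reshape(-1) @ W_enc

def quantize(latent):
    # hard rounding, as applied at inference time
    return np.round(latent)

def decode(symbols, W_dec, shape):
    # "synthesis transform": reconstruct the block from quantized symbols
    return (symbols @ W_dec).reshape(shape)

def rate_estimate(symbols):
    # crude rate proxy: empirical entropy of the quantized symbols, in bits
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p)) * symbols.size

block = rng.standard_normal((8, 8))            # toy 8x8 "image" block
W_enc = rng.standard_normal((64, 16)) * 0.1    # toy linear encoder (assumption)
W_dec = np.linalg.pinv(W_enc)                  # toy decoder: pseudo-inverse

symbols = quantize(encode(block, W_enc))
recon = decode(symbols, W_dec, block.shape)

distortion = np.mean((block - recon) ** 2)     # D: mean squared error
rate = rate_estimate(symbols)                  # R: bits spent on this block
lam = 0.01                                     # λ trades off rate vs. distortion
loss = distortion + lam * rate                 # rate-distortion objective
```

In a real learned codec the linear maps are replaced by deep neural networks and λ is swept to trace out a rate-distortion curve; retraining on a subdomain (e.g. medical images) is what lets the same pipeline specialize without new hardware.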