
Star Wars as expressionism: neural networks enable style transfer for video

Tue 3 May 2016

Neural networks and algorithms have already provided some impressive examples of the transposition of distinctive visual styles onto existing images. We featured the pioneering work of Gatys, Ecker and Bethge here at The Stack last year, when their paper A Neural Algorithm of Artistic Style [PDF] demonstrated how convolutional neural networks could ‘apply’ the style of Vincent Van Gogh, Pablo Picasso or Frida Kahlo to modern-day photos and images.
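As a reminder of how that single-image method works: the Gatys approach represents an image’s style via Gram matrices of convolutional feature maps, correlations between feature channels that discard spatial layout, and then optimises an output image to match those statistics. The snippet below is a minimal numpy sketch of that style representation, not the authors’ code; in practice the feature maps come from a pretrained network such as VGG, and the function names, shapes and normalisation here are illustrative.

```python
import numpy as np

def gram_matrix(features):
    """Style representation of one CNN layer: correlations between
    feature channels, discarding spatial layout.

    features : (C, H, W) activations from one convolutional layer
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    # one common normalisation; the exact scaling varies by implementation
    return (f @ f.T) / (c * h * w)

def style_loss(generated_feats, style_feats):
    """Squared distance between Gram matrices, summed over layers."""
    return sum(np.sum((gram_matrix(g) - gram_matrix(s)) ** 2)
               for g, s in zip(generated_feats, style_feats))
```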

Now a new team from the Department of Computer Science at the University of Freiburg has taken the research an eye-boggling step further by producing a workable method of applying any relatively distinct visual style to any video, with Artistic style transfer for videos [PDF].

To demonstrate the technique, researchers Manuel Ruder, Alexey Dosovitskiy and Thomas Brox have also produced a remarkable video in which a scene from the 1983 Star Wars entry Return Of The Jedi is re-interpreted in the style of early 20th-century expressionist painter Heinrich Schlief, and the CGI children’s movie Ice Age (2002) is re-rendered in the style of prehistoric cave painting, amongst others. See the video below:


‘The multi-pass algorithm applied to a scene from Miss Marple. With the default method, the image becomes notably brighter and loses contrast, while the multi-pass algorithm yields a more consistent image quality over time.’

It proved no small task to upgrade the Gatys method to video, since simply applying the algorithm to each frame independently would result in incoherent flickering, with the technique taking no account of the preceding or following frames. The team therefore introduced a temporal constraint that penalises any deviation between two consecutive frames. But in order to achieve a dynamic and fluid result, it was also necessary to account for occluded and fast-moving objects, which required special handling.
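Concretely, the constraint can be pictured as a per-pixel penalty between the current stylised frame and the previous stylised frame warped forward along the estimated optical flow, switched off wherever the flow is unreliable. The sketch below, with illustrative names and shapes rather than the authors’ implementation, shows one way such a term could look:

```python
import numpy as np

def temporal_loss(stylised, warped_prev, weights):
    """Penalise deviation from the flow-warped previous frame.

    stylised    : (H, W, 3) current stylised frame
    warped_prev : (H, W, 3) previous stylised frame, warped into the
                  current frame along the estimated optical flow
    weights     : (H, W) per-pixel mask in [0, 1]; 0 disables the
                  constraint at occlusions and motion boundaries
    """
    diff = stylised - warped_prev
    return np.sum(weights[..., None] * diff ** 2) / stylised.size
```

During optimisation, a term of this kind would simply be added, with a suitable weight, to the usual per-frame content and style losses.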

The team are still working on certain discontinuities which appear when a previously hidden object moves into plain view, and also had to do a lot of work to stop artifacts forming at the frame borders.

‘For static images, these artifacts are hardly visible, yet for videos with strong camera motion they move towards the center of the image and get amplified. We developed a multi-pass algorithm, which processes the video in alternating directions using both forward and backward flow. This results in a more coherent video.’
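A structural sketch of how such a multi-pass scheme might be organised is shown below. The stylise and warp helpers, the blending rule and the parameter names are all assumptions for illustration; the authors’ actual procedure may differ in its details.

```python
def multi_pass(frames, num_passes, stylise, warp, blend=0.5):
    """Refine stylised frames over several passes that alternate
    between forward and backward order, so temporal information
    propagates in both directions.

    frames  : list of input video frames
    stylise : runs (or continues) per-frame style optimisation,
              initialised from the given image (assumed helper)
    warp    : warps an image along the optical flow between two
              frames (assumed helper)
    """
    # initial pass: stylise every frame independently
    out = [stylise(f, init=f) for f in frames]
    for p in range(num_passes):
        # alternate sweep direction on each pass
        order = (range(len(frames)) if p % 2 == 0
                 else range(len(frames) - 1, -1, -1))
        prev = None
        for i in order:
            if prev is not None:
                # pull the neighbouring result into this frame and mix
                # it into the initialisation before re-optimising
                warped = warp(out[prev], frames[prev], frames[i])
                init = blend * warped + (1 - blend) * out[i]
            else:
                init = out[i]
            out[i] = stylise(frames[i], init=init)
            prev = i
    return out
```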

Tags: AI, news, research