
Tackling Global Illumination in Virtual Reality

Wed 14 Oct 2015

The challenges facing the rebirth of Virtual Reality will be quite familiar to anyone who has followed the historical progress of CGI rendering, either from the sharp end of 3D scene-creation programs – such as Maya, LightWave and Cinema 4D – or from the consumer end of video game rendering. The points at which virtual environments begin to ‘crack’ are defined by many of the same limitations and bottlenecks that strangled the public’s interest in the potential of VR in the 1990s: bitmap resolution (‘Don’t stand too near the walls’), polygonal resolution (‘I never knew there were so many angles in a sphere!’), reactive frame rates (‘Don’t look sideways too quickly’) and clipping snafus (‘I just walked through a wall’), amongst others.

One of the major challenges in achieving convincing environments is providing realistic ray-traced lighting, not only in real time but, for the coming generation of VR headsets, twice over at 60fps, while compensating for chromatic aberration and covering a field of view so wide (to simulate the scope of human vision) that practically 180 degrees of the scene has to be rendered 120 times a second.
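As a rough illustration of the arithmetic above – a sketch only, not tied to any particular headset or engine – here is the per-render time budget those figures imply:

```cpp
#include <cstdio>

int main() {
    const double eyeFps     = 60.0;                // per-eye target from the article
    const double rendersSec = 2.0 * eyeFps;        // two eyes -> 120 renders per second
    const double budgetMs   = 1000.0 / rendersSec; // wall-clock time available per render

    std::printf("renders per second: %.0f\n", rendersSec);
    std::printf("time budget per render: %.2f ms\n", budgetMs); // ~8.33 ms
    return 0;
}
```

Roughly 8.3 milliseconds per render, with lighting, geometry and distortion correction all competing for that slice.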

It can’t be done; but it can be faked. Even faking it is a significant logistical task. Geomerics, acquired by ARM in 2013 and based between Vancouver and Cambridge (UK), is addressing global illumination in Virtual Reality with the Enlighten runtime engine, a real-time lighting solution portable to all the major games platforms, including the Xbox and PlayStation ranges, OS X, Linux, Android and Windows PC.

Global Illumination follows the same virtual light source as standard ray tracing, but calculates what happens to the light after it reaches its primary destination: ricocheting off walls and other surfaces, being absorbed or passed on for further journeys depending on the nature of the materials that the bounced light lands on, and generally taking quite a detour round a scene before running out of energy. At the very least, GI provides the ‘fill light’ that gives body and variance to shadows.
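The paragraph above is essentially describing light shedding energy at each bounce. Here is a deliberately toy C++ sketch of that idea – hypothetical albedo values and a single light path, nothing resembling Enlighten’s actual solver:

```cpp
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical albedos of the surfaces one light path visits in turn
    // (the fraction of incoming light each surface reflects onward).
    std::vector<float> albedos = {0.8f, 0.5f, 0.3f};

    float energy = 1.0f;             // light leaves the source at full strength
    const float kMinEnergy = 0.01f;  // treat anything below this as "run out"

    for (std::size_t i = 0; i < albedos.size() && energy > kMinEnergy; ++i) {
        float deposited = energy * (1.0f - albedos[i]); // absorbed: lights this surface
        energy *= albedos[i];                           // remainder continues its detour
        std::printf("bounce %zu: deposited %.3f, %.3f carries on\n",
                    i + 1, deposited, energy);
    }
    return 0;
}
```

The deposited fractions at each stop are the ‘fill light’ the article mentions: illumination arriving at surfaces the light source never points at directly.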

Enlighten’s GI runtime is multi-threaded and optimised, with separate handling for static and moving objects. A great deal of processing can be saved by pre-calculating – or ‘baking’ – the lighting in a scene. Baking lighting into textures or shaders makes shadows and light variance part of the texture itself, and it’s a technique that isn’t going to work well if a radical change of lighting is envisioned for the scene, such as when some boss monster rips the roof off the environment to let a blazing sun cast over the scene geometry. Or someone just opens the curtains.
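As a hedged illustration of the baking idea – the lightmap and lighting functions here are hypothetical placeholders, not Enlighten’s API – the lighting is evaluated once offline and reduced to a cheap lookup at runtime:

```cpp
#include <array>
#include <cstdio>

// Hypothetical 4x4 lightmap; real ones are far larger.
constexpr int kSize = 4;
using Lightmap = std::array<float, kSize * kSize>;

// Stand-in lighting functions: a real baker would trace rays against the
// static scene here. These are simple placeholders.
float directLight(int x, int y)   { return 0.6f + 0.05f * ((x + y) % 2); }
float indirectLight(int x, int y) { return 0.2f + 0.01f * (x + y); }

// Offline step: evaluate the full lighting per texel and store the result.
Lightmap bake() {
    Lightmap map{};
    for (int y = 0; y < kSize; ++y)
        for (int x = 0; x < kSize; ++x)
            map[y * kSize + x] = directLight(x, y) + indirectLight(x, y);
    return map;
}

int main() {
    Lightmap map = bake();  // done once, before gameplay
    // At runtime, shading this surface is just a cheap lookup...
    std::printf("texel (1,2) = %.3f\n", map[2 * kSize + 1]);
    // ...but if the lights change (roof torn off, curtains opened),
    // the stored values are stale and the map must be re-baked.
    return 0;
}
```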

But when the environment is as tightly controlled as in the sphere of videogames or VR scenarios, these variables are known, and complex lighting can be ‘booked’ in advance of gameplay or virtual immersion. Aware that Global Illumination could break the GPU budget in VR scenarios whose parameters already push the latest and best hardware to its limits, Enlighten offloads the task onto the CPU: it precalculates the location and topography of static geometry, applies lighting, and compresses the resultant runtime data, which operates as an overlay on the GPU’s work in rendering the scene in real time.
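A simplified sketch of that division of labour might look like the following – every name here is a hypothetical stand-in, not Enlighten’s actual interface. Static geometry is analysed up front, and a CPU thread then refreshes the lighting data on its own schedule:

```cpp
#include <atomic>
#include <chrono>
#include <cstddef>
#include <thread>
#include <vector>

struct GIData { std::vector<float> texels; };  // compact runtime lighting data

// Precompute step, run before gameplay: derive whatever the runtime solver
// needs from the static geometry (placeholder: just size the output buffer).
GIData precomputeStatic(std::size_t staticSurfaceCount) {
    return GIData{std::vector<float>(staticSurfaceCount, 0.0f)};
}

std::atomic<bool> running{true};

// CPU-side solver loop: refreshes the GI data on its own schedule,
// independently of whatever the GPU is doing.
void giWorker(GIData& gi) {
    while (running.load()) {
        for (float& texel : gi.texels)
            texel = 0.25f;  // placeholder for the real lighting solve
        std::this_thread::sleep_for(std::chrono::milliseconds(33));
    }
}

int main() {
    GIData gi = precomputeStatic(1024);        // prepared before play begins
    std::thread solver(giWorker, std::ref(gi));
    // ... the GPU's real-time rendering would run here, sampling gi
    //     as a lighting overlay on top of its own work ...
    running.store(false);
    solver.join();
    return 0;
}
```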

Because GI is supplied asynchronously, the GPU frame rate is unaffected by the added lighting quality; and since the GPU is freed from the task of complex lighting calculations, it is able to create two renders per scene calculation – one per eye – and leave Enlighten to overlay GI onto its work.
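The decoupling that paragraph describes might be sketched like this, again with hypothetical names: each scene update is rendered twice, once per eye, overlaying whichever GI result the CPU most recently finished, so lighting never stalls the frame:

```cpp
#include <cstdio>

struct GISnapshot { int version; };  // stand-in for a block of GI output

// Hypothetical: hands back the most recently completed CPU GI solve
// without ever waiting for a newer one to finish.
GISnapshot latestGI() {
    static int version = 0;
    return GISnapshot{++version};
}

void renderEye(int eye, const GISnapshot& gi) {
    std::printf("eye %d rendered with GI version %d\n", eye, gi.version);
}

int main() {
    for (int frame = 0; frame < 3; ++frame) {
        GISnapshot gi = latestGI();  // never blocks on the lighting solver
        renderEye(0, gi);            // left eye
        renderEye(1, gi);            // right eye: same scene, same GI overlay
    }
    return 0;
}
```

The design choice worth noting is the non-blocking read: a slightly stale lighting overlay is far less noticeable in a headset than a dropped frame.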

Geomerics have illuminated the work of some of the biggest players in the videogames industry, including EA, 2K Games, Microsoft Studios and Square Enix, and are clearly excited about the possibilities of applying that work to the VR headsets now approaching release, such as the Oculus Rift, Sony’s PlayStation VR, Razer’s OSVR, the Samsung Gear VR and the HTC Vive.

The videogame and CGI world is a shallow one, when you take a look inside it. The models are hollow, the buildings usually have no backs, and most of the lighting is painted on as bitmap information – except for the more dynamic moving objects, which are never able to reach quite the same level of verisimilitude as the ‘fixed’ items, because they have to be more reactive. Whatever room you’re in, and whatever doors you can see from there, there’s a good chance that the contents of the rooms beyond won’t be loaded into memory until you approach them. For what that means philosophically, we’ll be needing Jean-Paul Sartre – but in terms of logistics and processing, the history of VR 2.0 is going to be defined by new advances in legerdemain, optimisation and increasingly hardware-throttling task negotiation.
