New technology that seamlessly blends the digital with the real has grown into a global industry projected to be worth more than $220 billion by the end of the decade.

But the immersive world of augmented and mixed reality, or AR/MR for short, remains far from perfect.

The beauty of AR/MR is that it can make virtual objects and the real backgrounds they're set against feel like part of the same world.

But it's still extremely difficult for engineers to accurately model real-world properties like light and reflectance.


The conundrum has held AR/MR back from hitting its potential - and now a new million-dollar New Zealand study aims to solve the puzzle.

One of the primary goals in computer graphics is to generate photo-realistic rendered synthetic objects.

This combines an object's 3D geometry, its material properties and the light in a scene, processes them with a rendering algorithm, and produces a 2D image as output.

While the geometry defines the overall 3D shape of an object, its material properties define how the light reflects off the surface, allowing us to perceive the shape.

The final rendering process takes all of these elements and simulates complex light transport in a 3D scene to generate images.
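The pipeline described above can be sketched in miniature: one sphere (geometry), a Lambertian albedo (material) and a single directional light, combined into a 2D image. The function and parameter names here are illustrative, not from the study.

```python
import numpy as np

def render_sphere(size=64, light_dir=(0.5, 0.5, 0.7), albedo=0.8):
    """Toy renderer: geometry (a sphere) + material (Lambertian albedo)
    + light (one directional source) -> a 2D image."""
    light = np.array(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    ys, xs = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r2 = xs**2 + ys**2
    mask = r2 <= 1.0                              # pixels the sphere covers
    zs = np.sqrt(np.clip(1.0 - r2, 0.0, None))
    normals = np.dstack([xs, ys, zs])             # surface normal per pixel
    shading = np.clip(normals @ light, 0.0, None) # Lambert's cosine law
    return np.where(mask, albedo * shading, 0.0)

img = render_sphere()
```

Real renderers simulate many more effects (shadows, interreflections, glossy materials), but the structure — scene description in, simulated light transport, image out — is the same.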

This becomes even trickier when it comes to AR/MR, where rendered objects must be blended into actual backgrounds.

Current approaches, which often involve manual refinements by skilful artists, simply aren't good enough for the next-generation interactive experience that AR/MR promises viewers.

One technique, called inverse rendering, can automatically estimate light and reflectance from a photograph, but has significant limitations.
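Under strong simplifying assumptions - known geometry, a purely Lambertian surface, no shadows, all far simpler than the real problem - inverse rendering reduces to solving a linear system. A toy sketch, recovering a directional light from observed intensities by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known geometry: unit surface normals at sampled points.
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

true_light = np.array([0.3, 0.4, 0.866])           # hidden from the solver
observed = np.clip(normals @ true_light, 0.0, None)  # Lambertian intensities

# Inverse rendering: solve normals @ light ≈ observed for the light vector.
lit = observed > 0                                  # skip self-shadowed points
light_est, *_ = np.linalg.lstsq(normals[lit], observed[lit], rcond=None)
```

In this idealised setting the light is recovered exactly; real photographs break every one of the assumptions, which is why the problem remains hard.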


Engineers have struggled to do this effectively - and solving the problem at the real-time speeds the technology demands is another headache.

A team led by Associate Professor Taehyun Rhee, of Victoria University's Faculty of Engineering and Computer Science, aims to overcome the problem with a novel method.

Their study, supported by the Government's Endeavour Fund, involves recreating real-world lighting using what's called image-space analysis.

Rhee and his colleagues will draw on image-based lighting techniques already used in movie visual effects, while also building algorithms for big-data analysis and machine learning.
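Image-based lighting treats a captured panorama as the light source itself. A minimal sketch of the idea, using a hypothetical two-tone environment (bright sky, dim ground) and a Monte Carlo estimate of the diffuse light arriving at a surface point - not the project's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)

def env_radiance(directions):
    """Hypothetical environment map: bright 'sky' above, dim 'ground' below."""
    return np.where(directions[:, 2] > 0, 2.0, 0.1)

def diffuse_irradiance(normal, samples=20000):
    """Monte Carlo estimate of E = integral of L(w) * max(n.w, 0) dw."""
    d = rng.normal(size=(samples, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)   # uniform on the sphere
    cos_term = np.clip(d @ normal, 0.0, None)
    # Uniform-sphere pdf is 1/(4*pi), so the estimator is mean * 4*pi.
    return 4.0 * np.pi * np.mean(env_radiance(d) * cos_term)

up = np.array([0.0, 0.0, 1.0])
irradiance = diffuse_irradiance(up)   # analytically 2*pi for this map
```

In film production the environment map comes from a real captured panorama rather than a formula, which is what lets a rendered object pick up the lighting of the set around it.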

"With known geometry and a 2D image, reconstructing both the light and reflectance model is very challenging, since separating the influence of light and reflectance from the final illumination is difficult, due to reciprocity and dependency between the two," he explained.

"The aim of this project is to solve this challenging problem by developing a strategy to separate the influence of light and the reflectance model from the final illumination.


"The problem is challenging, but successful results will vastly improve the visual quality of visual effects, and immersive augmented and mixed reality."

The solution would be picked up by the university's programme for potential start-ups in the space, as well as Rhee's own start-up, DreamFlux, which recently teamed up with agency Wrestler for a virtual reality experience at Wellington Airport.

Its potential applications were vast, ranging from films and games to digital manufacturing and virtual prototyping.

Rhee saw it being used for a new wave of video conferencing, social networks and virtual tourism, or in architecture and biomedical research.

"Combined with immersive VR display, it will provide individual users with an immersive experience that transports them into a reconstructed remote world."