Washington, Aug 14 : Soon, you could see animated versions of your favourite actors wandering through 3D virtual worlds with hair that looks almost exactly as it does in real life, thanks to a new method that accurately captures the shape and appearance of a person's hairstyle for use in animated films and video games.
Researchers at UC San Diego's Jacobs School of Engineering, Adobe Systems Incorporated (Nasdaq: ADBE) and the Massachusetts Institute of Technology have presented their new method at ACM SIGGRAPH, one of the most competitive computer graphics conferences in the world.
"We want to give movie and video game makers the tools necessary to animate actors and have their hair look and behave as it would in the real world," said UC San Diego computer science professor Matthias Zwicker.
The researchers captured the shape and appearance of hairstyles of real people using multiple cameras, light sources and projectors.
Then, they created algorithms to automatically "fill in the blanks" and generate photo-realistic images of the hairstyles from new angles and new lighting situations.
Adobe researcher and study author Sylvain Paris said that replicating hairstyles from every possible angle, and then getting individual strands of hair to realistically shine in the sun and blow in the wind, would be extremely difficult and time-consuming for digital artists to do manually.
[Image: Side-by-side comparisons of computer-generated hairstyles and actual photographs of the same hairstyle. For a fair comparison, the reference photographs were removed from the data set used by the authors' image-based rendering method.]

Zwicker said that the makers of the movie The Matrix used digital face replacement to generate realistic images of human faces even though they had no photographs from those angles.
"Our graphics group at UC San Diego helped to create computer graphics algorithms that do the same thing for hairstyles. If you had an infinite number of cameras and light sources, there would be no angles, views or shots that would need to be computer generated. But this is totally impractical," said Zwicker.
Instead, for each of the hairstyles that received "The Matrix treatment," the researchers captured about 2,500 real-world images using 16 cameras, 150 light sources and three projectors arranged in a dome setup.
With all this data, the researchers determined the physical position and orientation of all visible strands of hair.
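Recovering strand orientation from photographs is commonly done by analysing local image gradients. The toy sketch below, which is only an illustration and not the researchers' actual pipeline, estimates the dominant strand direction in a small grayscale patch using a structure tensor:

```python
import numpy as np

def strand_orientation(patch):
    """Estimate the dominant 2D orientation of hair strands in a small
    grayscale image patch from its intensity gradients.
    (A structure-tensor sketch; the researchers' method is far more
    elaborate and works in 3D.)
    Returns an angle in radians, in [0, pi), where 0 means horizontal."""
    gy, gx = np.gradient(patch.astype(float))
    # Structure tensor entries, summed over the patch.
    jxx = np.sum(gx * gx)
    jyy = np.sum(gy * gy)
    jxy = np.sum(gx * gy)
    # Dominant gradient direction; strands run perpendicular to it.
    angle = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    return (angle + 0.5 * np.pi) % np.pi

# Horizontal stripes stand in for horizontal strands of hair.
patch = np.tile(np.array([[0.0], [1.0]]), (4, 8))
print(strand_orientation(patch))  # close to 0 (horizontal)
```

With many calibrated views, per-pixel orientations like this can be combined across cameras to place strands in 3D, which is the spirit of the reconstruction step described here.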
The algorithms then generate complex hair models, producing on the order of 100,000 hair strands.
From there, the researchers found a new way to precisely simulate how light reflects off each strand of hair.
The result is the ability to create photo-realistic images of the hairstyle from any angle.
The automated system even creates realistic highlights. This process of creating new images based on data from related images is called interpolation.
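In its simplest form, interpolation blends nearby captured images to approximate an unseen one. The toy example below, which is only a stand-in for the authors' far more sophisticated image-based rendering, linearly blends two photographs taken under neighbouring conditions:

```python
import numpy as np

def interpolate_images(img_a, img_b, t):
    """Linearly blend two captured images: t=0 returns img_a, t=1 returns
    img_b, and values in between approximate an intermediate view or
    lighting condition. (A toy sketch, not the researchers' algorithm.)"""
    return (1.0 - t) * img_a + t * img_b

# Two tiny stand-in "photographs" (grayscale, values in [0, 1]).
img_a = np.zeros((4, 4))
img_b = np.ones((4, 4))
halfway = interpolate_images(img_a, img_b, 0.5)
print(halfway[0, 0])  # 0.5
```

Real systems weight many captured images at once and account for geometry, but the principle of synthesising new data from related measurements is the same.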