Washington, November 16 : A software programme developed by Stanford artificial intelligence researchers makes it easy to edit videos and add items to them with such accuracy that they seem to have been part of the footage from the beginning.
The researchers say that their software does not paste a picture on top of the existing video, but embeds it in the footage. It even allows a user to play a video on a wall inside another video.
According to them, the software can reduce the cost of performing some of the tricks that currently require expensive commercial editing systems.
The researchers - computer science graduate students Ashutosh Saxena and Siddharth Batra, and Assistant Professor Andrew Ng - see interesting potential for the technology they call ZunaVision.
They claim that their software makes it possible to plunk an image onto almost any planar surface in a video, whether wall, floor or ceiling. Not only still pictures but videos too can be embedded this way, they add.
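The article does not describe ZunaVision's actual algorithm, but placing an image on a planar surface in a frame is conventionally done with a homography (a 3x3 perspective transform). A minimal, illustrative sketch under that assumption, with all function names hypothetical:

```python
import numpy as np

def homography(src_pts, dst_pts):
    """Solve for the 3x3 homography H mapping four src points to four dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)  # fix h9 = 1

def embed(frame, photo, corners):
    """Composite `photo` onto `frame` inside the quadrilateral `corners`
    (four (x, y) points, clockwise from top-left), via inverse warping."""
    h, w = photo.shape[:2]
    src = [(0, 0), (w - 1, 0), (w - 1, h - 1), (0, h - 1)]
    H_inv = np.linalg.inv(homography(src, corners))
    out = frame.copy()
    for y in range(frame.shape[0]):
        for x in range(frame.shape[1]):
            p = H_inv @ np.array([x, y, 1.0])     # map frame pixel back
            sx, sy = p[0] / p[2], p[1] / p[2]     # into photo coordinates
            if 0 <= sx <= w - 1 and 0 <= sy <= h - 1:
                out[y, x] = photo[int(round(sy)), int(round(sx))]
    return out
```

A production system would interpolate rather than round, and track the corner points from frame to frame; this only shows the single-frame geometry.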
The software handles most occluding objects by keeping track of which pixels belong to the embedded photo and which belong to, say, a person walking in the foreground. The photo disappears behind the person and then reappears, just as it would in the original video.
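The occlusion behaviour described above can be sketched with a simple per-pixel test: where the observed frame deviates sharply from the modelled wall appearance, an object is assumed to be in front, and the original pixels are kept. This is a hypothetical illustration, not ZunaVision's published method; the names and the fixed threshold are assumptions.

```python
import numpy as np

def composite_with_occlusion(observed, expected_wall, photo, threshold=30.0):
    """Embed `photo` into the tracked region, but keep original pixels wherever
    the observed patch differs strongly from the modelled wall appearance
    (i.e. something, such as a passer-by, is occluding the surface).
    All three arrays are same-sized H x W x 3 patches."""
    diff = np.linalg.norm(observed.astype(float) - expected_wall.astype(float),
                          axis=-1)
    occluded = diff > threshold  # True where the wall is blocked
    out = np.where(occluded[..., None], observed, photo)
    return out.astype(observed.dtype), occluded
```

As the occluder moves on, the deviation drops below the threshold and the photo "reappears", matching the behaviour the researchers describe.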
The software also copes with camera motion, which causes the portion of the wall containing the embedded object to move and change shape.
The software does so by building a model, pixel by pixel, of the area of interest in the video.
"If the lighting begins to change with the motion of the video or the sun or the shadows, we keep a belief of what it will look like in the next frame. This is how we track with very high sub-pixel accuracy," Batra said.
The researchers have posted a demonstration of the technology at http://zunavision.stanford.edu.