London, November 20 : Researchers at Carnegie Mellon University say they have developed a computational model that helps explain how the brain makes sense of the natural scenes surrounding an individual.
Michael S. Lewicki, associate professor in Carnegie Mellon's Computer Science Department and the Center for the Neural Basis of Cognition, notes that a type of visual neuron called simple cells can detect lines and edges, but that the computation these cells perform is insufficient to make sense of natural scenes.
Because variations in the foreground and background surfaces of a scene generally obscure its edges, he says, more sophisticated processing is necessary to understand the complete picture.
However, little is known about how the visual system accomplishes this feat, according to the researcher.
Lewicki and his graduate student Yan Karklin have built a new computational model of this visual processing. Its algorithm analyses the myriad patterns that compose natural scenes and statistically characterizes them to determine which patterns are most likely to be associated with each other.
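The article does not give the model's details, but the idea of statistically characterizing which patterns co-occur can be sketched in a toy form: take linear filter responses (stand-ins for simple cells) to image patches and correlate their magnitudes, so filters whose activity rises and falls together show up as associated. Everything here, including the random filters and synthetic patches, is an illustrative assumption, not the researchers' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for natural image patches: 1000 patches of 8x8 pixels.
# (A real experiment would sample patches from photographs of natural scenes.)
patches = rng.standard_normal((1000, 64))

# Hypothetical bank of linear filters, playing the role of simple-cell
# receptive fields; the actual model learns structure from data.
n_filters = 16
filters = rng.standard_normal((n_filters, 64))
filters /= np.linalg.norm(filters, axis=1, keepdims=True)

# Simple-cell-like linear responses of every filter to every patch.
responses = patches @ filters.T          # shape (1000, 16)

# Statistical characterization: correlate response *magnitudes*.
# Filters whose activity levels covary are "associated" -- the kind of
# higher-order regularity a model of later visual stages could exploit.
mags = np.abs(responses)
assoc = np.corrcoef(mags.T)              # (16, 16) association matrix

print(assoc.shape)
```

On truly random patches the off-diagonal associations are near zero; on patches from natural images, structured dependencies between filter magnitudes emerge, which is the statistical signal the passage describes.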
The researchers say that the responses of their model neurons to images used in physiological experiments match well with the responses of neurons in higher visual stages.
Though the "complex cells", so called for their more complex response properties, have been extensively studied, the role they play in visual processing has been elusive.
"We were astonished that the model reproduced so many of the properties of these cells just as a result of solving this computational problem," Nature magazine quoted Lewicki as saying.
The researchers believe that a deeper understanding of how the brain perceives the world may help improve computer vision systems.