London, July 9 (ANI): British scientists have developed a software program that enables computers to learn sign language by watching TV.
The program learns by absorbing TV shows that are both subtitled and signed.
Most shows in Britain are broadcast with subtitles and sign language, because signing is easier for many deaf viewers to follow.
Patrick Buehler and Andrew Zisserman at the University of Oxford, along with Mark Everingham at the University of Leeds, have come up with an algorithm to recognise the gestures made by the signer.
The software tracks the signer's arms to work out the rough location of the fast-moving hands, then identifies flesh-coloured pixels in those areas to recover the precise hand shape.
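The hand-localisation step described above relies on finding flesh-coloured pixels within a candidate region. As a rough illustration only, the sketch below applies a commonly cited rule-of-thumb RGB skin test; the thresholds and function name are assumptions for this example, not the values used by the Oxford/Leeds system.

```python
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Return a boolean mask of flesh-coloured pixels.

    `rgb` is an (H, W, 3) uint8 image. The thresholds are a widely
    used heuristic for skin detection in RGB space (illustrative
    values, not taken from the research described in the article).
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (
        (r > 95) & (g > 40) & (b > 20)   # bright enough in each channel
        & (r > g) & (r > b)              # red-dominant, typical of skin
        & (np.abs(r - g) > 15)           # rule out grey/neutral pixels
    )

# Toy two-pixel frame: one skin-like pixel, one dark background pixel.
frame = np.array([[[200, 140, 120], [30, 30, 30]]], dtype=np.uint8)
mask = skin_mask(frame)
# mask[0, 0] is True (skin-like); mask[0, 1] is False.
```

In practice such a mask would be computed only inside the regions near the tracked arms, which keeps false positives from faces and backgrounds manageable.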
After ensuring that the computer can identify different signs, the scientists exposed it to around 10 hours of TV footage.
The software was tasked with learning 210 words, of which it correctly learned 136 (an accuracy of roughly 65 per cent).
"Some words have different signs depending on the context - for example, cutting a tree has a different sign to cutting a rose," New Scientist quoted Everingham as saying.
In separate research, Helen Cooper and Richard Bowden at the University of Surrey, UK, used the same software in a different way to teach their own computer sign language.
"Our approach achieves higher accuracy levels with less data," Cooper says.
During the study, Cooper and Bowden had the software scan all the signs in a video sequence and identify those that appear frequently, and so are likely to represent common words.
The meaning of each of those signs is then determined by referring to the subtitles.
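The two-step idea described above, spotting frequently recurring signs and then attaching a meaning from the co-occurring subtitles, can be sketched on toy data. Everything here is an assumption for illustration: the sign tokens, subtitle words, and frequency threshold are invented, and the real system works on continuous video rather than pre-segmented tokens.

```python
from collections import Counter, defaultdict

# Hypothetical toy data: each entry pairs the sign tokens detected in a
# video segment with the words of that segment's subtitle.
segments = [
    (["SIGN_A", "SIGN_B"], ["tree", "grows"]),
    (["SIGN_A"], ["tree", "falls"]),
    (["SIGN_C", "SIGN_A"], ["old", "tree"]),
    (["SIGN_B"], ["rain", "grows"]),
]

# Step 1: find signs that recur often enough to likely be common words.
sign_counts = Counter(s for signs, _ in segments for s in signs)
frequent = {s for s, n in sign_counts.items() if n >= 2}

# Step 2: for each frequent sign, count co-occurring subtitle words and
# take the most frequent one as its probable meaning.
cooccur = defaultdict(Counter)
for signs, words in segments:
    for s in signs:
        if s in frequent:
            cooccur[s].update(words)

meaning = {s: c.most_common(1)[0][0] for s, c in cooccur.items()}
# SIGN_A co-occurs most often with "tree", SIGN_B with "grows";
# SIGN_C appears only once and is discarded as infrequent.
```

Because the subtitles do double duty as weak labels, this kind of approach needs no hand-annotated training data, which is what makes it scalable to many hours of broadcast footage.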
"That approach is very scalable - it can run quickly on large amounts of data," said Everingham.
However, he thinks that it leaves the software less able to differentiate between terms than using his team's more word-specific method. (ANI)