Jerusalem, August 16 (ANI): Researchers at Ben-Gurion University of the Negev (BGU) in Israel are set to use new computer algorithms in a project to analyze historic Hebrew and Arabic documents.
The effort to develop new computer algorithms will help provide scholars with valuable answers regarding Jewish liturgical texts and Arabic historical texts.
The technical goal of the research is to develop new state-of-the-art algorithms for analyzing text and to combine them into an easy-to-operate, open-source system of tools to aid historical document research throughout the world.
Experiments are being conducted on degraded documents from sources such as the Cairo Geniza, copies of which are located at the national liturgy project at BGU, the El-Aqsa manuscript library in Jerusalem and the Al-Azar manuscript library in Cairo.
Most fragments that have been discovered at the Geniza are now in libraries at Cambridge and Oxford universities, the Jewish Theological Seminary in New York, The British Library and in Israel and Paris.
Until now, the documents have not been researched systematically.
According to Professor Uri Ehrlich of the Goldstein-Goren Department of Jewish Thought, "There was one book that was originally used as a Hebrew prayer book from the 12th century, but had been scratched off, and the parchment used to write an Arabic text (called a palimpsest). Our aim was to read the first book and not the second book. So, we needed to find out how the Arabic book could disappear and leave only the Hebrew letters of the original book. This is why the computer sciences and humanities departments at BGU decided to collaborate," he said.

"To solve the problem, we created an algorithm to cover the text in a dark grey color, which then highlights lighter colored pixels as background space and identifies the darker pixels as outlining the original Hebrew lettering," said Professor Klara Kedem of the Department of Computer Sciences, one of the system's creators.
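The pixel classification Kedem describes, in which lighter pixels are treated as background and darker pixels as lettering, is essentially a thresholding (binarization) step. The article gives no implementation details, so the following Python sketch is purely illustrative: the function name, the fixed threshold, and the toy image are all assumptions, not the BGU system's actual method.

```python
# Illustrative sketch of thresholding a grayscale page image.
# Pixel values run from 0 (black) to 255 (white); the threshold
# of 128 is an arbitrary assumption for demonstration.

def binarize(gray_pixels, threshold=128):
    """Mark pixels darker than the threshold as lettering (1)
    and lighter pixels as background (0)."""
    return [
        [1 if value < threshold else 0 for value in row]
        for row in gray_pixels
    ]

# Tiny 3x3 example: a dark diagonal stroke on a light background.
page = [
    [ 30, 200, 210],
    [190,  40, 220],
    [205, 215,  35],
]
mask = binarize(page)  # the dark diagonal is classified as lettering
```

In practice, a fixed global threshold fails on stained or faded parchment, which is why research systems typically compute the threshold adaptively per region rather than hard-coding one.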
Many of the new methods will apply to other languages as well, including binarization of highly degraded documents, segmentation of skewed and curved lines, and word spotting in both curved and highly degraded documents.
Other algorithms will be more language-specific, such as paleographic analysis of Hebrew and Arabic historical documents, which will include automatic indexing of document collections and determining the authorship, location and date of the documents. (ANI)