Sign Language Recognition and Automatic Annotation
- Date: Wednesday 23 January 2019
- Location: Johan Huizinga, Doelensteeg 16, 2311 VL Leiden
- Room: Conference Room
In recent years, corpus projects documenting sign languages have started all over the world. Between 2007 and 2014, four large video corpora of West African sign languages were compiled at Leiden University. These corpora contain over 120 hours of video along with their annotations. During the annotation process, the researcher has to determine the precise time span in which a sign occurs and gloss it properly. This makes annotation extremely labor intensive, yet it is a prerequisite for any reliable quantitative analysis of the sign language corpora.
The aim of this project is to develop a tool that automatically annotates the signs and their phonological features in a video. The first step towards automatic annotation is recognizing the exact time frame in which a sign occurs. To remove redundant information from the raw video, a pose estimation framework (OpenPose) was used. The extracted hand locations were then used to train and test four different classifiers. The result is a tool that uses XGBoost to accurately predict the span of a sign and automatically create the annotation.
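The last step described above, turning per-frame classifier decisions into time-spanned annotations, can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes the classifier emits one sign/no-sign prediction per video frame, and the function name, frame rate, and minimum-length threshold are illustrative choices.

```python
def frames_to_spans(predictions, fps=25.0, min_frames=3):
    """Group consecutive positive per-frame predictions into annotation spans.

    predictions: sequence of 0/1 frame labels (1 = a sign is being produced).
    fps: video frame rate used to convert frame indices to seconds (assumed).
    min_frames: runs shorter than this are discarded as noise (assumed).
    Returns a list of (start_seconds, end_seconds) tuples.
    """
    spans = []
    start = None  # frame index where the current positive run began
    for i, p in enumerate(predictions):
        if p and start is None:
            start = i                      # a positive run begins
        elif not p and start is not None:
            if i - start >= min_frames:    # keep only runs long enough
                spans.append((start / fps, i / fps))
            start = None                   # the run has ended
    # close a run that extends to the end of the video
    if start is not None and len(predictions) - start >= min_frames:
        spans.append((start / fps, len(predictions) / fps))
    return spans


# Example: frames 1-3 positive at 25 fps -> one span from 0.04 s to 0.16 s
print(frames_to_spans([0, 1, 1, 1, 0, 0, 1, 0]))
```

Each resulting span could then be written out as one annotation (e.g. an ELAN tier entry) carrying the predicted gloss for that interval.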
