Linguistic and Assistive Technologies Laboratory (LATLab)
The ASL Motion-Capture Corpus is the result of a multi-year project to collect, annotate, and analyze motion-capture recordings of multi-sentential ASL discourse. We are now ready to release to the research community the first portion of the corpus that has been checked for quality. The corpus consists of unscripted, single-signer, multi-sentence ASL passages elicited through prompting strategies designed to encourage signers to use pronominal spatial reference while minimizing the use of classifier predicates. The annotation includes a gloss for each sign, an English translation of each passage, and details about the establishment and use of pronominal spatial reference points in space. Using this data, we are seeking to develop computational models of the referential use of signing space and of spatially inflected verb forms for use in American Sign Language (ASL) animations, which have accessibility applications for deaf users.
Please send email to matt at cs.qc.cuny.edu to inquire about accessing the corpus.
The corpus contains four types of files for each story that we have recorded.
This first release of the corpus consists of data collected from 3 signers, totaling 98 stories. Each story is generally 30 seconds to 4 minutes in length.
If you make use of this corpus, please cite the following publication:
Pengfei Lu and Matt Huenerfauth. 2012. "CUNY American Sign Language Motion-Capture Corpus: First Release." In Proceedings of the 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, at the 8th International Conference on Language Resources and Evaluation (LREC 2012), Istanbul, Turkey.
Excerpts of the data contained in the corpus may be available by request. Please send email to matt at cs.qc.cuny.edu to request access.
This material is based upon work supported in part by the National Science Foundation under award number 0746556.