Research Projects

Learning ASL through Real-Time Practice

Enabling students learning American Sign Language (ASL) to practice independently through a tool that provides feedback automatically...

Facial Expression for Animations of ASL

Producing linguistically accurate facial expressions for animations of ASL to make them more understandable and natural...

Generating ASL Animation from Motion-Capture Data

Collecting a motion-capture corpus of native ASL signers and modeling this data to produce linguistically accurate animations...

ASL Animation Tools & Technologies

Developing technologies to automate the process of synthesizing animations of a virtual human character performing American Sign Language...

Predicting English Text Readability for Users

Analyzing English text automatically using computational linguistic tools to identify the difficulty level of the content for users...

Eye-Tracking to Predict User Performance

Analyzing eye-movement behaviors to automatically predict when a user is struggling to understand information content...

Learning ASL through Real-Time Practice

We are investigating new video and motion-capture technologies to enable students learning American Sign Language (ASL) to practice their signing independently through a tool that provides feedback automatically.
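
As one illustration of how such automatic feedback might work (a minimal sketch under assumed conditions, not the project's actual system), a tool could compare a student's recorded pose sequence against a reference recording using dynamic time warping, which tolerates differences in signing speed. Every name, data shape, and threshold below is hypothetical:

    import numpy as np

    def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Dynamic-time-warping distance between two pose sequences
        (frames x features); tolerant of differences in signing speed."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m] / (n + m)

    # Synthetic example: a reference signer's trajectories vs. a student's
    # faster, noisier attempt (60 vs. 30 frames of 6 joint coordinates).
    rng = np.random.default_rng(0)
    reference = rng.normal(size=(60, 6))
    student = reference[::2] + rng.normal(scale=0.1, size=(30, 6))

    score = dtw_distance(student, reference)
    print("needs practice" if score > 0.5 else "good match")  # threshold is illustrative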

Facial Expression for Animations of ASL

We are investigating techniques for producing linguistically accurate facial expressions for animations of American Sign Language; this would make these animations easier to understand and more effective at conveying information -- thereby improving the accessibility of online information for people who are deaf.


This project is joint work with researchers at Boston University and Rutgers University.

Generating ASL Animation from Motion-Capture Data

This project is investigating techniques for making use of motion-capture data collected from native ASL signers to produce linguistically accurate animations of American Sign Language. In particular, this project is focused on the use of space for pronominal reference and verb inflection/agreement.
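
To make the "use of space" concrete: in ASL, referents can be assigned points (loci) in the signing space, and some verbs inflect by moving between those points. One simple way to model this, sketched below purely for illustration (the geometry and names are assumptions, not the project's actual technique), is to warp a verb's citation-form trajectory so that it begins near the subject's locus and ends near the object's locus:

    import numpy as np

    def inflect_verb_path(neutral_path, subject_locus, object_locus):
        """Warp a citation-form hand trajectory so it starts near the
        subject's locus and ends near the object's locus in signing space."""
        neutral_path = np.asarray(neutral_path, dtype=float)
        start, end = neutral_path[0], neutral_path[-1]
        t = np.linspace(0.0, 1.0, len(neutral_path))[:, None]
        # Blend linearly from the subject-side correction to the object-side one.
        offset = (1 - t) * (subject_locus - start) + t * (object_locus - end)
        return neutral_path + offset

    # Loci assigned earlier in the discourse to two referents (x, y, z):
    locus_mary = np.array([-0.30, 1.20, 0.40])
    locus_john = np.array([0.35, 1.25, 0.45])
    citation_give = np.linspace([0.0, 1.10, 0.20], [0.0, 1.10, 0.60], 20)
    inflected = inflect_verb_path(citation_give, locus_mary, locus_john)  # "MARY GIVE JOHN"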

This project also supported a summer research internship program for ASL-signing high school students, and REU supplements from the NSF have supported research experiences for visiting undergraduate students.


Data & Corpora

The motion-capture corpus of American Sign Language collected during this project is available for non-commercial use by the research community.

ASL Animation Tools & Technologies

The goal of this research is to develop technologies to generate animations of a virtual human character performing American Sign Language.

Support from these funding sources has underwritten the animation programming platforms that underlie the research systems being developed and evaluated at the laboratory.

In current work, we are investigating how to create tools that enable researchers to build dictionaries of animations of individual signs and to efficiently assemble them to produce sentences and longer passages.
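
A minimal sketch of the kind of tool this describes (with illustrative assumptions only, not the lab's actual software): a dictionary mapping each sign's gloss to a pose sequence, plus an assembly step that concatenates signs and interpolates short transitions between them:

    import numpy as np

    class SignDictionary:
        """Toy lexicon: gloss -> pose sequence (frames x features)."""
        def __init__(self):
            self._signs = {}

        def add(self, gloss, poses):
            self._signs[gloss] = np.asarray(poses, dtype=float)

        def assemble(self, glosses, transition_frames=5):
            """Concatenate sign animations, inserting a short linear
            interpolation between the end of one sign and the start of the next."""
            segments = []
            for i, gloss in enumerate(glosses):
                sign = self._signs[gloss]
                if i > 0:
                    prev_end = segments[-1][-1]
                    t = np.linspace(0.0, 1.0, transition_frames)[:, None]
                    segments.append((1 - t) * prev_end + t * sign[0])
                segments.append(sign)
            return np.vstack(segments)

    lexicon = SignDictionary()
    lexicon.add("STUDENT", np.random.rand(30, 12))  # placeholder pose data
    lexicon.add("LEARN", np.random.rand(25, 12))
    animation = lexicon.assemble(["STUDENT", "LEARN"])  # 30 + 5 + 25 frames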

Predicting English Text Readability for Users

This project has investigated the use of computational linguistic technologies to identify whether textual information would meet the special needs of users with specific literacy impairments.

In research conducted prior to 2012, we investigated text-analysis tools for adults with intellectual disabilities, developing a state-of-the-art predictive model of readability based on discourse, syntactic, semantic, and other linguistic features.
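
In outline, this kind of model pairs feature extraction with supervised learning. The sketch below is a simplified stand-in, using shallow surface features and toy data in place of the richer discourse, syntactic, and semantic features described above:

    import re
    from sklearn.linear_model import Ridge

    def surface_features(text):
        """Shallow stand-ins for richer linguistic readability features."""
        words = re.findall(r"[A-Za-z']+", text)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        avg_sentence_len = len(words) / max(len(sentences), 1)
        avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
        type_token_ratio = len({w.lower() for w in words}) / max(len(words), 1)
        return [avg_sentence_len, avg_word_len, type_token_ratio]

    # Toy training data: (text, human-rated difficulty); real corpora are far larger.
    corpus = [
        ("The cat sat on the mat. The dog ran.", 1.0),
        ("Notwithstanding prior stipulations, the aforementioned parties concur.", 4.5),
    ]
    X = [surface_features(text) for text, _ in corpus]
    y = [rating for _, rating in corpus]
    model = Ridge().fit(X, y)
    print(model.predict([surface_features("A short new text to assess.")]))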

In current work, we are investigating technologies for a wider variety of users.

Eye-Tracking to Predict User Performance

Computer users may benefit from user-interfaces that can predict whether the user is struggling with a task, based on an analysis of the user's eye-movement behaviors. This project is investigating how to conduct precise experiments for measuring eye movements and user task performance -- relationships between these variables can be examined using machine learning techniques in order to produce predictive models for adaptive user-interfaces.
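
A minimal sketch of that machine-learning step, using invented fixation data and simple summary features (the actual experiments, features, and models may differ):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def gaze_features(fixations):
        """fixations: rows of (x, y, duration_ms); returns summary statistics
        commonly used in gaze analysis."""
        durations = fixations[:, 2]
        scanpath = np.linalg.norm(np.diff(fixations[:, :2], axis=0), axis=1).sum()
        return np.array([len(fixations), durations.mean(), durations.sum(), scanpath])

    # Invented trials: 40 fixations each on a 1024x768 screen, 80-600 ms long.
    rng = np.random.default_rng(1)
    trials = [rng.uniform([0, 0, 80], [1024, 768, 600], size=(40, 3)) for _ in range(20)]
    struggled = rng.integers(0, 2, size=20)  # synthetic labels: 1 = user struggled
    X = np.array([gaze_features(t) for t in trials])
    clf = RandomForestClassifier(random_state=0).fit(X, struggled)
    print(clf.predict(X[:3]))  # predicted struggle for the first three trials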

An important branch of this research has investigated whether eye-tracking technology can serve as a complementary or alternative method of evaluating animations of sign language, by examining the eye movements of native signers who view these animations to detect when they may be more difficult to understand.

Want to participate?