TIIC2015 – Real-time American Sign Language Recognition Using Wrist-worn Inertial and Surface EMG Sensors


University: University of Texas at Dallas
Team Members: Jian Wu, Zhongjun Tian, Lu Sun 
TI Parts Used:

  • CC2538
  • CC2564
  • MSP430
  • ADS1299
  • LMP7721
  • BQ24073
  • TPS79133
  • TPS61252
  • TPS22901
  • TPS735285
  • REF3330

Project Description

A Sign Language Recognition (SLR) system enables communication between individuals with hearing disabilities and those who can hear and speak. With the prevalence of wearable computers, this technology is becoming an important human-computer interface capable of reading hand gestures and inferring the user's intent. In this project, we developed a real-time American Sign Language recognition system that fuses surface electromyography (sEMG) and a wrist-worn inertial sensor at the feature level. The inertial sensor captures hand orientation and hand and arm movement during a gesture, while sEMG captures finger movements and the muscle activity patterns of the hand and arm. The two modalities are therefore complementary, and fusing them improves recognition accuracy across different signs, making it easier to recognize a large vocabulary of signs.

We designed and developed two sensor platforms to capture and stream the motion data and sEMG signals. The first platform is a 9-axis motion sensor measuring 1”x1.5”, of which we designed two versions. Version 1 is called MotionStorage. In MotionStorage, an InvenSense MPU-9150 9-axis MEMS sensor measures 3-axis acceleration, 3-axis angular velocity, and 3-axis magnetic field strength. A Texas Instruments (TI) 16-bit low-power microcontroller, the MSP430F5528, serves as the central processor. A dual-mode Bluetooth module from BlueRadios, built around the TI CC2564, provides the wireless link, and a microSD card slot is also available on the board, so the user can either stream the data to a PC/tablet for real-time processing or log it to the microSD card. The board also carries a voltage regulator (TI TPS79133), a charging circuit (TI BQ24073), and load switches (TI TPS22901). To get more processing power on the sensor itself, we designed version 2 of the motion sensor, called MotionNet. The major difference is that the MSP430 is replaced with a TI CC2538, which integrates an ARM Cortex-M3 processor and an IEEE 802.15.4 radio, enabling IP-based low-power wireless for the IoT (6LoWPAN).
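As an illustration of the streaming path, a host-side Java reader might decode one 9-axis sample as sketched below. This is not our shipped host code: the 18-byte packet layout (nine little-endian int16 words) is a hypothetical framing chosen for illustration, and the scale factors are the MPU-9150 datasheet defaults for the +/-2 g and +/-250 deg/s full-scale ranges.

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    /** Decodes one 9-axis sample from the Bluetooth serial byte stream. */
    public final class MotionSample {
        // MPU-9150 sensitivities at +/-2 g and +/-250 deg/s full scale (datasheet defaults).
        private static final double ACCEL_LSB_PER_G  = 16384.0;
        private static final double GYRO_LSB_PER_DPS = 131.0;

        public final double[] accelG  = new double[3]; // acceleration in g
        public final double[] gyroDps = new double[3]; // angular rate in deg/s
        public final int[]    magRaw  = new int[3];    // raw magnetometer counts

        /** Assumed packet layout: 9 little-endian int16 words (ax..az, gx..gz, mx..mz). */
        public static MotionSample read(InputStream in) throws IOException {
            DataInputStream din = new DataInputStream(in);
            MotionSample s = new MotionSample();
            for (int i = 0; i < 3; i++) s.accelG[i]  = readInt16LE(din) / ACCEL_LSB_PER_G;
            for (int i = 0; i < 3; i++) s.gyroDps[i] = readInt16LE(din) / GYRO_LSB_PER_DPS;
            for (int i = 0; i < 3; i++) s.magRaw[i]  = readInt16LE(din);
            return s;
        }

        private static short readInt16LE(DataInputStream din) throws IOException {
            int lo = din.readUnsignedByte();
            int hi = din.readUnsignedByte();
            return (short) ((hi << 8) | lo);
        }
    }

In practice such framing also needs a sync header and checksum so the reader can resynchronize after a dropped byte.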

The second platform is the sEMG signal acquisition system. A TI ADS1299 analog front end captures four channels of sEMG, and a TI MSP430 microcontroller collects the data and forwards it to the Bluetooth module, a BlueRadios BR-LE4.0-D2A built around the TI CC2564. For power management, the board carries a voltage booster (TI TPS61252), an LDO (TPS735285), and a voltage reference (REF3330).
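Each ADS1299 conversion result is a 24-bit two's-complement code. As a minimal sketch, assuming the internal 4.5 V reference and a PGA gain of 24 (the actual register settings on our board may differ), the code-to-microvolts conversion on the host could look like:

    /** Converts a raw 24-bit two's-complement ADS1299 code to microvolts. */
    public final class Ads1299 {
        private static final double VREF_VOLTS = 4.5; // internal reference (assumed setting)
        private static final double PGA_GAIN   = 24;  // assumed PGA gain for sEMG

        /** b0 is the MSB of the 24-bit sample. */
        public static double toMicrovolts(byte b0, byte b1, byte b2) {
            int code = ((b0 & 0xFF) << 16) | ((b1 & 0xFF) << 8) | (b2 & 0xFF);
            if ((code & 0x800000) != 0) code -= 0x1000000; // sign-extend the 24-bit value
            double volts = code * VREF_VOLTS / (PGA_GAIN * ((1 << 23) - 1));
            return volts * 1e6;
        }
    }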

For sign language recognition, the sensor systems introduced above capture the motion and sEMG signals and stream the data to a PC via Bluetooth/802.15.4. The real-time signal processing and machine learning are implemented in Java on a PC and can be easily ported to an Android-based phone. For recognition, we created an automatic adaptive segmentation technique that determines when a gesture is being performed. We studied 76 features from the sEMG signals and 192 features from the inertial sensor, and selected the best feature subset with a feature ranking algorithm. Four classification algorithms were evaluated: decision tree, nearest neighbor, naïve Bayes, and support vector machine. The experimental results show that, after feature selection and conditioning, our system achieves a 95.94% recognition rate. The results also show that fusing the two modalities performs better than using the inertial sensor alone. We further observed that a single sEMG channel (out of four), located on the wrist where a wrist-watch sits, is sufficient.
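Our exact segmentation algorithm is described in the paper; as a hedged sketch of the general idea only, the Java fragment below finds gesture boundaries by thresholding a moving-average energy envelope with hysteresis, and includes the mean absolute value, a standard sEMG time-domain feature. The window length and thresholds here are placeholders, not our tuned values.

    import java.util.ArrayList;
    import java.util.List;

    /** Energy-threshold segmentation over a sliding window of sample magnitudes. */
    public final class Segmenter {
        private final int    window;    // smoothing window length in samples
        private final double onThresh;  // energy level that opens a segment
        private final double offThresh; // lower level that closes it (hysteresis)

        public Segmenter(int window, double onThresh, double offThresh) {
            this.window = window;
            this.onThresh = onThresh;
            this.offThresh = offThresh;
        }

        /** Returns [start, end) index pairs of detected gesture segments. */
        public List<int[]> segment(double[] magnitude) {
            List<int[]> segments = new ArrayList<>();
            double sum = 0;
            boolean active = false;
            int start = 0;
            for (int i = 0; i < magnitude.length; i++) {
                sum += magnitude[i];
                if (i >= window) sum -= magnitude[i - window]; // keep a windowed running sum
                double energy = sum / Math.min(i + 1, window); // moving-average envelope
                if (!active && energy > onThresh) {
                    active = true;
                    start = i;
                } else if (active && energy < offThresh) {
                    active = false;
                    segments.add(new int[]{start, i});
                }
            }
            if (active) segments.add(new int[]{start, magnitude.length});
            return segments;
        }

        /** Mean absolute value over a segment, a standard sEMG time-domain feature. */
        public static double meanAbs(double[] x, int from, int to) {
            double s = 0;
            for (int i = from; i < to; i++) s += Math.abs(x[i]);
            return s / (to - from);
        }
    }

Estimating the thresholds from a per-user baseline recorded during an initial rest period is one way to make such segmentation adaptive.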

Our paper about this project was accepted for oral presentation at the 12th International Conference on Wearable and Implantable Body Sensor Networks (BSN 2015 at MIT), which had a competitive 20% acceptance rate. You can find more details about the project in the paper, which is included in the resource folder.

Features

We designed and developed two different sensor platforms (an inertial sensor with accelerometer and gyroscope, and an sEMG data acquisition sensor), and most of the components are from TI. The hardware, firmware, and application software were all developed by our group. Advanced signal processing and machine learning algorithms were analyzed and applied to achieve good system performance.

This project is unique in that it combines modalities of different types (inertial and muscle activity) to detect gestures in real time. The fusion happens at the feature level, with classifiers trained on the combined feature vectors of the two signal modalities. Although there is quite a bit of work in the wearable computing community, most existing efforts focus on signal processing for homogeneous sensors; when heterogeneous sensors are present, they are typically fused after the classification stage rather than at the feature level.
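Feature-level fusion means that each gesture segment is represented by one joint vector, formed by concatenating the inertial and sEMG feature vectors before a single classifier is trained, in contrast to decision-level fusion, which would combine the outputs of two separately trained classifiers. A minimal Java illustration:

    /** Feature-level fusion: one joint vector per segment, fed to a single classifier. */
    public final class FeatureFusion {
        /** Concatenates the inertial and sEMG feature vectors for one gesture segment. */
        public static double[] fuse(double[] inertialFeatures, double[] semgFeatures) {
            double[] joint = new double[inertialFeatures.length + semgFeatures.length];
            System.arraycopy(inertialFeatures, 0, joint, 0, inertialFeatures.length);
            System.arraycopy(semgFeatures, 0, joint, inertialFeatures.length, semgFeatures.length);
            return joint; // the classifier (e.g., SVM) is trained on these joint vectors
        }
    }

The design point is that the classifier can exploit cross-modality correlations in the joint feature space, which decision-level fusion discards.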

The other unique feature of this project is our observation that a single sEMG sensor on the wrist, where watches are typically worn, is sufficient. This could enable a next generation of wrist-watches with sEMG sensors capable of detecting hand gestures and sign language.

At the system level, this is a first-of-its-kind American Sign Language recognition system leveraging both motion and sEMG signals.

Our system is wireless: the sEMG signals are streamed to a PC/tablet via Bluetooth, while the motion signals are streamed via Bluetooth or 802.15.4.

Our system does not include energy harvesting at this stage, because its power budget exceeds what harvesting modules can currently supply.

Resources

All project resources (schematics, BOM, firmware, software, and the published research paper) are uploaded to this project repo.

User's Guide

The 'Readme' files in the resource folder provide the instructions.