30-08-2014, 10:22 AM
Wireless 3D Gesture and Character Recognition System
Wireless 3D Gesture.doc (Size: 698 KB / Downloads: 25)
Abstract
This paper presents a novel implementation of a 3D gesture and character recognition system. The implementation consists of two wireless modules: a gesture transmitter and a gesture receiver. The main criteria for choosing the transmitter's building blocks are very low power consumption and a small form factor, so that the module can be powered from a button cell. The gesture data is captured from a MEMS motion sensor, in the form of acceleration along the X, Y, and Z axes, by an MCU and transmitted wirelessly to the receiver module using a 2.4 GHz RF transceiver. The receiver module is connected to a host PC; it preprocesses the motion data and provides it to host applications for decoding and use. The 3D gesture recognition system can be used for a host of applications, including gesture-based user interfaces, character recognition systems, human interface devices such as wireless mice, keypads, and joysticks, and virtual reality in 3-D gaming.
INTRODUCTION
As computers become an integral part of everyday life, increasing effort is being spent on developing multimodal human-computer interfaces that would replace the traditional keyboard and mouse with more natural input modalities for a variety of applications such as video games and virtual reality. Touch screens were the most recent advance in input methods, and technology has now taken a further step.
The result of this new technology is that input is provided through gestures: simply by moving the hands while sitting in front of the computer or television, commands are forwarded, so no remote control is needed to operate the channels.
MEMS Motion Sensor
The MEMS motion sensor is an ultra-compact, low-power, digital-output, 3-axis linear accelerometer consisting of a sensing element and an I2C/SPI serial interface that provides access to the external world. A proprietary process is used to create a surface-micromachined accelerometer. The technology allows the creation of suspended silicon structures that are attached to the substrate at a few points, called anchors, and are free to move in the direction of the sensed acceleration.
When acceleration is applied to the sensor, the proof mass displaces from its nominal position, causing an imbalance in the capacitive bridge. This imbalance is measured using charge integration in response to a voltage pulse applied to the sense capacitor.
The complete measurement chain consists of a low noise capacitive amplifier stage followed by an analog to digital converter.
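The capacitive sensing principle above can be illustrated with a toy numerical model. The sketch below assumes an idealized linear relation between bridge imbalance and acceleration; the sensitivity constant is purely illustrative and is not a LIS302DL parameter.

```python
# Toy model of the capacitive sensing chain: in the idealized case the
# bridge imbalance (C+ - C-) is proportional to the proof-mass
# displacement, and hence to the applied acceleration.
# The constant below is an assumption for illustration only.
SENSITIVITY_F_PER_G = 2e-15  # assumed capacitance change (farads) per g


def accel_from_capacitance(c_plus, c_minus):
    """Infer acceleration (in g) from the differential capacitive bridge."""
    delta_c = c_plus - c_minus
    return delta_c / SENSITIVITY_F_PER_G
```

In the real part, this conversion happens inside the sensor's measurement chain (charge amplifier plus ADC); the model only conveys the proportionality.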
Typically, the IC LIS302DL can be used as the motion sensor for this implementation.
The RF Transceiver
An RF transceiver transmits and receives raw data and manages the over-the-air protocol needed for successful data communication. This implementation uses the IC CC2500 as the RF transceiver. The CC2500 is a low-cost, single-chip 2.4 GHz transceiver designed for very low power wireless applications. The circuit is intended for the ISM (Industrial, Scientific and Medical) and SRD (Short Range Device) frequency band at 2400–2483.5 MHz.
The RF transceiver can be configured to achieve optimum performance for many different applications. Configuration is done over the SPI interface. The following key parameters can be programmed:
• Power down / power up mode
• Crystal oscillator power up / power down
• Receive / Transmit mode
• RF channel selection
• Data rate
• Modulation format
• RX channel filter bandwidth
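These parameters are programmed by writing the CC2500's configuration registers over SPI. As a rough illustration, the sketch below packs single and burst register writes into SPI byte frames following the header-byte layout described in the CC2500 data sheet (bit 7 = read flag, bit 6 = burst flag, bits 5:0 = address); the CHANNR register address (0x0A) used in the example should be verified against the data sheet.

```python
# Header-byte flags for CC2500 SPI access (per the data sheet layout:
# bit 7 = read, bit 6 = burst, bits 5:0 = register address).
READ_FLAG = 0x80
BURST_FLAG = 0x40


def write_register(addr, value):
    """Bytes for a single-register write (sent over SPI, address first)."""
    return bytes([addr & 0x3F, value & 0xFF])


def burst_write(start_addr, values):
    """Bytes for a burst write to consecutive configuration registers."""
    header = (start_addr & 0x3F) | BURST_FLAG
    return bytes([header]) + bytes(v & 0xFF for v in values)


# Example: select RF channel 5 via the CHANNR register (address 0x0A,
# assumed here from the data sheet register map).
frame = write_register(0x0A, 5)
```

On the actual hardware these byte frames would be shifted out by the MCU's SPI peripheral; the sketch only shows how the transactions are composed.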
The MCU
The MCU MSP430F2274 serves this purpose. The MSP430F2274 is an ultra-low-power MCU with several peripherals targeted at various applications. Its architecture, combined with five low-power modes (LPM0–LPM4), is optimized to achieve extended battery life. The device features a powerful 16-bit RISC CPU, 16-bit registers, and constant generators that contribute to maximum code efficiency.
Block diagram of 3D gesture transmitter module
A universal serial communication interface (USCI_B) of the MSP430F2274 is configured as SPI to communicate with the RF transceiver (CC2500), and the GDO2 pin of the CC2500 generates an interrupt at port pin P2.7 of the MSP430F2274 to indicate that the initiated transmission or reception activity has been completed.
The port pins P4.3 and P4.4 are used for I2C protocol emulation and are interfaced with the I2C lines (SDA, SCL) of the motion sensor (LIS302DL). The INT1 line of the motion sensor interrupts the MSP430F2274 at port pin P2.0 to indicate that a new set of 3-axis acceleration data is available. The detailed hardware configuration is shown in figure 2.
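Once the data-ready interrupt fires, the MCU reads the three output registers of the LIS302DL, each holding an 8-bit two's-complement sample. The sketch below shows the conversion from raw register bytes to milli-g; the 18 mg/digit scale factor is the typical value assumed for the ±2 g range and should be checked against the LIS302DL data sheet.

```python
def to_signed(byte):
    """Interpret an 8-bit two's-complement sample as a signed integer."""
    return byte - 256 if byte > 127 else byte


# Assumed typical sensitivity for the +/-2 g range of the LIS302DL.
MG_PER_DIGIT = 18


def decode_sample(raw_x, raw_y, raw_z):
    """Convert raw OUT_X/OUT_Y/OUT_Z register bytes to milli-g per axis."""
    return tuple(to_signed(b) * MG_PER_DIGIT for b in (raw_x, raw_y, raw_z))
```

On the transmitter the raw bytes could be sent as-is and this conversion done on the host, which keeps the MCU firmware minimal.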
The basic firmware flow is shown in figure 3. Initially, the motion sensor, the RF transceiver, and board-specific parameters are configured. The MCU then enters a very low power mode (LPM3) and waits in a super loop for a sensor data ready interrupt.
Hardware configuration of the 3D gesture receiver module
The receiver module connects to the host PC over a USB 2.0 interface. The block diagram of the module is shown in figure 4.
Similar to the transmitter module, a universal serial communication interface (USCI_B) of the MSP430F2274 is configured as SPI to communicate with the RF transceiver (CC2500). The GDO2 pin of the CC2500 generates an interrupt at port pin P2.7 of the MSP430F2274 to indicate that the initiated transmission or reception activity has been completed. The module is connected to any USB 2.0 port of the host PC, and communication between the host-side application and the receiver firmware is achieved using serial emulation over the USB interface. The hardware configuration is shown in figure 5.
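On the host side, the serial-emulated stream must be split back into per-sample frames. The paper does not specify the frame layout, so the sketch below assumes a minimal hypothetical format: a 0xA5 start byte followed by one signed byte per axis; the resynchronization logic simply skips bytes until a start byte is found.

```python
# Hypothetical frame format (assumption, not from the paper):
# [0xA5, x, y, z] with x/y/z as 8-bit two's-complement samples.
START = 0xA5


def parse_frames(buf):
    """Extract (x, y, z) acceleration triples from a raw serial byte stream."""
    samples = []
    i = 0
    while i + 3 < len(buf):
        if buf[i] == START:
            x, y, z = (b - 256 if b > 127 else b for b in buf[i + 1:i + 4])
            samples.append((x, y, z))
            i += 4
        else:
            i += 1  # resynchronize after noise or a partial frame
    return samples
```

A real implementation would also add a checksum per frame, since dropped bytes over the radio link would otherwise shift the axis assignment.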
3D Gesture and Character Recognition Module
The received data undergoes median filtering and is sent to the recognition module. The features used for recognition consist of undirected, unweighted graph-based detection for straight-line gestures and characters, while chain-code-based detection is used for curved gestures. Each method recognizes the input with a confidence factor lying in the closed interval [0, 1]. A multifactorial approach is used to make the final decision about the recognized gestures and characters. Figure 7 shows a flowchart of the recognition algorithm. A recognition accuracy of more than 90% has been achieved with input test vectors created by 22 users, where each user was asked to write each letter five times.
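The preprocessing and chain-code steps above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a window-3 median filter, an 8-direction chain-code quantizer for trajectory segments, and a toy stand-in for the multifactorial decision that simply averages the per-method confidences.

```python
import math


def median3(seq):
    """Window-3 median filter over a sample sequence; edges kept as-is."""
    out = list(seq)
    for i in range(1, len(seq) - 1):
        out[i] = sorted(seq[i - 1:i + 2])[1]
    return out


def chain_code(points, directions=8):
    """Quantize successive point-to-point moves into chain-code symbols."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int(round(angle / (2 * math.pi / directions))) % directions)
    return codes


def combine(confidences):
    """Toy multifactorial decision: average the per-method confidences."""
    return sum(confidences) / len(confidences)
```

In practice the combination rule and the graph-based straight-line detector would be tuned on the recorded user data; the sketch only fixes the data flow.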
WEBCAM BASED-HAND, FACE AND BODY TRACKING SOLUTIONS
Gesture recognition systems have transformed the way people interact with consumer electronic devices. They provide a highly precise and reliable gesture-based user interface for interacting with any display screen. Whether on a personal computer, set-top box, television set, mobile device, game console, digital sign, or interactive kiosk, a gesture recognition system enables users to control onscreen interaction with simple hand motions instead of a remote control, keyboard, or touch screen.
LIFELIKE 3-D VIRTUAL REALITY EXPERIENCES
The 3-D gesture recognition software allows people to watch their video image or full-body 3-D avatar while interacting in real time with computer-generated characters and objects. The system continuously tracks full-body movement and subtle gestures in complete 3-D space. Resistant to distractions such as background movement or variable lighting, the system locks on to the user and translates their unique movements into specific computer commands and events.
UNLIMITED PRACTICAL APPLICATIONS IN COUNTLESS SECTORS
This technology has proven successful with avatars, interactive games, and precise hand tracking. Along with factory automation, industrial design, and military applications, a few other exciting applications of 3D tracking technology are:
• 3D avatar representation, where onscreen avatars respond directly to a user's movements as they provide instructions or deliver information.
• Interactive advertising, where users can transport their 3D image right into the heart of an interactive digital sign or advertisement.
• Virtual rehabilitation, where patients can watch their 3D image and see progression in range of motion as they move through therapeutic exercises.
FUTURE CHALLENGES
As explained above, there are several approaches that can be used to integrate gesture recognition into a user interface. The current mechanism, running as an independent application, requires the least modification to other software but fails to provide tight integration with applications.
Tighter integration with other applications would allow those applications to act on aspects of the gesture itself, rather than just the synthetic event triggered by recognition. For example, a scribble gesture might be used to indicate the deletion of the text covered by the gesture.
A significant limitation of the current user interface is that only single-stroke gestures are collected for recognition. There is no inherent limitation in the recognition engine that would prevent it from recognizing multi-stroke gestures; this capability would allow much more natural letter forms to be used.
CONCLUSION
A 3-D gesture and character recognition system today can be used reasonably well in applications such as virtual rehabilitation, interactive advertising, gesture-based control actions, and many more, and can help improve quality of life significantly. While there is still much room for improvement, current gesture and character recognition systems show remarkable performance. As the technology develops and brings remarkable changes, further achievements will follow. Rather than asking what is still deficient, we should ask what can be done to make it more efficient.