25-08-2017, 09:32 PM
Seminar Report on Humanoid Robotics
ABSTRACT
The field of humanoid robotics is widely recognized as the current challenge for robotics research. Humanoid research is an approach to understanding and realizing the complex real-world interactions between a robot, an environment, and a human. Humanoid robotics motivates social interactions such as gestural communication or cooperative tasks in the same context as the physical dynamics. This is essential for three-term interaction, which aims at fusing physical and social interaction at a fundamental level.
People naturally express themselves through facial gestures and expressions. Our goal is to build a facial-gesture human-computer interface for use in robot applications. The system does not require special illumination or facial make-up. By using multiple Kalman filters we accurately predict and robustly track facial features. Since we reliably track the face in real time, we are also able to recognize motion gestures of the face. The system can recognize a large set of gestures, ranging from "yes", "no" and "maybe" to detecting winks, blinks and sleeping.
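The per-feature prediction mentioned above can be sketched as a small Kalman filter. This is a minimal illustration, not the report's implementation: it tracks one image coordinate of one feature with a constant-velocity model, and the noise values `q` and `r` are hypothetical placeholders.

```python
# Sketch of the Kalman prediction/update cycle run per facial feature.
# State is (position, velocity) on one image axis; the real system runs
# one such filter per feature. Noise parameters q, r are hypothetical.

def kalman_1d(z_measurements, q=1e-3, r=1.0):
    """Filter noisy position readings with a constant-velocity model."""
    x, v = z_measurements[0], 0.0        # state: position, velocity
    p = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    out = []
    for z in z_measurements:
        # Predict: position advances by the current velocity estimate.
        x, v = x + v, v
        p = [[p[0][0] + p[0][1] + p[1][0] + p[1][1] + q, p[0][1] + p[1][1]],
             [p[1][0] + p[1][1], p[1][1] + q]]
        # Update: blend the prediction with the measurement via the gain.
        s = p[0][0] + r
        k0, k1 = p[0][0] / s, p[1][0] / s
        y = z - x                        # innovation
        x, v = x + k0 * y, v + k1 * y
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        out.append(x)
    return out

# Hypothetical x-coordinates of an eye corner over five frames.
track = kalman_1d([10.0, 10.4, 11.1, 11.5, 12.2])
```

The predict step is what lets the search window be placed where the feature is expected next frame, so the template comparison stays local even under head motion.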
INTRODUCTION:
The field of humanoid robotics, widely recognized as the current challenge for robotics research, is attracting the interest of many research groups worldwide. Important efforts have been devoted to the development of humanoids, and impressive results have been produced from a technological point of view, especially for the problem of biped walking.
In Japan, important humanoid projects started in the last decade have been carried on by Waseda University and by Honda Motor Co. The Humanoid Project of Waseda University, started in 1992, is a joint project of industry, government and academia. It aims at developing robots that support humans in health care and industry and that share information and behavioral space with humans, so particular attention has been paid to the problem of human-computer interaction. Within the Humanoid Project, Waseda University developed three humanoid robots as research platforms, namely Hadaly-2, Wabian and Wendy.
SYSTEM ARCHITECTURE:
The proposed biomechatronic hand will be equipped with three actuator systems to provide a tripod grasp: two identical finger actuator systems and one thumb actuator system.
The finger actuator system is based on two micro-actuators which drive the metacarpo-phalangeal (MP) joint and the proximal inter-phalangeal (PIP) joint, respectively. For cosmetic reasons, both actuators are fully integrated in the hand structure: the first in the palm and the second within the proximal phalanx. The distal inter-phalangeal (DIP) joint is driven by a four-bar linkage connected to the PIP joint.
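The mechanical coupling above can be sketched as follows: the MP and PIP joints are commanded directly, while the DIP angle is a function of the PIP angle through the four-bar linkage. A linear transmission ratio is a common first approximation for such a linkage; the 2/3 value below is a hypothetical placeholder, not a figure from the report.

```python
# Sketch of the coupled DIP joint: it is not actuated directly but
# follows the PIP angle through the four-bar linkage. The linear
# coupling ratio below is hypothetical, for illustration only.

COUPLING_RATIO = 2.0 / 3.0  # hypothetical DIP/PIP transmission ratio

def finger_joint_angles(mp_deg, pip_deg):
    """Return (MP, PIP, DIP) angles in degrees for one finger.

    MP and PIP are the two actively driven joints; DIP is the
    passively coupled joint driven through the four-bar linkage.
    """
    dip_deg = COUPLING_RATIO * pip_deg
    return mp_deg, pip_deg, dip_deg

mp, pip_angle, dip = finger_joint_angles(30.0, 45.0)
```

The point of the coupling is that each finger needs only two actuators for three joints, which is what makes the fully in-hand integration feasible.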
KINEMATIC ARCHITECTURE:
A first analysis, based on the kinematic characteristics of the human hand during grasping tasks, led us to approach the mechanical design with a multi-DOF hand structure. The index and middle fingers are equipped with active DOFs in the MP and PIP joints respectively, while the DIP joint is actuated by one driven passive DOF.
The thumb movements are accomplished with two active DOFs in the MP joint and one driven passive DOF in the IP joint. This configuration permits the thumb to be opposed to each finger.
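The kinematic configuration described in this section can be summarized in a small table. This is only a restatement of the text as data; the joint labels for the thumb's two MP axes (flexion and abduction) are an assumption made for illustration.

```python
# The hand's DOF layout as described: each finger has active MP and PIP
# joints plus a passively driven DIP; the thumb has two active DOFs at
# the MP joint (labels hypothetical) and a driven passive IP joint.

HAND_DOF = {
    "index":  {"active": ["MP", "PIP"],         "passive": ["DIP"]},
    "middle": {"active": ["MP", "PIP"],         "passive": ["DIP"]},
    "thumb":  {"active": ["MP_flex", "MP_abd"], "passive": ["IP"]},
}

def dof_counts(hand):
    """Count actively driven vs. passively coupled DOFs in the hand."""
    active = sum(len(f["active"]) for f in hand.values())
    passive = sum(len(f["passive"]) for f in hand.values())
    return active, passive

active, passive = dof_counts(HAND_DOF)  # 6 active, 3 passive
```

Six active DOFs driven by three actuator systems, plus three passive coupled joints, is consistent with the tripod-grasp architecture given earlier.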
A NEURO-FUZZY APPROACH TO GRASP PLANNING:
The first module aims at planning the proper hand configuration, in the case of a multi-fingered hand, based on the geometrical features of the object to be grasped. A neuro-fuzzy approach is adopted to replicate the human capability of processing qualitative data and of learning.
The base of knowledge on which the fuzzy system processes inputs and determines outputs is built by a neural network (NN). The trained system has been validated on a test set of 200 rules, of which 92.15% were correctly identified.
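The fuzzy side of such a planner can be sketched with triangular membership functions mapping a geometrical feature to qualitative size classes, and a rule base mapping classes to grasp types. Everything numeric here is hypothetical (breakpoints, grasp names, the use of object width as the feature); in the report the rule base is learned by the neural network rather than written by hand.

```python
# Minimal fuzzy-inference sketch: fuzzify a geometrical feature (object
# width, mm) into size classes, then apply a hand-written rule base.
# All breakpoints and rule outputs are hypothetical placeholders.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def size_memberships(width_mm):
    return {
        "small":  tri(width_mm, -1, 0, 40),
        "medium": tri(width_mm, 20, 50, 80),
        "large":  tri(width_mm, 60, 100, 141),
    }

def plan_grasp(width_mm):
    # Rule base (hypothetical): small -> precision pinch,
    # medium -> tripod grasp, large -> power grasp.
    rules = {"small": "pinch", "medium": "tripod", "large": "power"}
    mu = size_memberships(width_mm)
    return rules[max(mu, key=mu.get)]

grasp = plan_grasp(55)  # a 55 mm object falls mostly in "medium"
```

Replacing the hand-written `rules` dictionary with weights learned from examples is, in essence, what the NN contributes in the neuro-fuzzy scheme.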
THE VISION SYSTEM:
The MEP tracking system is used to implement the facial gesture interface. This vision system, manufactured by Fujitsu, is designed to track multiple templates in real time in the frames of an NTSC video stream. It consists of two VME-bus cards, a video module and a tracking module, which can track up to 100 templates simultaneously at video frame rate (30 Hz for NTSC).
The tracking of objects is based on template (8x8 or 16x16 pixels) comparison in a specified search area. The video module digitizes the video input stream and stores the digital images in dedicated video RAM, which the tracking module also accesses. The tracking module compares the digitized frame with the tracking templates within the bounds of the search windows. This comparison is done by summing the absolute differences between corresponding pixels of the template and the frame. The result of this calculation is called the distortion and measures the similarity of the two images: low distortions indicate a good match, while high distortions result when the two images are quite different.
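The distortion measure just described can be sketched in a few lines. This is a software illustration of what the MEP hardware computes at video rate, with images as plain lists of grey values; the example frame and template are made up.

```python
# Sketch of the distortion measure: sum of absolute differences (SAD)
# between a template and each candidate position in a search window.

def distortion(frame, template, top, left):
    """Sum of absolute pixel differences at one candidate position."""
    return sum(
        abs(frame[top + i][left + j] - template[i][j])
        for i in range(len(template))
        for j in range(len(template[0]))
    )

def best_match(frame, template, search):
    """Scan a search window (top, left, height, width) and return the
    lowest distortion and its (row, col) position, i.e. the best match."""
    t, l, h, w = search
    th, tw = len(template), len(template[0])
    return min(
        (distortion(frame, template, i, j), (i, j))
        for i in range(t, t + h - th + 1)
        for j in range(l, l + w - tw + 1)
    )

# Tiny example: a 2x2 template hidden at (1, 1) inside a 4x4 frame.
frame = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 6, 0],
    [0, 0, 0, 0],
]
template = [[9, 8], [7, 6]]
result = best_match(frame, template, (0, 0, 4, 4))  # distortion 0 at (1, 1)
```

Restricting the scan to a small search window per feature is what makes tracking 100 templates per frame tractable.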
TRACKING THE FACE:
Our basic idea is that the individual search windows help each other to track their features. From the known geometric relationships between the features of a face, a lost search window can be repositioned with help from features that are still tracking. We use a two-dimensional model of the face in which the features to be tracked are joined to form a small network. The reference vectors connecting the features are derived from a single image, either automatically by the system or by a human operator. Figure 1 shows a face with boxes marking the nine tracking features: the irises, the corners of the eyes, the eyebrows, and the middle and corners of the mouth. The sizes of the boxes shown are the actual template sizes (16x16 pixels). The line connections in the figure indicate which features assist which other features in readjusting the search windows.

We also use several templates to track features that can change their appearance. For example, the eyes can be open or closed, so we use three templates for the different states (open, half-open and closed) of the eyes simultaneously. This makes it possible to determine the state of a tracked feature, e.g. whether an eye is open or the mouth is closed.
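The two mechanisms in this section can be sketched together: repositioning a lost search window from a neighbour via a stored reference vector, and reading a feature's state off whichever of its templates matched best. Feature names, the reference vector, and the distortion values below are all hypothetical.

```python
# Sketch of (1) repositioning a lost feature from a still-tracking
# neighbour using the face-model reference vector, and (2) picking a
# feature's state as the template with the lowest distortion.
# All names and numbers are hypothetical illustrations.

# Reference vectors (dx, dy) measured once from a single face image.
REFERENCE = {("left_eye", "left_brow"): (0, -18)}

def reposition(lost, anchor_name, anchor_pos):
    """Place a lost feature's search window relative to a neighbour
    that is still tracking, using the stored reference vector."""
    dx, dy = REFERENCE[(anchor_name, lost)]
    ax, ay = anchor_pos
    return (ax + dx, ay + dy)

def eye_state(distortions):
    """Return the eye state whose template matched best, i.e. the one
    with the lowest distortion."""
    return min(distortions, key=distortions.get)

window = reposition("left_brow", "left_eye", (120, 80))
state = eye_state({"open": 420, "half-open": 130, "closed": 95})
```

Running the open/half-open/closed templates simultaneously is what turns the tracker into a state detector, which the gesture recognizer (winks, blinks, sleeping) builds on.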
CONCLUSION
Humanoid research is an approach to understanding and realizing flexible, complex interactions between robots, environments and humans.
A humanoid robot is an ideal tool for robotics research. First of all, it introduces complex interactions due to its complex structure. It can be involved in various physical dynamics just by changing its posture, without the need for a different experimental platform; this promotes a unified approach to handling different dynamics. Since it resembles humans, we can start by applying our intuitive strategies and investigating why they work or fail. Moreover, it motivates social interactions such as gestural communication or cooperative tasks in the same context as the physical dynamics. This is essential for three-term interaction, which aims at fusing physical and social interaction at a fundamental level.