03-12-2009, 11:16 PM
Hi tanvi,
Get the full abstract and report for Blue Eyes here, and regarding robots using image processing:
Abstract:
We describe here a straightforward technique for tracking a human hand based on images acquired by an active stereo camera system. In simple terms, the robot can interpret and grasp an object pointed to by a human.
Introduction:
If the robot is equipped with a camera system, it would be much more intuitive to just point to the object to grasp and let the robot detect its position visually.
Hand detection:
We detect the hand on the basis of its typical skin colour. The robot can also 'learn' the skin colour of the operator.
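The skin-colour idea can be sketched with a simple per-pixel rule. The thresholds below are a common rule of thumb for 8-bit RGB, not the values used in the paper, and the tiny "image" is made up for illustration:

```python
# Crude skin-colour thresholding sketch; thresholds are illustrative only.
def is_skin(r, g, b):
    """Rule-of-thumb skin test on 8-bit RGB values."""
    return (r > 95 and g > 40 and b > 20
            and r > g and r > b
            and (r - min(g, b)) > 15)

def skin_mask(image):
    """Return a binary mask (1 = skin) for a 2-D grid of (r, g, b) pixels."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]

image = [
    [(220, 150, 120), (30, 60, 200)],   # skin-like pixel, then a blue pixel
    [(10, 10, 10),    (210, 140, 110)], # dark pixel, then a skin-like pixel
]
mask = skin_mask(image)
# mask → [[1, 0], [0, 1]]
```

"Learning" the operator's skin colour would amount to fitting these thresholds (or a colour histogram) from a sample region of the operator's hand instead of hard-coding them.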
Object recognition:
The detection of the hand position serves as a pre-selection of the region of interest (ROI) in the image. The object nearest to the fingertip is taken to be the one indicated by the user's pointing gesture.
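The nearest-to-fingertip selection reduces to a minimum-distance search over the detected object centroids. The coordinates below are hypothetical pixel positions, just to show the idea:

```python
import math

def nearest_object(fingertip, objects):
    """Pick the object whose centroid is closest to the fingertip position."""
    return min(objects, key=lambda centroid: math.dist(fingertip, centroid))

# Hypothetical object centroids found inside the ROI (pixel coordinates).
objects = [(120, 80), (40, 200), (300, 310)]
picked = nearest_object((100, 90), objects)
# picked → (120, 80)
```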
Grasping:
As soon as the position of the object is known, its 3-D coordinates and orientation, calculated by a Hough transform, are transmitted to the manipulator control to initiate the grasping process.
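The paper uses a Hough transform on the stereo data; as a much-simplified 2-D illustration of the voting idea, the sketch below finds the dominant line angle among a set of edge points by accumulating votes in (theta, rho) space. A real system would use an image-processing library rather than this pure-Python loop:

```python
import math

def hough_orientation(points, n_theta=180):
    """Vote in (theta, rho) space and return the dominant line's
    normal angle in degrees (line in the form x*cos(t) + y*sin(t) = rho)."""
    votes = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.radians(t)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(t, rho)] = votes.get((t, rho), 0) + 1
    # The bin with the most votes gives the dominant orientation.
    (theta_deg, _rho), _count = max(votes.items(), key=lambda kv: kv[1])
    return theta_deg

# Edge points along the horizontal line y = 5: its normal is vertical (90°).
points = [(x, 5) for x in range(100)]
angle = hough_orientation(points)
# angle → 90
```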
In addition, there is a speech recognition system that can identify spoken keyword commands. By means of a speech synthesizer, the robot can express its behavioral state.
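Once the recogniser has produced a transcript, keyword spotting can be as simple as a lookup table. The keywords and actions below are invented for illustration; the paper does not list its command vocabulary:

```python
# Toy keyword-to-action table standing in for the real command set.
COMMANDS = {
    "grasp": "initiate grasping",
    "stop": "halt manipulator",
    "release": "open gripper",
}

def interpret(utterance):
    """Return the action for the first known keyword in the utterance, else None."""
    for word in utterance.lower().split():
        if word in COMMANDS:
            return COMMANDS[word]
    return None

result = interpret("please grasp the cup")
# result → "initiate grasping"
```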
Conclusion and outlook:
By implementing a straightforward technique for tracking a human hand and extracting a pointing gesture, we could demonstrate how man and machine become a coupled system in a very natural way. In the future we plan to extend the multi-modal man-machine interaction system by integrating recognition of the human's direction of view.
Report:
robots using image processing.pdf (Size: 224.35 KB / Downloads: 155)
For a more advanced paper: digital image processing in robot vision:
digital image processing in robot vision.pdf (Size: 1.14 MB / Downloads: 227)