An Autonomous Vision-Guided Helicopter
ABSTRACT
Helicopters are air vehicles used for many applications ranging from rescue and crime fighting to inspection and surveillance. They are most effective when flown in close proximity to objects of interest. Such tasks require dangerous flight patterns which risk human pilot safety. An unmanned helicopter which operates autonomously can carry out these tasks more effectively without risking human lives. The work presented in this dissertation develops an autonomous helicopter system for such applications. The system employs on-board vision for stability and for guidance relative to objects of interest in the environment. Developing a vision-based helicopter positioning and control system is challenging for several reasons. First, helicopters are inherently unstable; they are highly sensitive to control inputs and require high-frequency feedback with minimum delay for stability. Second, since helicopters rotate at high angular rates to direct main rotor thrust for translational motion, it is difficult to disambiguate rotation from translation with vision alone when estimating helicopter 3D motion. Third, helicopters have limited on-board power and payload capacity. Finally, helicopters are extremely dangerous, which presents major obstacles to the safe and calibrated experimentation needed to design and evaluate on-board systems. This dissertation addresses these issues by developing a “visual odometer” for helicopter position estimation and a real-time, low-latency vision machine architecture with which the on-board visual odometer machine is implemented.
INTRODUCTION
Precise maneuverability makes helicopters useful for many critical tasks ranging from rescue and security to inspection and monitoring operations. Helicopters are indispensable air vehicles for finding and rescuing stranded individuals or transporting accident victims. Police departments use them to find and pursue criminals. Fire fighters use helicopters for precise delivery of fire-extinguishing chemicals to forest fires. More and more electric power companies are using helicopters to inspect towers and transmission lines for corrosion and other defects and to subsequently make repairs. All of these applications demand dangerous close-proximity flight patterns, risking human pilot safety. An unmanned autonomous helicopter would eliminate such risks and increase the helicopter’s effectiveness. Typical missions of autonomous helicopters require flying at low speeds to follow a path or hovering near an object of interest. Accurate estimation of the helicopter’s position relative to objects is necessary to perform such tasks. In general, positioning equipment such as inertial navigation systems or global positioning systems is well suited for long-range, low-precision helicopter flight but falls short for very precise, close-proximity flight. Moreover, these sensors estimate absolute position and cannot sense position in relation to the task objects of interest. Visual sensing is the richest source of data for this relative position estimation.
1.1 CHALLENGES OF VISION-BASED HELICOPTER FLIGHT
Helicopters are inherently unstable and require constant compensation for stable flight. The effectiveness of an autonomous helicopter is critically dependent on its accurate and stable positioning relative to objects in the environment. Estimating this relative position by on-board vision, a 3D object tracking problem, is difficult for several reasons.
• Helicopters can move quickly. Small and mid-sized helicopters can accelerate in the range of 0.5-0.7 g and can exhibit 40-60 degrees per second angular velocity under normal operating conditions. To keep up with the helicopter’s high degree of maneuverability, an on-board vision system must sample and process camera images at high frequency. On-board image processing must be performed at frame rate (30 Hz) or higher for effective vision-based object tracking. Higher rate image sampling also simplifies the tracking problem by limiting object displacements in successive images.
• Helicopters are highly sensitive to control inputs. Feedback latency is critical to stable helicopter flight. High throughput of image processing alone is not sufficient. Object tracking must be performed with minimum latency to provide adequate and timely feedback for stability. Small model helicopters require system latencies of no more than 1/30 to 1/60 seconds for stability.
• Helicopters typically move with significant attitude variations. A helicopter can bank 30 degrees as it transitions to forward flight. To maintain relative position, on-board vision must distinguish helicopter translation from rotation. Distinguishing rotation from translation in images under perspective projection can be difficult, since small attitude variations can look virtually indistinguishable from small translational motions (a numerical sketch of this ambiguity follows this list). The effect is exaggerated in the helicopter application because tracked objects are frequently small relative to the helicopter’s altitude and cannot provide sufficient 3D cues for distinguishing rotation from translation.
• Helicopters have strictly limited payloads and available power. A vision system capable of meeting the above criteria must also be compact and efficient for practical on-board integration. Payloads of small (under 200 lbs) helicopters range from 5 to 40 pounds.
• Helicopters are dangerous. The spinning rotor blades pose an immediate danger to nearby individuals. The responsive nature of helicopters makes them prone to out-of-control flight or crashes during experiments.
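To make the rotation/translation ambiguity concrete, here is a minimal numerical sketch (not taken from the dissertation): for a downward-looking pinhole camera, a 1-degree pitch rotation and a lateral translation of roughly Z·Δθ produce nearly identical image displacements for a ground point near the image center. The focal length and altitude values are assumed purely for illustration.

```python
import numpy as np

# Illustrative pinhole-camera sketch (values assumed, not from the dissertation):
# a downward-looking camera at altitude Z observes a ground point near the image
# center. Compare the image displacement caused by a small pitch rotation with
# that caused by a small lateral translation.

f_px = 800.0          # assumed focal length in pixels
Z = 10.0              # assumed altitude above the ground point, meters

def project(X, Y, Zc):
    """Pinhole projection of a camera-frame point onto the image plane (pixels)."""
    return np.array([f_px * X / Zc, f_px * Y / Zc])

P = np.array([0.0, 0.0, Z])          # ground point straight below the camera

# Case 1: pure rotation, a 1-degree pitch about the camera x-axis.
theta = np.deg2rad(1.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
disp_rotation = project(*(R @ P)) - project(*P)

# Case 2: pure sideways translation of roughly Z * theta (about 17 cm).
disp_translation = project(*(P - np.array([0.0, Z * theta, 0.0]))) - project(*P)

print("displacement from 1 deg pitch     :", disp_rotation)     # ~[0, -14] px
print("displacement from ~17 cm sideways :", disp_translation)  # ~[0, -14] px
```

The two displacements are essentially identical for points near the image center, which is why the odometer described later tags every image with attitude measured by on-board angular sensors rather than relying on vision alone.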
STEPS INVOLVED
1. The first autonomous robot helicopter stabilized and guided by an on-board “visual odometer” for position estimation: The odometer visually locks on to ground objects and maintains helicopter position at field rate (60 Hz) during flight. The helicopter integrates the visual odometer with sensors such as gyroscopes and a global positioning system (GPS) receiver, control and actuation, as well as safety and human augmentation systems.
2. A new vision machine architecture for real-time and low latency image processing: The architecture balances computational power and data bandwidth requirements to realize vision machines tailored to the applications at hand. Based on this architecture, a visual odometer machine is designed and realized on-board an autonomous helicopter.
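The dissertation integrates the visual odometer with gyroscopes and a GPS receiver but does not spell out the fusion scheme in this excerpt. As a rough, hedged illustration of one way a field-rate odometer estimate could be blended with slow, absolute GPS fixes, the sketch below uses a simple complementary-filter-style correction; the rates, gains, and noise levels are all assumed.

```python
import numpy as np

FIELD_RATE_HZ = 60                   # odometer update rate (video field rate)
DT = 1.0 / FIELD_RATE_HZ
GPS_EVERY = 60                       # assume one GPS fix per second

rng = np.random.default_rng(0)
true_pos = 0.0                       # 1D position along one axis, meters
est_pos = 0.0                        # fused estimate
alpha = 0.05                         # GPS correction gain (assumed)

for k in range(10 * FIELD_RATE_HZ):  # simulate 10 seconds of flight
    true_vel = 0.2 * np.sin(0.5 * k * DT)              # synthetic motion
    true_pos += true_vel * DT

    # Field-rate step: the visual odometer senses the incremental displacement
    # with small noise and a slow bias (so it drifts if left uncorrected).
    visual_disp = true_vel * DT + rng.normal(0.0, 1e-3) + 2e-4
    est_pos += visual_disp

    # Low-rate step: pull the estimate gently toward a noisy absolute GPS fix.
    if k % GPS_EVERY == 0:
        gps_fix = true_pos + rng.normal(0.0, 0.5)
        est_pos += alpha * (gps_fix - est_pos)

print(f"true position:  {true_pos:6.3f} m")
print(f"fused estimate: {est_pos:6.3f} m")
```

The idea is simply that vision supplies the high-frequency, low-latency feedback the helicopter needs for stability, while the coarse, low-rate GPS keeps the integrated visual estimate from drifting away over time.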
HELICOPTER CONTROL
Controlling with vision:
Positioning feedback in such helicopter control experiments has primarily been provided by on-board INS/GPS or ground-based beacon systems rather than by on-board computer vision. The computational complexity and the high data bandwidth requirements of vision have been major obstacles to practical and robust vision-based positioning and control systems. In spite of these obstacles, promising results have recently been demonstrated in real-time vision processing, visual servoing of robotic manipulators, and accurate vision-based position estimation systems. The development of low-cost, special-purpose image correlation chips and multi-processor architectures capable of high communication rates has made a great impact on image processing.
RAPiD and DROID, developed by Roke Manor Research Limited, are systems designed for vision-based position estimation in unknown environments. RAPiD is a model-based tracker capable of extracting the position and orientation of known objects in the scene. DROID is a feature-based system which uses the structure-from-motion principle for extracting scene structure using image sequences. Real-time implementations of these systems have been demonstrated using dedicated hardware.
APPROACH
Vision-Based Position Estimation
A visual odometer locks on to and tracks feature-rich objects to sense helicopter motion. The odometer maintains this visual lock using high-speed image correlators which estimate helicopter range and motion relative to the ground objects. The odometer closely integrates on-board attitude sensors with the image correlators to resolve 3D helicopter translation, which is key to accurate helicopter control.

The visual odometer implements an object tracking algorithm. Viewing the ground through a pair of on-board cameras, the algorithm locks on to and tracks feature-rich objects appearing in image windows, or templates. As shown in Figure 1-1, the algorithm initially locks on to objects appearing at the image center and maintains this lock while the objects remain in the field of view. As the objects leave the image, the algorithm selects another image template to lock on to and continues positioning the helicopter. Image templates are tracked by high-speed image correlation, or template matching. For full 3D motion estimation, the algorithm matches templates in both camera images simultaneously, for stereo range detection, and in successive images, for velocity estimation.

Since helicopter attitude and height variations can significantly affect the appearance of tracked objects in successive images, the algorithm must actively update the image templates for robust tracking. Furthermore, the algorithm must sense helicopter translation for accurate control, which requires eliminating the effects of rotation on image displacements. The algorithm accomplishes these difficult tasks by tracking multiple templates and by measuring helicopter attitude with on-board angular sensors. The relative motion of two tracked templates in the images determines height and heading changes, which are used to scale or rotate the tracked templates for consistent matches. The effects of helicopter roll and pitch variations are determined by tagging each image with the helicopter attitude during the camera shutter exposure interval. Using this synchronized attitude data, the algorithm estimates the effects of rotation on image displacements based on the camera lens parameters. This dissertation develops custom-designed hardware to filter and tag the camera images for the tracking algorithm.
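As a minimal sketch of the two core steps just described, and assuming a downward-looking pinhole camera, the code below locates a template in a new image by exhaustive correlation (sum-of-squared-differences here, standing in for the high-speed correlators), subtracts the image displacement predicted from the measured attitude change, and scales the residual by height to obtain metric translation. The window sizes, matching criterion, axis conventions, and all numeric values are illustrative assumptions rather than the dissertation's exact implementation.

```python
import numpy as np

def match_template_ssd(image, template):
    """Locate `template` in `image` by exhaustive sum-of-squared-differences."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r+th, c:c+tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos                        # top-left corner of the best match

def compensate_rotation(disp_px, d_roll, d_pitch, f_px):
    """Subtract the image displacement predicted from the attitude change
    (small-angle approximation near the image center; axis assignment and
    signs depend on camera mounting and are chosen arbitrarily here)."""
    pred = np.array([f_px * d_pitch, f_px * d_roll])
    return np.asarray(disp_px, dtype=float) - pred

def image_to_metric(disp_px, height_m, f_px):
    """Scale the rotation-free image displacement by height over focal length."""
    return np.asarray(disp_px) * height_m / f_px

# --- tiny synthetic example ------------------------------------------------
rng = np.random.default_rng(1)
prev = rng.random((64, 64))
template = prev[24:40, 24:40].copy()              # lock on to a 16x16 patch
curr = np.roll(prev, shift=(3, 5), axis=(0, 1))   # fake camera motion

r1, c1 = match_template_ssd(curr, template)
disp = np.array([r1 - 24, c1 - 24], dtype=float)  # measured pixel displacement

# Assumed attitude change between frames and camera parameters.
d_roll, d_pitch = 0.0, 0.0        # radians, from the synchronized attitude tags
f_px, height = 800.0, 10.0        # focal length in pixels, height in meters

translation = image_to_metric(compensate_rotation(disp, d_roll, d_pitch, f_px),
                              height, f_px)
print("pixel displacement:", disp, "-> metric translation (m):", translation)
```

In practice the correlation search would be restricted to a small window around the predicted template location, and performed by dedicated hardware, in order to reach field-rate throughput with low latency.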
SUMMARY AND CONCLUSION
A visual odometer for helicopter positioning is one of the major contributions of the work presented in this dissertation. The odometer incrementally maintains helicopter position by sensing image displacements. It senses these displacements by visually locking on to ground objects through image template matching. The odometer eliminates the effects of helicopter rotation from the sensed image displacements by measuring changes in helicopter attitude with each camera image capture. The disambiguated image displacement is then transformed to determine helicopter motion.
Much further progress remains to be made in this field, and it will require continued effort and hard work. In particular, image processing needs to be improved so that faster processing rates can be achieved for controlling the helicopter.