DYNAMIC IMAGE PERCEPTION FOR ROBOT NAVIGATION
YASSER ATWA IBRAHIM
Abstract
This research investigates the development of a binocular stereo vision system to guide an autonomous land vehicle (ALV) that performs visual navigation. The robot gathers
information about its environment through external sensors, interprets the output of these
sensors, constructs a scene map and a plan for the required task, and then monitors and executes the plan. In summary, the navigation system integrates perception, planning, and execution of actions. As an example, we might imagine a robot in charge of delivering food to different tables in a restaurant, or delivering component parts to work areas in a factory. The robot must move in an unconstrained environment inhabited by other objects, such as people and other robots, moving on unknown trajectories and with unknown velocities.
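As a concrete illustration of the binocular stereo principle such a system builds on, the sketch below shows the standard depth-from-disparity relation for two parallel cameras; the focal length, baseline, and disparity values are purely illustrative and are not taken from the thesis.

```python
# Minimal sketch of the standard binocular stereo depth relation (not the
# thesis's specific pipeline): for two parallel cameras with focal length f
# (in pixels) and baseline B, a point seen at horizontal disparity d lies at
# depth Z = f * B / d.

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Return the depth (metres) of a point from its stereo disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a point in front of both cameras")
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, baseline = 0.12 m, disparity = 14 px -> depth = 6.0 m
print(depth_from_disparity(14.0, 700.0, 0.12))
```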
All the work in this research is inspired by the above scenario, namely
perception for robot navigation. The goal is to develop methods for performing metric measurements from images. Since these images are very often acquired using television or photographic cameras, we spend some time defining accurate quantitative models of these devices and exploring the relationship between these models and projective geometry.
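To make the kind of quantitative camera model mentioned above concrete, here is a minimal sketch of the classical pinhole projection model, which links 3-D world points to image coordinates through intrinsic and extrinsic parameters; the numerical values and symbol names are assumptions chosen for illustration, not figures from the thesis.

```python
# A minimal sketch of the pinhole camera model, assuming the usual intrinsic
# parameters (focal lengths fx, fy and image centre cx, cy) and an extrinsic
# rotation R and translation t; these symbols are illustrative only.
import numpy as np

def project_point(X_world, K, R, t):
    """Project a 3-D world point onto the image plane: x ~ K [R | t] X."""
    X_cam = R @ np.asarray(X_world, dtype=float) + t   # world -> camera frame
    x_hom = K @ X_cam                                   # perspective projection
    return x_hom[:2] / x_hom[2]                         # homogeneous -> pixel coords

# Illustrative intrinsics: fx = fy = 800 px, image centre at (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                 # camera aligned with the world frame
t = np.zeros(3)               # camera at the world origin
print(project_point([0.5, -0.2, 4.0], K, R, t))   # pixel location of the point
```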
Generally, camera calibration means the process of computing a camera's physical parameters, such as image center, focal length, position, and orientation; this is called explicit calibration. Explicit calibration is of universal use in all aspects of computer vision, yet in some specific cases, such as stereo vision, the camera's physical parameters are not necessarily required. Some intermediate parameters can also be calibrated for either making three-
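As a hedged illustration of explicit calibration, the sketch below estimates the full projection matrix from known 3-D to 2-D correspondences with the textbook Direct Linear Transform (DLT); this is a generic method chosen for illustration, not necessarily the calibration procedure developed in the thesis.

```python
# A minimal, self-contained sketch of explicit calibration via the Direct
# Linear Transform: estimate the 3x4 projection matrix P = K [R | t] from
# known 3-D points and their image projections.
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """Estimate P (3x4) from >= 6 correspondences by solving A p = 0 with SVD."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = [X, Y, Z, 1.0]
        A.append(Xh + [0, 0, 0, 0] + [-u * c for c in Xh])
        A.append([0, 0, 0, 0] + Xh + [-v * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)   # right singular vector of the smallest singular value

# Synthetic check: project known 3-D points with a known camera, then recover P.
K = np.array([[900.0, 0.0, 512.0], [0.0, 900.0, 384.0], [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.1], [0.0], [2.0]])])
P_true = K @ Rt
pts_3d = np.random.rand(8, 3)
pts_2d_h = (P_true @ np.hstack([pts_3d, np.ones((8, 1))]).T).T
pts_2d = pts_2d_h[:, :2] / pts_2d_h[:, 2:]
P_est = estimate_projection_matrix(pts_3d, pts_2d)
print(P_est / P_est[-1, -1])      # matches P_true up to scale
# The physical parameters (image centre, focal length, pose) can then be
# factored out of P_est, e.g. with an RQ decomposition of its left 3x3 block.
```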
Other data
| Field | Value |
|---|---|
| Title | DYNAMIC IMAGE PERCEPTION FOR ROBOT NAVIGATION |
| Other Titles | ادارك الصور المتحركة للملاحة الروبوتية |
| Authors | YASSER ATWA IBRAHIM |
| Issue Date | 1999 |
Attached Files
| File | Size | Format |
|---|---|---|
| YASSER ATWA IBRAHIM.pdf | 1.44 MB | Adobe PDF |
Items in Ain Shams Scholar are protected by copyright, with all rights reserved, unless otherwise indicated.