Title: Applying visual servoing to autonomous vehicle guidance - Lagadic's contribution to the CityVIP project
Abstract:
In recent work, autonomous vehicle guidance has been achieved by processing information from vision sensors. Cameras are very attractive sensors, especially in urban environments, where numerous visual points of interest exist and GPS signals are likely to be masked. Our navigation framework relies on a monocular camera, and the path is represented as a series of key images. In the first part of the seminar, we present a controller that uses a time-independent, varying reference, determined from a vector field derived from the previous and next key images. The results show the advantages of the varying reference over a fixed one, both in the image and in the 3D state space. In the second part, we extend the framework by introducing simultaneous obstacle avoidance. Kinematic redundancy guarantees that obstacle avoidance and visual navigation are achieved independently.
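The abstract does not give implementation details, but the idea of a varying reference that depends on progress along the path (not on time) can be illustrated with a minimal sketch. Here the reference feature positions are blended between the previous and next key images according to a normalized path abscissa `s`; all names and the linear-blend formula are illustrative assumptions, not the authors' actual controller.

```python
def varying_reference(prev_features, next_features, s):
    """Interpolate image-feature coordinates between two key images.

    prev_features, next_features: lists of (x, y) image points matched
        between the previous and next key images (hypothetical input).
    s: normalized progress along the path segment, in [0, 1]
       (0 = at the previous key image, 1 = at the next one).

    Because s is a function of the vehicle's position on the path, the
    reference varies as the vehicle advances but is independent of time.
    """
    return [
        ((1.0 - s) * xp + s * xn, (1.0 - s) * yp + s * yn)
        for (xp, yp), (xn, yn) in zip(prev_features, next_features)
    ]


# Example: two matched features; halfway along the segment the reference
# lies midway between the two key images.
prev_kf = [(100.0, 120.0), (300.0, 140.0)]
next_kf = [(110.0, 118.0), (290.0, 150.0)]
ref = varying_reference(prev_kf, next_kf, 0.5)
```

A fixed reference would correspond to holding `s = 1` (servoing directly on the next key image); letting `s` evolve with the vehicle's progress is what yields the smoother behavior the abstract reports.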
October 10th, 2009 – Philippe Bonnifait, Professor, UTC
October 13th, 2009 – Menhour Laghni, PhD Student, Heudiasyc
October 20th, 2009 – Sergio Rodriguez, PhD Student, Heudiasyc
November 17th, 2009 – Mohamed Bouai / Laura Muñoz, PhD Students, Heudiasyc
November 24th, 2009 – Clement Fouque, PhD Student, Heudiasyc
December 8th, 2009 – Benjamin Lussier, Lecturer-Researcher
December 12th, 2009 – Andrea Cherubini, Postdoc, Rennes
January 5th, 2010 – Cédric Tessier,