by Andrea CHERUBINI, Postdoc Rennes
In recent work, autonomous vehicles have been guided by processing information from vision sensors. Cameras are very attractive sensors, especially in urban environments, where numerous visual 'points of interest' exist and GPS signals are likely to be masked. Our navigation framework relies on a monocular camera, and the path is represented as a series of key images. In the first part of the seminar, we present a controller that uses a time-independent, varying reference, determined with a vector field derived from the previous and next key images. The results show the advantages of the varying reference over a fixed one, both in the image and in the 3D state space. In the second part, we improve the framework by introducing simultaneous obstacle avoidance. Kinematic redundancy guarantees that obstacle avoidance and visual navigation are achieved independently.
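The independence guarantee mentioned above is commonly obtained with a task-priority (null-space projection) scheme: the obstacle-avoidance command is projected into the null space of the visual-task Jacobian, so it cannot disturb the navigation task. A minimal sketch of that standard technique, assuming a velocity-controlled robot; all names, shapes, and the particular control law are illustrative, not the framework's actual implementation:

```python
import numpy as np

def redundancy_controller(J_vis, e_vis, u_avoid, gain=1.0):
    """Task-priority control: visual navigation first, avoidance second.

    J_vis   : (m, n) Jacobian of the visual task (m < n => redundancy)
    e_vis   : (m,)   visual feature error to regulate to zero
    u_avoid : (n,)   desired avoidance velocity (secondary task)
    """
    J_pinv = np.linalg.pinv(J_vis)
    # Null-space projector of the visual task
    N = np.eye(J_vis.shape[1]) - J_pinv @ J_vis
    # Primary task drives e_vis to zero; the avoidance command only
    # acts in the null space, so it cannot affect the visual task.
    return -gain * J_pinv @ e_vis + N @ u_avoid

# Example: 2 visual features, 3 degrees of freedom
J_vis = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
e_vis = np.array([0.2, -0.1])
u_avoid = np.array([0.5, -0.3, 0.8])
q_dot = redundancy_controller(J_vis, e_vis, u_avoid)
# The visual task sees exactly -gain * e_vis, regardless of u_avoid:
print(J_vis @ q_dot)
```

Because `J_vis @ N == 0`, the avoidance term is invisible to the visual task, which is precisely why the two objectives are achieved independently.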