Keywords: Computer Vision and Multi-Sensor Based Perception, Machine Learning, Information Fusion, Artificial Intelligence, Systems-of-Systems.

Applications: Driving scene understanding, Object detection, Collaborative perception, Human behavior, Mobile robotic systems.


Recent Funded Projects

Urban centers are increasingly crowded with new Personal Mobility Platforms (PMP): e-scooters, hoverboards, gyro-wheels, bikes, etc., which are directly or indirectly the source of unpredictable behaviors in the traffic environment. The French "Loi mobilité 2019" bill provides for the return of scooters to the traffic lane where no dedicated bicycle lane exists. In such a context, autonomous vehicles suffer from the limits of a perception obtained only from on-board sensors, which are constrained to follow the movements of the vehicle and whose measurement field is sometimes reduced by bulky obstacles (buses, trucks, etc.) or by an occluding environment (buildings or urban structures). In such situations, unforeseen and unexpected events arise from the presence of new electric mobility systems, or from the behaviors of unpredictable pedestrians using (or not) new PMP and respecting (or not) the traffic rules. ANNAPOLIS will increase the vehicle's perception capacity in terms of precision, measurement field of view and information semantics, through vehicle-to-intelligent-infrastructure communication. The project will also seek new models and concepts to take into account the unpredictable behaviors of these new means of individual electric transport, to interpret and analyze constantly evolving scenes, and finally to decide the best and safest future motion of the self-driving car, even in highly dynamic environments with unexpected and dangerous events.
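As an illustration of the kind of object-level information an intelligent roadside unit could share with approaching vehicles to extend their field of view beyond occlusions, the sketch below defines a hypothetical message structure in Python. The field names, message layout and roadside-unit identifier are assumptions made for illustration only, not the project's actual V2X message format.

```python
# Purely illustrative sketch: a hypothetical object-level message that a
# roadside perception unit might broadcast to nearby vehicles. Names and
# fields are assumptions, not the ANNAPOLIS message format.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class PerceivedObject:
    object_id: int
    category: str                              # e.g. "pedestrian", "e-scooter", "car"
    position: Tuple[float, float]              # (x, y) in a shared map frame, metres
    velocity: Tuple[float, float]              # (vx, vy) in m/s
    position_cov: Tuple[float, float, float]   # (var_x, var_y, cov_xy)
    timestamp: float                           # seconds since epoch


@dataclass
class InfrastructurePerceptionMessage:
    sender_id: str
    sender_position: Tuple[float, float]
    objects: List[PerceivedObject] = field(default_factory=list)


# Example: a roadside camera reports an e-scooter hidden from the ego vehicle by a bus.
msg = InfrastructurePerceptionMessage(
    sender_id="rsu_42",
    sender_position=(105.0, 20.0),
    objects=[PerceivedObject(1, "e-scooter", (98.5, 17.2), (2.1, 0.0),
                             (0.2, 0.2, 0.0), 1718000000.0)],
)
print(len(msg.objects), "object(s) received from", msg.sender_id)
```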

We aimed at enhancing the vehicle's perception and situational awareness of complex and highly dynamic traffic scenes, for the sake of better autonomy, by making use of more sources of information than those provided by its on-board sensors alone. Other traffic participants such as cars, buses, trucks, pedestrians and bikes, as well as elements of the infrastructure, all seen as nodes of a mobile ad hoc network, can helpfully share such information. We addressed the problem of distributed data fusion within a complex dynamic system in which vehicles are not controlled by one another but cooperate together. Moreover, not every vehicle is necessarily cooperative: perceived vehicles and moving objects can be shared within the network of cooperative vehicles, allowing a more complete view of the traffic scene in some situations. We participated in the Grand Cooperative Driving Challenge (GCDC, May 28-29, 2016, Automotive Campus, Helmond, The Netherlands).
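To make the distributed fusion idea concrete, the sketch below fuses the ego vehicle's local track of an object with the track reported by a cooperating vehicle using covariance intersection, a standard technique when the cross-correlation between the two estimates is unknown. The function, variable names and numerical values are illustrative assumptions, not the method actually deployed in the project or at the GCDC.

```python
# Minimal sketch of decentralized track fusion via covariance intersection (CI).
# CI fuses two (mean, covariance) estimates without knowing their cross-correlation,
# the usual case when independently filtered tracks are exchanged over V2V/V2I links.
import numpy as np


def covariance_intersection(x_a, P_a, x_b, P_b, n_grid=101):
    """Fuse two state estimates with unknown correlation.

    The mixing weight omega is chosen by a simple grid search minimizing
    the trace of the fused covariance.
    """
    Pa_inv, Pb_inv = np.linalg.inv(P_a), np.linalg.inv(P_b)
    best = None
    for omega in np.linspace(0.0, 1.0, n_grid):
        P_inv = omega * Pa_inv + (1.0 - omega) * Pb_inv
        P = np.linalg.inv(P_inv)
        if best is None or np.trace(P) < best[0]:
            x = P @ (omega * Pa_inv @ x_a + (1.0 - omega) * Pb_inv @ x_b)
            best = (np.trace(P), x, P)
    return best[1], best[2]


if __name__ == "__main__":
    # Ego vehicle's local track of a scooter (position in a shared map frame),
    # and the same object as reported by a cooperating vehicle with a better view.
    x_ego, P_ego = np.array([12.3, 4.1]), np.diag([2.0, 0.3])
    x_coop, P_coop = np.array([12.8, 4.0]), np.diag([0.4, 1.5])
    x_fused, P_fused = covariance_intersection(x_ego, P_ego, x_coop, P_coop)
    print("fused position:", x_fused)
    print("fused covariance:\n", P_fused)
```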

Photo: At LIAMA, right beside Antoine Petit, Jianhua Tao and Jean-François Monin.