
Lab seminars

02/07/2024 – Fady MOHAREB (faculty member, Cranfield University, on a mobility stay at UTC)
Presentation of his research topics

02/07/2024 – SYRI and CID Collaborative Workshop
- Statistical guarantees for object detection
- Harnessing Superclasses for Learning from Hierarchical Databases
- Enhancing Localization through Perception: Applications of Vector Maps
- Introduction to neural-network architectures for event processing
- Uncertainty estimation for deep-learning-based sensor-to-sensor calibration
- Early-stage PhD presentation: Uncertainty and integrity estimation for machine-learning-based perception systems

02/05/2023 – Vu-Linh NGUYEN

Miguel Angel SOTELO

Tuesday 12 November 2019 at 3:00 pm, room GI 042 (Blaise Pascal building, UTC)

Professor Miguel Angel SOTELO (IEEE Fellow) received the Ph.D. degree in Electrical Engineering from the University of Alcalá, Spain, in 2001. He is Head of the INVETT Research Group and Vice-President for International Relations at the University of Alcalá. He served as Editor-in-Chief of IEEE Intelligent Transportation Systems Magazine (2014–2016) and as Associate Editor of IEEE Transactions on Intelligent Transportation Systems (2008–2015). He is currently the President of the IEEE Intelligent Transportation Systems Society.

Abstract:

Self-driving cars have experienced booming development in recent years and have achieved a certain degree of maturity. Their scene-recognition capabilities have improved impressively, thanks in particular to the development of Deep Learning techniques and the immense amounts of data available in well-organized public datasets. Still, self-driving cars exhibit a limited ability to deal with certain situations that pose little challenge to human drivers, such as entering a congested roundabout, dealing with cyclists, or giving way to a vehicle aggressively merging onto the highway from a ramp lane. All these tasks require advanced prediction capabilities that provide the most likely trajectories, over a given time horizon, for all traffic agents around the ego-car, namely vehicles and vulnerable road users. This talk will analyze the current state of the art of the most advanced prediction systems for vehicles and vulnerable road users and will discuss their impact on the future of autonomous driving. It will also present some innovative solutions for efficiently incorporating contextual information and experience into the learning process.

Sanaz Mostaghim

Professor of Computer Science at the Institute for Intelligent Cooperating Systems (IKS), University of Magdeburg, Germany

Friday 22 February 2019 at 10:30 am, room GI042 (Blaise Pascal building, Université de Technologie de Compiègne)

Abstract:

Intelligent technical systems are becoming more and more ubiquitous, and their influence on our lives grows daily. In recent years, computational intelligence methods have contributed, more than ever, to the latest scientific breakthroughs in developing such intelligent systems. Nevertheless, one major challenge concerns the real-time reaction of intelligent systems to unknown dynamics in their environments, which is considered among the grand challenges in this area. This talk is about multi-objective decision-making algorithms; it will give an overview of the design issues for problems with a large number of decision variables and of the challenges in real-time applications such as robotics and computer games. In most such applications, the decision makers (robots or agents) must find and select one possible optimal solution within a very limited time frame. This is very challenging when the environment changes dynamically, as the decision maker needs to re-optimize and decide on the fly.
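To give a concrete flavor of the multi-objective setting described above, here is a minimal sketch (not from the talk; all names and data are illustrative) of the core primitive such algorithms rely on: filtering a candidate set down to its Pareto front, from which a decision maker must then pick one solution under a time budget.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of a set of objective vectors (minimization).

    A point p is dominated if some other point q is <= p in every
    objective and strictly < in at least one.
    """
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# Toy example with two objectives to minimize (e.g. travel time, energy):
costs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(costs)
# (3.0, 4.0) is dominated by (2.0, 3.0); the other three trade off against each other
```

Real multi-objective optimizers avoid this O(n²) pairwise scan at scale, but the dominance relation itself is the same.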

Takashi OGUCHI & Koichi SAKAI

Professor and Director of the ITS Center, The University of Tokyo & Associate Professor at the ITS Center

Thursday 31 January 2019 at 11:30 am, room GI042 (Blaise Pascal building, Université de Technologie de Compiègne)

Abstract:

The Advanced Mobility Research Center (ITS Center) at the Institute of Industrial Science (IIS), The University of Tokyo, is the first university research organization for ITS in Japan built on interfaculty collaboration, spanning civil/traffic, mechanical/control, and information/communication engineering. A Memorandum of Understanding was recently signed to facilitate academic cooperation between IIS and UTC. You are all very welcome to attend the seminar.

Christian WOLF

Associate Professor (HDR), LIRIS, INSA Lyon

Thursday 5 June 2018 at 2:00 pm, room GI042 (Blaise Pascal building, Université de Technologie de Compiègne)

Abstract:

We address human action recognition from RGB data and study the role of articulated pose and of visual attention mechanisms for this application. Articulated pose is well established as an intermediate representation capable of providing precise cues relevant to human motion and behavior. We describe two methods that use pose in different ways: either during both training and testing, or during training only.

The first method uses a trainable glimpse sensor to extract features at a set of predefined locations specified by the pose stream, namely the four hands of the two people involved in the activity. We show that it is highly beneficial to shift attention to different hands at different time steps, depending on the activity itself. The model not only learns to make choices relevant to the task, but also to draw attention away from joints that have been incorrectly located by the pose middleware.
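The core idea of reading features only at pose-specified locations can be sketched as follows. This is a toy illustration, not the speaker's actual model: the function name, window size, and average pooling are all assumptions made for clarity.

```python
import numpy as np

def glimpse_features(feature_map, joints, window=3):
    """Pool a small glimpse window around each joint location.

    Illustrates pose-driven attention: rather than processing the whole
    feature map, features are read only at locations supplied by the
    pose stream (e.g. hand joints).

    feature_map: (H, W, C) array of convolutional features
    joints: list of (row, col) coordinates from the pose stream
    window: half-size of the square glimpse around each joint
    """
    h, w, _ = feature_map.shape
    glimpses = []
    for r, c in joints:
        r0, r1 = max(0, r - window), min(h, r + window + 1)
        c0, c1 = max(0, c - window), min(w, c + window + 1)
        patch = feature_map[r0:r1, c0:c1, :]
        glimpses.append(patch.mean(axis=(0, 1)))  # average-pool the patch
    return np.stack(glimpses)  # (num_joints, C)

# Toy example: a 32x32 feature map with 8 channels, glimpses at two "hand" locations
fmap = np.random.rand(32, 32, 8)
hands = [(5, 5), (20, 25)]
feats = glimpse_features(fmap, hands)  # shape (2, 8)
```

In the actual method the glimpse sensor is trainable and the per-joint features feed a recurrent attention mechanism that decides which hand to attend to at each time step.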

The second method was designed to explicitly remove the dependency on pose during training, making it more broadly applicable in situations where pose is not available. Instead, a sparse representation of focus points is calculated by a dynamic visual attention model and passed to a set of distributed recurrent neural workers. State-of-the-art results are achieved on several datasets, including the largest dataset for human activity recognition, NTU-RGB+D.