
SyRI team seminars

  • 29/01/2026 – Eric Goubault, ISIPTA
    The seminar will cover the verification of cyber-physical systems using interval methods and beyond (imprecise probabilities), including systems with control loops. On the menu: zonotopes, validity tubes, and the like. The local seminar (attended mostly by CID members) convinced us that his research is of interest to the whole lab, hence this wider announcement.
  • 26/02/2026 – Florian Pouthier
    « Asynchronous Perception and Control on Quadrotors »
  • 24/03/2026 – Jesus Armando Miranda Moya, PhD student, Heudiasyc
    « Hybrid Control Strategies for Cyber-Physical Systems under Uncertainty: Model- and Data-based Approach »
  • 07/04/2026 – Tuan Le, PhD student, Heudiasyc
    « Open-Vocabulary 3D Object Detection via Multi-Stage Analytical Fitting »
  • 07/04/2026 – Julien Moreau, Associate Professor (Maître de conférences), Heudiasyc
    « Exchange with the Tiny Machine Learning and Embedded Computing Lab (tML-EC Lab) in the USA, and classification of event sequences with hyperdimensional computing (HDC) »
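The talk above mentions classifying event sequences with hyperdimensional computing. As a rough illustration only (not material from the talk; dimensionality, symbols, and sequences are invented for the example), the following sketch encodes symbol sequences with random bipolar hypervectors, a cyclic-shift permutation for position, and cosine similarity for classification.

```python
import random

D = 2048  # hypervector dimensionality (illustrative choice)
rng = random.Random(0)

def rand_hv():
    # Random bipolar hypervector: entries in {-1, +1}.
    return [rng.choice((-1, 1)) for _ in range(D)]

def permute(hv, k):
    # Cyclic shift encodes sequence position.
    k %= D
    return hv[-k:] + hv[:-k]

def encode(seq, item_memory):
    # Bundle (elementwise sum) of position-permuted symbol hypervectors.
    acc = [0] * D
    for pos, sym in enumerate(seq):
        for i, v in enumerate(permute(item_memory[sym], pos)):
            acc[i] += v
    return acc

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

symbols = {s: rand_hv() for s in "abcd"}
proto_ab = encode("ababab", symbols)   # class prototype 1
proto_cd = encode("cdcdcd", symbols)   # class prototype 2
query = encode("abab", symbols)        # shorter query from class 1
# The query is far more similar to the 'ab' prototype than to 'cd'.
print(cosine(query, proto_ab) > cosine(query, proto_cd))
```

Because hypervectors are high-dimensional and random, unrelated prototypes are nearly orthogonal, which is what makes this nearest-prototype classification robust.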
  • 15/12/2025 – Joao Pedro SANDRINI MILANEZI (Federal University of Espírito Santo, Brazil)
    « Security and Isolation for Critical Cloud Applications: Intelligent Transportation Systems »
    This presentation provides an overview of the work developed at the Federal University of Espírito Santo (UFES) within the project Slicing Future Internet Infrastructures, under the Scientific Initiation program Security and Isolation for Cloud-Hosted Critical Applications. It begins with an introduction to the presenter's academic background and the theoretical foundations supporting the research.
    The project's objectives are then presented, focusing on the implementation of a Smart Crossing Testbed that integrates a cloud environment with sensors and actuators deployed at a key pedestrian crossing on the UFES campus. Grounded in the literature review, the system requirements were defined, leading to the design and deployment of a digital infrastructure using the StarlingX edge-cloud platform and the FIWARE Smart Solutions framework. Subsequently, the obtained results are discussed, including the digital infrastructure's performance and the sensing advancements. These findings were consolidated into a paper submitted to the IEEE International Conference on Communications 2026. The presentation concludes with the planned next steps for the ongoing research at UFES and discusses the proposed project to be developed at Heudiasyc/UTC.
  • 05/12/2025 – Boris LABBE (artist in residence at UTC, 2025–2027)
    « Adaptive interaction in XR and robotics: a sensory experience between humans, a mixed-reality environment, and mobile robots »
    Boris Labbé is an artist and director born in 1987; his audiovisual works have left their mark on independent animation cinema as well as on digital art, both nationally and internationally. As part of his residency at the Université de Technologie de Compiègne (2025–2027), in collaboration with Pedro Castillo, Indira Thouvenin and Jossué Cariño, the artist will present his approach and what drew him to new media, virtual reality in particular. The project developed during the residency, Un Rêve Photosynthétique, is an experience combining virtual reality, interactive video projection and a mobile robot in an immersive forest universe. Technology there takes on poetic accents and is intended above all as a quasi-utopian sensory experience: a dialogue between humans, machines and trees.
  • 02/12/2025 – David ST-ONGE (professor, INIT ROBOTS laboratory, Canada)
    « Robots outside the lab: safe and resilient »
    Deploying robotic systems in complex environments requires putting safety and resilience first. We propose a unified approach in which materials, design, perception and collective intelligence interweave to strengthen robustness. Architected materials absorb shocks without weighing down the platforms, while mechanical design and contextual control shape stable, disturbance-tolerant dynamics. Advanced perception enriches this stability by anticipating risks, and collective intelligence amplifies the whole through cooperation and redundancy. Combined, these levers yield systems that are truly safe, adaptive and ready for the field.
  • 02/12/2025 – Alessandra Elisa SINDI MORANDO (PhD student, Heudiasyc)
    « Fleet Formation Control: LMI-Based and NMPC Approaches »
    Multi-agent systems are increasingly employed across numerous application domains due to their ability to accomplish complex tasks that exceed the capabilities of individual robots. This advantage is particularly evident in heterogeneous fleets, where the complementary strengths of different robotic platforms can be effectively leveraged. The development of distributed and robust control strategies for such systems is therefore of central importance. Although significant advances have been made in the context of linear systems, the design of distributed robust controllers for nonlinear multi-agent systems remains an open and active research problem. Two principal methodologies can be pursued: linearizing the nonlinear dynamics to enable the use of established linear control techniques, or directly employing nonlinear control tools. This presentation investigates both approaches in the context of formation control. The first part considers the linearization-based approach and its application to a formation composed of two unicycles and a quadcopter. The formation problem is defined as a min-max problem whose optimal control strategy is linear in the local measurements of each agent, and the matrix gains are obtained by solving a Linear Matrix Inequality (LMI). Artificial repulsive forces are added to avoid inter-agent collisions and obstacles. The proposed control scheme was validated both in simulation and through several experiments, involving both static and dynamic obstacles, as well as online fleet reconfiguration. The experimental results show that the agents can achieve the formation without crashes. The final part of the presentation focuses on the second approach, which investigates the use of Nonlinear Model Predictive Control (NMPC). After introducing the key principles and advantages of NMPC, supported by practical experimental examples, several simulation results are presented for both homogeneous and heterogeneous fleets performing formation control.
  • 02/10/2025 – Murillo FERREIRA DOS SANTOS (Associate Professor, CEFET-MG)
    « Artificial intelligence applied to identification and control of intelligent vehicles: Methods, cooperation, and challenges »
    This presentation introduces the ongoing international project "Artificial Intelligence applied to Identification and Control of Intelligent Vehicles", coordinated by UFLA (Brazil) with the participation of CEFET-MG (Brazil), UTC (France), the University of Waterloo and the University of Alberta (Canada), and Jilin University (China). The project aims to develop AI-based techniques for system identification, state estimation, predictive control, and fault detection in intelligent vehicles, combining black-box and grey-box models with advanced estimators such as Gaussian Process Regression and soft sensors. The talk will first connect with previous work on UAV modeling and control allocation, highlighting similarities with current challenges in intelligent vehicles. Then, it will detail the methodological steps. Finally, the presentation will discuss my specific role in the project, which is the design and experimental validation of hybrid controllers integrating AI-based estimations. The session will also emphasize the importance of international cooperation and opportunities for joint validation using UTC's simulators and experimental platforms.
  • 30/09/2025 – Jesus Armando MIRANDA MOYA (PhD student, Heudiasyc)
    « Robust Adaptive Integral Sliding Mode-based Motion Control Scheme for Autonomous Vehicle Dynamics under Uncertainties »
    This presentation introduces the design of a dynamics-based motion controller for a self-driving vehicle navigating under parametric uncertainties, sensor noise, and uncertain road friction conditions. Assuming the existence of onboard perception and localization systems, as well as a global path planner providing sequential waypoints, the positioning setpoints from the planner or from an elliptic limit-cycle obstacle avoidance algorithm are processed by a time-varying line-of-sight guidance law to generate steering references for the system. Then, an adaptive integral nonsingular terminal sliding mode controller is designed to achieve the desired heading and velocity states while ensuring robustness to bounded external disturbances and model uncertainties, practical finite-time convergence of the state error, and chattering attenuation. Moreover, a Lyapunov-based analysis guarantees the total stability of the cascade scheme in closed loop. Simulation results in Matlab demonstrate the robustness and effectiveness of the proposal in scenarios involving curved paths and overtaking maneuvers under time-varying road friction conditions, while handling parametric uncertainties and output noise. Finally, a quantitative study highlights the advantages of the controller over alternative strategies.
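To give a concrete feel for the chattering attenuation mentioned in the abstract above, here is a generic textbook sketch (not the controller from the talk; the plant, gains, and disturbance are all invented for illustration): for a disturbed double integrator, a sliding surface s = ė + λe with a saturated switching term drives the tracking error to a small neighborhood of zero while smoothing the control inside a boundary layer.

```python
import math

def sat(x, eps):
    # Saturation replaces sign() inside a boundary layer of width eps,
    # attenuating chattering at the cost of a small residual error.
    return max(-1.0, min(1.0, x / eps))

def simulate(T=8.0, dt=1e-3, lam=2.0, k=3.0, eps=0.05):
    # Double integrator x'' = u + d(t) with a bounded matched disturbance d.
    x, v = 1.0, 0.0          # initial error state (target is the origin)
    for step in range(int(T / dt)):
        t = step * dt
        d = 0.5 * math.sin(3.0 * t)      # bounded disturbance, |d| <= 0.5 < k
        s = v + lam * x                  # sliding variable s = e_dot + lam*e
        u = -lam * v - k * sat(s, eps)   # equivalent control + switching term
        v += (u + d) * dt                # forward-Euler integration
        x += v * dt
    return x, v

x_end, v_end = simulate()
# The error converges near zero despite the persistent disturbance.
print(abs(x_end) < 0.05 and abs(v_end) < 0.1)
```

Choosing the switching gain k larger than the disturbance bound is what guarantees reaching the boundary layer; shrinking eps tightens tracking but reintroduces chattering.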
  • 15/07/2025 – Masahiro MAE (Assistant Professor, University of Tokyo)
    « Multivariable High-Precision Motion Control with Structured Modeling and Data-Driven Convex Optimization »
    High requirements for the performance and flexibility of industrial mechatronics lead to the necessity of multivariable control. Multivariable high-precision motion control combining model-based and data-based approaches is suitable for mechatronic systems in industrial applications. On the model-based side, the dynamics of the multivariable controlled system should be captured in a model structure that respects the limitations of sampled-data characteristics and multi-modal flexibility, and the control approach should be implemented with physically intuitive tuning parameters for industrial applicability. On the data-based side, the tuning parameters of the multivariable controllers should be tuned through an intuitive process or data-driven optimization, to avoid excessive tuning effort when the controllers are deployed on industrial mechatronic systems. The multivariable high-precision motion control approaches are introduced from both sides: feedforward control for trajectory tracking and feedback control for disturbance rejection, with practical mechatronics applications.
  • 08/07/2025 – Alejandro MILLAN (PhD student, Heudiasyc)
    « Autonomous landing of a fixed-wing drone on a ground vehicle using a neuro-control strategy with theoretical guarantees »
    Landing fixed-wing drones presents a significant challenge because of the long distance required for the last phase of flight. Several studies have proposed recovery methods to reduce this distance, but because of the aircraft's speed these methods often damage the vehicles, making new solutions necessary. This work therefore proposes the coordinated, cooperative landing of a fixed-wing drone on a ground vehicle, minimizing the landing distance and avoiding damage to the aircraft. The landing stage follows an airspeed-reduction strategy, in which the ground vehicle reaches the touchdown point and captures the drone. For the experimental validation outdoors, a gain-adaptation controller based on backpropagation neural networks was developed, in order to study how neural networks reject or compensate for disturbances on the system.
  • 04/07/2025 – Stephany BERRIO PEREZ
    « Bridging Ideas and Roads: Implementing ITS in Australia »
    This presentation provides an overview of the theoretical and practical advancements achieved by the Intelligent Transportation Systems (ITS) group at the Australian Centre for Robotics, University of Sydney. Our work is dedicated to enabling autonomous systems in uniquely Australian environments, with a strong emphasis on local applications. We cover the process from collecting and annotating datasets in real-world local settings, to developing domain adaptation techniques that enhance the performance of 3D object detectors for these environments. The presentation also explores our research on collaborative perception and communication for navigation, as well as the development of perception systems for roadside units. Through these efforts, our aim is to contribute to the safe and effective deployment of connected and autonomous vehicles throughout Australia.
  • 26/06/2025 – Enrico ZERO (Assistant Professor, University of Genova, Italy)
    « From Sensors to Autonomous Intelligence: A Layered Framework for Safer and Smarter Transport Systems »
    Autonomous vehicles represent a convergence of sensing, decision-making, and control. In this seminar, I will present a layered framework for intelligent transport systems, bridging data acquisition and advanced control strategies to address the complex challenges of autonomy and safety in vehicular environments. My work is grounded in a Systems of Systems Engineering perspective (an approach I have actively contributed to as publication chair of IEEE SoSE 2025) and enriched by my role as Associate Editor of the IEEE Transactions on Intelligent Transportation Systems. At the foundation lies the sensing and monitoring layer, where heterogeneous data, from LIDAR, cameras, and inertial systems to physiological signals from human drivers, is collected and integrated. I will discuss multi-sensor fusion, anomaly detection, and predictive maintenance, with particular attention to safety-critical scenarios. A central focus will be my ongoing research into using brain activity as a real-time sensor to detect cognitive and attentional states of the driver, with a national patent pending on this novel monitoring system. Building upon this, the data analytics and optimization layer transforms raw data into structured decisions through classical and AI-enhanced approaches. I will present results on Vehicle Routing and Inventory Routing Problems (VRP/IRP), highlighting sustainable and adaptive logistics, and how transport information systems can support anticipatory planning and coordination. At the top layer, I focus on control and cooperation, especially in multi-agent vehicular systems. I will describe the development of a microcar platooning laboratory equipped with an indoor positioning system based on LIDAR and Bluetooth anchors. This testbed has supported experimental evaluation of distributed control algorithms, including Model Predictive Control (MPC) and the Alternating Direction Method of Multipliers (ADMM). I will also touch upon exploratory work on brain-computer interaction for control, including early experiments with switching and motion-based interfaces. Concluding, I will outline my vision for advancing research at UTC in autonomous driving and transport safety using real vehicles. My aim is to contribute to building safer, human-aware, and cooperative transport systems by integrating physical, cyber, and cognitive layers, ultimately enabling trustworthy autonomy grounded in Systems of Systems thinking.
  • 26/06/2025 – Vinicius MARIANO
    « Control Barrier Functions, Differentiable Distances, and Safety in Control Loops »
    In robotics, safety and operational constraints, such as obstacle avoidance and joint limit enforcement, can be addressed at two different levels: through motion/path planning (deliberative layer) and within the control loop itself (reactive layer). While planning-based approaches are essential for long-term safety, a reactive safety layer at the control level remains valuable, even when planning performs well. A popular method for enforcing constraints within control loops is to formulate them as optimization problems, using Control Barrier Function (CBF)-based inequalities to guarantee constraint satisfaction. In this talk, I will present my recent theoretical and practical work on this topic and highlight some of the key challenges that arise in implementation. Given that distance computation plays a central role in many safety-related constraints, I will place particular emphasis on one of my current research topics: differentiable distances.
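As a minimal illustration of the CBF-based safety filtering idea described above (a generic sketch under simplifying assumptions, not the speaker's formulation): for single-integrator dynamics ẋ = u and a circular obstacle, the barrier h(x) = ‖x − c‖² − r² yields one linear constraint ∇h·u ≥ −αh, and with a single active constraint the QP min ‖u − u_des‖² has a closed-form projection. All names and numbers here are invented for the example.

```python
def cbf_filter(x, u_des, c, r, alpha=1.0):
    # Single integrator x' = u; barrier h(x) = ||x - c||^2 - r^2.
    # Safety constraint: grad_h . u >= -alpha * h  (keeps h >= 0).
    hx = (x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2 - r ** 2
    g = (2.0 * (x[0] - c[0]), 2.0 * (x[1] - c[1]))   # gradient of h
    b = -alpha * hx
    gu = g[0] * u_des[0] + g[1] * u_des[1]
    if gu >= b:                      # nominal input already satisfies the CBF
        return u_des
    gg = g[0] ** 2 + g[1] ** 2
    lam = (b - gu) / gg              # closed-form QP solution (one constraint)
    return (u_des[0] + lam * g[0], u_des[1] + lam * g[1])

# Robot just outside an obstacle of radius 1, commanded straight at it:
u = cbf_filter(x=(1.2, 0.0), u_des=(-1.0, 0.0), c=(0.0, 0.0), r=1.0)
print(u[0] > -1.0)   # the filter slows the approach toward the obstacle
```

When the constraint is inactive the nominal input passes through untouched, which is exactly the "reactive safety layer on top of planning" behavior discussed in the abstract.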
  • 05/06/2025 – Mladen CICIC (University of California, Berkeley)
    « Closing the Lagrangian Traffic Control Loop: Modeling, Actuation, Sensing, and Reconstruction-based Control »
    As connected and automated vehicles (CAVs) enter the roads in increasing numbers, a new Lagrangian paradigm for traffic control is becoming possible. As opposed to classical, Eulerian traffic control, which requires additional stationary equipment, the Lagrangian approach uses CAVs as sensors and actuators, enabling new flexible solutions without relying on conventional road traffic management infrastructures. This talk discusses how CAVs can be directly used as major components of a traffic control loop. After giving some preliminaries about the traffic models, we first discuss the mechanisms that use CAVs to provide actuation and local traffic measurements. Since traffic measurements are now only available in the vicinity of CAVs, the full traffic state needs to be estimated and reconstructed before control can be applied. Additionally, if the traffic model is not known a priori, the reconstruction data can be used to identify the dynamics, along with the model describing the influence of CAVs on the rest of the traffic. Based on the traffic state predictions acquired from the learned model, we are able to implement a control law that dissipates congestion and improves throughput. Finally, an outline of how Lagrangian sensing can be practically implemented and used is presented through experiments involving a Lagrangian actuator and probe vehicles.
  • 03/06/2025 – Alejandro MILLAN (PhD student, Heudiasyc)
    « Autonomous landing of a fixed-wing drone on a ground vehicle using a neuro-control strategy with theoretical guarantees »
    Landing fixed-wing drones presents a significant challenge because of the long distance required for the last phase of flight. Several studies have proposed recovery methods to reduce this distance, but because of the aircraft's speed these methods often damage the vehicles, making new solutions necessary. This work therefore proposes the coordinated, cooperative landing of a fixed-wing drone on a ground vehicle, minimizing the landing distance and avoiding damage to the aircraft.
         The landing stage follows an airspeed-reduction strategy, in which the ground vehicle reaches the touchdown point and captures the drone. For the experimental validation outdoors, a gain-adaptation controller based on backpropagation neural networks was developed, in order to study how neural networks reject or compensate for disturbances on the system.
  • 03/06/2025 – Tuan LE (PhD student, Heudiasyc)
    « Toward Open-Vocabulary 3D Object Detection in Urban Environments »
    Accurate 3D object detection from LiDAR point clouds is fundamental for autonomous driving and understanding complex urban environments. Although deep learning has led to significant advancements in this field, most current 3D detectors operate under a closed-set assumption, meaning they can only identify a predefined set of object categories that have been manually labeled in training datasets. However, in real-world scenarios, the ability to detect novel or infrequent objects such as traffic cones, shopping carts, street signs, or non-standard vehicles is crucial for ensuring the safety and robustness of autonomous systems. In parallel, open-vocabulary (OV) object detection has garnered increasing attention within the 2D vision community. This approach empowers models to identify objects based on arbitrary textual descriptions, including categories that were not present during training. Inspired by this success, we explore extending open-vocabulary capabilities to the 3D domain. Specifically, a promising strategy involves leveraging pre-trained 2D vision-language models (VLMs) to perform object detection in images, and then projecting the resulting 2D bounding boxes into 3D space using camera-LiDAR calibration and geometric transformations. This approach combines the semantic understanding of 2D VLMs with the spatial precision of LiDAR-based geometry, offering a low-cost alternative that does not rely on additional 3D annotations.
  • 20/05/2025 – Shan HE (PhD student, Heudiasyc)
    « Sustainable Smart City Mobility Enabled by Cooperative Control Flow Optimization of Connected Autonomous Vehicles »
    Connected Autonomous Vehicles (CAVs) are transforming the driving environment of urban traffic, particularly at unsignalized intersections. In this work, we first present a Predicted Inter-Distance Profile based Multi-Risk Management Cooperative Optimization method (MRMCO-PIDP), which enables cooperative decision-making and trajectory planning among multiple vehicles at unsignalized intersections to effectively avoid collisions. Each CAV identifies potentially conflicting vehicles based on its predicted trajectory and engages in collaborative negotiation to determine the safest and most efficient strategy using a unified cost function.
         Subsequently, we explore how this method can be extended to continuous traffic scenarios by introducing a Contextual-Graph Topology approach, which identifies vehicles with potential collision risks and reconstructs the communication topology accordingly, thereby reducing unnecessary computational load and improving operational efficiency. The effectiveness of the proposed method has been validated through simulations under randomly generated scenarios.
  • 20/05/2025 – Ivan GUTTIEREZ (PhD student, Heudiasyc)
    « From Autonomous Navigation to Neuromorphic Perception with Heterogeneous Platforms »
    Keeping infrastructure in good condition is essential to ensure safety and services. However, the large number of assets, their dimensions and their frequent location in remote areas make frequent, exhaustive inspections difficult. Robots are promising solutions to this problem, which usually involves high costs and human risks. For instance, inspecting viaducts means checking the state of very high elements; given their complex locations, the use of cranes is rarely possible, and workers have to risk their lives climbing in order to carry out inspection and maintenance tasks. In addition to the height and the danger, power lines impose a further problem: they have huge dimensions, as they connect cities separated by many kilometers. Aerial robots are proposed as strong candidates to tackle this problem, and some solutions will be presented.
         Aerial robots also come with problems of their own, such as high power consumption and the danger posed by multirotor blades. Flapping-wing robots are an emerging bioinspired technology that can address some of these problems. However, despite being safe for humans and able to save energy by switching to gliding mode, their development is considerably difficult. In particular, perception becomes a very hard problem given the strong vibrations produced by the flapping movements. Event cameras are neuromorphic sensors inspired by the human retina, and their characteristics seem to suit these robots particularly well. The integration of these sensors on flapping-wing robots is experimentally evaluated. The use of event cameras for other tasks and other types of platforms is also considered.
  • 13/05/2025 – Emmanuel ALAO (PhD student, Heudiasyc)
    « Safe and Uncertainty-aware Multi-Risk Fusion for Autonomous Navigation in the presence of PLEVs »
    This thesis presents a Hierarchical Decision Architecture for autonomous vehicles that integrates high-level strategic decision making with trajectory planning, using Deep Reinforcement Learning (DRL) and dynamic risk assessment. The framework unifies both longitudinal and lateral decision-making, covering tasks such as adaptive cruise control, lane changes, and obstacle avoidance, while ensuring safety, efficiency, and adaptability in complex driving scenarios. The approach is structured in three parts: development of a DRL-based Adaptive Cruise Control (ACC) module for responsive car-following, integration of a risk assessment module for proactive hazard anticipation, and unification of longitudinal and lateral decision-making to enable coherent, risk-aware planning. Emphasis is placed on learning interpretable and adaptable behaviors based on real-time traffic conditions. Simulations in a joint Simulink/MatLab and SCANeR™ Studio environment show that the proposed architecture demonstrates smooth, safe, and context-aware behaviors. This work contributes to autonomous driving by offering a scalable, learning-based decision-making framework that bridges strategic planning with real-world driving dynamics.
  • 22/04/2025 – Dany GHRAIZI
    « Design of Integrated Decision-Making and Trajectory Planning Architectures for Autonomous Vehicles using Deep Reinforcement Learning and Risk Assessment »
    This thesis presents a Hierarchical Decision Architecture for autonomous vehicles that integrates high-level strategic decision making with trajectory planning, using Deep Reinforcement Learning (DRL) and dynamic risk assessment. The framework unifies both longitudinal and lateral decision-making, covering tasks such as adaptive cruise control, lane changes, and obstacle avoidance, while ensuring safety, efficiency, and adaptability in complex driving scenarios. The approach is structured in three parts: development of a DRL-based Adaptive Cruise Control (ACC) module for responsive car-following, integration of a risk assessment module for proactive hazard anticipation, and unification of longitudinal and lateral decision-making to enable coherent, risk-aware planning. Emphasis is placed on learning interpretable and adaptable behaviors based on real-time traffic conditions. Simulations in a joint Simulink/MatLab and SCANeR™ Studio environment show that the proposed architecture demonstrates smooth, safe, and context-aware behaviors. This work contributes to autonomous driving by offering a scalable, learning-based decision-making framework that bridges strategic planning with real-world driving dynamics.
  • 01/04/2025 – Thibault CHARMET (PhD student, Heudiasyc)
    « Monitoring Operational Design Domain Compliance in Intelligent Vehicles »
    Advanced driver assistance systems (ADAS) and automated driving functions are becoming integral to modern vehicles. Ensuring their safety and reliability requires validating their operation within well-defined Operational Design Domains (ODD). Monitoring ODD compliance is crucial to determine when these functions can operate safely.
    We present a systematic approach to ODD monitoring using a formalized, machine-readable ODD description and fuzzy logic. The method evaluates compliance and provides interpretable explanations for non-compliance, enhancing transparency and trustworthiness. The approach introduces a two-level hierarchical ODD representation, a membership score quantifying compliance, and an explanation mechanism clarifying deviations. The monitoring results are integrated into a Conditional Activation Control System (CACS), which governs function activation based on ODD compliance. The proposed system was implemented within a production vehicle and validated using real-world data, demonstrating its feasibility for deployment.
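To illustrate the flavor of a fuzzy-logic ODD membership score with an explanation mechanism (an invented toy example, not the formalism from the thesis; the attributes, thresholds, and trapezoidal shapes are all assumptions): each attribute gets a membership function, and a conservative compliance score is the minimum over attributes, which also names the least-compliant attribute as the explanation.

```python
def trapezoid(x, a, b, c, d):
    # Trapezoidal membership: 0 outside [a, d], ramps up on [a, b],
    # fully compliant (1.0) on [b, c], ramps down on [c, d].
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical ODD attributes: speed (km/h) and visibility (m).
odd = {
    "speed": lambda v: trapezoid(v, 0, 5, 110, 130),
    "visibility": lambda v: trapezoid(v, 50, 150, 1e9, 2e9),
}

def compliance(measurements):
    # Conservative aggregation: the worst attribute limits the score,
    # and its name serves as the non-compliance explanation.
    scores = {k: odd[k](measurements[k]) for k in odd}
    worst = min(scores, key=scores.get)
    return scores[worst], worst

score, reason = compliance({"speed": 120, "visibility": 500})
print(score, reason)   # partial compliance, limited by speed
```

A graded score like this (rather than a hard in/out decision) is what lets a supervisor anticipate leaving the ODD before compliance actually drops to zero.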
  • 01/04/2025 – Benjamas Yui PANOMRUTTANARUG
    « Tracking Control in Autonomous Driving »
    The ability of autonomous vehicles to accurately follow predefined paths is critical to their performance and safety. This talk presents an overview of tracking control methodologies in autonomous driving, emphasizing both theoretical and practical aspects. We start by introducing fundamental vehicle models, including kinematic and dynamic representations, highlighting their roles and limitations in control design. Next, we delve into various tracking control techniques, categorizing them into non-model-based approaches, such as PID control, Stanley, Pure Pursuit (PP), and Iterative Learning Control (ILC), and model-based methods, notably the Linear Quadratic Regulator (LQR) and Model Predictive Control (MPC). Each method's principles, advantages, and suitable applications are examined, supported by insights from real-world experimental results. The talk aims to provide participants with a clear understanding of current tracking control strategies and their effective implementation in autonomous driving systems.
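Of the non-model-based trackers listed above, Pure Pursuit is compact enough to sketch in a few lines (the standard textbook form for a kinematic bicycle model, with an illustrative wheelbase; not code from the talk): given a lookahead point at distance l_d and bearing α relative to the vehicle heading, the steering angle is δ = atan(2 L sin α / l_d).

```python
import math

def pure_pursuit_steering(x, y, yaw, lookahead_pt, wheelbase):
    # Bearing alpha of the lookahead point relative to the heading,
    # then the classic law: delta = atan(2 * L * sin(alpha) / l_d).
    dx = lookahead_pt[0] - x
    dy = lookahead_pt[1] - y
    l_d = math.hypot(dx, dy)               # lookahead distance
    alpha = math.atan2(dy, dx) - yaw
    return math.atan2(2.0 * wheelbase * math.sin(alpha), l_d)

# Vehicle at the origin heading along +x, target 5 m ahead and 1 m to the left:
delta = pure_pursuit_steering(0.0, 0.0, 0.0, (5.0, 1.0), wheelbase=2.7)
print(delta > 0.0)   # positive angle steers left, as expected
```

The lookahead distance is the single tuning knob: short lookaheads track tightly but oscillate, long ones smooth the response but cut corners, which is one reason the talk contrasts such geometric methods with LQR and MPC.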
  • 25/02/2025 – Fadel TARHINI (PhD student, Heudiasyc)
    « On Energy Efficiency In Motion Planning for Autonomous Vehicles »
    Achieving safe, smooth, and energy-efficient navigation remains a critical challenge for autonomous vehicles, particularly in dynamic environments. This work presents an integrated trajectory planning framework that enhances energy efficiency and safety by addressing both path and speed planning. A hybrid path planning approach is introduced, combining a sampling-based method for rapid feasibility assessment with an optimization-based refinement step to generate smoother and more efficient paths. Additionally, a jerk-controlled speed planning strategy, based on quintic polynomials, dynamically adjusts speed by considering road curvature, gradient, adherence, and obstacle interactions. By regulating jerk, the speed planning method not only improves passenger comfort and vehicle stability but also enhances energy efficiency. The proposed framework is validated through a joint simulation between Simulink/Matlab and SCANeR Studio, demonstrating significant improvements in energy efficiency, stability, comfort, and computational performance.
  • 07/02/2025 – Joris TILLET
    « Verification of uncertain dynamical systems with set-membership methods »
    To characterize and safely control real dynamical systems, all possible uncertainties must be taken into account. These can come from sensor noise and from approximations in the system model, but also from missing or unreliable information (outliers, inconsistent data, etc.). Yet we still want to guarantee the safety and behavior of these systems. This presentation covers: the validation of a controller using interval analysis, the localization of an underwater robot relying on fuzzy logic, and the verification of specifications formalized in a temporal logic using interval analysis. So-called "stochastic" uncertainties, mainly due to sensor noise, are handled by assuming them bounded and representing them as intervals. "Epistemic" uncertainties, corresponding to the reliability of the information, are represented by fuzzy sets that can be characterized with an interval-based approach.
  • 07/02/2025 – Narsimlu KEMSARAM
    « AcoustoBots: A Swarm of Robots for Acoustophoretic Multimodal Interactions »
    Acoustophoresis has enabled novel interaction capabilities, such as levitation, volumetric displays, mid-air haptic feedback, and directional sound generation, to open new forms of multimodal interactions. However, its traditional implementation as a singular static unit limits its dynamic range and application versatility. This work introduces "AcoustoBots", a novel convergence of acoustophoresis with a movable and reconfigurable phased array of transducers for enhanced application versatility. We mount a phased array of transducers on a swarm of robots to harness the benefits of multiple mobile acoustophoretic units. This offers a more flexible and interactive platform that enables a swarm of acoustophoretic multimodal interactions. Our novel AcoustoBots design includes a hinge actuation system that controls the orientation of the mounted phased array of transducers to achieve high flexibility in a swarm of acoustophoretic multimodal interactions. In addition, we designed a BeadDispenserBot that can deliver particles to trapping locations, which automates the acoustic levitation interaction within our platform. These attributes allow AcoustoBots to independently work for a common cause and interchange between modalities, allowing for novel augmentations (e.g., a swarm of haptics, audio, and levitation) and bilateral interactions with users in an expanded interaction area.
  • 03/02/2025 – Julien DUCROCQ
    « Adaptive visual perception, suited to heterogeneous environments »
    Abstract: Conventional cameras are not always suited to our environments: on the one hand, these may contain dark and bright areas with too much contrast; on the other hand, they may contain key elements at varying levels of detail. To capture more visual information about such environments, I worked on two catadioptric cameras during my PhD. The first, HDROmni, is a multi-ocular omnidirectional HDR camera, made of four mirrors and three neutral-density filters. HDROmni can capture both the dark and the brightly lit areas of outdoor environments. It has been tested in mobile robotics. The second, Visadapt, is a classical lens facing a deformable mirror. The mirror's shape is computed to magnify specific elements of the image without losing the context. Currently, my research aims to develop a virtual reality (VR) experience named Virtual Binoculars. This VR application will allow exploring an environment while pointing a lens to observe animals or ancient artifacts, magnifying them like binoculars but without losing the context. This work extends the contributions presented in my thesis to virtual reality.
  • 28/01/2025 – Alejandro TEVERA RUIZ (PhD student, Heudiasyc)
    « Robust neuro-controllers for intelligent robotics systems »
    The rise of artificial intelligence has driven significant progress in extracting and utilizing information from complex environments, enabling advancements in trajectory planning and control. This presentation introduces innovative approaches to robust and adaptive control strategies, focusing on applications in cooperative robotics and UAV navigation. For robotic systems, the proposed methodology addresses challenges such as stability, adaptability, and tuning by incorporating expert knowledge into learning-based control frameworks. Similarly, in UAV navigation, the focus is on enhancing autonomy, energy efficiency, and safety in challenging environments with dynamic elements and disturbances. By merging artificial intelligence with control theory, these strategies demonstrate the potential for creating adaptive and reliable systems capable of thriving in demanding scenarios. This presentation highlights key insights and lays the groundwork for advancing intelligent autonomous systems.
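The tracking-control talk above (01/04/2025) surveys geometric path-tracking laws such as Pure Pursuit. As a rough, self-contained illustration of that family (not code from the talk; the function name and kinematic-bicycle assumptions are mine), the classic Pure Pursuit steering law can be sketched as:

```python
import math

def pure_pursuit_steering(x, y, yaw, wheelbase, goal):
    """Pure Pursuit: steer toward a look-ahead point on the path.

    (x, y, yaw) is the rear-axle pose of the vehicle; goal is the
    look-ahead point (gx, gy) on the reference path. Returns the
    front-wheel steering angle for a kinematic bicycle model.
    """
    # Bearing of the look-ahead point relative to the vehicle heading
    alpha = math.atan2(goal[1] - y, goal[0] - x) - yaw
    # Look-ahead distance
    ld = math.hypot(goal[0] - x, goal[1] - y)
    # Classic pure-pursuit law: delta = atan(2 L sin(alpha) / ld)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)
```

A goal point straight ahead yields zero steering; a goal to the left (right) yields a positive (negative) angle, which is why the law is popular as a simple non-model-based baseline.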

02/07/2024 – SYRI and CID Collaborative Workshop
- Statistical guarantees for object detection
- Harnessing Superclasses for Learning from Hierarchical Databases
- Enhancing Localization through Perception: Applications of Vector Maps
- Introduction to neural network architectures for event processing
- Uncertainty estimation for deep-learning-based sensor-to-sensor calibration
- Beginning-of-thesis presentation: Uncertainty and integrity estimation for machine-learning-based perception systems

21/05/2024 – Minh Quan Dao (INRIA ACENTAURI)
« Toward Solving Occlusion and Sparsity in Deep Learning-Based 3D Object Detection Through Collaborative Perception »

16/05/2024 – Robin Condat (Université de Picardie Jules Verne, Laboratoire MIS)
« Contribution to improving the robustness of perception systems based on multimodal deep neural networks »

15/05/2024 – Jordan Caracotte (Université de Picardie Jules Verne, Laboratoire MIS)
« 3D reconstruction by photometric stereo for omnidirectional vision »

14/05/2024 – Achref Elouni (Université Clermont Auvergne, Institut Pascal, remotely)
« Deep learning and AI for improving the robustness of vision-based localization techniques »

19/03/2024 – Zongwei Wu (postdoctoral researcher at the University of Würzburg)
« Single-Model and Any-Modality for Video Object Tracking »

19/03/2024 – Purva Joshi (PhD student at the University of Eindhoven)
« Efficiently Navigating Autonomous Vehicles Around Intersections »

13/02/2024 – Grégoire Richard (PhD student, Heudiasyc)
« Haptic feedback for embodied and social interactions in virtual reality »

05/12/2023 – Mario Cervantes (intern, Heudiasyc)
« Hexarotor with Tilted Motors and PD Control: From Concept to Real-Time Flight Test »

05/12/2023 – Servando Encina (intern, Heudiasyc)
« Quad-rotor trajectory following »

05/12/2023 – Alberto Varela and Diego Gandulfo (interns, Heudiasyc)
« Quaternion-observer control for aerial drones: application to object pickup »

26/09/2023 – Chenao Jiang (intern)
« Event-based Semantic-aided Motion Segmentation »

26/09/2023 – Trista Lin & Xiaoting Li (Stellantis)
« Redefining Mobility Perspectives: The Synergy of Real-Time Networks and Software-Defined Vehicles »

12/07/2023 – CVPR conference report
Julien Moreau & Vincent Brebion

27/06/2023 – Maciej Michalek (Poznan University of Technology, Poland)
« Automated Articulated Vehicles – Modelling, Properties and Control »

23/06/2023 – David Bevly, Professor, Auburn University, USA

20/06/2023 – Maxime Noizet, Hugo Pousseur, Kévin Bellingard (PhD students, Heudiasyc)
IV conference report

23/05/2023 – Shan He (PhD student, Heudiasyc) | Hugo Pousseur (PhD student, Heudiasyc)
« Multi-Vehicle Decision-Making Cooperation in Intersection based on Probability Collective Algorithm »
« Presentation of work on driving prediction »

14/03/2023 – ODD / Ethics seminar

07/02/2023 – Perception Workshop
Multi-sensor vision | Perception uncertainty

13/12/2022 – Bertrand Ducourthial (Professor, Heudiasyc)
« Dynamic communication networks »

25/10/2022 – Emmanuel Alao (PhD student, Heudiasyc)
« Uncertainty-aware Navigation in Crowded Environment »

13/09/2022 – Bruno Barbosa (Federal University of Lavras, Brazil, Department of Automatics)
« Application of Artificial Intelligence in Systems Identification and Intelligent Vehicles »

04/07/2022 – Hiroshi Fujimoto (Professor, University of Tokyo)
« Advanced control of electric vehicles and development of wireless in-wheel motors »

12/07/2022 – Presentation of ANR projects
Annapolis (Julien Moreau) | V3EA (Reine Talj) | TOiCar (Joëlle Al Hage)

21/06/2022 – Armando Alatorre Sevilla (PhD student, Heudiasyc)
« Dynamic trajectory for landing an aerial vehicle on a mobile platform »

21/06/2022 – Ali Hamdan (PhD student, Heudiasyc)
« Transition Management Between an Autonomous Vehicle and a Real Human Driver, in a Context of Take-Over Request »

21/06/2022 – Maxime Escourrou (PhD student, Heudiasyc)
« Decentralized Collaborative Localization with Map Update using Schmidt-Kalman Filter »

14/06/2022 – Joëlle Al Hage (associate professor, Heudiasyc)
« Robust Student's t-filter for a tightly coupled data fusion: Evaluation of integrity and accuracy »

31/05/2022 – Diego Mercado-Ravell (visiting researcher from CIMAT-Zacatecas, Mexico)
« Navigation, perception and control of autonomous vehicles »

10/05/2022 – Armando Alatorre (PhD student, Heudiasyc)
« Landing of a fixed-wing unmanned aerial vehicle in a limited area »

05/04/2022 – Thibaud Duhautbout (PhD student, Heudiasyc)
« Trajectory planning for autonomous vehicles in urban environments »

15/03/2022 – Human-vehicle interaction workshop
Indira Thouvenin & Baptiste Wojtkowski: Informed mixed environments: adaptive feedback
Reine Talj & Ali Hamdan: Cooperative human-machine control of a semi-autonomous vehicle
Alessandro Correa & Hugo Pousseur: Shared human-autonomous system navigation of automated vehicles

22/03/2022 – Jorge Arizaga (PhD student at Tecnológico de Monterrey, Mexico)
« Observer-Based Trajectory Adaptive Control for Suspended Payload Swing Damping on a Fully-Actuated Hexacopter UAV »

04/03/2022 – Johann Laconte (PhD student, Institut Pascal)
« Lambda-Field: a new method for risk assessment in occupancy grids »

02/03/2022 – Pierre Nemry (technical manager, Septentrio)
« Activities carried out at Septentrio, a GNSS receiver manufacturer, in the context of the European project ERASMO »

23/02/2022 – Dmytro Bobkov (computer vision and AI engineer at Artisense)
« The vision-based localization solution of the German company Artisense »

15/02/2022 – Jossué Cariño
« Drone fleet in cooperation for pursuing an intruder drone »

04/01/2022 – Edouard Capellier (former PhD student, now at Motional Singapore)
« Presentation of Motional in general, the nuScenes challenge, and nuPlan »

30/03/2021 – Na Li (postdoc, Heudiasyc)
« Combination of supervised and unsupervised classification methods for remote sensing images »

06/04/2021 – Arjun Balakrishnan (PhD carried out at the SATIE laboratory, Paris-Saclay)
« The Integrity of Data Sources in Intelligent Vehicles »

14/09/2021 – Fadi Dornaika (research professor at Ikerbasque, Basque Foundation for Science, Bilbao, Spain, and the University of the Basque Country UPV/EHU, San Sebastián, Spain)
« Research activities in the domains of machine learning and pattern recognition »

05/10/2021 – Corentin Sanchez
« World Model for intelligent autonomous vehicles »

26/10/2021 – Alexis Offermann (PhD student, Heudiasyc)
« The Hydraulically Propelled Drone » (DPH project, inter-laboratory project with Roberval)

Gildas BAYARD, Stéphane BONNET and Thierry MONGLON

Engineers at the Heudiasyc laboratory

Tuesday, 15 December 2020, 2:00 pm

Abstract: Report on the "Safety driver" training.

Luc Jaulin

Professor, ENSTA-Bretagne

Friday, 11 December 2020, 3:30 pm

Abstract: I will present joint work with Julien Damers (PhD student) and Simon Rohou (co-supervisor).
In robotics, localization and SLAM problems generally have some symmetries (translation, rotation, scale, time invariance, etc.).
Moreover, as for many state estimation problems, we generally need a reliable propagation of uncertainties through nonlinear differential equations.
In this talk, I will show that symmetries make it possible to drastically improve the accuracy of these propagations.
As an illustration, interval propagation will be considered, but a particle approach could be used as well.
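To make the notion of interval propagation concrete, here is a toy sketch (my own illustration, not from the talk): one guaranteed Euler step of the dynamics f(x) = -x³, chosen because x³ is monotone, so the image of an interval is just the image of its endpoints. Real tools, like those discussed in the talk, also bound the truncation error of the integration scheme, which this sketch does not.

```python
def euler_interval_step(x, dt):
    """One guaranteed step of the interval Euler map x <- x + dt*f(x)
    for f(x) = -x**3. Since x**3 is increasing, -x**3 attains its
    minimum at the upper bound and its maximum at the lower bound.
    This encloses the discrete Euler iteration only, not the exact
    continuous-time flow."""
    fx_lo = -(x[1] ** 3)   # min of -x^3 on [lo, hi]
    fx_hi = -(x[0] ** 3)   # max of -x^3 on [lo, hi]
    return (x[0] + dt * fx_lo, x[1] + dt * fx_hi)

def propagate(x0, dt, steps):
    """Propagate an initial interval box through several Euler steps."""
    x = x0
    for _ in range(steps):
        x = euler_interval_step(x, dt)
    return x
```

Every pointwise Euler trajectory started inside the initial interval stays inside the propagated interval; the symmetry arguments of the talk aim precisely at keeping such enclosures from growing pessimistically over time.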

Hélène Piet-Lahanier

Scientific Deputy, ONERA

Friday, 11 December 2020, 2:30 pm

Abstract: Applications of drone fleets have developed widely in recent years. One such application is the search, detection, and tracking of mobile targets over a potentially vast area. The effectiveness of the chosen search strategy depends on the availability, quality, and reliability of the information gathered by the drones. Estimating target locations is only possible when the targets fall within the field of view of the sensor on board a given drone. In most cases, measurement uncertainties on such sensors are modeled as additive noise, usually assumed to be zero-mean Gaussian, with a variance reflecting the quality of the measurement. The resulting localization performance can prove sensitive to the a priori assumptions on the probability density functions (pdfs) describing the process and measurement noises.
An alternative to the probabilistic description is to represent uncertainties and unknowns as bounds and to exploit this information to identify the areas containing targets and those containing none.
The approaches presented here exploit this type of uncertainty representation to determine drone motion strategies for exploring an area, detecting targets, and tracking them in a cooperative and distributed way. They take into account communication possibilities and the presence of obstacles and decoys, i.e., objects that can, under certain conditions, be mistaken for a real target.

Tuesday, 1 September 2020, 2:00 pm

  • IV (19 October – 13 November 2020): Stefano Masi, Federico Camarda
  • IROS (25 October – 25 November 2020): Anthony Welte

Antoine LIMA

PhD student at the Heudiasyc laboratory

Tuesday, 24 November 2020, 2:00 pm

Abstract: In dynamic localization problems, the observations used from exteroceptive sensors are usually obtained from a single measurement. However, there are cases where the current measurement is not sufficient to detect the referenced landmark or to reach a sufficient level of accuracy. In this study, a point cloud accumulation strategy is used to improve the resolution of a LiDAR sensor along its sparse axis. In particular, we are interested in the detection of markings transverse to the road axis in order to improve the localization accuracy of an autonomous vehicle when approaching intersections or roundabouts. We present a method that allows the construction of an accurate observation with an associated observation model based on a High-Definition (HD) map through an accumulation of scans as the vehicle moves, by compensating for the vehicle motion. The parameters of the accumulator are studied in terms of detection and accuracy. The quality of the observations and their impact on the localization quality are analyzed using real experiments carried out with an experimental vehicle equipped with a low-cost GNSS receiver, dead-reckoning sensors and a ground truth system.
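The core of the scan-accumulation idea, transforming each scan into a common frame using the vehicle pose before stacking, can be sketched in 2D as follows (an illustrative toy, not the thesis code; the function names and the (x, y, heading) pose format are my assumptions):

```python
import math

def compensate(points, pose):
    """Transform 2D LiDAR points from the sensor frame at acquisition
    time into a fixed working frame, given the vehicle pose
    (x, y, heading in radians) at that time."""
    x0, y0, th = pose
    c, s = math.cos(th), math.sin(th)
    # Rotate each point by the heading, then translate by the position
    return [(x0 + c * px - s * py, y0 + s * px + c * py) for px, py in points]

def accumulate(scans_with_poses):
    """Accumulate successive scans into one denser point cloud by
    compensating the vehicle motion between scans."""
    cloud = []
    for points, pose in scans_with_poses:
        cloud.extend(compensate(points, pose))
    return cloud
```

Because each scan is re-expressed in the same frame, points from successive sweeps densify the sparse axis of the sensor instead of smearing with the vehicle motion.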

Michaël MORDEFROY

Research engineer at the Heudiasyc laboratory

Tuesday, 17 November 2020, 2:00 pm

Abstract: Presentation of the laboratory's dataset platform.

Tuesday, 22 September 2020, 2:00 pm, in the Colcombet lecture hall

  • ECCV (23–28 August 2020): Julien Moreau, Vincent Brebion
  • ICUAS (9–12 June 2020): Julio Betancourt

Cristino DE SOUZA JUNIOR

PhD student at the Heudiasyc laboratory

Tuesday, 23 June 2020, 2:00 pm

Abstract: In this seminar, we will talk about my main thesis subject: the design of multi-agent strategies for tracking and intercepting a non-cooperative agent, and its application to mobile robots.
The main motivation for this work is the growing demand for anti-drone solutions, since intruder drones flying over restricted areas, such as power plants and airports, have become a common news headline in recent years.
We will discuss current multi-agent techniques and our main contribution: the use of guidance laws as chasing behavior for the drones. Finally, I will present some experimental results and talk about future work and applications.

Julio BETANCOURT

PhD student at the Heudiasyc laboratory

Tuesday, 2 June 2020, 4:00 pm

Abstract: One of the motivations of this thesis is to analyze the performance of a quadcopter when following a mobile target in unknown environments. A further challenge is for the vehicle to avoid static and mobile obstacles. For this, it is necessary to develop algorithms fast enough to detect and track a mobile target while consuming little memory and few computational resources. Thus, a scheme for aerial visual servoing of a mobile ground robot tracking a smooth vector field is proposed. The scheme is based on structural properties and constraints of both systems, such as non-holonomy, nonlinear dynamics and underactuation. The result is aerial surveillance of an autonomous vehicle mimicking how we drive a real vehicle, by locally redefining a smooth velocity field toward the next target through admissible paths.

Alexis OFFERMANN

CIFRE PhD student at the Heudiasyc laboratory

Tuesday, 26 May 2020, 1:00 pm

Abstract: The rapid development of drone technologies enables high-quality imaging from the sky. To widen the technical possibilities, a project was started to bring tools directly into contact with buildings. For this, an innovative kind of aerial vehicle has been developed.
The particularity of this robot is that the system can morph from a conventional quad- (or octo-) copter into a system with a tilted body carrying a tool at its core. This keeps the position constant in the inertial frame and gives access to fully independent degrees of freedom (from 4 for a regular drone to 6 for this hybrid form). Eight actuators are used, making the system over-actuated. A nonlinear dynamic model is obtained, and the system is controlled by feedback linearization to obtain a linear system. Multiple control techniques are applied to guarantee stability in all states simultaneously.
A prototype has been developed, and real-time experiments validate the behavior of the robot. A very visual and user-friendly platform has also been designed to support exhaustive testing.

Anthony WELTE

PhD student at the Heudiasyc laboratory

Tuesday, 19 May 2020, 2:00 pm

Abstract: Localization is critical for the safety of autonomous vehicles. Accurate localization can be reached thanks to perception sensors such as Lidars and cameras and using highly accurate maps (HD maps). Localization with such sensors is, however, difficult as accurate matching needs to be obtained between observations and map features. Moreover, maps can be incomplete or become outdated when the environment changes.

In this thesis, we study using temporal buffers and maps to improve localization. In particular, using a state estimate buffer and an observations buffer has been found to be helpful to match observations to map features, as it provides a more detailed representation of the environment, therefore reducing the matching ambiguities that can occur.

Additionally, keeping states and observations in memory enables evaluating the accuracy of map features. The features for which observation residuals are higher than expected can be detected, to either be discarded in the estimation or be corrected for later use.
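That residual test can be sketched as a simple threshold rule (my own illustration, not the thesis's actual criterion; the function name, the RMS statistic and the 3-sigma default are assumptions):

```python
def flag_outdated_features(residuals, sigma, k=3.0):
    """Flag map features whose buffered observation residuals are
    higher than expected, so they can be discarded from the
    estimation or corrected for later use.

    residuals: {feature_id: [residual, ...]} accumulated over the
    temporal buffer; sigma: expected residual standard deviation.
    """
    flagged = set()
    for fid, rs in residuals.items():
        # Root-mean-square residual over the buffered observations
        rms = (sum(r * r for r in rs) / len(rs)) ** 0.5
        if rms > k * sigma:
            flagged.add(fid)
    return flagged
```

Averaging over the buffer rather than testing a single residual is what makes the decision robust to isolated noisy observations.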

Belem ROJAS

PhD student at the Heudiasyc laboratory

Tuesday, 12 May 2020, 4:30 pm

Abstract: In this thesis, a remote operation system for a quadrotor is studied. The goal of the project is to feed back state information of the system to the user during flights, to support decision-making or mission changes. To address this problem, a teleoperation system using a virtual environment was developed and implemented. This virtual environment contains visual feedback from the real drone to help the user in the flight task. During the flight tests, delays in the data transmission were observed, which could deteriorate the closed-loop system performance and cause the vehicle to crash. An analysis of the system was carried out, and a predictor-based controller is currently under development. This scheme allows recovering the states of the system and maintaining its stability. Numerical results are carried out to validate the performance of the proposed predictor.

Maxime CHAVEROCHE

PhD student at the Heudiasyc laboratory

Tuesday, 5 May 2020, 2:00 pm

Abstract: Recently, we have witnessed accidents involving autonomous vehicles lacking sufficient information at the right time. One way to tackle this issue is to benefit from the perception of different viewpoints, namely cooperative perception. While deploying extra road infrastructure to help autonomous vehicles could be imagined, this would require large investments and would limit usage to certain areas of the world. Centralized cooperative perception in particular also has the disadvantage of making the agents broadcast their entire perception, which can be heavy on the means of communication and computation and give rise to delays. Decentralized cooperation, however, does not require any extra infrastructure and offers the possibility of making the agents active in their quest for full perception, i.e., making them ask about specific areas in their surroundings on which they would like to know more, instead of always broadcasting everything, optimizing a trade-off between maximizing knowledge about moving objects in their vicinity and minimizing the information received from others. To this end, we propose to couple a deep recurrent generative model with evolution strategies.

Federico CAMARDA

CIFRE PhD student at the Heudiasyc laboratory and at Renault

Tuesday, 28 April 2020, 2:00 pm

Abstract: Lane detection plays a crucial role in any autonomous driving system. Currently commercialized vehicles offer lane keep assist and lane departure warning via integrated smart cameras deployed for road marking detection. These sensors alone, however, do not generally ensure adequate performance for higher autonomy levels.
In the presented work, a multi-sensor tracking approach for generic lane boundaries is proposed. This solution is based on well-established filtering techniques and supports a flexible clothoid spline representation. It relies on fine-tuned measurement models, tailored to data collected from both off-the-shelf and prototype smart sensors. The implementation takes into account real-time constraints and the scarcity of resources on ADAS ECUs. The result is finally validated against lane-level ground truth and experimental data acquisitions.

Stefano MASI

PhD student at the Heudiasyc laboratory

Tuesday, 21 April 2020, 4:00 pm

Abstract: Although autonomous vehicle technology has evolved significantly in recent years, self-driving vehicle navigation in urban areas is still an open issue. One of the major challenges in these conditions is the safe navigation of autonomous vehicles on roads open to public traffic. The main issue is the interaction of the autonomous vehicle with regular traffic, because the behaviors and intentions of human-driven vehicles are hard to predict and understand. The goal of the Tornado project, which brings together both industrial and academic researchers, is to implement an autonomous shuttle service in an urban area. Some of the most challenging scenarios for autonomous driving are complex zones such as intersections, road merges, and roundabouts. In this work, we propose a method to make an autonomous shuttle able to cross a multi-lane roundabout safely. Furthermore, we also propose strategies to handle vehicle interactions (e.g., navigation in parallel lanes) in multi-lane roundabouts. Our approach relies on High-Definition (HD) maps with a lane-level description. This formalism allows predicting the future situation at the lane level thanks to the concept of virtual vehicles. Our method safely handles collision avoidance and guarantees that no priority constraint is violated during the insertion maneuver. Moreover, the method provides an insertion policy that is not overly cautious, i.e., it does not make the autonomous vehicle wait for a long time before insertion. The performance of our strategy has been evaluated with the SUMO simulation framework. To better evaluate the complexity of the simulation scenario, a highly interactive vehicle flow has been generated in SUMO using real dynamic traffic data contained in the INTERACTION dataset.
Finally, we report how our approach behaves in terms of safety and traffic flow under such complex scenarios, considering both simulated environments and real tests carried out with experimental self-driving vehicles on a driving circuit.

Antoine LIMA

PhD student at the Heudiasyc laboratory

Tuesday, 17 December 2019, 2:00 pm, in room GI-043

Abstract: In this talk, several communication standards and messages developed in the context of ITS are contextualized and explained. We will focus on the European side of the standards, and more precisely on the work that has been done in the last decade on collective awareness and perception. After an introduction to the communication medium, three message contents will be detailed: the Collective Awareness Message, the Decentralized Environmental Notification Message and the Collective Perception Message. This talk is intended as a quick introduction to these messages, which might become widely used in coming years, in order to understand their potential.

Alberto CASTILLO FRASQUET

Tuesday, 10 December 2019, 2:00 pm, in room GI-042

Abstract: The work focuses on developing algorithms that predict the future state of a system affected by unknown disturbances. In the disturbance-free case, this is normally solved by building a mathematical model of the system and measuring some state variables so that, with the model and the measurements, one is able to predict its future state, e.g. the future position of a moving car, or the future glucose concentration of a diabetic patient. However, whenever unknown disturbances (e.g. wind gusts, ocean currents, friction, load variations) affect the system, its future state also depends on the disturbance. The previous methods are no longer valid in this scenario and must be redefined to account for the disturbance effect.

Jossué Cariño Escobar

Tuesday, 10 December 2019, 2:00 pm, in room GI-042

Abstract: This work presents contributions to the state of the art in UAV control for cooperative payload transportation. Conventional cooperative control schemes rely on an information communication/sensor network to design the control law of each agent. However, in cooperative transportation schemes these types of controllers can destabilize if the topology of the network changes. Because of this, there has been a tendency to take advantage of the physical connection of the agents to the load by considering its effects as a form of implicit communication. The solution proposed in this work implements a decentralized cooperative control scheme for slung-load transportation based on the concepts of passivity and implicit communication.

Nicolas PITON

Innovation engineer / Head of the rapid prototyping platform

Tuesday, 3 December 2019, 2:00 pm, in room GI-042

Abstract: I am Nicolas PITON, head of the prototyping platform. I have already had the opportunity to work with some of you at the Heudiasyc laboratory on prototype development.

The objective of this presentation is to bring everyone to the same level of information about what the platform offers in terms of equipment and operation. After the presentation, we can discuss possible collaborations and your potential needs.

Thomas FUHRMANN

Ibeo Automotive Systems GmbH

Thursday 26 September 2019 at 11:00 am in room GI-042

Abstract: Solid-state lidar technology is eagerly awaited by many industries, notably the automotive industry. The purpose of this talk is to present several existing solid-state lidar technologies, with a focus on NEXT, the solid-state lidar developed by Ibeo Automotive Systems GmbH.

Tuesday 16 July 2019 at 2:00 pm in room GI-042

  • Laval Virtual (22–26 April 2019): I. Thouvenin, B. Wojtkowski, Q. Duchemin, F. Boucaud
  • ICRA (20–24 May 2019): A. Welte
  • IV (9–12 June 2019): many attendees
  • TechDays (24–25 June 2019): G. Bayard, G. Sanahuja, C. De Souza Junior
  • FUSION (2–5 July 2019): J. Al Hage

Edouard CAPELLIER

CIFRE PhD student at the Heudiasyc laboratory and at Renault.

Tuesday 2 July 2019 at 2:00 pm in room GI-043

Abstract: In traditional LIDAR processing pipelines, a point cloud is split into clusters, or objects, which are then classified. This assumes that every object obtained by clustering belongs to one of the classes that the classifier can recognize, which is hard to guarantee in practice. We therefore propose an evidential end-to-end deep neural network to classify LIDAR objects. The system is capable of classifying ambiguous and incoherent objects as unknown, while having been trained only on vehicles and vulnerable road users. This is achieved thanks to an evidential reformulation of generalized logistic regression classifiers, and an online filtering strategy based on statistical assumptions. Training and testing were carried out on LIDAR objects labelled in a semi-automatic fashion and collected in various situations with an autonomous driving and perception platform.
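The "unknown" output can be pictured with a simple mass assignment over classes plus full ignorance. This is only an illustrative scheme, not the authors' exact evidential reformulation:

```python
import math

def evidential_masses(logits):
    """Map classifier logits to belief masses: exp(logit) is treated as
    evidence for each class, and one unit of base evidence is kept as
    mass on the whole frame ('unknown'). Low evidence everywhere thus
    yields a large unknown mass instead of a forced class decision."""
    evidence = [math.exp(z) for z in logits]
    total = sum(evidence) + 1.0
    return [e / total for e in evidence], 1.0 / total
```

An object whose logits are all near zero (no class supported) gets roughly uniform class masses and a large unknown mass, and can then be filtered out rather than misclassified.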

Elwan HERY

PhD student at the Heudiasyc laboratory.

Tuesday 2 July 2019 at 2:00 pm in room GI-043

Abstract: Localization remains a major challenge for autonomous vehicles. Accurate localization with respect to the road and to other vehicles is essential for many navigation tasks, in particular for platooning, where the participants cooperate to improve their mutual localization. We present a distributed cooperative localization method based on the exchange of local dynamic maps (LDMs). Each LDM contains dynamic information on the pose and kinematics of all cooperating agents. Different sources of information, such as longitudinal and rotational speeds from the CAN bus, GNSS poses, LiDAR measurements and lane-edge detection, are fused using an asynchronous Kalman filter strategy. LDMs received from other vehicles through communication are fused using a covariance intersection filter to avoid data incest. The experimental results are evaluated on platooning scenarios. They show the importance of accurate relative localization using LiDAR perception to improve this localization. Relative localization between vehicles is improved in all LDMs, including for vehicles that cannot perceive surrounding vehicles but are perceived by the others.
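Covariance intersection, the filter mentioned for fusing received LDMs, can be sketched as follows (NumPy, with a simple trace-based weight; that weight choice is an assumption, not necessarily the one used in this work):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w=None):
    """Fuse two estimates whose cross-correlation is unknown.
    Unlike a naive Kalman update, covariance intersection stays
    consistent even when the two sources share past information
    (data incest), at the price of a more conservative covariance."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    if w is None:
        # cheap heuristic weight: trust the estimate with smaller trace more
        w = np.trace(P2) / (np.trace(P1) + np.trace(P2))
    P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
    x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
    return x, P
```

Fusing two identical unit-covariance estimates returns the same covariance (not half of it, as independent fusion would), which is precisely the conservatism that prevents over-confidence.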

Romain GUYARD

PhD student at the Heudiasyc laboratory.

Tuesday 18 June 2019 at 2:00 pm in room GI-043

Abstract: Intelligent vehicles carry more and more sensors useful for driving assistance. However, these sensors have limited capabilities, which can affect decision making. One way to improve the accuracy of vehicle perception is to pool the data generated by several vehicles observing the same environment. The currently favored method is to centralize the data on a server, perform the data fusion computations and send the results back to the vehicles (cloud computing). This method therefore requires sending personal data to a third party and assumes a permanent internet connection. To avoid these constraints, we propose a distributed data fusion method in which vehicles directly exchange fusion results that do not reveal the internal sensor values to their neighbors. The algorithm uses the framework of belief functions to handle sensor imprecision and the uncertainty due to limited trust in the data of other vehicles. During this last year we concentrated our efforts on developing a simulated scenario that exercises several of the proposed fusion schemes. This application consists in finding the optimal path through a city while taking road occupancy into account. A distributed road occupancy map is computed by all vehicles, which can then choose the best path to reach their destination as quickly as possible.
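Within the belief-function framework the abstract refers to, combining two sources is typically done with Dempster's rule; a minimal sketch over a small frame of discernment (the "free"/"occupied" labels are illustrative, not the scheme actually exchanged between vehicles):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal
    elements to masses) with Dempster's rule: conjunctive combination
    followed by normalization of the conflicting mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully contradict")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}
```

Mass kept on the whole frame (e.g. `frozenset({"free", "occupied"})`) expresses the distrust in another vehicle's data that the abstract mentions: a fully ignorant source leaves the other source's conclusions unchanged.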

Abbas CHOKOR

PhD student at the Heudiasyc laboratory.

Tuesday 18 June 2019 at 2:00 pm in room GI-043

Abstract: In this talk, I will present our work in the field of Global Chassis Control (GCC), whose goal is to improve overall vehicle performance by coordinating the Active Front Steering, Direct Yaw Control and Active Suspension controllers. A multilayer GCC architecture is developed, containing a local control layer and a decision layer. The local objectives of the sub-controllers in the control layer explicitly concern maneuverability, lateral stability, rollover avoidance and ride comfort. The sub-controllers are designed using super-twisting sliding mode theory. The decision layer promotes or attenuates the local objectives of the sub-controllers, in order to remove conflicts among the different objectives and extract the maximum benefit from the coordination using evaluation criteria. This layer monitors the vehicle dynamics and computes and sends scheduled gains to the sub-controllers, based on fuzzy logic rules and a stability criterion. Finally, the proposed Global Chassis Controller is validated in Matlab/Simulink using a vehicle model validated on the professional vehicle simulator "SCANeR Studio". The results show the effectiveness of the proposed strategy.
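The super-twisting algorithm underlying the sub-controllers can be sketched in discrete time (gains and step size here are arbitrary illustration values, not those of the GCC design):

```python
import math

def super_twisting_step(s, v, k1, k2, dt):
    """One step of the super-twisting sliding mode controller:
        u = -k1 * |s|^(1/2) * sign(s) + v,    v_dot = -k2 * sign(s)
    It produces a continuous control signal (no high-frequency
    chattering in u itself) while driving the sliding variable s
    and its derivative to zero in finite time."""
    sign = (s > 0) - (s < 0)
    u = -k1 * math.sqrt(abs(s)) * sign + v
    return u, v - k2 * sign * dt
```

In a control loop, `u` is applied to the plant at each step while the integral state `v` builds up to cancel matched disturbances.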

Angel Gabriel ALATORRE VAZQUEZ

PhD student at the Heudiasyc laboratory.

Tuesday 4 June 2019 at 2:00 pm in room GI-042

Shriram JUGADE

PhD student at the Heudiasyc laboratory.

Tuesday 4 June 2019 at 2:00 pm in room GI-042

Abstract:

The field of ADAS has been continuously evolving towards a better and safer driving experience. Currently, the roadmap for future developments targets fully autonomous, self-driving vehicles. Human drivers are still going to play an important part from an overall performance perspective. One important issue remains: how will the transition between manual and autonomous driving modes take place? Moreover, autonomous driving encounters various situations that need to be resolved with the help of the human driver. One approach to addressing these issues is shared driving control authority.

In this project, shared control authority is achieved through the fusion of the driving inputs of both drivers. The fusion-system approach removes the need for direct interaction between the human and the autonomous driving system. Fusion is achieved by resolving the conflict between the two drivers using non-cooperative game theory, based on features such as driving decision admissibility, future predictions of driving profiles, and comparison of individual driving intentions (based on a similarity measure). A two-player non-cooperative game is defined, incorporating driving decision admissibility and intentions. Conflict resolution is achieved through an optimal bargaining solution given by the Nash equilibrium, from which the final driving command for the vehicle is derived. The relevant information is fed back to the human driver by the fusion system to avoid any confusion. Validation is carried out on a test rig integrated with software such as IPG CarMaker and Simulink. Various features of the fusion system, such as collision avoidance and human-centricity, are analyzed in the validation process.
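For finite action sets, the pure-strategy Nash equilibria of such a two-player game can be found by brute force. The payoff matrices in the example are hypothetical placeholders for the admissibility/intention criteria described above:

```python
import itertools

def pure_nash_equilibria(payoff_h, payoff_a):
    """Brute-force pure-strategy Nash equilibria of a two-player game.
    payoff_h[i][j]: human driver's payoff, payoff_a[i][j]: automation's,
    for human action i and automation action j. A pair (i, j) is an
    equilibrium when neither player gains by deviating unilaterally."""
    n, m = len(payoff_h), len(payoff_h[0])
    eqs = []
    for i, j in itertools.product(range(n), range(m)):
        best_h = all(payoff_h[i][j] >= payoff_h[k][j] for k in range(n))
        best_a = all(payoff_a[i][j] >= payoff_a[i][l] for l in range(m))
        if best_h and best_a:
            eqs.append((i, j))
    return eqs
```

In a coordination-style game where both drivers prefer agreeing on the same maneuver, both matching action pairs come out as equilibria; the bargaining step then picks the command actually sent to the vehicle.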

Anand Sánchez-Orta

Anand Sánchez-Orta received his M.Sc. degree in Automatic Control from the Autonomous University of Nuevo León (UANL), Mexico, and his Ph.D. degree in Information and Systems Technologies from the University of Technology of Compiègne (UTC), France, in 2001 and 2007, respectively. He joined the Robotics and Advanced Manufacturing Division of the Center for Research and Advanced Studies (CINVESTAV) in 2009, where he is currently a Research Professor. His research interests include control theory, state estimation and visual servoing with applications to robotics.

Thursday 23 May 2019 at 2:00 pm in room GI-043

Abstract: Nowadays, robotic systems such as mobile robots, manipulator arms, underwater robots and UAVs have great potential in a wide variety of applications. In recent years, they have significantly increased their performance, mainly thanks to technological innovations that facilitate their construction and control. However, to increase their degree of autonomy, more efficient solutions are needed for such systems. In this talk I will present the synthesis of estimation and control algorithms that are robust to endogenous and exogenous disturbances for the autonomous navigation of robotic systems. In particular, disturbances that are not necessarily differentiable in the usual (integer-order) sense are considered. Experimental results will be presented.

Lounis Adouane

Associate Professor at POLYTECH Clermont-Ferrand.

Thursday 23 May 2019 at 11:30 am in room GI-043

Abstract: This talk focuses on how to gradually increase the autonomy of single- and multi-vehicle systems to achieve autonomous navigation in complex environments (e.g., cluttered, uncertain and/or dynamic). Its main objective is to give an overview of the developed generic control architectures (mainly decision/action aspects) and their different components, designed to enhance the safety, flexibility and reliability of autonomous navigation. First, a short overview is given of the main mechanisms/components characterizing the proposed Multi-Controller Architectures (MCA), which allow a generic, bottom-up construction of the vehicle's navigation functions. MCA have been developed based on reliable elementary controllers (obstacle avoidance, target reaching/tracking, formation maintenance and reconfiguration, etc.), but also on appropriate mechanisms to manage the controllers' interactions. Further, MCA have been developed through three closely related elements: task modeling; planning/re-planning; and control aspects based mainly on Lyapunov stability analysis. The talk will briefly highlight some complementary components, such as the link between optimal planning, control and flexible navigation through sequential waypoints. Secondly, the talk will emphasize how MCA have been extended to embed a reliable decision-making process to deal with risky and uncertain situations/environments (e.g., overtaking on a highway or cooperative intersection crossing). More precisely, the talk will present both the proposed probabilistic approaches for risk assessment and management, and a newly developed metric to enhance the safety of autonomous vehicles.
Several simulations and experiments illustrate the different developed works.

Lydie NOUVELIERE

Associate Professor at the University of Evry-Val-d'Essonne, IBISC laboratory (Informatique, BioInformatique et Systèmes Complexes).

Monday 20 May 2019 at 2:00 pm in room GI-043

Abstract: In 2018, it is no longer surprising to hear about the autonomous car in the media. Indeed, we are there, or almost! Over the past 20 years, many developments have been produced to help the driver drive better in terms of safety and traffic flow. Meanwhile, since COP21, vehicle energy consumption has been at the center of European decisions on automotive standards. The idea here is therefore to design safe, efficient and economical trajectories that achieve the best compromise while taking into account the driver's actions and the environment (with a real-time experimental application).

Ala MHALLA

PhD in computer science from Université Clermont Auvergne, France. Postdoctoral researcher at Institut Pascal, Clermont-Ferrand.

Thursday 16 May 2019 at 4:00 pm in room GI-043

Abstract: The proposed research work falls within the field of artificial intelligence applied to road traffic security and surveillance. More specifically, it consists in developing self-supervised approaches to automatically specialize models for the detection and tracking of road objects in video sequences. Automatic detection and tracking of 2D objects in video sequences is a long-standing problem that has seen major progress in recent years but is still far from completely solved. Within this very competitive context, we focused on two central problems: the specialization of deep neural models for multi-object detection through transfer learning and deep learning, and multi-object tracking based on a spatio-temporal "interlacing" model.

Julien MOREAU

PhD in computer science from UTBM (IRTES-SET laboratory, Belfort, and IFSTTAR-COSYS-LEOST, Villeneuve-d'Ascq). Postdoctoral researcher at the Université Catholique de Louvain, Image and Signal Processing Group, Belgium.

Thursday 16 May 2019 at 1:30 pm in room GI-043

Abstract:

- Study of a method for improving GNSS localization of a vehicle in urban environments, involving a set of processes to estimate a local 3D model from sky-facing fisheye vision mounted on the vehicle roof.

- Multimodal RGB, thermal and lidar perception on board a drone for the exploration and 3D reconstruction of a room in degraded conditions (darkness, fire smoke).

- Contributions to deep neural network architectures for the real-time semantic analysis of panoramic images of basketball games.

Jason CHEVRIE

PhD from the University of Rennes 1, IRISA. Postdoctoral researcher at the Italian Institute of Technology (Istituto Italiano di Tecnologia, IIT), Genoa, Italy.

Wednesday 15 May 2019 at 4:15 pm in room GI-043

Abstract: Robotic assistance is a field of research with applications in various domains, such as healthcare and industry. In this talk, I will present an overview of the research activities I carried out in robotic assistance for healthcare applications at the Italian Institute of Technology (IIT), Genoa, Italy, and at IRISA/Inria, Rennes, France. The first part will cover the activities performed at IIT on domestic assistance with the R1 humanoid robot, aimed for example at helping elderly or disabled people. Several issues need to be tackled there, since the robot evolves and interacts in a dynamic and unstructured human environment. The second part will cover my activities at IRISA/Inria on surgical gesture assistance, in particular needle insertion, a medical procedure commonly performed for the treatment or diagnosis of tumors. I will briefly describe the different aspects that need to be considered to perform automatic needle insertion in soft tissues, as well as the integration of a human operator in the control loop via a haptic interface.

Mohamad Motasem NAWAF

PhD in computer science, Laboratoire Hubert Curien, Université Jean Monnet, Saint-Etienne. Postdoctoral researcher, LIS laboratory, Aix-Marseille Université.

Wednesday 15 May 2019 at 11:30 am in room GI-043

Abstract: We present the hardware and software design and realization of an embedded stereo system for underwater surveys. The main contribution is a lightweight visual odometry method adapted to the underwater context. The proposed method runs on a surface computer and uses the captured stereo image stream to provide real-time navigation and a site coverage map, which is necessary to conduct a complete underwater survey. The visual odometry uses a stochastic pose representation and a semi-global optimization approach to handle large sites and provide long-term autonomy. A novel stereo matching approach adapted to underwater imaging and to the system's attached lighting allows fast processing and suits systems with low computational resources. The system was tested in a real context and demonstrated its robustness and promising perspectives.

Yann SOULLARD

Postdoctoral researcher at the University of Rouen Normandy, LITIS.

Friday 10 May 2019 at 1:30 pm in room GI-043

Abstract:

Gesture recognition is generally performed using a sequence model (hidden Markov models, neural networks, etc.) that takes the temporal evolution of the gesture into account for the decision. Here we focus on hidden Markov models (HMMs). Technical gestures are particular, precise gestures whose automatic recognition can be a difficult task due to the small amount of supervised data, potentially noisy data and imbalanced classes. The estimates made within HMMs can be biased by such data. We propose an extension of HMMs to the theory of imprecise probabilities, considering on the one hand prior information on the classes and on the other hand convex sets of probabilities, to strengthen the reliability of the model in prediction.

Text line detection in images is a central step in document analysis. Indeed, current automatic handwriting recognition systems process images of text lines to extract the characters. This recognition subsequently makes it possible to search for words, extract information or categorize the document. We present a machine learning method for identifying text lines in images. The approach relies on a fully convolutional network (FCN) producing a pixel-level labelling. Whereas usual FCN architectures require a reconstruction step to obtain an output of the same dimension as the input image, we propose to bypass this step by using dilated convolutions.
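The point of dilated convolutions is that the output keeps the input length while the receptive field grows with the dilation rate, which is what lets the FCN skip the reconstruction step. A minimal 1-D sketch:

```python
def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D dilated convolution (zero padding at borders):
    tap t of the kernel reads input index i + (t - center) * dilation,
    so the receptive field spans (len(kernel) - 1) * dilation + 1
    samples while len(output) == len(x)."""
    center = (len(kernel) - 1) // 2
    out = []
    for i in range(len(x)):
        acc = 0.0
        for t, w in enumerate(kernel):
            j = i + (t - center) * dilation
            if 0 <= j < len(x):
                acc += w * x[j]
        out.append(acc)
    return out
```

Stacking layers with dilations 1, 2, 4, ... makes the receptive field grow exponentially with depth at a fixed parameter count, the usual motivation for this design.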

Mohammed CHADLI

Associate Professor at the MIS laboratory of the Université de Picardie Jules Verne.

Thursday 9 May 2019 at 2:00 pm in room GI-043

Carlos MATEO

Postdoctoral researcher at Institut Pascal UMR 6602 CNRS/UCA/SIGMA, Clermont-Ferrand, France.

Monday 6 May 2019 at 1:30 pm in room GI-043

Abstract:

3D visual perception has been a fundamental tool in many robot manipulation methods. The idea is simple and natural: perceive a target object and follow its surface shape while it is being manipulated. Although data-driven strategies, such as Convolutional Neural Networks (CNNs) or Generative Adversarial Networks (GANs), now play a big role in object recognition and reconstruction, 3D visual perception was traditionally governed by the geometric analysis of object surfaces. The main subject of the presentation is to show a series of algorithms and pipelines for 3D object recognition, their requirements, and how they can be used not just for object manipulation but also in other fields like object/map reconstruction. The methodology is suitable for dual robot arms installed on fixed platforms or on a mobile-robot system. In both cases the system should avoid self-collision or collision with other actors and provide robust visual information. Three types of problems may arise: visual perception uncertainties, local minima in the robot pose optimization, and singular configurations. These problems will be discussed during the presentation. During object manipulation, tracking non-rigid object surfaces is crucial and currently presents a challenge, not only because state-of-the-art methods are restrictive in terms of computational cost and memory management, but also because most current works tend to fail on open movements. Problems of non-rigid reconstruction, surface tracking and active perception to optimize the camera pose will also be discussed during the presentation.

Thibaut RAHARIJAONA

Associate Professor (HDR), ISM Institut des Sciences du Mouvement Etienne Jules Marey (UMR 7287)

Tuesday 2 April 2019 at 2:00 pm in room GI-042

Abstract:

The presentation addresses the development of a control strategy for autonomous vehicles and mobile robotics. This strategy aims to synthesize robust control laws with low computational cost to guarantee the desired level of vehicle performance in uncertain and disturbed environments. For indoor environments, a new sensor and an algorithm are proposed to make localization and navigation more robust. The navigation of the autonomous vehicle or mobile robot can also be improved through the use of optical flow for odometry.

Prof. Barys SHYROKAU

Professor in the Intelligent Vehicles group, Department of Cognitive Robotics, Delft University of Technology

Thursday 17 January 2019 at 2:30 pm in Amphi Gauss (Centre de Recherche)

Tuesday 27 November 2018 at 2:00 pm in room GI-042

  • iROS (1–5 October 2018): Elwan Héry
  • ITSC (4–7 November 2018): Shriram Jugade, Edouard Capellier
  • ITSNT (13–16 November 2018): Joelle Al-Hage
  • ICARCV (18–21 November 2018): Abdelhak Loukkal, Gabriel Frisch, Philippe Xu

Cyrano VASEUR

Visiting PhD student at the Heudiasyc laboratory

Tuesday 13 November 2018 at 2:00 pm in N104 (PG2 – UTC)

Abstract:

Currently, we are working on the implementation of a high-accuracy body and road angle estimator within a virtual sensing environment. Body angles are used to correct IMU data to obtain accurate measurements (in the road frame). This is especially valuable when estimating vehicle velocity from measured accelerations. Road angles are required to correct for the gravity component when estimating tire forces.
Assuming suspension stroke measurements are available, body angles can be reconstructed kinematically. In this case, suspension strokes are measured with onboard potentiometers; commercially, these are used in the adaptive headlight functionality of the test vehicle, an Evoque. The road angles are estimated in an Extended Kalman Filter estimation structure, using a decoupled pitch and roll model. Future work involves using coupled models based on suspension dynamics.
Additionally, some focus is on vertical tire force estimation. Traditional quasi-static load transfer models have limited accuracy for estimating vertical tire forces, especially during transient motion. In this approach, the vertical tire forces are calculated from an elaborate coupled pitch-roll dynamics model with non-linear suspension characteristics, which are assumed to be known. In this case, the characteristics are identified from vertical tire forces and suspension strokes measured on the test vehicle.
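The quasi-static load transfer baseline mentioned above can be written down directly for the longitudinal case (symbols: m vehicle mass, lf/lr distances from the center of gravity to the front/rear axle, h CG height, ax longitudinal acceleration; the standard textbook formula, sketched here for illustration):

```python
def axle_loads(m, lf, lr, h, ax, g=9.81):
    """Quasi-static longitudinal load transfer:
        Fz_front = m * (g * lr - ax * h) / (lf + lr)
        Fz_rear  = m * (g * lf + ax * h) / (lf + lr)
    Braking (ax < 0) shifts vertical load to the front axle. This
    static picture is exactly what loses accuracy during transient
    motion, motivating the coupled pitch-roll dynamics model above."""
    L = lf + lr
    return m * (g * lr - ax * h) / L, m * (g * lf + ax * h) / L
```

The two loads always sum to the total weight m*g; only their split depends on the acceleration, which is why pitch and roll dynamics must be added to capture transients.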

Marco VIEHWEGER

Visiting PhD student at the Heudiasyc laboratory

Tuesday 13 November 2018 in N104 (PG2 – UTC)

Abstract:

Part I: Introduction to the 'ITEAM' project

Part II: State Estimation for Vehicle Dynamics

The estimation of vehicle states, e.g. sideslip angles and tire forces, is a key factor for improving driving safety, especially in times of advanced driver assistance systems (ADAS) and autonomous driving (AD). The virtual sensing approach enables the retrieval of information that cannot be measured directly, or only with expensive sensor equipment.
We are focusing our work on creating an extensive state estimation platform. To this end, multiple software tools such as MATLAB, Siemens LMS AMESim and IPG CarMaker are used. Additionally, real-world measurement data is employed to check the accuracy of the virtual sensors.
Currently, we are working on the implementation of a high-accuracy body and road angle estimator within the virtual sensing environment.

Part III: Concept Car Platform

Intended as a research platform for testing and validating automotive virtual sensing approaches, KU Leuven's MOD group developed a Concept Car platform with a modular powertrain architecture. This project is in cooperation with the Belgian industrial partner Punch Powertrain, which has experience in the field of CVT gearboxes and hybrid and electric powertrains.
The development process started from the Master's theses of six KU Leuven students. In teams of two, they took care of:

  • Design of a tubular frame (manufactured by Engie Fabricom);
  • Integration of powertrain components, including battery pack development;
  • Energy consumption minimization.

The seminar will briefly cover these aspects, giving insight into some specifics of the project and hoping to spark ideas for research collaborations.

Luis Rodolfo GARCIA CARRILLO

Assistant Professor with the Department of Electrical Engineering at Texas A&M University – Corpus Christi

Tuesday 3 July 2018 at 2:00 pm in room GI-042

Abstract:

The proliferation of autonomous robots points to forthcoming environments where multiple autonomous systems (MAS) will interact with each other, as well as with human beings, to perform complex tasks at a level never imagined before. Conventional methods for improving MAS performance address very specific challenges, but not general problems. Learning-based controllers offer adaptability and robustness against uncertainties; however, the computational complexity of these solutions is often not practically feasible. These drawbacks limit the applicability and penalize the performance of current MAS control methods. Recently, cognitive scientists have advocated that "a single occurrence of an emotionally significant situation is remembered far more vividly and for a longer period than a task which is repeated frequently". This highlights that emotional processing can produce an effect that sustained sensory input cannot achieve. In this talk, we present conventional and adaptive distributed consensus algorithms for MAS. Next, a descriptive and a mathematical model of emotion processing in the mammalian brain is introduced, which is then modified to develop a hierarchical feedback control for MAS. Preliminary results show how the basic features of the emotional learning system, in combination with the MAS controller, can help to effectively control a group of robots in real time in the presence of system uncertainties.

Bio:

Luis Rodolfo Garcia Carrillo was born in Durango, Mexico, in 1980. He received the Licenciatura in Electronic Engineering in 2003 and the M.Sc. in Electrical Engineering in 2007, both from the Instituto Tecnologico de La Laguna, in Coahuila, Mexico. He received his Ph.D. in Control Systems from the University of Technology of Compiègne, France, in 2011, where he was advised by Professor Rogelio Lozano. From 2012 to 2013, he was a postdoctoral researcher at the Center for Control, Dynamical Systems and Computation at the University of California, Santa Barbara, where he worked with Professor Joao Hespanha. He currently holds an Assistant Professor position with the Department of Electrical Engineering at Texas A&M University – Corpus Christi. His current research interests include multi-agent control systems, intelligent controllers, and vision-based control.

Ariane Spaenlehauer

PhD student at the Heudiasyc laboratory

Tuesday 26 June 2018 at 2:00 pm in Amphi Colcombet (Centre de Transfert, université de technologie de Compiègne)

Abstract:

Over the last few years, mobile robotics has gained increasing popularity in academic research and industry, both for the underlying scientific challenges and the economic benefits. On behalf of the Labex MS2T, the DIVINA challenge team explores the design possibilities of technological systems of systems to create an autonomous fleet of heterogeneous UAVs relying mainly on visual sensing.

Nesrine Mahdoui

PhD student at the Heudiasyc laboratory

Tuesday 26 June 2018 at 2:00 pm in Amphi Colcombet (Centre de Transfert, université de technologie de Compiègne)

Abstract:

In the robotics community, a growing interest in multi-robot systems has appeared over the last decades. This is mainly due to new large-scale applications requiring such system-of-systems features in areas like security, disaster surveillance, flood monitoring, search and rescue, infrastructure inspection, and so on. In such missions, one of the fundamental tasks, addressed in this work, is the coordinated exploration of an unknown environment sensed by a team of Micro-Aerial Vehicles (MAVs) with embedded vision. The key problem is to cooperatively choose specific regions of the environment to be simultaneously explored and mapped by each robot in an optimized manner, in order to reduce exploration time and, consequently, energy consumption. The target goals, selected from the computed frontier points lying between free and unknown areas, are assigned to the robots by considering a trade-off between fast exploration and obtaining detailed grid maps. For decision making, MAVs usually exchange a copy of their local map; the novelty in this work is to exchange map frontier points instead, which saves communication bandwidth.
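Frontier points, the free cells bordering unknown space that the MAVs exchange, can be extracted from an occupancy grid with a simple scan (the cell encoding below is an assumption for illustration):

```python
FREE, UNKNOWN, OCCUPIED = 0, -1, 1

def frontier_points(grid):
    """Return the cells that are FREE and 4-adjacent to at least one
    UNKNOWN cell. Exchanging only these points, instead of the full
    local map, is what saves communication bandwidth."""
    rows, cols = len(grid), len(grid[0])
    pts = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == UNKNOWN:
                    pts.append((r, c))
                    break
    return pts
```

The frontier set is typically orders of magnitude smaller than the grid itself, which is the intuition behind the bandwidth saving claimed in the abstract.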

Sergio Salazar

Invited professor from the UMI LAFMIA CINVESTAV

Tuesday 12 June 2018 at 2 p.m., Colcombet amphitheatre (Centre de Transfert, université de technologie de Compiègne)

Abstract:
Sergio Salazar (invited professor from the UMI) will present his research on autonomous aerial, ground, and underwater vehicles, as well as exoskeletons, developed at the UMI LAFMIA CINVESTAV, in particular on robustness, optimization, multi-agent flight, and precision navigation.

Gerardo ORTIZ-TORRES

PhD student at the Heudiasyc laboratory

Tuesday 15 May 2018 at 2 p.m., Colcombet amphitheatre (Centre de Transfert, université de technologie de Compiègne)

Abstract:
In recent years, multi-rotor configurations of Unmanned Aerial Vehicles (UAVs) have become promising mobile platforms capable of navigating (semi-)autonomously in uncertain environments. Numerous applications have been proposed for this kind of vehicle, such as aerial photography, surveillance, crop spraying, oil spill detection, supply delivery, and agricultural assessment, among others. Among them, the quadcopter configuration has proved suitable for these applications because it can take off and land in short spaces and is essentially simpler to build than a conventional helicopter. The quadcopter is also sensitive to aerodynamic and external disturbances that can lead to different faults, such as a stuck actuator, loss of a propeller or a motor, actuator degradation, voltage control failure, structural damage, physical aging, and fatigue, all of which inevitably influence the states of the vehicle. As a result, stability, reliability, and safety can be affected across the flight envelope. To identify malfunctions at any time and to improve the reliability and safety of the quadcopter, Fault Tolerant Control (FTC) methods can be considered.

FTC techniques are classified into two types: passive and active. In active techniques, the controller parameters are adapted or reconfigured according to the fault, using the information provided by a Fault Detection and Diagnosis (FDD) system, so that stability and acceptable performance of the system can be maintained. An active FTC scheme for a quadcopter vehicle is presented. The actuator FDD method proposed in this work considers the rotational dynamics of the vehicle; both partial and total actuator faults are considered. The design procedure can be summarized as follows:

1) a nominal controller, designed beforehand, tracks the 3D position and attitude dynamics of the quadcopter, ensuring the desired performance in the fault-free case;

2) a Proportional-Integral Observer (PIO) applied to the rotational dynamics performs actuator fault estimation. Fault detection is done by comparing the fault estimation signal with a predefined threshold, and fault isolation is achieved by analyzing the sign of the fault estimation signal. Sufficient conditions for the existence of the observer are given in terms of Linear Matrix Inequalities;

3) an analysis of static controllability, based on the attainable control set, tests the performance degradation of the quadcopter vehicle under partial and total faults;

4) finally, a partial-fault accommodation control law is generated from the nominal controller and the fault estimation signal, retaining close-to-nominal performance despite a partial actuator fault. Total-fault reconfiguration is done by changing the parameters of the nominal controller, giving up controllability of the yaw position but still controlling the yaw velocity around the z-axis. The proposed fault-tolerant control scheme is validated in different flight-test cases to illustrate its feasibility and effectiveness.
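The detection and isolation logic of step 2 can be sketched as a simple threshold test on the estimated fault signal. The function name, threshold value, and sign convention below are illustrative assumptions, not the talk's actual implementation.

```python
def detect_and_isolate(f_hat, threshold):
    """Threshold test on an estimated actuator fault signal f_hat.

    Returns (fault_detected, direction), where direction is +1 for a
    positive fault, -1 for a negative one (e.g. loss of effectiveness),
    and 0 when no fault is flagged. Threshold and sign convention are
    illustrative, not from the talk.
    """
    if abs(f_hat) <= threshold:
        return False, 0
    return True, 1 if f_hat > 0 else -1

# Example: estimates from a hypothetical PIO over a few time steps;
# a fault appears and grows from the third sample onward.
estimates = [0.01, -0.02, -0.35, -0.40]
flags = [detect_and_isolate(f, threshold=0.1) for f in estimates]
print(flags)  # -> [(False, 0), (False, 0), (True, -1), (True, -1)]
```

In the actual scheme, the estimate would come from the PIO and the threshold would be tuned against noise on the rotational dynamics.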

Franck LI

PhD student at the Heudiasyc laboratory (CIFRE with Renault)

Tuesday 15 May 2018 at 2 p.m., Colcombet amphitheatre (Centre de Transfert, université de technologie de Compiègne)

Abstract:
The field of intelligent vehicles is constantly evolving. Technical progress, particularly in sensors, makes increasingly advanced functionality possible. These sensors allow the system to gather information about its immediate environment. Another source of information is the map, which provides a priori knowledge of the road network. High-definition road maps are gradually appearing, but exploiting their high accuracy is limited by the accuracy of the available positioning systems, which are put to a severe test, particularly in urban environments. This talk presents a method for diagnosing the usability of the positioning system. The map-matching algorithm on which it is based is presented: it exploits the multi-hypothesis nature of a particle filter to handle ambiguous situations. Finally, the principle of the coherence test that yields a "Use/Don't Use" criterion is described.
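The "Use/Don't Use" decision can be pictured as a coherence check over the particle filter's map-matching hypotheses. The sketch below assumes a simple hypothesis representation and thresholds of my own choosing, far simpler than the method presented: it answers "Use" only when one road hypothesis clearly dominates and stays geometrically plausible.

```python
def usability_decision(hypotheses, weight_ratio_min=0.8, max_offset_m=5.0):
    """Toy coherence test over map-matching hypotheses.

    hypotheses: list of (weight, offset_m) pairs, where offset_m is the
    distance between the positioning fix and the candidate road segment.
    Returns "Use" only when one hypothesis clearly dominates the weight
    mass and its offset stays within a plausibility bound.
    (Representation and thresholds are illustrative, not from the talk.)
    """
    total = sum(w for w, _ in hypotheses)
    if total == 0:
        return "Don't Use"
    best_w, best_offset = max(hypotheses, key=lambda h: h[0])
    if best_w / total >= weight_ratio_min and best_offset <= max_offset_m:
        return "Use"
    return "Don't Use"

print(usability_decision([(0.9, 2.1), (0.1, 14.0)]))  # unambiguous match
print(usability_decision([(0.5, 2.1), (0.5, 30.0)]))  # ambiguous junction
```

The first case prints "Use", the second "Don't Use": near a junction, the particle filter keeps several comparable hypotheses alive, and the coherence test refuses to commit.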

Osamah SAIF

Postdoctoral researcher at the Heudiasyc laboratory

Tuesday 10 April 2018 at 2 p.m., room GI042 (Bâtiment Blaise Pascal, université de technologie de Compiègne)

Abstract:
Applications of autonomous quadrotors are growing rapidly in everyday life. Surveillance, video, and photography are the main fields of activity of unmanned aerial vehicles (UAVs). Currently, researchers and scientists are focusing on multi-UAV deployment for the inspection and surveillance of large areas. With this in mind, my presentation will cover my research activities within the FUI AIRMES project « Drones Hétérogènes Coopérants en Flottille ». This project aims to enable the deployment of fleets of heterogeneous UAVs for the surveillance of railway and electrical installations. My role in this project is to develop formation-flight algorithms allowing the UAVs to navigate along flight plans while managing their proximity and maintaining a safe distance between them.
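A minimal sketch of the safety-distance requirement, assuming a simple potential-field repulsion term (function name, gains, and distances are illustrative, not AIRMES parameters):

```python
import math

def separation_command(p_i, neighbours, d_safe=2.0, gain=1.0):
    """Repulsive velocity term keeping drone i at least d_safe metres away
    from each neighbour. This is a generic potential-field sketch, not the
    project's actual formation controller."""
    vx, vy = 0.0, 0.0
    for (nx, ny) in neighbours:
        dx, dy = p_i[0] - nx, p_i[1] - ny
        dist = math.hypot(dx, dy)
        if 0.0 < dist < d_safe:
            # Push away from the neighbour, harder as the pair gets closer.
            scale = gain * (d_safe - dist) / dist
            vx += scale * dx
            vy += scale * dy
    return vx, vy

# One neighbour too close on the right, another far enough away above.
cmd = separation_command((0.0, 0.0), [(1.0, 0.0), (0.0, 5.0)])
print(cmd)  # -> (-1.0, 0.0): back away from the close neighbour only
```

In a full formation-flight stack, a term like this would be added to the flight-plan tracking command so that separation is enforced without abandoning the mission trajectory.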

Azade FOTOUHI

PhD student at UNSW Sydney

Tuesday 10 April 2018 at 2 p.m., room GI042 (Bâtiment Blaise Pascal, université de technologie de Compiègne)

Abstract:

There has been increasing interest in employing unmanned aerial vehicles (UAVs) such as drones for telecommunication purposes. In such networks, UAVs act as base stations (BSs) and provide downloading service to users. Compared with conventional terrestrial base stations, UAV-BSs can dynamically adjust their locations to improve network performance. However, important issues in UAV networks must be considered. For example, UAV deployment introduces a new tool for radio resource management, since BS positions become open to network optimization. Moreover, drones have practical agility constraints in terms of flying speed, turning angles, and energy consumption. The aim of this presentation is to overview the integration of UAVs into cellular networks, existing issues, and potential solutions for assisting cellular communications with UAV-based flying relays and base stations. To that end, a proposed mobility control method based on SNR measurements and a game-theoretic approach will be presented. The results demonstrate that UAV-BSs moving according to the proposed algorithm significantly improve network performance in terms of packet throughput and spectral efficiency compared to a baseline scenario.
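As a rough sketch of SNR-driven repositioning, the toy controller below greedily steps a UAV-BS toward whichever candidate position maximizes the weakest user's received-power proxy. The 1/d² proxy, step size, and function names are assumptions of this sketch, much simpler than the game-theoretic method presented in the talk.

```python
def best_move(bs_pos, users, step=10.0):
    """One greedy repositioning step for a UAV base station: evaluate
    staying put and four lateral moves, and keep whichever maximizes the
    worst user's SNR proxy (here a 1/d^2 free-space-style decay).
    Step size and proxy are illustrative assumptions."""
    candidates = [(0.0, 0.0), (step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)]

    def worst_user_snr(pos):
        # Proxy for the weakest user's SNR at this candidate position.
        return min(1.0 / max(1e-9, (pos[0] - ux) ** 2 + (pos[1] - uy) ** 2)
                   for ux, uy in users)

    return max(((bs_pos[0] + dx, bs_pos[1] + dy) for dx, dy in candidates),
               key=worst_user_snr)

# Two users to the east: the BS steps toward them.
new_pos = best_move((0.0, 0.0), [(50.0, 0.0), (70.0, 0.0)])
print(new_pos)  # -> (10.0, 0.0)
```

Iterating such a step under agility constraints (bounded speed and turning angle) is exactly where the harder optimization and game-theoretic questions discussed in the talk arise.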