====== Grand Cooperative Driving Challenge ======

We provide the C++ code as well as the data recorded during our participation in the GCDC 2016.

The code has been developed within the [[https://devel.hds.utc.fr/software/pacpus|PACPUS]] open-source framework.
  
== Download ==
^ Component ^  Link  ^
^ Localization | {{:en:codes:GCDC_localisation.zip|}} |
^ Perception | {{:en:codes:GCDC_perception.zip|}} |
^ Control | {{:en:codes:GCDC_control.zip|}} |
^ Communication | {{:en:codes:GCDC_communication.zip|}} |
^ Supervisor | {{:en:codes:GCDC_supervisor.zip|}} |
  
^ Data ^  Heat 1  ^  Heat 3  ^  Heat 5  ^
^ Localization | {{:en:codes:GCDC_data_heat1_localisation.zip|}} | {{:en:codes:GCDC_data_heat3_localisation.zip|}} | {{:en:codes:GCDC_data_heat5_localisation.zip|}} |
^ Perception | {{:en:codes:GCDC_data_heat1_perception.zip|}} | {{:en:codes:GCDC_data_heat3_perception.zip|}} | {{:en:codes:GCDC_data_heat5_perception.zip|}} |
^ Communication | {{:en:codes:GCDC_data_heat1_communication.zip|}} | {{:en:codes:GCDC_data_heat3_communication.zip|}} | {{:en:codes:GCDC_data_heat5_communication.zip|}} |

====== Evidential Calibration ======

We provide the MATLAB® code for our evidential multiclass classifier calibration method; a brief usage sketch follows the file list.
  * calibTrain.m: train the calibration model given some validation data
  * score2prob.m: transform a vector of scores into probabilities
  * score2plaus.m: transform a vector of scores into plausibilities over singletons
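
The function signatures are not documented on this page, so the sketch below is only an assumption about how the three files fit together, shown with placeholder data.

<code matlab>
% Hypothetical usage sketch -- the argument lists are assumptions inferred
% from the file descriptions above, not documented signatures.
scoresVal = randn(200, 3);                 % classifier scores on 200 validation samples, 3 classes
labelsVal = randi(3, 200, 1);              % corresponding ground-truth class labels
model = calibTrain(scoresVal, labelsVal);  % fit the calibration model on the validation set

scoresNew = randn(1, 3);                   % scores of a new sample to calibrate
p  = score2prob(model, scoresNew);         % calibrated probabilities over the three classes
pl = score2plaus(model, scoresNew);        % plausibilities over the singleton classes
</code>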

== Download ==
^  ^  Link  ^
^ Calibration code | {{:en:codes:multinomial_calibration.zip|}} |
  
== References ==
**__Ph. Xu__**, F. Davoine, H. Zha and T. Denœux.
**Evidential calibration of binary SVM classifiers**.
//International Journal of Approximate Reasoning (IJAR)//, Vol. 72, pages 55--70, May 2016.\\
{{:en:publis:xu14_ijar_evidential_calibration_of_binary_svm_classifiers.pdf|Paper}}
[[https://hal.archives-ouvertes.fr/hal-01154794|HAL]]
[[https://dx.doi.org/10.1016/j.ijar.2015.05.002|DOI]]
  
**__Ph. Xu__**, F. Davoine and T. Denœux.
**Evidential Multinomial Logistic Regression for Multiclass Classifier Calibration**.
In //Proceedings of the 18th International Conference on Information Fusion//, pages 1106--1112, Washington, D.C., July 6-9, 2015.\\
{{:en:publis:xu15_fusion_evidential_multimodal_logistic_regression_for_multiclass_classifier_calibration.pdf|Paper}}

**__Ph. Xu__**, F. Davoine and T. Denœux.
**Evidential Logistic Regression for Binary SVM Classifier Calibration**.
In F. Cuzzolin, editor, //Belief Functions: Theory and Applications//, //Proceedings of the 3rd International Conference on Belief Functions//, Springer, LNCS 8764, pages 49--57, Oxford, UK, September 26-28, 2014.\\
{{:en:publis:xu14_belief_evidential_logistic_regression_for_binary_svm_classifier_calibration.pdf|Paper}}
{{:en:publis:xu14_belief_presentation.pdf|Oral}}
[[https://dx.doi.org/10.1007/978-3-319-11191-9_6|DOI]]
{{:en:publi:xu14_belief.bib|BibTeX}}
  
----
{{:en:publis:xu14_bmvc_evidential_combination_of_pedestrian_detectors.pdf|Paper}}
{{:en:publis:xu14_bmvc_presentation.pdf|Oral}}

----

====== KITTI semantic segmentation ======
{{ :en:sample.png?nolink |}}
{{ :en:classes.png?nolink |}}
A set of 107 images (70 for training and 37 for testing) from the [[http://www.cvlibs.net/datasets/kitti/index.php|KITTI Vision Benchmark Suite]] was manually annotated using Adobe® Photoshop® CS2.
The left color images were annotated at the pixel level with a set of 28 classes.

== Download ==
^  ^  Training set  ^  Testing set  ^
^ Ground truth | {{:en:gttrain.zip|}} | {{:en:gttest.zip|}} |
^ Left images | {{:en:lefttrain.zip|}} | {{:en:lefttest.zip|}} |
^ Right images | {{:en:righttrain.zip|}} | {{:en:righttest.zip|}} |
^ Velodyne data | {{:en:velotrain.zip|}} | {{:en:velotest.zip|}} |

For convenience, we also provide the left and right images, as well as the Velodyne data, associated with the ground truth annotations.
These data were extracted from the raw sequences.
They are copyrighted by the [[http://www.cvlibs.net/datasets/kitti/index.php|KITTI Vision Benchmark Suite]] and published under the [[http://creativecommons.org/licenses/by-nc-sa/3.0/|Creative Commons Attribution-NonCommercial-ShareAlike 3.0]] License.

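To get a first look at the data, the sketch below is a minimal MATLAB® loading example. The file names are hypothetical, and it assumes the annotations are per-pixel label images and that the Velodyne scans keep the KITTI raw binary format (N x 4 single-precision floats: x, y, z, reflectance).

<code matlab>
% Minimal inspection sketch -- file names are hypothetical; the Velodyne
% layout assumes the KITTI raw format (N x 4 single: x, y, z, reflectance).
left = imread('lefttrain/000000.png');     % left color image
gt   = imread('gttrain/000000.png');       % per-pixel class annotation (assumed encoding)
figure; imshow(left); title('Left image');
figure; imshow(gt);   title('Ground truth');

fid  = fopen('velotrain/000000.bin', 'r'); % one Velodyne scan
velo = fread(fid, [4 Inf], 'single')';     % one row per 3D point
fclose(fid);
figure; plot3(velo(:,1), velo(:,2), velo(:,3), '.', 'MarkerSize', 1);
axis equal; title('Velodyne point cloud');
</code>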

== References ==
**__Ph. Xu__**, F. Davoine, J.-B. Bordes, H. Zhao and T. Denœux.
**Multimodal Information Fusion for Urban Scene Understanding**.
//Machine Vision and Applications (MVA)//, Vol. 27, Issue 3, pages 331--349, April 2016.\\
[[https://hal.archives-ouvertes.fr/hal-01133430|HAL]]
[[https://dx.doi.org/10.1007/s00138-014-0649-7|DOI]]

**__Ph. Xu__**, F. Davoine, J.-B. Bordes, H. Zhao and T. Denœux.
**Information Fusion on Oversegmented Images: An Application for Urban Scene Understanding**.
In //Proceedings of the Thirteenth IAPR International Conference on Machine Vision Applications (MVA)//, pages 189--193, Kyoto, Japan, May 20-23, 2013.\\
{{en/publi/xu13_mva_information_fusion_on_oversegmentated_images_an_application_for_urban_scene_understanding.pdf|Paper}}
{{en/publi/xu13_mva_presentation.pdf|Oral}}
[[http://hal.archives-ouvertes.fr/hal-00932896|HAL]]
 +
