ARS4 Project

December, 2023

Author

Rémy Huet

This project is to be completed in pairs. Submit your answers to the preliminary questions and your code on Moodle (as a single zip archive containing all code, including the provided functions) before 11:59 PM on the 17th of January.

The oral defense of the projects is scheduled for the 19th of January.

Should you have any questions, feel free to send me an email at remy.huet@hds.utc.fr.

Problem Statement

A range-based scanner (such as a rotating LiDAR sensor) is positioned at the center of a square surveillance area with given bounds, at position \((0, 0)\). This zone is frequently crossed by pedestrians, who enter from one of the borders and cross the zone in a quasi-straight line at a quasi-constant velocity.

The sensor provides point observations of the pedestrians in polar coordinates (we denote by \(\phi\) the directed angle from the local \(x\) axis, to avoid confusion with the state model). The sensor is not perfect; it suffers from:

  • misdetections, with probability \(1 - P_D\), where \(P_D\) is a constant;
  • clutter (false positives) modeled as a PPP with intensity \(\lambda_c\) and uniformly distributed on the surveillance area;
  • white noise, meaning that the perceived positions of the targets are not exact.

For this task, we are only interested in the pedestrian positions. The outputs of the filter (the estimates) are therefore only the positions of all the pedestrians in the scene.
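To make the sensor model concrete, here is a minimal simulation sketch of a single scan. It is written in Python for illustration only (the project itself uses the Matlab/Octave kit), and every function name and parameter value below is mine, not taken from data.mat.

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's method: sample a Poisson-distributed count with mean lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def simulate_scan(targets, P_D, lam_c, half_width, sigma_r, sigma_phi, rng):
    # One scan: each target is detected with probability P_D and observed
    # in noisy polar coordinates (range, angle phi); clutter is a Poisson
    # number of points drawn uniformly on the square surveillance area.
    scan = []
    for (x, y) in targets:
        if rng.random() < P_D:  # otherwise the target is missed
            r = math.hypot(x, y) + rng.gauss(0.0, sigma_r)
            phi = math.atan2(y, x) + rng.gauss(0.0, sigma_phi)
            scan.append((r, phi))
    for _ in range(sample_poisson(lam_c, rng)):
        cx = rng.uniform(-half_width, half_width)
        cy = rng.uniform(-half_width, half_width)
        scan.append((math.hypot(cx, cy), math.atan2(cy, cx)))
    return scan

# Noise-free check: one pedestrian at (3, 4), no misdetection, no clutter
rng = random.Random(1)
print(simulate_scan([(3.0, 4.0)], P_D=1.0, lam_c=0.0, half_width=50.0,
                    sigma_r=0.0, sigma_phi=0.0, rng=rng))
# -> [(5.0, 0.9272952180016122)]  (the target at (3, 4) in polar form)
```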

Preliminary Work

Q1 – Evolution Model

What model will you use to represent the state of a pedestrian? Give the state representation and the corresponding transition model and Jacobian matrix.

Q2 – Observation Model

Give the observation model of a pedestrian and the associated Jacobian matrix.

Q3 – Birth Model

Propose a birth model based on the observations. What are the main difficulties of this task? How can we overcome them?

Q4 – Log domain

To avoid overflow and numerical instability due to computation with very small numbers, it is often a good idea to manipulate probabilities in the log domain.

For this project, you must compute and use the weights of the Gaussian mixture using logs. The following questions will help you code using this representation. Note that in the following questions, as in Matlab / Octave, the function log is the natural logarithm, such that \(\log(\exp(x)) = x\).
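To see why the log domain matters, here is a small numerical illustration (written in Python for brevity; the same double-precision underflow occurs in Matlab/Octave):

```python
import math

# Multiplying many small likelihoods underflows to zero in double
# precision, while the sum of their logs remains perfectly finite.
likelihoods = [1e-20] * 20

direct_product = math.prod(likelihoods)             # 1e-400 underflows
log_weight = sum(math.log(p) for p in likelihoods)  # = 20 * log(1e-20)

print(direct_product)        # -> 0.0
print(round(log_weight, 3))  # -> -921.034
```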

In the lesson about the PHD, we denoted the un-normalized and normalized weights \(\tilde{w}\) and \(w\). For the sake of clarity, we will denote them \(\tilde{l}\) and \(l\) when they are expressed in the log domain.

Q4.1 – Prediction

The equation for the prediction of the weights is given by

\[ w_{k\vert k-1}^h = P_S w_{k-1 \vert k-1}^h \]

What is the expression of \(l_{k \vert k-1}^h\)?

Q4.2 – Update

In the update phase, the un-normalized weight \(\tilde{w}_{k \vert k}^{i\mathcal{H}_{k \vert k-1} + h}\) is given by

\[ \tilde{w}_{k \vert k}^{i\mathcal{H}_{k \vert k-1} + h} = P_D w_{k \vert k-1}^h \mathcal{N}(z_k^i; \hat{z}_{k \vert k-1}^h, S_k^h) \]

where \(\hat{z}_{k \vert k-1}^h\) is the expected measurement for the hypothesis \(h\) and \(S_k^h\) the innovation covariance for this hypothesis.

What is the expression of \(\tilde{l}_{k \vert k}^{i\mathcal{H}_{k \vert k-1} + h}\)?

Q4.3 – Normalization

Once all the hypotheses have been updated with a given measurement, the corresponding weights are normalized according to the following equation:

\[ w_{k \vert k}^{i\mathcal{H}_{k \vert k-1} + h} = \frac{ \tilde{w}_{k \vert k}^{i\mathcal{H}_{k \vert k-1} + h} }{ \lambda_c(z_k^i) + \sum_{h'=1}^{\mathcal{H}_{k \vert k-1}}\tilde{w}_{k \vert k}^{i\mathcal{H}_{k \vert k-1} + h'} } \]

Give the expression of the normalized log weight \(l_{k \vert k}^{i\mathcal{H}_{k \vert k-1} + h}\) using the function logsumexp.

Implementation

Implement the PHD filter from the kit on Moodle (in Octave / Matlab) using the data from the given data.mat file. This file contains the observations and the ground truth for comparison.

Reference functions

Some functions are given as references in the +ref folder and may be used directly for the project. Read the comments of the functions to see how to use them.

  • err may be used to compute the difference between the expected measurement and the actual measurement, while keeping the angular difference between \(-\pi\) and \(\pi\). Use this function in the update phase.
  • gaussianMerging can be used directly to merge the hypotheses of the PHD; it is called inside the provided reduction function.
  • the GOSPA function calculates the GOSPA between two given sets, using the given threshold.
  • logmvnpdf can be used to compute the logarithm of the pdf of a multivariate normal distribution. Prefer this function over log(mvnpdf(...)).
  • logsumexp can be used to compute the log of the sum of exponentials, which is useful for the normalization of the weights in the update.
  • momentMatching is used by gaussianMerging, you won’t need it directly.
  • munkres is used by the GOSPA function to find the best association between the two sets to compare. You will not use it directly.
  • normalizeLogWeights is a function that makes a set of weights sum to 1. It is used in gaussianMerging, but do not use it directly in the filter (the PHD weights should not sum to 1).
  • the reduction function is to be used in the filter after the update to reduce the number of components.

All these functions may be called using ref.function_name. Read and try to understand the GOSPA and reduction functions, as they are important components of the filter.
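The reason logsumexp is needed, rather than a direct log(sum(exp(...))), can be seen on a toy example. The function below is a hypothetical Python equivalent of the kit's ref.logsumexp, shown only to illustrate the max-shift trick; it is not the kit's code.

```python
import math

def logsumexp(log_w):
    # Shift by the maximum so the exponentials never all underflow to 0.
    m = max(log_w)
    if m == -math.inf:  # every weight is exactly zero
        return -math.inf
    return m + math.log(sum(math.exp(l - m) for l in log_w))

log_w = [-1000.0, -1001.0]
# Naive version: exp(-1000) underflows to 0, so log(0 + 0) fails.
print(round(logsumexp(log_w), 4))  # -> -999.6867, computed safely
```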

Tasks

For the implementation of the PHD filter, the following order is recommended:

  1. Complete the EKF_predict and measmodel functions: EKF_predict predicts the evolution of a single hypothesis, and measmodel computes the expected measurement and the associated Jacobian matrix.
  2. Complete the PHD_predict function, that is used to predict all the hypotheses of the PHD.
  3. Complete the PHD_update function.
  4. Write the estimates function, that produces estimates from the updated PHD.
  5. Write the birthmodel function that creates hypotheses from observations.

Use all these functions in the filter script. Do not forget to use the reduction function and to compute the GOSPA at each timestep. Show the results of the filter and compare them to the ground truth using the GOSPA. Also compare the number of targets estimated by the filter to the real number of targets over time.

Tip: first test your code on the data_nonoise.mat data, using a very small \(R\), \(P_D = 1\) and \(\bar\lambda_c = 0\), with a relatively small nmax (~100). Then try the data.mat file, whose data contains noise sampled with \(R = \text{diag}([0.1^2, 4 \times 0.017453^2])\); use \(P_D = 0.95\) and \(\bar\lambda_c = 4\) with a much higher nmax (~500).

Attention will be paid to the readability of the code and to the comments.

Bonus

This question may be asked during the oral defense and could give you a bonus:

The implemented birth model creates a hypothesis wherever an observation is made, including for clutter measurements. How could we take advantage of the scenario to adapt this birth model and avoid creating hypotheses for obvious clutter?