
Single Object Tracking
“Simple” Multiple Object Tracking
Random Finite Sets
The Probability Hypothesis Density Filter
\[ P(A \vert B) = \frac{ P(A) P(B \vert A) }{ P(B) } \qquad(1)\]
\[ P(A \vert B) = \frac{ P(A) P(B \vert A) }{ \sum_A P(A) P(B \vert A) } \]
Using PDFs:
\[ p(x \vert y) = \frac{ p(x) p(y \vert x) }{ \int p(x) p(y \vert x)dx } \]
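A minimal numerical sketch of this update on a grid (all values here are illustrative, not from the text): a standard-normal prior updated with one observation \(y = 1.0\) under Gaussian noise of standard deviation \(0.5\).

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]

prior = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)   # p(x) = N(0, 1)
y, sigma = 1.0, 0.5                                # observation and its noise std
lik = np.exp(-0.5 * ((y - x) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))  # p(y|x)

posterior = prior * lik
posterior /= posterior.sum() * dx   # denominator: the integral over x
print((x * posterior).sum() * dx)   # posterior mean, ≈ 0.8 here
```

The normalization step is exactly the denominator of the equation above, computed as a Riemann sum.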
If we have a prior on a state (e.g. for localization) \(p(x_k \vert z^{k-1})\), we can take observations \(z_k\) into account using the likelihood function:
\[ p(x_k \vert z^k) = \frac{ p(x_k \vert z^{k-1}) g(z_k \vert x_k) }{ \int p(x_k \vert z^{k-1}) g(z_k \vert x_k)dx_k } \]
We can compute the prior at time \(k\) from time \(k-1\) using the Chapman–Kolmogorov equation, which gives the full recursion:
\[ p(x_k \vert z^k) = \frac{ \int f(x_k \vert x_{k-1})p(x_{k-1} \vert z^{k-1})g(z_k \vert x_k) dx_{k-1} }{ \iint f(x_k \vert x_{k-1})p(x_{k-1} \vert z^{k-1})g(z_k \vert x_k) dx_{k-1}dx_k } \]
Assuming linear evolution and observation models with zero-mean Gaussian noise, we have
\[ p(x_{k} \vert z^{k-1}) = \mathcal{N}(x_{k}; \mu_{k \vert k-1}, P_{k \vert k-1}) \]
and
\[ g(z_k \vert x_k) = \mathcal{N}(z_k; H_k x_k, R_k) \]
The posterior density obtained by the Kalman update is the solution of Bayes' rule under the previous assumptions:
\[ p(x_k \vert z^k) = \frac{ \mathcal{N}(x_{k}; \mu_{k \vert k-1}, P_{k \vert k-1}) \mathcal{N}(z_k; H_k x_k, R_k) }{ \int \mathcal{N}(x_{k}; \mu_{k \vert k-1}, P_{k \vert k-1}) \mathcal{N}(z_k; H_k x_k, R_k)dx_k } \]
Toy example: a single target moving in 1-D, with an associated constant-velocity model:
\[ X_k = \begin{pmatrix} x_k \\ v_k \end{pmatrix},\ X_{k+1} = \begin{pmatrix} 1 & \Delta t \\ 0 & 1 \end{pmatrix} X_k \]
We only observe \(x\), so the observation model is:
\[ z_k = \begin{pmatrix} 1 & 0 \end{pmatrix} X_k \]
Assume process noise \(Q = \begin{pmatrix}0.1 & 0 \\ 0 & 0.2\end{pmatrix}\) and observation noise \(R = 0.3\)
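A minimal sketch of the resulting Kalman filter for this toy model, using the \(F\), \(H\), \(Q\) and \(R\) above; the measurement sequence and initial state are made up for illustration:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity evolution model
H = np.array([[1.0, 0.0]])              # we only observe the position x
Q = np.diag([0.1, 0.2])                 # process noise
R = np.array([[0.3]])                   # observation noise

def kf_step(mu, P, z):
    # Predict (Chapman-Kolmogorov under the linear-Gaussian assumptions)
    mu_pred = F @ mu
    P_pred = F @ P @ F.T + Q
    # Update (Bayes rule, in closed form)
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu_pred + K @ (z - H @ mu_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return mu_new, P_new

mu, P = np.array([0.0, 0.0]), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.2]:  # noisy positions of a target moving at ~1 unit/step
    mu, P = kf_step(mu, P, np.array([z]))
print(mu)  # estimated [position, velocity]
```

After a few steps the velocity estimate settles near the true rate of roughly one unit per step.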







“Simple” Multiple Object Tracking
How to track a known and constant number of targets?

Conceptual solution: use one Kalman filter per track, and associate measurements to tracks (NN, GNN, …).
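As a sketch of the association step, a brute-force global nearest neighbour (GNN) assignment over 1-D track and measurement positions (all positions are made-up values; real implementations use gating and the Hungarian algorithm instead of enumerating permutations):

```python
from itertools import permutations

tracks = [0.0, 5.0, 9.0]         # predicted track positions
measurements = [5.2, -0.1, 8.7]  # incoming measurements

def total_cost(assignment):
    # Sum of squared distances for one global assignment hypothesis
    return sum((t - measurements[j]) ** 2 for t, j in zip(tracks, assignment))

# Pick the assignment minimizing the *total* cost over all tracks (GNN)
best = min(permutations(range(len(measurements))), key=total_cost)
print(list(enumerate(best)))  # → [(0, 1), (1, 0), (2, 2)]
```

Each pair (track index, measurement index) then feeds one per-track Kalman update.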



How to deal with an unknown and time-varying number of targets, missed detections, and clutter?
Random Finite Sets
A set is a collection of different things, called its elements. Sets are unordered.
\(X = \{\} = \emptyset\) is the empty set.
\(X = \{x\}\) is a singleton.
They may or may not contain an infinite number of elements.
A Finite Set is a set containing a finite number of elements.
The number of elements is the cardinality of the set, denoted \(\lvert X \rvert\).

A Random Finite Set (RFS) is a set containing a random number of random variables, i.e.
\[ X = \left\{ x_1, \dots, x_n \right\} \]
where both the cardinality \(n\) and the \(x_i\) are randomly distributed.
The \(x_i\) may be random vectors of any dimension, for example the position of a target \(\begin{pmatrix}x \\ y \end{pmatrix}\) or its pose \(\begin{pmatrix}x \\ y \\ \theta \end{pmatrix}\).
Multi-object PDF \(\pi(X) \geq 0\), s.t.
\[ \int \pi(X)dX = 1 \]
based on the set integral
\[ \int \pi(X)dX = \sum_{i=0}^{\infty}\rho(i) \]
With \(\rho(i)\) the cardinality distribution of \(X\):
\[ \rho(i) \triangleq \frac{1}{i!}\int\pi(x_1, \dots, x_i)dx_1 \dots dx_i \]
A Bernoulli RFS, parametrized by an existence probability \(r\) and a spatial density \(p(\cdot)\), has the multi-object PDF
\[ \pi(X) = \begin{cases} 1 - r & \text{if } X = \emptyset \\ r \cdot p(x) & \text{if } X = \left\{ x \right\} \\ 0 & \text{otherwise} \end{cases} \]
A Poisson RFS, also called a Poisson Point Process (PPP), is commonly used to model clutter measurements (false positives).
The intensity function of a Poisson RFS is defined as
\[ \lambda(x) = \bar\lambda \cdot p(x) \]
We can recover \(\bar\lambda\) and \(p(\cdot)\) from it:
\[ \bar\lambda = \int\lambda(x)dx \]
\[ p(x) = \frac{\lambda(x)}{\bar\lambda} = \frac{\lambda(x)}{\int\lambda(x)dx} \]
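A sampling sketch for a Poisson RFS: draw the cardinality from a Poisson distribution with mean \(\bar\lambda\), then draw that many i.i.d. points from \(p(\cdot)\) (here a unit Gaussian, an arbitrary choice):

```python
import math
import random

random.seed(0)
lambda_bar = 3.0  # expected number of points

def sample_poisson_rfs():
    # Draw n ~ Poisson(lambda_bar) by CDF inversion (stdlib has no Poisson sampler)
    u, n = random.random(), 0
    p = math.exp(-lambda_bar)
    cdf = p
    while u > cdf:
        n += 1
        p *= lambda_bar / n
        cdf += p
    # Then n i.i.d. draws from the spatial density p(x)
    return [random.gauss(0.0, 1.0) for _ in range(n)]

# Averaging the cardinality over many draws recovers lambda_bar = ∫λ(x)dx
samples = [sample_poisson_rfs() for _ in range(20000)]
avg_card = sum(len(s) for s in samples) / len(samples)
print(avg_card)  # ≈ 3.0
```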
\[ p(X_k \vert Z^k) = \frac{ p(X_k \vert Z^{k-1}) g(Z_k \vert X_k) }{ \int p(X_k \vert Z^{k-1}) g(Z_k \vert X_k) dX_k } \]
\(g(Z_k \vert X_k)\) is the multi-object likelihood.
Let’s use a more complicated model: use RFSs to represent both the states (\(X_k\)) and the measurements (\(Z_k\)).
=> We must compute both the update of existing elements and the birth of new objects.
\(Z_k = \left\{ z_k^1, \dots, z_k^{m_k} \right\}\) and \(X_k = \left\{ x_k^1, \dots, x_k^{n_k} \right\}\)
Observations are the union of clutter and object-originated detections: \(Z_k = C_k \cup O_k\)
Using the association hypothesis vector \(\theta_k = \left[ \theta_k^1, \dots, \theta_k^{n_k} \right]\) where
\[ \theta_k^i = \begin{cases} j & \text{if object } i \text{ is associated to measurement } j \\ 0 & \text{if object } i \text{ is undetected} \end{cases} \]
The observation model is
\[ p(Z_k \vert X_k) = \sum_{\theta_k} \left( \exp(-\bar\lambda_c) \prod_{j=1}^{m_k}\lambda_c(z_k^j)\prod_{i:\theta_k^i=0}(1 - P_D(x_k^i)) \prod_{i: \theta_k^i \gt 0}\frac{ P_D(x_k^i)g_k(z_k^{\theta_k^i} \vert x_k^i) }{ \lambda_c(z_k^{\theta_k^i}) } \right) \]
\[ p(Z_k \vert X_k) = p_{c_k}(Z_k)g_k(\emptyset \vert X_k)\sum_{\theta_k}\prod_{i:\theta_k^i \gt 0}\frac{ P_D(x_k^i)g_k(z_k^{\theta_k^i} \vert x_k^i) }{ \lambda_c(z_k^{\theta_k^i})(1 - P_D(x_k^i)) } \]
with
\[ p_{c_k}(Z_k) = \exp(-\bar\lambda_c)\prod_{s=1}^{m_k}\lambda_c(z_k^s) \]
\[ g_k(\emptyset \vert X_k) = \prod_{j=1}^{n_k}(1 - P_D(x_k^j)) \]
The previous exact equations are intractable in practice => we need to introduce approximations.
The Probability Hypothesis Density Filter
The Probability Hypothesis Density \(D_X(x)\) of an RFS \(X\) is defined as:
\[ \begin{aligned} D_X(x) & = \int \pi(X)\sum_{x' \in X}\delta(x-x')dX \\ & = \int \pi(\left\{ x \right\} \cup X)dX \end{aligned} \]
It is the first-order moment of the RFS, also called intensity function of the RFS.
\[ \int_A D_X(x)dx = \mathbb{E}\left[\left| X \cap A \right|\right] \]
=> Expected number of elements in the subspace \(A\).
If \(X\) is a Bernoulli RFS parametrized by \(r\) and \(p(\cdot)\), its PHD is
\[ D_X(x) = r \cdot p(x) \]
Example with \(r = 0.75\) and \(p(x) = \mathcal{N}(x; 1.2, 0.78)\)

If \(X\) is a Multi-Bernoulli RFS parametrized by \(\left\{ r^{(i)}, p^{(i)}(\cdot) \right\}_{i=1}^{N}\), its PHD is
\[ D_X(x) = \sum_{i=1}^{N} r^{(i)}p^{(i)}(x) \]
MB-RFS with 2 components \((1, \mathcal{N}(-2.3, 1.4))\) and \((1, \mathcal{N}(1.2, 0.85))\)

MB-RFS with 2 components \((0.95, \mathcal{N}(-2.3, 1.4))\) and \((0.65, \mathcal{N}(1.2, 0.85))\)

If \(X\) is a Poisson RFS with intensity function \(\lambda(\cdot)\), its PHD is
\[ D_X(x) = \lambda(x) \]
As the PHD is the exact representation of a Poisson RFS, one may approximate any RFS by a Poisson RFS parametrized by its PHD.
Let \(X\) a MB-RFS with 2 components \((0.95, \mathcal{N}(-2.3, 1.4))\) and \((0.65, \mathcal{N}(1.2, 0.85))\).
\[ D_X(x) = 0.95\, \mathcal{N}(x; -2.3, 1.4) + 0.65\, \mathcal{N}(x; 1.2, 0.85) \]
\(X\) can be approximated by a PPP with intensity function \(\lambda(x) = D_X(x)\)
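A quick numerical check of this example (treating the second parameter of each \(\mathcal{N}\) as a variance, following the \(\mathcal{N}(\mu, \sigma^2)\) convention used below): the PHD integrates to the expected cardinality \(0.95 + 0.65 = 1.6\).

```python
import numpy as np

def gauss(x, mu, var):
    # Gaussian PDF parametrized by mean and variance
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
# PHD of the 2-component MB-RFS from the text
D = 0.95 * gauss(x, -2.3, 1.4) + 0.65 * gauss(x, 1.2, 0.85)
print(D.sum() * dx)  # ≈ 1.6, the expected number of objects
```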
The prior is composed of two Gaussians \(\mathcal{N}(-1.02, 0.3^2)\) and \(\mathcal{N}(1.1, 0.4^2)\) with respective weights \(0.85\) and \(0.92\).

Suppose we observe one object at \(0.99\), but we miss the object at \(-1\). The noise model is set at \(R = 0.15\).
We can plot the observation above the prior:

Perform a Kalman update for each component of the prior, using the observation:
# There is only one observation => no loop on the observation set
w_sum = 0
tmp_gm = []
kalman_updated_gm = []
# decompose each component as weight + normal
for w, n in prior_gm:
    S = R + n.var()                # innovation covariance
    K = n.var() / S                # Kalman gain
    error = o_1.mean() - n.mean()  # innovation
    mu = n.mean() + K * error
    var = (1 - K) * n.var()
    # numerator of the weight update (note: norm.pdf takes a standard deviation)
    weight = Pd * w * norm.pdf(o_1.mean(), n.mean(), sqrt(S))
    w_sum += weight
    tmp_gm.append((weight, norm(mu, sqrt(var))))
# divide to normalize the weights
for w, n in tmp_gm:
    weight = w / (c_z + w_sum)
    kalman_updated_gm.append((weight, n))
Multiply the predicted (prior) PHD by \(1- P_D\) to add the possibility of missed objects:


Scale by the probability of survival \(P_S\):

Create a component for each observation from the previous step. The birth model follows a \(\mathcal{N}(\mu, R)\) with weight \(0.2\) (maybe too high in practice, but fine for visualization).

The predicted PHD is the sum of the propagated PHD and the birth PHD.
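The prediction steps above can be sketched as follows, storing each component as a (weight, mean, variance) tuple; \(P_S\) and the posterior mixture values are illustrative assumptions, while the birth weight and \(R\) come from the text:

```python
Ps = 0.99          # probability of survival (assumed value)
R = 0.15           # observation noise, reused as the birth covariance
birth_w = 0.2      # birth weight from the text
posterior_gm = [(0.85, -1.02, 0.3**2), (0.92, 1.1, 0.4**2)]  # updated PHD (assumed)
prev_obs = [0.99]  # observations received at the previous step

# Survival: scale each surviving component's weight by Ps
predicted_phd = [(Ps * w, mu, var) for w, mu, var in posterior_gm]
# Birth: one new component centered on each previous observation
predicted_phd += [(birth_w, z, R) for z in prev_obs]
print(len(predicted_phd))  # 3 components: 2 survived + 1 born
```

A full implementation would also push each surviving component through the motion model; the example here is static, so only the weights change.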

Suppose this time we have the following observations (two correct detections plus clutter):

Update each Gaussian component with each observation:
kalman_updated_gm = []
for o in obs:
    w_sum = 0
    tmp_gm = []
    # decompose each component as weight + normal
    for w, n in predicted_phd:
        S = R + n.var()              # innovation covariance
        K = n.var() / S              # Kalman gain
        error = o.mean() - n.mean()  # innovation
        mu = n.mean() + K * error
        var = (1 - K) * n.var()
        # numerator of the weight update (note: norm.pdf takes a standard deviation)
        weight = Pd * w * norm.pdf(o.mean(), n.mean(), sqrt(S))
        w_sum += weight
        tmp_gm.append((weight, norm(mu, sqrt(var))))
    # divide to normalize the weights
    for w, n in tmp_gm:
        weight = w / (c_z + w_sum)
        kalman_updated_gm.append((weight, n))
Multiply the predicted PHD by \(1- P_D\) to add the possibility of missed objects:


Scale by the probability of survival \(P_S\):

Create a component for each observation from the previous step. The birth model follows a \(\mathcal{N}(\mu, R)\) with weight \(0.1\).

The predicted PHD is the sum of the propagated PHD and the birth PHD.

Suppose this time we have the following observations (two correct detections, with larger noise on the second):

Update each Gaussian component with each observation:
kalman_updated_gm = []
for o in obs:
    w_sum = 0
    tmp_gm = []
    # decompose each component as weight + normal
    for w, n in predicted_phd:
        S = R + n.var()              # innovation covariance
        K = n.var() / S              # Kalman gain
        error = o.mean() - n.mean()  # innovation
        mu = n.mean() + K * error
        var = (1 - K) * n.var()
        # numerator of the weight update (note: norm.pdf takes a standard deviation)
        weight = Pd * w * norm.pdf(o.mean(), n.mean(), sqrt(S))
        w_sum += weight
        tmp_gm.append((weight, norm(mu, sqrt(var))))
    # divide to normalize the weights
    for w, n in tmp_gm:
        weight = w / (c_z + w_sum)
        kalman_updated_gm.append((weight, n))
Multiply the predicted PHD by \(1- P_D\) to add the possibility of missed objects:


Info:
Number of components: 69, sum of the weights: 2.058122621541668
We model the PHD and the measurements as Gaussian mixtures, i.e. weighted sums of Gaussian distributions.
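The number of components grows at every step (69 above), so practical implementations bound it. A minimal sketch in the spirit of the standard Gaussian-mixture PHD pruning/merging step: truncate negligible components, greedily merge nearby ones, and cap the total. Components are (weight, mean, variance) tuples and all thresholds are illustrative.

```python
def prune_and_merge(gm, trunc_thresh=1e-3, merge_dist=1.0, max_components=20):
    # 1. Truncate components with negligible weight
    gm = [c for c in gm if c[0] > trunc_thresh]
    merged = []
    # 2. Greedily merge components close to the current heaviest one
    while gm:
        _, mu0, _ = max(gm, key=lambda c: c[0])
        close = [c for c in gm if (c[1] - mu0) ** 2 / c[2] <= merge_dist]
        gm = [c for c in gm if c not in close]
        # Moment-matched merge of the selected components
        w = sum(c[0] for c in close)
        mu = sum(c[0] * c[1] for c in close) / w
        var = sum(c[0] * (c[2] + (c[1] - mu) ** 2) for c in close) / w
        merged.append((w, mu, var))
    # 3. Keep only the heaviest components
    merged.sort(key=lambda c: -c[0])
    return merged[:max_components]

gm = [(0.6, 0.0, 0.1), (0.3, 0.05, 0.1), (0.0005, 5.0, 0.1), (0.4, 3.0, 0.2)]
result = prune_and_merge(gm)
print([round(w, 2) for w, _, _ in result])  # → [0.9, 0.4]
```

Merging preserves the sum of the weights (here the two overlapping components at 0 collapse into one), so the expected number of objects is unchanged.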


Other (better) filters exist, like the CPHD, the MB-RFS filter, the PMBM filter…

Finite Set Statistics for Multiple-Object Tracking