mp6250
Hi All,
I'm trying to implement the Gaussian mixture model for background subtraction as described by Chris Stauffer and W.E.L. Grimson in their paper "Adaptive background mixture models for real-time tracking."
I'm having a little trouble with the logic in the step that updates the mean and variance of the models. According to the paper, when new image data comes in, you update the matched model's parameters recursively, as an exponentially weighted moving average, using the following formulas:
[itex]\mu_t = (1-\rho)\mu_{t-1} + \rho X_t[/itex]
[itex]\sigma^2_t = (1-\rho)\sigma^2_{t-1} + \rho(X_t-\mu_t)^T(X_t-\mu_t)[/itex]
where μ and σ are the mean and standard deviation of the model, [itex]X_t[/itex] is the incoming data vector, and the subscripts indicate the relative times of the variables. ρ is defined as:
[itex]\rho = \alpha \frac{1}{(2\pi)^{\frac{n}{2}}|\Sigma|^{\frac{1}{2}}} e^{-\frac{1}{2}(X_t-\mu_t)^T \Sigma^{-1}(X_t-\mu_t)}[/itex]
where Σ is the covariance matrix (taken to be diagonal for simplicity) and α is a parameter that controls the learning rate.
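Here's a minimal sketch of how I'm reading those two updates, for a single pixel and a single Gaussian component with [itex]\Sigma = \sigma^2 I[/itex]. The function and variable names are just my own, and I evaluate the density with the previous frame's parameters before updating:
[code]
import numpy as np

def update_gaussian(mu, var, x, alpha):
    """Update one Gaussian of the mixture with a new pixel value x.

    mu, x : length-n arrays (e.g. n = 3 for an RGB pixel)
    var   : scalar variance (diagonal covariance, Sigma = var * I)
    alpha : the learning-rate parameter from the paper
    """
    n = x.size
    d = x - mu
    # Gaussian density eta(x | mu, Sigma) with Sigma = var * I,
    # evaluated with the current (previous-frame) parameters
    eta = np.exp(-0.5 * d.dot(d) / var) / ((2.0 * np.pi * var) ** (n / 2.0))
    rho = alpha * eta                            # rho = alpha * eta(x | mu, Sigma)
    mu_new = (1.0 - rho) * mu + rho * x          # mean update
    d_new = x - mu_new
    var_new = (1.0 - rho) * var + rho * d_new.dot(d_new)  # variance update
    return mu_new, var_new, rho
[/code]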
My confusion is this: ρ will always be tiny. The algorithm assumes large variances to begin with, and the tiny probabilities that come out of this density will cause very slow convergence, regardless of the choice of α (usually taken to be around 0.05). It's my understanding that you would never set α > 1.0, so how is this corrected for? Is there a normalization I am missing somewhere?
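To put a number on it, here is the best case I can construct with the sketch above: a single channel (n = 1), σ = 30, and the new pixel landing exactly on the current mean. The values are just illustrative:
[code]
mu = np.array([128.0])      # current mean of a single grey-level channel
var = 30.0 ** 2             # large initial variance, sigma = 30
x = np.array([128.0])       # incoming pixel exactly at the mean (best case for eta)

_, _, rho = update_gaussian(mu, var, x, alpha=0.05)
print(rho)                  # ~6.6e-4, so (1 - rho) barely discounts the old mean
[/code]
Even in this best case [itex]1/\rho[/itex] is on the order of 1500 frames, which seems far too slow for the background model to adapt.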