Signal Processing: Finding the auto-correlation

In summary, for the system described by ## v(t) = -cv(t - 1) + e(t)##, the first two terms in the autocorrelation sequence are ## R_v(0) = \frac{\sigma_e ^2}{1 - c^2}## and ## R_v(1) = \frac{- c \sigma_e ^2}{1 - c^2}##. These values are obtained by assuming the signal is stationary, so that its mean is constant (in fact zero) and its autocorrelation depends only on the lag, and by using the fact that ##e(t)## is white and therefore uncorrelated with past values of ##v(t)##.
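As a quick consistency check on the ratio of the two values (a sketch using only stationarity and the fact that ##e(t)## is uncorrelated with ##v(t-1)##), multiplying the recursion by ##v(t-1)## and taking expectations gives:
[tex] E[v(t)v(t - 1)] = -c\,E[v(t - 1)v(t - 1)] + E[e(t)v(t - 1)] \;\Rightarrow\; R_v(1) = -c\,R_v(0), [/tex]
which agrees with the two expressions above.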
  • #1
Master1022
Homework Statement
For the system below, where ##e(t)## is a zero-mean white noise sequence with variance ##\sigma_e^2##, determine the first two terms in the autocorrelation sequence ##R_v(0)## and ##R_v(1)##.
Relevant Equations
Autocorrelation
Hi,

I am working on the following problem from a textbook, but am getting stuck and am not sure how to proceed.

Question: For the system below:
[tex] v(t) = -cv(t - 1) + e(t) [/tex]
where ##e(t)## is a zero-mean white noise sequence with variance ##\sigma_e^2##, determine the first two terms in the autocorrelation sequence ##R_v(0)## and ##R_v(1)##.

Attempt:
I am not sure which assumptions I ought to make to proceed with this question. At first, I thought about assuming that the signal ## v(t) ## was a stationary signal, such that:
- ##E[v(t)] = \mu = \text{constant} ##
- The autocorrelation is only a function of the time difference ##\tau##: ##R_v(\tau) = E[v(t) v(t - \tau)]##

However, I wasn't sure this assumption really makes sense: taking the expectation of the equation for ##v(t)## yields ##E[v(t)] = -c E[v(t - 1)]##, which can only hold with a constant mean if that mean is 0.

Nonetheless, if I proceed with this assumption:
[tex] R_v (\tau) = E[v(t) v(t - \tau)] = E[\left( e(t) - c v(t - 1) \right) \left( e(t - \tau) - c v(t - 1 - \tau) \right) ] [/tex]
[tex] = E[e(t)e(t - \tau)] + c^2 E[v(t - 1) v(t - 1 - \tau)] - c E[e(t) v(t - 1 - \tau)] - c E[v(t - 1) e(t - \tau)] [/tex]
Then by the stationarity assumption: ## E[v(t - 1) v(t - 1 - \tau)] = R_v(\tau) ##

[tex] R_v (\tau) = R_e (\tau) + c^2 R_v(\tau) - c E[e(t) v(t - 1 - \tau)] - c E[v(t - 1) e(t - \tau)] [/tex]
Here is where I don't know how to deal with the cross terms. The first thought that comes to mind is that the factors are independent, and thus I can split each expectation into a product of expected values (which would yield zero). However, surely something like ## v(t - 1) ## does depend on previous values of the noise?

If I expand the expression, then I get: ## v(t) = e(t) - c v(t - 1) = e(t) - c( e(t - 1) - c v(t - 2) ) = \dots ## which then leads to something like:
[tex] v(t) = ( - c)^{t} v(0) + \sum_{k = 0}^{t - 1} ( - c)^k e(t - k) [/tex]

but am not sure how to use this to determine whether the terms ## E[e(t) v(t - 1 - \tau)] ## and ## E[v(t - 1) e(t - \tau)] ## are non-zero.
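For reference, substituting this expansion into the first cross term makes it explicit (a sketch, assuming the initial value ##v(0)## is deterministic and ##\tau \ge 0##):
[tex] E[e(t) v(t - 1 - \tau)] = ( - c)^{t - 1 - \tau} v(0)\, E[e(t)] + \sum_{k = 0}^{t - 2 - \tau} ( - c)^k E[e(t) e(t - 1 - \tau - k)] = 0 [/tex]
since ##e(t)## has zero mean and is white, and every index ##t - 1 - \tau - k## in the sum is strictly less than ##t##. The second cross term ##E[v(t - 1) e(t - \tau)]## is zero for ##\tau = 0## by the same argument, but for ##\tau \ge 1## the sample ##e(t - \tau)## appears inside ##v(t - 1)##, so that expectation has to be evaluated explicitly.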

Any help would be greatly appreciated.
 
  • #2
You are making things complicated by keeping ##\tau## as a variable in the equations. The problem is only asking for the cases of ##\tau=0## and ##\tau=1##. You should only consider those cases and the equations should be fairly simple.
 
  • #3
Thanks for the reply.
FactChecker said:
You are making things complicated by keeping ##\tau## as a variable in the equations. The problem is only asking for the cases of ##\tau=0## and ##\tau=1##. You should only consider those cases and the equations should be fairly simple.
Yes that is true - I was trying to generalize and then just substitute ## \tau = 0, 1##, but that is more complicated than it needs to be.

Using ## \tau = 0 ##, I get:
[tex] R_v (0) = E[v(t) v(t - 0)] = E[e(t)e(t)] - 2cE[v(t - 1)e(t)] + c^2 E[v(t - 1) v(t - 1)] [/tex]
The cross term ##E[v(t - 1)e(t)]## vanishes because ##v(t - 1)## depends only on ##e(t - 1), e(t - 2), \dots##, which are uncorrelated with ##e(t)##, so:
[tex] R_v (0) = R_e (0) + c^2 R_v (0) \rightarrow R_v (0) = \frac{\sigma_e ^2}{1 - c^2} [/tex]

Then for ## \tau = 1##:
[tex] R_v (1) = E[v(t) v(t - 1)] = E[e(t)e(t - 1)] - cE[e(t)v(t - 2)] - cE[e(t-1)v(t - 1)] + c^2 E[v(t - 1) v(t - 2)] [/tex]
The terms ##E[e(t)e(t - 1)]## and ##E[e(t)v(t - 2)]## vanish because ##e(t)## is white and uncorrelated with earlier values of ##v##, leaving:
[tex] R_v (1) = c^2 R_v (1) - cE[e(t-1)v(t - 1)] = c^2 R_v(1) - c \left( E[e(t - 1) e(t - 1)] - c E[e(t - 1) v(t - 2)] \right) [/tex]
[tex] \rightarrow R_v(1) = \frac{-c R_e (0)}{1 - c^2} = \frac{- c \sigma_e ^2}{1 - c^2} [/tex]
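A quick numerical sanity check of these two results (a minimal sketch in Python with illustrative values ##c = 0.5## and ##\sigma_e = 1##, which are not part of the original problem):
[code]
import numpy as np

# Simulate v(t) = -c*v(t-1) + e(t) with zero-mean white Gaussian noise e(t).
rng = np.random.default_rng(0)
c, sigma_e, N = 0.5, 1.0, 1_000_000   # illustrative values only

e = rng.normal(0.0, sigma_e, N)
v = np.zeros(N)
for t in range(1, N):
    v[t] = -c * v[t - 1] + e[t]

v = v[1000:]                      # discard the start-up transient
R0_hat = np.mean(v * v)           # sample estimate of R_v(0)
R1_hat = np.mean(v[1:] * v[:-1])  # sample estimate of R_v(1)

print(f"R_v(0): sample {R0_hat:.4f}  theory {sigma_e**2 / (1 - c**2):.4f}")
print(f"R_v(1): sample {R1_hat:.4f}  theory {-c * sigma_e**2 / (1 - c**2):.4f}")
[/code]
With these values the theory gives ##R_v(0) = 4/3 \approx 1.333## and ##R_v(1) = -2/3 \approx -0.667##, and the sample estimates should land very close to them.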
 

FAQ: Signal Processing: Finding the auto-correlation

What is auto-correlation in signal processing?

Auto-correlation in signal processing is a mathematical tool used to measure the similarity of a signal with a delayed version of itself. It helps to identify repeating patterns and frequencies in a signal.

Why is auto-correlation important in signal processing?

Auto-correlation is important in signal processing because it allows us to analyze signals and extract useful information from them. It is particularly useful in identifying periodic signals and removing noise from a signal.

How is auto-correlation calculated?

Auto-correlation is calculated by multiplying a signal with a delayed version of itself and then integrating the product over a specific time interval. This process is repeated for different delay values to obtain the auto-correlation function.
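For a sampled (discrete-time) signal, the integral becomes a sum over samples. A minimal sketch of that computation in Python (the function name and the biased ##1/N## normalisation here are just one common convention):
[code]
import numpy as np

def autocorrelation(x, max_lag):
    """Biased sample autocorrelation: R(tau) = (1/N) * sum over t of x[t] * x[t - tau]."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    return np.array([np.dot(x[lag:], x[:N - lag]) / N for lag in range(max_lag + 1)])

# Example: a noisy sinusoid; its autocorrelation decays slowly and peaks again near the period.
t = np.arange(1000)
x = np.sin(2 * np.pi * t / 50) + 0.3 * np.random.default_rng(1).normal(size=t.size)
print(autocorrelation(x, 5))
[/code]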

What are the applications of auto-correlation in signal processing?

Auto-correlation has many applications in signal processing, such as in speech recognition, audio and video compression, radar and sonar systems, and time series analysis. It is also used in digital communications to detect and correct errors in a signal.

Are there any limitations of using auto-correlation in signal processing?

Yes, there are some limitations of using auto-correlation in signal processing. It assumes that the signal is stationary and that there is no external noise present. It also requires a large amount of data to obtain accurate results. Additionally, it may not be suitable for non-linear signals or signals with a low signal-to-noise ratio.
