Solving Autocovariance Function $\gamma(t+h,t)$

  • MHB
  • Thread starter nacho-man
In summary: for $h = 0$ the autocovariance is $\sigma^2 + \theta^2 \sigma^2$; for $|h| = 1$ it is $\theta \sigma^2$ (the proposed $2\theta\sigma^2$ double-counts, since at a fixed lag only one of the two cross terms is nonzero); and for $|h| > 1$ it is $0$. The textbook answer is correct.
  • #1
nacho-man
This one is bugging me!

Let $\{Z_t\} \sim (0, \sigma^2)$ (zero-mean white noise)

And $X_t = Z_t + \theta Z_{t-1}$

I'm trying to find the autocovariance function $\gamma(t+h,t)$ and nearly have it, but I'm struggling with some conceptual issues.

$\gamma(t+h,t) = \text{COV}[Z_{t+h} + \theta Z_{t-1+h}, Z_t + \theta Z_{t-1}]$

= $\text{COV}(Z_{t+h}, Z_t) + \theta \text{COV}(Z_{t+h}, Z_{t-1}) + \theta \text{COV}(Z_{t-1+h}, Z_t) + \theta^2 \text{COV}(Z_{t-1+h}, Z_{t-1})$

$\text{COV}(Z_{t+h}, Z_t) = \sigma^2$ (nonzero only at $h=0$)
$\theta \text{COV}(Z_{t+h}, Z_{t-1}) = \theta \sigma^2$ (nonzero only at $h=-1$)
$\theta \text{COV}(Z_{t-1+h}, Z_t) = \theta \sigma^2$ (nonzero only at $h=1$)
$\theta^2 \text{COV}(Z_{t-1+h}, Z_{t-1}) = \theta^2 \sigma^2$ (nonzero only at $h=0$)

So, to summarise,

for $h = 0$, autocovariance = $\sigma^2 + \theta^2 \sigma^2$
for $|h| = 1$, autocovariance = $\theta \sigma^2 + \theta \sigma^2 = 2 \theta \sigma^2$ <<< textbook disagrees here!
for $|h| > 1$, autocovariance = 0

The answers are attached to this post; I have a discrepancy for $|h| = 1$ and cannot see why. Is there a typo in the book? I have an additional follow-up question, depending on the response I receive for this initial post!

Any help very much appreciated as always,
thank you in advance.
 

Attachments

  • Untitled.jpg
  • #2
nacho said:
[quote of post #1 snipped]

Because we have zero means:

$$\begin{aligned}\gamma_X(t+1,t)&=E(X_{t+1}X_t)\\
&=E\big( (Z_{t+1}+\theta Z_t)(Z_{t}+\theta Z_{t-1})\big)\\
&=E(Z_{t+1}Z_t)+\theta E(Z_{t+1} Z_{t-1})+\theta E(Z_t Z_t)+\theta^2 E(Z_t Z_{t-1})
\end{aligned}$$

Now as the $Z_i$s are uncorrelated with mean zero, all the expectations but the third are zero, so:

$$\gamma_X(t+1,t)=\theta \sigma^2$$

and similarly:

$$\begin{aligned}\gamma_X(t-1,t)&=E(X_{t-1}X_t)\\
&=E\big( (Z_{t-1}+\theta Z_{t-2})(Z_{t}+\theta Z_{t-1})\big)\\
&=E(Z_{t-1}Z_t)+\theta E(Z_{t-1} Z_{t-1})+\theta E(Z_{t-2} Z_t)+\theta^2 E(Z_{t-2} Z_{t-1})
\end{aligned}$$

This time it is the second expectation that survives, and again:

$$\gamma_X(t-1,t)=\theta \sigma^2$$

So the book is right: at a fixed lag $h = \pm 1$ only one of your two cross terms is nonzero, so the two $\theta\sigma^2$ contributions should not be added together.
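As a sanity check, one can simulate the process and estimate the autocovariances directly. A minimal numpy sketch (the values $\theta = 0.6$, $\sigma = 2$ are arbitrary choices, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n = 0.6, 2.0, 1_000_000

# White noise Z_t with mean 0 and variance sigma^2, and X_t = Z_t + theta * Z_{t-1}
z = rng.normal(0.0, sigma, n + 1)
x = z[1:] + theta * z[:-1]

def sample_gamma(x, h):
    """Sample autocovariance at lag h (the process mean is 0 here)."""
    return float(np.mean(x[h:] * x[: len(x) - h]))

print(sample_gamma(x, 0))  # close to (1 + theta**2) * sigma**2 = 5.44
print(sample_gamma(x, 1))  # close to theta * sigma**2 = 2.4, not 2*theta*sigma**2 = 4.8
print(sample_gamma(x, 2))  # close to 0
```

The lag-1 estimate lands near $\theta\sigma^2$, agreeing with the textbook rather than the doubled value.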


FAQ: Solving Autocovariance Function $\gamma(t+h,t)$

What is an autocovariance function?

An autocovariance function is a mathematical tool for measuring the linear dependence between a time series and its own past values. It gives the covariance between observations of the series separated by different time lags.

How is the autocovariance function calculated?

The autocovariance function is calculated by taking the covariance between a time series and its own lagged values at different time lags. This means taking the average of the product of the deviations from the mean of the two values being compared.
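The recipe above can be written out directly. A minimal sketch (the biased $1/n$ normalisation is assumed, as in most texts):

```python
import numpy as np

def acovf(x, h):
    """Sample autocovariance at lag h: the average of products of
    deviations from the mean taken h steps apart (1/n convention)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return float(np.sum(d[h:] * d[: len(d) - h]) / len(d))

series = [2.0, 4.0, 6.0, 8.0]
print(acovf(series, 0))  # 5.0, the (biased) sample variance
print(acovf(series, 1))  # 1.25
```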

What is the significance of the autocovariance function?

The autocovariance function is significant because it helps us understand the underlying patterns and relationships within a time series. It can be used to identify trends, seasonal patterns, and other important characteristics of a time series.

How is the autocovariance function used in statistical analysis?

The autocovariance function is used in statistical analysis to check for autocorrelation in a time series. If there is a significant autocorrelation, it means that the current value of the time series is related to its past values and the data may not be independent. This is important in determining the appropriate statistical models to use.

How can the autocovariance function be used in forecasting?

The autocovariance function can be used in forecasting by helping to identify any patterns or trends in the time series data. This information can then be used to make predictions about future values of the time series. Additionally, the autocovariance function can be used to assess the accuracy of a forecasting model by comparing the predicted values to the actual values of the time series.
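For the MA(1) model in this thread the connection is concrete: $\rho(1) = \gamma(1)/\gamma(0) = \theta/(1+\theta^2)$, so a lag-1 autocorrelation estimated from data can be inverted to recover $\theta$ before the model is used for forecasting. A sketch (numpy assumed; the invertible root $|\theta| < 1$ is taken, which requires $|\rho(1)| < 1/2$):

```python
import numpy as np

def theta_from_rho1(rho1):
    """Invert rho1 = theta / (1 + theta^2) for the invertible
    root |theta| < 1 (method of moments; needs 0 < |rho1| < 0.5)."""
    return (1 - np.sqrt(1 - 4 * rho1**2)) / (2 * rho1)

# Simulate X_t = Z_t + theta * Z_{t-1}, then recover theta from the data
rng = np.random.default_rng(1)
theta, n = 0.5, 200_000
z = rng.normal(size=n + 1)
x = z[1:] + theta * z[:-1]
d = x - x.mean()
rho1 = np.sum(d[1:] * d[:-1]) / np.sum(d * d)
print(theta_from_rho1(rho1))  # close to the true theta = 0.5
```

The closed-form inversion comes from solving the quadratic $\rho_1\theta^2 - \theta + \rho_1 = 0$ and keeping the root inside the unit interval.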
