Homework Statement: $$E(u_{i,j}\vert u_{-i,j}) = \frac{1}{a}(u_{i-1,j}+u_{i+1,j}+u_{i,j-1}+u_{i,j+1})$$
Relevant Equations: $$E(u_{i,j}\vert u_{-i,j}) = \frac{1}{a}(u_{i-1,j}+u_{i+1,j}+u_{i,j-1}+u_{i,j+1})$$
My question relates to subsection 2.2.1 of [this article][1]. This subsection recalls the work of Lindgren, Rue, and Lindström (2011) on Gaussian Markov Random Fields (GMRFs). The subsection starts with a two-dimensional regular lattice where the 4 first-order neighbours of $u_{i,j}$ are identified. The article defines the full conditional distribution through the expectation $$E(u_{i,j}\vert u_{-i,j}) = \frac{1}{a}(u_{i-1,j}+u_{i+1,j}+u_{i,j-1}+u_{i,j+1})$$ and variance $$Var(u_{i,j}\vert u_{-i,j}) = \frac{1}{a}.$$
This is then re-expressed in terms of the precision matrix; due to symmetry only the upper right quadrant is shown, with the diagonal element at the bottom left:
$$
\begin{array}{cc}
-1 & \\
a & -1
\end{array}
$$
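To check that I am reading the stencil correctly, I put together a small numerical sketch of my own (not from the article): it builds the first-order precision matrix on a small periodic lattice and conditions the corresponding zero-mean Gaussian on all other sites. The lattice size ##n = 6##, the periodic (torus) boundary, and the value ##a = 5## (any ##|a| > 4##) are just arbitrary choices for the test.

```python
# My own sketch: build Q on an n x n torus from the first-order stencil
# (Q_ii = a, Q_ij = -1 for the 4 neighbours) and verify numerically that
# conditioning reproduces E(u_ij | u_-ij) = (1/a) * (sum of 4 neighbours)
# and Var(u_ij | u_-ij) = 1/a.
import numpy as np

n = 6          # lattice is n x n, wrapped into a torus so every site has 4 neighbours
a = 5.0        # diagonal value; |a| > 4 keeps Q positive definite

def idx(i, j):
    """Map lattice coordinates (with wrap-around) to a vector index."""
    return (i % n) * n + (j % n)

# Precision matrix from the stencil
Q = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        Q[idx(i, j), idx(i, j)] = a
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            Q[idx(i, j), idx(i + di, j + dj)] = -1.0

Sigma = np.linalg.inv(Q)   # covariance of the zero-mean GMRF

# Condition site k on all the others using the usual Gaussian formulas
k = idx(2, 3)
rest = [m for m in range(n * n) if m != k]
u = np.random.default_rng(0).normal(size=n * n)   # arbitrary values for the other sites

S_kr = Sigma[k, rest]
S_rr = Sigma[np.ix_(rest, rest)]
cond_mean = S_kr @ np.linalg.solve(S_rr, u[rest])
cond_var = Sigma[k, k] - S_kr @ np.linalg.solve(S_rr, S_kr)

# Compare with the formulas quoted from the article
neigh_sum = u[idx(1, 3)] + u[idx(3, 3)] + u[idx(2, 2)] + u[idx(2, 4)]
print(np.allclose(cond_mean, neigh_sum / a))   # True
print(np.allclose(cond_var, 1.0 / a))          # True
```

So the stencil and the stated full conditionals do agree numerically, but that still doesn't tell me where they come from.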
Extending to second-order neighbours (i.e. the neighbours of first-order neighbours), the precision matrix becomes (again, just the upper right quadrant)
$$
\begin{array}{ccc}
1 & & \\
-2a & 2 & \\
4+a^2 & -2a & 1
\end{array}
$$
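One thing I noticed (my own observation, so I don't know whether this is the intended derivation): convolving the first-order stencil with itself reproduces exactly these second-order numbers, which makes me suspect the second-order precision is somehow the "square" of the first-order one. A quick check, again with an arbitrary ##a = 5##:

```python
# My own check: the full 3x3 first-order stencil convolved with itself
# gives a 5x5 stencil whose entries match the second-order quadrant above.
import numpy as np
from scipy.signal import convolve2d

a = 5.0  # arbitrary |a| > 4, as before

# Full first-order stencil: a at the centre, -1 at the 4 neighbours
stencil1 = np.array([[ 0, -1,  0],
                     [-1,  a, -1],
                     [ 0, -1,  0]], dtype=float)

stencil2 = convolve2d(stencil1, stencil1)   # 5x5 result
print(stencil2)
# Centre value is 4 + a**2, the first-order neighbours get -2a,
# the diagonal neighbours get 2, and the sites two steps away get 1,
# matching the upper right quadrant quoted above.
```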
I am new to this topic and am trying to understand where the expressions for the conditional expectation and variance come from, and how the precision matrices were derived. I'd appreciate a thorough explanation and derivation for both the first-order and second-order cases. I looked in the book 'Gaussian Markov Random Fields: Theory and Applications' (Rue and Held), and this looks very similar to the conditional autoregressive (CAR) model defined in Chapter 1. However, there the full conditionals are written as
$$
x_i \vert \mathbf{x}_{-i} \sim N\left(\sum_{j\neq i}\beta_{ij}x_{j},\kappa_i^{-1} \right)
$$
and the elements of the corresponding precision matrix are stated to be ##Q_{ii} = \kappa_i## and ##Q_{ij} = -\kappa_{i}\beta_{ij}## for ##i\neq j##. This seems more general, which leaves me wondering how the conditional mean and variance at the start of this post were derived (along with the precision matrices). Where does ##a## come from, and why do we scale by this amount? Any help addressing this is much appreciated.
Note that ##\mathbf{x}_{-i}## means the vector of random variables excluding ##x_i##.
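For what it's worth, here is as far as I got trying to line the two formulations up (just my own attempt, so it may be off). If every site has the same conditional precision ##\kappa_i = a## and weights ##\beta_{ij} = 1/a## for its four first-order neighbours (and ##\beta_{ij} = 0## otherwise), then the CAR formulas above give
$$
Q_{ii} = \kappa_i = a, \qquad Q_{ij} = -\kappa_i\beta_{ij} = -a\cdot\frac{1}{a} = -1 \quad \text{for first-order neighbours } j,
$$
which matches the first-order quadrant above. But I still don't see what motivates this particular choice of ##\kappa_i## and ##\beta_{ij}##, or how the second-order entries follow from it.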
[1]: https://becarioprecario.bitbucket.io/spde-gitbook/ch-intro.html#sec:spde