TL;DR Summary: I'm a bit confused about the matrix notation and the index notation of vector rotations, specifically how to understand the consistency of the index placement compared to the matrix notation.
Consider a vector ##\mathbf{r} = r^i e_i##, where ##r^i## are the components, ##e_i## are the basis vectors, and ##i = 1, \ldots, n##. In matrix notation,
\begin{equation*}
\mathbf{r} = \begin{bmatrix} e_1 & e_2 & \ldots & e_n \end{bmatrix} \begin{bmatrix} r^1 \\ r^2 \\ \vdots \\ r^n \end{bmatrix}
\end{equation*}
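For concreteness, with ##n = 2## this is just
\begin{equation*}
\mathbf{r} = \begin{bmatrix} e_1 & e_2 \end{bmatrix} \begin{bmatrix} r^1 \\ r^2 \end{bmatrix} = r^1 e_1 + r^2 e_2 .
\end{equation*}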
We can apply an orthogonal transformation, e.g. a rotation, to both the components and the basis vectors by inserting the identity matrix ##I = R^T R## in the middle, without changing the vector. In index notation,
\begin{equation*}
\mathbf{r} = e_k (R^T)^i_{\; j} R^k_{\; i} r^j
\end{equation*}
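To spell out where this comes from: starting from ##\mathbf{r} = e_k r^k## and using the orthogonality of ##R##, so that ##R^k_{\; i} (R^T)^i_{\; j} = \delta^k_{\; j}##,
\begin{equation*}
\mathbf{r} = e_k r^k = e_k \delta^k_{\; j} r^j = e_k R^k_{\; i} (R^T)^i_{\; j} r^j = e_k (R^T)^i_{\; j} R^k_{\; i} r^j .
\end{equation*}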
Notice that the indices are placed so that the sums over the components and over the basis vectors make sense. However, I'm a bit confused when comparing the two notations,
\begin{align*}
& \mathbf{r} = e R^T R r \quad \text{matrix notation}\\
& \mathbf{r} = e_k (R^T)^i_{\; j} R^k_{\; i} r^j \quad \text{index notation}
\end{align*}
For the first equation, I believe ##R^T R## is a plain matrix multiplication, so, ignoring the present context, it should follow the row-column rule ##(R^T)^i_{\; j} R^j_{\; k}##: the column index of ##R^T##, labeled ##j##, is summed against the row index of ##R##, also labeled ##j##. So how do I reconcile this with the second equation?
I know that we could just swap the order of the factors in ##(R^T)^i_{\; j} R^k_{\; i}##, since they are just numbers, so that ##(R^T)^i_{\; j} R^k_{\; i} = R^k_{\; i} (R^T)^i_{\; j}##; in this form the summed-over index is ##i##, and it now follows the row-column rule with a repeated adjacent index. I think I have a partial understanding of the situation, but I need more clarification on this.
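If I write out the two contraction patterns explicitly, they seem to correspond to different matrix products,
\begin{align*}
(R^T)^i_{\; j} R^j_{\; k} &= (R^T R)^i_{\; k} \quad \text{(sum over } j\text{)},\\
R^k_{\; i} (R^T)^i_{\; j} &= (R\, R^T)^k_{\; j} \quad \text{(sum over } i\text{)},
\end{align*}
and both reduce to the Kronecker delta only because ##R## is orthogonal.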