Set of 2-dimensional orthogonal matrices equal to a union of sets

In summary, the conversation discusses the set of $2$-dimensional orthogonal matrices, denoted by $O(2, \mathbb{R})$, and shows that it splits into two disjoint subsets: the rotation matrices $D=\{d_{\alpha}\}$ and the reflection matrices $S=\{s_{\alpha}\}$, whose union is all of $O(2, \mathbb{R})$. Additionally, it is shown that for each $\alpha$ in the real numbers there is an orthonormal basis $B_{\alpha}$ of $\mathbb{R}^2$ consisting of the vectors $e_{\alpha}$ and $f_{\alpha}$, and the matrix $M_{B_{\alpha}}(\sigma_{\alpha})$ of the map $\sigma_{\alpha}(x)=s_{\alpha}x$ with respect to this basis is computed.
  • #1
mathmari
Hey! :giggle:

The set of $2$-dimensional orthogonal matrices is given by $$O(2, \mathbb{R})=\{a\in \mathbb{R}^{2\times 2}\mid a^ta=u_2\}$$ Show the following:

(a) $O(2, \mathbb{R})=D\cup S$ and $D\cap S=\emptyset$. It holds that $D=\{d_{\alpha}\mid \alpha\in \mathbb{R}\}$ and $S=\{s_{\alpha}\mid \alpha\in \mathbb{R}\}$, where $d_{\alpha}=\begin{pmatrix}\cos (\alpha) & -\sin (\alpha) \\ \sin (\alpha ) & \cos (\alpha )\end{pmatrix}$ and $s_{\alpha}=\begin{pmatrix} \cos (\alpha )& \sin (\alpha ) \\ \sin (\alpha) & -\cos(\alpha)\end{pmatrix}$.

(b) For all $\alpha\in \mathbb{R}$, $B_{\alpha}$ is an orthonormal basis of $\mathbb{R}^2$. It holds that $B_{\alpha}=(e_{\alpha}, f_{\alpha})$, where $e_{\alpha}=\begin{pmatrix}\cos \left (\frac{\alpha}{2}\right ) \\ \sin \left (\frac{\alpha}{2}\right )\end{pmatrix}$ and $f_{\alpha}=\begin{pmatrix}-\sin \left (\frac{\alpha}{2}\right ) \\ \cos \left (\frac{\alpha}{2}\right )\end{pmatrix}$.

(c) Calculate $M_{B_{\alpha}}(\sigma_{\alpha})$, where $\sigma_{\alpha}(x)=s_{\alpha}x$.

I have done the following:

(a) To show that $D\cap S=\emptyset$, we assume that this is not true, i.e. that there is a matrix that belongs to $D$ and to $S$. Then for some $\alpha\in \mathbb{R}$ it must hold that $-\sin (\alpha)= \sin (\alpha) \Rightarrow \sin (\alpha)=0$ and that $\cos (\alpha)= -\cos (\alpha) \Rightarrow \cos (\alpha)=0$. Since $\sin^2(\alpha)+\cos^2(\alpha)=1$, there is no such $\alpha$, and therefore the intersection is empty.
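To be thorough, the two matrices could a priori use different parameters, but the same argument goes through: if $d_{\alpha}=s_{\beta}$ for some $\alpha,\beta\in\mathbb{R}$, then comparing the $(1,2)$ and $(2,1)$ entries gives $$-\sin(\alpha)=\sin(\beta)=\sin(\alpha)\ \Rightarrow\ \sin(\alpha)=\sin(\beta)=0,$$ and comparing the $(1,1)$ and $(2,2)$ entries gives $$\cos(\alpha)=\cos(\beta)=-\cos(\alpha)\ \Rightarrow\ \cos(\alpha)=\cos(\beta)=0,$$ which again contradicts $\sin^2(\alpha)+\cos^2(\alpha)=1$.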

How can we show that $O(2, \mathbb{R})=D\cup S$ ?

(b) We have to show that $e_{\alpha}$ and $f_{\alpha}$ are linearly independent, so that we can say that $B_{\alpha}$ is a basis of $\mathbb{R}^2$, right? To show also that it is an orthonormal basis, we have to show that the vectors $e_{\alpha}$ and $f_{\alpha}$ are orthogonal, i.e. their dot product is equal to $0$, and that they are normalized, i.e. that both vectors have length $1$, right?

(c) Do we have to write the columns of $s_{\alpha}$ as a linear combination of $e_{\alpha}$ and $f_{\alpha}$ ? :unsure:
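By the way, as a quick numerical sanity check of the definitions (a minimal numpy sketch; the helper names d and s are ad hoc), both families do satisfy $a^ta=u_2$, and $e_{\alpha}$, $f_{\alpha}$ have unit length and zero dot product:

import numpy as np

def d(a):
    # rotation-type matrix d_alpha
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

def s(a):
    # reflection-type matrix s_alpha
    return np.array([[np.cos(a),  np.sin(a)],
                     [np.sin(a), -np.cos(a)]])

for a in np.linspace(0.0, 2.0 * np.pi, 9):
    assert np.allclose(d(a).T @ d(a), np.eye(2))   # d_alpha is in O(2, R)
    assert np.allclose(s(a).T @ s(a), np.eye(2))   # s_alpha is in O(2, R)
    e = np.array([np.cos(a / 2), np.sin(a / 2)])   # e_alpha
    f = np.array([-np.sin(a / 2), np.cos(a / 2)])  # f_alpha
    assert np.isclose(e @ e, 1) and np.isclose(f @ f, 1) and np.isclose(e @ f, 0)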
 
  • #2
mathmari said:
How can we show that $O(2, \mathbb{R})=D\cup S$ ?

Hey mathmari!

Suppose we consider a matrix $a$ with column vectors $\vec a$ and $\vec b$.
What can we deduce from the condition $a^t a = u_2$? 🤔

mathmari said:
(b) show that the vectors $e_{\alpha}$ and $f_{\alpha}$ are orthogonal, i.e. their dot product is equal to $0$, and that they are normalized, i.e. that both vectors have length $1$, right?
Yep. (Nod)

mathmari said:
(c) Do we have to write the columns of $s_{\alpha}$ as a linear combination of $e_{\alpha}$ and $f_{\alpha}$ ?

What we need is to find the images of $e_\alpha$ and $f_\alpha$ in terms of $e_\alpha$ and $f_\alpha$.
It may help to write $s_{\alpha}$ as a linear combination of $e_{\alpha}$ and $f_{\alpha}$. :unsure:
 
  • #3
Klaas van Aarsen said:
Suppose we consider a matrix $a$ with column vectors $\vec a$ and $\vec b$.
What can we deduce from the condition $a^t a = u_2$? 🤔

We have that $\vec{a}\cdot \vec{a}=\vec{b}\cdot \vec{b}=1$ and $\vec{a}\cdot \vec{b}=0$, right? :unsure:
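Writing $a$ with columns $\vec a$ and $\vec b$, the condition reads entrywise
$$a^ta=\begin{pmatrix}\vec{a}\cdot \vec{a} & \vec{a}\cdot \vec{b} \\ \vec{b}\cdot \vec{a} & \vec{b}\cdot \vec{b}\end{pmatrix}=\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}=u_2.$$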
Klaas van Aarsen said:
What we need is to find the images of $e_\alpha$ and $f_\alpha$ in terms of $e_\alpha$ and $f_\alpha$.
It may help to write $s_{\alpha}$ as a linear combination of $e_{\alpha}$ and $f_{\alpha}$. :unsure:

Will the coefficients of the linear combination be matrices? :unsure:
 
  • #4
mathmari said:
We have that $\vec{a}\cdot \vec{a}=\vec{b}\cdot \vec{b}=1$ and $\vec{a}\cdot \vec{b}=0$, right?

Yep. 🤔

mathmari said:
Will the coefficients of the linear combination be matrices?

Perhaps we should consider the matrix given by $B_\alpha$.
It transforms the unit vectors to $e_\alpha$ and $f_\alpha$.
So we should be able to construct $M_{B_\alpha}(\sigma_\alpha)$ using $B_\alpha$ and its inverse. 🤔
 
  • #5
Klaas van Aarsen said:
Yep. 🤔

These properties are satisfied by the columns of the matrices in $D$ and $S$, right? So does the desired result just follow then? :unsure:
Klaas van Aarsen said:
Perhaps we should consider the matrix given by $B_\alpha$.
It transforms the unit vectors to $e_\alpha$ and $f_\alpha$.
So we should be able to construct $M_{B_\alpha}(\sigma_\alpha)$ using $B_\alpha$ and its inverse. 🤔

We have that $B_{\alpha}= \begin{pmatrix}\cos \left (\frac{\alpha}{2}\right ) & - \sin \left (\frac{\alpha}{2}\right ) \\ \sin \left (\frac{\alpha}{2}\right ) & \cos \left (\frac{\alpha}{2}\right )\end{pmatrix}$.

Do you mean that $M_{B_\alpha}(\sigma_\alpha)=B_\alpha^{-1}B_\alpha$ ? :unsure:
 
  • #6
mathmari said:
These properties are satisfied by the columns of the matrices in $D$ and $S$, right? So does the desired result just follow then?

Not necessarily.
That is what we still have to prove.
We have to prove that any $\vec a$ can be written as $(\cos\alpha,\sin\alpha)$. Can it?
And additionally that any $\vec b$ is either $(-\sin\alpha,\cos\alpha)$ or $(\sin\alpha,-\cos\alpha)$.
Can we prove that? 🤔
mathmari said:
Do you mean that $M_{B_\alpha}(\sigma_\alpha)=B_\alpha^{-1}B_\alpha$ ?

Close... but that right side is just the identity, isn't it? Something is missing... (Sweating)
 
  • #7
Klaas van Aarsen said:
Not necessarily.
That is what we still have to prove.
We have to prove that any $\vec a$ can be written as $(\cos\alpha,\sin\alpha)$. Can it?
And additionally that any $\vec b$ is either $(-\sin\alpha,\cos\alpha)$ or $(\sin\alpha,-\cos\alpha)$.
Can we prove that? 🤔

Do we prove that by proving that these two vectors are orthogonal to each other and have length $1$? :unsure:
Klaas van Aarsen said:
Close... but that right side is just the identity, isn't it? Something is missing... (Sweating)

Ah yes. Should it be $M_{B_\alpha}(\sigma_\alpha)=B_\alpha^{-1}s_\alpha B_\alpha$ ? :unsure:
 
  • #8
mathmari said:
Do we prove that by proving that these two vectors are orthogonal to each other and have length $1$?

We should observe that any vector of length 1 must be on the unit circle, which implies that it can be written as $(\cos\alpha,\sin\alpha)$ for some $\alpha$.
And in 2 dimensions there are only 2 unit vectors orthogonal to it (dot product $0$).
That is either $(\sin\alpha,-\cos\alpha)$ or $(-\sin\alpha,\cos\alpha)$. 🤔
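In detail: if $\vec b=(x,y)$ has length $1$ and $\vec a\cdot\vec b=0$ with $\vec a=(\cos\alpha,\sin\alpha)$, then
$$x\cos\alpha+y\sin\alpha=0\quad\Rightarrow\quad (x,y)=t\,(-\sin\alpha,\cos\alpha)\ \text{ for some }t\in\mathbb{R},$$
and $x^2+y^2=1$ forces $t=\pm 1$. 🤔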

mathmari said:
Ah yes. Should it be $M_{B_\alpha}(\sigma_\alpha)=B_\alpha^{-1}s_\alpha B_\alpha$ ?
Yep. (Nod)

And since $B_\alpha$ is an orthogonal matrix, its inverse is the same as its transpose. 🤔
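The reason the conjugation formula works: if $x=B_\alpha c$, i.e. $c$ is the coordinate vector of $x$ with respect to $B_\alpha$, then
$$\sigma_\alpha(x)=s_\alpha B_\alpha c=B_\alpha\left(B_\alpha^{-1}s_\alpha B_\alpha\right)c,$$
so the coordinate vector of the image is $\left(B_\alpha^{-1}s_\alpha B_\alpha\right)c$, which is exactly what $M_{B_\alpha}(\sigma_\alpha)$ must produce. And $B_\alpha^tB_\alpha=u_2$ gives $B_\alpha^{-1}=B_\alpha^t$. 🤔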
 
  • #9
Klaas van Aarsen said:
We should observe that any vector of length 1 must be on the unit circle, which implies that it can be written as $(\cos\alpha,\sin\alpha)$ for some $\alpha$.
And in 2 dimensions there are only 2 unit vectors orthogonal to it (dot product $0$).
That is either $(\sin\alpha,-\cos\alpha)$ or $(-\sin\alpha,\cos\alpha)$. 🤔

Can we just say that any vector of length 1 must be on the unit circle, which implies that it can be written as $(\cos\alpha,\sin\alpha)$ for some $\alpha$, or do we have to prove that? :unsure:
Klaas van Aarsen said:
And since $B_\alpha$ is an orthogonal matrix, its inverse is the same as its transpose. 🤔

So we have that
\begin{align*}M_{B_\alpha}(\sigma_\alpha)&=B_\alpha^T s_\alpha B_\alpha \\ & = \begin{pmatrix}\cos \left (\frac{\alpha}{2}\right ) & \sin \left (\frac{\alpha}{2}\right ) \\ -\sin \left (\frac{\alpha}{2}\right ) & \cos \left (\frac{\alpha}{2}\right )\end{pmatrix} \begin{pmatrix} \cos (\alpha )& \sin (\alpha ) \\ \sin (\alpha) & -\cos(\alpha)\end{pmatrix} \begin{pmatrix}\cos \left (\frac{\alpha}{2}\right ) & - \sin \left (\frac{\alpha}{2}\right ) \\ \sin \left (\frac{\alpha}{2}\right ) & \cos \left (\frac{\alpha}{2}\right )\end{pmatrix} \\ & = \begin{pmatrix}\cos \left (\frac{\alpha}{2}\right ) & \sin \left (\frac{\alpha}{2}\right ) \\ -\sin \left (\frac{\alpha}{2}\right ) & \cos \left (\frac{\alpha}{2}\right )\end{pmatrix} \begin{pmatrix} \cos (\alpha )\cos \left (\frac{\alpha}{2}\right )+\sin (\alpha)\sin \left (\frac{\alpha}{2}\right )& -\cos (\alpha )\sin \left (\frac{\alpha}{2}\right )+\sin (\alpha)\cos \left (\frac{\alpha}{2}\right ) \\ \sin (\alpha )\cos \left (\frac{\alpha}{2}\right )-\cos (\alpha)\sin \left (\frac{\alpha}{2}\right ) & -\sin (\alpha )\sin \left (\frac{\alpha}{2}\right )-\cos (\alpha)\cos \left (\frac{\alpha}{2}\right )\end{pmatrix} \\ & = \begin{pmatrix}\cos \left (\frac{\alpha}{2}\right ) & \sin \left (\frac{\alpha}{2}\right ) \\ -\sin \left (\frac{\alpha}{2}\right ) & \cos \left (\frac{\alpha}{2}\right )\end{pmatrix} \begin{pmatrix} \cos \left (\frac{\alpha}{2}\right )& \sin \left (\frac{\alpha}{2}\right ) \\ \sin \left (\frac{\alpha}{2}\right ) & -\cos \left (\frac{\alpha}{2}\right )\end{pmatrix} \\ & = \begin{pmatrix} \cos^2 \left (\frac{\alpha}{2}\right )+\sin^2 \left (\frac{\alpha}{2}\right ) & \cos \left (\frac{\alpha}{2}\right )\sin \left (\frac{\alpha}{2}\right )-\sin \left (\frac{\alpha}{2}\right )\cos \left (\frac{\alpha}{2}\right ) \\ 0 & -\sin^2 \left (\frac{\alpha}{2}\right )-\cos^2 \left (\frac{\alpha}{2}\right )\end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix} \end{align*}
where the middle product was simplified using the angle-subtraction formulas $\cos\left(\alpha-\frac{\alpha}{2}\right)=\cos\left(\frac{\alpha}{2}\right)$ and $\sin\left(\alpha-\frac{\alpha}{2}\right)=\sin\left(\frac{\alpha}{2}\right)$.

:unsure:
 
  • #10
mathmari said:
Can we just say that any vector of length 1 must be on the unit circle, which implies that it can be written as $(\cos\alpha,\sin\alpha)$ for some $\alpha$, or do we have to prove that?

Yes, we can just state that. (Nod)
mathmari said:
So we have that
\begin{align*}M_{B_\alpha}(\sigma_\alpha)&=B_\alpha^T s_\alpha B_\alpha = \dots = \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix} \end{align*}

That's correct. (Nod)

And the result has a nice interpretation: with respect to $B_\alpha$, the map $\sigma_\alpha$ fixes $e_\alpha$ and sends $f_\alpha$ to $-f_\alpha$, so $\sigma_\alpha$ is the reflection in the line spanned by $e_\alpha$. 🤔
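If you want to double-check the algebra by machine, here is a minimal sympy sketch (the variable names are ad hoc):

import sympy as sp

a = sp.symbols('alpha', real=True)
B = sp.Matrix([[sp.cos(a/2), -sp.sin(a/2)],
               [sp.sin(a/2),  sp.cos(a/2)]])  # basis matrix B_alpha
s = sp.Matrix([[sp.cos(a),  sp.sin(a)],
               [sp.sin(a), -sp.cos(a)]])      # reflection s_alpha
print(sp.simplify(B.T * s * B))  # expected: Matrix([[1, 0], [0, -1]])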
 

FAQ: Set of 2-dimensional orthogonal matrices equal to a union of sets

What is a 2-dimensional orthogonal matrix?

A 2-dimensional orthogonal matrix is a $2\times 2$ matrix with real entries whose columns (and likewise whose rows) form an orthonormal set: they are mutually orthogonal unit vectors. Equivalently, its transpose equals its inverse, i.e. $Q^tQ=u_2$.

What is the significance of a set of 2-dimensional orthogonal matrices?

A set of 2-dimensional orthogonal matrices is significant because it represents a group of matrices that have useful properties, such as preserving distances and angles. They are commonly used in applications involving rotations and reflections.
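For instance (a small numpy sketch; the angle and sample vectors are arbitrary), applying an orthogonal matrix leaves lengths and dot products, and hence angles, unchanged:

import numpy as np

theta = 0.7  # arbitrary rotation angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v = np.array([3.0, 4.0])
w = np.array([-1.0, 2.0])
assert np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v))  # length preserved
assert np.isclose((Q @ v) @ (Q @ w), v @ w)                  # dot product preserved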

What does it mean for a set of 2-dimensional orthogonal matrices to be equal to a union of sets?

This means that the set of 2-dimensional orthogonal matrices is composed of two smaller sets whose elements all satisfy the orthogonality condition: the rotations $D=\{d_\alpha\}$, which have determinant $+1$, and the reflections $S=\{s_\alpha\}$, which have determinant $-1$. Every 2-dimensional orthogonal matrix belongs to exactly one of the two.

How can a set of 2-dimensional orthogonal matrices be represented mathematically?

The set of 2-dimensional orthogonal matrices forms a group under matrix multiplication, denoted $O(2)$: the product of two orthogonal matrices is orthogonal (closure), matrix multiplication is associative, the identity matrix is orthogonal, and the inverse of an orthogonal matrix, namely its transpose, is again orthogonal.
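As a quick illustration of closure and inverses (a numpy sketch; the angles are arbitrary):

import numpy as np

def is_orthogonal(M):
    # M is orthogonal iff M^T M is the identity
    return np.allclose(M.T @ M, np.eye(M.shape[0]))

A = np.array([[np.cos(0.3), -np.sin(0.3)], [np.sin(0.3), np.cos(0.3)]])  # a rotation
B = np.array([[np.cos(1.2),  np.sin(1.2)], [np.sin(1.2), -np.cos(1.2)]])  # a reflection
assert is_orthogonal(A @ B)             # closure under multiplication
assert is_orthogonal(np.linalg.inv(A))  # inverse stays orthogonal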

What are some real-world applications of 2-dimensional orthogonal matrices?

2-dimensional orthogonal matrices have numerous applications in fields such as computer graphics, robotics, and engineering. They are used to represent rotations and reflections in 2-dimensional space, making them useful in tasks such as image processing, object tracking, and motion planning.
