Math Challenge - November 2018

In summary, we discussed various mathematical problems including sequences, polynomials, integrals, and functions. We also explored combinatorics, probability, calculus, and linear algebra. These problems were submitted by referees, who will post solutions on the 15th of the following month. The rules for solving problems were also outlined, including the use of proofs and resources.
  • #141
julian said:
I guess when you originally set the question you didn't want to give any clues away that the irreducible components are organised as ##\{1 \}##, ##\{x , y \}##, and ##\{ x^2 , xy , y^2 \}## as that was question 16 b).

I'll be adjusting the calculations I did before (by adjusting ##\varphi##) to get the appropriate homomorphism later when I'm less preoccupied.
I actually worked backwards: I took some irreps of ##\mathfrak{sl}_2##, wrote them as polynomials, changed the basis to ##\mathfrak{su}_2##, and finally chose the Pauli matrices (##\sigma_j##) instead of proper Lie algebra vectors (##i\sigma_j##), then mixed the basis vectors in order (yes) not to give away the solution for free, et voilà: the standard example in every book about Lie algebras became a standard example of what physicists use instead. I thought this is an interesting (and simple) example to demonstrate what physicists call "ladder operators".
 
  • #142
I think I found a solution for Basics 4:
Let the bug's altitude in the evening of day ##i## be denoted ##z_i## (in meters).
The tree height in the evening of day ##i## is ##L_0 + ir##, with ##r## being the daily rate of tree growth (0.20 m/day).
Then the relative bug position w.r.t. the tree top in the evening of day ##i## is ##\lambda_i = \frac{z_i}{L_0+ir}##.
On the next morning (day ##i+1##), the bug's altitude becomes ##z_i + d = \lambda_i \cdot (L_0+ir) + d##, with ##d## being the nightly rate of climb (0.10 m/night).
On the next evening (day ##i+1##), the relative bug position will be the same as in the morning: ##\lambda_{i+1} = \frac{z_i + d}{L_0+ir} = \lambda_i + \frac{d}{L_0+ir}##. Obviously, its absolute altitude will be higher due to the uniform tree growth.
The bug will reach the tree top as soon as ##\lambda_i \geq 1##. Although this can take a long time, we can be sure the bug will succeed: if we let ##i## form a continuum (changing it from integer to real), the relative position becomes a real function with logarithmic growth, ##\lambda(x)= \int_1^x \frac{d}{L_0+(t-1)r}\,dt = \frac{d}{r} \ln\left(1+ \frac{(x-1)r}{L_0}\right)##, which increases with ##x## without upper bound.
The number of days needed to reach the tree top is obtained by solving the equation ##\lambda(x)= 1##. That is: ##x= 1+ \frac{L_0}{r}\left(e^{r/d} -1\right)##.
Numerically: ##x \approx 3195.53## days, that is, for an integer number of days: 3,196 days. However, the precise solution obtained by discrete summation is only 3,192 days. The stepwise sum is an upper Darboux sum of the decreasing integrand, so it exceeds the continuous logarithmic integral; the continuous estimate is consequently too large.
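The closed-form estimate can be checked against the discrete recursion directly. A minimal sketch, assuming an initial tree height of ##L_0 = 100## m (not restated in this post, but consistent with the quoted ##x \approx 3195.53##):

```python
import math

L0 = 100.0   # assumed initial tree height in meters (consistent with x ~ 3195.53)
r = 0.20     # daily tree growth (m/day)
d = 0.10     # nightly climb (m/night)

# Discrete recursion: lambda_{i+1} = lambda_i + d / (L0 + i*r), starting at 0
lam, i = 0.0, 0
while lam < 1.0:
    lam += d / (L0 + i * r)
    i += 1
print(i)  # 3192, matching the discrete summation quoted above

# Continuous (logarithmic) estimate from the integral
x = 1 + (L0 / r) * (math.exp(r / d) - 1)
print(round(x, 2))  # 3195.53
```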
 
  • #143
I'll post proofs of parts a and b of fresh_42's proof outline for Problem 18 in post #125.
(a): I had earlier proved that for every positive integer ##n > 1##, ##|n|## is at most 1. If for all such ##n##, ##|n| = 1##, then that is the trivial case. So to be nontrivial, there must be some such ##n## where ##|n| < 1##.

Since every nonempty set of positive integers has a least element, there must be some minimum value of ##n## that satisfies ##|n| < 1##, and since ##|1| = 1##, this minimum value must be greater than 1.

(b): To prove that this minimum ##n## must be a prime number, I consider the case of composite ##n##, ##n = ab##, where both ##a## and ##b## are greater than 1 and less than ##n##. If ##n## is this minimum ##n##, then ##|a|## and ##|b|## must both equal 1. But ##|n| = |ab| = |a| |b| = 1##, which contradicts this premise. Therefore, this minimum ##n## must be a prime number.
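As a concrete sanity check (not part of the proof): for the ##p##-adic norm ##|n| = p^{-v_p(n)}##, the standard nontrivial example of such a value function and, I assume, the example this problem is heading toward, the minimal ##n > 1## with ##|n| < 1## is indeed the prime ##p##. A brute-force sketch:

```python
def padic_norm(n, p):
    """|n|_p = p^(-v_p(n)) for a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return p ** (-v)

def is_prime(n):
    return n > 1 and all(n % k for k in range(2, int(n ** 0.5) + 1))

# For several primes p, the smallest n > 1 with |n|_p < 1 is p itself (a prime)
for p in (2, 3, 5, 7):
    smallest = next(n for n in range(2, 1000) if padic_norm(n, p) < 1)
    assert smallest == p and is_prime(smallest)
```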
 
  • #144
I continue to be stumped by parts c and d of fresh_42's proof outline in post #125, even with his hints in post #139. In particular, I find that the inequality of part c is a very weak constraint, not much stronger than the triangle inequality in the original definition of the value function. It appears to be too weak to prove anything in part d.
 
  • #145
lpetrich said:
I continue to be stumped by parts c and d of fresh_42's proof outline in post #125, even with his hints in post #139. In particular, I find that the inequality of part c is a very weak constraint, not much stronger than the triangle inequality in the original definition of the value function. It appears to be too weak to prove anything in part d.
You know that there is a minimal prime with ##|p|<1## and that all natural numbers have ##|n| \leq 1\,.## The goal is to find out which numbers are of exactly norm ##1\,.##
 
  • #146
fresh_42 said:
You know that there is a minimal prime with ##|p|<1## and that all natural numbers have ##|n| \leq 1\,.## The goal is to find out which numbers are of exactly norm ##1\,.##
I can easily prove that for all ##n## with prime factors less than ##p##, ##|n| = 1##. But if ##n## has prime factors larger than ##p## and is not divisible by ##p##, then I am stuck. Expressing ##n## in the form ##kp+m## with ##0 < m < p##, I find
##|kp + m| \leq \max(|kp|,|m|) = \max(|kp|,1) = 1##,
and that does not add anything to the non-Archimedeanness of the value function.
 
  • #147
lpetrich said:
I can easily prove that for all ##n## with prime factors less than ##p##, ##|n| = 1##. But if ##n## has prime factors larger than ##p## and is not divisible by ##p##, then I am stuck. Expressing ##n## in the form ##kp+m## with ##0 < m < p##, I find
##|kp + m| \leq \max(|kp|,|m|) = \max(|kp|,1) = 1##,
and that does not add anything to the non-Archimedeanness of the value function.
Have I missed where you have already shown the maximum formula? You use it here.

Anyway, if you have that all numbers which are coprime to ##p## are of norm one, then you also know all others for which ##|n|<1##, and even their norm as a function of ##p##. With that, the rest is only a bit of technical stuff.
 
  • #148
fresh_42 said:
Have I missed where you have already shown the maximum formula? You use it here.
I don't know what you have in mind, but I must note that ##|kp+m| \leq 1## is not equivalent to ##|kp+m| = 1##. Or is there something that I was missing somewhere?
 
  • #149
lpetrich said:
I don't know what you have in mind, but I must note that ##|kp+m| \leq 1## is not equivalent to ##|kp+m| = 1##. Or is there something that I was missing somewhere?
We know ##|p| < 1## and ##|n| \leq 1## and assume for the moment that ##|a+b| \leq \operatorname{max}\{\,|a|\, , \,|b|\,\}\,.##

Next we show that ##|a+b|=\operatorname{max}\{\,|a|,|b|\,\}##:
Let ##|a|<|b|\,.## Then ##|a|<|b|=|(a+b)-a| \leq \operatorname{max}\{\,|a+b|,|a|\,\} =|a+b| \leq \operatorname{max}\{\,|a|,|b|\,\} = |b|\,.##

Thus we have ##|m|=1## for all ##m## which are coprime to ##p##. All other numbers are of the form ##n=p^r\cdot m## with ##|n|=|p|^r##.
 
  • #150
But if ##|a + b| = \max(|a|,|b|)## for all integers ##a## and ##b##, then ##|0| = |1 - 1| = \max(|1|,|-1|) = \max(1,1) = 1##, which is contrary to our definition of this value function or norm. So there must be some flaw in the proof of this statement.
 
  • #151
lpetrich said:
But if ##|a + b| = \max(|a|,|b|)## for all integers ##a## and ##b##, then ##|0| = |1 - 1| = \max(|1|,|-1|) = \max(1,1) = 1##, which is contrary to our definition of this value function or norm. So there must be some flaw in the proof of this statement.
Yes. I had to use ##|a| \neq |b|##, for otherwise I couldn't have concluded that ##|a|<\max\{\,|a+b|,|a|\,\}=|a+b|\,.##
In your post #146 where it is needed, we have ## |kp| =|k|\cdot |p| \leq |p| \lt |m|\,.##
 
  • #152
So with ##|a+b| = \max(|a|,|b|)## if ##|a| \neq |b|##, I can handle all the positive integers. For ##n## relatively prime to ##p##, the smallest number with ##|p| < 1##, I get ##n = kp + m##, where ##m## is nonzero and less than ##p##:
##|n| = |kp+m| = \max(|kp|,|m|) = \max(|kp|,1) = 1##.

With that case solved, consider ##n## having ##p## as a prime factor: ##n = k p^m## with ##k## relatively prime to ##p##. Then
##|n| = |k p^m| = |k||p|^m = |p|^m##.

For a fraction ##x = n_1/n_2##, where ##n_1 = k_1 p^{m_1}## and likewise for ##n_2##, we get ##|x| = |p|^{m_1 - m_2}##.
 
  • #153
lpetrich said:
So with ##|a+b| = \max(|a|,|b|)## if ##|a| \neq |b|##, I can handle all the positive integers. For ##n## relatively prime to ##p##, the smallest number with ##|p| < 1##, I get ##n = kp + m##, where ##m## is nonzero and less than ##p##:
##|n| = |kp+m| = \max(|kp|,|m|) = \max(|kp|,1) = 1##.

With that case solved, consider ##n## having ##p## as a prime factor: ##n = k p^m## with ##k## relatively prime to ##p##. Then
##|n| = |k p^m| = |k||p|^m = |p|^m##.

For a fraction ##x = n_1/n_2##, where ##n_1 = k_1 p^{m_1}## and likewise for ##n_2##, we get ##|x| = |p|^{m_1 - m_2}##.
That doesn't define the norm yet. In the end the definition should work for any rational number, including powers of ##p## and ##0##, and without self-reference.
Plus you still must show ##|a+b| \leq \max\{\,|a|,|b|\,\}\,. ##
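For concreteness, here is a sketch of a self-reference-free definition on all of ##\mathbb{Q}##, with ##|0| = 0## and ##|p| = t## for a fixed ##0 < t < 1## (the normalization ##t = 1/p## is the standard ##p##-adic choice; the parameter is my assumption, not fixed by the thread):

```python
from fractions import Fraction

def v_p(n, p):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def norm(x, p, t=None):
    """|x| = t^(v_p(numerator) - v_p(denominator)), with |0| = 0 and 0 < t < 1."""
    if t is None:
        t = Fraction(1, p)
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    return Fraction(t) ** (v_p(abs(x.numerator), p) - v_p(x.denominator, p))

# multiplicativity and the strong triangle inequality on a sample pair
a, b = Fraction(18, 5), Fraction(10, 9)
assert norm(a * b, 3) == norm(a, 3) * norm(b, 3)
assert norm(a + b, 3) <= max(norm(a, 3), norm(b, 3))
```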
 
  • #154
Solution to problem 16.

The actual calculations are in the attached pdf file. I just outline in this post what was done.

We define a basis for ##\mathfrak{su}(2,\mathbb{C})## as:

##
u_1 = i \sigma_1 =
\begin{pmatrix}
0 & i \\ i & 0
\end{pmatrix}
, \quad
u_2 = i \sigma_2 =
\begin{pmatrix}
0 & 1 \\ -1 & 0
\end{pmatrix}
, \quad
u_3 = i \sigma_3 =
\begin{pmatrix}
i & 0 \\ 0 & -i
\end{pmatrix}
##

with brackets:

##
[ u_1 , u_2 ] = - 2 u_3 , \quad [ u_2 , u_3 ] = - 2 u_1 , \quad [ u_3 , u_1 ] = - 2 u_2 .
##
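These bracket relations are quick to verify numerically; a small sketch with numpy:

```python
import numpy as np

# Pauli matrices and the skew-Hermitian basis u_j = i * sigma_j
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
u1, u2, u3 = 1j * s1, 1j * s2, 1j * s3

def brk(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

assert np.allclose(brk(u1, u2), -2 * u3)
assert np.allclose(brk(u2, u3), -2 * u1)
assert np.allclose(brk(u3, u1), -2 * u2)
```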

We compute ##[(\alpha_1 , \alpha_2 , \alpha_3) , (\alpha_1' , \alpha_2' , \alpha_3')]## defined by ##[(\alpha_1 u_1 + \alpha_2 u_2 + \alpha_3 u_3) , (\alpha_1' u_1 + \alpha_2' u_2 + \alpha_3' u_3)]##. We easily find:

##
[(\alpha_1 , \alpha_2 , \alpha_3) , (\alpha_1' , \alpha_2' , \alpha_3')] =
##
##
= 2 (\alpha_3 \alpha_2' - \alpha_2 \alpha_3') u_1 + 2 (\alpha_1 \alpha_3' - \alpha_3 \alpha_1') u_2 + 2 (\alpha_2 \alpha_1' - \alpha_1 \alpha_2') u_3 .
##

We define an adjusted ##\varphi## by:

\begin{align*}
\tilde{\varphi} (\alpha_1 u_1 + \alpha_2 u_2 + \alpha_3 u_3) &= \varphi (\alpha_1 (i \sigma_1) + \alpha_2 (i \sigma_2) + \alpha_3 (i \sigma_3)) \\
&= \varphi( (i \alpha_1) \sigma_1 + ( i\alpha_2) \sigma_2+ (i \alpha_3) \sigma_3) \\
\end{align*}

Then from

\begin{align*}
\varphi(\alpha_1 \sigma_1 +\alpha_2 \sigma_2+\alpha_3 \sigma_3)&.(a_0+a_1x+a_2x^2+a_3y+a_4y^2+a_5xy)= \\
&= x(- i \alpha_1 a_3 + \alpha_2 a_3 - \alpha_3 a_1 )+\\
&+ x^2(2 i \alpha_1 a_5 + 2 \alpha_2 a_5 + 2 \alpha_3 a_2 )+\\
&+ y(i \alpha_1 a_1 + \alpha_2 a_1 + \alpha_3 a_3 )+\\
&+ y^2(2 i \alpha_1 a_5 -2 \alpha_2 a_5 -2 \alpha_3 a_4 )+\\
&+ xy(- i \alpha_1 a_2 - i \alpha_1 a_4 +\alpha_2 a_2 - \alpha_2 a_4 )
\end{align*}

we have:

\begin{align*}
\tilde{\varphi} (\alpha_1 u_1 + \alpha_2 u_2 + \alpha_3 u_3)&.(a_0+a_1x+a_2x^2+a_3y+a_4y^2+a_5xy) = \\
= \varphi( (i \alpha_1) \sigma_1 + ( i\alpha_2) \sigma_2+ (i \alpha_3) \sigma_3)&.(a_0+a_1x+a_2x^2+a_3y+a_4y^2+a_5xy)= \\
&= x(\alpha_1 a_3 +i\alpha_2 a_3 - i\alpha_3 a_1 )+\\
&+ x^2(-2\alpha_1 a_5 +2 i\alpha_2 a_5 + 2i\alpha_3 a_2 )+\\
&+ y(- \alpha_1 a_1 + i \alpha_2 a_1 +i\alpha_3 a_3 )+\\
&+ y^2(-2\alpha_1 a_5 -2i\alpha_2 a_5 -2i\alpha_3 a_4 )+\\
&+ xy(\alpha_1 a_2 +\alpha_1 a_4 +i\alpha_2 a_2 -i\alpha_2 a_4 ) \qquad Eq (1)
\end{align*}
Part 16 a) is done in the pdf file, where I prove

##
[\tilde{\varphi} (\alpha_1 , \alpha_2 , \alpha_3) , \tilde{\varphi} (\alpha_1' , \alpha_2' , \alpha_3')] = \tilde{\varphi} ([(\alpha_1 , \alpha_2 , \alpha_3) , (\alpha_1' , \alpha_2' , \alpha_3')])
##

when applied to ##1,x,x^2,y,y^2##, and ##xy## in turn.

16 b) Components that are transformed into linear combinations of themselves under repeated application of ##\tilde{\varphi} (\alpha_1 u_1 + \alpha_2 u_2 +\alpha_3 u_3)## form an invariant subspace. An irreducible component is an invariant subspace that cannot be decomposed into smaller invariant subspaces. From Eq 1 we have:

\begin{align*}
\tilde{\varphi} (\alpha_1 u_1 + \alpha_2 u_2 + \alpha_3 u_3)&.1 = 0\\
\tilde{\varphi} (\alpha_1 u_1 + \alpha_2 u_2 + \alpha_3 u_3)&.x= - x i \alpha_3 + y (- \alpha_1 + i \alpha_2) \\
\tilde{\varphi} (\alpha_1 u_1 + \alpha_2 u_2 + \alpha_3 u_3)&.x^2 = 2 x^2 i \alpha_3 + xy (\alpha_1 + i \alpha_2) \\
\tilde{\varphi} (\alpha_1 u_1 + \alpha_2 u_2 + \alpha_3 u_3)&.y = x (- \alpha_1 + i \alpha_2) + y i \alpha_3 \\
\tilde{\varphi} (\alpha_1 u_1 + \alpha_2 u_2 + \alpha_3 u_3)&.y^2 = - 2 y^2 i \alpha_3 - xy (- \alpha_1 + i \alpha_2) \\
\tilde{\varphi} (\alpha_1 u_1 + \alpha_2 u_2 + \alpha_3 u_3)&.xy = 2 x^2 (- \alpha_1 + i \alpha_2) - 2 y^2 (\alpha_1 + i \alpha_2) .
\end{align*}

It is then obvious that the irreducible components are:

##
\{ 1 \}
##
##
\{ x , y \}
##
##
\{ y^2 , xy , x^2 \} .
##

16 c) The corresponding vectors of maximum weight are

##
1 ,
##
##
y ,
##
##
x^2
##

respectively - see the pdf file.
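The block structure of these subspaces can also be checked numerically. A sketch with numpy, taking the six action formulas above at face value and building the matrix of ##\tilde{\varphi}## in the basis ##(1, x, x^2, y, y^2, xy)## for a randomly chosen ##(\alpha_1,\alpha_2,\alpha_3)##:

```python
import numpy as np

rng = np.random.default_rng(0)
a1, a2, a3 = rng.normal(size=3)

# columns = images of the basis vectors (1, x, x^2, y, y^2, xy)
M = np.zeros((6, 6), dtype=complex)
M[:, 0] = 0                                   # .1 = 0
M[1, 1], M[3, 1] = -1j * a3, -a1 + 1j * a2    # .x
M[2, 2], M[5, 2] = 2j * a3, a1 + 1j * a2      # .x^2
M[1, 3], M[3, 3] = -a1 + 1j * a2, 1j * a3     # .y
M[4, 4], M[5, 4] = -2j * a3, a1 - 1j * a2     # .y^2
M[2, 5], M[4, 5] = 2 * (-a1 + 1j * a2), -2 * (a1 + 1j * a2)  # .xy

blocks = [{0}, {1, 3}, {2, 4, 5}]             # {1}, {x, y}, {x^2, y^2, xy}
for B in blocks:
    for j in B:
        # the image of every basis vector in B stays inside span(B)
        assert all(M[i, j] == 0 for i in range(6) if i not in B)
```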

p.s. Hello @fresh_42. I've done the calculation with the adjusted ##\varphi## and still think there is a slight typo in the question. In the original question you wrote:

\begin{align*}
\varphi(\alpha_1 \sigma_1 +\alpha_2 \sigma_2+\alpha_3 \sigma_3)&.(a_0+a_1x+a_2x^2+a_3y+a_4y^2+a_5xy)= \\
&= \dots\\
&+ y(-i \alpha_1 a_1 - \alpha_2 a_1 +i\alpha_3 a_3 )+\\
&+ \dots
\end{align*}

but I think it should say:

\begin{align*}
\varphi(\alpha_1 \sigma_1 +\alpha_2 \sigma_2+\alpha_3 \sigma_3)&.(a_0+a_1x+a_2x^2+a_3y+a_4y^2+a_5xy)= \\
&= \dots\\
&+ y(i \alpha_1 a_1 + \alpha_2 a_1 +i\alpha_3 a_3 )+\\
&+ \dots
\end{align*}

Could you check my calculations in the pdf?
 

Attachments

  • prob16.pdf
  • #155
@julian
Great job, Julian!
Btw.: The Heisenberg algebra question in 15. is far easier, with fewer computations. Only a bit of thinking about centers is needed.

You are right, there is a sign error in the setup of ##\varphi## somewhere.
I took the basis and the representation theorem from
https://www.physicsforums.com/insights/journey-manifold-su2-part-ii/
which I now checked again, and which turned out to be correct.

I thought I had taken ##x## as the eigenvector of minimal weight ##-1## and ##y## for the maximal eigenvector. But with this setting my definition of ##\varphi## doesn't match up. Guess I deserved all these calculations to check again:

Pauli - matrices ##\sigma_j \, \longrightarrow ## skew-Hermitian version ##i \cdot \sigma_j \,\longrightarrow ## basis ##\langle U,V,W\,|\,[U,V]=W\; , \;[V,W]=U\; , \;[W,U]=V \rangle## because this is easiest to memorize ## \longrightarrow ## standard basis ##\langle H,X,Y\,|\,[H,X]=2X\; , \;[H,Y]=-2Y\; , \;[X,Y]=H \rangle## in order to have the example ##\mathfrak{sl}_2## from the textbooks with CSA ##\langle H \rangle \,\longrightarrow ## and all the way back to the Pauli matrices to make the question "physics" compatible

At least I haven't chosen a weird basis in ##\mathbb{C}x \oplus \mathbb{C}y## and something like ##3x-5y## the maximal vector.

These were the ladder structures I had in mind:
##\mathfrak{su}_2.\mathbb{C}=\{\,0\,\}##
\begin{align*}
(0,1) &\stackrel{X}{\longrightarrow} (1,0)\stackrel{X}{\longrightarrow} (0,0)\\
(1,0)&\stackrel{Y}{\longrightarrow}(0,1)\stackrel{Y}{\longrightarrow}(0,0)
\end{align*}
\begin{align*}
(0,0,1)&\stackrel{X}{\longrightarrow}(0,-i,0)\stackrel{X}{\longrightarrow}(2,0,0)\stackrel{X}{\longrightarrow}(0,0,0)\\
(1,0,0)&\stackrel{Y}{\longrightarrow} (0,-i,0) \stackrel{Y}{\longrightarrow} (0,0,2)\stackrel{Y}{\longrightarrow}(0,0,0)
\end{align*}
 
  • #156
A partial go at problem 21:

The deck transformations of the covering space ##\tilde{X}## constitute a group of homeomorphisms of that covering space (where the group operation is the usual operation of composition of homeomorphisms).

Definitions:

Definition of a homeomorphism:

A function ##h: X \rightarrow Y## between two topological spaces is a homeomorphism if it has the following properties:

##h## is a bijection (one-to-one and onto),
##h## is continuous,
the inverse function ##h^{-1}## is continuous (##h## is an open mapping).

Proving the Group property of ##\mathcal{D} (p)##:

Closure under composition of two deck transformations:

Say ##p \circ h_1 = p## and ##p \circ h_2 = p##; then we wish to prove that ##p \circ (h_1 \circ h_2) = p##. For any ##\tilde{x} \in \tilde{X}## we have

\begin{align*}
[p \circ (h_1 \circ h_2)] (\tilde{x}) &= p \big( h_1 \big( h_2 (\tilde{x}) \big) \big) \\
& = [p \circ h_1] \big( h_2 (\tilde{x}) \big) \\
& = p \big( h_2 (\tilde{x}) \big) \qquad \qquad \text{as } p \circ h_1 = p \\
& = [p \circ h_2] (\tilde{x}) \\
& = p (\tilde{x}) \qquad \qquad \qquad \text{as } p \circ h_2 = p
\end{align*}

hence ##p \circ (h_1 \circ h_2) = p##.

Identity Homeomorphism is a deck transformation:

The identity map ##id_{\tilde{X}} : \tilde{X} \rightarrow \tilde{X}## is a homeomorphism and we obviously have ##p \circ id_{\tilde{X}} = p##.

Inverse of a deck transformation is a deck transformation:

Say ##p \circ h = p##. For any ##\tilde{x} \in \tilde{X}## we have:

\begin{align*}
p (\tilde{x}) &= [p \circ (h \circ h^{-1})] (\tilde{x}) \\
&= [p \circ h] \big( h^{-1} (\tilde{x}) \big) \\
& = p \big( h^{-1} (\tilde{x}) \big) \qquad \qquad \qquad \text{as } p \circ h = p \\
& = [p \circ h^{-1}] (\tilde{x}) \\
\end{align*}

hence ##p \circ h^{-1} = p##.

Associativity of deck transformations:

Suppose that ##h_1 : \tilde{X} \rightarrow \tilde{X}##, ##h_2 : \tilde{X} \rightarrow \tilde{X}##, and ##h_3 : \tilde{X} \rightarrow \tilde{X}## are deck transformations. Then ##p \circ [(h_3 \circ h_2) \circ h_1] = p \circ [h_3 \circ (h_2 \circ h_1)]##. Proof. For any ##\tilde{x} \in \tilde{X}## we have

\begin{align*}
[ p \circ [(h_3 \circ h_2) \circ h_1 ] ] (\tilde{x}) & = p \big( ( h_3 \circ h_2 ) \big( h_1 (\tilde{x}) \big) \big) \\
& = p \big( h_3 \big( h_2 \big( h_1 (\tilde{x}) \big) \big) \big) \\
& = p \big( h_3 \big( (h_2 \circ h_1) (\tilde{x}) \big) \big) \\
& = [ p \circ [h_3 \circ (h_2 \circ h_1)] ] (\tilde{x}) . \\
\end{align*}

Proof that ##h (\tilde{x}) = \tilde{x}## for some ##\tilde{x} \in \tilde{X}## implies ##h = id_{\tilde{X}}##:

First we prove that if ##p \circ g = p \circ h##, where ##g## and ##h## are deck transformations, and ##g (\tilde{x}) = h(\tilde{x})## for some point ##\tilde{x} \in \tilde{X}##, then ##g = h##. It then easily follows that if ##h (\tilde{x}) = \tilde{x}## for some point ##\tilde{x} \in \tilde{X}##, then ##h = id_{\tilde{X}}##.

We prove that ##A := \{ \tilde{x} \in \tilde{X} : g (\tilde{x}) = h (\tilde{x}) \}## is both open and closed, thereby proving that ##A = \tilde{X}## because of the connectedness of ##\tilde{X}##.

Both ##g : \tilde{X} \rightarrow \tilde{X}## and ##h : \tilde{X} \rightarrow \tilde{X}## are continuous maps as they are homeomorphisms. Let ##\tilde{x} \in \tilde{X}##. There exists an open neighborhood ##U \subseteq X## containing the point ##p (g(\tilde{x}))## such that ##p^{-1} (U)## is a disjoint union of open sets ##V_\iota##, each of which is mapped homeomorphically onto ##U## by ##p## (for each such open set ##\tilde{U}## we write ##p | \tilde{U} : \tilde{U} \rightarrow U## for this homeomorphism). One of these open sets contains ##g (\tilde{x})##; denote it by ##\tilde{U}##. Also, one of these open sets contains ##h (\tilde{x})## (this is because ##p \circ h = p \circ g##); let us denote this open set by ##\tilde{V}##. Let ##N_{\tilde{x}} = g^{-1} (\tilde{U}) \cap h^{-1} (\tilde{V})##. Then ##N_{\tilde{x}}## is an open set in ##\tilde{X}## containing ##\tilde{x}## (open because ##g## and ##h## are continuous).

Consider the case when ##\tilde{x} \in A##. Then ##g (\tilde{x}) = h (\tilde{x})##, and therefore ##\tilde{V} = \tilde{U}##. It follows from this that both ##g## and ##h## map the open set ##N_{\tilde{x}}## into ##\tilde{U}##. We now use that ##p \circ g = p \circ h## and that ##p | \tilde{U} : \tilde{U} \rightarrow U## is a homeomorphism to prove that ##g | N_{\tilde{x}} = h | N_{\tilde{x}}##. Take an arbitrary point ##\tilde{x}' \in N_{\tilde{x}}## other than the point ##\tilde{x}##, and suppose that ##g (\tilde{x}') \not= h (\tilde{x}')##. We have ##(p \circ g) (\tilde{x}') = (p \circ h) (\tilde{x}')##, which says that two different points get mapped to the same point by ##p | \tilde{U}##, contradicting that ##p | \tilde{U}## is injective. As such we must have ##g (\tilde{x}') = h (\tilde{x}')## for all ##\tilde{x}' \in N_{\tilde{x}}##. Thus ##N_{\tilde{x}} \subset A##. We have thus shown that for each ##\tilde{x} \in A## there exists an open set ##N_{\tilde{x}}## such that ##\tilde{x} \in N_{\tilde{x}}## and ##N_{\tilde{x}} \subset A##. Thus ##A## is open.

Next we show that the set ##\tilde{X} \setminus A## is open as well. So consider the case ##\tilde{x} \in \tilde{X} \setminus A##. In this case ##\tilde{U} \cap \tilde{V} = \emptyset##, since ##g (\tilde{x}) \not= h (\tilde{x})## (having both ##g (\tilde{x})## and ##h (\tilde{x})## in ##\tilde{U}## together with ##(p \circ g) (\tilde{x}) = (p \circ h) (\tilde{x})## is in contradiction with the injectivity of ##p | \tilde{U}##). Again define ##N_{\tilde{x}} = g^{-1} (\tilde{U}) \cap h^{-1} (\tilde{V})##. But ##g (N_{\tilde{x}}) \subset \tilde{U}## and ##h (N_{\tilde{x}}) \subset \tilde{V}##. Therefore ##g (\tilde{x}') \not= h (\tilde{x}')## for ##\tilde{x}' \in N_{\tilde{x}}##, and thus ##N_{\tilde{x}} \subset \tilde{X} \setminus A##. We have then shown that for each ##\tilde{x} \in \tilde{X} \setminus A## there exists an open set ##N_{\tilde{x}}## such that ## \tilde{x} \in N_{\tilde{x}}## and ##N_{\tilde{x}} \subset \tilde{X} \setminus A##. Thus ##\tilde{X} \setminus A## is open.

The subset ##A## of ##\tilde{X}## is therefore both open and closed. As ##A## was assumed to be non-empty, we deduce that ##A = \tilde{X}##, because ##\tilde{X}## is connected. Thus ##g = h##, which was the required result.

The last step is to note that if ##h (\tilde{x}) = \tilde{x}## for some point ##\tilde{x} \in \tilde{X}##, then ##h## agrees at that point with the identity function ##id_{\tilde{X}}## (which satisfies ##p \circ id_{\tilde{X}} = p##), so the result just proved gives ##h = id_{\tilde{X}}##.
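A toy illustration of such a deck group (not part of the proof above): for the double cover ##p(z) = z^2## of the unit circle by itself, the deck transformations are ##z \mapsto z## and ##z \mapsto -z##, forming a group of order two. A quick numeric sketch:

```python
import cmath

p = lambda z: z * z    # covering map of the unit circle, z -> z^2
h = lambda z: -z       # candidate deck transformation
ident = lambda z: z

# sample points on the unit circle
samples = [cmath.exp(2j * cmath.pi * k / 7) for k in range(7)]
for z in samples:
    assert abs(p(h(z)) - p(z)) < 1e-12        # p o h = p, so h is a deck transformation
    assert abs(h(h(z)) - ident(z)) < 1e-12    # h o h = id, so {id, h} is a group of order 2
```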
 
  • #157
julian said:
A partial go at problem 21: ...
What do you mean by partial? It looks fine to me!

Only a few minor remarks:
  • Subsequent application of functions is associative. Done. And ##p\circ (h\circ g^{-1})=p ## does the other group properties in one step.
  • For the second part, you could have started as you did with the pair ##(h,g)## but then switched to ##g=1##. Just because it is less to write.
  • That ##A## is closed can be shown more briefly by using the fact that diagonals in Hausdorff spaces are closed. This is even equivalent to ##T_2##.
 
  • #158
fresh_42 said:
What do you mean by partial? It looks fine to me!

Only a few minor remarks:
  • Subsequent application of functions is associative. Done. And ##p\circ (h\circ g^{-1})=p ## does the other group properties in one step.
  • For the second part, you could have started as you did with the pair ##(h,g)## but then switched to ##g=1##. Just because it is less to write.
  • That ##A## is closed can be shown more briefly by using the fact that diagonals in Hausdorff spaces are closed. This is even equivalent to ##T_2##.

I'm still looking into the issue of when homeomorphisms form a group. I have proven the general result that the composition of homeomorphisms is a homeomorphism, and a result about an inverse function being a homeomorphism.

But I think there are subtle issues relating to the topology. For example the Identity Homeomorphism:

The identity map ##id_X : X \rightarrow X## is obviously bijective. However, the identity mapping ##id_X : (X ,\tau) \rightarrow (X , \tau')## is continuous if and only if ##\tau' \subseteq \tau##. Thus if we topologize ##X## in such a way that this inclusion is proper, the identity mapping in this direction will be continuous while its inverse (also the identity) will not be. We only have a continuous inverse if ##\tau = \tau'##.
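This can be made concrete on a two-point set; a sketch where ##\tau## is the discrete and ##\tau'## the indiscrete topology:

```python
X = {1, 2}
tau = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]  # discrete
tau_p = [frozenset(), frozenset({1, 2})]                                # indiscrete

def continuous(tau_dom, tau_cod, f=lambda x: x):
    """f: (X, tau_dom) -> (X, tau_cod) is continuous iff preimages of opens are open."""
    return all(frozenset(x for x in X if f(x) in U) in tau_dom for U in tau_cod)

assert continuous(tau, tau_p)       # id: (X, tau) -> (X, tau') is continuous
assert not continuous(tau_p, tau)   # its inverse (also the identity) is not
```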

Does this mean that in order to have a group-like structure we have to consider only homeomorphisms that preserve the topology ##\tau##?

I have some intuition that deck transformations would preserve the topology of ##\tilde{X}##, but haven't formed a rigorous proof yet.
 
  • #159
julian said:
Does this mean that in order to have a group-like structure we have to consider only homeomorphisms that preserve the topology?
I don't quite understand. The definition was for homeomorphisms, so the question is obsolete. Whether ##p\circ h = p## forms a group if ##h## isn't a homeomorphism is another issue. I don't think so, since the bijection might be sufficient for the algebraic properties, but not for the topological ones. And isn't the whole point of homeomorphisms that they preserve the topologies?
I have some intuition that deck transformations would preserve the topology of ##\tilde{X}##, but haven't formed a rigorous proof yet.
It's defined that way.
 
  • #160
I was trying to prove the continuity of ##h^{-1}## by showing ##( h^{-1} )^{-1} = h##, using bijectivity and the continuity of ##h##. Now I realize you can and should prove the continuity of the inverse ##h^{-1}## by topological arguments. Cleared up now.
 
  • #161
Is there a mistake in problem 20? It seems that ##I = \langle \mathcal{I} \rangle## is NOT a normal subgroup of ##F##, and hence ##F / I## is not a group (meaning that ##\mathcal{F} / \sim_\mathcal{I}## does NOT admit a group structure).

It seems that it is ##J = \langle \mathcal{J} \rangle## that is a normal subgroup of ##F## and hence ##F / J## is a group (meaning ##\mathcal{F} / \sim_\mathcal{J}## that does admit a group structure).

To prove that ##I## is not a normal subgroup of ##F## we just have to give an example of when ##f^{-1} i f \notin I## for some ##i \in I## and some ##f \in F##. An example of this is

\begin{align*}
[ u^{-1} \circ r \circ u ] (z) & = [ v \circ r \circ u ] (z) \\
& = v \big( \big( \frac{1}{2} (-1 + i \sqrt{3}) z \big)^{-1} \big) \\
& = v \big( - \frac{1}{2} (1 + i \sqrt{3}) \frac{1}{z} \big) \\
& = - \frac{1}{2} (1 + i \sqrt{3}) \big( - \frac{1}{2} (1 + i \sqrt{3}) \frac{1}{z} \big) \\
& = \frac{1}{2} (-1 + i \sqrt{3}) \frac{1}{z} \notin I
\end{align*}

(where we have used that ##v (z)## is the inverse of ##u (z)##).

I'll give all details in my next post.
 
  • #162
##r(u(z))=r\left( \left( \dfrac{1}{2}\left( -1+i\sqrt{3} \right)z \right) \right)= \dfrac{1}{2}\left( -1+i\sqrt{3} \right)z^{-1} \neq -\dfrac{1}{2}\left( 1+i\sqrt{3} \right)z^{-1}## .
Sorry, I should have mentioned that.
 
  • #163
fresh_42 said:
##r(u(z))=r\left( \left( \dfrac{1}{2}\left( -1+i\sqrt{3} \right)z \right) \right)= \dfrac{1}{2}\left( -1+i\sqrt{3} \right)z^{-1} \neq -\dfrac{1}{2}\left( 1+i\sqrt{3} \right)z^{-1}## .
Sorry, I should have mentioned that.

So you are saying, in other words, that the maps act only upon the ##z##. That is the way I originally did the question, and I still got that ##I## is not a normal subgroup! Let me redo the example I gave in my previous post:

\begin{align*}
[u^{-1} \circ r \circ u] (z) & = [v \circ r \circ u] (z) \\
& = v \left( \frac{1}{2} \left( - 1 + i \sqrt{3} \right) \frac{1}{z} \right) \\
& = \frac{1}{2} \left( - 1 + i \sqrt{3} \right) \frac{1}{- \frac{1}{2} \left( 1 + i \sqrt{3} \right) z} \\
& = - \frac{1}{2} (1 + i \sqrt{3}) z^{-1} \notin I
\end{align*}

so the conclusion is still that ##I## is not a normal subgroup.
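Under ordinary function composition this can be checked numerically. A sketch confirming only the conclusion (that ##u^{-1} \circ r \circ u## agrees with no element of ##I = \{z, -z, z^{-1}, -z^{-1}\}##), independently of which constant factor comes out:

```python
import cmath

c = 0.5 * (-1 + 1j * cmath.sqrt(3))   # primitive cube root of unity, u(z) = c*z
u = lambda z: c * z
v = lambda z: z / c                    # v = u^{-1}
r = lambda z: 1 / z

# the four elements of I = <p, q, r, s>
I = [lambda z: z, lambda z: -z, lambda z: 1 / z, lambda z: -1 / z]

conj = lambda z: v(r(u(z)))            # u^{-1} o r o u, ordinary composition
samples = [1 + 1j, 2 - 0.5j, -0.3 + 2j]
for f in I:
    # for each element of I there is a sample point where the values differ
    assert any(abs(conj(z) - f(z)) > 1e-9 for z in samples)
```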
 
  • #164
julian said:
So you are saying, in other words, that the maps act only upon the ##z##.
Yes. This happens if one "invents" problems rather than copies them from the internet, I guess.

I calculated:
\begin{align*}
\varphi(u^{-1})(r)=u^{-1}ru &= u^{-1}r\left(\dfrac{1}{2}\left( -1+i\sqrt{3} \right)z \right)\\
&= v \left(\dfrac{1}{2}\left( -1+i\sqrt{3} \right)z^{-1} \right)\\
&= \left(-\dfrac{1}{2}\left( 1+i\sqrt{3} \right) \right)\left(\dfrac{1}{2}\left( -1+i\sqrt{3} \right)z ^{-1} \right)\\
&= -\dfrac{1}{4}\left(1+i\sqrt{3} \right)\left(-1+i\sqrt{3} \right)z^{-1}\\
&= -\dfrac{1}{4} \cdot \left(-1 - 3 \right)z^{-1}\\
&= z^{-1}
\end{align*}
The goal is to consider the structure of a certain group with twelve elements.
 
  • #165
fresh_42 said:
Yes. This happens if one "invents" problems rather than copies them from the internet, I guess.

I calculated:
\begin{align*}
\varphi(u^{-1})(r)=u^{-1}ru &= u^{-1}r\left(\dfrac{1}{2}\left( -1+i\sqrt{3} \right)z \right)\\
&= v \left(\dfrac{1}{2}\left( -1+i\sqrt{3} \right)z^{-1} \right)\\
&= \left(-\dfrac{1}{2}\left( 1+i\sqrt{3} \right) \right)\left(\dfrac{1}{2}\left( -1+i\sqrt{3} \right)z ^{-1} \right)\\
&= -\dfrac{1}{4}\left(1+i\sqrt{3} \right)\left(-1+i\sqrt{3} \right)z^{-1}\\
&= -\dfrac{1}{4} \cdot \left(-1 - 3 \right)z^{-1}\\
&= z^{-1}
\end{align*}
The goal is to consider the structure of a certain group with twelve elements.

But if the maps act upon the ##z## then you should have:

\begin{align*}
v \left( \frac{1}{2} \left( - 1 + i \sqrt{3} \right) \frac{1}{z} \right) & = \frac{1}{2} \left( - 1 + i \sqrt{3} \right) \frac{1}{v (z)} \\
& = - \frac{1}{2} (1 + i \sqrt{3}) z^{-1}
\end{align*}
 
  • #166
julian said:
But if the maps act upon the ##z## then you should have:

\begin{align*}
v \left( \frac{1}{2} \left( - 1 + i \sqrt{3} \right) \frac{1}{z} \right) & = \frac{1}{2} \left( - 1 + i \sqrt{3} \right) \frac{1}{v (z)} \\
& = - \frac{1}{2} (1 + i \sqrt{3}) z^{-1}
\end{align*}
It doesn't act on the map, it acts on ##z##. And as we don't have rings or algebras, the factors simply remain factors. As said, I wanted to have a group with only twelve elements. The intended clue was that although we can define equivalence relations for any subgroup, only the normal ones give a factor (or quotient) group again.
 
  • #167
I'm confused. In post #162 you basically told me that I should not interpret ##r (u (z))## as meaning

##
r (u (z)) = r \left( \frac{1}{2} (-1 + i \sqrt{3}) z \right) = \left( \frac{1}{2} (-1 + i \sqrt{3}) z \right)^{-1} = - \frac{1}{2} (1 + i \sqrt{3}) z^{-1}
##

but should interpret as this instead:

##
r (u (z)) = r \left( \frac{1}{2} (-1 + i \sqrt{3}) z \right) = \frac{1}{2} (-1 + i \sqrt{3}) z^{-1} = \frac{1}{2} (-1 + i \sqrt{3}) r (z) .
##
 
  • #168
I already apologized for not being clear. What we have is ##\langle\mathcal{I} \rangle = V_4## and ##\langle\mathcal{J} \rangle=\mathbb{Z}_3##, and I wanted to combine them into a semidirect product, i.e. a group with ##12## elements, ##A_4## in this case, i.e. ##\varphi\, : \,\mathbb{Z}_3 \longrightarrow \operatorname{Aut}(V_4)## by conjugation ##\varphi(w)(s)=wsw^{-1}##.

Beside the unfortunate parenthesis in post #162, which meant that only ##z## is affected, I don't see a problem.

The only failure was that I should have added ##s(c\cdot z) = c s(z)## for ##s\in \langle \mathcal{I} \rangle##, and that ##w## is a left multiplication by a constant factor for all ##w\in \langle \mathcal{J} \rangle##.
 
  • #169
julian said:
I'm confused. In post #162 you basically told me that I should not interpret ##r (u (z))## as meaning

##
r (u (z)) = r \left( \frac{1}{2} (-1 + i \sqrt{3}) z \right) = \left( \frac{1}{2} (-1 + i \sqrt{3}) z \right)^{-1} = - \frac{1}{2} (1 + i \sqrt{3}) z^{-1}
##

but should interpret as this instead:

##
r (u (z)) = r \left( \frac{1}{2} (-1 + i \sqrt{3}) z \right) = \frac{1}{2} (-1 + i \sqrt{3}) z^{-1} = \frac{1}{2} (-1 + i \sqrt{3}) r (z) .
##

The point I'm making is that in post #162 you told me not to interpret ##r ( u (z))## as usual function composition! Well, then how am I to interpret ##a (b (z))## in general?
 
  • #170
julian said:
The point I'm making is that in post #162 you told me not to interpret ##r ( u (z))## as usual function composition! Well, then how am I to interpret ##a (b (z))## in general?
Well, it can be defined as a function, only that ##s(cz^\varepsilon):=cs(z^\varepsilon)## for ##s\in \langle \mathcal{I}\rangle## and ##\varepsilon =\pm 1##, simply because ##s(1)##, resp. ##s(c)##, isn't defined, and I didn't say that it should be extended to ##\mathbb{C}##; so I only failed to say how else it has to be defined, if not as such an extension. The formulation ##u := L_{\frac{1}{2}(-1+i\sqrt{3})}## and ##v:=L_{-\frac{1}{2}(1+i\sqrt{3})}##, where ##L_c## denotes left multiplication by ##c##, would have made it clear what to do with ##\langle \mathcal{J} \rangle##.
I was so focused on the group that I forgot to drop a few words on the functions.
 
  • #171
I am able to form a twelve-element group via ordinary function composition (maybe not the group you had in mind). And then able to prove that ##J## is a normal subgroup of this group. Not sure how anything is wrong. Could you have a look.

Determining the groups:

The group ##I = \langle \mathcal{I} \rangle##:

From

\begin{align*}
(q \circ q) (z) & = -q (-z) = z = p (z) \\
(q \circ r) (z) & = q (z^{-1}) = - z^{-1} = s (z) = (r \circ q) (z) \\
(q \circ s) (z) & = q (-z^{-1}) = z^{-1} = r (z) = (s \circ q) (z) \\
(r \circ r) (z) & = r (z^{-1}) = z = p (z) \\
(s \circ s) (z) & = s (- z^{-1}) = z = p (z) \\
\end{align*}

we see that we have closure. We obviously have inverses for every element, as they are involutions.

Associativity:

Associativity, and this will be true generally here, follows from

\begin{align*}
[(a \circ b) \circ c] (z) & = (a \circ b) \big ( c (z) \big) \\
& = a \big( b \big( c (z) \big) \big) \\
& = a \big( (b \circ c) (z) \big) \\
& = [a \circ (b \circ c)] (z) \\
\end{align*}

where ##a##, ##b##, and ##c## are any of the maps that we will encounter in the question.

Therefore we have all the properties of a group. The group table for ##I = \langle \mathcal{I} \rangle## is:

\begin{array}{c|c|c|c|c}
& p & q & r & s \\
\hline p & p & q & r & s \\
\hline q & q & p & s & r \\
\hline r & r & s & p & q \\
\hline s & s & r & q & p \\
\end{array}
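As a quick numerical sanity check (a sketch, not part of the argument; the helper `name_of` and the sample points are my own choices), one can represent ##p, q, r, s## as Python functions on the punctured complex plane and reproduce the Klein four-group table above:

```python
# the four maps of I as functions on the punctured complex plane
p = lambda z: z
q = lambda z: -z
r = lambda z: 1 / z
s = lambda z: -1 / z

maps = {'p': p, 'q': q, 'r': r, 's': s}
samples = [0.7 + 0.3j, -1.2 + 2.1j]  # generic nonzero test points

def name_of(f):
    """Identify a composite map by comparing values at the sample points."""
    return next(n for n, g in maps.items()
                if all(abs(f(z) - g(z)) < 1e-12 for z in samples))

# reproduce the group table: entry (a, b) is the name of a composed with b
table = {(a, b): name_of(lambda z, f=maps[a], g=maps[b]: f(g(z)))
         for a in maps for b in maps}

print(table[('q', 'r')])                         # 's'
print(all(table[(a, a)] == 'p' for a in maps))   # True: all involutions
```

Comparing values at two generic points suffices here, since two distinct maps of the form ##cz## or ##c/z## cannot agree at both.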

The group ##J = \langle \mathcal{J} \rangle##:

Consider the compositions:

\begin{align*}
v \circ u & = v \big( {1 \over 2} (-1 + i \sqrt{3}) z \big) = - {1 \over 2} (1 + i \sqrt{3}) \cdot {1 \over 2} (-1 + i \sqrt{3}) \cdot z = z = 1 (z) \\
u \circ v & = u \big( - {1 \over 2} (1 + i \sqrt{3}) z \big) = {1 \over 2} (-1 + i \sqrt{3}) \cdot \big( - {1 \over 2} \big) (1 + i \sqrt{3}) \cdot z = z = 1 (z) \\
u \circ u & = u \big( {1 \over 2} (-1 + i \sqrt{3}) z \big) = {1 \over 2} (-1 + i \sqrt{3}) \cdot {1 \over 2} (-1 + i \sqrt{3}) \cdot z = - {1 \over 2} (1 + i \sqrt{3}) z = v (z) \\
v \circ v & = v \big( - {1 \over 2} (1 + i \sqrt{3}) z \big) = \big( - {1 \over 2} \big) (1 + i \sqrt{3}) \cdot \big( - {1 \over 2} \big) (1 + i \sqrt{3}) \cdot z = {1 \over 2} (-1 + i \sqrt{3}) z = u (z)
\end{align*}

We have closure: the only maps arising from consecutive applications of the operations are

\begin{align*}
1 & = u \circ v = v \circ u \\
u & = v \circ v \\
v & = u \circ u \\
\end{align*}

which includes the identity map.

Inverses:

We have that ##u## is the inverse of ##v##, and ##v## is the inverse of ##u##.

Associativity:

Associativity has already been established; it follows from the nature of composition.

The group table for ##J = \langle \mathcal{J} \rangle## is:

\begin{array}{c|c|c|c}
& 1 & u & v \\
\hline 1 & 1 & u & v \\
\hline u & u & v & 1 \\
\hline v & v & 1 & u \\
\end{array}

The group ##F = \langle \mathcal{F} \rangle##:

The set of functions we obtain if we combine any of the functions in ##\mathcal{I}## and ##\mathcal{J}## by consecutive applications is denoted by ##\mathcal{F}=\langle\mathcal{I},\mathcal{J} \rangle##. The group associated with ##\mathcal{F}## is denoted ##F = \langle \mathcal{F} \rangle##.

In order to find ##\mathcal{F}## first consider the composite operations:

\begin{align*}
(r \circ u) (z) & = [u (z)]^{-1} = {2 \over (-1 + i \sqrt{3}) z} = - {1 \over 2} (1 + i \sqrt{3}) {1 \over z} \\
(r \circ v) (z) & = [v (z)]^{-1} = - {2 \over (1 + i \sqrt{3}) z} = {1 \over 2} (-1 + i \sqrt{3}) {1 \over z} \\
(s \circ u) (z) & = - [u (z)]^{-1} = - {2 \over (-1 + i \sqrt{3}) z} = {1 \over 2} (1 + i \sqrt{3}) {1 \over z} \\
(s \circ v) (z) & = - [v (z)]^{-1} = {2 \over (1 + i \sqrt{3}) z} = - {1 \over 2} (-1 + i \sqrt{3}) {1 \over z} \\
\end{align*}

From the functions in ##\mathcal{I}## and ##\mathcal{J}## and the functions

\begin{align*}
(q \circ u) (z) & = - {1 \over 2} (-1 + i \sqrt{3}) z \quad \text{map denoted } qu \\
(q \circ v) (z) & = {1 \over 2} (1 + i \sqrt{3}) z \quad \text{map denoted } qv \\
(r \circ u) (z) & = - {1 \over 2} (1 + i \sqrt{3}) {1 \over z} \quad \text{map denoted } ru \\
(s \circ u) (z) & = {1 \over 2} (1 + i \sqrt{3}) {1 \over z} \quad \text{map denoted } su \\
(r \circ v) (z) & = {1 \over 2} (-1 + i \sqrt{3}) {1 \over z} \quad \text{map denoted } rv \\
(s \circ v) (z) & = - {1 \over 2} (-1 + i \sqrt{3}) {1 \over z} \quad \text{map denoted } sv \\
\end{align*}

one can verify the following "multiplication" table:

\begin{array}{c|c|c|c|c|c|c|c|c|c|c|c|c}
& 1 & q & r & s & u & v & qu & qv & ru & rv & su & sv \\
\hline 1 & 1 & q & r & s & u & v & qu & qv & ru & rv & su & sv \\
\hline q & q & 1 & s & r & qu & qv & u & v & su & sv & ru & rv \\
\hline r & r & s & 1 & q & ru & rv & su & sv & u & v & qu & qv \\
\hline s & s & r & q & 1 & su & sv & ru & rv & qu & qv & u & v \\
\hline u & u & qu & rv & sv & v & 1 & qv & q & r & ru & s & su \\
\hline v & v & qv & ru & su & 1 & u & q & qu & rv & r & sv & s \\
\hline qu & qu & u & sv & rv & qv & q & v & 1 & s & su & r & ru \\
\hline qv & qv & v & su & ru & q & qu & 1 & u & sv & s & rv & r \\
\hline ru & ru & su & v & qv & rv & r & sv & s & 1 & u & q & qu \\
\hline rv & rv & sv & u & qu & r & ru & s & su & v & 1 & qv & q \\
\hline su & su & ru & qv & v & sv & s & rv & r & q & qu & 1 & u \\
\hline sv & sv & rv & qu & u & s & su & r & ru & qv & q & v & 1 \\
\end{array}

where "multiplication" is composition of functions. This is a group multiplication table: we have closure under composition, and every element has an inverse (each row contains ##1##). As we have closure under composition we have

\begin{align*}
\mathcal{F} & = \big\{ z\stackrel{p}{\mapsto} z\; , \; z\stackrel{q}{\mapsto} -z\; , \;z\stackrel{r}{\mapsto} z^{-1}\; , \;z\stackrel{s}{\mapsto}-z^{-1} , \\
& \qquad z\stackrel{u}{\longmapsto}\frac{1}{2}(-1+i \sqrt{3})z\; , \;z\stackrel{v}{\longmapsto}-\frac{1}{2}(1+i \sqrt{3})z, \\
& \qquad z\stackrel{qu}{\longmapsto}- \frac{1}{2} (-1 + i \sqrt{3}) z \; , \; z\stackrel{qv}{\longmapsto} \frac{1}{2} (1 + i \sqrt{3}) z, \\
& \qquad z\stackrel{ru}{\longmapsto}- \frac{1}{2} (1 + i \sqrt{3}) \frac{1}{z} \; , \; z\stackrel{su}{\longmapsto} \frac{1}{2} (1 + i \sqrt{3}) \frac{1}{z} \\
& \qquad z\stackrel{rv}{\longmapsto} \frac{1}{2} (-1 + i \sqrt{3}) \frac{1}{z} \; , \; z\stackrel{sv}{\longmapsto}- \frac{1}{2} (-1 + i \sqrt{3}) \frac{1}{z}
\big\}
\end{align*}

and the above group multiplication table is for the group denoted by ##F##.

Verifying that ##I = \langle \mathcal{I} \rangle## is not a normal subgroup of ##F = \langle \mathcal{F} \rangle##:

Definition: Let ##F## be a group. A subgroup ##N## of ##F## is normal if ##a^{-1} n a \in N## for every ##n \in N## and every ##a \in F##.

It appears that ##I## is not a normal subgroup. We verify that ##g^{-1} i g \not\in I## for some ##i \in I## and some ##g \in F##. When ##g \in I## we obviously have ##g^{-1} i g \in I## as ##I## is itself a group. We check the other cases:

\begin{align*}
u^{-1} \circ q \circ u & = v \circ qu = q \\
v^{-1} \circ q \circ v & = u \circ qv = q \\
u^{-1} \circ r \circ u & = v \circ ru = rv \; \leftarrow \\
v^{-1} \circ r \circ v & = u \circ rv = ru \; \leftarrow \\
u^{-1} \circ s \circ u & = v \circ su = sv \; \leftarrow \\
v^{-1} \circ s \circ v & = u \circ sv = su \; \leftarrow \\
\end{align*}

and

\begin{align*}
qu^{-1} \circ q \circ qu & = qv \circ q \circ qu = q \\
qv^{-1} \circ q \circ qv & = qu \circ q \circ qv = q \\
qu^{-1} \circ r \circ qu & = qv \circ r \circ qu = rv \; \leftarrow \\
qv^{-1} \circ r \circ qv & = qu \circ r \circ qv = ru \; \leftarrow \\
qu^{-1} \circ s \circ qu & = qv \circ s \circ qu = sv \; \leftarrow \\
qv^{-1} \circ s \circ qv & = qu \circ s \circ qv = su \; \leftarrow \\
\end{align*}

and

\begin{align*}
ru^{-1} \circ q \circ ru & = ru \circ q \circ ru = q \\
rv^{-1} \circ q \circ rv & = rv \circ q \circ rv = q \\
ru^{-1} \circ r \circ ru & = ru \circ r \circ ru = rv \; \leftarrow \\
rv^{-1} \circ r \circ rv & = rv \circ r \circ rv = ru \; \leftarrow \\
ru^{-1} \circ s \circ ru & = ru \circ s \circ ru = sv \; \leftarrow \\
rv^{-1} \circ s \circ rv & = rv \circ s \circ rv = su \; \leftarrow \\
\end{align*}

and

\begin{align*}
su^{-1} \circ q \circ su & = su \circ q \circ su = q \\
sv^{-1} \circ q \circ sv & = sv \circ q \circ sv = q \\
su^{-1} \circ r \circ su & = su \circ r \circ su = rv \; \leftarrow \\
sv^{-1} \circ r \circ sv & = sv \circ r \circ sv = ru \; \leftarrow \\
su^{-1} \circ s \circ su & = su \circ s \circ su = sv \; \leftarrow \\
sv^{-1} \circ s \circ sv & = sv \circ s \circ sv = su \; \leftarrow \\
\end{align*}

We have thus verified that ##I## is not a normal subgroup. As such ##F / I## does not form a group, as multiplication of cosets cannot be well defined - this will be explained in detail below.
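These conjugation computations (and the corresponding ones for ##J## below) can also be confirmed mechanically. Here is a sketch in Python; the dictionary keys follow the multiplication table above, while the helper names and sample points are my own choices:

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)  # w = (1/2)(-1 + i*sqrt(3)), the factor in u

# the twelve maps of F, keyed by the names used in the table above
maps = {
    '1':  lambda z: z,            'q':  lambda z: -z,
    'r':  lambda z: 1 / z,        's':  lambda z: -1 / z,
    'u':  lambda z: w * z,        'v':  lambda z: w * w * z,
    'qu': lambda z: -w * z,       'qv': lambda z: -w * w * z,
    'ru': lambda z: w * w / z,    'rv': lambda z: w / z,
    'su': lambda z: -w * w / z,   'sv': lambda z: -w / z,
}

samples = [0.7 + 0.3j, -1.2 + 2.1j]  # generic nonzero test points

def name_of(f):
    """Identify a map by its values at the sample points."""
    for n, g in maps.items():
        if all(abs(f(z) - g(z)) < 1e-9 for z in samples):
            return n
    return None  # not one of the twelve

def conj(g, n):
    """Name of g^{-1} composed with n composed with g."""
    g_inv = next(a for a in maps
                 if name_of(lambda z, f=maps[a]: f(maps[g](z))) == '1')
    return name_of(lambda z: maps[g_inv](maps[n](maps[g](z))))

# closure: every composition of two of the twelve maps is again one of them
closed = all(name_of(lambda z, f=maps[a], g=maps[b]: f(g(z))) is not None
             for a in maps for b in maps)

I = {'1', 'q', 'r', 's'}
J = {'1', 'u', 'v'}
print(closed)                                          # True
print(all(conj(g, n) in I for g in maps for n in I))   # False: I not normal
print(all(conj(g, n) in J for g in maps for n in J))   # True:  J normal
print(conj('u', 'r'))                                  # 'rv', the witness above
```

This brute-force check agrees with the hand computations: conjugating ##r## by ##u## lands on ##rv \not\in I##, while every conjugate of ##u## or ##v## stays in ##J##.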

Verifying that ##J = \langle \mathcal{J} \rangle## is a normal subgroup of ##F = \langle \mathcal{F} \rangle##:

Next we establish that ##J## is a normal subgroup. We verify that ##g^{-1} j g \in J## for every ##j \in J## and every ##g \in F##. This obviously holds for ##g \in J##. We check the other cases

\begin{align*}
r^{-1} \circ u \circ r & = ru \circ r = v \\
q^{-1} \circ u \circ q & = qu \circ q = u \\
s^{-1} \circ u \circ s & = su \circ s = v \\
r^{-1} \circ v \circ r & = rv \circ r = u \\
q^{-1} \circ v \circ q & = qv \circ q = v \\
s^{-1} \circ v \circ s & = sv \circ s = u \\
\end{align*}

and

\begin{align*}
qu^{-1} \circ u \circ qu & = qv \circ u \circ qu = u \\
qv^{-1} \circ u \circ qv & = qu \circ u \circ qv = u \\
qu^{-1} \circ v \circ qu & = qv \circ v \circ qu = v \\
qv^{-1} \circ v \circ qv & = qu \circ v \circ qv = v \\
\end{align*}

and

\begin{align*}
ru^{-1} \circ u \circ ru & = ru \circ u \circ ru = v \\
rv^{-1} \circ u \circ rv & = rv \circ u \circ rv = v \\
ru^{-1} \circ v \circ ru & = ru \circ v \circ ru = u \\
rv^{-1} \circ v \circ rv & = rv \circ v \circ rv = u \\
\end{align*}

and

\begin{align*}
su^{-1} \circ u \circ su & = su \circ u \circ su = v \\
sv^{-1} \circ u \circ sv & = sv \circ u \circ sv = v \\
su^{-1} \circ v \circ su & = su \circ v \circ su = u \\
sv^{-1} \circ v \circ sv & = sv \circ v \circ sv = u \\
\end{align*}

We have thus verified that ##J## is a normal subgroup. As such ##F / J## forms a group (the quotient group) as I will explain in a moment. First we need a preliminary result:

Let ##F## be a group and let ##J## be a subgroup of ##F##. Then ##J## is a normal subgroup of ##F## if and only if ##a J = J a## for every ##a \in F##.

Proof:

Assume ##J## is a normal subgroup of ##F##. Let ##a \in F## (we will show that ##a J = J a##).

First we will prove that ##a J \subseteq J a##. Let ##x \in a J##. Then ##x = a j## for some ##j \in J##. By assumption ##a j a^{-1} = (a^{-1})^{-1} j a^{-1} \in J##, so ##x = a j = (a j a^{-1}) a \in J a##. Next we prove that ##J a \subseteq aJ##. Let ##x = j a## for some ##j \in J##. By assumption, ##a^{-1} j a \in J##, so ##x = j a = a (a^{-1} j a) \in a J##. Therefore ##a J = J a##, as desired.

Now assume that ##a J = J a## for every ##a \in F## (we will show that ##J## is a normal subgroup of ##F##). Let ##j \in J##, ##a \in F##. We have ##j a \in J a = a J##, so ## j a = a k## for some ##k \in J##. So ##a^{-1} j a = k \in J##. The desired result.

We need the notion of a left coset: the left coset containing ##a \in F## is the set ##a J = \{ a j : j \in J \}##. We give some basic properties of left cosets:

Since a row of a group multiplication table contains each element of ##F## exactly once, the elements in any left coset must all be different. Thus the number of elements in each left coset is equal to the number of elements in ##J##.

Either ##a J \cap b J = \emptyset## or ##a J = b J##; that is, two left cosets are either disjoint or equal. Suppose the cosets ##a J## and ##b J## overlap; then ##a j_1 = b j_2## for some ##j_1, j_2 \in J##. Therefore ##a = b j_2 j_1^{-1}##, and if ##j## is any element of ##J## then ##a j = b j_2 j_1^{-1} j = b j'## where ##j' = j_2 j_1^{-1} j##. Since ##J## is a subgroup, ##j'## is an element of ##J##. Therefore ##a J \subseteq b J##. Similarly we can show ##b J \subseteq a J##. Thus ##a J = b J##. In this way the set ##F## decomposes into mutually disjoint left cosets.

If ##a J = J## then ##a \in J##, since ##a = a \cdot 1 \in a J = J##.

We obviously have ##1 J = j J## for every ##j \in J##.
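These coset properties are easy to check by machine for our ##F## and ##J##. A sketch (the encoding is my own: since the constants appearing in the twelve maps are exactly the sixth roots of unity, the map ##z \mapsto \zeta^k z^{\varepsilon}## with ##\zeta = e^{i\pi/3}## is stored as the pair ##(k, \varepsilon)##):

```python
# F encoded as pairs (k, e) for the map z -> zeta^k * z**e, zeta = exp(i*pi/3)
F = [(k, e) for k in range(6) for e in (1, -1)]

def compose(a, b):
    """a composed with b: apply b first, then a."""
    (k1, e1), (k2, e2) = a, b
    return ((k1 + e1 * k2) % 6, e1 * e2)

# J = {1, u, v}: u and v multiply by the two primitive cube roots of unity
J = [(0, 1), (2, 1), (4, 1)]

left_cosets = {frozenset(compose(a, j) for j in J) for a in F}
right_cosets = {frozenset(compose(j, a) for j in J) for a in F}

print(len(left_cosets))                             # 4 distinct cosets
print(all(len(c) == len(J) for c in left_cosets))   # each of size |J|
print(sum(len(c) for c in left_cosets) == len(F))   # they partition F
print(left_cosets == right_cosets)                  # aJ = Ja: J is normal
```

The composition rule follows from ##\zeta^{k_1}(\zeta^{k_2} z^{\varepsilon_2})^{\varepsilon_1} = \zeta^{k_1 + \varepsilon_1 k_2} z^{\varepsilon_1 \varepsilon_2}##.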

The quotient group - general idea:

Here I explain why ##F / J## forms a group when ##J## is a normal subgroup of ##F##.

Let ##F## be a group and ##J## be a subgroup. Let ##F / J = \{ a J : a \in F \}## be the set of left cosets of ##J## in ##F##. The operation:

\begin{align*}
F / J \times F / J \rightarrow F / J \qquad \text{defined by} \qquad (a J) \cdot (b J) = (ab) J
\end{align*}

is well defined (that is, if ##a J = a' J## and ##b J = b' J## then ##ab J = a'b' J##) if and only if ##J## is a normal subgroup of ##F##.

Proof:

Now we return to proving that multiplication is well defined if and only if ##J## is normal. Suppose ##J## is normal. If ##a J = a' J## and ##b J = b' J## then

##
(ab) J = a (b J) = a (b' J) = a (J b') = (aJ) b' = (a' J) b' = a' (J b') = a' (b' J) = (a'b') J
##

so the operation is well defined and multiplication is closed.

Now suppose that the operation is well defined, so that whenever ##a J = a' J## and ##b J = b' J## we have ##ab J = a'b' J##. We want to show that ##a^{-1} j a \in J## for every ##j \in J## and every ##a \in F##. Let ##j \in J##. Since ##j J = 1 J## and the operation is well defined, ##(j J)(a J) = (1 J)(a J) = a J##, that is, ##j a J = a J##. Hence ##J = a^{-1} j a J##, so ##a^{-1} j a \in J##. This holds for any ##a \in F##, hence ##J## is a normal subgroup of ##F##.

One checks that this operation on ##F / J## is associative:

##
(aJ b J) c J = ab J c J = (ab) c J = a (bc) J = aJ bc J = a J (b J c J) .
##

It has identity element ##1 J = J##:

##
a J 1 J = a 1 J = a J = 1 J a J
##

and the inverse of an element ##a J## of ##F /J## is ##a^{-1} J##:

##
(a^{-1} J) (a J) = a^{-1} a J = 1 J \qquad \text{and} \qquad (a J) (a^{-1} J) = a a^{-1} J = 1 J
##
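The quotient construction can be demonstrated concretely for our ##F## and ##J##, using the hypothetical encoding of the map ##z \mapsto \zeta^k z^{\varepsilon}##, ##\zeta = e^{i\pi/3}##, as the pair ##(k, \varepsilon)## (my own choice, since the constants in the twelve maps are exactly the sixth roots of unity):

```python
# F encoded as pairs (k, e) for the map z -> zeta^k * z**e, zeta = exp(i*pi/3)
F = [(k, e) for k in range(6) for e in (1, -1)]
J = [(0, 1), (2, 1), (4, 1)]  # the normal subgroup {1, u, v}

def compose(a, b):
    """a composed with b: apply b first, then a."""
    (k1, e1), (k2, e2) = a, b
    return ((k1 + e1 * k2) % 6, e1 * e2)

def coset(a):
    return frozenset(compose(a, j) for j in J)

# well-definedness: (aJ)(bJ) = (ab)J does not depend on the representatives
well_defined = all(coset(compose(a2, b2)) == coset(compose(a, b))
                   for a in F for b in F
                   for a2 in coset(a) for b2 in coset(b))
print(well_defined)                                           # True

# F/J has 4 elements and every coset squares to the identity coset J,
# so the quotient group is the Klein four-group V_4
quotient = {coset(a) for a in F}
print(len(quotient))                                          # 4
print(all(coset(compose(a, a)) == coset((0, 1)) for a in F))  # True
```

In particular the quotient ##F/J## here is isomorphic to ##V_4##, consistent with ##F \cong D_6## and ##J \cong \mathbb{Z}_3## discussed below.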
 
Last edited:
  • #172
julian said:
I am able to form a twelve-element group via ordinary function composition (maybe not the group you had in mind). And then able to prove that ##J## is a normal subgroup of this group. Not sure how anything is wrong. Could you have a look.
I am very sorry, Julian, but you are completely right!

My mistake was that I thought I had defined ##A_4= \mathbb{Z}_3 \ltimes V_4##, but this is wrong.

We have indeed ##\mathcal{F}= D_6 \cong D_3 \times \mathbb{Z}_2 \cong \mathbb{Z}_3 \rtimes V_4## and the representation in terms of our functions is ##D_6 = \langle \, uq\, , \,r\,|\,(uq)^6=r^2=1\, , \,r(uq)r=(uq)^{-1}=qv\, \rangle## where ##u=(uq)^4\, , \,v=(uq)^2\, , \,q=(uq)^3\, , \,s=(uq)^3r\,.##

P.S.: It took me quite a while to find an element of order ##6##. I thought there was none.
 
Last edited:
  • #173
I'm right?! Cool beans.
 
  • Like
Likes fresh_42
  • #174
I crunched that group with a Mathematica group-theory notebook, and its commutator series has quotient groups {Z2*Z2, Z3}. I'd expected A4, with {Z3, Z2*Z2}, but this looks like Dih(6) = Z2*Dih(3). So I decided to generalize it.
$$ F = \{z \to \omega^k z \text{ and } z \to \omega^k (1/z) \text{ for } k = 0 \dots n-1\} $$
where ##\omega## is a primitive ##n##-th root of unity. The generators of the group are
$$ a = (z \to \omega z) \text{ and } b = (z \to (1/z)) $$
Element ##a## has order ##n## and ##b## has order ##2##, with ##ab = ba^{n-1}##. Thus this group is Dih(n), the dihedral group of order ##2n##, and I get the result that I had earlier conjectured.
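These dihedral relations can be verified numerically for a range of ##n## (a sketch; the helper names and sample points are my own choices):

```python
import cmath

def dihedral_relations_hold(n):
    """Check a^n = 1, b^2 = 1 and a*b = b*a^(n-1) for the maps
    a: z -> w*z and b: z -> 1/z, with w a primitive n-th root of unity."""
    w = cmath.exp(2j * cmath.pi / n)
    a = lambda z: w * z
    b = lambda z: 1 / z

    def iterate(f, k, z):
        """Apply f to z exactly k times."""
        for _ in range(k):
            z = f(z)
        return z

    samples = [0.7 + 0.3j, -1.2 + 2.1j]  # generic nonzero test points
    ok = all(abs(iterate(a, n, z) - z) < 1e-9 for z in samples)   # a^n = 1
    ok = ok and all(abs(b(b(z)) - z) < 1e-9 for z in samples)     # b^2 = 1
    # the dihedral relation a * b = b * a^(n-1)
    ok = ok and all(abs(a(b(z)) - b(iterate(a, n - 1, z))) < 1e-9
                    for z in samples)
    return ok

print(all(dihedral_relations_hold(n) for n in range(2, 13)))  # True
```

The relation holds because ##a(b(z)) = \omega/z## while ##b(a^{n-1}(z)) = 1/(\omega^{n-1} z) = \omega^{1-n}/z = \omega/z##, using ##\omega^n = 1##.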
 
  • Like
Likes fresh_42
  • #175
fresh_42 said:
24. Solve and describe the solution step by step in quadrature a Lagrangian differential equation with Lagrangian
$$L(t,x,\dot x)=\frac{1}{2}\dot x^2-\frac{t}{x^4}.$$

What do you mean by quadrature? I can find several potential meanings that could be relevant. For example, it could mean doing an integral. It could mean a particular method of numerical solution. It could mean something about fitting things to a rectangle. It could mean something about a quadratic equation.
 
