Math Challenge - November 2021

In summary, we discussed various mathematical topics including analysis, projective geometry, ##C^*##-algebras, group theory, Markov processes, manifolds, topology, Galois theory, linear algebra, and commutative algebra. We also solved various problems, including the continuity of inverse functions, the existence of a straight line intersecting two skew lines, positive functionals in ##C^*##-algebras, and irreducibility and aperiodicity of Markov chains. We also proved properties of normable topological vector spaces, orientability of manifolds, and isomorphisms between vector spaces. Lastly, we solved equations and inequalities involving real numbers, sequences, and functions.
  • #1
fresh_42
Summary: Analysis. Projective Geometry. ##C^*##-algebras. Group Theory. Markov Processes. Manifolds. Topology. Galois Theory. Linear Algebra. Commutative Algebra.

1.a. (solved by @nuuskur ) Let ##C\subseteq \mathbb{R}^n## be compact and ##f\, : \,C\longrightarrow \mathbb{R}^n## continuous and injective. Show that the inverse ##g=f^{-1}\, : \,f(C)\longrightarrow \mathbb{R}^n## is continuous.

1.b. (solved by @nuuskur ) Let ##S:=\{x+tv\,|\,t\in (0,1)\}## with ##x,v\in \mathbb{R}^n,## and ##f\in C^0(\mathbb{R}^n)## differentiable for all ##y\in S.## Show that there is a ##z\in S## such that
$$
f(x+v)-f(x)=\nabla f(z)\cdot v\,.
$$
1.c. (solved by @MathematicalPhysicist ) Let ##\gamma \, : \,[0,\pi]\longrightarrow \mathbb{R}^3## be given as
$$
\gamma(t):=\begin{pmatrix}
\cos(t)\sin(t)\\ \sin^2(t)\\ \cos(t)
\end{pmatrix}\, , \,t\in [0,\pi].
$$
Show that the length ##L(\gamma )>\pi.##

2. (solved by @mathwonk ) Let ##g,h## be two skew lines in a three-dimensional projective space ##\mathcal{P}=\mathcal{P}(V)##, and ##P## a point that is neither on ##g## nor on ##h##. Prove that there is exactly one straight line through ##P## that intersects ##g## and ##h.##

3. (solved by @QuantumSpace ) Let ##(\mathcal{A},e)## be a unital ##C^*##-algebra. A self-adjoint element ##a\in \mathcal{A}## is called positive if its spectrum satisfies
$$
\sigma(a) :=\{\lambda \in \mathbb{C}\,|\,a-\lambda e \text{ is not invertible }\}\subseteq \mathbb{R}^+:=[0,\infty).
$$
The set of all positive elements is written ##\mathcal{A}_+\,.## A linear functional ##f\, : \,\mathcal{A}\longrightarrow \mathbb{C}## is called positive, if ##f(a)\in \mathbb{R}^+## for all positive ##a\in \mathcal{A}_+\,.##

Prove that a positive functional is continuous.

4. Prove that the following groups ##F_1,F_2## are free groups:

4.a. (solved by @nuuskur ) Consider the functions ##\alpha ,\beta ## on ##\mathbb{C}\cup \{\infty \}## defined by the rules
$$
\alpha(x)=x+2 \text{ and }\beta(x)=\dfrac{x}{2x+1}.
$$
The symbol ##\infty ## is subject to such formal rules as ##1/0=\infty ## and ##\infty /\infty =1.## Then ##\alpha ,\beta ## are bijections with inverses
$$
\alpha^{-1}(x)=x-2\text{ and }\beta^{-1}(x)=\dfrac{x}{1-2x}.
$$
Thus ##\alpha ## and ##\beta ## generate a group of permutations ##F_1## of ##\mathbb{C}\cup \{\infty \}.##

4.b. (solved by @martinbn and @mathwonk ) Define the group ##F_2:=\langle A,B \rangle ## with
$$
A:=\begin{bmatrix}1&2\\0&1 \end{bmatrix} \text{ and }
B:=\begin{bmatrix}1&0\\2&1 \end{bmatrix}
$$

5. We model the moves of a chess piece on a chessboard as a time-homogeneous Markov chain with the ##64## squares as state space and the position of the piece at a given (discrete) point in time as a state. The transition matrix is given by the assumption that each possible next state is equally probable. Determine whether these Markov chains ##M(\text{piece})## are irreducible and aperiodic for (a) king, (b) bishop, (c) pawn, and (d) knight.

6. Prove that an ##n##-dimensional manifold ##X## is orientable if and only if
(a) there is an atlas for which all chart changes respect orientation, i.e. have a positive functional determinant,
(b) there is a continuous ##n##-form which vanishes nowhere on ##X.##

7. (solved by @nuuskur ) A topological vector space ##E## over ##\mathbb{K}\in \{\mathbb{R},\mathbb{C}\}## is normable if and only if it is Hausdorff and possesses a bounded convex neighborhood of ##\vec{0}.##

8.a. (solved by @kmitza ) Determine the minimal polynomial of ##\pi + e\cdot i## over the reals.

8.b. (solved by @jbstemp ) Show that ##\mathbb{F}:=\mathbb{F}_7[T]/(T^3-2)## is a field, calculate the number of its elements, and determine ##(T^2+2T+4)\cdot (2T^2+5),## and ##(T+1)^{-1}.##

8.c. (solved by @mathwonk ) Consider ##P(X):=X^{7129}+105X^{103}+15X+45\in \mathbb{F}[X]## and determine whether it is irreducible in case
$$
\mathbb{F} \in \{\mathbb{Q},\mathbb{R},\mathbb{F}_2,\mathbb{Q}[T]/(T^{7129}+105T^{103}+15T+45)\}
$$
8.d. (solved by @mathwonk ) Determine the matrix of the Frobenius endomorphism in ##\mathbb{F}_{25}## for a suitable basis.

9. (solved by @mathwonk ) Let ##V## and ##W## be finite-dimensional vector spaces over the field ##\mathbb{F}## and ##f\, : \,V\otimes_\mathbb{F}W\longrightarrow \mathbb{F}## a linear mapping such that
\begin{align*}
\forall \,v\in V-\{0\}\quad \exists \,w\in W\, &: \,f(v\otimes w)\neq 0\\
\forall \,w\in W-\{0\}\quad \exists \,v\in V\, &: \,f(v\otimes w)\neq 0
\end{align*}
Show that ##V\cong_\mathbb{F} W.##

10. (solved by @mathwonk ) Let ##R:=\mathbb{C}[X,Y]/(Y^2-X^2)##. Describe ##V_\mathbb{R}(Y^2-X^2)\subseteq \mathbb{R}^2,## determine whether ##\operatorname{Spec}(R)## is finite, calculate the Krull dimension of ##R,## and determine whether ##R## is Artinian.



High Schoolers only
11. Let ##a\not\in\{-1,0,1\}## be a real number. Solve
$$
\dfrac{(x^4+1)(x^4+6x^2+1)}{x^2(x^2-1)^2}=\dfrac{(a^4+1)(a^4+6a^2+1)}{a^2(a^2-1)^2}\,.
$$

12. Define a sequence ##a_1,a_2,\ldots,a_n,\ldots ## of real numbers by
$$
a_1=1\, , \,a_{n+1}=2a_n+\sqrt{3a_n^2+1}\quad(n\in \mathbb{N})\,.
$$
Determine all sequence elements that are integers.

13. For ##n\in \mathbb{N}## define
$$
f(n):=\sum_{k=1}^{n^2}\dfrac{n-\left[\sqrt{k-1}\right]}{\sqrt{k}+\sqrt{k-1}}\,.
$$
Determine a closed form for ##f(n)## without summation. The bracket means: ##[x]=m\in \mathbb{Z}## if ##m\leq x <m+1.##

14. Solve over the real numbers
\begin{align*}
&(1)\quad\quad x^4+x^2-2x&\geq 0\\
&(2)\quad\quad 2x^3+x-1&<0\\
&(3)\quad\quad x^3-x&>0
\end{align*}

15. Let ##f(x):=x^4-(x+1)^4-(x+2)^4+(x+3)^4.## Determine whether there is a smallest function value if ##f(x)## is defined ##(a)## for integers, and ##(b)## for real numbers. Which is it?
 
Last edited:
  • Like
Likes jbergman and berkeman
  • #2
My solution for exercise 3.

We don't need the assumption that ##\mathcal{A}## is unital, so we will not use this.

Note that every element ##a## in a ##C^*##-algebra ##\mathcal{A}## can be written as ##a = p_1 - p_2 + i(p_3-p_4)## where ##p_1, p_2,p_3, p_4## are positive elements with ##\|p_i\| \le \|a\|##.

Thus, it suffices to show that
$$\sup_{a \in \mathcal{A}_+, \|a\| \le 1} \|f(a)\| < \infty$$
in order to conclude that ##\|f\| < \infty##. Suppose to the contrary that this supremum equals ##\infty##. Then we find a sequence ##\{a_n\}_{n=1}^\infty## of positive elements in the unit ball of ##\mathcal{A}## with the property that ##\|f(a_n)\|\ge 4^n##. Define ##a:= \sum_{n=1}^\infty 2^{-n} a_n##, where the series converges in the norm topology because it is absolutely convergent. For all ##n \ge 1## we have ##a \ge 2^{-n} a_n## and thus, by positivity, ##f(a) \ge 2^{-n} f(a_n)##. Taking norms, we obtain
$$\|f(a)\| \ge 2^{-n}\|f(a_n)\| \ge 2^{-n} 4^n = 2^n$$
and letting ##n\to \infty## yields a contradiction. Hence, the claim follows.

Remark: the same proof works to show that any positive map between ##C^*##-algebras is bounded. That's why I denote the absolute value on ##\mathbb{C}## by ##\|\cdot\|## as well.
 
Last edited:
  • Like
Likes fresh_42
  • #3
My solution to 1.c:
##L(\gamma)=\int_0^\pi |\dot{\gamma}(t)|\,dt = \int_0^\pi\sqrt{\cos^2(2t)+\sin^2(2t)+\sin^2(t)}\,dt=\int_0^\pi \sqrt{1+\sin^2(t)}\,dt > \int_0^\pi 1\,dt = \pi,## where the inequality is strict because ##\sin^2(t)>0## on ##(0,\pi).##
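As a numerical sanity check (my own throwaway sketch, assuming only the Python standard library; the `simpson` helper is ad hoc, not from any package), the integral evaluates to roughly ##3.82##, safely above ##\pi##:

```python
import math

# Arc-length integrand from the computation above: |gamma'(t)| = sqrt(1 + sin^2 t).
def integrand(t):
    return math.sqrt(1.0 + math.sin(t) ** 2)

# Composite Simpson's rule on [a, b] with n (even) subintervals.
def simpson(f, a, b, n=1000):
    h = (b - a) / n
    odd = sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    even = sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

L = simpson(integrand, 0.0, math.pi)
assert L > math.pi  # L(gamma) is approximately 3.820
```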
 
  • #4
Let [itex]V:=V_\mathbb K[/itex] be a topological VS with topology [itex]\tau[/itex]. If [itex]V[/itex] is normable, then its closed unit ball is a bounded convex neighborhood of zero. The space [itex]V[/itex] is automatically Hausdorff, because there are balls of arbitrarily small radii.

Conversely, let [itex]C[/itex] be a convex bounded NH of zero. Then there exists a balanced NH of zero [itex]B[/itex] such that [itex]B\subseteq C[/itex]. Then [itex]A := \mathrm{Cl}(\mathrm{conv\, B})[/itex] is a closed, absolutely convex bounded NH of zero. Its Minkowski functional [itex]p_A[/itex] is therefore a seminorm. Check that [itex]p_A[/itex] is actually a norm. That is, [itex]p_A(x) = 0[/itex] implies [itex]x=0[/itex].

For every NH of zero [itex]W[/itex] we can choose [itex]t>0[/itex] such that [itex]tA \subseteq W[/itex] (because [itex]A[/itex] is bounded). This implies [itex]\{tA \mid t>0\}[/itex] is a NH basis of zero. By assumption [itex]V[/itex] is Hausdorff, thus [itex]\bigcap \{tA \mid t>0\} = \{0\}[/itex].

Suppose [itex]x\neq 0[/itex]. Then there exists [itex]t_0>0[/itex] such that [itex]x\notin t_0A[/itex], which means [itex]p_A(x) \neq 0[/itex]. Thus, [itex](V,p_A)[/itex] is a normed space with unit ball [itex]A[/itex] and since the [itex]tA[/itex] are a [itex]\tau[/itex]-NH basis of zero, the [itex]p_A[/itex]-induced topology coincides with [itex]\tau[/itex].
 
  • #5
nuuskur said:
Let [itex]V:=V_\mathbb K[/itex] be a topological VS with topology [itex]\tau[/itex]. If [itex]V[/itex] is normable, then its closed unit ball is a bounded convex neighborhood of zero. The space [itex]V[/itex] is automatically Hausdorff, because there are balls of arbitrarily small radii.

Conversely, let [itex]C[/itex] be a convex bounded NH of zero. Then there exists a balanced NH of zero [itex]B[/itex] such that [itex]B\subseteq C[/itex]. Then [itex]A := \mathrm{Cl}(\mathrm{conv\, B})[/itex] is a closed, absolutely convex bounded NH of zero. Its Minkowski functional [itex]p_A[/itex] is therefore a seminorm. Check that [itex]p_A[/itex] is actually a norm. That is, [itex]p_A(x) = 0[/itex] implies [itex]x=0[/itex].

For every NH of zero [itex]W[/itex] we can choose [itex]t>0[/itex] such that [itex]tA \subseteq W[/itex] (because [itex]A[/itex] is bounded). This implies [itex]\{tA \mid t>0\}[/itex] is a NH basis of zero. By assumption [itex]V[/itex] is Hausdorff, thus [itex]\bigcap \{tA \mid t>0\} = \{0\}[/itex].

Suppose [itex]x\neq 0[/itex]. Then there exists [itex]t_0>0[/itex] such that [itex]x\notin t_0A[/itex], which means [itex]p_A(x) \neq 0[/itex]. Thus, [itex](V,p_A)[/itex] is a normed space with unit ball [itex]A[/itex] and since the [itex]tA[/itex] are a [itex]\tau[/itex]-NH basis of zero, the [itex]p_A[/itex]-induced topology coincides with [itex]\tau[/itex].
Wow. You concentrated my proof from 48 to 8 lines! Is that already a compactification?

My more detailed solution will be published on 2/1/2022 in
https://www.physicsforums.com/threads/solution-manuals-for-the-math-challenges.977057/
or here in case someone wants to read it prior to that.
 
  • #6
Some ideas for 4a
Let [itex]F_1 = \langle\alpha,\beta \rangle [/itex]. By definition
[tex]
F_1 = \{\gamma _n\circ \gamma _{n-1} \circ\ldots\circ \gamma _1 \mid \gamma _i \in \{\alpha,\beta,\alpha^{-1},\beta ^{-1}\},\ n\in\mathbb N\}
[/tex]
Since [itex]\alpha,\beta,\alpha^{-1},\beta^{-1}[/itex] are maps, in principle it could happen that two different reduced compositions are still the same. So we need to check that this does not happen. None of the maps is idempotent (because an idempotent bijection must be the identity) and the different types of maps do not commute. E.g. one can work out [itex]\alpha\beta (x) = \frac{5x+2}{2x+1}[/itex] and [itex]\beta\alpha (x) = \frac{x+2}{2x+5}[/itex]. Similarly, [itex]\alpha\beta ^{-1} \neq \beta ^{-1}\alpha[/itex] and [itex]\alpha^{-1}\beta \neq \beta\alpha^{-1}[/itex]. Also, [itex]\alpha\alpha \neq \beta\beta[/itex].

Is the following true? If a composition is non-empty and reduced, then it is not the identity. Suppose this is true for all reduced compositions of length [itex]n\geqslant 2[/itex]. Let [itex]\omega := \gamma _{n+1}\gamma _{n}\ldots \gamma _1\in F_1[/itex] be a reduced composition of length [itex]n+1[/itex]. Denote [itex]\sigma := \gamma _n\ldots\gamma _1[/itex]. Suppose [itex]\gamma _{n+1}\sigma[/itex] is the identity. Then [itex]\sigma ^{-1} = \gamma _{n+1}[/itex], but then [itex]\omega[/itex] reduces to the identity, a contradiction... ?!?

Something feels wrong, though. There are matrices, for instance, that satisfy ##A^n = E## for some (possibly large) ##n##. This even follows from Cayley-Hamilton: take any matrix with characteristic equation ##x^n - 1 = 0##; then ##A^n - E = 0##.

There must be something specific about these ##\alpha,\beta## ..
 
  • #7
True in very general circumstances. Let [itex]f:X\to Y[/itex] be continuous and suppose [itex]X[/itex] is compact. Then the image is also compact. Suppose [itex]f(X) \subseteq \bigcup V_i[/itex], where the [itex]V_i[/itex] are an open cover. Then continuity implies [itex]X\subseteq \bigcup f^{-1}(V_i)[/itex], where [itex]f^{-1}(V_i)[/itex] are open. By compactness there must be a finite subcover, so [itex]f(X)[/itex] also has a finite subcover.

Further, if [itex]X[/itex] is compact and [itex]A\subseteq X[/itex] is closed, then taking an open cover [itex]A\subseteq \bigcup U_i[/itex] we have an open cover [itex]X \subseteq (X\setminus A) \cup \bigcup U_i [/itex]. So [itex]A[/itex] has a finite subcover (discard [itex]X\setminus A[/itex] if it appears) and is therefore compact.

Thirdly, in Hausdorff spaces compact implies closed. Let [itex]X[/itex] be Hausdorff and [itex]A\subseteq X[/itex] compact. Suffices to show [itex]X\setminus A[/itex] is open. Take [itex]b\in X\setminus A[/itex], then for every [itex]a\in A[/itex], one can pick disjoint open sets [itex]U_a, V_a[/itex] such that [itex]a\in U_a[/itex] and [itex]b\in V_a[/itex]. We have an open cover [itex]A\subseteq \bigcup \{U_a \mid a\in A\}[/itex]. Suppose [itex]A\subseteq \bigcup \{U_a \mid a\in F\}[/itex] for some finite subset [itex]F\subseteq A[/itex]. Then [itex]b\in\bigcap _{a\in F} V_a \subseteq X\setminus A[/itex]. So [itex]b\in \mathrm{Int\,}(X\setminus A)[/itex].

Now suppose [itex]f:X\to Y[/itex] is a continuous injection with [itex]X[/itex] compact. All we have to assume is that the spaces are Hausdorff. Then the closed subsets of [itex]X[/itex] are compact, so their images are compact, hence closed; thus [itex]f[/itex] is a closed map and [itex]f:X\to f(X)[/itex] is a homeomorphism.
 
Last edited:
  • #8
nuuskur said:
Some ideas for 4a
Let [itex]F_1 = \langle\alpha,\beta \rangle [/itex]. By definition
[tex]
F_1 = \{\gamma _n\circ \gamma _{n-1} \circ\ldots\circ \gamma _1 \mid \gamma _i \in \{\alpha,\beta,\alpha^{-1},\beta ^{-1}\},\ n\in\mathbb N\}
[/tex]
Since [itex]\alpha,\beta,\alpha^{-1},\beta^{-1}[/itex] are maps, in principle, it could happen that two different reduced compositions are still the same. So we need to check that this does not happen. None of the maps is idempotent (because an idempotent bijection must be the identity) and the different type of maps do not commute. E.g one can work out [itex]\alpha\beta (x) = \frac{5x+2}{2x+1}[/itex] and [itex]\beta\alpha (x) = \frac{x+2}{2x+5}[/itex]. Similarly, [itex]\alpha\beta ^{-1} \neq \beta ^{-1}\alpha[/itex] and [itex]\alpha^{-1}\beta \neq \beta\alpha^{-1}[/itex]. Also, [itex]\alpha\alpha \neq \beta\beta[/itex].

Is the following true? If a composition is non-empty and reduced, then it is not the identity. Suppose this is true for all reduced compositions of length [itex]n\geqslant 2[/itex]. Let [itex]\omega := \gamma _{n+1}\gamma _{n}\ldots \gamma _1\in F_1[/itex] be a reduced composition of length [itex]n+1[/itex]. Denote [itex]\sigma := \gamma _n\ldots\gamma _1[/itex]. Suppose [itex]\gamma _{n+1}\sigma[/itex] is the identity. Then [itex]\sigma ^{-1} = \gamma _{n+1}[/itex], but then [itex]\omega[/itex] reduces to the identity, a contradiction... ?!?

Something feels wrong, though. There are matrices, for instance, that satisfy ##A^n = E ## for some (possibly large) ##n##. This is true by Cayley Hamilton, even. Take any matrix with characteristic equation ##x^n -1 =0##, then ##A^n-E=0##.

There must be something specific about these ##\alpha,\beta## ..
Hint: Consider what the powers of ##\alpha ## and the powers of ##\beta ## do geometrically.
 
Last edited:
  • #9
Good lord, I have [itex]o(e^{-n})[/itex] understanding of geometry :oldgrumpy:
 
  • #10
nuuskur said:
Good lord, I have [itex]o(e^{-n})[/itex] understanding of geometry :oldgrumpy:
Ok, then use topology and what you know about the 1-sphere and the 2-disc.
 
  • #11
The crucial point is that all powers of ##\alpha ## map the interior of the unit circle to the exterior, and all powers of ##\beta ## map the exterior to the interior with ##0## removed. Now suppose there were a non-trivial reduced word that equals ##1##; showing that no such word exists is, by the universal property, sufficient for an isomorphism with the free group.
 
  • #12
Oh, I didn't think of that, that's a neat little trick
So, by definition
[tex]
F_1 = \{\gamma _n\gamma _{n-1} \ldots \gamma _1 \mid \gamma _i \in\{\alpha,\beta,\alpha^{-1},\beta^{-1}\},\ n\in\mathbb N\}
[/tex]
Call a composition reduced if it does not contain strings of type [itex]\ldots\gamma\gamma ^{-1}\ldots[/itex] and [itex]\ldots\gamma ^{-1}\gamma\ldots[/itex]. The goal is to show that non-empty reduced words are not the identity map.

Clearly, [itex]\alpha ^n(x) = x+2n[/itex]. Note that [itex]\beta ^2(x) = \frac{x}{2\cdot 2x+1}[/itex]. Suppose [itex]\beta ^n(x) = \frac{x}{2nx + 1}[/itex], then
[tex]
\beta ^n ( \beta (x)) = \frac{\beta (x)}{2n\beta (x)+1} = \frac{x}{2(n+1)x +1}
[/tex]
Note the following. Let [itex]0<|z|<1[/itex]. Immediately one has [itex]|\alpha ^n(z)| > 1[/itex]. Also
[tex]
\beta ^n(1/z) = \frac{1/z}{2n(1/z)+1} = \frac{1}{z+2n} = \frac{1}{\alpha ^n(z)},
[/tex]
which implies [itex]\beta ^n[/itex] maps the exterior of the unit ball into the unit ball, never attaining zero. Now it suffices to show that non-empty, non-constant reduced words don't map zero to zero. By a constant word I mean something like [itex]\beta^n[/itex]; we already know these are not the identity.

So a typical reduced word is of the form
[tex]
\left ( \beta ^{u_1} \right )^{n_1}\left (\alpha^{u_2}\right )^{n_2}\left (\beta^{u_3}\right )^{n_3}\ldots \left (\alpha^{u_k}\right )^{n_k}
[/tex]
where [itex]u_i \in \{-1,1\}[/itex]. The word may end with powers of [itex]\alpha^{\pm1}[/itex] or [itex]\beta^{\pm 1}[/itex], but we may assume the word begins with powers of [itex]\alpha ^{\pm 1}[/itex], because powers of [itex]\beta^{\pm 1}[/itex] map zero to zero. If we begin with powers of [itex]\alpha ^{\pm 1} [/itex] we land outside the unit ball, then powers of [itex]\beta ^{\pm 1}[/itex] do not map to zero and we just start oscillating.

So this means two reduced words are equal as maps if and only if they are the same word. Let [itex]F[/itex] be the free group with generators [itex]a,b[/itex]. Then [itex]a \mapsto \alpha[/itex] and [itex]b \mapsto \beta[/itex] extends to a well-defined homomorphism, which is then an isomorphism.
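This can also be spot-checked by machine. The sketch below (my own illustration, not part of the proof) uses the observation that ##\alpha## and ##\beta## are the Möbius actions of the matrices ##A,B## from 4.b, so a reduced word acts as the identity on ##\mathbb{C}\cup\{\infty\}## iff its matrix product is ##\pm I##:

```python
import random

# alpha(x) = x + 2 and beta(x) = x/(2x+1) are the Moebius transformations of
# A = [[1,2],[0,1]] and B = [[1,0],[2,1]], tying 4.a to 4.b.

def mul(M, N):
    # 2x2 integer matrix product.
    return [[M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]],
            [M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]]]

A  = [[1, 2], [0, 1]]
B  = [[1, 0], [2, 1]]
Ai = [[1, -2], [0, 1]]   # alpha^{-1}
Bi = [[1, 0], [-2, 1]]   # beta^{-1}
gens = {"a": A, "A": Ai, "b": B, "B": Bi}
inv_letter = {"a": "A", "A": "a", "b": "B", "B": "b"}

random.seed(0)
for _ in range(200):
    # Build a random reduced word: never follow a letter by its inverse.
    word = [random.choice("aAbB")]
    while len(word) < 12:
        c = random.choice("aAbB")
        if c != inv_letter[word[-1]]:
            word.append(c)
    M = [[1, 0], [0, 1]]
    for c in word:
        M = mul(M, gens[c])
    # A non-empty reduced word never acts trivially: its matrix is not +-I.
    assert M not in ([[1, 0], [0, 1]], [[-1, 0], [0, -1]])
```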
 
  • Like
Likes fresh_42
  • #13
Lagrange MVT, essentially.
It is well known that directional derivative along [itex]\mathbf{u}\neq 0[/itex] can be computed as
[tex]
\frac{\partial}{\partial \mathbf{u}} f(\mathbf{a}) = \langle \nabla f(\mathbf{a}), \overline{\mathbf{u}} \rangle
[/tex]
where [itex]\overline{\mathbf{u}}[/itex] is the unit vector in the direction of [itex]\mathbf{u}[/itex].

Let [itex]f:\mathbb R^n \to \mathbb R[/itex] be continuous and differentiable in [itex]S[/itex]. Define [itex]h:[0,1] \to S\cup \{x,x+v\}[/itex] by
[tex]
h(t) := x+tv.
[/tex]
Then [itex]fh[/itex] satisfies the assumptions of the MVT, so [itex](fh)'(t_0) = fh(1) - fh(0)[/itex] for some [itex]t_0\in (0,1)[/itex]. By the chain rule [itex](fh)'(t_0) = \langle \nabla f(h(t_0)), v\rangle[/itex]. Put [itex]h(t_0) = z[/itex]; then [itex]\langle \nabla f(z), v\rangle = f(x+v)-f(x)[/itex].
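For a concrete instance, here is a small numeric sketch of mine (the sample function ##f(x,y)=x^2+3y##, base point and direction are my own assumptions, not from the problem) that locates such a ##z## by bisection:

```python
# Sample data for this illustration: f(x, y) = x^2 + 3y, x = (0, 0), v = (1, 1).
def f(p):
    return p[0] ** 2 + 3 * p[1]

def grad_f(p):
    return (2 * p[0], 3.0)  # analytic gradient of the sample f

x, v = (0.0, 0.0), (1.0, 1.0)
target = f((x[0] + v[0], x[1] + v[1])) - f(x)  # f(x+v) - f(x)

# g(t) = (f h)'(t) = <grad f(x + t v), v>; here g(t) = 2t + 3 is increasing,
# so bisection finds t0 in (0, 1) with g(t0) = target.
def g(t):
    gx, gy = grad_f((x[0] + t * v[0], x[1] + t * v[1]))
    return gx * v[0] + gy * v[1]

lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if g(mid) < target:
        lo = mid
    else:
        hi = mid
z = (x[0] + lo * v[0], x[1] + lo * v[1])
assert abs(g(lo) - target) < 1e-9  # MVT point: z = (0.5, 0.5) for this sample
```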
 
  • #14
In reference to #2, probably none of us in the US had a course in elementary geometry, even though we may have seen topological vector spaces, advanced calculus, field extension theory, tensor products, and free groups! So this is a review of simple geometric facts about projective 3 space.

Recall that the points of a projective 3 space P^3 are the lines through the origin (one diml subspaces) of a 4 diml vector space V. A line in P^3 is a 2 diml subspace of V, and a plane in P^3 is a 3 diml subspace of V.
So this problem is also a problem in linear algebra, if you prefer. In particular, you can use linear algebra to prove these geometric facts:

If two distinct lines in P^3 meet, they meet in exactly one point, and lie on exactly one plane.
Given a line and a plane in P^3, either the line lies in the plane or else meets it in exactly one point.
A line in P^3 and a point not on that line, together lie on exactly one plane.
Two distinct planes in P^3 meet in a unique line.
Two distinct lines in a plane meet in exactly one point.

If you assume these facts, they suffice to solve the problem as stated. In fact the solution then requires no further mathematical knowledge, only logic, hence a layperson could do it. Or you might prefer to prove the linear algebra version: Given two 2 diml subspaces F,G of the 4 diml vector space V, intersecting only at the origin, and a one diml subspace E not lying in either F or G, prove there is a unique 2 diml subspace H containing E and meeting each of F,G in a one diml subspace. If you choose this approach, you really should then go back and give the projective geometric argument. I only make this alternate suggestion since some of us may have more linear algebra intuition than projective geometric intuition.
 
Last edited:
  • Like
Likes jbergman
  • #15
I pretty much bruteforced 8a but here is my entry until someone posts a nice one:

So we start by setting $$a = \pi + e\cdot i$$ then by squaring both sides we get $$a^2 = \pi^2 +2ei\pi -e^2 $$ now we put all real terms on one side and square again to get $$ (a^2 - \pi^2 +e^2 )^2 = -4e^2\pi^2 $$ from here some simple algebra gets us our first candidate: $$f(a) = a^4 +2a^2e^2 -2a^2\pi^2 +e^4 + 2e^2\pi^2 + \pi^4 $$ of course this doesn't seem irreducible and to check we suppose that there is a factorization $$(a^2+ba+ c)(a^2 + da + f) = f(x)$$ simple manipulation gives us: $$ d+b = 0 \implies d = -b$$ $$f +db +c = 2e^2 - 2\pi^2 $$ $$bf + dc = b(f-c) = 0 \implies b= 0 \text{ or } f=c$$ $$cf = (e^2 +\pi^2)^2$$ we will consider the case where f=c so from here we get that $$c=f= \pi^2 + e^2$$ and $$b = 2\pi$$ so our factorization is $$f(x) = (x^2 +2x\pi + e^2 + \pi^2 )(x^2 - 2x\pi + e^2 + \pi^2) = g(x)h(x)$$ finally it is easy to see that h(x) has no real zeros and is hence irreducible and it has $$\pi + ei$$ as one of the zeros. Hence h(x) is the minimal polynomial
 
Last edited:
  • #16
kmitza said:
I pretty much bruteforced 8a but here is my entry until someone posts a nice one:

So we start by setting $$a = \pi + e\cdot i$$ then by squaring both sides we get $$a^2 = \pi^2 +2ei\pi -e^2 $$ now we put all real terms on one side and square again to get $$ (a^2 - \pi^2 +e^2 )^2 = -4e^2\pi^2 $$ from here some simple algebra gets us our first candidate: $$f(a) = a^4 +2a^2e^2 -2a^2\pi^2 +e^4 + 2e^2\pi^2 + \pi^4 $$ of course this doesn't seem irreducible and to check we suppose that there is a factorization $$(a^2+ba+ c)(a^2 + da + f) = f(x)$$ simple manipulation gives us: $$ d+b = 0 \implies d = -b$$ $$f +db +c = 2e^2 - 2\pi^2 $$ $$bf + dc = b(f-c) = 0 \implies b= 0 \text{ or } f=c$$ $$cf = (e^2 +\pi^2)^2$$ we will consider the case where f=c so from here we get that $$c=f= \pi^2 + e^2$$ and $$b = 2\pi$$ so our factorization is $$f(a) = (x^2 +2a\pi + e^2 + \pi^2 )(x^2 - 2a\pi^2 + e^2 + \pi^2) = g(a)h(a)$$ finally it is easy to see that h(a) has no real zeros and is hence irreducible and it has $$\pi + ei$$ as one of the zeros. Hence h(a) is the minimal polynomial
I assume there is a typo somewhere. Could you please write your solution as an element of ##\mathbb{R}[x]?## I'm a bit lost within your alphabet.
 
  • #17
fresh_42 said:
I assume there is a typo somewhere. Could you please write your solution as an element of ##\mathbb{R}[x]?## I'm a bit lost within your alphabet.
Yeah sorry my notation wasn't good, bad choice using a as a variable... I think I fixed it now
 
  • #18
kmitza said:
Yeah sorry my notation wasn't good, bad choice using a as a variable... I think I fixed it now
Nope. Hint: Compare ##g(x)## and ##h(x)##. One is wrong. Unfortunately ##h(x).##
 
  • #19
Um, I am not sure what I you mean. I double-checked with Wolfram and by hand just now, and the one that has $$\pi + ei$$ as a root is $$h(x) = x^2 -2x\pi + e^2 + \pi^2$$ the other one has $$-\pi + ie$$ and its conjugate as roots, right?
 
  • #20
kmitza said:
Um, I am not sure what you mean. I double-checked with Wolfram and by hand just now, and the one that has $$\pi + ei$$ as a root is $$h(x) = x^2 -2x\pi + e^2 + \pi^2$$ the other one has $$-\pi + ie$$ and its conjugate as roots, right?
Yes, but this is not what you wrote earlier! You squared ##\pi ## in the second term in your original answer. Here it is gone!

Here is the (slightly) more elegant proof:

It is easy to guess the conjugate, second root. Now ##(\pi+ e\cdot i)(\pi - e\cdot i)=\pi^2+e^2 \in \mathbb{R}## and ##(\pi+ e\cdot i)+(\pi - e\cdot i)=2\pi \in \mathbb{R}## so we get by Vieta's formulas ##X^2-2\pi X+\pi^2+e^2\in \mathbb{R}[X].##
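The Vieta polynomial is also easy to confirm numerically; this throwaway check of mine (standard library only) just plugs ##\pi + e\cdot i## and its conjugate into ##X^2-2\pi X+\pi^2+e^2##:

```python
import math

# Evaluate X^2 - 2*pi*X + (pi^2 + e^2) at X = pi + e*i; by Vieta this is 0.
z = complex(math.pi, math.e)
p = z * z - 2 * math.pi * z + (math.pi ** 2 + math.e ** 2)
assert abs(p) < 1e-12  # zero up to floating-point rounding

# The conjugate root pi - e*i vanishes as well.
w = z.conjugate()
q = w * w - 2 * math.pi * w + (math.pi ** 2 + math.e ** 2)
assert abs(q) < 1e-12
```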
 
  • Like
Likes kmitza
  • #21
fresh_42 said:
Yes, but this is not what you wrote earlier! You squared ##\pi ## in the second term in your original answer. Here it is gone!

Here is the (slightly) more elegant proof:

It is easy to guess the conjugate, second root. Now ##(\pi+ e\cdot i)(\pi - e\cdot i)=\pi^2+e^2 \in \mathbb{R}## and ##(\pi+ e\cdot i)+(\pi - e\cdot i)=2\pi \in \mathbb{R}## so we get by Vieta's formulas ##X^2-2\pi X+\pi^2+e^2\in \mathbb{R}[X].##
Oh my god, that's so much simpler, thank you for showing me. As for the square, it was an honest mistake; I didn't see that I added it.
 
  • Like
Likes fresh_42
  • #22
Hint for #9: To show two vector spaces are isomorphic, of course we first try to find a linear map from one to the other, and then hopefully it is bijective. We don't quite have that here, but we have something close. I.e. remember the defining property of the tensor product, Hom(V⊗W, X) ≈ Bil(V×W, X) ≈ Hom(V, Hom(W, X)), (where Hom means linear maps and Bil means bilinear maps). What does that give here, and how does that help?

Hints for #10: This is an (affine) algebraic geometry problem, the study of the relation between subsets of affine space and the ideals of polynomials vanishing on them. If f(X,Y) is a polynomial over a field k, with no multiple factors, and C = {(a,b): f(a,b) = 0} is the subset of the plane where it vanishes, then the family of all polynomials vanishing on this set equals the ideal (f) generated by f in k[X,Y], hence k[X,Y]/(f) = R is the ring of polynomial functions restricted to C.

The fundamental fact relating points and ideals is that if p = (a,b) is a point of C, then the evaluation function k[X,Y]-->k, sending a polynomial g to its value g(p) at p, has as its kernel a maximal ideal of k[X,Y] which contains the ideal (f), hence defines a maximal, hence prime, ideal Mp of R. (Prove this.) What are the generators of Mp?

This technique lets you produce certain elements of Spec(R) = {set of all prime ideals of R}. There are, however, also other prime ideals in R. Can you find them? (They correspond to "points" of C, i.e. solutions (a,b) of f(X,Y) = 0, but with coefficients (a,b) not in k, but in certain field extensions of k.)

If you want to explore the prime (or maximal) spectrum of the ring of real polynomial functions restricted to the real locus, you may consult my answer to this question on Math Stack Exchange:
https://math.stackexchange.com/ques...y/2844259?noredirect=1#comment5865953_2844259
 
Last edited:
  • #23
Question on #6:
Pardon me for ignorance, but I wonder just what is wanted in problem 6, since I thought 6a is sometimes taken as a definition of an orientable differentiable manifold. Of course it is possible to define the orientability of any continuous manifold, in terms of continuous sections of the orientation bundle, a certain 2 sheeted cover of the manifold constructed from local integral homology groups, in which setting the problem does not quite make sense, i.e. if no differentiable structure is given. Does the problem ask for a proof that the definition of orientability as a continuous manifold, i.e. in terms of the orientation bundle, is equivalent to the two given conditions, in the presence of differentiable structure?

Of course for starters one could just prove that 6a and 6b are equivalent for differentiable manifolds. But then what? Thank you.
 
  • #24
mathwonk said:
Question on #6:
Pardon me for ignorance, but I wonder just what is wanted in problem 6, since I thought 6a is sometimes taken as a definition of an orientable differentiable manifold.
I meant the following more basic definition via coordinate charts. I thought that was closest to how physicists consider an orientation; basically, a positive functional determinant ##\det D(\varphi_\alpha \circ \varphi^{-1}_\beta )>0## of chart changes:

Orientations of a vector space are elements from either of the two possible equivalence classes of ordered bases, i.e. ##\det T \gtrless 0## where ##T## is the transformation matrix between bases.

An orientation ##\mu## of ##M## is a choice of orientations ##\mu_x## for every tangent space ##T_x(M),## such that for all ##x_0\in M## there is an open neighborhood ##x_0\in U\subseteq M## and differentiable vector fields ##\xi_1,\ldots,\xi_n## on ##U## with
$$
\left[\left(\xi_1\right)_x,\ldots,\left(\xi_n\right)_x\right]=\mu_x
$$
for all ##x\in U.## The manifold ##M## is called orientable, if an orientation for ##M## can be chosen.

Let ##\mu## be an orientation on ##M.## A chart ##(U,\varphi )## with coordinates ##x_1,\ldots,x_n## is called positive oriented, if for all ##x\in U##
$$
\left[\left. \dfrac{\partial }{\partial x_1}\right|_{x},\ldots,\left. \dfrac{\partial }{\partial x_n}\right|_{x}\right]=\mu_x
$$
 
  • #25
Suggestion for #8b:
First prove that F7 is a field, then try to use the same proof to show F7[T]/(T^3-2) is a field.

Here is an example of the (easier) problem of finding an inverse, this time of T:
since T^3 = 2, then 4T^3 = (4T^2)T = 8 ≡ 1 (mod 7), so T^-1 = 4T^2.

For a reprise of basic facts about computations in extensions of this type, one may consult pages 21-30 of chapter 2 of Galois Theory, by Emil Artin, available here for free download, under open access; (this is where I first encountered these ideas in about 1963, and is still the clearest explanation I have seen since):
https://projecteuclid.org/ebooks/no...d-Theory/ndml/1175197045?tab=ArticleFirstPage
 
Last edited:
  • #26
I'll take a go at 8b.

Let ##p(T) = T^3-2##. Notice that ##p## has no roots over ##\mathbb{F}_7##, which can be seen by simply evaluating ##p## at each element of ##\mathbb{F}_7##.

We claim that ##p## is irreducible. If not, then ##p## can be factored as ##p = f g## where ##f##, ##g## are irreducible and ##deg(f) + deg(g) = 3##. Since ##f## and ##g## are irreducible, ##deg(f), deg(g) > 0## and so either ##deg(f) = 1## or ##deg(g) = 1##. Suppose without loss of generality that ##deg(f) = 1##; then ##f## is linear and hence has a root in ##\mathbb{F}_7##, which means that ##p## has a root in ##\mathbb{F}_7##, a contradiction.

Since ##p## is irreducible, ##(p)## must be maximal. Otherwise suppose there exists some ##q \in \mathbb{F}_7[T]## such that ##(p) \subsetneq (q) \subsetneq \mathbb{F}_7[T]##. Then there exists an ##f \in \mathbb{F}_7[T]## such that ##p = qf##. Since ##p## is irreducible, either ##q## is a unit (in which case ##(q) = \mathbb{F}_7[T]##) or ##f## is a unit (in which case ##(q) = (p)##), a contradiction either way. So indeed ##(p)## is maximal.

We now use the fact that given any ring ##R## with a maximal ideal ##M##, the quotient ##R/M## is a field. This follows from the correspondence theorem, which states that there is a one-to-one correspondence between the ideals of ##R/M## and the ideals of ##R## containing ##M##. So if ##M## is maximal, ##R/M## cannot contain any nontrivial proper ideals, and so must be a field.

To compute the number of elements in ##\mathbb{F}_7[T]/(T^3-2)##, note that since ##\deg(T^3-2) = 3##, division with remainder shows that each coset can be represented by an element of the form ##c_1 + c_2 T + c_3 T^2## where ##c_i \in \mathbb{F}_7##, and there are ##7^3 = 343## such elements.

To compute ##(T^2+2T+4)(2T^2+5)## in ##\mathbb{F}_7[T]/(T^3-2)## we first note that ##(T^2+2T+4)(2T^2+5) = 2T^4 + 2T^3 + 6T^2 + 3T + 6## in ##\mathbb{F}_7## then reducing ##mod (T^3-2)## gives ## 2(2)T + 2(2) + 6T^2 + 3T + 6## and so $$(T^2+2T+4)(2T^2+5) = 6T^2 + 3$$

To compute ##\frac{1}{T+1}## you can do a sort of "brute force" method. Let ##\frac{1}{T+1} = P(T) = a_0 + a_1T + a_2T^2##; then ##(T+1)P(T) = (a_1+a_2)T^2 + (a_1 + a_0)T + (a_0 + 2a_2) = 1##, which gives the system $$a_0 + 2a_2 = 1, \quad a_1 + a_0 = 0, \quad a_1 + a_2 = 0$$
which has the solution ##a_0 = 5, a_1 = 2, a_2 = 5##. So $$\frac{1}{T+1} =5 + 2T + 5T^2$$
 
Last edited by a moderator:
  • Like
Likes mathwonk and fresh_42
  • #27
jbstemp said:
I'll take a go at 8b.

Let ##p(T) = T^3-2##. Notice that ##p## has no roots over ##\mathbb{F}_7##, which can be seen by simply evaluating ##p## at each element of ##\mathbb{F}_7##.

We claim that ##p## is irreducible. If not, then ##p## can be factored as ##p = f g## where ##f## and ##g## are nonconstant, so ##\deg(f), \deg(g) > 0## and ##\deg(f) + \deg(g) = 3##. Hence either ##\deg(f) = 1## or ##\deg(g) = 1##. Suppose without loss of generality that ##\deg(f) = 1##. Then ##f## is linear and hence has a root in ##\mathbb{F}_7##, which means that ##p## has a root in ##\mathbb{F}_7##, a contradiction.

Since ##p## is irreducible, ##(p)## must be maximal. Indeed, suppose there exists some ##q \in \mathbb{F}_7[T]## such that ##(p) \subseteq (q)##. Then there exists an ##f \in \mathbb{F}_7[T]## such that ##p = qf##. Since ##p## is irreducible, either ##q## is a unit (in which case ##(q) = \mathbb{F}_7[T]##) or ##f## is a unit (in which case ##(q) = (p)##). So no ideal lies strictly between ##(p)## and ##\mathbb{F}_7[T]##, i.e. ##(p)## is maximal.

We now use the fact that given any ring ##R## with a maximal ideal ##M##, the quotient ##R/M## is a field. This follows from the correspondence theorem, which states that there is a one-to-one correspondence between the ideals of ##R/M## and the ideals of ##R## containing ##M##. So if ##M## is maximal, ##R/M## cannot contain any nontrivial proper ideals, and so must be a field.

To compute the number of elements in ##\mathbb{F}_7[T]/(T^3-2)##, note that since ##\deg(T^3-2) = 3##, division with remainder shows that each coset can be represented by an element of the form ##c_1 + c_2 T + c_3 T^2## where ##c_i \in \mathbb{F}_7##, and there are ##7^3 = 343## such elements.

To compute ##(T^2+2T+4)(2T^2+5)## in ##\mathbb{F}_7[T]/(T^3-2)## we first note that ##(T^2+2T+4)(2T^2+5) = 2T^4 + 2T^3 + 6T^2 + 3T + 6## in ##\mathbb{F}_7## then reducing ##mod (T^3-2)## gives ## 2(2)T + 2(2) + 6T^2 + 3T + 6## and so $$(T^2+2T+4)(2T^2+5) = 6T^2 + 3$$

To compute ##\frac{1}{T+1}## you can do a sort of "brute force" method. Let ##\frac{1}{T+1} = P(T) = a_0 + a_1T + a_2T^2##; then ##(T+1)P(T) = (a_1+a_2)T^2 + (a_1 + a_0)T + (a_0 + 2a_2) = 1##, which gives the system $$a_0 + 2a_2 = 1, \quad a_1 + a_0 = 0, \quad a_1 + a_2 = 0$$
which has the solution ##a_0 = 5, a_1 = 2, a_2 = 5##. So $$\frac{1}{T+1} =5 + 2T + 5T^2$$
Almost perfect! Only ##(T^2+2T+4)(2T^2+5)=6T^2.## The coefficient at ##T^3## is ##4,## not ##2.##
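Both the corrected product and the inverse from #26 can be verified by expanding over the integers and then reducing. A small sketch (the coefficient-list representation and helper name are mine):

```python
def reduce_mod7(coeffs):
    # coeffs: raw product coefficients, degree 0..4.
    # Reduce with T^3 = 2 and T^4 = 2T, then take everything mod 7.
    c = list(coeffs) + [0] * (5 - len(coeffs))
    c[0] += 2 * c[3]
    c[1] += 2 * c[4]
    return [x % 7 for x in c[:3]]

# (T^2+2T+4)(2T^2+5) expands to 2T^4 + 4T^3 + 13T^2 + 10T + 20 over Z,
# so the T^3 coefficient is indeed 4, and the reduction is 6T^2.
assert reduce_mod7([20, 10, 13, 4, 2]) == [0, 0, 6]

# The inverse from #26: (T+1)(5 + 2T + 5T^2) expands to 5T^3 + 7T^2 + 7T + 5,
# which reduces to 1.
assert reduce_mod7([5, 7, 7, 5]) == [1, 0, 0]
```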
 
  • #28
Nice work! Linear equations can always be used to find inverses as you do. Another trick is to find the "minimal polynomial" of an element, since if X^3 + aX^2 + bX + c = 0, then X^3 + aX^2 + bX = -c, and so X(X^2 + aX + b) = -c, and since we know the inverse of -c in the field of coefficients, we can divide by it. (In particular an element has an inverse iff its minimal polynomial has nonzero constant term. This works in linear algebra too, where we recall the constant term of the characteristic polynomial is, up to sign, the determinant.)

This is brute force, but for linear polynomials like T+a, the force needed is minimal, since we can always substitute T = (T+a)-a into the given equation and expand. E.g. T+1 = S gives T^3 = (S-1)^3 = S^3 - 3S^2 + 3S - 1 = 2, so S^3 - 3S^2 + 3S = S(S^2-3S+3) = 3, and since 3·5 = 1, we get S(5S^2 - S + 1) = 1, and thus 1/(T+1) = 1/S = 5S^2 - S + 1 = 5(T+1)^2 - (T+1) + 1 = 5T^2 + 2T + 5.

Or just play with it, looking for a multiple of T+1 that is constant: E.g. here T^3 = 2, so T^3 + 1 = 3, so T^3 + 1 = (T+1)(T^2-T+1) = 3, and since 1/3 = 5, thus 1/(T+1) = 5(T^2-T+1) = 5T^2 -5T +5 = 5T^2 +2T +5.

I tried to use linear equations to show this ring is a field, by checking that we can always solve the system for an inverse but had trouble showing the determinant is non zero. For a general element a + bT + cT^2, I got determinant a^3 + 2b^3 + 4c^3 + abc, which apparently has no solutions mod 7, except a=b=c= 0. I could show this at least when abc = 0, i.e. when at least one coefficient is zero, (which includes the case of T+1), using the fact that mod 7 the only numbers having cube roots are 0,1,-1, and also when abc≠0 and all the cubes are equal, but did not pursue all other cases in general.
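The determinant claim above is easy to confirm exhaustively: over F_7 there are only 343 triples to test. A quick brute-force sketch (loop structure mine):

```python
# Check that a^3 + 2b^3 + 4c^3 + abc (mod 7) -- the determinant of
# multiplication by a + bT + cT^2 on F_7[T]/(T^3 - 2) -- vanishes only
# at a = b = c = 0, so every nonzero element is invertible.
zeros = [(a, b, c)
         for a in range(7) for b in range(7) for c in range(7)
         if (a**3 + 2*b**3 + 4*c**3 + a*b*c) % 7 == 0]
assert zeros == [(0, 0, 0)]
```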

Your abstract method using maximal ideals of course works beautifully in general. You can also use abstract linear algebra by first showing that the product of two nonzero polynomials of the form a+bT+cT^2 cannot be divisible by T^3-2, (either directly as Artin does, following Gauss, or using unique factorization of polynomials), hence the map from our ring (a finite dimensional vector space) to itself defined by multiplication by a nonzero polynomial a+bT+cT^2 is a linear injection, hence also surjective, hence multiplies some polynomial into 1.

Congratulations!

(What do you think of 8c? For 8d, I myself will need to recall Galois' construction of the field with 25 elements, a field I have never used. ... Oh yes, fresh_42 just showed us how to construct the field of order 7^3, so we can do likewise for 5^2. I also need to know what the Frobenius map of F25 is, ... ok, it raises each element to the 5th power. Ah yes! It is additive, multiplicative, and fixes the subfield F5, hence is an F5-linear map! ... after much faulty calculation, I seem to have accidentally chosen a nice simple eigenbasis. By the way, in hindsight, what happens when you compose the Frobenius with itself?)

Hint: For 8c:

If f·g = P, all with integer coefficients, what would be true mod 5? Is that possible?

By the way, as usual these are very nice problems. I think #8 in particular is wonderful. The technical difficulties are minimal, and the instructional value is high. It is not easy to come up with problems like this that are not routine, do not overwhelm, and teach you a lot.
 
Last edited:
  • Like
Likes fresh_42
  • #29
SPOILER, solutions of 8c, 8d:

8c: To be reducible it suffices to have a root, say X=a, since then by the root-factor theorem, (X-a) is a factor. In each of the last three fields the polynomial has a root: in the reals, by the intermediate value theorem every odd degree polynomial takes both signs hence has a root; in the field with 2 elements X=1 is visibly a root since each of the 4 terms is congruent to 1, mod 2; in the quotient ring, the meaning of the notation is that the polynomial in the bottom is set equal to zero, hence T itself is a root. Over the integers, the polynomial is irreducible, since it is congruent to a positive power of X, mod 5, but then both factors must also be congruent to a positive power of X, mod 5, which implies both have constant term divisible by 5, contradicting the fact that the constant term is not divisible by 25. By Gauss's lemma, an integer polynomial which factors with rational coefficients also has integer-coefficient factors, so it is also irreducible over the rationals.
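Since the problem's polynomial P is not restated above, the mod-2 and Eisenstein conditions can be illustrated with a stand-in having the stated features (odd degree, four odd terms, constant term divisible by 5 but not 25). The polynomial X^3 + 5X^2 + 5X + 5 below is my hypothetical example, not the problem's P:

```python
# Hypothetical stand-in polynomial X^3 + 5X^2 + 5X + 5, coefficients by degree 0..3.
coeffs = [5, 5, 5, 1]

# Mod 2 every term is odd, so X = 1 is a root (the field-with-2-elements case).
assert sum(coeffs) % 2 == 0

# Eisenstein criterion at p = 5: leading coefficient not divisible by 5,
# all lower coefficients divisible by 5, constant term not divisible by 25
# -- hence irreducible over Z, and by Gauss's lemma over Q.
p = 5
assert coeffs[-1] % p != 0
assert all(c % p == 0 for c in coeffs[:-1])
assert coeffs[0] % (p * p) != 0
```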

8d: The nonzero elements of a finite field form a cyclic group, so all 25 elements of the field F25 satisfy the polynomial X^25 = X, but not all satisfy X^5 = X. Since by definition the Frobenius map F takes an element a to F(a) = a^5, this means F^2 = Id but F ≠ Id, so F satisfies the minimal (and characteristic) polynomial X^2 - 1 = 0. Hence its eigenvalues are 1, -1, and it is diagonalizable as a 2x2 matrix, in a suitable basis, with those eigenvalues on the diagonal.
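This can be checked in a concrete model. Below F25 is built as F5[x]/(x^2 - 2), my choice of irreducible polynomial (the squares mod 5 are {0, 1, 4}, so x^2 - 2 has no root); the pair representation and helper names are mine:

```python
# F_25 modeled as F_5[x]/(x^2 - 2); elements are pairs (a, b) meaning a + b*x.

def mul(p, q):
    a, b = p
    c, d = q
    # (a + bx)(c + dx) = ac + 2bd + (ad + bc)x, using x^2 = 2.
    return ((a * c + 2 * b * d) % 5, (a * d + b * c) % 5)

def frob(p):
    # Frobenius: raise to the fifth power.
    r = (1, 0)
    for _ in range(5):
        r = mul(r, p)
    return r

elements = [(a, b) for a in range(5) for b in range(5)]
assert all(frob(frob(e)) == e for e in elements)      # F^2 = Id
assert any(frob(e) != e for e in elements)            # but F != Id
assert all(frob((a, 0)) == (a, 0) for a in range(5))  # F fixes the subfield F_5
assert frob((0, 1)) == (0, 4)                         # F(x) = 4x = -x: eigenvalue -1
```

So in the basis {1, x}, the Frobenius is already diagonal with entries 1 and 4 (= -1 in characteristic 5), matching the matrix given in the next posts.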
 
Last edited:
  • #30
SPOILER: solution for 9:

By definition of the tensor product, or by use of its basic property, a linear map out of V⊗W is equivalent to a bilinear map out of VxW, which then is equivalent both to a linear map from V to W*, and to a linear map from W to V*. In particular, given f as in the problem, define a linear map V-->W* by sending v to the linear functional that sends w to f(v⊗w). By what is given, no nonzero v maps to the zero functional, i.e. this injects V linearly into W*. Hence dim(V) ≤ dim(W*) = dim(W). Similarly, W embeds in V*, so also dim(W) ≤ dim(V). Since V, W are finite dimensional of the same dimension, they are isomorphic, (but not naturally).
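In coordinates the argument reads: writing f(v, w) = vᵀMw, nondegeneracy in the first slot means M has zero left null space, and in the second slot zero right null space, which forces M to be square. A toy numpy sketch (the dimensions and random matrix are mine) of what fails when dim V < dim W:

```python
import numpy as np

# A bilinear map f(v, w) = v @ M @ w on R^2 x R^3 -- deliberately mismatched dims.
rng = np.random.default_rng(0)
M = rng.standard_normal((2, 3))

# rank(M) <= 2 < 3 = dim W, so some nonzero w pairs to zero with every v.
# The rows of Vt beyond the rank span the right null space of M.
_, s, Vt = np.linalg.svd(M)
w = Vt[-1]
assert np.allclose(M @ w, 0)   # f(v, w) = v @ (M @ w) = 0 for every v
assert not np.allclose(w, 0)   # yet w itself is nonzero
```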
 
  • #31
SPOILER: solution of #10:

By definition, V(Y^2-X^2) denotes the set of points in (X,Y) space whose coordinates satisfy the polynomial Y^2-X^2. Hence the real points consist of the points on the two lines X=Y and X+Y = 0, in the real plane. By definition Spec(R) is the set of prime ideals in the ring R, where an ideal J in R is prime if and only if the quotient ring R/J is a "domain", i.e. R/J has no non trivial divisors of zero. Equivalently, J is prime if and only if, for elements a, b in R, the product ab belongs to J only when at least one of a or b belongs to J.

If (a,b) is a solution of Y^2-X^2 = 0, the evaluation map C[X,Y]-->C, sending f(X,Y) to f(a,b), defines a surjective homomorphism C[X,Y]-->C with kernel the maximal ideal (X-a,Y-b). Y^2-X^2 belongs to this ideal, as one sees by setting Y = (Y-b)+b and X = (X-a)+a and expanding Y^2-X^2. Hence (X-a,Y-b) defines an ideal J in R = C[X,Y]/(Y^2-X^2) such that R/J ≈ C, a field. Thus J is a prime ideal in R. Moreover J determines the point (a,b) as the only common zero of all elements of J. Hence there is an injection from the infinite set of (real, hence also complex) points of V(Y^2-X^2) into Spec(R), so that spectrum is infinite.

By definition, the Krull dimension of R is the length of the longest strict chain of prime ideals in R, ordered by containment, where the chain P0 < P1 <...< Pn has "length" = n. Since (X-Y) < (X-1,Y-1) is a chain of prime ideals in R of length 1, the Krull dimension is at least one. It already follows that R is not Artinian, since a ring is Artinian if and only if it is Noetherian, (which R is), and of Krull dimension zero, which R is not.

Now R actually has dimension one, since (X-Y) is a principal, hence minimal, prime in R, and when we mod out by it we get the principal ideal domain C[X,Y]/(X-Y) ≈ C[X]. In this ring an ideal is prime if and only if it is generated by an irreducible element, hence no non-zero prime ideal can be contained in another, since one of the irreducible generators would divide the other, contradicting the meaning of irreducible. Thus only one prime ideal, at most, can contain (X-Y). Similarly at most one can contain (X+Y). Thus any chain starting from either (X-Y) or (X+Y) has length at most one. But by definition of a prime ideal, any prime in R must contain zero, i.e. the product (X-Y)(X+Y), hence contains at least one of them.
 
  • #32
mathwonk said:
SPOILER, solutions of 8c, 8d:

8c: To be reducible it suffices to have a root, say X=a, since then by the root-factor theorem, (X-a) is a factor. In each of the last three fields the polynomial has a root: in the reals, by the intermediate value theorem every odd degree polynomial takes both signs hence has a root; in the field with 2 elements X=1 is visibly a root since each of the 4 terms is congruent to 1, mod 2; in the quotient ring, the meaning of the notation is that the polynomial in the bottom is set equal to zero, hence T itself is a root.

mathwonk said:
Over the integers, the polynomial is irreducible, since it is congruent to a positive power of X, mod 5, but then both factors must also be congruent to a positive power of X, mod 5, which implies both have constant term divisible by 5, contradicting the fact that the constant term is not divisible by 25. By Gauss's lemma, an integer polynomial which factors with rational coefficients, has also integer coefficient factors, so it is also irreducible over the rationals.
Or short with Eisenstein and ##5##.
mathwonk said:
8d: The non zero elements of a finite field form a cyclic group, so all 25 terms of the field F25 satisfy the polynomial X^25 = X. Since by definition the Frobenius map F takes an element a to F(a) = a^5, that means F^2 = Id, so F satisfies the minimal and characteristic polynomial X^2-1 = 0. Hence its eigenvalues are 1,-1, and it is diagonalizable as a 2x2 matrix, in a suitable basis, with those eigenvalues on the diagonal.
... which results in ##\begin{bmatrix}1&0\\0&4\end{bmatrix}##
 
  • #33
SPOILER: solution for 2:

Let the two disjoint lines be L and M, and p the point not on either of them. Then there is a unique plane A spanned by L and p, and a unique plane B spanned by p and M. Then A and B are distinct planes, since one of them contains L, the other contains M, and L and M cannot be in the same plane since they do not meet. Hence the planes A and B meet along a unique common line K. Since K consists of all points common to A and B, it contains p. Since K also lies in both planes A and B, it meets both L and M. If R were some other such line, R would contain p as well as a point of L, hence would meet A in 2 points, hence would lie in A. Similarly it would lie in B, so the only possible such line R is the unique line K of intersection of A and B.

That leaves 4b, 5 and 6. Are we stumped?!
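The whole argument is linear algebra on subspaces of R^4 (a line in P^3 is a 2-dimensional subspace, a plane a 3-dimensional one, and intersections follow from dim(U ∩ V) = dim U + dim V - dim(U + V)). Here is a concrete sketch, with coordinates chosen by me for illustration:

```python
import numpy as np

def dim(*vecs):
    # Dimension of the span of the given vectors in R^4.
    return np.linalg.matrix_rank(np.array(vecs, dtype=float))

# Skew lines L, M in P^3 and a point p on neither.
l1, l2 = [1, 0, 0, 0], [0, 1, 0, 0]   # L = span(e1, e2)
m1, m2 = [0, 0, 1, 0], [0, 0, 0, 1]   # M = span(e3, e4)
p = [1, 1, 1, 1]

assert dim(l1, l2, m1, m2) == 4       # skew: L and M together span everything
assert dim(l1, l2, p) == 3            # plane A = span(L, p)
assert dim(m1, m2, p) == 3            # plane B = span(M, p)

# dim(A ∩ B) = 3 + 3 - dim(A + B) = 3 + 3 - 4 = 2: a projective line K.
# Here A = {x3 = x4} and B = {x1 = x2}, so K = span(k1, k2):
k1, k2 = [1, 1, 0, 0], [0, 0, 1, 1]
assert dim(k1, k2, p) == 2            # p lies on K
assert dim(k1, k2, l1, l2) == 3       # dim(K ∩ L) = 2+2-3 = 1: K meets L in a point
assert dim(k1, k2, m1, m2) == 3       # likewise K meets M
```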
 
  • Like
Likes fresh_42
  • #34
mathwonk said:
SPOILER: solution for 2:

Let the two disjoint lines be L and M, and p the point not on either of them. Then there is a unique plane A spanned by L and p, and a unique plane B spanned by p and M. Then A and B are distinct planes, since one of them contains L, the other contains M, and L and M cannot be in the same plane since they do not meet. Hence the planes A and B meet along a unique common line K. Since K consists of all points common to A and B, it contains p. Since K also lies in both planes A and B, it meets both L and M. If R were some other such line, R would contain p as well as a point of L, hence would meet A in 2 points, hence would lie in A. Similarly it would lie in B, so the only possible such line R is the unique line K of intersection of A and B.

That leaves 4b, 5 and 6. Are we stumped?!
Doh! I was just working on sketching out a logical proof based on your axioms of P^3:
1) the two skew lines (g, h) do not meet, so they are in distinct planes, A and B
1a) g is a line on A
1b) h is a line on B
2) A and B are distinct so they meet in a unique line, k != h or g
2a) h intersects A at exactly one point
2b) g intersects B at exactly one point
2c) k is a line in both planes
3) the point p is neither on g or h
3a) p and g lie in exactly one plane, C
3b) p and h lie in exactly one plane, D
4) C and D are distinct, so they meet in a unique line, l
5) I'm not sure here... but I think:
4a) C and A intersect along the line g (g is in both planes)
4b) D and B intersect along the line h (h is in both planes)
6) l intersects with both g and h
6a) since both l and g are on C, they meet at a unique point
6b) since both l and h are on D, they meet at a unique point
6c) The point p is on both C and D, so it must be also be on l

If that is more or less correct, I guess 2) becomes irrelevant, but let me know if I got anything wrong.
 
  • #35
valenumr said:
If that is more or less correct, I guess 2) becomes irrelevant, but let me know if I got anything wrong.
This is too vague and looks too Euclidean. How do you construct ##A## and ##B## with those properties? Anyway, the correct solution has been given in post #33.
 
