How Do We Find the Embeddings of $\mathbb{Q}(\sqrt{2})$ in $\mathbb{R}$?

In summary, the example shows how to find the embeddings of a field extension: an embedding must send a root of the minimal polynomial of a primitive element to a root of the image of that polynomial. The two embeddings are found by choosing the images of the basis elements, in this case $1$ and $\sqrt{2}$; the two options for the image of $\sqrt{2}$ correspond to the two real roots of $x^2-2$.
  • #1
mathmari
Hey!

In my notes there is the following example:

$$\begin{array}{ccc} \mathbb{Q}(\sqrt{2}) & \overset{\widetilde{\sigma}}{\longrightarrow} & \mathbb{R}\\ \cup & & \| \\ \mathbb{Q} & \overset{\sigma=\operatorname{id}:\ q \,\mapsto\, q}{\longrightarrow} & \mathbb{R} \end{array}$$

$p(x)=Irr(\sqrt{2}, \mathbb{Q})=x^2-2 \in \mathbb{Q}[x]$

$p^{\sigma}=x^2-2 \in \mathbb{R}[x]$ has two distinct roots in $\mathbb{R}$: $\pm \sqrt{2}$

So there are two embeddings $\widetilde{\sigma} : \mathbb{Q}(\sqrt{2}) \rightarrow \mathbb{R}$ extending $\sigma$:

- $\widetilde{\sigma}(\sqrt{2})=\sqrt{2}$, so $\widetilde{\sigma}(\xi)=\xi$ for all $\xi \in \mathbb{Q}(\sqrt{2})$ (the inclusion map)
- $\widetilde{\sigma}(\sqrt{2})=-\sqrt{2}$, so $\widetilde{\sigma}(q_0+q_1 \sqrt{2})=q_0-q_1\sqrt{2}$ (conjugation)
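
For instance, a quick check of the second formula on a product (worked out here, not part of the quoted notes) shows it behaving multiplicatively:

$$\widetilde{\sigma}\big((1+\sqrt{2})(3+2\sqrt{2})\big)=\widetilde{\sigma}(7+5\sqrt{2})=7-5\sqrt{2}=(1-\sqrt{2})(3-2\sqrt{2})=\widetilde{\sigma}(1+\sqrt{2})\,\widetilde{\sigma}(3+2\sqrt{2})$$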

Could you explain how we found these two embeddings?
 
  • #2
Hi,

Such a morphism sends each root of the minimal polynomial of the primitive element to a root of the "image of the polynomial" (the polynomial whose coefficients are the images of the original coefficients).

A morphism is determined by the images of the elements of a basis (in this case $\{1, \sqrt{2}\}$). As you know, the morphism must be the identity when restricted to the base field, so $\tilde{\sigma}(1)=1$, and there are two options for the image of $\sqrt{2}$, namely the two roots of $x^{2}-2$: $\tilde{\sigma}(\sqrt{2})=\pm \sqrt{2}$.
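
To spell out the "roots go to roots" step (a standard argument, sketched here for completeness): apply $\tilde{\sigma}$ to the relation $(\sqrt{2})^2-2=0$. Since $\tilde{\sigma}$ is a ring homomorphism fixing $\mathbb{Q}$,

$$0=\tilde{\sigma}\big((\sqrt{2})^2-2\big)=\tilde{\sigma}(\sqrt{2})^2-2,$$

so $\tilde{\sigma}(\sqrt{2})$ is itself a root of $x^2-2$ in $\mathbb{R}$, i.e. $\tilde{\sigma}(\sqrt{2})=\pm\sqrt{2}$. Conversely, each choice determines a well-defined embedding because every element of $\mathbb{Q}(\sqrt{2})$ is uniquely of the form $q_0+q_1\sqrt{2}$ with $q_0,q_1\in\mathbb{Q}$.

If a computational check helps, here is a small SymPy sketch (my own illustration, not from the thread; the helper name `sigma` is made up) verifying that the conjugation map fixes $\mathbb{Q}$, sends $\sqrt{2}$ to a root of $x^2-2$, and respects addition and multiplication:

```python
import sympy as sp

sqrt2 = sp.sqrt(2)
q0, q1, r0, r1 = sp.symbols('q0 q1 r0 r1', rational=True)

def sigma(expr):
    """Conjugation embedding of Q(sqrt(2)) into R: fixes Q, sends sqrt(2) to -sqrt(2)."""
    return expr.subs(sqrt2, -sqrt2)

x = q0 + q1 * sqrt2  # a generic element q0 + q1*sqrt(2)
y = r0 + r1 * sqrt2

# The image of sqrt(2) is again a root of x^2 - 2:
assert sp.simplify(sigma(sqrt2)**2 - 2) == 0

# sigma restricts to the identity on the base field Q:
assert sigma(sp.Rational(3, 4)) == sp.Rational(3, 4)

# sigma respects addition and multiplication (expand first so the product
# is in the normal form a + b*sqrt(2) before conjugating):
assert sp.expand(sigma(sp.expand(x + y)) - (sigma(x) + sigma(y))) == 0
assert sp.expand(sigma(sp.expand(x * y)) - sp.expand(sigma(x) * sigma(y))) == 0

print(sigma(1 + 2 * sqrt2))  # prints 1 - 2*sqrt(2)
```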
 

