Why do we use double pipes to represent the norm of a vector?

In summary, the norm of a vector is written ##\|\vec{v}\|## instead of just ##|\vec{v}|## because more than one norm can be defined on a space, and the double-pipe notation distinguishes a general norm from the ordinary absolute value. As for the second question, the way the matrix product works is a definition: you define the operation and then work out its consequences.
  • #1
Ashiataka
1. Why is the norm of a vector denoted with double pipes when it is just the magnitude, which is denoted with single pipes?

2. Does anyone know where I could find out why matrix multiplication is defined the way it is? I know how to do it, but I do not understand why it is that way.

Thank you.
 
  • #2
1) You define the norm of a vector as [itex]\|\vec{v}\|[/itex] instead of just [itex]|\vec{v}|[/itex] because, although the two often coincide, this is not necessarily true. In some spaces (different from Euclidean ones) you can define different kinds of norms, for example the uniform norm on a space of functions, [itex]\|\vec{v}\|=\sup_x |\vec{v}(x)|[/itex], where x ranges over the domain.
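
To make this concrete, here is a minimal Python sketch (using NumPy; the vector is a made-up example, not something from the thread) showing that the same vector has different "lengths" under different norms:

[code]
import numpy as np

v = np.array([3.0, -4.0])

# Euclidean 2-norm: sqrt(3^2 + 4^2)
print(np.linalg.norm(v, ord=2))       # 5.0

# 1-norm: |3| + |-4|
print(np.linalg.norm(v, ord=1))       # 7.0

# infinity-norm: max(|3|, |-4|)
print(np.linalg.norm(v, ord=np.inf))  # 4.0
[/code]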

2) The way the matrix product works is a definition. You define an operation and then you find all the consequences.
 
  • #3
1. In this case, it doesn't matter if you write ##\|x\|## or ##|x|##. The ##\|\,\|## notation is standard for norms, and the map ##x\mapsto|x|## is a norm. This is why it makes sense to write ##\|x\|## instead of ##|x|##.
2. For this you need to understand the relationship between linear operators and matrices. See e.g. this post. (You can ignore the quote and the stuff below it). Make sure that you understand bases of vector spaces before you try to understand this. The motivation behind the definition of matrix multiplication is what I said in this post. I think I have posted more details of this somewhere else. I will try to find it... Edit: There it is.

Edit 2: Some additional comments:
  • Linear operator, linear transformation, linear map, linear function all mean the same thing. (Most of the time anyway. Some authors use the term "operator" only when the domain and codomain are the same vector space, and some use the term "function" only when the codomain is ##\mathbb R## or ##\mathbb C##).
  • The point of view that I'm advocating in these posts is that for each linear transformation ##A:U\to V## and each pair of bases (one for U, one for V), there's a matrix [A] that corresponds to it in the following way: we take the number on row i, column j to be ##(Au_j)_i##. Here ##u_j## is the jth member of the given basis for U, and ##(Au_j)_i## is the ith component of the vector ##Au_j## in the given basis for V. Matrix multiplication is defined the way it is to ensure that we always have ##[A\circ B]=[A][B]##; there's a numerical sketch of this below.
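
To see that last identity in action, here is a small NumPy sketch (my own illustration, not from the linked posts; the helper name matrix_of and the example dimensions are made up) that builds the matrix of a linear map column by column from the images of the standard basis vectors, then checks that the matrix of a composition equals the matrix product:

[code]
import numpy as np

def matrix_of(f, dim_in, dim_out):
    """Matrix of a linear map f: R^dim_in -> R^dim_out in the
    standard bases: column j is f(e_j)."""
    M = np.zeros((dim_out, dim_in))
    for j in range(dim_in):
        e_j = np.zeros(dim_in)
        e_j[j] = 1.0
        M[:, j] = f(e_j)
    return M

# Two linear maps, defined here via fixed matrices A0 and B0
A0 = np.random.rand(2, 3)   # A: R^3 -> R^2
B0 = np.random.rand(3, 4)   # B: R^4 -> R^3
A = lambda x: A0 @ x
B = lambda x: B0 @ x

# The matrix of the composition A∘B equals the product [A][B]
AB = matrix_of(lambda x: A(B(x)), 4, 2)
print(np.allclose(AB, matrix_of(A, 3, 2) @ matrix_of(B, 4, 3)))  # True
[/code]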
 
  • #4
Thank you.

I understand it is a definition. But I'm asking why it is defined that way.

EDIT:
Thank you Fredrik. I shall read through those posts, though I fear the mathematics is beyond me.
 
  • #5
Ashiataka said:
I fear the mathematics is beyond me.
The notation (in particular all the indices) may be intimidating, but if you understand vector spaces, bases, and what it means for a function to be linear, it's actually pretty easy. You need to know that the definition of matrix multiplication can be written as ##(AB)_{ij}=\sum_k A_{ik}B_{kj}##. The right-hand side is often written as ##A_{ik}B_{kj}##, especially by physicists, because it's easy enough to remember that there's always a sum over each index that appears twice.
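
For anyone who wants to see that formula in action, here is a plain-Python sketch (my own, not from the thread) that implements ##(AB)_{ij}=\sum_k A_{ik}B_{kj}## directly:

[code]
def matmul(A, B):
    """Multiply matrices (given as lists of rows), computing
    (AB)_ij = sum over k of A_ik * B_kj."""
    n, m, p = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "columns of A must match rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
[/code]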
 
  • #6
Ashiataka said:
2. Does anyone know where I could find out why matrix multiplication is defined the way it is? I know how to do it, but I do not understand why it is that way.

The basic reason is "because it is a very useful way to combine two matrices". A simple example is using matrices to represent simultaneous equations. If you have equations like $$\begin{align} ax + by &= p\\ cx + dy &= q\end{align}$$ that corresponds to the matrix equation $$\begin{bmatrix} a & b \\ c & d\end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} p \\ q \end{bmatrix}$$.
An apparently simpler definition of "multiplication" like $$\begin{bmatrix} a & b \\ c & d\end{bmatrix} \begin{bmatrix} p & q \\ r & s \end{bmatrix} = \begin{bmatrix} ap & bq \\ cr & ds\end{bmatrix} $$ (similar to the definition of matrix addition) turns out to be much less useful.
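
Here is a quick NumPy sketch of that contrast (the numbers are my own made-up example): the standard product is exactly "apply the coefficients to the unknowns", so it plays well with solving the system, while the entrywise product (which does exist, under the name Hadamard product) is a different and, for this purpose, much less useful operation:

[code]
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # the coefficients a, b, c, d
xy = np.array([1.0, 4.0])    # the unknowns x, y

# The standard product reproduces the left-hand sides ax+by, cx+dy
print(M @ xy)                      # [ 6. 13.]

# ...so solving M v = (p, q) recovers (x, y)
print(np.linalg.solve(M, M @ xy))  # [1. 4.]

# The entrywise (Hadamard) product is something else entirely
N = np.array([[5.0, 6.0],
              [7.0, 8.0]])
print(M * N)   # [[10.  6.] [ 7. 24.]]
[/code]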
 
  • #7
C'mon, don't delete those, it was the "best explanation ever"!
 

FAQ: Why do we use double pipes to represent the norm of a vector?

What is matrix multiplication?

Matrix multiplication is a mathematical operation that involves multiplying two matrices to produce a new matrix. It is only defined for two matrices if the number of columns in the first matrix is equal to the number of rows in the second matrix. The resulting matrix will have the same number of rows as the first matrix and the same number of columns as the second matrix.
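
A short NumPy sketch of the shape rule (the shapes here are my own example):

[code]
import numpy as np

A = np.ones((2, 3))   # 2 rows, 3 columns
B = np.ones((3, 4))   # 3 rows, 4 columns

# Columns of A (3) match rows of B (3), so A @ B is defined
print((A @ B).shape)  # (2, 4): rows of A, columns of B

# Swapping the factors breaks compatibility (4 columns vs 2 rows)
try:
    B @ A
except ValueError as e:
    print("not defined:", e)
[/code]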

What is the difference between matrix multiplication and scalar multiplication?

Matrix multiplication involves multiplying two matrices to produce a new matrix, while scalar multiplication involves multiplying every element of a matrix by a single number (a scalar). In scalar multiplication, the resulting matrix has the same dimensions as the original matrix, whereas in matrix multiplication the result takes its row count from the first matrix and its column count from the second, so its dimensions may differ from those of either factor.

How do you calculate the norm of a matrix?

The norm of a matrix is a measure of its size or magnitude. There are several different ways to calculate the norm of a matrix, including the Frobenius norm, which is calculated by taking the square root of the sum of the squared values of all the elements in the matrix.
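
A minimal NumPy check of that description (my own sketch, with a made-up matrix):

[code]
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Frobenius norm: square root of the sum of squared entries
by_hand = np.sqrt((M ** 2).sum())
print(by_hand)                        # 5.477225575051661
print(np.linalg.norm(M, ord='fro'))   # same value
[/code]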

Why is matrix multiplication important in linear algebra?

Matrix multiplication is an essential operation in linear algebra because it allows us to manipulate and transform data represented in the form of matrices. It is used in a variety of applications, such as solving systems of linear equations, calculating transformations in computer graphics, and performing data analysis.

Can you multiply matrices of different dimensions?

Yes, provided their dimensions are compatible: the number of columns in the first matrix must equal the number of rows in the second (for example, a 2×3 matrix times a 3×4 matrix gives a 2×4 matrix). If the dimensions are not compatible, the multiplication operation is not defined.
