What is the basis of the trivial vector space {0}

  • #1
I like Serena
Homework Helper
MHB
HallsofIvy said:
By the way, the title to this thread said "empty set". Obviously, {0} is not the "empty set".

It makes me wonder... wikipedia says about a basis:
In mathematics, a set of elements (vectors) in a vector space V is called a basis, or a set of basis vectors, if the vectors are linearly independent and every vector in the vector space is a linear combination of this set.[1]

So what is the basis for the trivial vector space $\{\mathbf 0\}$?
Because if we pick the empty set $\varnothing$ as a basis, we cannot find a linear combination for the zero-vector $\mathbf 0$.
Would the basis then be $\{\mathbf 0\}$?
Or is the definition in wiki wrong? It certainly doesn't say anything about the trivial vector space.
And it seems to me that a basis should only contain non-zero vectors. (Thinking)

greg1313 said:
Use \{\varnothing\} for $\{\varnothing\}$.

I'm afraid it's not $\{\varnothing\}$. It's really $\{\mathbf 0\}$. (Nerd)
 
  • #2
I believe that $\{0\}$ does not have a basis. Indeed, the zero-vector cannot be a basis because it is not independent.
Taylor and Lay define (Hamel) bases only for vector spaces with "some nonzero elements". (Introduction to Functional Analysis, 1980.) Then they give the usual proof that every such vector space has a Hamel basis.
 
  • #3
Krylov said:
I believe that $\{0\}$ does not have a basis. Indeed, the zero-vector cannot be a basis because it is not independent.

Ah, but it can be a basis! Since there is only one vector, the zero-vector, it holds that any vector in the basis is not a linear combination of the other vectors in the basis - just because there aren't any!

See the wiki definition of linear independence:
In the theory of vector spaces, a set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others; if no vector in the set can be written in this way, then the vectors are said to be linearly independent.

And to be honest, it doesn't make sense to me that there is exactly one vector space, the trivial vector space, that wouldn't have a basis.
So I'm not sure what to think of Taylor and Lay. I'm not familiar with them, but it would seem as if they have also overlooked the trivial vector space.
Just checked my own books, but unfortunately I don't have any that define a basis or linear independence.
 
  • #4
Then I think I disagree with the wiki definition of independence. For me, the standard definition is: The vectors $\mathbf{x}_1,\ldots,\mathbf{x}_n$ are said to be independent if
\[
c_1\mathbf{x}_1 + \cdots + c_n\mathbf{x}_n = 0
\]
for some scalars $c_1,\ldots,c_n \in \mathbb{K}$ (with $\mathbb{K}$ the real or complex field, for simplicity) implies $c_1 = \cdots = c_n = 0$. To me, what is written in the wiki seems to be an incomplete characterization.

So, $\mathbf{0}$ by itself is not independent and therefore cannot form a basis.
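
Spelled out with $n = 1$ and $\mathbf{x}_1 = \mathbf{0}$, as a quick check against the definition above:
\[
c_1 \mathbf{0} = \mathbf{0} \qquad \text{for every } c_1 \in \mathbb{K},
\]
so already $c_1 = 1$ is a nontrivial solution, and $\{\mathbf{0}\}$ is dependent.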

Some books rule out the trivial vector space from the start. I suppose it is to avoid this kind of thing.
 
  • #5
Krylov said:
Then I think I disagree with the wiki definition of independence. For me, the standard definition is: The vectors $\mathbf{x}_1,\ldots,\mathbf{x}_n$ are said to be independent if
\[
c_1\mathbf{x}_1 + \cdots + c_n\mathbf{x}_n = 0
\]
for some scalars $c_1,\ldots,c_n \in \mathbb{K}$ (with $\mathbb{K}$ the real or complex field, for simplicity) implies $c_1 = \cdots = c_n = 0$. To me, what is written in the wiki seems to be an incomplete characterization.

Fair enough.
But the nice thing about wiki is that we can fix it!
Preferably with a proper reference, but just making it consistent will do for me - I'm sure other people will respond to it and fix it if we do it wrong.
(I've recently been fixing the definition of subfield on wiki. ;))

Either way, linear dependence has to be the opposite of linear independence.
So I guess you're proposing to switch the definitions around?
That could work for me, especially since the current wiki definition doesn't have a reference.
Still, we do need to get to a definition that is consistent and complete.

Krylov said:
Some books rule out the trivial vector space from the start. I suppose it is to avoid this kind of thing.

I don't agree with that.
It just means they're being sloppy and can't be bothered with edge cases, which IMHO should never be ignored.

As I see it, either we accept {0} as a basis, and accept it as being a linearly independent set, which also keeps span consistent.
Or we modify basis to mean that it only has non-zero vectors, modify linear span to always include the zero-vector, and modify linear independence to be as you suggested, and make linear dependence its opposite.

Both would fix the inconsistencies wouldn't they? (Wondering)
 
  • #6
Just because I can and because it makes sense to me, I've moved the posts about a basis for {0} to a new thread.
 
  • #7
Wouldn't any linearly independent set be a basis for $\{0\}?$ Suppose your linearly independent set is $\{x_1,x_2,\dots,x_n\}$. Then the linear combination $0\cdot x_1+0\cdot x_2+\cdots+0\cdot x_n=0$, and thus it spans the space.
 
  • #8
I like Serena said:
Either way, linear dependence has to be the opposite of linear independence.
So I guess you're proposing to switch the definitions around?
That could work for me, especially since the current wiki definition doesn't have a reference.
Still, we do need to get to a definition that is consistent and complete.

I like Serena said:
As I see it, either we accept {0} as a basis, and accept it as being a linearly independent set, which also keeps span consistent.
Or we modify basis to mean that it only has non-zero vectors, modify linear span to always include the zero-vector, and modify linear independence to be as you suggested, and make linear dependence its opposite.

Both would fix the inconsistencies wouldn't they? (Wondering)

I don't think there currently are any inconsistencies in the standard literature definitions of "linear independence", "linear dependence", "span" or "basis", nor do I think any modifications are required. Given a vector space $V$ over $\mathbb{K} = \mathbb{R}$ or $\mathbb{K} = \mathbb{C}$,

1. The vectors $\mathbf{x}_1,\ldots,\mathbf{x}_n$ in $V$ are defined to be linearly independent if the equation
\[
c_1\mathbf{x}_1 + \cdots + c_n\mathbf{x}_n = \mathbf{0} \qquad (\ast)
\]
only has the trivial solution $c_1 = \cdots = c_n = 0$. (This is what I wrote in post #4.)

2. The above vectors are defined to be linearly dependent if at least one nontrivial solution $(c_1,\ldots,c_n) \in \mathbb{K}^n$ of $(\ast)$ exists.

So, the above standard definitions are indeed complementary.

If the above vectors $\mathbf{x}_1,\ldots,\mathbf{x}_n$ are independent and they span $V$, then by definition they are a basis for $V$.

I think that, as far as the definitions are concerned, there is nothing more to it. In particular, I do not know of any reference that regards $\mathbf{0}$ by itself as an independent vector. It would contradict the above definitions. (Moreover, but less importantly, it would mess up a lot of results. For example, "A square matrix is invertible if and only if its columns are linearly independent" would no longer be true: The $1 \times 1$ zero matrix would be a counterexample.)
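
Spelling that counterexample out:
\[
A = \begin{pmatrix} 0 \end{pmatrix}, \qquad A x = 0 \ \text{ for every } x \in \mathbb{K},
\]
so $A x = b$ has no solution for $b \neq 0$ and $A$ is not invertible, yet its single column is $\mathbf{0}$, which the nonstandard convention would count as "independent".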

Ackbach said:
Wouldn't any linearly independent set be a basis for $\{0\}?$ Suppose your linearly independent set is $\{x_1,x_2,\dots,x_n\}$. Then the linear combination $0\cdot x_1+0\cdot x_2+\cdots+0\cdot x_n=0$, and thus it spans the space.
No, the linearly independent set has to be a subset of the vector space for which it is going to be a basis. (Otherwise, one gets strange things: the vectors $(1,0)^T$ and $(0,1)^T$ in $\mathbb{R}^2$ would form a basis for the trivial subspace $\{(0,0,0)^T\}$ of $\mathbb{R}^3$.)
 
  • #9
Krylov said:
I don't think there currently are any inconsistencies in the standard literature definitions of "linear independence", "linear dependence", "span" or "basis", nor do I think any modifications are required.

The problems I see with the current definitions are the following. Wiki's intro section on linear independence:
wiki said:
In the theory of vector spaces, a set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others; if no vector in the set can be written in this way, then the vectors are said to be linearly independent.

With that definition {0} is independent instead of dependent.
I believe we can fix it by making it a set of non-zero vectors.

And I interpret it to mean that $\varnothing$ is an independent set, which I consider to be correct now that you've clarified.
No need to mention that in an intro section.


Wiki's definition section of linear independence is as you've quoted, which indeed takes care of {0}.

However, I think the status of $\varnothing$ is ambiguous there.
If I'm not mistaken, it satisfies both the definition of dependence and the definition of independence.
That is because the sum of zero terms is (usually) considered to be zero.

I believe we can improve it by making it explicit that $\varnothing$ is an independent set.
And while we're at it, note that {0} is a dependent set.
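
Concretely, checking both sets against your definition $(\ast)$: for $\varnothing$, take $n = 0$; the empty sum equals $\mathbf 0$, but the only coefficient tuple is the empty one, which is vacuously trivial, so no nontrivial solution exists and $\varnothing$ is independent. For $\{\mathbf 0\}$, take $n = 1$:
\[
1 \cdot \mathbf 0 = \mathbf 0,
\]
a nontrivial solution, so $\{\mathbf 0\}$ is dependent.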


Similarly, I believe it would slightly improve wiki's basis article if we noted that $\varnothing$ is a basis for the trivial vector space {0},
and that {0} is not a basis, since it's a linearly dependent set.
 
  • #10
Krylov said:
No, the linearly independent set has to be a subset of the vector space for which it is going to be a basis. (Otherwise, one gets strange things: the vectors $(1,0)^T$ and $(0,1)^T$ in $\mathbb{R}^2$ would form a basis for the trivial subspace $\{(0,0,0)^T\}$ of $\mathbb{R}^3$.)

I would agree that basis vectors have to be in some larger set that contains the vector space you're finding the basis of. However, if you read Griffiths' Introduction to Quantum Mechanics (I have the 1st Ed.), you find on page 102 a couple of very fun footnotes that I'll quote here for you:

We are engaged here in a dangerous stretching of the rules, pioneered by Dirac (who had a kind of inspired confidence that he could get away with it) and disparaged by von Neumann (who was more sensitive to mathematical niceties), in their rival classics (P. A. M. Dirac, The Principles of Quantum Mechanics, first published in 1930, 4th Ed., Oxford (Clarendon Press) 1958, and J. von Neumann, The Mathematical Foundations of Quantum Mechanics, first published in 1932, revised by Princeton Univ. Press, 1955). Dirac notation invites us to apply the language and methods of linear algebra to functions that lie in the "almost normalizable" suburbs of Hilbert space. It turns out to be powerful and effective beyond any reasonable expectation.

The very next footnote:

That's right: We're going to use, as bases, sets of functions none of which is actually in the space! They may not be normalizable, but they are complete, and that's all we need.
 
  • #11
Ackbach said:
I would agree that basis vectors have to be in some larger set that contains the vector space you're finding the basis of. However, if you read Griffiths' Introduction to Quantum Mechanics (I have the 1st Ed.), you find on page 102 a couple of very fun footnotes that I'll quote here for you:
The very next footnote:

It seems to me that he is (or should be (Wink)) thinking about "rigged Hilbert spaces". This construction was invented for non-square-integrable (hence non-normalizable) functions that are nevertheless formal (as opposed to rigorous) eigenstates of differential operators on $L^2(\mathbb{R})$, say. It seems that physicists usually ignore this, but it then indeed leads to the mathematical problem that you and Griffiths suggest.

So, "all we need" clearly depends on who "we" are.
 
  • #12
Bah. I just went ahead and added non-zero vectors to the intro section of linear dependence on wiki.
Then I realized it was wrong since it doesn't apply to a dependent set, so I've reverted it again.
As yet I haven't figured out how to make it correct and consistent without adding exceptional clauses to the intro section, which is not what I want to do. (Worried)
Still, it bothers me that the intro section is formulated as a definition even though it is incorrect.
 
  • #13
David C. Lay's book Linear Algebra and Its Applications gives some not-so-"trivial answers" to a "somewhat trivial question." Wonderful book for upper-level undergrads and graduate students. Definitely worth a read.

See attached for some samples.

Attachments

  • m3260sp03sec43notes.pdf (119.7 KB)
  • #14
DrWahoo said:
David C. Lay's book Linear Algebra with applications would give some not so "trivial answers" to a "somewhat trivial question." Wonderful book for upper level undergrads and graduate students. Definitely worth a read.

See attached for some samples.

Erm... it says nothing about $\varnothing$, $\{\mathbf 0\}$, or the trivial vector space...
It seems to be just the same definition for linear dependence we already have plus some examples.
So what's the point?
 
  • #15
It's a tough nut to crack.

On the one hand, if we want a basis to define the *dimension* of a vector space, we ought to choose $\emptyset$ as the basis. On the other, if we want a basis to be a minimal *spanning set*, we ought to choose $\{0\}$ (since it is the only element we can form spanning sets *from*).

I tend to prefer the empty basis approach, and use the typical caveat that the minimal spanning set criterion only applies to vector spaces of non-zero dimension (if one is careful, one can craft appropriate exceptions in one's statements of the "usual" definitions).
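
With the empty basis, the dimension count stays uniform:
\[
\dim \{\mathbf 0\} = |\varnothing| = 0, \qquad \dim \mathbb{K}^n = n \ \text{ for every } n \geq 0.
\]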
 
  • #16
Deveno said:
It's a tough nut to crack.

On the one hand, if we want a basis to define the *dimension* of a vector space, we ought to choose $\emptyset$ as the basis. On the other, if we want a basis to be a minimal *spanning set*, we ought to choose $\{0\}$ (since it is the only element we can form spanning sets *from*).

I tend to prefer the empty basis approach, and use the typical caveat that the minimal spanning set criterion only applies to vector spaces of non-zero dimension (if one is careful, one can craft appropriate exceptions in one's statements of the "usual" definitions).

We may consider $0$ as the value of the empty linear combination, because of associativity of addition. It is the same argument that allows us to define $0!$ (an empty product) as $1$; this is valid in any monoid.

Another way to see this is to define the subspace $\langle S\rangle$ spanned by a subset $S$ as the smallest subspace that contains $S$ (the intersection of all subspaces that contain $S$). As any subspace contains $0$, we have $\langle\varnothing\rangle = \{0\}$.
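
In symbols:
\[
\langle \varnothing \rangle = \bigcap \{ W : W \text{ a subspace of } V,\ \varnothing \subseteq W \} = \bigcap \{ W : W \text{ a subspace of } V \} = \{0\},
\]
since $\varnothing \subseteq W$ holds for every subspace $W$, and $\{0\}$ is itself a subspace contained in all the others.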
 
  • #17
castor28 said:
We may consider $0$ as the value of the empty linear combination, because of associativity of addition. It is the same argument that allows us to define $0!$ (an empty product) as $1$; this is valid in any monoid.

Another way to see this is to define the subspace $\langle S\rangle$ spanned by a subset $S$ as the smallest subspace that contains $S$ (the intersection of all subspaces that contain $S$). As any subspace contains $0$, we have $\langle\varnothing\rangle = \{0\}$.

I like your spanning definition, as it dovetails nicely with the notion of generation by a set.

I don't follow how $0$ is the value of the empty linear combination *by associativity*, could you explain?
 
  • #18
Deveno said:
I don't follow how $0$ is the value of the empty linear combination *by associativity*, could you explain?

This is not very formal, but what I had in mind is that any monoid $M$ is the homomorphic image of a free monoid $F$ (set of words with concatenation as operation). As the identity of $F$ is the empty word, it is natural to interpret an empty product in $M$ as the identity.
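
Another way to put it, inside $M$ itself: if we want products over adjacent index ranges to multiply together,
\[
\prod_{i=1}^{n} m_i = \Big( \prod_{i=1}^{k} m_i \Big) \cdot \Big( \prod_{i=k+1}^{n} m_i \Big) \qquad \text{for all } 0 \leq k \leq n,
\]
then taking $k = n$ shows that interpreting the empty product as the identity is the consistent choice.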
 

FAQ: What is the basis of the trivial vector space {0}

What is the trivial vector space?

The trivial vector space is a vector space that contains only one element, the zero vector (0). This means that all vector operations, such as addition and scalar multiplication, result in the zero vector.

What is the basis of the trivial vector space?

The basis of the trivial vector space {0} is the empty set. The zero vector cannot belong to a linearly independent set, so the only linearly independent subset of {0} is $\varnothing$, and its span (the smallest subspace containing it) is {0} itself.
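
In symbols, as established in the discussion above:
\[
\operatorname{span}(\varnothing) = \{0\}, \qquad \dim \{0\} = |\varnothing| = 0.
\]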

Why is the basis of the trivial vector space the empty set?

A basis of a vector space is a set of linearly independent vectors that spans the entire space. The set {0} is linearly dependent, since $c\,\mathbf 0 = \mathbf 0$ holds for nonzero scalars $c$, so the empty set is the only linearly independent subset of the trivial vector space; it spans the space because the empty linear combination is, by convention, the zero vector.

Is the trivial vector space important in mathematics?

While the trivial vector space may seem insignificant, it is actually a fundamental concept in mathematics. It helps to define and understand the concept of a vector space, and is often used as a starting point for more complex mathematical concepts.

Can the trivial vector space be used in real-world applications?

In practical applications, the trivial vector space may not have much use on its own. However, it can be used as a building block for more complex vector spaces and can help provide a theoretical understanding of vector operations and concepts.
