Let [itex]\mathbb{K}[/itex] be a field, and assume that all vector spaces mentioned hereafter are over [itex]\mathbb{K}[/itex]. Let W be a vector space and W* its dual space. If Y, Z are vector spaces, define the vector space:
Y * Z = Span{F(y,z) | y in Y, z in Z}
So if (y,z) and (y',z') are distinct, then F(y,z) and F(y',z') are considered to be linearly independent in Y * Z. Define:
U = Span{F(0,z), F(y,0), F(ay,z) - aF(y,z), F(y,az) - aF(y,z), F(y + w,z) - [F(y,z) + F(w,z)], F(y,z + x) - [F(y,z) + F(y,x)] | a is in K; y, w are in Y; z, x are in Z}
Define the vector space:
[tex]Y \otimes Z = (Y*Z)/U[/tex]
and define, for all y in Y and all z in Z
[tex]y \otimes z = F(y,z) + U[/tex]
i.e. the coset of U containing F(y,z)
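As a sanity check on the construction, the generators of U are exactly what force [itex]\otimes[/itex] to be bilinear in the quotient; for instance (a short verification using only the definitions above):

```latex
% F(y + w, z) - [F(y,z) + F(w,z)] lies in U, so in (Y*Z)/U:
(y + w) \otimes z = F(y + w, z) + U
                  = \bigl( F(y,z) + F(w,z) \bigr) + U
                  = y \otimes z + w \otimes z.
% Similarly, F(ay, z) - aF(y,z) \in U gives (ay) \otimes z = a\,(y \otimes z),
% and F(0, z) \in U gives 0 \otimes z = 0.
```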
If V is a vector space, define T(V) to be the graded algebra:
[tex]\mathbb{K} \oplus V \oplus (V \otimes V) \oplus (V \otimes V \otimes V) \oplus \dots[/tex]
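Multiplication in T(V) is the tensor product, extended bilinearly across the grades. A quick illustration (the elements a, v_1, v_2, v_3 below are placeholders of mine, not from the problem statement):

```latex
% For a \in \mathbb{K} (degree 0) and v_1, v_2, v_3 \in V (degree 1):
(a + v_1)\,(v_2 \otimes v_3)
    = a\,(v_2 \otimes v_3) \;+\; v_1 \otimes v_2 \otimes v_3.
% The first summand has degree 2 and the second degree 3: the product of
% graded pieces of degrees p and q lands in degree p + q.
```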
Now let [itex]V = W \oplus W^*[/itex]. Define [itex]I(V)_t ^{odd}[/itex] to be the (2-sided) ideal of T(V) generated by all elements of the form:
[tex](w_1,f_1) \otimes (w_2,f_2) + (w_2,f_2)\otimes (w_1,f_1) - t(f_1(w_2) + f_2(w_1))[/tex]
where t is some element of our underlying field. The (2-sided) ideal generated by these elements is the smallest subspace of T(V) (in particular, it is a vector space) that contains all of them and satisfies the condition that for every element a of T(V) and every element i of the ideal, ai and ia are in the ideal. Finally, define the Clifford-t algebra:
[tex]Cl_t(V) = T(V)/I(V)_t ^{odd}[/tex]
where, recall, t is an element of the underlying field [itex]\mathbb{K}[/itex] and V is [itex]W \oplus W^*[/itex], where W is some [itex]\mathbb{K}[/itex]-vector space.
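For intuition about the quotient: each generating element of the ideal becomes zero in [itex]Cl_t(V)[/itex], so the images of vectors satisfy anticommutation relations. A sketch (writing [itex]\bar{v}[/itex] for the image of [itex]v \in V[/itex]; this notation is mine):

```latex
% Taking (w_1, f_1) = (w, 0) and (w_2, f_2) = (0, f) in the generating relation:
\overline{(w,0)}\;\overline{(0,f)} + \overline{(0,f)}\;\overline{(w,0)} = t\,f(w)\cdot 1.
% Taking both generators equal to (w, 0) (or both equal to (0, f)) kills the
% scalar term, so (when char K is not 2, since the relation gives 2 times the square):
\overline{(w,0)}^{\,2} = 0, \qquad \overline{(0,f)}^{\,2} = 0.
```

For t not equal to 0 these resemble canonical anticommutation relations, which may be a hint about what kind of algebra C is.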
The problem I'm assigned is to show that the Clifford-t algebra defined above has no non-trivial (2-sided) ideals for t not equal to 0. So I think I want to show that if some non-zero element of C = [itex]Cl_t(V)[/itex] lies in my ideal, then I can force that ideal to contain the unit of C, and thus every element of C. Let I = [itex]I(V)_t ^{odd}[/itex]. The unit element of C is 1 + I, where 1 is the unit element of T(V), which is just the multiplicative identity of [itex]\mathbb{K}[/itex]. So I think I want to say: if x is some element of T(V) not in I, then the vector space J containing x + y for each y in I, and satisfying the condition that aj and ja are in J for all j in J and a in T(V), contains some x' such that x' - y' = 1 for some y' in I.
So I start with:
{x}
Then I create:
{x + i | i in I}
Then I create:
{a(x + i), (x + i)a, a(x + i)b | a,b in T(V), i in I}
Then I make it a vector space (close it under addition and scalar multiplication):
Span{a(x + i), (x + i)a, a(x + i)b | a,b in T(V), i in I}
So I have to show that for any non-zero x in T(V), I can pick (for some n, m, p) n+m+p scalars ki, n+m+2p elements of T(V) ai, and n+m+p+1 elements of the ideal j1, ..., jn+m+p, y such that:
[tex]\sum _{i=1} ^n k_ia_i(x + j_i) + \sum _{i=1} ^m k_i(x + j_i)a_i + \sum _{i=1} ^p k_ia_i(x + j_i)a_{p+i} - y = 1[/tex]
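One simplification worth writing down: because I is a 2-sided ideal, every j_i can be absorbed into the single ideal element y. For the third sum, say (the other two are analogous):

```latex
% a_i j_i a_{p+i} \in I since I is a 2-sided ideal, so each summand splits as
k_i a_i (x + j_i) a_{p+i} = k_i a_i\,x\,a_{p+i}
    + \underbrace{k_i a_i j_i a_{p+i}}_{\in\, I},
% and collecting all the ideal terms into y, the target condition becomes:
\sum_i k_i a_i\,x\,b_i \;-\; y \;=\; 1
\quad\text{for some } a_i, b_i \in T(V),\; k_i \in \mathbb{K},\; y \in I,
% i.e. 1 + I lies in the 2-sided ideal of Cl_t(V) generated by x + I.
```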
Am I on any sort of a right track? What do I do from here? Is there a good way of thinking about the elements of I?
EDIT: Actually, since the identity element is an element of T(V), I just need to look at the sum:
[tex]\sum _{i=1} ^n k_ia_i(x + j_i)a_{n+i} - y = 1[/tex]
i.e. things of the form a(x+j) are just of the form a(x+j)1 which is of the form a(x+j)b. So to restate, if x is an element of T(V) that is not an element of I, I need to show that for some n, I can pick n scalars {k1, ..., kn}, 2n elements of T(V) {a1, ..., a2n}, and n+1 elements of I {j1, ..., jn, y} such that
[tex]k_1a_1(x + j_1)a_{n+1} + k_2a_2(x + j_2)a_{n+2} + \dots + k_na_n(x + j_n)a_{2n} - y = 1[/tex]
This is formally what I want to do. However, I have no intuitive sense of what's going on, or of what these elements x, j, etc. are "like," so I don't know what to do from here.

A previous question asked me to show that Hom(V,V), where V is a finite-dimensional vector space, has no non-trivial 2-sided ideals. I did this by treating Hom(V,V) as the set of n×n matrices (where dim(V) = n): given a non-zero matrix, I can row reduce it, then left- and right-multiply it by the matrix that has a 1 in the first entry and zeroes everywhere else, then multiply by a particular scalar, so I'm left with a matrix that has a 1 in the top-left and zeroes everywhere else. Then, by elementary row operations, I can make n-1 more matrices that have zeroes everywhere except a 1 in the second, third, ..., or nth diagonal place. Adding these n matrices together gives the identity, and from the identity I can get everything in Hom(V,V), so the ideal is trivial.

In that case, I was able to think of Hom(V,V) as something more intuitive and easier to deal with (matrices) and could see concrete things to do to it (row reduce, apply elementary row operations to move the "1" down the diagonal, etc.). I have no familiarity with the structure being dealt with in this question, so I don't know how to go about the problem. Is C = [itex]Cl_t(V)[/itex] isomorphic to something nice like an algebra of matrices?
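The Hom(V,V) argument above can be sanity-checked numerically. This is only an illustration of that earlier matrix argument, not of the Clifford case; the helper `unit` and all names are mine. The key identity is that a single nonzero entry M[k, l] lets you rebuild every matrix unit E_ij inside the 2-sided ideal generated by M:

```python
import numpy as np

def unit(n, i, j):
    """Matrix unit E_ij: 1 in position (i, j), zeros elsewhere."""
    E = np.zeros((n, n))
    E[i, j] = 1.0
    return E

n = 3
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))                          # a nonzero element of Hom(V, V)
k, l = np.unravel_index(np.argmax(np.abs(M)), M.shape)   # position of a nonzero entry

# E_ik @ M @ E_lj has the single entry M[k, l] in position (i, j), so dividing
# by M[k, l] recovers E_ij using only operations available inside a 2-sided
# ideal (left/right multiplication and scaling):
E = [[unit(n, i, k) @ M @ unit(n, l, j) / M[k, l] for j in range(n)]
     for i in range(n)]

# Summing the diagonal units E_ii gives the identity, so the ideal is everything.
identity = sum(E[i][i] for i in range(n))
assert np.allclose(identity, np.eye(n))
```

The assertion passing for a random M is exactly the simplicity argument in miniature: any nonzero element generates the identity, hence the whole algebra.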