Vectors and isometries on a manifold

In summary, the conversation discusses the concepts of vectors, coordinate systems, and isometries in the context of General Relativity. A coordinate map is used to assign coordinates to points on a manifold, and a tangent vector is defined using the coordinate basis. The basis vectors depend on the point on the manifold, but the vector itself does not. There is no issue with constructing tangent planes. It is also clarified that 4-velocity should not be confused with basis vectors. The question of what people mean by rotations in GR is left open.
  • #36
davidge said:
If we want to write the component of a rotated vector V at x as ##V^{\mu}_{(rotated)} (x) = V^{\mu}(x) + \xi^{\mu}_{,\nu} V^{\nu}(x)##

That's not what I wrote, and it's not correct. Where are you getting this from?
 
  • Like
Likes davidge
  • #37
PeterDonis said:
The action of the infinitesimal rotation matrix is to take ##V^\mu## to ##V^\mu + d\theta \xi^\mu##.

Perhaps the presence of ##d\theta## is confusing you. If so, rewrite this as ##V^\mu \rightarrow V^\mu + \epsilon \xi^\mu##, where ##\epsilon << 1##. There is no connection assumed between ##\epsilon## and the coordinate ##\theta## or its differential; we find out that the action of the rotation is to move the point through an angle ##\epsilon = d\theta##, without changing ##r##, by applying the action of the rotation. The key thing is that no derivative of ##\xi^\mu## appears in the action of the rotation. I have said this several times now but it doesn't appear to be getting through to you.
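To spell that out with the rotational Killing vector under discussion, ##\xi = \partial / \partial \theta## in polar coordinates (components ##\xi^r = 0##, ##\xi^\theta = 1##):

$$
\begin{bmatrix} r \\ \theta \end{bmatrix} \rightarrow \begin{bmatrix} r \\ \theta \end{bmatrix} + \epsilon \begin{bmatrix} \xi^r \\ \xi^\theta \end{bmatrix} = \begin{bmatrix} r \\ \theta + \epsilon \end{bmatrix} ,
$$

i.e., the point is moved through an angle ##\epsilon = d\theta## with ##r## unchanged, and no derivative of ##\xi^\mu## appears anywhere.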
 
  • Like
Likes davidge
  • #38
PeterDonis said:
The key thing is that no derivative of ##\xi^\mu## appears in the action of the rotation. I have said this several times now but it doesn't appear to be getting through to you.
No, I understood. What I'm asking is whether we can also use the derivative of another (possible) Killing vector to accomplish this task.
davidge said:
Is it possible to find a ##\xi^{\mu}##, other than the one you've found
PeterDonis said:
Where are you getting this from?
Maybe I'm wrong, but I thought that when we perform a change in the components of a vector (keeping the vector the same), one can treat this by saying that the vector changed in ##\mathbb{R}^2## from a point ##x## to a point ##y##, where ##y = x + \epsilon \xi (x)##, ##|\epsilon| << 1## in the infinitesimal case. In the case of a rotation, we require that ##\xi^{\mu} = 0## at ##x## for any ##\mu##, so that the point ##x## is kept invariant.
I don't know if you've already read Nakahara's book or Weinberg's book, but if you had, you would understand what I'm trying to say.
 
  • #39
davidge said:
What I'm asking is whether we can also use the derivative of another (possible) Killing vector to accomplish this task.

Not to my knowledge.

davidge said:
I thought when we perform a change in the components of a vector (keeping the vector the same)

This is treating the rotation as a coordinate transformation, not a mapping of vectors to vectors. Everything I have said so far has been assuming we are treating the rotation as a mapping of vectors to vectors. A coordinate transformation keeping vectors fixed is not the same thing, although there are similarities.

davidge said:
the vector changed in ##\mathbb{R} ^2## from a point ##x## to a point ##y##, where ##y = x + \epsilon \xi (x)##, ##|\epsilon| << 1##

Now it seems like you can't make up your mind whether you want to talk about a coordinate transformation or a mapping of vectors to vectors. I strongly advise you to take a step back and use very precise language to make sure you are saying exactly what you mean.

What is quoted just above describes mapping a vector ##x## to a vector ##y##, holding the coordinate chart fixed. If you want to describe a coordinate transformation, you would say something like: the coordinates of vector ##V^\mu## changed from ##x^\mu## to ##y^\mu = x^\mu + \epsilon \xi^\mu##. Here ##\xi^\mu## is describing a coordinate transformation (in the particular case we have been discussing, it would be one that rotates the coordinates about the origin, i.e., changes ##\theta## but not ##r##, in the opposite sense to the sense that we would view the mapping of vectors to vectors as rotating a vector).
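Schematically, the two readings are

$$
\text{vectors to vectors (chart fixed):} \qquad V^\mu \rightarrow V^\mu + \epsilon \xi^\mu ,
$$

$$
\text{coordinate transformation (points and vectors fixed):} \qquad x^\mu \rightarrow y^\mu = x^\mu + \epsilon \xi^\mu .
$$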
 
  • Like
Likes davidge
  • #40
PeterDonis said:
This is treating the rotation as a coordinate transformation
This wouldn't be a coordinate transformation; for one thing, we are keeping the point fixed, that is to say we are keeping the coordinates the same as before.
PeterDonis said:
What is quoted just above describes mapping a vector ##x## to a vector ##y##
I think you are misunderstanding my notation. Here both ##x## and ##y## denote points in ##\mathbb{R}^2## and not vectors. Also, I noticed that you are treating ##V^{\mu}## as a vector. Actually it is the component of a vector ##V##, i.e., $$V = \sum_\mu V^{\mu}(x) \frac{\partial}{\partial x^{\mu}},$$ where ##V^{\mu}## is allowed to be a function of ##x## at a point ##x## in ##\mathbb{R}^2##.
 
Last edited:
  • #41
davidge said:
we are keeping the point fixed, that is to say we are keeping the coordinates the same as before.

These two are not the same thing. Keeping a particular point fixed (the origin) does not mean keeping the coordinates fixed; you can still rotate the coordinates without changing the origin (which is what the coordinate transformation you are describing does).

davidge said:
Here both ##x## and ##y## denote points in ##\mathbb{R} ^2## and not vectors.

Doesn't matter: there is a one-to-one correspondence between them once you've picked an origin, and any rotation picks an origin (the point that's left invariant).

davidge said:
I noticed that you are treating ##V^{\mu}## as a vector.

Yes; it is common to write that notation when a vector, rather than an individual component, is meant. In many cases (including the one under discussion), both the vector itself and each of its components obey the same equation, so no harm is done by the notation.

None of this changes anything I was saying.
 
  • Like
Likes davidge
  • #42
PeterDonis said:
Keeping a particular point fixed (the origin) does not mean keeping the coordinates fixed
That is because I was visualizing a point as a set of real numbers, e.g., ##x \doteq \begin{Bmatrix}1&0\end{Bmatrix}##. In this case, keeping ##x## fixed means not changing either ##1## or ##0##. Is my interpretation of a point wrong?
 
  • #43
davidge said:
keeping ##x## fixed means not changing either ##1## or ##0##.

But the rotations we are talking about don't keep the point ##(1, 0)## fixed. They only keep the origin ##(0, 0)## fixed. If you view the rotation as mapping points to other points (or vectors to other vectors), then the point ##(1, 0)## gets mapped to some other point. If you view the rotation as a coordinate transformation, then it changes the coordinates of the point ##x## that originally had coordinates ##(1, 0)## to different coordinates.
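As a concrete example with an infinitesimal rotation by ##d\theta## about the origin: on the first view, the point with polar coordinates ##(1, 0)## is mapped to the point with coordinates ##(1, d\theta)##, with the chart untouched; on the second view, that same point stays put, but its ##\theta## coordinate shifts by ##\mp d\theta## (the sign depending on the sense in which the coordinate grid is rotated, as noted in post #39). On either view the origin ##r = 0## is unaffected.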
 
  • Like
Likes davidge
  • #44
PeterDonis said:
If you view the rotation as mapping points to other points (or vectors to other vectors), then the point ##(1, 0)## gets mapped to some other point
PeterDonis said:
If you view the rotation as a coordinate transformation, then it changes the coordinates of the point ##x## that originally had coordinates ##(1, 0)## to different coordinates.
I see. Are you sure about the infinitesimal rotation matrix in post #31? Wouldn't it be (or could it be replaced with) ## \begin{bmatrix}1&-r d\theta\\ \frac{d\theta}{r}&1\end{bmatrix} ## or ## \begin{bmatrix}1&-r d\theta\\ r d\theta&1\end{bmatrix} ## instead of ## \begin{bmatrix}1&0\\ \frac{d\theta}{r}&1\end{bmatrix} ##?
 
Last edited:
  • #45
davidge said:
Are you sure about the infinitesimal rotation matrix in post #31? Wouldn't it be (or could it be replaced with) ##\begin{bmatrix}1&-r d\theta\\ \frac{d\theta}{r}&1\end{bmatrix}## or ##\begin{bmatrix}1&-r d\theta\\ r d\theta&1\end{bmatrix}## instead of ##\begin{bmatrix}1&0\\ \frac{d\theta}{r}&1\end{bmatrix}## ?

Try it and see. A rotation by ##d \theta## should take the vector ##\begin{bmatrix} 1 \\ \theta \end{bmatrix}## to ##\begin{bmatrix} 1 \\ \theta + d \theta \end{bmatrix}##. Do either of the alternatives you suggested do that? (Notice that this is a more general requirement than the one I illustrated explicitly in post #31.)
 
  • Like
Likes davidge
  • #46
PeterDonis said:
Do either of the alternatives you suggested do that?
Yes. Both of them do that at ##r = 1##, including the one you wrote down.
 
  • #47
davidge said:
Both of them do that at ##r = 1##, including the one you wrote down.

Do they do the more general thing that I edited my last post to say?
 
  • Like
Likes davidge
  • #48
PeterDonis said:
Do they do the more general thing that I edited my last post to say?
Indeed, they do not satisfy your condition. :smile: It's always worth trying something more general to check things out...
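Spelling that check out: at ##r = 1## both alternatives reduce to the same matrix, and

$$
\begin{bmatrix} 1 & -d\theta \\ d\theta & 1 \end{bmatrix} \begin{bmatrix} 1 \\ \theta \end{bmatrix} = \begin{bmatrix} 1 - \theta \, d\theta \\ \theta + d\theta \end{bmatrix} \neq \begin{bmatrix} 1 \\ \theta + d\theta \end{bmatrix}
$$

for general ##\theta##: the ##-r\,d\theta## entry contaminates the first component, even though the check at ##\theta = 0## (the case illustrated in post #31) happens to come out right.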
 
  • #49
One more question: does the transformation ## \begin{bmatrix}1\\\theta\end{bmatrix} \rightarrow \begin{bmatrix}1\\\theta +d\theta\end{bmatrix} ## take the same form at all points, or only at ##r = 1##, i.e., only at ##(1, \theta)##?
 
  • #50
davidge said:
does the transformation ##\begin{bmatrix}1\\\theta\end{bmatrix} \rightarrow \begin{bmatrix}1\\\theta +d\theta\end{bmatrix}## take the same form at all points, or only at ##r = 1##,

What do you think the general form for arbitrary ##r## should be? (Hint: rotations don't change the length of vectors, just the direction in which they point.)
 
  • Like
Likes davidge
  • #51
PeterDonis said:
What do you think the general form for arbitrary ##r## should be?
Since it has an ##r## in one of its entries, I would say its form depends on the point in question.
 
  • #52
davidge said:
Since it has an ##r## in one of its entries, I would say its form depends on the point in question.

The value of the components does, but the form does not. The general form is that the rotation maps ##\begin{bmatrix} r \\ \theta \end{bmatrix}## to ##\begin{bmatrix} r \\ \theta + d \theta \end{bmatrix}##. If you work through the math, you will see that this is why the lower left component of the matrix I gave is ##d\theta / r##.
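Explicitly,

$$
\begin{bmatrix} 1 & 0 \\ \frac{d\theta}{r} & 1 \end{bmatrix} \begin{bmatrix} r \\ \theta \end{bmatrix} = \begin{bmatrix} r \\ \frac{d\theta}{r} \, r + \theta \end{bmatrix} = \begin{bmatrix} r \\ \theta + d\theta \end{bmatrix} ,
$$

so the ##1/r## in the lower-left entry is exactly what is needed for the angular shift to come out as ##d\theta## regardless of ##r##.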
 
  • Like
Likes davidge
  • #53
PeterDonis said:
The value of the components does, but the form does not
PeterDonis said:
The general form is that the rotation maps ##\begin{bmatrix} r \\ \theta \end{bmatrix}## to ##\begin{bmatrix} r \\ \theta + d\theta \end{bmatrix}##
Ok. But will this condition hold for ##r = 0##? I've tried it, and I found that at ##r = 0## we can't transform the second component from ##\theta## to ##\theta + d\theta##. Also, as ##r## tends to zero the derivatives of the Killing vector tend to infinity.
 
  • #54
davidge said:
will this condition hold for ##r = 0##?

No, and that's ok, because the rotation leaves the origin ##r = 0## fixed, and the coordinate ##\theta## is singular there anyway, so "rotating" from ##\theta## to ##\theta + d\theta## is meaningless.

davidge said:
as ##r## tends to zero the derivatives of the Killing vector tend to infinity

Yes, that's true. So what? As I've said a number of times now, the derivatives of the KVF have nothing to do with generating rotations.
 
  • Like
Likes davidge
  • #55
Btw, if the rotation matrix being undefined at ##r = 0## bothers you, you can redo the analysis in Cartesian coordinates, which are nonsingular at the origin. In those coordinates you can show explicitly that the rotation matrix does nothing to the origin ##x = 0##, ##y = 0##.
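For example, using the standard infinitesimal rotation matrix in Cartesian coordinates,

$$
\begin{bmatrix} 1 & -d\theta \\ d\theta & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x - y \, d\theta \\ y + x \, d\theta \end{bmatrix} ,
$$

which returns ##\begin{bmatrix} 0 \\ 0 \end{bmatrix}## when ##x = y = 0##: the origin stays fixed, and no ##1/r## appears anywhere.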
 
  • Like
Likes davidge
  • #56
PeterDonis said:
So what? As I've said a number of times now, the derivatives of the KVF have nothing to do with generating rotations
Because I found that, at least at ##r = 0##, we can find vectors ##\xi^{\rho}## that leave the point invariant and obey Killing's equation:
##\xi^{r} = \xi_{r} = \xi^{\theta} = \xi_{\theta} = 0##, ##\partial_{\theta} \xi^{r} = 0##, ##\partial_{r} \xi^{\theta} = \frac{1}{r}##, and so

##V'^{\theta} = V^{\theta} + d{\theta} (\partial_{r} \xi^{\theta})V^{r} = \theta + d{\theta}##,
##V'^{r} = V^{r} + d{\theta} (\partial_{\theta} \xi^{r})V^{\theta} = V^{r} = r##.

These are not the Killing Vectors you wrote down in a previous post, because those were obtained from metric relations.

PeterDonis said:
you can redo the analysis in Cartesian coordinates
I will
 
Last edited:
  • #57
davidge said:
I found that, at least at ##r = 0##, we can find vectors ##\xi^{\rho}## that leave the point invariant and obey Killing's equation

This doesn't make sense; ##r = 0## is just one point, and the vector ##\xi## vanishes at that point; but if you are only taking into account one point, derivatives are meaningless (you need at least a neighborhood). And at the point ##r = 0## it is also meaningless to talk about "rotating" ##\theta## to ##\theta + d\theta##, since the coordinate ##\theta## is singular there.

Why are you persisting with trying to use derivatives of Killing vectors instead of Killing vectors themselves? That is not done anywhere in the literature that I'm aware of. If it's just your personal thing, then (a) it's not going to work, and (b) PF has rules about personal theories, which you should review.
 
  • Like
Likes davidge
  • #58
davidge said:
These are not the Killing Vectors you wrote down

Not quite, as you wrote it, because you used partial derivatives instead of covariant derivatives. But if you alter your definition to use covariant derivatives, you will see that, if we consider a neighborhood of ##r = 0## instead of just that point (so that derivatives are meaningful), the components of ##\xi## at ##r = 0## plus the derivatives you gave there define a vector field ##\xi## on the neighborhood that is identical to the one I wrote down earlier.

(Also, your definition using partial derivatives is not well-defined at ##r = 0##, since ##1 / r## is undefined there; so it doesn't work anyway. Whereas the definition in terms of covariant derivatives works fine at ##r = 0##.)
 
  • Like
Likes davidge
  • #59
PeterDonis said:
Why are you persisting with trying to use derivatives of Killing vectors instead of Killing vectors themselves? That is not done anywhere in the literature
As I said before, I'm following what I learned from Weinberg's book and from two or three notes I found on the web. (Also from a lecture I attended last month at the university.) Maybe I misunderstood what I've read?
PeterDonis said:
at the point ##r = 0## it is also meaningless to talk about "rotating" ##\theta## to ##\theta + d\theta##, since the coordinate ##\theta## is singular there
I don't see why it is singular there.

PeterDonis said:
you used partial derivatives instead of covariant derivatives
I did not show my complete derivation in post #56, but, as I mentioned, I imposed the condition that their partial derivatives are equal to the covariant ones.
PeterDonis said:
Whereas the definition in terms of covariant derivatives works fine at ##r = 0##.
I don't think so. Consult your post on the derivation of the covariant derivatives to see that at ##r = 0##, ##\nabla_{r}\xi^{\theta}## is undefined.

PeterDonis said:
if you are only taking into account one point, derivatives are meaningless (you need at least a neighborhood)
I agree.
 
Last edited:
  • #60
davidge said:
Maybe I misunderstood what I've read?

I think you must have. Unfortunately I don't have Weinberg's book to check the relevant sections for myself.

davidge said:
I don't see why it is singular there.

The metric is ##ds^2 = dr^2 + r^2 d\theta^2##. This metric has no inverse at ##r = 0## (its determinant is zero), because ##g_{\theta \theta} = 0## there. Saying that ##\theta## is singular at ##r = 0## is a (somewhat sloppy) shorthand for that.
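Concretely,

$$
\det g = \det \begin{bmatrix} 1 & 0 \\ 0 & r^2 \end{bmatrix} = r^2 ,
$$

which vanishes at ##r = 0##, so the inverse component ##g^{\theta\theta} = 1/r^2## needed to raise ##\theta## indices is undefined there.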

davidge said:
I imposed the condition that their partial derivatives are equal to the covariant ones.

You can't "impose" this condition; it is either satisfied by the coordinates you've chosen or it isn't. For polar coordinates, it isn't.

davidge said:
Consult your post on the derivation of the covariant derivatives to see that at ##r = 0##, ##\nabla_{r}\xi^{\theta}## is undefined.

Yes, but that doesn't stop ##\xi^\theta## itself from being well-defined (and nonzero) at ##r = 0## by my definition; it's just ##\xi^\theta = 1##, like it is everywhere else (and ##\partial_r \xi^\theta = 0## everywhere, including ##r = 0##). What vanishes according to my definition at ##r = 0## is the norm of ##\xi##, i.e., ##\sqrt{g_{\mu \nu} \xi^\mu \xi^\nu}##. From the metric you will see that this norm is just ##r##. Whereas by your definition, you tried to make ##\xi^\theta## itself vanish at ##r = 0##, which doesn't work.
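Worked out, with ##\xi^r = 0## and ##\xi^\theta = 1##:

$$
\sqrt{g_{\mu \nu} \xi^\mu \xi^\nu} = \sqrt{g_{\theta \theta} \left( \xi^\theta \right)^2} = \sqrt{r^2 \cdot 1} = r ,
$$

so the norm goes to zero at the origin even though the component ##\xi^\theta = 1## does not.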
 
  • Like
Likes davidge
  • #61
PeterDonis said:
You can't "impose" this condition
But since ##\nabla_{\mu}\xi^{\nu} = \partial_{\mu}\xi^{\nu} + \xi^{\sigma} \Gamma^{\nu}_{\sigma \mu}##, it follows that if I require ##\xi^{\sigma} = 0##, then ##\nabla_{\mu}\xi^{\nu} = \partial_{\mu}\xi^{\nu}##. Is that not so?

PeterDonis said:
This metric has no inverse at ##r = 0##
Oh yea. I forgot about it.

PeterDonis said:
Yes, but that doesn't stop ##\xi^\theta## itself from being well-defined (and nonzero) at ##r = 0## by my definition
The thing is that I'm using a different definition. What would my different definition imply for the action of the rotation?
 
  • #62
davidge said:
"A metric space is said to be isotropic about a given point ##X## if there exist infinitesimal isometries that leave the point ##X## fixed, so that ##\xi ^{\lambda}(X) = 0##, and for which the first derivatives ##\xi_{\lambda ; \ \nu}(X)## take all possible values [...]. In particular, in N dimensions we can choose a set of N(N-1)/2 Killing vectors..."

"As an example of a maximally symmetric space, consider an N-dimensional flat space, with vanishing curvature tensor. [...]
We can choose a set of N(N+1)/2 Killing vectors as follows:

$$\xi_{\mu}^{(\nu)}(X) = \delta_{\mu}^{\nu}$$
$$\xi_{\mu}^{(\nu \lambda)}(X) = \delta_{\mu}^{\nu} x^{\lambda} - \delta_{\mu}^{\lambda} x^{\nu}$$

...

The N vectors ##\xi_{\mu}^{(\nu)}(X)## represent translations, whereas the N(N-1)/2 vectors ##\xi_{\mu}^{(\nu \lambda)}(X)## represent infinitesimal rotations [...]"

Let me try and unpack this. We have been discussing the 2-dimensional flat space, i.e., the plane. It has 3 Killing vectors total: two translations and one rotation. The two translations in Cartesian coordinates are just ##\partial / \partial x## and ##\partial / \partial y##, i.e., the Cartesian basis vectors. In the notation of the above quote these would be ##\xi_\mu^{(\nu)} (X)##. The rotation is the Killing vector we've been discussing; it is ##\xi_{\mu}^{(\nu \lambda)}(X)## in the notation of the quote above, and is ##\partial / \partial \theta## in polar coordinates, and its components in Cartesian coordinates I'll leave to you (but the second formula above should give a good hint).

Now, as far as isotropy is concerned, the only Killing vector involved is the rotation Killing vector (the translation Killing vectors have to do with homogeneity, which is a different symmetry property). So let's focus on that. We have seen that it satisfies ##\xi = 0## at the point ##X##, i.e., at the origin (note: I am assuming that Weinberg refers to the norm being zero, not components--you have to read his notation very carefully). The part I'm not sure about is the thing about the first derivatives taking "all possible values". The derivatives in question are covariant derivatives, as shown by the semicolon in the quoted formula; but I don't understand the "all possible values", since the values of all the possible covariant derivatives are as given by the formulas I posted previously, there is no wiggle room for them to have "all possible" values.

I notice that you left out text after the "all possible values"; could you possibly post it or at least a further excerpt, for more context?
 
  • Like
Likes davidge
  • #63
davidge said:
if I require ##\xi^{\sigma} = 0##

You can't "require" ##\xi^\sigma = 0## (or any other value). The Killing vector is what it is. You can't make it be whatever you want.

davidge said:
I'm using a different definition.

You can't help yourself to whatever definition you want. Killing vectors are a geometric property of the manifold; you don't pick a definition for them, you find out what they are by looking at the geometry.
 
  • Like
Likes davidge
  • #64
PeterDonis said:
and its components in Cartesian coordinates I'll leave to you
I will work this out

PeterDonis said:
I am assuming that Weinberg refers to the norm being zero, not components--you have to read his notation very carefully
I don't think that's what he means

PeterDonis said:
The part I'm not sure about is the thing about the first derivatives taking "all possible values".
There are pics below from that part of the book

PeterDonis said:
You can't "require" ξσ=0ξσ=0\xi^\sigma = 0 (or any other value). The Killing vector is what it is. You can't make it be whatever you want.
It seems he was trying to define them by his own choice.

[Attached: two scanned images of the relevant pages from Weinberg's book.]
 
  • #65
davidge said:
There are pics below from that part of the book

These are helpful. First a general comment: Weinberg's books tend to assume a lot of background knowledge, so they aren't always suitable as introductory texts. Also, they tend to assume a physicist's level of rigor rather than a mathematician's, which means issues like coordinate singularities are glossed over if they don't affect the physics (or in this case the geometry).

Now some more specific comments:

(1) I didn't say you were confusing a rotation with a coordinate transformation; I said you seemed confused about which interpretation of rotations you were using: the interpretation as a mapping from vectors to vectors (or points to points), holding the coordinates constant, or the interpretation as a coordinate transformation holding the underlying manifold and its points and vectors constant. Weinberg is clearly using the second interpretation, which is perfectly valid, but if you're going to use it you have to use it properly. On this interpretation, the only thing that changes under a rotation is the coordinates: you can't think of moving points and vectors around, you have to think of moving the coordinate grid lines around while holding all points and vectors constant. (I actually prefer the other interpretation because to me it seems more natural to think of moving points and vectors around and keeping the coordinate grid lines fixed; but that's a matter of personal preference.)

(2) Weinberg is not defining the Killing vectors "by his own choice". He is defining Killing vectors the standard way; a vector field that satisfies Killing's equation (his equation 13.1.5) is a Killing vector field. (And he is saying, which is true, that any Killing vector field defines an isometry.) He is then saying that if you have a Killing vector field that satisfies some additional conditions, then the metric space on which it exists is said to be "isotropic". But you can't just handwave such a Killing vector field into existence; you have to check to see if one exists given the geometry of the metric space. There are metric spaces on which no such Killing vector field exists; such spaces are simply not isotropic.

(3) As far as his statements ##\xi^\lambda(X) = 0## and ##\xi_{\lambda ; \ \nu}(X)## taking all possible values, I'll defer comment on them until you've worked things out in Cartesian coordinates. I don't think polar coordinates will work for evaluating these statements since they are singular at the point ##X## (the origin ##r = 0##).
 
  • Like
Likes davidge
  • #66
davidge said:
Can you show explicitly the form that ##\nabla_{\theta} \xi^{\theta}##, ##\nabla_{r} \xi^{\theta}##, ##\nabla_{\theta} \xi^{r}##, and ##\nabla_{r} \xi^{r}## take

I answered this as it was asked, but since we have mentioned Killing's equation, I should clarify that for that equation these are not quite the relevant covariant derivatives, because the index on ##\xi## is upper instead of lower. Killing's equation in polar coordinates (the only non-trivial component) is

$$
\nabla_r \xi_\theta + \nabla_\theta \xi_r = 0
$$

Lowering the index makes a difference, because ##\xi_\theta = g_{\theta \theta} \xi^\theta = r^2## depends on ##r## (whereas ##\xi^\theta## with the upper index does not). So Killing's equation becomes (giving only the nonzero terms)

$$
\partial_r \xi_\theta - 2 \Gamma^\theta{}_{r \theta} \xi_\theta = 2r - 2 \frac{1}{r} r^2 = 0
$$

(Note the minus sign in front of the ##\Gamma## term because we are now "correcting" a lower index instead of an upper index in the covariant derivative.)
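For anyone who wants to double-check the algebra, here is a short symbolic sketch (I am assuming sympy here purely for convenience; any CAS would do) that builds the Christoffel symbols of ##ds^2 = dr^2 + r^2 d\theta^2## and confirms that ##\xi_r = 0##, ##\xi_\theta = r^2## satisfies Killing's equation component by component:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
coords = [r, theta]

# Flat 2D metric in polar coordinates: ds^2 = dr^2 + r^2 dtheta^2
g = sp.Matrix([[1, 0], [0, r**2]])
ginv = g.inv()

# Christoffel symbols Gamma^a_{b c} of the Levi-Civita connection
def Gamma(a, b, c):
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], coords[c])
                      + sp.diff(g[d, c], coords[b])
                      - sp.diff(g[b, c], coords[d]))
        for d in range(2)
    )

# Lower-index components of the rotational Killing vector xi = d/dtheta:
# xi_r = 0, xi_theta = g_{theta theta} * xi^theta = r^2
xi = [0, r**2]

# Covariant derivative of a covector: nabla_b xi_a = partial_b xi_a - Gamma^c_{a b} xi_c
def nabla(b, a):
    return sp.diff(xi[a], coords[b]) - sum(Gamma(c, a, b) * xi[c] for c in range(2))

# Killing's equation: nabla_a xi_b + nabla_b xi_a = 0 for every index pair
for a in range(2):
    for b in range(2):
        assert sp.simplify(nabla(a, b) + nabla(b, a)) == 0

print("Killing's equation holds for xi = d/dtheta in polar coordinates")
```

The ##(r, \theta)## component of the loop reproduces the ##2r - 2r = 0## cancellation written out above.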
 
  • Like
Likes davidge
  • #67
PeterDonis said:
Weinberg's books tend to assume a lot of background knowledge
PeterDonis said:
they tend to assume a physicist's level of rigor rather than a mathematicians
I noticed it

PeterDonis said:
I answered this as it was asked, but since we have mentioned Killing's equation, I should clarify that for that equation these are not quite the relevant covariant derivatives, because the index on ##\xi## is upper instead of lower
No problem. I realized that that was for vectors, not co-vectors.

PeterDonis said:
I didn't say you were confusing a rotation with a coordinate transformation
PeterDonis said:
Weinberg is clearly using the second interpretation
PeterDonis said:
Weinberg is not defining the Killing vectors "by his own choice"
That is ok
PeterDonis said:
He is then saying that if you have a Killing vector field that satisfies some additional conditions
PeterDonis said:
you can't just handwave such a Killing vector field into existence
I did not realize this fact before.

PeterDonis said:
As far as his statements ##\xi^\lambda(X) = 0## and ##\xi_{\lambda ; \ \nu}(X)## taking all possible values, I'll defer comment on them until you've worked things out in Cartesian coordinates
A tentative attempt:

The two translational Killing vectors could be ##\xi^{1}## and ##\xi^{2}##.
For example, let ##x## and ##y## be two points on ##\mathbb{R}^2##, related by ##y = x +\epsilon \xi##, for arbitrary ##\epsilon \in \mathbb{R}##.
##\xi^{1}(x) = \xi^{2}(x) = 1, x \doteq (0,0) \Longrightarrow y \doteq (x^{1} + \epsilon\xi^{1},x^{2} + \epsilon\xi^{2}) = \epsilon(1,1)##.

Following your work here on the polar coordinates case,

if ##V## is a vector with components ## \begin{bmatrix}1\\0\end{bmatrix}## at ##x \doteq (0,0)##, the action of the Killing vector is to change ## \begin{bmatrix}1\\0\end{bmatrix}## to ## \begin{bmatrix}1\\0\end{bmatrix} + \epsilon\begin{bmatrix}1\\1\end{bmatrix}##, remembering that in 2-d, Cartesian coordinates, the Killing vectors are ##\partial/\partial x^1## and ##\partial/\partial x^2## for a point ##x## with coordinates ##(x^1,x^2)##.

The problem here is that it seems we don't have N(N+1)/2 = 3 KV, but only 2. One is missing, because the derivatives all vanish for this metric.
 
Last edited:
  • #68
davidge said:
let ##x## and ##y## be two points on ##\mathbb{R}^2##, related by ##y = x +\epsilon \xi##,

You still appear to be confused about interpretations. If you want to use Weinberg's interpretation, then translations are coordinate transformations just like rotations. (As far as I can tell, his discussion of isometries as coordinate transformations and Killing vectors as generating infinitesimal coordinate transformations is not limited to rotations.) So you should be saying: let ##x## and ##y## be the coordinates of a chosen point in ##\mathbb{R}^2## in two different charts, related by ##y = x + \epsilon \xi##.

In this case, the two translation Killing vectors, in Cartesian coordinates, are ##\partial / \partial x^1## and ##\partial / \partial x^2## (I won't use ##x## and ##y## since you used them to label the coordinate 2-tuples as a whole). In column vector notation these are ##\begin{bmatrix} 1 \\ 0 \end{bmatrix}## and ##\begin{bmatrix} 0 \\ 1 \end{bmatrix}##.

davidge said:
##\xi^{1}(x) = \xi^{2}(x) = 1##

No. There are two Killing vectors, not two components of one Killing vector. See above.

The correct actions of the two translation Killing vectors, using ##x = \begin{bmatrix} a \\ b \end{bmatrix}## and ##y = \begin{bmatrix} a' \\ b' \end{bmatrix}## for the two coordinate 2-tuples in column vector notation, are:

$$
\begin{bmatrix} a' \\ b' \end{bmatrix} = \begin{bmatrix} a \\ b \end{bmatrix} + \epsilon \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} a + \epsilon \\ b \end{bmatrix}
$$

$$
\begin{bmatrix} a' \\ b' \end{bmatrix} = \begin{bmatrix} a \\ b \end{bmatrix} + \epsilon \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} a \\ b + \epsilon \end{bmatrix}
$$

As you can see, these are just infinitesimal translations in one of the two basis directions.

davidge said:
The problem here is that it seems we don't have N(N+1)/2 = 3 KV, but only 2.

You've forgotten the rotation Killing vector. It's still there. Have you tried figuring out what it looks like in Cartesian coordinates?
 
  • Like
Likes davidge
  • #69
PeterDonis said:
You still appear to be confused about interpretations
PeterDonis said:
So you should be saying
Yes. I'm sorry for not using the correct language in my last post. I get confused by the English language itself.

PeterDonis said:
There are two Killing vectors, not two components of one Killing vector
Thanks for clarifying this for me. I thought there was only one vector.

PeterDonis said:
As you can see, these are just infinitesimal translations in one of the two basis directions.
Yea

PeterDonis said:
You've forgotten the rotation Killing vector. It's still there. Have you tried figuring out what it looks like in Cartesian coordinates?
Yes, I've tried, but all covariant derivatives vanish and I was expecting to find it from those derivatives. Also, in polar coordinates, the effect of applying the rotation KV to a general vector seems to be the same as applying the translation KV in your previous post. Is that due to the difference between Cartesian and polar coordinates?
 
  • #70
davidge said:
all covariant derivatives vanish

All covariant derivatives of what? All covariant derivatives of the basis vectors vanish, yes. But there is nothing that says a Killing vector has to be a coordinate basis vector.

If you're having trouble guessing an answer, just write down the most general possible vector in Cartesian coordinates and plug it into Killing's equation, and see what conditions its components have to satisfy for the vector to satisfy Killing's equation. In Cartesian coordinates this is actually very simple because all of the Christoffel symbols are zero, so covariant derivatives are just partial derivatives. Also there is only one component of Killing's equation that is not trivial in 2 dimensions.
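For reference, here is a sketch of that computation (standard, assuming nothing beyond flat Cartesian coordinates, where the Christoffel symbols vanish). The diagonal components of Killing's equation give ##\partial_x \xi_x = 0## and ##\partial_y \xi_y = 0## straight away, and the one remaining off-diagonal component is

$$
\partial_x \xi_y + \partial_y \xi_x = 0 .
$$

The first two conditions say ##\xi_x = f(y)## and ##\xi_y = h(x)##; the third then forces ##f'(y) = -h'(x)##, which can only hold if both sides equal a constant, say ##-c## and ##c##. The general solution is

$$
\xi = a \, \frac{\partial}{\partial x} + b \, \frac{\partial}{\partial y} + c \left( -y \, \frac{\partial}{\partial x} + x \, \frac{\partial}{\partial y} \right) ,
$$

i.e., the two translations plus the single rotation Killing vector ##-y \, \partial_x + x \, \partial_y##, which vanishes at the origin and agrees (up to an overall sign convention) with Weinberg's ##\xi_{\mu}^{(\nu \lambda)} = \delta_{\mu}^{\nu} x^{\lambda} - \delta_{\mu}^{\lambda} x^{\nu}##.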
 
  • Like
Likes davidge