How can we represent tensors in the conformal model?

In summary: No, that's not correct. I wanted to express the transformation as a second-order (D+1)-dimensional tensor so that all the necessary operations could be contained in a single tensor.
  • #1
espen180
Let's say I have two coordinate systems in first-rank tensor form:
[tex]x^{\mu}=\left[\begin{matrix} x \\ y \\ z \end{matrix}\right][/tex] and [tex]x^{\mu^\prime}=\left[\begin{matrix} x^\prime \\ y^\prime \\ z^\prime \end{matrix}\right][/tex]

and I want to translate the origin of [tex]x^{\mu^\prime}[/tex] to the point (a,b,c) in [tex]x^{\mu}[/tex]. I can do this by using a second-rank transformation tensor [tex]{T^{\mu^\prime}}_{\mu}[/tex] such that [tex]x^{\mu^\prime}={T^{\mu^\prime}}_{\mu}x^{\mu}[/tex].

Would I be "cheating" if I said that [tex]{T^{\mu^\prime}}_{\mu}[/tex] can be written as [tex]{T^{\mu^\prime}}_{\mu}=\left[\begin{matrix} 1+\frac{a}{x} & 0 & 0 \\ 0 & 1+\frac{b}{y} & 0 \\ 0 & 0 & 1+\frac{c}{z}\end{matrix}\right][/tex]? Because sure enough, if you carry out the multiplication, you find that [tex]x^\prime=x+a[/tex] and so on, but since [tex]{T^{\mu^\prime}}_{\mu}[/tex] includes the original coordinates, it is not independent of [tex]x^{\mu}[/tex].

Is it therefore necessary to represent the three-vector above as a four-vector [tex]x^{\mu}=[x,y,z,1]^T[/tex] and
[tex]{T^{\mu^\prime}}_{\mu}=\left[\begin{matrix} 1 & 0 & 0 & a \\ 0 & 1 & 0 & b \\ 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1\end{matrix}\right][/tex]
in order for the transformation to be "rigorous" enough?

Thanks for any help.
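For concreteness, here is a minimal numerical sketch (Python with NumPy; my own check, not part of the original question) that carries out the 4×4 multiplication above and confirms that it sends [x, y, z, 1] to [x+a, y+b, z+c, 1]:

[code]
import numpy as np

a, b, c = 2.0, -1.0, 0.5   # example translation offsets

# The 4x4 homogeneous translation matrix from the question
T = np.array([
    [1.0, 0.0, 0.0, a],
    [0.0, 1.0, 0.0, b],
    [0.0, 0.0, 1.0, c],
    [0.0, 0.0, 0.0, 1.0],
])

# A point (x, y, z) written as the four-vector [x, y, z, 1]
x = np.array([3.0, 4.0, 5.0, 1.0])

print(T @ x)                                  # [5.  3.  5.5 1. ]  i.e. (x+a, y+b, z+c, 1)
print(T @ np.array([0.0, 0.0, 0.0, 1.0]))     # [ 2.  -1.   0.5  1. ]  the origin goes to (a, b, c, 1)
[/code]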
 
  • #2
You want to translate the origin to some point. What about the rest of the points, what do you want to do to them?
 
  • #3
In this transformation, I want all the points of [tex]x^{\mu^\prime}[/tex] to undergo the same translation as the origin, such that the result is an orthogonal coordinate system with "grid-increments" the same size as in [tex]x^{\mu}[/tex].
 
  • #4
A translation is simply [tex]x^{\mu^\prime}=x^{\mu}+a^{\mu}[/tex]. A translation cannot be done by matrix multiplication, because matrix multiplication always takes the origin to itself.
 
  • #5
I don't understand. Which origin are you talking about? What the multiplication does is relate one set of coordinates to another, right?
 
  • #6
Only if the coordinates are related by a linear transformation. A translation is not a linear transformation, and therefore it cannot be represented by matrix multiplication. It is easy to see that matrix multiplication always sends (0, 0, 0) to (0, 0, 0).
 
  • #7
Unless I am mistaken, I would say that [tex]\left[\begin{matrix} 1 & 0 & 0 & a \\ 0 & 1 & 0 & b \\ 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1\end{matrix}\right]\left[\begin{matrix} x \\ y \\ z \\ 1 \end{matrix}\right][/tex] sends (0,0,0) to (a,b,c).

Then why doesn't this multiplication work?
 
  • #8
It sends (0, 0, 0, 1) to (a, b, c, 1).

But yes, what you did is ok as long as you interpret (x, y, z, 1) as the point (x, y, z). Basically, you are using a linear transformation of a 4-dimensional space to generate a translation of a 3-dimensional subset. But why would you do that when you can just add the vector aμ to xμ?
 
  • #9
That works just fine. I think dx might be confused. Translation can be represented by matrix multiplication by extending your space by one more dimension as you have done above. In effect, a translation in D dimensions is just a projection of a linear transformation in D+1 dimensions onto a D-dimensional subspace (the hyperplane w=1, in your case).

Note, however, that your transformation matrix is no longer a D-dimensional tensor.
 
  • #10
dx said:
It sends (0, 0, 0, 1) to (a, b, c, 1).

But yes, what you did is ok as long as you interpret (x, y, z, 1) as the point (x, y, z). Basically, you are using a linear transformation of a 4-dimensional space to generate a translation of a 3-dimensional subset. But why would you do that when you can just add the vector aμ to xμ?

You might do this if you wanted to compose several operations and then maybe invert them.
 
  • #11
Does the addition of aμ to xμ prevent you from doing further operations on the vector or inverting them?
 
  • #12
Not really, but by expressing the transformation as a second-order (D+1)-dimensional tensor it is possible to compress all the necessary operations into a single tensor [tex]{T^{\mu^\prime}}_{\mu}[/tex]. This is of course also possible after doing the vector addition, which leaves one to decide which of
[tex]x^{\mu^\prime}={T^{\mu^\prime}}_{\mu}x^{\mu}[/tex] (1)
and
[tex]x^{\mu^\prime}={T^{\mu^\prime}}_{\mu}\left(x^{\mu}+a^{\mu}\right)[/tex] (2)
he/she prefers.

The difference in [tex]{T^{\mu^\prime}}_{\mu}[/tex] between the two is that in (2) it is a D-dimensional tensor, while in (1) it is a (D+1)-dimensional tensor.
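As an illustration of option (1), here is a hedged sketch (Python/NumPy, my own example rather than anything from the thread) that compresses a rotation followed by a translation into a single 4×4 matrix and inverts the whole chain with one matrix inverse:

[code]
import numpy as np

def translation(a):
    """4x4 homogeneous matrix translating by the 3-vector a."""
    T = np.eye(4)
    T[:3, 3] = a
    return T

def rotation_z(theta):
    """4x4 homogeneous matrix rotating by theta about the z-axis."""
    R = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

# All the necessary operations compressed into one matrix:
# first rotate by 90 degrees about z, then translate by (2, -1, 0.5).
M = translation([2.0, -1.0, 0.5]) @ rotation_z(np.pi / 2)

p = np.array([1.0, 0.0, 0.0, 1.0])      # the point (1, 0, 0) in homogeneous form
q = M @ p                                # both operations in a single multiplication
print(np.round(q[:3], 6))                # [2.  0.  0.5]

# The inverse of the whole chain is just the matrix inverse
print(np.allclose(np.linalg.inv(M) @ q, p))   # True
[/code]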
 
  • #13
It's not a question of preference, but of utility. A translation is a translation, and the simplest way to translate a point (x, y, z) is to add a vector (a, b, c) to it. Or you could do something more complicated, like you did, but I still don't see why you would want to do that. What it looks like to me from your OP is that you think that any transformation between two coordinate systems must be a matrix multiplication, which is not true. Based on this misconception, you tried to get the translation into matrix form. Is that correct, or did you have some other reason you wanted it in that form?
 
Last edited:
  • #14
I don't believe that a transformation from one coordinate system to another must be a matrix multiplication, but I believe that any such transformation may be expressed as one. Is that correct?

If not, to what extent is it true?
 
  • #15
A general smooth change of coordinates on a patch of space (or some other manifold) is of the form

[tex] x^{\mu}(P) = f^{\mu}(y^1(P), y^2(P), ..., y^n(P)) [/tex]​

where the [tex] f^{\mu} [/tex] are smooth functions. A complete description of the transformation requires you to specify these functions. At each point, the coordinate system gives you a basis for the tangent and cotangent spaces. The components of a vector in a given coordinate system refer to this basis. The components of the same vector at a point A in the two coordinate systems are related in the following way: [tex] V'^{\mu} = T^{\mu}_{\nu}V^{\nu} [/tex], where

[tex]T^{\mu}_{\nu} = \frac{\partial y^{\mu}}{\partial x^{\nu}}(A) [/tex]​

So locally, a general smooth change of coordinates is a linear transformation of the coordinates on the tangent space. This is the only way linear transformations naturally arise from general coordinate transformations: they describe the local effect on the tangent and cotangent spaces and more general tensors constructed from them.
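To make this concrete, here is a small sketch (Python/NumPy, my own example and not part of the post above) with the "old" coordinates taken to be Cartesian (x1, x2) and the "new" coordinates polar (r, θ): the Jacobian evaluated at a point A maps the Cartesian components of a vector at A to its polar components, and a finite-difference check gives the same numbers:

[code]
import numpy as np

def polar(p):
    """New coordinates (r, theta) of the point with Cartesian coordinates p."""
    return np.array([np.hypot(p[0], p[1]), np.arctan2(p[1], p[0])])

def jacobian(p):
    """T^mu_nu = d(new coordinates)/d(old coordinates), evaluated at p."""
    x1, x2 = p
    r2 = x1**2 + x2**2
    r = np.sqrt(r2)
    return np.array([
        [ x1 / r,  x2 / r ],    # dr/dx1,      dr/dx2
        [-x2 / r2, x1 / r2],    # dtheta/dx1,  dtheta/dx2
    ])

A = np.array([1.0, 1.0])        # the point at which the components are compared
V = np.array([1.0, 0.0])        # Cartesian components of a vector at A

print(jacobian(A) @ V)          # [ 0.70710678 -0.5       ]  i.e. (V^r, V^theta)

# Finite-difference check: displace A slightly along V and watch (r, theta) change
eps = 1e-6
print((polar(A + eps * V) - polar(A)) / eps)   # again approximately [ 0.7071 -0.5 ]
[/code]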
 
Last edited:
  • #16
From what I have read about tangent spaces, I understand that they are (D-1)-dimensional manifolds (in 2 and 3 dimensions: lines and planes) which are tangential to a given function at a given point, but that is as far as my understanding goes.

I assume [tex]x^{\mu}[/tex] is a first-order tensor representing a coordinate system. What is [tex]y[/tex] in your equation?
 
  • #17
x and y are two coordinate systems, the coordinates in the systems being (x1, x2, ..., xn) and (y1, y2, ..., yn) respectively. They are not tensors. The tangent space to a surface at a point is the set of tangent vectors at that point, and has the same dimension as the surface.
 
  • #18
Okay. Here is my interpretation of your equation. Please tell me if it is correct or not:

P is a point in n-dimensional space. [tex]x^{\mu}(P)[/tex] is the point given in the coordinates of [tex]x^{\mu}[/tex].

The function [tex]f^{\mu}[/tex] relates each individual coordinate of [tex]x^{\mu}[/tex] with the coordinates of [tex]y^{\mu}[/tex].

A smooth function is a function which has derivatives of all orders. I think this means that [tex]f^{(n)}[/tex] is never 0.
 
Last edited:
  • #19
All correct except that last line. Derivatives of all orders must exist, but they can be zero.
 
  • #20
Okay. Still one question remains.

Why is the tangent space important in this context?
 
  • #21
Because we want to understand how linear transformations arise from non-linear transformations. The tangent space is, in a sense, the local linear approximation of a space, and a non-linear change of coordinates on the space induces a linear change on the tangent spaces.
 
  • #22
dx said:
[tex]T^{\mu}_{\nu} = \frac{\partial y^{\mu}}{\partial x^{\nu}}(A) [/tex]​

What does this notation mean? You differentiate the old coordinates with respect to the new, and multiply by A? What is A, and how does this differentiation work?
 
  • #23
A is a point in the space. The notation just means that I'm evaluating the derivative at A.
 
  • #24
And you sum the derivatives over all the superscript values, right?
 
  • #25
No sum. The convention is to sum over repeated indices, one of which appears upstairs and one of which appears downstairs.
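If it helps to see the convention spelled out, the relation [tex] V'^{\mu} = T^{\mu}_{\nu}V^{\nu} [/tex] means "sum over the repeated index ν only". A tiny sketch (Python/NumPy, added here purely as an illustration):

[code]
import numpy as np

T = np.arange(9.0).reshape(3, 3)   # stand-in for T^mu_nu
V = np.array([1.0, 2.0, 3.0])      # stand-in for V^nu

# V'^mu = T^mu_nu V^nu: einsum sums over the repeated index n (nu) only
V_prime = np.einsum('mn,n->m', T, V)
print(np.allclose(V_prime, T @ V))  # True -- the same as an ordinary matrix-vector product
[/code]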
 
  • #26
Hello,
I don't know if this can be useful at all, but please remember that there is also a way to express translations as orthogonal transformations. This is accomplished in the Conformal Model of Geometric Algebra. However, the price to pay is that your original 3D space must be embedded into a 5D space with a Minkowski metric. Still, this is, as far as I know, the only way to model all the Euclidean transformations by linear transformations (matrix multiplication).
 
  • #27
I googled the Minkowski metric and found this for the Minkowski tensor:
[image of the Minkowski metric tensor [tex]\eta_{\mu\nu}[/tex]]


It said to take c=1. I have two questions:

1. How will it look if we omit the c=1 convention?

2. Why does SR play a role in linear tensor analysis? (This may be a misconception on my part)
 
Last edited:
  • #28
I may not be the best person to answer your question, since I am not a physicist and I am a newbie with Geometric Algebra.

However, as you already noticed, the Minkowski space, as physicists use it, has a basis of 4 orthogonal unit vectors, among which only one has the property [tex]<e,e>=-1[/tex].

In the conformal model, you have to use 5 dimensions (not 4), so the metric will look like a 5x5 identity matrix but with one negative element.

I suggest you take a look here if you are interested:
http://www.euclideanspace.com/maths/geometry/space/nonEuclid/conformal/index.htm

Regarding your last question, I would almost say that it is tensor analysis which plays a role in SR, as tensor analysis is the right "tool" to study curved spaces.
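To make the claim in #26 concrete, here is a hedged sketch (Python/NumPy; my own construction following the usual conformal-model conventions, not something taken from the linked page, and the helper names are just for illustration). It uses a basis (e1, e2, e3, e_o, e_inf) in which e_o and e_inf are null and [tex]<e_o, e_\infty>=-1[/tex]; in an orthonormal basis this metric diagonalizes to diag(1, 1, 1, 1, -1), i.e. a 5x5 identity with one negative element as described above. A point x is embedded as [tex]x + e_o + \tfrac{1}{2}|x|^2 e_\infty[/tex], and a translation then acts as a 5×5 matrix that preserves the metric:

[code]
import numpy as np

# Conformal metric in the basis (e1, e2, e3, e_o, e_inf):
# <e_i, e_j> = delta_ij, <e_o, e_o> = <e_inf, e_inf> = 0, <e_o, e_inf> = -1.
M = np.zeros((5, 5))
M[:3, :3] = np.eye(3)
M[3, 4] = M[4, 3] = -1.0

def embed(x):
    """Conformal embedding of a 3D point: x -> x + e_o + 0.5*|x|^2 * e_inf."""
    return np.concatenate([x, [1.0, 0.5 * (x @ x)]])

def translator(a):
    """5x5 matrix representing translation by the 3-vector a."""
    T = np.eye(5)
    T[:3, 3] = a               # the spatial part picks up a times the e_o coefficient
    T[4, :3] = a               # the e_inf coefficient picks up a . x ...
    T[4, 3] = 0.5 * (a @ a)    # ... plus 0.5*|a|^2 times the e_o coefficient
    return T

a = np.array([2.0, -1.0, 0.5])
x = np.array([3.0, 4.0, 5.0])
T = translator(a)

# The translation is an ordinary linear (matrix) map on the 5D space ...
print(np.allclose(T @ embed(x), embed(x + a)))   # True
# ... and it is orthogonal with respect to the signature-(4,1) metric M:
print(np.allclose(T.T @ M @ T, M))               # True
[/code]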
 

FAQ: How can we represent tensors in the conformal model?

What is the origin translation tensor?

The origin translation tensor is a mathematical concept used in physics and engineering to describe the translation of a coordinate system's origin from one point to another.

How is the origin translation tensor represented?

The origin translation tensor is represented as a matrix with elements that describe the displacement of the origin in each coordinate direction. It can also be represented using vector notation.

What is the purpose of the origin translation tensor?

The origin translation tensor is used to describe the relationship between different coordinate systems and to transform equations and physical quantities from one coordinate system to another.

How is the origin translation tensor related to other tensors?

The origin translation tensor is a special case of a general tensor, known as the transformation tensor. It is also related to other tensors, such as the rotation tensor, which describes the rotation of a coordinate system.

In what fields is the origin translation tensor commonly used?

The origin translation tensor is commonly used in fields such as mechanics, electromagnetics, and fluid dynamics, where coordinate systems and transformations between them are important for understanding physical phenomena.
