Matrix Representation of Differential Operator w/ 3 Basis Vectors

In summary, the conversation discusses how to find the matrix of a differential operator D, defined by Dp(x) = p'(x), in a basis of three vectors e1, e2, and e3, where the combination rule is ordinary addition. The method is to differentiate each basis vector, re-express the derivative as a linear combination of the basis vectors, and read those coefficients off as the entries of the D matrix. The conversation also discusses what a basis is and how coordinate vectors relate to the Cartesian unit vectors. The matrix is found to be [tex] D = \left(\begin{array}{ccc}0&0&-1\\0&0&1\\2&-2&0\end{array}\right) [/tex]
  • #1
jeebs
I have 3 basis vectors:
[tex] e_1 = sin^2(x), e_2 = cos^2(x), e_3 = sin(x)cos(x) [/tex]

I am told that the combination rule is just normal addition, and that the differential operator is defined by Dp(x) = p'(x).

My task is to show that [tex] D = \left(\begin{array}{ccc}0&0&-1\\0&0&1\\2&-2&0\end{array}\right)[/tex] in this basis. So, what I've done is calculated the derivative of each of the basis vectors:
[tex] \frac{d}{dx}e_1 = 2cos(x)sin(x) = 2e_3 ... \frac{d}{dx}e_2 = -2cos(x)sin(x) = -2e_3 ... \frac{d}{dx}e_3 = cos^2(x) - sin^2(x) = e_2 - e_1 [/tex]
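(These three expansions can be double-checked symbolically; a minimal sketch with sympy, assuming it is installed:)

[code]
import sympy as sp

x = sp.symbols('x')
e1, e2, e3 = sp.sin(x)**2, sp.cos(x)**2, sp.sin(x)*sp.cos(x)

# each difference simplifies to 0, confirming the expansions above
print(sp.simplify(sp.diff(e1, x) - 2*e3))         # e1' = 2*e3
print(sp.simplify(sp.diff(e2, x) + 2*e3))         # e2' = -2*e3
print(sp.simplify(sp.diff(e3, x) - (e2 - e1)))    # e3' = e2 - e1
[/code]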

Now I'm looking at this vector p(x) being transformed into its 1st derivative p'(x). If I'm not mistaken, we can write p(x) as some arbitrary linear combination of the basis vectors, like:

[tex] p(x) = p_1e_1 + p_2e_2 + p_3e_3 = \sum_k p_ke_k = \left(\begin{array}{c}p_1e_1 \\ p_2e_2 \\ p_3e_3 \end{array}\right)... p'(x) = p_1\frac{d}{dx}e_1 + p_2\frac{d}{dx}e_2 + p_3\frac{d}{dx}e_3 = \left(\begin{array}{c}p_1e_1' \\ p_2e_2' \\ p_3e_3' \end{array}\right) [/tex]
and I need to somehow find the coefficients Dij for [tex] p'(x) = Dp(x) = \left(\begin{array}{ccc}D_{11}&D_{12}&D_{13}\\D_{21}&D_{22}&D_{23}\\D_{31}&D_{32}&D_{33} \end{array}\right) \left(\begin{array}{c}p_1e_1 \\ p_2e_2 \\ p_3e_3 \end{array}\right) = \left(\begin{array}{c}D_{11}p_1e_1 + D_{12}p_2e_2 + D_{13}p_3e_3\\ D_{21}p_1e_1 + D_{22}p_2e_2 + D_{23}p_3e_3 \\ D_{31}p_1e_1 + D_{32}p_2e_2 + D_{33}p_3e_3 \end{array}\right) = \left(\begin{array}{c}p_1e_1' \\ p_2e_2' \\ p_3e_3' \end{array}\right) [/tex]

My problem is that I cannot quite figure out how I can get from what I currently know to having the coefficients of the D matrix. I know this is probably trivial but I've been sat staring at this for ages now...

For instance, take the first component of p'(x). We have [tex] p_1e_1' = D_{11}p_1e_1 + D_{12}p_2e_2 + D_{13}p_3e_3 = 2p_1e_3[/tex]. Since the coefficients p_i are arbitrary, it means we can set D11 = D12 = 0, hence p1e1' = D13p3e3, and therefore D13 = p1e1' / (p3e3); since the p_i factors are arbitrary, they can be set equal to 1.

Thus D13 = e1' / e3, but this does not give me the D13 = -1 that I require; it gives me D13 = 2e3 / e3 = 2.
What am I doing wrong here? I notice there is a 2 in the matrix, but it's in the wrong corner, D31 rather than D13. I can't see where my mistake is, though, unless that switching of corners is just a fluke.
 
  • #2
hi jeebs! :smile:

i can't make out what you're doing :confused:

to find D, you know that …

D(1,0,0) has to be (0,0,2)

D(0,1,0) has to be (0,0,-2)

D(0,0,1) has to be (-1,1,0) …

doesn't that make it clear what D is? :wink:
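Spelling that out: stacking those three image vectors side by side as columns gives precisely the matrix quoted in the problem statement,

[tex] D = \left(\begin{array}{ccc}0&0&-1\\0&0&1\\2&-2&0\end{array}\right) [/tex]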
 
  • #3
Right, I partially get why you said that, but I don't fully understand. We have:
e1' = 2e3
e2' = -2e3
e3' = e2 - e1

So, for some reason (I'm not sure why this is allowed), we've decided that:
[tex] e_1 = \left(\begin{array}{c}1\\ 0 \\ 0\end{array}\right) ... e_2 = \left(\begin{array}{c}0\\ 1 \\ 0\end{array}\right) ... e_3 = \left(\begin{array}{c}0\\ 0 \\ 1\end{array}\right) [/tex]
Why are we allowed to say this? Isn't this only true for the Cartesian unit vectors? I get the feeling there's something big about this that I don't understand yet.
Anyway, from there we get:
[tex] e_1' = \left(\begin{array}{c}0\\ 0 \\ 2\end{array}\right) ... e_2' = \left(\begin{array}{c}0\\ 0 \\ -2\end{array}\right) ... e_3' = \left(\begin{array}{c}0\\ 1 \\ 0\end{array}\right) - \left(\begin{array}{c}1\\ 0 \\ 0\end{array}\right) = \left(\begin{array}{c}-1\\ 1 \\ 0\end{array}\right) [/tex]

It's still not obvious to me where these matrix coefficients come from either. What I tried from this is to say that the vector p could be, say,
p = e1 + e2 + e3, so that p' = e1' + e2' + e3'. In other words, we get:

[tex] p' = \left(\begin{array}{ccc}D_{11}&D_{12}&D_{13}\\D_{21}&D_{22}&D_{23}\\D_{31}&D_{32}&D_{33} \end{array}\right) \left(\begin{array}{c}1 \\ 1 \\ 1 \end{array}\right) = \left(\begin{array}{c}-1 \\ 1 \\ 0 \end{array}\right)[/tex]

(noting that p' = e1' + e2' + e3' = e3', since e1' + e2' = 0). So from this I would have 3 equations to find 9 unknowns, which isn't happening. What's going on here?
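(As a numerical aside: the single equation above is indeed consistent with the target matrix, but one vector equation cannot pin down nine unknowns. A minimal check with numpy, assuming it is available:)

[code]
import numpy as np

# the target matrix from the problem statement
D = np.array([[0, 0, -1],
              [0, 0,  1],
              [2, -2,  0]])

# coordinates of p = e1 + e2 + e3 in the basis {e1, e2, e3}
p = np.array([1, 1, 1])

print(D @ p)   # [-1  1  0], i.e. p' = -e1 + e2 = e3', as expected
[/code]

The way out, as the next post shows, is to feed D each basis vector separately, which supplies one column at a time.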
 
  • #4
(abc;def;ghi)(100) = (adg)
(abc;def;ghi)(010) = (beh)
(abc;def;ghi)(001) = (cfi)
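(Written out in full, the first of those shorthand lines is just the statement that multiplying a matrix by the k-th coordinate vector picks out its k-th column:)

[tex] \left(\begin{array}{ccc}a&b&c\\d&e&f\\g&h&i\end{array}\right)\left(\begin{array}{c}1\\ 0 \\ 0\end{array}\right) = \left(\begin{array}{c}a\\ d \\ g\end{array}\right) [/tex]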
jeebs said:
So, for some reason (I'm not sure why this is allowed), we've decided that:
[tex] e_1 = \left(\begin{array}{c}1\\ 0 \\ 0\end{array}\right) ... e_2 = \left(\begin{array}{c}0\\ 1 \\ 0\end{array}\right) ... e_3 = \left(\begin{array}{c}0\\ 0 \\ 1\end{array}\right) [/tex]
Why are we allowed to say this?

but that's the definition of e1, e2 and e3
 
  • #5
PS. I just added a bit more, right at the point you mentioned, just before you responded. Why is that the definition of all basis vectors, not just the Cartesian i, j, k?
 
  • #6
jeebs said:
Why is that the definition of all basis vectors? not just the Cartesian i,j,k?

Because it is.

That's what a basis is (in a vector space).

What did you think it was? :confused:
 
  • #7
I don't know, is it the law that all basis vectors have to have a length of 1?
Also, doesn't writing them as (1,0,0), (0,1,0) and (0,0,1) imply they are orthogonal, which I did not think was necessary for a set of basis vectors?
 

FAQ: Matrix Representation of Differential Operator w/ 3 Basis Vectors

What is a matrix representation of a differential operator?

A matrix representation of a differential operator expresses the operator as a matrix with respect to a chosen basis: the j-th column of the matrix holds the coordinates, in that basis, of the operator applied to the j-th basis vector. Applying the operator then amounts to ordinary matrix-vector multiplication on coordinate vectors, which makes the operator easier to manipulate and calculate with.
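For the operator in the thread above, a minimal sympy sketch (assuming sympy is available, and using the coordinate conventions from the thread) confirms that the matrix reproduces differentiation on an arbitrarily chosen element of the space:

[code]
import sympy as sp

x = sp.symbols('x')
basis = sp.Matrix([sp.sin(x)**2, sp.cos(x)**2, sp.sin(x)*sp.cos(x)])
D = sp.Matrix([[0, 0, -1],
               [0, 0,  1],
               [2, -2,  0]])

# an arbitrary element p = 3*e1 - e2 + 5*e3, given by its coordinate vector
p_coords = sp.Matrix([3, -1, 5])

p  = (p_coords.T * basis)[0]          # the actual function p(x)
Dp = ((D * p_coords).T * basis)[0]    # the function encoded by D acting on the coordinates

print(sp.simplify(sp.diff(p, x) - Dp))   # 0: the matrix reproduces d/dx
[/code]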

How many basis vectors are needed for a matrix representation of a differential operator?

The number of basis vectors needed equals the dimension of the vector space on which the operator acts, so the matrix is square with that many rows and columns. In the thread above the space is spanned by the three functions sin²(x), cos²(x) and sin(x)cos(x), so three basis vectors are used; other spaces may require more or fewer.

What is the purpose of using a matrix representation for a differential operator?

The purpose of using a matrix representation for a differential operator is to simplify calculations: differentiation becomes matrix-vector multiplication on coordinate vectors, and composing operators (for example, taking a second derivative) becomes matrix multiplication, making it easier to solve equations involving differential operators.
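For example, with the matrix from the thread above, squaring it gives the matrix of the second derivative in the same basis (a quick numpy check, assuming numpy is available):

[code]
import numpy as np

D = np.array([[0, 0, -1],
              [0, 0,  1],
              [2, -2,  0]])

# D @ D represents d^2/dx^2 in the basis {sin^2, cos^2, sin*cos}
print(D @ D)
# [[-2  2  0]
#  [ 2 -2  0]
#  [ 0  0 -4]]
# e.g. the last column says (sin(x)cos(x))'' = -4*sin(x)*cos(x), which is correct
[/code]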

How is a differential operator matrix representation related to the Taylor series expansion?

The connection comes through the choice of basis. If the basis consists of the monomials 1, x, x², ..., then the coordinates of a polynomial in that basis are (up to factorials) its Taylor coefficients at 0, and the matrix of d/dx acts directly on those coefficients, shifting and scaling them just as termwise differentiation of the Taylor series does.
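For instance, in the monomial basis {1, x, x², x³} (a different basis from the one in the thread, used here only to illustrate the Taylor connection), the rule d/dx(x^k) = k x^(k-1) gives the matrix

[tex] D = \left(\begin{array}{cccc}0&1&0&0\\0&0&2&0\\0&0&0&3\\0&0&0&0\end{array}\right) [/tex]

so acting with this matrix on the coordinate vector of a cubic polynomial shifts and scales its coefficients exactly as termwise differentiation does.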

Can a differential operator matrix representation be used for any type of differential equation?

A matrix representation can be written down for any linear differential operator acting on a finite-dimensional function space (or a suitable truncation of an infinite-dimensional one), and it can then be used to study the corresponding differential equations. The appropriate basis depends on the problem; for a partial differential operator, for example, the basis functions depend on several variables, and the matrix entries again record how the operator acts on each of them.
