What are the applications of function-valued matrices?

  • #1
askmathquestions
TL;DR Summary
Inquiries on the applications of a function of a matrix of a single variable.
I'm glad there's a section here dedicated to differential equations.

I've seen, in the fundamental theorem for linear systems of ordinary differential equations, that for a constant real matrix ##A## we have ## d/dt \exp(At) = A \exp(At)##. I'm wondering if there are analogs of this, for instance a generalization to non-autonomous linear systems where the matrix ##A## depends on ##t##, i.e. is not constant as in the autonomous case.

What are the applications of derivatives of function-valued matrices of a single variable, like ##A(t)##? Is there a way to generalize non-autonomous systems with this? Or otherwise, where else might derivatives of ##A(t)## occur?
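
For what it's worth, the constant-matrix identity quoted above is easy to check numerically. Below is a minimal sketch (my own illustration, not from the thread; the matrix ##A##, the time ##t## and the step size are arbitrary choices, and NumPy/SciPy are assumed to be available):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# Arbitrary constant real matrix (illustrative choice)
A = np.array([[0.0, 1.0],
              [-2.0, -0.3]])

t, h = 1.5, 1e-5

# Central finite difference of exp(A t) with respect to t
lhs = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)

# Right-hand side of d/dt exp(A t) = A exp(A t)
rhs = A @ expm(A * t)

print(np.max(np.abs(lhs - rhs)))  # tiny; only finite-difference error remains
```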
 
  • #2
If I remember correctly (and if indeed I ever understood correctly), in the Heisenberg Picture of Quantum Mechanics, a time dependent operator matrix captures the evolution of the system over time, while the state vector remains fixed.
 
  • #3
There is an analogue of ##d/dt \exp (t {\bf A}) = {\bf A} \exp (t {\bf A})## for a non-autonomous linear system where the matrix depends on time. Funny you should ask! Over the weekend I was writing up a problem I was going to post on the forum that required a formal solution to exactly such a system, but as I was writing it I realised how to solve it, so I didn't need to post it after all. Thanks to your question I can now post much of what I wrote up; it follows in the next post. It is a bit more complicated than you need because it isn't the simplest case, as I will explain in a moment.

The simplest case is if you have a differential equation like this

$$
\frac{d}{dt} M (t) = A (t) M (t)
$$

where ##M (t)## and ##A(t)## are time-dependent matrices. The solution is:

$$
M (t) = \mathcal{T} \left\{ \exp \left( \int_0^t A (s) ds \right) \right\} M (0)
$$

where ##\mathcal{T}## stands for the time-ordered product; I explain what that is below.
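
Numerically, the time-ordered exponential can be approximated by an ordered product of short-time propagators, with later times multiplying in from the left. Here is a minimal sketch of that idea (my own illustration, not part of the derivation; the example ##A(t)##, step count and tolerances are arbitrary, and NumPy/SciPy are assumed):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

def A(t):
    # Example time-dependent matrix (arbitrary choice)
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.5 * t), 0.0]])

t_final, n_steps = 2.0, 2000
dt = t_final / n_steps

# Approximate T{exp(int_0^t A(s) ds)} by a product of short-time propagators,
# multiplying later time slices in from the left.
T_exp = np.eye(2)
for i in range(n_steps):
    t_mid = (i + 0.5) * dt              # midpoint of the i-th slice
    T_exp = expm(A(t_mid) * dt) @ T_exp

M0 = np.eye(2)
M_ordered = T_exp @ M0

# Reference: integrate dM/dt = A(t) M(t) with a standard ODE solver
def rhs(t, y):
    return (A(t) @ y.reshape(2, 2)).ravel()

sol = solve_ivp(rhs, (0.0, t_final), M0.ravel(), rtol=1e-10, atol=1e-12)
M_reference = sol.y[:, -1].reshape(2, 2)

print(np.max(np.abs(M_ordered - M_reference)))  # shrinks as n_steps is increased
```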

If you wanted to solve

$$
\frac{d}{dt} M (t) = M (t) A (t) \Leftrightarrow \frac{d}{dt} M^T (t) = A^T (t) M^T (t)
$$

where ##T## stands for transpose, then the solution is:

$$
M (t) = M (0) \left( \mathcal{T} \left\{ \exp \left( \int_0^t A^T (s) ds \right) \right\} \right)^T .
$$

The time-ordered exponential

$$
T [A] (t) := \mathcal{T} \exp \left( \int_0^t A (s) ds \right)
$$

is the solution to the initial value problem:

\begin{align*}
\frac{d}{dt} T [A] (t) & = A (t) T [A] (t)
\nonumber \\
T [A] (0) & = \mathbb{1}
\end{align*}

Anyway, what I did in my calculation is more complicated than the cases above, because I was considering a non-autonomous linear system that is second order in the time derivative, which I rewrote as an equivalent first-order non-autonomous linear system. The basic method I used to find a formal solution is the same method used to obtain the equations above. You can read the more complicated derivation that I post next, or you can wait until later when I post the proof of the simplest case.
 
  • #4
Here's the calculation I was writing up at the weekend...

I needed a formal solution to the differential equation: ##a_{ij}'' + a_{ik} R_{k44j} = 0## where ##R_{k44j}## is time dependent.

We note that the differential equation ##a_{ij}'' + a_{ik} R_{k44j} = 0## is equivalent to a system of first order differential equations which can be written in matrix form as

$$
\frac{d}{ds}
\left(
\begin{array}{c}
a_{ij} (s) \\
\frac{da_{ij}}{ds} (s)
\end{array}
\right)
=
\left(
\begin{array}{cc}
0 & \delta_{jk} \\
-R_{j 4 4 k} (s) & 0
\end{array}
\right)
\left(
\begin{array}{c}
a_{ik} (s) \\
\frac{da_{ik}}{ds} (s)
\end{array}
\right)
$$

where I have used that ##R_{j 4 4 k} = R_{k 4 4 j}## (this happens to be a symmetry of the matrix I'm considering). This can be written as an integral equation,

$$
\left(
\begin{array}{c}
a_{ij} (s) \\
\frac{da_{ij}}{ds} (s)
\end{array}
\right)
=
\left(
\begin{array}{c}
a_{ij} (0) \\
\frac{da_{ij}}{ds} (0)
\end{array}
\right)
+
\int_0^s
\left(
\begin{array}{cc}
0 & \delta_{jk} \\
-R_{j 4 4 k} (s_1) & 0
\end{array}
\right)
\left(
\begin{array}{c}
a_{ij} (s_1) \\
\frac{da_{ij}}{ds} (s_1)
\end{array}
\right)
ds_1
$$

This can be substituted into itself,

\begin{align*}
\left(
\begin{array}{c}
a_{ij} (s) \\
\frac{da_{ij}}{ds} (s)
\end{array}
\right)
& =
\left(
\begin{array}{c}
a_{ij} (0) \\
\frac{da_{ij}}{ds} (0)
\end{array}
\right)
+
\int_0^s
\left(
\begin{array}{cc}
0 & \delta_{jk} \\
-R_{j 4 4 k} (s_1) & 0
\end{array}
\right)
\left(
\begin{array}{c}
a_{ik} (0) \\
\frac{da_{ik}}{ds} (0)
\end{array}
\right)
ds_1
\nonumber \\
& +
\int_0^s
\left(
\begin{array}{cc}
0 & \delta_{jk_1} \\
-R_{j 4 4 k_1} (s_1) & 0
\end{array}
\right)
\int_0^{s_1}
\left(
\begin{array}{cc}
0 & \delta_{k_1 k} \\
-R_{k_1 4 4 k} (s_2) & 0
\end{array}
\right)
\left(
\begin{array}{c}
a_{ik} (s_2) \\
\frac{da_{ik}}{ds} (s_2)
\end{array}
\right)
ds_2 ds_1
\end{align*}

Continuing in this way we obtain:

\begin{align*}
&
\left(
\begin{array}{c}
a_{ij} (s) \\
\frac{da_{ij}}{ds} (s)
\end{array}
\right) =
\sum_{n=0}^\infty
\int_0^s ds_1 \int_0^{s_1} ds_2 \cdots \int_0^{s_{n-1}} ds_n
\nonumber \\
& \qquad \qquad \qquad \qquad
\left(
\begin{array}{cc}
0 & \delta_{jk_1} \\
-R_{j 4 4 k_1} (s_1) & 0
\end{array}
\right) \cdots
\left(
\begin{array}{cc}
0 & \delta_{k_{n-1}k} \\
-R_{k_{n-1} 4 4 k} (s_n) & 0
\end{array}
\right)
\left(
\begin{array}{c}
a_{ik} (0) \\
\frac{da_{ik}}{ds} (0)
\end{array}
\right)
\end{align*}

where we have put ##k \equiv k_n##.

Let us look in detail at the case of two integrals, i.e.,

$$
\int_0^s ds_1 \int_0^{s_1} ds_2
\left(
\begin{array}{cc}
0 & \delta_{jk_1} \\
-R_{j 4 4 k_1} (s_1) & 0
\end{array}
\right)
\left(
\begin{array}{cc}
0 & \delta_{k_1 k} \\
-R_{k_1 4 4 k} (s_2) & 0
\end{array}
\right)
$$

We'll write the matrices in component form, so that the above equation reads

$$
\int_0^s ds_1 \int_0^{s_1} ds_2 A_{J K_1} (s_1) A_{K_1K} (s_2)
$$

We have that

\begin{align*}
& \int_0^s ds_1 \int_0^{s_1} ds_2 A_{JK_1} (s_1) A_{K_1K} (s_2)
\nonumber \\
& = \frac{1}{2} \int_0^s ds_1 \int_0^{s_1} ds_2 A_{JK_1} (s_1) A_{K_1K} (s_2) + \frac{1}{2} \int_0^s ds_2 \int_{s_2}^s ds_1 A_{JK_1} (s_1) A_{K_1K} (s_2)
\end{align*}

where in the second integral on the RHS we are integrating over the same region but we have changed the order of integration. By renaming the integration variables in the second integral, we have

\begin{align*}
& \int_0^s ds_1 \int_0^{s_1} ds_2 A_{JK_1} (s_1) A_{K_1K} (s_2)
\nonumber \\
& = \frac{1}{2} \int_0^s ds_1 \int_0^{s_1} ds_2 A_{JK_1} (s_1) A_{K_1K} (s_2) + \frac{1}{2} \int_0^s ds_1 \int_{s_1}^s ds_2 A_{JK_1} (s_2) A_{K_1K} (s_1) \qquad (*)
\end{align*}

We define the time-ordered product of two matrices ##A(s_i)## and ##A(s_{i+1})##,

$$
T \{ A_{K_{i-1} K_i} (s_i) A_{K_i K_{i+1}} (s_{i+1}) \} := \left\{
\begin{matrix}
A_{K_{i-1} K_i} (s_i) A_{K_i K_{i+1}} (s_{i+1}) & : s_i > s_{i+1} \\
A_{K_{i-1} K_i} (s_{i+1}) A_{K_i K_{i+1}} (s_i) & : s_{i+1} > s_i
\end{matrix}
\right. \qquad (**)
$$

Using ##(**)##, it can be easily verified that ##(*)## can be written,

$$
\int_0^s ds_1 \int_0^{s_1} ds_2 A_{JK_1} (s_1) A_{K_1K} (s_2)
= \frac{1}{2} \int_0^s ds_1 \int_0^s ds_2 T \{ A_{JK_1} (s_1) A_{K_1K} (s_2) \} .
$$
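
This identity is easy to check numerically for any matrix-valued function. A rough sketch of such a check (my own illustration; the non-commuting ##A(s)## and the simple Riemann-sum quadrature below are arbitrary assumptions):

```python
import numpy as np

def A(t):
    # Arbitrary non-commuting time-dependent matrix for the check
    return np.array([[0.0, 1.0],
                     [-(1.0 + t), -0.2 * t]])

s, n = 1.0, 400
grid = (np.arange(n) + 0.5) * (s / n)   # midpoint grid
w = s / n                                # quadrature weight

# Left-hand side: nested integral over 0 < s2 < s1 < s
lhs = np.zeros((2, 2))
for i, s1 in enumerate(grid):
    for s2 in grid[:i]:                  # only s2 < s1
        lhs += A(s1) @ A(s2) * w * w

# Right-hand side: (1/2) * integral over the full square with time ordering
rhs = np.zeros((2, 2))
for s1 in grid:
    for s2 in grid:
        if s1 >= s2:
            rhs += A(s1) @ A(s2) * w * w
        else:
            rhs += A(s2) @ A(s1) * w * w
rhs *= 0.5

print(np.max(np.abs(lhs - rhs)))  # the two sides agree as the grid is refined
```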

The definition ##(**)## obviously generalises: let ##\alpha## be a permutation of ##\{ 1,2, \dots, n\}## such that ##s_{\alpha (1)} > s_{\alpha (2)} > \cdots > s_{\alpha (n)}##; then

$$
T \{ A_{J K_1} (s_1) A_{K_1 K_2} (s_2) \cdots A_{K_{n-1} K_n} (s_n) \} := A_{J K_1} (s_{\alpha (1)}) A_{K_1 K_2} (s_{\alpha (2)}) \cdots A_{K_{n-1} K_n} (s_{\alpha (n)})
$$

It can be shown that we can then formally write the solution as:

\begin{align*}
& \left(
\begin{array}{c}
a_{ij} (s) \\
\frac{da_{ij}}{ds} (s)
\end{array}
\right) =
\sum_{n=0}^\infty \frac{1}{n!}
\int_0^s ds_1 \int_0^s ds_2 \cdots \int_0^s ds_n
\nonumber \\
& \qquad \qquad \qquad \qquad T
\left\{
\left(
\begin{array}{cc}
0 & \delta_{jk_1} \\
-R_{j 4 4 k_1} (s_1) & 0
\end{array}
\right) \cdots
\left(
\begin{array}{cc}
0 & \delta_{k_{n-1}k} \\
-R_{k_{n-1} 4 4 k} (s_n) & 0
\end{array}
\right)
\right\}
\left(
\begin{array}{c}
a_{ik} (0) \\
\frac{da_{ik}}{ds} (0)
\end{array}
\right)
\end{align*}

or

$$
\left(
\begin{array}{c}
a_{ij} (s) \\
\frac{da_{ij}}{ds} (s)
\end{array}
\right)
=
T \exp \left\{
\int_0^s
\left(
\begin{array}{cc}
0 & \delta_{jk} \\
-R_{j 4 4 k} (t) & 0
\end{array}
\right) dt
\right\}
\left(
\begin{array}{c}
a_{ik} (0) \\
\frac{da_{ik}}{ds} (0)
\end{array}
\right)
$$

We see from this that any solution for ##a_{ij} (s)## will be of the general form

$$
a_{ij} (s) = a_{ik} (0) f_{kj} (s) + \frac{d}{ds} a_{ik} (0) h_{kj} (s)
$$

where the functions ##f_{kj} (s)## and ##h_{kj} (s)## are solely determined by the matrix

$$
T \exp \left\{
\int_0^s
\left(
\begin{array}{cc}
0 & \delta_{jk} \\
-R_{j 4 4 k} (t) & 0
\end{array}
\right) dt
\right\}
$$

and as such are independent of the initial conditions placed on ##a_{ij}##.
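
In practice ##f_{kj} (s)## and ##h_{kj} (s)## can be read off numerically by propagating the system from the two special initial conditions ##a(0) = \mathbb{1}, \ a'(0) = 0## and ##a(0) = 0, \ a'(0) = \mathbb{1}##. A minimal sketch of that (my own illustration; the placeholder ##R_{j44k}(s)##, the dimension and the tolerances are arbitrary assumptions, and NumPy/SciPy are assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

d = 3  # range of the spatial indices j, k (illustrative choice)

def R(s):
    # Placeholder for the symmetric, time-dependent matrix R_{j44k}(s)
    base = np.array([[1.0, 0.2, 0.0],
                     [0.2, 0.8, 0.1],
                     [0.0, 0.1, 1.2]])
    return (1.0 + 0.3 * np.sin(s)) * base

def rhs(s, y):
    # y packs (a, da/ds), each a d x d matrix; the ODE is a'' = -a R(s)
    a, da = y[:d * d].reshape(d, d), y[d * d:].reshape(d, d)
    return np.concatenate([da.ravel(), (-(a @ R(s))).ravel()])

def propagate(a0, da0, s_final=1.0):
    y0 = np.concatenate([a0.ravel(), da0.ravel()])
    sol = solve_ivp(rhs, (0.0, s_final), y0, rtol=1e-10, atol=1e-12)
    return sol.y[:d * d, -1].reshape(d, d)   # a(s_final)

I, Z = np.eye(d), np.zeros((d, d))
f = propagate(I, Z)   # a(0) = 1, a'(0) = 0  ->  a(s) = f(s)
h = propagate(Z, I)   # a(0) = 0, a'(0) = 1  ->  a(s) = h(s)

# Any other initial condition then satisfies a(s) = a(0) f(s) + a'(0) h(s)
a0, da0 = np.diag([2.0, 1.0, 0.5]), 0.1 * np.ones((d, d))
print(np.max(np.abs(propagate(a0, da0) - (a0 @ f + da0 @ h))))  # ~ solver tolerance
```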

NOTE: You don't have to go through the next part unless you want to know how to extract ##f_{kj} (s)## and ##h_{kj} (s)##, and hence have the solution to the original second order differential equation.

If we define

$$
Q_{jk} (s) := - R_{j 4 4 k} (s)
$$

then

\begin{eqnarray}
&\;&
T \exp \left\{
\int_0^s
\left(
\begin{array}{cc}
0 & \delta_{jk} \\
Q_{jk} & 0
\end{array}
\right) dt
\right\}
\nonumber\\
&=&
\left(
\begin{array}{cc}
\delta_{jk} & 0 \\
0 & \delta_{jk}
\end{array}
\right) + \int_0^s ds_1
\left(
\begin{array}{cc}
0 & \delta_{jk} \\
Q_{jk} (s_1) & 0
\end{array}
\right)
\nonumber \\
&+& {1 \over 2!} \int_0^s ds_1 \int_0^s ds_2 T \left\{
\left(
\begin{array}{cc}
\delta_{jk_1} Q_{k_1k} (s_2) & 0 \\
0 & Q_{jk_1} (s_1) \delta_{k_1k}
\end{array}
\right) \right\}
\nonumber \\
& +& \; {1 \over 3!} \int_0^s ds_1 \int_0^s ds_2 \int_0^s ds_3 T \left\{
\left(
\begin{array}{cc}
0 & \delta_{jk_1} Q_{k_1k_2} (s_2) \delta_{k_2 k} \\
Q_{jk_1} (s_1) \delta_{k_1k_2} Q_{k_2 k} (s_3) & 0
\end{array}
\right) \right\}
\nonumber \\
&+&
{1 \over 4!} \int_0^s ds_1 \int_0^s ds_2 \int_0^s ds_3 \int_0^s ds_4
\nonumber \\
&\;& T \left\{
\left(
\begin{array}{cc}
\delta_{jk_1} Q_{k_1k_2} (s_2) \delta_{k_2 k_3} Q_{k_3 k} (s_4) & 0 \\
0 & Q_{jk_1} (s_1) \delta_{k_1k_2} Q_{k_2 k_3} (s_3) \delta_{k_3k}
\end{array}
\right) \right\}
\nonumber \\
&+&
{1 \over 5!} \int_0^s ds_1 \int_0^s ds_2 \int_0^s ds_3 \int_0^s ds_4 \int_0^s ds_5
\nonumber \\
&\;& T \left\{
\left(
\begin{array}{cc}
0 & \delta_{jk_1} Q_{k_1k_2} (s_2) \delta_{k_2 k_3} Q_{k_3 k_4} (s_4) \delta_{k_4 k} \\
Q_{jk_1} (s_1) \delta_{k_1k_2} Q_{k_2 k_3} (s_3) \delta_{k_3k_4} Q_{k_4 k} (s_5) & 0
\end{array}
\right) \right\}
\nonumber \\
&+&
{1 \over 6!} \int_0^s ds_1 \int_0^s ds_2 \int_0^s ds_3 \int_0^s ds_4 \int_0^s ds_5 \int_0^s ds_6
\nonumber \\
&\;& T \left\{
\left(
\begin{array}{cc}
\delta_{jk_1} Q_{k_1k_2} (s_2) \delta_{k_2 k_3} Q_{k_3 k_4} (s_4) \delta_{k_4 k_5} Q_{k_5 k} (s_6) & 0 \\
0 & Q_{jk_1} (s_1) \delta_{k_1k_2} Q_{k_2 k_3} (s_3) \delta_{k_3k_4} Q_{k_4 k_5} (s_5) \delta_{k_5 k}
\end{array}
\right) \right\}
\nonumber \\
&\;& + \dots
\nonumber
\end{eqnarray}

The function ##f_{kj} (s)## corresponds to the top left hand quadrant:

\begin{align*}
f_{kj} (s) & = \delta_{jk} + \sum_{n=1}^\infty \frac{1}{(2n)!} \int_0^s ds_1 \cdots \int_0^s ds_{2n} T \{ \delta_{jk_1} Q_{k_1k_2} (s_2) \cdots \delta_{k_{2n-2}k_{2n-1}} Q_{k_{2n-1} k} (s_{2n}) \}
\nonumber \\
& = \delta_{jk} + \sum_{n=1}^\infty \frac{s^n}{(2n)!} \int_0^s ds_1 \cdots \int_0^s ds_n T \{ Q_{jk_1} (s_1) \cdots Q_{k_{n-1} k} (s_n) \}
\end{align*}

The function ##h_{kj} (s)## corresponds to the top right hand quadrant:

\begin{align*}
h_{kj} (s) & = s \delta_{jk} + \sum_{n=1}^\infty \frac{1}{(2n+1)!} \int_0^s ds_1 \cdots \int_0^s ds_{2n+1} T \{ \delta_{jk_1} Q_{k_1k_2} (s_2) \cdots \delta_{k_{2n-2}k_{2n-1}} Q_{k_{2n-1} k_{2n}} (s_{2n}) \delta_{k_{2n} k} \}
\nonumber \\
& = s \delta_{jk} + \sum_{n=1}^\infty \frac{s^{n+1}}{(2n+1)!} \int_0^s ds_1 \cdots \int_0^s ds_n T \{ Q_{jk_1} (s_1) \cdots Q_{k_{n-1} k} (s_n) \}
\end{align*}
 
  • #5
The simplest case is if you have a differential equation like this

$$
\frac{d}{dt} M (t) = A (t) M (t)
$$

where ##M (t)## and ##A(t)## are matrices, with ##A(t)## a given time-dependent matrix.

This can be written as an integral equation,

$$
M (t) = M (0) + \int_0^t A (t_1) M (t_1) dt_1
$$

This can be substituted into itself,

\begin{align*}
M (t)
& =
M (0) + \int_0^t A (t_1) M (0) dt_1 + \int_0^t A (t_1) \int_0^{t_1} A (t_2) M (t_2) dt_2 dt_1
\end{align*}

Continuing in this way we obtain:

\begin{align*}
& M (t) = M (0) + \sum_{n=1}^\infty
\int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}} dt_n
A (t_1) \cdots A (t_n) M (0) \qquad (*)
\end{align*}

(assuming convergence). Let us look in detail at the case of two integrals, i.e.,

$$
\int_0^t dt_1 \int_0^{t_1} dt_2 A (t_1) A (t_2)
$$

We'll write the matrices in component form, so that the above equation reads

$$
\sum_{k_1} \int_0^t dt_1 \int_0^{t_1} dt_2 A_{j k_1} (t_1) A_{k_1k} (t_2)
$$

We have that

\begin{align*}
& \sum_{k_1} \int_0^t dt_1 \int_0^{t_1} dt_2 A_{jk_1} (t_1) A_{k_1k} (t_2)
\nonumber \\
& = \frac{1}{2} \sum_{k_1} \int_0^t dt_1 \int_0^{t_1} dt_2 A_{jk_1} (t_1) A_{k_1k} (t_2) + \frac{1}{2} \sum_{k_1} \int_0^t dt_2 \int_{t_2}^t dt_1 A_{jk_1} (t_1) A_{k_1k} (t_2)
\end{align*}

where in the second integral on the RHS we are integrating over the same region but we have changed the order of integration (compare figures (a) and (b)). By renaming the integration variables in the second integral, we have

\begin{align*}
& \sum_{k_1} \int_0^t dt_1 \int_0^{t_1} dt_2 A_{jk_1} (t_1) A_{k_1k} (t_2)
\nonumber \\
& = \frac{1}{2} \sum_{k_1} \int_0^t dt_1 \int_0^{t_1} dt_2 A_{jk_1} (t_1) A_{k_1k} (t_2) + \frac{1}{2} \sum_{k_1} \int_0^t dt_1 \int_{t_1}^t dt_2 A_{jk_1} (t_2) A_{k_1k} (t_1) \qquad (**)
\end{align*}

[Figure "integration.jpg": the triangular integration region ##0 < t_2 < t_1 < t##, shown with the two orders of integration in panels (a) and (b)]


We define the time-ordered product of two matrices ##A(t_1)## and ##A(t_2)##,

$$
\mathcal{T} \{ A_{j k_1} (t_1) A_{k_1 k} (t_2) \} := \left\{
\begin{matrix}
A_{j k_1} (t_1) A_{k_1 k} (t_2) & : t_1 > t_2 \\
A_{j k_1} (t_2) A_{k_1 k} (t_1) & : t_2 > t_1
\end{matrix}
\right. \qquad (***)
$$

Using ##(***)##, it can be easily verified that ##(**)## can be written,

$$
\sum_{k_1} \int_0^t dt_1 \int_0^{t_1} dt_2 A_{jk_1} (t_1) A_{k_1k} (t_2)
= \frac{1}{2} \sum_{k_1} \int_0^t dt_1 \int_0^t dt_2 \mathcal{T} \{ A_{jk_1} (t_1) A_{k_1k} (t_2) \} .
$$

The definition ##(***)## obviously generalises: if ##t_{\alpha (1)} > t_{\alpha (2)} > \cdots > t_{\alpha (n)}##, where ##\alpha## is a permutation of ##\{ 1,2, \dots, n\}##, then

$$
\mathcal{T} \{ A_{j k_1} (t_1) A_{k_1 k_2} (t_2) \cdots A_{k_{n-1} k_n} (t_n) \} := A_{j k_1} (t_{\alpha (1)}) A_{k_1 k_2} (t_{\alpha (2)}) \cdots A_{k_{n-1} k_n} (t_{\alpha (n)})
$$
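
As a concrete illustration of this definition (my own sketch; `time_ordered_product` and the example ##A(t)## are just illustrative names, not something from the derivation), one can sort the time arguments in decreasing order and multiply the corresponding matrices in from the left:

```python
import numpy as np

def time_ordered_product(times, mats):
    # Multiply the matrices so that the factor with the latest time argument
    # stands furthest to the left, as in the definition above.
    order = np.argsort(times)[::-1]      # indices sorted by decreasing time
    result = np.eye(mats[0].shape[0])
    for idx in order:
        result = result @ mats[idx]
    return result

def A(t):
    # Arbitrary matrix-valued function for the example
    return np.array([[0.0, 1.0],
                     [-t, 0.0]])

times = [0.2, 0.9, 0.5]
# Since 0.9 > 0.5 > 0.2, this returns A(0.9) A(0.5) A(0.2)
print(time_ordered_product(times, [A(t) for t in times]))
```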

It can be shown that we can then formally write the solution, ##(*)##, as:

\begin{align*}
& M (t) = M (0) + \sum_{n=1}^\infty \frac{1}{n!} \int_0^t dt_1 \int_0^t dt_2 \cdots \int_0^t dt_n \mathcal{T} \left\{ A (t_1) \cdots A (t_n) \right\} M (0)
\end{align*}

or

$$
M (t) = \mathcal{T} \exp \left( \int_0^t A (s) ds \right) M (0)
$$

From which we have that the time-ordered exponential

$$
T [A] (t) := \mathcal{T} \exp \left( \int_0^t A (s) ds \right)
$$

is the solution to the initial value problem:

\begin{align*}
\frac{d}{dt} T [A] (t) & = A (t) T [A] (t)
\nonumber \\
T [A] (0) & = \mathbb{1}
\end{align*}

This is the generalisation of ##d/dt \exp (t {\bf A}) = {\bf A} \exp (t {\bf A})##.
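
The series itself can also be checked directly: compute the first few iterated integrals on a grid and compare the partial sum with a direct numerical solve of ##dM/dt = A(t) M(t)##. A rough sketch (my own check; the example ##A(t)##, the grid and the number of retained terms are arbitrary choices, and NumPy/SciPy are assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # Arbitrary time-dependent matrix (illustration only)
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.5 * t), -0.1]])

t_final, n_grid = 1.0, 300
ts = (np.arange(n_grid) + 0.5) * (t_final / n_grid)
dt = t_final / n_grid

# Iterated integrals I_n(t) = int_0^t A(t1) I_{n-1}(t1) dt1, with I_0 = identity,
# built on the grid by cumulative summation.
def next_term(prev):
    out = np.zeros_like(prev)
    acc = np.zeros((2, 2))
    for i, t1 in enumerate(ts):
        acc = acc + A(t1) @ prev[i] * dt
        out[i] = acc
    return out

terms = [np.array([np.eye(2) for _ in ts])]   # I_0(t) = identity on the grid
for _ in range(6):                            # keep the first six iterated integrals
    terms.append(next_term(terms[-1]))

M0 = np.array([[1.0, 0.5],
               [0.0, 1.0]])
M_series = sum(term[-1] for term in terms) @ M0   # partial sum at t = t_final

# Reference: integrate dM/dt = A(t) M(t) directly
def rhs(t, y):
    return (A(t) @ y.reshape(2, 2)).ravel()

sol = solve_ivp(rhs, (0.0, t_final), M0.ravel(), rtol=1e-10, atol=1e-12)
M_reference = sol.y[:, -1].reshape(2, 2)

print(np.max(np.abs(M_series - M_reference)))  # shrinks with more terms and grid points
```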
 
  • #6
Thank you for your in-depth reply Julian, it's going to take some time to digest this.

Are there applications for function-valued matrices outside of strictly differential equations?
 

FAQ: What are the applications of function-valued matrices?

What is a function-valued matrix?

A function-valued matrix is a matrix in which the elements are functions rather than fixed numerical values. Each element is a function of one or more variables, so evaluating the matrix at a given input produces an ordinary numerical matrix.

What are the benefits of using function-valued matrices?

Function-valued matrices are beneficial because they allow for the representation of complex relationships between variables. They can also be used to model dynamic systems and make predictions based on changing inputs.

What are the applications of function-valued matrices in mathematics?

Function-valued matrices are used in various areas of mathematics, including linear algebra, differential equations, and functional analysis. They are also used in numerical methods for solving differential equations and optimization problems.

How are function-valued matrices used in engineering?

In engineering, function-valued matrices are used for modeling and analyzing systems with multiple inputs and outputs. They are also used in control systems to design controllers that can adapt to changing conditions.

Can function-valued matrices be applied in other fields besides mathematics and engineering?

Yes, function-valued matrices have applications in other fields such as physics, biology, and economics. They can be used to model and analyze complex systems and make predictions about their behavior.
