Evaluating Nested Integral in MATLAB for General K

In summary: you can define the nested integral as a recursive function ##G##, where ##u_0,u_1,\ldots,u_{K-1}## are the independent limits. The integrand is as written above when ##u_j<\gamma - \sum_{k=1}^{j-1}u_k## for all ##j\in 1,\ldots,K## (with ##u_0=0##), and ##G(0,m)## otherwise.
  • #36
Thanks. I think you use too many list comprehensions, which are in effect FOR loops. I am not an expert in Python, but I think we should be able to use vectorization. The fact that R runs in 11 secs, while the Python version runs in 2 min and 22 secs indicates an inefficiency in the coding. Do you use vectorization in your R code instead of the for loops or list comprehension in Python? I will see what I can do.
 
  • #37
EngWiPy said:
Do you use vectorization in your R code
Yes the R code is vectorised wherever possible, which is in most of the code. I look forward to learning how to vectorise in Python!
 
  • Like
Likes EngWiPy
  • #38
andrewkirk said:
Yes the R code is vectorised wherever possible, which is in most of the code. I look forward to learning how to vectorise in Python!

That's why the R code is faster. I am not sure how to vectorize the code in Python myself, but I am researching it. Will update if I find anything. Thanks
 
  • #39
Let us take these two lines

Python:
...
u = [x * du for x in range(n)]
x = max([f(u1) for u1 in u]) * z
...

These can be replaced by

Python:
import numpy as np
#Vectorize the PDF function to be applied to a vector
f_vec = np.vectorize(f)

u = du * np.arange(n)
x = np.max(f_vec(u)) * z

I will try to spot where vectorization can be used in your code.
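As a concrete, runnable check of this replacement (the specific PDF ##f(u)=1/(1+u)^2## is just an assumed example, chosen to match the CDF ##1-1/(1+z)## that appears later in the thread):

```python
import numpy as np

def f(u):
    # assumed example PDF: f(u) = 1/(1+u)^2
    return 1.0 / (1.0 + u) ** 2

du, n, z = 0.01, 5, 2.0

# original list-comprehension version
u_list = [k * du for k in range(n)]
x_list = max(f(u1) for u1 in u_list) * z

# vectorized version: f uses only arithmetic operations, so it already
# accepts NumPy arrays and np.vectorize is not even needed here
u_arr = du * np.arange(n)
x_arr = np.max(f(u_arr)) * z
```

Note that np.vectorize is only a convenience wrapper around a Python-level loop; the real speed gain comes when f itself operates on whole arrays, as a purely arithmetic f already does.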
 
  • #40
After some thought, I don't think I get your algorithm. I understand you go from ##Z_1## to ##Z_1+Z_2## up to ##Z_1+Z_2+\cdots+Z_K##, but I am not sure how ##\text{Pr}\left[Z_1+Z_2+Z_3\leq z\right]## could be computed from ##\text{Pr}\left[Z_1+Z_2\leq z\right]##. The integration you wrote is the same integration we are trying to evaluate recursively. How do you go forward instead of backward? Let's take a simple example where ##K=3##: could you list the steps for evaluating the 2-dimensional integral forwards?
 
  • #41
EngWiPy said:
integration you wrote is the same integration we are trying to evaluate recursively.
Here's the integral again:
$$
F_{Z_k}(u) = \int_0^\infty f_U(v) F_{Z_{k-1}}(u-v)\,dv
$$
In the recursive version we start at the top, with ##k=3## and in the formula we use a recursive call to evaluate ##F_{Z_{k-1}}(u-v) ##. Recall that ##F_{Z_k}## is the CDF of ##Z_k##, which is the sum of k iid copies of the base random variable ##U## whose PDF is ##f_U##.
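For contrast, here is a minimal sketch of the recursive version being described (my reconstruction, not the actual code; it assumes the base CDF ##F_U(z)=1-1/(1+z)## used elsewhere in the thread, so ##f_U(u)=1/(1+u)^2##, and uses a plain rectangle rule):

```python
# Recursive sketch: level k recomputes level k-1 from scratch at every
# grid point, which is exactly the repeated work discussed below.
def F_Z(k, u, du=0.01):
    if u <= 0:
        return 0.0
    if k == 1:
        return 1.0 - 1.0 / (1.0 + u)          # closed-form base CDF (assumed)
    grid = [i * du for i in range(int(u / du) + 1)]
    # F_{Z_k}(u) ~ sum over v of f_U(v) * F_{Z_{k-1}}(u - v) * du
    return sum((1.0 / (1.0 + v) ** 2) * F_Z(k - 1, u - v, du) * du
               for v in grid)
```

Each call at level ##k## spawns a whole grid of calls at level ##k-1##, so the cost grows roughly like ##n^{K-1}##.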

The light-bulb moment was to realize that, if we start at the bottom (k=1) instead of the top (k=3), at each level k, we can save a vector that gives ##F_{Z_k}(u)## for ##u## covering the support interval at small increments. Then when we increment ##k## by 1 and apply the above formula to calculate the next level (one higher than the previous one), we don't need to recurse down to calculate ##F_{Z_{k-1}}(u-v)##. Rather we just look up the value in the vector we created at the previous step.

If you look at the code you'll see that it builds up a K x n matrix called convolution, for which the ##k##th column is a vector that saves ##F_{Z_k}(u)##. We look up values in the ##(k-1)##th column in the course of calculating each entry in the ##k##th column. We get a vector of integrand values and we then use Simpson's rule to estimate the integral that gives us ##F_{Z_k}(u)##. There's a bit of fiddling about to cope with the integration in cases where we have an even number of points, or only one point (Simpson requires an odd number greater than 2).

The vector of points to be integrated (multiplied by du) is set up as x in this line:
Code:
 x = [du * f(u[i-k]) * convolution[j-1][k] for k in range(i + 1)]
note how it looks up the previous column (j-1) of convolution to get values of ##F_{j-1}##, rather than recursing down to calculate them.
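A self-contained sketch of this bottom-up build (a reconstruction under assumptions, not the actual R code: it assumes the base CDF ##F_U(z)=1-1/(1+z)## from the thread and uses a plain rectangle rule in place of the Simpson's-rule machinery):

```python
import numpy as np

def F_U(z):
    return 1.0 - 1.0 / (1.0 + z)      # base CDF (assumed for illustration)

def f_U(u):
    return 1.0 / (1.0 + u) ** 2       # the matching PDF

K, du, max_lim = 3, 0.01, 20.0        # levels, step size, practical support
n = int(np.ceil(max_lim / du)) + 1
u = du * np.arange(n)

convolution = np.empty((K, n))
convolution[0] = F_U(u)               # level 1 is known in closed form
f_vals = f_U(u)
for k in range(1, K):
    for j in range(n):
        # F_{Z_{k+1}}(j du) ~ sum_i f_U(i du) * F_{Z_k}((j - i) du) * du:
        # look up the previous row instead of recursing down
        convolution[k, j] = np.sum(f_vals[:j + 1] * convolution[k - 1, j::-1]) * du
```

The key point is the lookup `convolution[k - 1, j::-1]`: each level is computed once, stored, and reused, so the total cost is roughly ##K n^2## instead of ##n^{K-1}##.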

EDITED: changed X to U for consistency with OP
 
Last edited:
  • Like
Likes EngWiPy
  • #42
OK, right. Yesterday I understood the math, but somehow I forgot it today. You are right. We view ##Z_k## as ##Z_{k-1}+U_k## for ##k=2,\,3,\,\ldots, K##, and ##Z_1=U_1##. That is a neat idea. Now I need to get my head around the implementation. How do you deal with the ##\infty## limit in the convolution? I think the upper limit should be ##u##, right?

I know I keep doing this, but you introduced a new notation here which is ##X##. To eliminate any confusion, and to make the notations consistent, I think the integral we want to evaluate eventually for ##K=3## is

[tex]F_3(z) = F_{Z_3}(z)=\int_0^{z}F_{Z_2}(z-u_2)f_U(u_2)\,du_2[/tex]

where ##z## here is ##\gamma## in the original formulation.

I need to read your post a couple more times to understand it. The code even in Python is not easy to follow.

Thanks
 
Last edited:
  • #43
@andrewkirk To be honest, I still don't understand the implementation. Let me explain what I do understand. We start with ##Z_1 = U_1##, then we have ##F_{Z_1}(z) = F_U(z) = 1-1/(1+z)##.

Then we can formulate ##Z_2## as ##Z_2 = Z_1 + U_2##, and thus

[tex]F_{Z_2}(z)=\int_{u_2=0}^zF_{Z_1}(z-u_2)f_U(u_2)\,du_2[/tex]

This integration is easy to evaluate, because we have ##F_{Z_1}(z)## in closed form. We can define a vector ##\mathbf{u}_2= 0,\,du,\ldots,\,(N-1)du##, where ##du## is the step size, and ##N = \lceil z/du \rceil## (I assume a uniform step size for the simplicity of exposition). And using vectorization, the above integral can be evaluated as

[tex]\text{sum}\left(F_{Z_1}(z-\mathbf{u}_2)*f_U(\mathbf{u}_2)*du\right)[/tex]
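This one-step estimate can be run directly as a sanity check (a sketch, reusing the closed-form ##F_{Z_1}## and ##f_U## from this post):

```python
import numpy as np

F_Z1 = lambda z: 1.0 - 1.0 / (1.0 + z)   # closed-form CDF of Z_1 = U_1
f_U = lambda u: 1.0 / (1.0 + u) ** 2     # the matching PDF of U

z, du = 2.0, 1e-4
N = int(np.ceil(z / du))
u2 = du * np.arange(N)                   # u2 = 0, du, ..., (N-1) du

# vectorized rectangle-rule estimate of F_{Z_2}(z)
F_Z2_z = np.sum(F_Z1(z - u2) * f_U(u2) * du)
```

The result should lie strictly between 0 and ##F_{Z_1}(z)##, since ##Z_2 \geq Z_1##.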

After that I get lost. I mean, we write ##Z_3 = Z_2 + U_3##, and

[tex]F_{Z_3}(z)=\int_{u_3=0}^zF_{Z_2}(z-u_3)f_U(u_3)\,du_3[/tex]

but the implementation will be

[tex]\text{sum}\left(F_{Z_2}(z-\mathbf{u}_3)*f_U(\mathbf{u}_3)*du\right) = \text{sum}\left(\left[F_{Z_1}(z-\mathbf{u}_3-\mathbf{u}_2)*f_U(\mathbf{u}_3)*du\right]*f_U(\mathbf{u}_2)*du\right)[/tex]

where ##\mathbf{u}_3-\mathbf{u}_2## means all possible values of ##u_3## minus all possible values of ##u_2##. But we need to make sure that ##z-u_3-u_2 > 0##, so we can write the argument as ##\max(0, z-u_3-u_2)##, or loop as

Python:
for m1 in range(len(u)):  # u = 0, du, 2*du, ..., (N-1)*du
    for m2 in range(len(u) - m1):
        arg = z - u[m1] - u[m2]

which I can then evaluate. For ##u_3 = 0##, ##F_{Z_1}(z-u_3-u_2)*f_U(u_2)*du## is already evaluated in the previous step. Am I heading in the right direction? I know this is a messy presentation, but that is only because I don't understand the approach very well yet. So bear with me, please.

When you implemented the new approach in R, what is the speed up factor compared to the original recursive code also in R?
 
Last edited:
  • #44
Where your exposition differs from my algorithm is that you are making ##du_k## dependent on ##z##, whereas in my algorithm the step size ##du## is selected at the beginning by a loop ('while x > max_dp') that makes it small enough that ##f(u)\,du<\textit{max_dp}## for some pre-set tolerance max_dp. Perhaps the algorithm could be done with varying increments, but it would make things unnecessarily complex.

Define a function simp that is given an odd number of consecutive values of a function at evenly spaced increments du, and returns the Simpson's Rule estimate of the integral of the function over the relevant interval.
Then define another function x_simp (Extended Simpson) that is given any finite number of values of the function and estimates the integral as:
  • zero if there is only one value
  • simp applied to those values if there is an odd number of values
  • the trapezoidal integral for the first incremental volume, added to simp applied to the odd sequence of values obtained by dropping the first one, if there's a nonzero, even number of values
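A sketch of these two helpers in Python (the names simp and x_simp follow the post; the actual R implementation may differ in details):

```python
def simp(y, du):
    """Simpson's rule for an odd number (>= 3) of evenly spaced values y."""
    assert len(y) >= 3 and len(y) % 2 == 1
    return du / 3.0 * (y[0] + y[-1]
                       + 4.0 * sum(y[1:-1:2]) + 2.0 * sum(y[2:-2:2]))

def x_simp(y, du):
    """Extended Simpson: accepts any positive number of values."""
    n = len(y)
    if n == 1:
        return 0.0                    # a single point spans no interval
    if n % 2 == 1:
        return simp(y, du)
    # even count: trapezoid over the first increment, then Simpson
    # over the remaining odd-length tail (a single point contributes 0)
    trap = du * (y[0] + y[1]) / 2.0
    return trap + (simp(y[1:], du) if n > 2 else 0.0)
```

Simpson's rule is exact for cubics, so simp reproduces ##\int_0^1 x^2\,dx = 1/3## exactly from five sample points.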
Let the practical support of ##f## be ##[0,\textit{max_lim})##, so that we can ignore the probability of ##U## exceeding max_lim. Then we define M to be max_lim / du rounded up to the next integer. At each nesting level k, we will estimate ##F_{Z_k}(u)## for ##u\in\{0, du, ...,M\,du\}##. The elements of this estimate make up a vector ##(A_0^k, ..., A_M^k)##, and we can calculate this directly for ##k=1## as ##A_r^1=F_U(r\, du)##.

Then for ##k>1## and natural number ##j\le M## we have:

$$F_{Z_k}(j\,du) =
\int_0^{j\,du} F_{Z_{k-1}}(j\,du - v)\,f_U(v)\,dv
\approx
\textit{x_simp}(A_0^{k-1}f_U(j\,du), A_1^{k-1}f_U((j-1)du), ..., A_{j-1}^{k-1}f_U(du), A_j^{k-1}f_U(0))
$$

Note how the vector of ##A^{k-1}_r##s stores all the information we'll need about ##F_{Z_{k-1}}## when we move up to level ##k##. We use that, together with the values of ##f_U## at multiples of ##du##, to calculate the next set of ##A^k_r##s.

So no recursive expansions are ever needed. In your post you've done a recursive expansion over two levels, so that your last formula contains both ##u_2## and ##u_3##. Rather than recursively expanding, we just access the A values from the previous level and use them in the Simpson integration estimate.

I hope that's clearer.

As I recall, the recursive version in R ran in about 3 mins 30, while the new version runs in about seven seconds, so the speed-up factor is about thirty. That's because the recursive version, at levels below the top one, is always redoing calcs that have already been done, whereas this approach avoids that.
 
  • Like
Likes EngWiPy
  • #45
Oh, really? I am doing recursion again :DD I told you I am still not sure about the implementation. As usual, I will have to read and analyse your post to understand it. It is very complex for me, to be honest.

A speed-up factor of 30x compared to the original is great. I believe we can get another 15-30x speed up factor if we use C++ using the same algorithm. Imagine when I have to evaluate not (K-1)-nested integrals, but 2*(K-1)-nested integrals, when I consider the order statistics case as we were discussing earlier on another thread! Computational speed is a stumbling block here.
 
