Evaluating Nested Integral in MATLAB for General K

In summary: the nested integral can be rewritten with independent limits by defining the integrand to be as written above whenever ##u_j<\gamma - \sum_{k=1}^{j-1}u_k## for all ##j\in 1,\ldots,K## (with ##u_0=0##) and zero otherwise, and then evaluated with a recursive function ##G##.
  • #1
EngWiPy
Hello,

How can I evaluate the following nested integral in MATLAB for a general value of ##K##

[tex]{\int\limits_{u_1=0}^{\gamma}\int\limits_{u_2=0}^{\gamma-u_1}\cdots \int\limits_{u_{K}=0}^{\gamma-\sum_{k=1}^{K-1}u_k}}f_{U_1}(u_1)f_{U_2}(u_2)\cdots f_{U_{K}}(u_{K})\,du_1du_2\cdots du_{K}[/tex]

where ##f_{U_k}(u_k)## is the PDF of the random variable ##U_k##, and ##\gamma\geq 0## is a constant.

Thanks in advance
 
  • #3
That works for double integrals. I am looking for K nested integrals.
 
  • #4
Any hints?
 
  • #5
I'm not familiar with the exact application you have, and you might be able to use properties of the PDF, or knowledge about whether the random variables are independent, to simplify things. That said...

Do you have the function values as vectors or matrices? If so, you can use trapz and nest the integrals to your heart's content. For a given value of K the array of function values would have K dimensions, and each integral evaluates over one dimension, so the final integral reduces a vector to a scalar.
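For instance, here is a rough sketch of that idea in Python/NumPy terms (MATLAB's trapz nests in the same way), using a toy integrand over a fixed hyper-rectangle rather than your interdependent limits:

Python:
import numpy as np

# toy example with independent limits: integrate w^2 + x^2 + y^2 + z^2 over [0, 5]^4
x = np.linspace(0, 5, 31)
W, X, Y, Z = np.meshgrid(x, x, x, x, indexing='ij')
vals = W**2 + X**2 + Y**2 + Z**2           # 4-D array of integrand values

res = vals
for axis in range(3, -1, -1):              # each pass integrates out one dimension
    res = np.trapz(res, x, axis=axis)
print(res)                                  # approximately 20833.3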

Otherwise, if you have the functional expressions for the different [itex]f_{U_k}[/itex], then you would need to use integral/integral2/integral3, which limits how many nested numerical integrations you could do.

The third option is to use int() in the Symbolic Math Toolbox, which lets you calculate the answer symbolically. You should be able to nest as many integrals as you want with that.
 
  • #6
Unfortunately, I couldn't simplify the result, or write it as a product of one-dimensional integrals. The random variables ##U_k## are independent, but the integration limits depend on the other random variables, so the integrals are inseparable. I am looking to write a piece of code in MATLAB that allows me to evaluate the K nested integrals without changing the code for each value of K. Could you elaborate on the last method using int()?
 
  • #7
Here's an example that does a 4-D integral

[tex] \int_0^5 \int_0^5 \int_0^5 \int_0^5 \left( w^2 + x^2 + y^2 + z^2 \right) \mbox{dw} \, \mbox{dx} \,\mbox{dy} \,\mbox{dz}[/tex]

Code:
>> syms w x y z
>> f = w^2 + x^2 + y^2 + z^2;
>> dw = int(f,'w',0,5)
dw =

5*x^2 + 5*y^2 + 5*z^2 + 125/3

>> dx = int(dw,'x',0,5)
dx =

25*y^2 + 25*z^2 + 1250/3

>> dy = int(dx,'y',0,5)
dy =

125*z^2 + 3125

>> dz = int(dy,'z',0,5)
dz =

62500/3

You can do this for definite OR indefinite integrals, and you can nest as many calls as necessary. You can implement this as a loop that runs K times to do the K integrals, with the first pass integrating the defined function, and the subsequent passes integrating the result of the previous pass.
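For example, a rough sketch of that loop in Python with SymPy (the same pattern can be used with MATLAB's int(), which is what the session above uses):

Python:
import sympy as sp

K = 4
syms = sp.symbols(f'u1:{K+1}')            # u1, u2, ..., uK
expr = sum(s**2 for s in syms)            # the sum-of-squares integrand from the example above
for s in syms:
    expr = sp.integrate(expr, (s, 0, 5))  # each pass integrates the result of the previous pass
print(expr)                               # 62500/3, matching the session above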
 
  • #8
That's interesting. Yes, I am looking to write a for loop to evaluate the integrals to automate the process. I will check this way out. Thanks
 
  • #9
I can write a for loop for the example you provided, because the limits are independent of each other. This takes only one for loop. But in my case, the limits are interdependent. How can I handle this using this approach? I suspect we need ##K-1## for loops to iterate over all possible values of the limits, which is not efficient.
 
  • #10
EngWiPy said:
I can write a for loop for the example you provided, because the limits are independent of each other. This takes only one for loop. But in my case, the limits are interdependent. How can I handle this using this approach? I suspect we need ##K-1## for loops to iterate over all possible values of the limits, which is not efficient.
First note that the above integral is written incorrectly. The order of the ##du_k##s on the right should be the reverse of the order of the integrals on the left. A last-in-first-out principle applies. I'll assume the intended order is that of the integrals on the left.

You can write your integral with independent limits, with every upper limit being ##\gamma##, by defining the integrand to be
  • what is written above if ##u_j<\gamma - \sum_{k=1}^{j-1}u_k## for all ##j\in 1,...,K##, where we set ##u_0=0##, and
  • zero otherwise.
What that does is implicitly apply the limits by setting the integrand to zero whenever one or more of the limits is breached.
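For instance, a small sketch (not part of the suggestion above, and using a hypothetical example PDF ##f_{U_k}(u)=e^{-u}##) of that indicator trick for ##K=3## on a uniform grid:

Python:
import numpy as np

gamma, n = 1.0, 200
u = np.linspace(0.0, gamma, n)
U1, U2, U3 = np.meshgrid(u, u, u, indexing='ij')
integrand = np.exp(-(U1 + U2 + U3))       # product of the three example PDFs
integrand[U1 + U2 + U3 > gamma] = 0.0     # zero the integrand wherever a limit is breached
du = u[1] - u[0]
print(integrand.sum() * du**3)            # crude rectangle sum, roughly 0.08 for this example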

Prima facie, I would expect that to be less efficient than manually programming the integration by adding up the areas of incremental rectangles, because that can use the interdependent limits to dramatically reduce the number of iterations needed.

But it is possible that MATLAB has some super-efficient routines for numerical integration when you use its built-in functions (eg they would use compiled code rather than interpreted code) that would more than offset manually programming the integration. I'd be surprised if that were the case, but it may be.

For the manual programming, you need to define the nested integral as a recursive function. The mathematics of this is as follows:

EDIT: The following equations and the code that follows contain some minor, but important corrections, plus a simplification (reduction in number of arguments to the recursive function) that were made on 19/7/18.

We can write this using a recursive function ##G:\mathbb R\times \mathbb Z_+\rightarrow\mathbb R##, defined as:
$$G(h,m) =
\begin{cases}
1 &\mbox{if } m=0 \\
\int_0^{h} G(h - u_m,m-1)\,f_{U_m}(u_m)\,du_m
& \mbox{otherwise }
\end{cases}
$$

The full nested integral is
$$G(\gamma,K)$$

This reverses the order of nesting of the integrals, but that doesn't change the result, since the random variables are all independent.

The Python code would be:
Code:
granularity = 1000
gamma = ...  # set value of gamma
K = ...      # set value of K

# assume that all the f_{U_k} are the same, so we only need one definition:
def f_U(u):
    # define function f_U here
    ...

def integ(h, m):
    if m == 0:
        res = 1
    else:
        du = h / granularity
        res, u = 0, 0
        # integrate across this u_m
        while u < h:
            u += du
            res += integ(h - u, m - 1) * f_U(u) * du
    return res
# end of definition of function integ

# call the function to find the nested integral
prob = integ(gamma, K)
print(prob)
 
Last edited:
  • #11
andrewkirk said:
First note that the above integral is written incorrectly. The order of the ##du_k##s on the right should be the reverse of the order of the integrals on the left. A last-in-first-out principle applies. ...

I think that the limits of the integrals determine the order. But you are right, it is more accurate to reverse the order of the differentials, because the derivation produces them in that order (as illustrated below).

For the implementation, I still don't get it. I think the reason is that I don't get the math behind it. Let us step back a bit. The following integral can be approximated using the rectangular method as

[tex]\int_a^bf(x)\,dx = \lim_{N\to \infty}\sum_{i=1}^{N-1}f(x_i)(x_{i+1}-x_i)[/tex]

where ##N## is the number of points, ##x_1 = a##, and ##x_N = b##.

Now let's take the simple case of 2 random variables in our case. We want to evaluate
[tex]\text{Pr}\left[U_1+U_2\leq \gamma\right]=\text{E}_{U_1}\left\{\text{Pr}\left[U_2\leq \gamma-U_1\mid U_1=u_1\right]\right\}\\=\int_{u_1=0}^{\gamma}\text{Pr}\left[U_2\leq \gamma-u_1\right]f_{U_1}(u_1)\,du_1\\=\int_{u_1=0}^{\gamma}\underbrace{F_{U_2}(\gamma-u_1)f_{U_1}(u_1)}_{G_1(u_1)}\,du_1=\lim_{N\to \infty}\sum_{i=1}^{N-1}G_1(u_{1,i})(u_{1, i+1}-u_{1, i})[/tex]

where ##u_{1,1}=0##, and ##u_{1,N}=\gamma##.

For the case of 3 random variables, we want to evaluate
[tex]\text{Pr}\left[U_1+U_2+U_3\leq \gamma\right]=\text{E}_{U_1}\left\{\text{Pr}\left[U_2+U_3\leq \gamma-U_1\mid U_1=u_1\right]\right\}\\=\int_{u_1=0}^{\gamma}\text{Pr}\left[U_2+U_3\leq \gamma-u_1\right]f_{U_1}(u_1)\,du_1\\=\int_{u_1=0}^{\gamma}\text{E}_{U_2}\left\{\text{Pr}\left[U_3\leq \gamma-u_1-U_2\mid U_2=u_2\right]\right\}f_{U_1}(u_1)\,du_1\\=\int_{u_1=0}^{\gamma}\left[\int_{u_2=0}^{\gamma-u_1}\text{Pr}\left[U_3\leq\gamma-u_1-u_2\right]f_{U_2}(u_2)\,du_2\right]f_{U_1}(u_1)\,du_1= \int_{u_1=0}^{\gamma}\left[\int_{u_2=0}^{\gamma-u_1}\underbrace{F_{U_3}(\gamma-u_1-u_2)f_{U_2}(u_2)}_{G_2(u_1+u_2)}\,du_2\right]f_{U_1}(u_1)\,du_1[/tex]

I get lost here on how to approximate the nested integrals as summations. That is why I don't get the logic behind the code.

EDIT (to give it a try): for a given ##u_1##, the inner integral can be approximated as
[tex]\int_{u_2=0}^{\gamma-u_1}G_2(u_1+u_2)\,du_2=\lim_{N\to \infty}\sum_{i=1}^{N-1}G_2(u_{1}+u_{2,i})(u_{2, i+1}-u_{2, i})[/tex]

where ##u_{2,1} = 0##, and ##u_{2,N}=\gamma-u_1##. But ##u_1## changes from ##0## to ##\gamma## as ##u_{1,1}=0,\,u_{1,2},\ldots,\,u_{1,N}=\gamma##. So, at each point ##u_{1,j}## for ##j=1,\,2,\,\ldots,\,N##, we have

[tex]\int_{u_2=0}^{\gamma-u_{1,j}}G_2(u_{1,j}+u_2)\,du_2=\lim_{N\to \infty}\sum_{i=1}^{N-1}G_2(u_{1,j}+u^{(j)}_{2,i})(u^{(j)}_{2, i+1}-u^{(j)}_{2, i})[/tex]

where ##u^{(j)}_{2, 1} = 0##, and ##u^{(j)}_{2, N} = \gamma-u_{1,j}##. Right? So, now to compute the whole integral as a summation approximation, we can write

[tex]\int_{u_1=0}^{\gamma}\left[\int_{u_2=0}^{\gamma-u_1}G_2(u_1+u_2)\,du_2\right]f_{U_1}(u_1)\,du_1\\=\lim_{N\to \infty}\sum_{j=1}^{N-1}\left[\sum_{i=1}^{N-1}G_2(u_{1,j}+u^{(j)}_{2,i})(u^{(j)}_{2, i+1}-u^{(j)}_{2, i})\right]f_{U_1}(u_{1,j})(u_{1,j+1}-u_{1,j})[/tex]

Is this correct?
 
Last edited:
  • #12
@EngWiPy : I spotted some silly errors in my post. The last argument to the function G, the vector, should not be there. It is not used in the Python code and should not be used in the function definition. We do need to pass a vector along, and use it, when the central integrand is a joint density function of all components of the vector and the components are interdependent. But in this case the components are independent, so we can deal with them one at a time as we emerge outwards through the nesting levels.

Also, I forgot to put the recursive call into the function in the code so, as written, it never dived down through the nesting levels.

I apologise for the errors, which may have made things harder than necessary to follow.
I've corrected my post above - both the code and the equations - and I have also made a simplification that reduces to two the number of arguments to the recursive function, which we can do by reversing the order of nesting of the integrals (which has no mathematical effect).

Regarding your post:

Your approach has the right general idea, but differs from (the corrected version of) what I wrote in the following ways:

  1. I have a single function G (labelled 'integ' in the code), whereas you have a series of functions ##G_1,G_2,...##. The differences between different Gs in your case is covered in mine by the 2nd argument to my (corrected) G function. When the 2nd argument to my function is ##K-m##, it corresponds to your ##G_m##, but with the following difference:
  2. My G evaluates as an integral that gives a value of the CDF ##F_m##, except in the terminal case where ##m=K+1## where it evaluates to 1. Your ##G_m## is the whole integrand, which is the CDF integral multiplied by a value of the PDF ##f_m##.
Perhaps with my corrections and simplification it will be clearer how the recursive process works.
 
Last edited:
  • #13
Thanks for the corrections, and for pointing out the differences. Having a single function ##G## with two arguments makes it simpler later in the code when the function calls itself recursively. Could you write the mathematical definition of your ##G## function?
 
  • #14
I think it is more convenient for the mathematical notation to average over ##U_K, U_{K-1},\,\ldots,U_2## instead of averaging over ##U_1, U_2, \,\ldots,\,U_{K-1}##. In this case we can write ##G(\gamma, K) = \text{Pr}\left\{\sum_{k=1}^KU_k\leq \gamma\right\}## as

[tex]G(h, k) = \int_{u_k=0}^{h}G(h-u_k, k-1)f_{U_k}(u_k)\,du_k[/tex]

for ##k=2,\,3,\,\ldots,\,K##, and ##G(h, 1)## is the CDF of a single variable evaluated at ##h## (I don't see this value returned in your code. Maybe you are doing it differently?).

I will try to write the code in another way using Python. For simplicity, let us assume that the random variables ##\{U_k\}_{k=1}^K## are i.i.d. exponential random variables with parameter 1. So, the PDF and CDF of each random variable are given by ##f_U(u)=e^{-u}## and ##F_U(u)=1-e^{-u}##, respectively.

Now we can write the code for this as following:

Python:
import math

#Define the CDF of U
def F(u):
    return 1 - math.exp(-u)

#Define the PDF of U
def f(u):
    return math.exp(-u)

granularity = 1000

def G(h, k):
    if k == 1:
        res = F(h)
    else:
        du = h/granularity
        res, u = 0, 0
        while u < h:
            res += G(h-u, k-1)*f(u)*du
            u += du
    return res

#Find the total probability (gamma and K are set elsewhere)
prob = G(gamma, K)
print(prob)

Is this correct?

Using Monte-Carlo simulations, the value of ##G(1,4)## can be estimated as

Python:
import numpy as np

count = 0
N = 1000000
for n in range(0, N):
    if np.sum(np.random.exponential(1, (1, 4))) <= 1:
        count += 1
print(count/N)

which gives 0.017 as a result. The numerical evaluation of the same function with a granularity of 300 gives 0.019 (a granularity of 1000 took too long). I also tried ##G(0.5, 5)##: Monte-Carlo simulation gives 0.00018, while the numerical evaluation with granularity 100 gives 0.00016 (as ##K## increases, increasing the granularity increases the evaluation time, so I had to reduce it for ##K=5##). This seems to work, but the question is: is this a good approximation?
 
Last edited:
  • #15
EngWiPy said:
Could you write the mathematical definition of your ##G## function?
The mathematical definition of G is in the corrected post #10, immediately above the code.

I am away from home this weekend with only limited access to a computer. But I did manage to do (on paper) an analytic evaluation of the nested integral in the case where U is exponential with PDF ##f_U(u) = e^{-u}##. We can show by induction that, if the nesting depth is ##n##, the value of the ##m## innermost levels for ##m\le n## is:

$$ 1 - e^{-(\gamma - s_m)}\sum_{k=0}^{m-1} \frac{(\gamma - s_m)^k}{k!}$$

where ##s_m\triangleq \sum_{j=1}^{n-m} u_j##.

Hence the value of the full nested set of ##n## integrals is:
$$ 1 - e^{-\gamma}\sum_{k=0}^{n-1} \frac{\gamma ^k}{k!}$$

That value can be evaluated directly, without the need for any computer programming.

Note that the summation part is the first ##n## terms of the Taylor series for ##e^{\gamma}## so that, for a fixed ##\gamma##, the value asymptotically approaches zero as ##n\to\infty##, which is what we would expect.
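A quick numerical check of this closed form against the Monte-Carlo and numerical estimates quoted earlier in the thread:

Python:
import math

# P(sum of n iid Exp(1) <= gamma) = 1 - e^{-gamma} * sum_{k<n} gamma^k / k!
def erlang_cdf(gamma, n):
    return 1 - math.exp(-gamma) * sum(gamma**k / math.factorial(k) for k in range(n))

print(erlang_cdf(1.0, 4))   # ~0.019, vs ~0.017 (MC) and 0.019 (numerical) above
print(erlang_cdf(0.5, 5))   # ~0.00017, vs ~0.00018 (MC) and 0.00016 (numerical) above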
 
  • #16
The exponential assumption is just to test if the numerical approach works. In my case, as we were discussing in the other thread, ##U_k=X_k/Y_k##, where ##X_k## and ##Y_k## are exponential random variables with parameter 1. The sum of ##K## i.i.d. exponential random variables with parameter 1 is a Gamma (Erlang) random variable with shape parameter ##K## (equal in distribution to half of a ##\chi^2## random variable with ##2K## degrees of freedom).
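For reference, the CDF and PDF of that ratio (which the code below uses) follow by conditioning on ##Y_k##:

[tex]F_{U_k}(u)=\text{Pr}\left[\frac{X_k}{Y_k}\leq u\right]=\int_0^{\infty}e^{-y}\left(1-e^{-uy}\right)\,dy=1-\frac{1}{1+u},\qquad f_{U_k}(u)=\frac{1}{(1+u)^2}[/tex]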
 
  • #17
OK, now I put the code together for the actual PDF and CDF of the random variable, and did the numerical and Monte-Carlo simulation for a range of values of ##\gamma##. The code is as follows:

Python:
#************************** Import necessary libraries******************************************
import numpy as np
import math
import matplotlib.pyplot as plt
%matplotlib inline

#******************************Set the constant scalars and vectors***************************
granularity = 700
K = 3
beta_dB = 3
beta = 10**(beta_dB/10)
EF = 100

gammadB = np.arange(-5,11)
gammaVec = 10**(gammadB/10)

N = 100000 # For simulation

#******************************Define functions to be used***************************************
#Define the CDF of U
def F(u):
      return 1-1/(u+1)
#Define the PDF of U
def f(u):
     return 1/((1+u))**2
 

def G(h, k):
    #print(f'h is {h}, and k is {k}')
    if k == 1:
        res = F(h)
    else:
        du = h/granularity
        res =  0
        u = 0
        while u < h:
            res += G(h-u, k-1)*f(u)*du
            u += du
    return res
#*******************Find the numerical and simulation values of the integral******************
Pout = []
PoutSim = []
for gamma in gammaVec:
    gammath = (gamma/beta)*(EF-beta*(K-1))
    # Numerical
    Pout.append(1-G(gamma, K))
    # Simulation
    count = 0
    for n in range(0, N):
        if np.sum(np.random.exponential(1, (1, K))/np.random.exponential(1, (1, K))) <= gamma:
            count += 1
    PoutSim.append(1-count/N)
 
#**************************Visualization******************

fig, ax = plt.subplots(figsize = (10, 8))
major_ticks = np.arange(0.1, 1.1, 0.1)
minor_ticks = np.arange(0.1, 1.1, 0.1)
ax.set_yticks(major_ticks)
ax.set_yticks(minor_ticks, minor=True)

ax.set_yscale('log')

ax.grid(which='minor', alpha=0.2, linestyle='--', linewidth=1, color='k')
ax.grid(which='major', alpha=0.5, linestyle='-', linewidth=1, color='k')
ax.plot(gammadB, Pout, 'r-*', label = 'Numerical')
ax.plot(gammadB, PoutSim, 'k-o', label = 'Sim')

ax.legend()
ax.grid(True)

plt.show()

fig.savefig("test.png", bbox_inches='tight')

The resulting figure is attached as test.png. It shows that both approaches yield very close curves, which is good.

What I actually want to do is to replace gamma by gammath. The changes would be

Python:
 ...
    Pout.append(1-G(gammath, K))
    ...
        if np.sum(np.random.exponential(1, (1, K))/np.random.exponential(1, (1, K))) <= gammath:
    ...

This resulted in test1.png, which is also attached. Here the curves are very different. Why? The simulation curve makes more sense than the numerical evaluation. This implies that there is something wrong in the numerical implementation of the integral. What is wrong in the numerical implementation?
 

Attachments

  • test.png
  • test1.png
  • #18
@EngWiPy You are using a constant relative granularity as your integrating range increases. That means that the size of the integrating increment du gets bigger as the outer limit gamma or gammath gets bigger. Since the overstatement of the simple rectangular approach is proportional to the size of the increments, the integrating error gets larger when you get to very large ranges, as is the case with gammath, which goes to over 600.

You could fix this by setting the granularity to be a fixed absolute, rather than relative, amount. eg set du globally to be 0.01.
 
  • #19
andrewkirk said:
@EngWiPy You are using a constant relative granularity as your integrating range increases. That means that the size of the integrating increment du gets bigger as the outer limit gamma or gammath gets bigger. Since the overstatement of the simple rectangular approach is proportional to the size of the increments, the integrating error gets larger when you get to very large ranges, as is the case with gammath, which goes to over 600.

You could fix this by setting the granularity to be a fixed absolute, rather than relative, amount. eg set du globally to be 0.01.

Thanks. I am testing it. 0.01 wasn't small enough, so I decreased it to 0.001, and now it is taking too long. I will report back when I have some results.
 
  • #20
EngWiPy said:
Thanks. I am testing it. 0.01 wasn't small enough, so I decreased it to 0.001, and now it is taking too long. I will report back when I have some results.
Re run-time, there are two simple strategies that, taken together, should greatly reduce it:

1. Use trapezoidal rather than rectangular incremental areas. To do that you'll need to insert another variable to hold the previous value in each iteration of the 'while u<h' loop, which you'll use together with the new value to calculate the incremental trapezium's area.

2. Based on the PDF, generate a set of integrating increments that starts at say 0.001 and increases as the PDF becomes flatter. Then for each u in the central loop, select the du that corresponds to the current value of u. That way you'll be using fine increments in the early part of the curve where approximations are least accurate, and coarse increments in the right tail. For example, you could set du to the greater of 0.001 and ##0.0005\,e^{u/5}##, and change the coefficient and the 5 in the exponent as needed to balance runtime against accuracy. A rough sketch combining both ideas follows below.
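Something like this (an untested paraphrase, assuming the F and f definitions and import math from the earlier posts):

Python:
def G(h, k):
    if k == 1:
        return F(h)
    res = 0.0
    u = 0.0
    prev = G(h, k - 1) * f(0.0)                      # integrand at the left edge
    while u < h:
        du = max(0.001, 0.0005 * math.exp(u / 5))    # coarser steps in the flat right tail
        u_next = min(h, u + du)
        cur = G(h - u_next, k - 1) * f(u_next)
        res += 0.5 * (prev + cur) * (u_next - u)     # trapezium area
        prev, u = cur, u_next
    return res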
 
  • #21
Where would No. 2 be inserted in the code?

What I did is that I fixed du for each gammath as gammath/granularity. To do this, I added a new argument to the function G(h, K, du), such that du remains constant for each call of the function from outside. This got a little faster, but not by much.

I ran the same code on MATLAB, and it was MUCH faster (I ran the code the whole night on Python, and it wasn't finished in the morning. When I got to the office and used MATLAB for the same parameter values, I got the result in less than 5 minutes!).

Python has the option of parallel programming, but I am not sure if we can use it here, and how.
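One possibility (an untested sketch, using the G, K, and gammaVec definitions from the code above) would be to parallelize the outer loop over the gamma values, since each evaluation of ##G(\gamma, K)## is independent; this would not speed up a single nested integral by itself:

Python:
from multiprocessing import Pool

def outage(g):
    return 1 - G(g, K)          # uses the G, F, f and K defined earlier

if __name__ == '__main__':
    with Pool() as pool:        # one worker process per CPU core by default
        Pout = pool.map(outage, list(gammaVec))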

PS: OK, on my office computer I have an i7 with 8GB of RAM, while at home on my laptop I have an i5 with 6GB of RAM. So, maybe the difference lies there. Unfortunately, I don't have Python on my office computer to test this hypothesis.
 
Last edited:
  • #22
This implementation in MATLAB runs in less than 30 seconds on my laptop. Interesting problem. If I understand your math correctly, there may be a way to call the built-in integration routines for a comparison. If I get a chance, I will look at this again.

Code:
function [] = nestint()
% www.physicsforums.com/threads/nested-integrals.951242/#post-6030169
K = 3;
beta_dB = 3;
% beta = 10^(beta_dB/10);
% EF = 100;
gammadB = -5:10;
L = length(gammadB);
gammaVec = 10.^(gammadB/10);
N = 100000;
Pout = zeros(1,L);
PoutSim = zeros(1,L);

for gamma = 1:L
%     gammath = (gammaVec(gamma)/beta)*(EF-beta*(K-1));
    Pout(gamma) = (1-G(gammaVec(gamma), K));
    count = 0;
   
    for n =0:(N-1)
        if sum(exprnd(1,1,K)./exprnd(1,1,K)) <= gammaVec(gamma)
            count = count +  1;
        end
    end
   
    PoutSim(gamma) = 1-count/N;
end

semilogy(gammadB, Pout)
hold on
semilogy(gammadB, PoutSim)

function res = G(h, k)
du = .001;

if k == 1       
    res = F(h);    
else                
    res =  0;        
    u = 0;        
    while u < h            
        res = res +  G(h-u, k-1)*f(u)*du;            
        u = u + du;    
    end
end

function x = F(u)
x = 1-1./(u+1);

function x = f(u)
x = 1./((1+u)).^2;
 
  • #23
I've done an R implementation (Python turned out to be too painful, with my relative inexperience in that language) below. I used two techniques to improve accuracy:
  1. Setting the integration increment to be inversely proportional to the integrand, with a maximum increment size
  2. Using cubic spline interpolation to reduce errors from nonlinearity
It starts to misbehave when the gamma integration limit is very large, I think because the integrand gets so close to zero, and it gets in a mess if it goes below zero, which can easily happen with a cubic spline interpolation. Putting a floor of zero creates big errors - I don't know why yet, because I'd expect the errors from that to be negligible. So instead I got it to switch to trapezoidal integration if any of the cubic splines dip below zero.

Attached are the code and three plots. The most accurate used cubic splines with a maximum increment of 3, and took 3.5 mins to run. The next was cubic with max increment of 4, and took 1.5 mins to run. Last is one that used trapezoidal integration, which took 14 secs to run. You can see it loses accuracy for gamma (integration limit) beyond 100, but the probability is already below 3% at that stage. The log scale exaggerates the errors.

I think the key point is that for deeply nested integrals like this, Monte Carlo simulation is actually more accurate for a given runtime than numerical integration. The MC sim runs in a couple of seconds and is much smoother (which I take to be a sign of accuracy).
Code:
dev.new()
time.save <- Sys.time()
#****************************^Set the constant scalars and vectors************************^*
spline.on <- FALSE # TRUE
diag.on <- FALSE
do.big.ones <-  FALSE # TRUE #
granularity <- 700 # 700
max.increment <- 10 # set to 3 for high accuracy (run takes about 3.5 mins); 4 gives reasonable accuracy and runs in about 1.5 mins
K <- 3
beta0_dB <- 3
beta0 <- 10^(beta0_dB/10)
EF <- 100

set.seed(123)

gamma.dB <- (-5):11
gamma.Vec <- 10^(gamma.dB/10)

# du <- max(gamma.Vec) / granularity
dV <- 1 / granularity

N <- 100 * 1000 # For simulation

#****************************^Define functions to be used************************************^*
print.diag <- function(t){ # print the argument t only if global flag diag.on is TRUE
  if (diag.on) print (t)
}

# Define the CDF of U
F <- function(u){
      1-1/(u+1)
}
# Define the PDF of U
f <- function(u){
     1/((1+u))^2
}
# curve(f,from = 0, to = 10)

## a function for integration on a piece of the spline
piecewise_int <- function (hi, yi, bi, ci, di) {
  yi * hi + bi * hi ^ 2 / 2 + ci * hi ^ 3 / 3 + di * hi ^ 4 / 4
}

integ_fn <- function(x, y){ # do an integral of the cubic spline fitted to the points x, y
  sf <- splinefun(x, y, 'natural')
  # information for integration
  n <- length(x)
  int_info <- with(environment(sf)$z,
                   list(h = diff(x), y = y[-n], b = b[-n], c = c[-n], d = d[-n])
  )
  res <- do.call(piecewise_int, int_info)
  if (any(res < 0)){
    # do trapezoidal integration instead
    sum(diff(x) * (y[-length(y)] + 0.5 * diff(y)))
  } else
  ## cubic spline integration on all pieces
    sum(pmax(0,res))
}

big.up.gamma <- function(g) (g / beta0) * (EF - beta0 * (K-1))

G <- function(h, h0, k, diag){ # prob that the sum of k iid vbls lies between h0 and h
  u.all <- I.all <- NULL
  if (k == 1){
    res <- F(h) - F(h0)
  } else { # not base case. Set up to recurse
    res <- res0 <- curv <- 0
    u3 <- I3 <- rep(0,3)
    while (u3[2] < h) {
      I3[3] <- G(h - u3[3], max(0, h0 - u3[3]), k-1, diag) * f(u3[3])
      if (u3[3] > 0){
        I.av <- 0.5 * sum(I3[2:3])
        res0 <- res0 + du * I.av
        if (u3[2] > 0){
          du <- min(max.increment, dV / I.av) # else leave du at initial low value
  
          curv <- ((I3[3] - I3[2]) / (u3[3] - u3[2]) - (I3[2] - I3[1]) / (u3[2] - u3[1])) / (0.5 * (u3[3] - u3[1]))
          curvhist <<- c(curvhist, curv)
  
          # currently disable the curvature adjustment, as it slows down run and doesn't change output
          # du <- du / max(1, abs(curv) / 0.5)
  
        } # end inner if
      } else # end outer if, except for else on next line
        du <- 1 / granularity # first point in integration
      u.all <- c(u.all, u3[3])
      I.all <- c(I.all, I3[3])
      u3 <- c(u3[-1], min(h, u3[3] + du))
      I3 <- c(I3[-1], -1)
    } # end while
    x <- c(u.all, I.all)
    n <- length(u.all)
    if (n > 0){
      if (spline.on){
        res <- integ_fn(u.all, I.all)
        #res <- max(0, res)
      } else {
        res <- sum(diff(u.all) * (I.all[-n] + 0.5 * diff(I.all)))
      }
    } else res <- 0
  } # end outermost if

  res
}
#****************^*Find the numerical and simulation values of the integral****************^
curvhist <- NULL
Pout <- 1
PoutSim <- NULL

call.count <- 0

exp.inv.cdf <- function(x){ # inverse CDF of exponential with parameter 1
  -log(1 - x)
}

gamma.Vec <- c(gamma.Vec, big.up.gamma(gamma.Vec))
# Do Numerical integration version
gamma.index <- 1
gamm.last <- 0
for (gamma0 in gamma.Vec){
    gamm <- if (do.big.ones) big.up.gamma(gamma0) else gamma0
    print(paste('gamma.index is', gamma.index,'-------------------------'))
    x <- tail(Pout, 1) - G(gamm, gamm.last, K, c(gamm, rep(-1, K - 1)))
    Pout <- c(Pout, x)
 
    if (x < 0){
      x=x
    }
 
    gamm.last <- gamm
    gamma.index <- gamma.index + 1
}
Pout <- Pout[-1]

# now do simulation version
simx <- exp.inv.cdf(runif(K * N)) / exp.inv.cdf(runif(K * N))
dim(simx) <- c(N, K)
PoutSim <- outer(apply(simx, 1, sum), gamma.Vec, "<=")
PoutSim <- 1 - apply(PoutSim, 2, sum) / N

#************************^Visualization****************^
time.end <- Sys.time()
time.gone <- difftime(time.end, time.save, units = "mins")
print(paste('time taken is', time.gone, "minutes"))

plotcurves <- function(gamma.Vec.x, Pout.x, PoutSim.x){
  Pout.x1 <- pmax(Pout.x, min(Pout.x[Pout.x > 0])) # set min of the min positive Pout value
  lims <- rep(max(c(Pout.x1, PoutSim.x)), 2)
  lims[1] <- min(c(Pout.x1, PoutSim.x))
  plot(gamma.Vec.x, Pout.x1, type='b',log="y", ylim = lims)
  lines(gamma.Vec.x, PoutSim.x, type='l')
  grid()
}

dev.new()
plotcurves(gamma.Vec, Pout, PoutSim)
 

Attachments

  • plot.lim.3.png
  • plot.lim.4.png
  • plot.trapezoidal.lim.10.png
Last edited:
  • #24
mfig said:
This implementation in MATLAB runs in less than 30 seconds on my laptop. Interesting problem. If I understand your math correctly, there may be a way to call the built-in integration routines for a comparison. If I get a chance, I will look at this again.


May I ask what your computer specifications are? You don't seem to have changed anything in the code. I am not sure if Python is slower than MATLAB, or if it is a machine speed issue, because I have Python on my laptop (i5 with 6 GB of RAM) but not MATLAB, and on my office computer (i7 and 8 GB of RAM) I have MATLAB but not Python (I am not an administrator and cannot install it without asking for it to be installed).
 
  • #25
andrewkirk said:
I've done an R implementation (Python turned out to be too painful, with my relative inexperience in that language) below. I used two techniques to improve accuracy:
  1. Setting the integration increment to be inversely proportional to the integrand, with a maximum increment size
  2. Using cubic spline interpolation to reduce errors from nonlinearity
It starts to misbehave when the gamma integration limit is very large, I think because the integrand gets so close to zero, and it gets in a mess if it goes below zero, which can easily happen with a cubic spline interpolation. Putting a floor of zero creates big errors - I don't know why yet, because I'd expect the errors from that to be negligible. So instead I got it to switch to trapezoidal integration if any of the cubic splines dip below zero.

Attached are the code and three plots. The most accurate used cubic splines with a maximum increment of 3, and took 3.5 mins to run. The next was cubic with max increment of 4, and took 1.5 mins to run. Last is one that used trapezoidal integration, which took 14 secs to run. You can see it loses accuracy for gamma (integration limit) beyond 100, but the probability is already below 3% at that stage. The log scale exaggerates the errors.

I think the key point is that for deeply nested integrals like this, Monte Carlo simulation is actually more accurate for a given runtime than numerical integration. The MC sim runs in a couple of seconds and is much smoother (which I take to be a sign of accuracy).

I am familiar with MATLAB and Python, but not R. One trick is how to control du, I think. What do you mean by setting the increment to be inversely proportional to the integrand, with a maximum increment size? Could you write that mathematically, please?

Monte-Carlo simulations run fast and are accurate, but in an academic paper they can only be used as a verification of the numerical results. Usually, they cannot be used alone as an evaluation method.

I also considered C++ as another language to implement the integration. For the same code, I got a 30x speed-up over the Python version, for a constant increment size. There is the option of multi-threading, because the iterations of the for loop are independent and could be run in parallel. I tried this tentatively, and it took 1 min 9 sec for a constant increment size du=0.001, while the original single-threaded C++ code with the same resolution took 2 min 18 sec, which isn't that significant a speed-up factor!
 
Last edited:
  • #26
EngWiPy said:
May I ask what your computer specifications are?

It's an i7 with 16GB. Does the code I posted take 5 minutes as well on your machine? Mine is 2-3 years old at this point.
 
Last edited:
  • #27
mfig said:
It's an i7 with 16GB. Does the code I posted take 5 minutes as well on your machine? Mine is 2-3 years old at this point.

Less than 5 minutes on my office machine, yes. But when I tried du = 0.0001 for more accurate results on the same machine, I had to leave the office and keep it running, because it took too long to finish while I was there. But I am not sure how long it took, because I didn't time it.
 
  • #28
EngWiPy said:
What do you mean by setting the increment to be inversely proportional to the integrand, with a maximum increment size? Could you write that mathematically, please?
It's in the definition of the G function, see the last line of the following truncated version of the G function:
Code:
G <- function(h, h0, k, diag){ # prob that the sum of k iid vbls lies between h0 and h
  u.all <- I.all <- NULL
  if (k == 1){
    res <- F(h) - F(h0)
  } else { # not base case. Set up to recurse
    res <- res0 <- curv <- 0
    u3 <- I3 <- rep(0,3)
    while (u3[2] < h) {
      I3[3] <- G(h - u3[3], max(0, h0 - u3[3]), k-1, diag) * f(u3[3])
      if (u3[3] > 0){
        I.av <- 0.5 * sum(I3[2:3])
        res0 <- res0 + du * I.av
        if (u3[2] > 0){
          du <- min(max.increment, dV / I.av)
dV is set globally at the beginning of the whole program to be 1/granularity, with granularity currently set to 700. The du increment is then chosen as dV / I.av (capped at max.increment), where I.av is the average of the integrand at the current and previous values of u, so that each estimated incremental area (actually a 'hypervolume') du * I.av is approximately dV.

Another streamlining measure I forgot to mention is that instead of calculating, for each value in gamma.Vec, the integral from 0 to there, it calculates the integral between successive elements of gamma.Vec, then adds. Otherwise, for each new value of gammaVec it is recalculating the integral just done before calculating the addon for the new segment between the previous and current value of gamma.Vec.
 
  • #29
andrewkirk said:
...

Another streamlining measure I forgot to mention is that instead of calculating, for each value in gamma.Vec, the integral from 0 to there, it calculates the integral between successive elements of gamma.Vec, then adds. Otherwise, for each new value of gammaVec it is recalculating the integral just done before calculating the addon for the new segment between the previous and current value of gamma.Vec.

If the gammath vector is gammath[0], gammath[1], ... gammath[N], then for gammath[0] we evaluate the integrals from 0 to gammath[0], but for gammath[n], for n = 1, 2, ... N, we can use the result from gammath[n-1], and evaluate the integral between gammath[n-1] and gammath[n]. This is true. We can store the previous result, and add the difference. Good observation.

Another observation is that f(u) is re-calculated at each call, although it is only ever evaluated at one of the values 0, du, 2*du, ... I think it could be calculated only once for all possible values, and then we index the value we want to use (a sketch of this follows).
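A minimal sketch of what I mean (assuming the fixed global du = 0.001 and the PDF ##f(u)=1/(1+u)^2## used above; u_max is a hypothetical bound chosen to cover the largest gammath):

Python:
du = 0.001
u_max = 700.0   # hypothetical upper bound on the integration range
f_table = [1.0/(1.0 + i*du)**2 for i in range(int(u_max/du) + 1)]

def f_lookup(u):
    # inside the integration loops, u is always (approximately) a multiple of du
    return f_table[int(round(u/du))]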
 
  • #30
I tried to implement the accumulation approach as following:

Python:
def G(h, k, u):
    #print(f'h is {h}, and k is {k}')
    if k == 1:
        res = F(h)
    else:
        #du = h/granularity
        res =  0
        #u = 0
        while u < h:
            res += G(h-u, k-1, u)*f(u)*du
            u += du
    return res

gammadB = np.arange(-5,11,4)# np.array([11]) #
gammaVec = 10**(gammadB/10)
gammathVec = (gammaVec/beta)*(EF-beta*(K-1))
print(gammathVec)

Pout = []

if (EF-beta*(K-1)) > 0:
    for index, gamma in enumerate(gammathVec):
        #For the first value, integrate from zero
        if index == 0:
            Pout.append(G(gamma, K, 0))
        else:
            # For other values, integrate from gammathVec[index-1]
            # and add the result from the previous value
            Pout.append(Pout[index - 1] + G(gamma, K, gammathVec[index-1]))

Pout = 1 - np.array(Pout)

But it doesn't give correct results. Where did I make a mistake?
 
Last edited:
  • #31
EngWiPy said:
Where did I make a mistake?
Best not to use u inside the G function to denote the limit for the previous integral, because u is used for the integration variable. Your call of the function G correctly passes that previous limit as the 3rd argument, but when you pass it on as you recurse down the stack, we need to pass on not u but that previous limit minus u. The function should read as follows:
Code:
def G(h, k, h_prev):
    #print(f'h is {h}, and k is {k}')
    if k == 1:
        res = F(h)-F(h_prev)
    else:
        #du = h/granularity
        res =  0
        u = 0
        while u < h:
            res += G(h-u, k-1, max(0,h_prev-u))*f(u)*du
            u += du
    return res
Note:
  • u=0 has been uncommented, as 0 is the correct starting point for the integrating variable in all cases except the base case
  • in the base case k==1 we deduct the CDF at h_prev.
  • the use of max in the recursing call to prevent the lower limit going below zero.
 
  • #32
Yes, you are right about all three points, but interestingly, the evaluation time for this approach has increased by a factor of about 1.3 relative to the original approach! Does this make sense?
 
  • #33
EngWiPy said:
Yes, you are right about all three points, but interestingly, the evaluation time for this approach has increased by a factor of about 1.3 relative to the original approach! Does this make sense?
I think that can happen, and I think I realize why. Fortunately, thinking over that has given me a ripper of an idea! I've got a new program that runs in under 10 seconds with very high accuracy. It uses iteration rather than recursion, yet is completely general about depth of nesting. The trick is to calculate the CDFs from the bottom up, thereby avoiding all the repetition that goes on in existing versions. Let ##Z_k## be the sum of ##k## iid versions of RV ##U## that has PDF ##f_U## and CDF ##F_U##. Then the PDF ##f_{Z_1}## and CDF ##F_{Z_1}## are the same as ##f_U## and ##F_U##. Then for each level ##k## we can calculate the CDF of ##Z_k## in terms of ##f_U## and ##F_{Z_{k-1}}##, as the following convolution:

$$F_{Z_k}(u) = \int_0^\infty f_U(v) F_{Z_{k-1}}(u-v)\,dv$$

Then, by working from the bottom up we can calculate the CDFs for sums of increasing numbers of ##U##s.

We specify a maximum allowable increment of probability weight per u increment and use that to calculate the required u granularity. We use that granularity to set a partition of the interval from 0 to the maximum (you were using about 606, and I've kept that), then we start building up from ##Z_1=U## up to ##Z_k## for ##k## as large as we like. For each step the computational complexity is only ##O(n^2)##, where ##n## is the number of segments used in the range of possible values. I think the complexity of the recursion approach is at least ##O(n^K)##, which is why it takes so long.

EDITED the above para to change X to U, to be consistent with the OP.

I'll post the R code and two graphs - one log-scale and one not. You can see it's very accurate. The red line is from the sim and the black line from the numerical integration. Then I'll try to do a Python version for Python practice and post that later.

Code:
time.save <- Sys.time()
#*****************************Set the constant scalars and vectors**************************
max.dp <- 0.0001 # granularity will be set so that no interval has an incremental probability weight greater than this
numbins.sim <- 1000 # number of equal-frquency bins to use for graphing the simulation results
K <- 3 # number of iid variables being convoluted ( = nbr of levels of integral nesting)
N <- 100 * 1000 # Number of simulations
max.lim <- 606 # CDF will be calculated in interval from 0 to here
set.seed(123) # set seed for random number generator so that simulation results are reproducible
#*****************************Define functions to be used**************************************
# Define the CDF of U
F <- function(u){
      1-1/(u+1)
}
# Define the PDF of U
f <- function(u){
     1/((1+u))^2
}
#******************Find the numerical values of the integral*****************
# find smallest integration increment to satisfy max.dp constraint
x <- max(f(0), f(max.lim)) # start search loop with x > max.dp
y <- 0
while (x > max.dp){
  y <- y + 1
  z <- 2^-y
  du <- max.lim * z
  u <- du * 0:(2^y)  # divide up interval into 2^y equal segments
  x <- max(f(u) * 2^-y)
}
n <- 2^y + 1

convolution <- array(F(u), dim=c(n, K)) # initialise this array. It  will hold all the convolved CDFs

for (j in 2:K){
  # calc CDF for a convolution of j iid variables of PDF f and CDF F
  convolution[,j] <- 0
  for (i in 1:n){
    # calculate probability that sum of j iid variables is <= u[i]
    x <- du * f(u[i:1]) * convolution[1:i, j - 1]
    if (i %% 2 == 0){ # have extra segment to add to simpson integration
      y <- sum(x[1:2]) / 2
      x <- x[-1] # drop first element of integrand vector , so there are an odd number of points
    } else { # pure simpson integration, so make add-on zero
      y <- 0
    }
    m <- length(x) # this is the number of integration points for the Simpson calc, and it will be odd
    if (m == 1)
      y <- x[1] else
        y <- y + (1/3) * (2 * sum(x) - x[1] - x[m] + 2 * sum(x[2 * (1:((m - 1) / 2))]))
    convolution[i,j] <- min(1, y)
  }
}
Pout <- 1 - convolution[,K]
#******************Find the simulation values of the integral*****************
exp.inv.cdf <- function(x){ # inverse CDF of exponential with parameter 1
  -log(1 - x)
}
simx <- exp.inv.cdf(runif(K * N)) / exp.inv.cdf(runif(K * N))
dim(simx) <- c(N, K)
simx <- apply(simx, 1, sum)
simx <- simx[order(simx)]
u.sim <- simx[round((1:numbins.sim - 0.5) * N / numbins.sim)]
PoutSim <- 1 - (1:numbins.sim) / numbins.sim
#*************************Visualization*****************
time.end <- Sys.time()
time.gone <- difftime(time.end, time.save, units = "mins")
print(paste('time taken is', time.gone, "minutes"))

plotcurves <- function(x1, y1, x2, y2, log = FALSE){
  if (log) {
    y <- c(y1, y2)
    # y1.1 <- pmax(y1, ) # set min of the min positive Pout value
    # lims <- rep(max(c(y1.1, y2)), 2)
    # lims[1] <- min(c(y1.1, y2))
    y.min <- min(y[y > 0])
    y1 <- pmax(y.min, y1)
    y2 <- pmax(y.min, y2)
    lims <- c(y.min, 1)
    plot(x1, y1, type='l',log="y", ylim = lims)
  } else {
    lims <- 0:1
    plot(x1, y1, type='l', ylim = lims)
  }
  lines(x2, y2, type='l', col='red')
  grid()
}

png('plain.graph.png')
plotcurves(u, Pout, u.sim, PoutSim)
dev.off()
png('log.graph.png')
plotcurves(u, Pout, u.sim, PoutSim, log = T)
dev.off()
 

Attachments

  • log.graph.png
  • plain.graph.png
Last edited:
  • #34
Very interesting. I am waiting to see your Python code, as I cannot understand the R code, or how you implemented the algorithm using loops instead of recursion.

How long did the original code with recursion, using the same numerical approach and parameters, take? You mentioned that the cubic spline was the most accurate, and took 3 min 30 sec, but I am not sure what numerical approach you are using here.
 
  • #35
EngWiPy said:
I am waiting to see your Python code
Here we are. I got it working in the end:
Code:
#************************** Import necessary libraries******************************************
import numpy as np
import math
import matplotlib.pyplot as plt
import datetime
#************************** Set global constants ******************************************
max_dp = 0.0001# 001 # granularity will be set so that no interval has an incremental probability weight greater than this
numbins_sim = 1000# 00 # number of equal-frquency bins to use for graphing the simulation results
K = 3 # number of iid variables being convoluted ( = nbr of levels of integral nesting)
N = 100 * 1000 # Number of simulations
max_lim = 606 # CDF will be calculated in interval from 0 to here
np.random.seed(123) # set seed for random number generator so that simulation results are reproducible

time_start = datetime.datetime.now()
#*****************************Define functions to be used**************************************
# Define the CDF of U
def F(u):
     return 1-1/(u+1)
# Define the PDF of U
def f(u):
    return (1+u)**-2
#******************Find the numerical values of the integral*****************
# find smallest integration increment to satisfy max.dp constraint
x = max(f(0), f(max_lim)) # start search loop with x > max.dp
y = 0
while x > max_dp:
  y = y + 1
  n = 2**y + 1
  z = 2**-y
  du = max_lim * z
  u = [x * du for x in range(n)]
  x = max([f(u1) for u1 in u]) * z

# now start the convolution process, working from one RV up to K RVs
convolution = [[F(u[i]) for i in range(n)]] # base case of one RV
# now work up from two to K RVs
for j in range(1,K):
  # calc CDF for a convolution of j iid variables of PDF f and CDF F
  convolution.append([0] * n)
  for i in range(n):
    # calculate probability that sum of j iid variables is <= u[i]
    x = [du * f(u[i-k]) * convolution[j-1][k] for k in range(i + 1)]
    if (i % 2 == 1): # have extra segment to add to simpson integration
      y = sum(x[:2]) / 2
      x = x[1:] # drop first element of integrand vector , so there are an odd number of points
    else:  # pure simpson integration, so make add-on zero
      y = 0    
    m = len(x) # this is the number of integration points for the Simpson calc, and it will be odd
    if m == 1:
      y = x[0]
    elif m == 0:
      y = 0
    else:
      y = y + (1/3) * (2 * sum(x) - x[0] - x[m-1] + 2 * sum([x[2 * k + 1] for
      k in range(round((m - 1) / 2))]))
    convolution[j][i] = min(1, y)
Pout = [1 - convolution[K-1][k] for k in range(n)]
#******************Find the simulation values of the integral*****************
exp1 = np.random.exponential(1, [N, K])
exp2 = np.random.exponential(1, [N, K])
simx = [[exp1[i][j] / exp2[i][j] for j in range(K)] for i in range(N)]
sumsimx = [sum([simx[i][j] for j in range(K)]) for i in range(N)]
sumsimx = sorted(sumsimx) # sort the list
u_sim = [sumsimx[round(N * (j + 0.5) / numbins_sim)] for j in range(numbins_sim)]
PoutSim = [1 - j / numbins_sim for j in range(numbins_sim)]
#*************************Visualization*****************
time_end = datetime.datetime.now()
print("Time elapsed for run is:")
print(time_end - time_start)

fig, ax = plt.subplots(figsize = (10, 8))
ax.plot(u, Pout, 'b-')
ax.plot(u_sim, PoutSim, 'r--')
ax.axis([0, 607, 0, 1])
plt.show()
fig.savefig("test_ak.png", bbox_inches='tight')
It produces numeric and sim results that match each other very closely, as shown in the graph below, which is on a plain scale, as I don't know how to do a log scale in Python. The numeric curve is unbroken blue and the sim curve is dashed red. If you look closely, you can see that they overlay one another as closely as the naked eye can make out. On a log scale one might be able to see a bit of separation.

The Python version runs in 2 mins 22 seconds, compared to only 11 seconds for the R version. That may be just because I'm a clumsy Python programmer, having only just started learning it. Perhaps someone more experienced with Python than me could implement the algorithm using operations that are more efficient than those I have used.
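For instance, the inner convolution sums could probably be vectorized with NumPy along these lines (an untested sketch, and it uses a plain trapezoidal rule rather than the Simpson logic above):

Python:
import numpy as np

def convolve_up(F_vals, f_vals, du, K):
    # F_vals, f_vals: the CDF and PDF of U sampled on the grid 0, du, 2*du, ...
    n = len(F_vals)
    conv = np.empty((K, n))
    conv[0] = F_vals
    for j in range(1, K):
        for i in range(n):
            # F_{Z_j}(u_i) ~ integral of f(v) * F_{Z_{j-1}}(u_i - v) dv
            integrand = f_vals[:i + 1] * conv[j - 1, i::-1]
            conv[j, i] = min(1.0, np.trapz(integrand, dx=du))
    return conv

The Python-level loop over i remains, so this only vectorizes the inner products; whether it closes the gap with the R version would need testing.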
 

Attachments

  • test_ak.png
