LightCone Calculator Improvements

  • Thread starter: Jorrie
  • Tags: Calculator
In summary, the core team is now looking after LightCone8 with various bug fixes. The latest release includes a conversion function for parsecs to light years, as well as a fix for a duplication of z if this was the start or end of a range.
  • #36
JimJCW said:
I think if at input Ωm,0 = 0.3111 but at output it becomes Ωm,0 = 0.3110082030, it is a discrepancy.
You are right, but to output OmegaT,0 > 1 would also be a discrepancy, because without giving the value explicitly, the collaboration states spatial flatness in words, if I read the paper correctly.
So maybe we miss something in the equations, but as a non-cosmologist, I can't figure out what.
Our previous versions of Lightcone8 suffered the same problem.
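To make that concrete, here is a minimal sketch (my own illustration, not the LightCone code) of how holding the total at 1 pushes the matter density below its input value, using the numbers quoted above:

```javascript
// Minimal sketch, assuming flat space (Omega_0 = 1): once Omega_Lambda,0 and
// the small radiation density Omega_r,0 are fixed, Omega_m,0 must absorb the
// difference, so an input of 0.3111 comes out slightly lower.
function adjustedOmegaM(omegaLambda0, omegaR0) {
  return 1 - omegaLambda0 - omegaR0;
}

const omegaM0 = adjustedOmegaM(0.6889, 0.00009179699026);
console.log(omegaM0.toFixed(10)); // 0.3110082030, the value Jim quotes
```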

I'm also still perplexed as to why the Trial Version does not have the CSV output option working when selected. I did upload it with the OuputCSV.js file (a new file that was not there in earlier versions) and it shows on my fork.
 
  • #37
Jorrie said:
I'm also still perplexed as to why the Trial Version does not have the CSV output option working when selected. I did upload it with the OuputCSV.js file (a new file that was not there in earlier versions) and it shows on my fork.
Ok, found error for CSV output and fixed it.
Should now work on http://jorrie.epizy.com/docs/index.html
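For the record, the idea behind a full-precision CSV export can be sketched like this (names and row structure are illustrative only; the real OuputCSV.js may differ):

```javascript
// Illustrative only: serialize calculator rows to CSV without rounding, so
// the full floating-point precision survives into the exported file.
function toCsv(rows, columns) {
  const header = columns.join(',');
  const body = rows
    .map((row) => columns.map((c) => String(row[c])).join(','))
    .join('\n');
  return header + '\n' + body;
}

console.log(toCsv([{ z: 0, T: 13.787 }], ['z', 'T']));
// z,T
// 0,13.787
```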
 
  • #38
Jorrie said:
Ok, found error for CSV output and fixed it.
Should now work on http://jorrie.epizy.com/docs/index.html
@pbuk , I think the trial version is now at the top of your branch as well(?)
I think we should keep the pre-input-reconfigure at the top of your branch until we have consensus on which way to go.
 
  • #39
Jorrie said:
You are right, but to output OmegaT,0 > 1 would also be a discrepancy, because without giving the value explicitly, the collaboration states spatial flatness in words, if I read the paper correctly.
So maybe we miss something in the equations, but as a non-cosmologist, I can't figure out what.
Yes I agree, if we want curvature then I think this must be input explicitly as it is in LightCone7 and 8. I'm not a cosmologist either, but I don't think we are missing anything, we just have to deal as modellers with the fact that our inputs are uncertain. The objective in creating any physical model is not to faithfully reproduce inputs, it is to create an output that is the best fit to the observations. With this in mind I plan to do some investigation on how best to 'spread' the Ωrad,0 adjustment to achieve the best fit to t0 (age).

JimJCW said:
I think if at input Ωm,0 = 0.3111 but at output it becomes Ωm,0 = 0.3110082030, it is a discrepancy.
But I think the alternative which is to have an input of Ω0 = 1 become an output of Ω0 = 0.9999101295 is worse: non-zero curvature is "a big thing" and is something that should be explicitly input, not extracted out of what is more or less rounding errors.

Jorrie said:
I'm also still perplexed as to why the Trial Version does not have the CSV output option working when selected. I did upload it with the OuputCSV.js file (a new file that was not there in earlier versions) and it shows on my fork.
I think this may have got lost when I imported from your fork - because I can't create pull requests from your fork I had to clone it and copy files over: I picked up the changed files but must have missed the addition.
 
  • #40
Jorrie said:
You are right, but to output OmegaT,0 > 1 would also be a discrepancy, because without giving the value explicitly, the collaboration states spatial flatness in words, if I read the paper correctly.
So maybe we miss something in the equations, but as a non-cosmologist, I can't figure out what.
Our previous versions of Lightcone8 suffered the same problem.

Let’s use PLANCK(2018+BAO) as an example for the following discussion (see Post #25):

If we insist on having,
[image attachment]

we are forced to have,
[image attachment]

So, Ω0 cannot be 1 in this case.

If we set ‘Total density parameter, Ω’ in LightCone8.1.2 - Jorrie to be 1.000091824, the discrepancy is eliminated:

[image attachment]


To summarize:

We cannot insist that ΩΛ,0 + Ωm,0 = 1, as is done in Planck 2018 results. I, and still have Ω0 = 1.​
@pbuk
 
  • #41
JimJCW said:
We cannot insist that ΩΛ,0 + Ωm,0 = 1, as is done in Planck 2018 results. I, and still have Ω0 = 1.​
This is not correct. Planck 2018+BAO gives us ## \Omega_{m,0} = 0.3111 \pm 0.0056; \Omega_{\Lambda,0} = 0.6889 \pm 0.0056 ##. Now we can argue about the independence of those errors, but they certainly don't give us ## \Omega_{m,0} + \Omega_{\Lambda,0} = 1.00000000 \pm 0.00000000 ##. Adjusting ## \Omega_{m,0} ## to 0.311008176 to accommodate ## \Omega_{rad,0} ## is still within ## 0.02\,\sigma ## of the survey result!
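That 0.02 σ claim is easy to check with the numbers above (a quick sketch):

```javascript
// Distance of the adjusted Omega_m,0 from the Planck 2018+BAO central value,
// in units of the quoted 1-sigma uncertainty.
const shift = 0.3111 - 0.311008176; // adjustment absorbed by Omega_m,0
const sigma = 0.0056;               // Planck 2018+BAO error bar
console.log((shift / sigma).toFixed(4)); // 0.0164, i.e. well under 0.02 sigma
```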

Let's look at the values at ## t = t_0 ## as they are now with Planck 2018+BAO selected, and with ## \Omega_{m,0} ## forced to 0.3111:
| Parameter | Planck 2018+BAO | LightCone8 default | Forced ##\Omega_{m,0}## |
| --- | --- | --- | --- |
| ##\Omega_{\Lambda,0}## | 0.6889 ± 0.0056 | 0.6889000000 | 0.6889000000 |
| ##\Omega_{m,0}## | 0.3111 ± 0.0056 | 0.3110082030 | 0.3111000000 |
| ##\Omega_{rad,0}## | ~9.180e-5 | 0.00009179699026 | 0.00009182408501 |
| ##\Omega_{0}## | "Consistent with 1" | 1.000000000 | 1.000091824 |
| ##\Omega_{\kappa,0}## | "Consistent with 0" | 0.000000000 | 0.00009182408501 |
| Age | 13.787 ± 0.020 Gyr | 13.78704250 Gyr | 13.78629120 Gyr |

Now OK, it doesn't make a huge amount of difference (except to curvature itself, of course), but LightCone8 as it is gets the age of the Universe bang in the middle of the Planck 2018+BAO result, whereas if you allow positive curvature it is 1 out in the 5th significant figure. That is not the main point, though: the Planck 2018 results are fitted under the assumption of flatness, so it is inconsistent to use those numbers to claim there is curvature.
 
Last edited:
  • #42
pbuk said:
This is not correct. Planck 2018+BAO gives us ## \Omega_{m,0} = 0.3111 \pm 0.0056; \Omega_{\Lambda,0} = 0.6889 \pm 0.0056 ##. Now we can argue about the independence of those errors, but they certainly don't give us ## \Omega_{m,0} + \Omega_{\Lambda,0} = 1.00000000 \pm 0.00000000 ##. Adjusting ## \Omega_{m,0} ## to 0.311008176 to accommodate ## \Omega_{rad,0} ## is still within ## 0.02\,\sigma ## of the survey result!

I think LightCone8 is working properly. We can use it to verify the following table:

[image attachment]


It suggests we cannot insist that ΩΛ,0 + Ωm,0 = 1 and still have Ω0 = 1 and Ωk = 0.

We may be able to use a statistical argument to say that Ω0 is approximately equal to 1 for a particular set of observational data, but it is not exactly 1.
 
  • #43
So what is the consensus: do we declare Lightcone8 with the trial UI as good enough, meaning we take away the "Jorrie Trial UI" in the title, or do we wait for a cosmologist to advise us?

My opinion is that as an educational tool, it is as good as it needs to be.
 
  • #44
Jorrie said:
So what is the consensus: do we declare Lightcone8 with the trial UI as good enough, meaning we take away the "Jorrie Trial UI" in the title, or do we wait for a cosmologist to advise us?

My opinion is that as an educational tool, it is as good as it needs to be.

I am confused. Currently I am looking at two versions:

LightCone8: https://burtjordaan.github.io/light-cone-calc.github.io/
LightCone8.1.2 - Jorrie Trial UI: https://light-cone-calc.github.io/

Which one are we talking about? Do you mean replacing LightCone8 with LightCone8.1.2 - Jorrie Trial UI?

@pbuk
 
  • #45
Yes, that the trial version becomes the "official model", provided that we agree that we should input these 5 parameters, all given by the Planck collaboration.

[image attachment]

Our model implementation then forces ##\Omega## to be whatever this input is (default 1, but we are allowed to change this for educational purposes).(a) This is done by subtracting ##\Omega_R## from ##\Omega_M##.
Or we can decide to go more sophisticated by using a statistical method, based on the given error bars on all the inputs. My suggestion is that the "trial version" is good enough for now.

Footnote (a). When we started the original project, the then-core members were keen to keep options open so that hypothetical open and closed models could be demonstrated. Plus, of course, to be ready for potential future observations favoring a universe that is not spatially flat.
 

  • #46
Jorrie said:
Yes, that the trial version becomes the "official model", provided that we agree that we should input these 5 parameters, all given by the Planck collaboration.

View attachment 305433
Our model implementation then forces ##\Omega## to be whatever this input is (default 1, but we are allowed to change this for educational purposes).(a) This is done by subtracting ##\Omega_R## from ##\Omega_M##.

I am still concerned about the discrepancy: at input, Ωm,0 (the 'Matter density parameter, ΩM' field) is 0.3111, but at output it becomes Ωm,0 = OmegaM = 0.311008203. Some people may consider it an error in the calculator.

[image attachment]
What will happen to ‘LightCone8: https://burtjordaan.github.io/light-cone-calc.github.io/’? Will it still be available?

I think one should not take the numbers in Planck 2018 results. I inflexibly. Using Planck + BAO as an example, it gives,

[image attachment]

They do not add up to Ω0 = ΩΛ,0 + Ωm,0 + ΩR,0 = 1 and are, therefore, not consistent with LightCone 8 Tutorial Part III – How Things are Computed,

The Ω without subscript represents the present overall density parameter: Ω = ΩΛ + Ωm + Ωr, which adds up to unity in the case of flat space.​

@pbuk
 
  • #47
JimJCW said:
What will happen to ‘LightCone8: https://burtjordaan.github.io/light-cone-calc.github.io/’? Will it still be available?
I don't think so, because it gives exactly the same "error", if we can call it that. It likewise reduces Ωm,0 to 0.3110 in order to keep Ω0 =1. So no benefit in keeping it, AFAICS.

I tend to agree with @pbuk that it is preferable to "err" in ##\Omega_{m,0}##, rather than allowing Lightcone8 to output a non-unity Ω0. Maybe we should reduce both ##\Omega_{m,0}## and ##\Omega_{\Lambda,0}## proportionally.
We can easily explain the discrepancies in the UI.

Do you think we should use Planck + BAO as the initial default? I used the second-last column, because the collaboration calls it their "base data".
 
  • #48
Jorrie said:
I don't think so, because it gives exactly the same "error", if we can call it that. It likewise reduces Ωm,0 to 0.3110 in order to keep Ω0 =1. So no benefit in keeping it, AFAICS.

I tend to agree with @pbuk that it is preferable to "err" in ##\Omega_{m,0}##, rather than allowing Lightcone8 to output a non-unity Ω0. Maybe we should reduce both ##\Omega_{m,0}## and ##\Omega_{\Lambda,0}## proportionally.
We can easily explain the discrepancies in the UI.

I believe LightCone8 (https://burtjordaan.github.io/light-cone-calc.github.io/) is functioning properly. Its results are consistent with those of ICRAR (see Post #22) and its calculated Ω’s can be verified with equations in LightCone 8 Tutorial Part III – How Things are Computed (see Post #148 of the thread A glitch in Jorrie’s Cosmo-Calculator?). I think at least this version should be kept as an option.

I think the discrepancy in LightCone8.1.2 - Jorrie Trial UI (https://light-cone-calc.github.io/) (see Post #46) is caused by a mathematical error. The calculator requires the following two equations to be valid at the same time in a flat space, but this is impossible:

ΩΛ,0 + Ωm,0 = 1​
ΩΛ,0 + Ωm,0 + ΩR,0 = 1​

The origin of the above problem can be traced to Planck 2018 results. I. We shouldn’t use the numbers in it inflexibly. Using Planck + BAO as an example, it gives,
[image attachment]

They do not add up to Ω0 = ΩΛ,0 + Ωm,0 + ΩR,0 = 1 and are, therefore, not consistent with LightCone 8 Tutorial Part III – How Things are Computed,

The Ω without subscript represents the present overall density parameter: Ω = ΩΛ + Ωm + Ωr, which adds up to unity in the case of flat space.​

@pbuk
 
  • #49
Jim, I don't quite understand your objection against https://light-cone-calc.github.io/, because the two give identical outputs for the Planck + BAO input data, i.e.

[image attachment]

against https://light-cone-calc.github.io/ :
[image attachment]

Both deviate from the Planck + BAO input data for Ωm, for reasons discussed before.
IMO, it is simply a matter of Lightcone not taking the error bars into account. To include those would be an unnecessary complication for a relatively simple educational tool. I will update the UI and the Tutorial to explain the rationale and then remove the "Trial version" in the title.

Besides that, the newer version has the additional functionality of a full precision CSV output option.

If you want to preserve the previous version visibly on the Forum, simply put that link in your Forum signature, but mark it as "alternative UI" or something to that effect.
 

  • #50
Ok, I have made the changes discussed above in the UI. I must still update the Tutorial to explain.
An important change to take note of is that I have reduced the mapped value of the ##\Omega_{\Lambda,0}## input by 0.0001 in order to accommodate the radiation density in flat space and remove any ambiguity in the UI or output data. So for now there is nothing extra to be done in the expansion calculation module.

The Planck collaboration says above Table 2: "The top group of six rows are the base parameters, which are sampled in the MCMC analysis with flat priors. The middle group lists derived parameters." ##\Omega_{\Lambda,0}## does not appear in the top group, but baryons and dark matter do. Hence I took it that ##\Omega_{m,0}## takes priority and so I changed ##\Omega_{\Lambda,0}## by the mentioned small amount.

@pbuk and @JimJCW. Comments?
 
  • #51
Jorrie said:
Jim, I don't quite understand your objection against https://light-cone-calc.github.io/, because the two give identical outputs for the Planck + BAO input data, . . .

Both deviate from the Planck + BAO input data for Ωm, for reasons discussed before.

Among Ω0, ΩΛ,0, Ωm,0, ΩR,0, and zeq, only three can be used as inputs because of the following two relations:
[image attachment]

LightCone8 (https://burtjordaan.github.io/light-cone-calc.github.io/) correctly uses only three: Ω0, ΩΛ,0, and zeq, but LightCone8.1.2 - Jorrie Trial UI (https://light-cone-calc.github.io/) incorrectly uses four: Ω0, ΩΛ,0, Ωm,0, and zeq. This causes the discrepancy in LightCone8.1.2 (see Posts #30, #40, and #42).

I noticed that in LightCone8.1.2 the inputted Ωm,0 is used only in calculating the ΩR,0 printed to the right. The rest of the calculation is the same as in LightCone8, using only Ω0, ΩΛ,0, and zeq. Let me illustrate this using an exaggerated example (setting Ωm,0 = 0.9):

[image attachment]


The output remains the same as in LightCone8:
[image attachment]

So, using both ΩΛ,0 and Ωm,0 as inputs at the same time is incorrect and produces discrepancies.

Suggestion:

We can use Ωm,0 and ΩR,0 as derived quantities and print them on the right-hand side:

[image attachment]
 
  • #52
JimJCW said:
Suggestion:

We can use Ωm,0 and ΩR,0 as derived quantities and print them on the right-hand side:

View attachment 305466
Oops, yes, thanks for the heads-up - I see now that the table calculation module ignores the matter density input value. I will temporarily reset the old input tables. But I would still like to use Ωm,0 as one of the primary inputs (for reasons that I mentioned in my prior post). So it would mean swapping ΩL and Ωm in what you have above.

To do the change in primary input, @pbuk must also alter his calculation module to use Ωm,0 as primary input. I will coordinate with him to get us there.
 
  • #53
Jorrie said:
Oops, yes, thanks for the heads-up - I see now that the table calculation module ignores the matter density input value. I will temporarily reset the old input tables. But I would still like to use Ωm,0 as one of the primary inputs (for reasons that I mentioned in my prior post). So it would mean swapping ΩL and Ωm in what you have above.

To do the change in primary input, @pbuk must also alter his calculation module to use Ωm,0 as primary input. I will coordinate with him to get us there.

Please see Post #13:

If Ωm,0 is used as input (see Post #2, where seq = zeq + 1),

ΩΛ,0 = Ω0 - Ωm,0 (zeq + 2) / (zeq + 1)
ΩR,0 = Ωm,0 / (zeq + 1)
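As a sketch of these relations (taking Ω0 = 1 and zeq = 3387, the value implied by the ΩR,0 figures quoted earlier in the thread):

```javascript
// Derive Omega_Lambda,0 and Omega_R,0 from Omega_m,0 and z_eq using the two
// relations above; z_eq = 3387 reproduces Omega_R,0 = 0.00009182408501.
function derivedDensities(omega0, omegaM0, zEq) {
  const omegaLambda0 = omega0 - (omegaM0 * (zEq + 2)) / (zEq + 1);
  const omegaR0 = omegaM0 / (zEq + 1);
  return { omegaLambda0, omegaR0 };
}

const d = derivedDensities(1, 0.3111, 3387);
console.log(d.omegaR0.toFixed(14)); // 0.00009182408501
console.log((d.omegaLambda0 + 0.3111 + d.omegaR0).toFixed(10)); // 1.0000000000
```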

@pbuk
 
Last edited:
  • #55
Jorrie said:
To do the change in primary input, @pbuk must also alter his calculation module to use Ωm,0 as primary input. I will coordinate with him to get us there.

The underlying model (since 24 July) does use Ωm,0, but only if ΩΛ,0 is not provided by the UI coupling. In that case it will apply the whole of the Ωrad,0 adjustment to ΩΛ,0, which I think we have agreed is not the best thing to do. If ΩΛ,0 is provided it will apply the whole of the Ωrad,0 adjustment to Ωm,0.

JavaScript:
    // Use omegaLambda0 if it is provided.
    if (props.omegaLambda0 != null) {
      omegaLambda0 = props.omegaLambda0;
      omegaM0 = (omega0 - omegaLambda0) * (sEq / (sEq + 1));
    } else if (props.omegaM0 != null) {
      omegaM0 = props.omegaM0;
      omegaLambda0 = omega0 - omegaM0 * ((sEq + 1) / sEq);
    } else {
      throw new Error('Must provide either omegaM0 or omegaLambda0');
    }
(see https://github.com/cosmic-expansion...0b12217c50ce8991181686fd406/src/model.ts#L156)

I think what we are now suggesting is that if both Ωm,0 and ΩΛ,0 are provided we should apply the Ωrad,0 adjustment proportionately across Ωm,0 and ΩΛ,0?

I'll have a look at this.
 
  • #56
pbuk said:
I think what we are now suggesting is that if both Ωm,0 and ΩΛ,0 are provided we should apply the Ωrad,0 adjustment proportionately across Ωm,0 and ΩΛ,0?

I'll have a look at this.
Yes, although I started to think we must use Ωm,0 (as the more directly established parameter), I guess that since we do not yet understand why the Planck collaboration have elected to present the data with this apparent inconsistency, it may be best to take their values at face value as inputs.

Since we want to keep Ω0 = 1, I suppose we can decide how to process that. Adjusting proportionately across both seems to be the more neutral scheme.
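A sketch of that proportional scheme (function and variable names are my own illustration, not the actual module):

```javascript
// Spread the radiation adjustment proportionately: rescale Omega_m,0 and
// Omega_Lambda,0 together so that all three densities sum to omega0.
function spreadRadiation(omega0, omegaM0, omegaLambda0, omegaR0) {
  const scale = (omega0 - omegaR0) / (omegaM0 + omegaLambda0);
  return { omegaM0: omegaM0 * scale, omegaLambda0: omegaLambda0 * scale };
}

const adj = spreadRadiation(1, 0.3111, 0.6889, 0.0000918240850);
console.log(adj.omegaM0.toFixed(10));      // slightly below 0.3111
console.log(adj.omegaLambda0.toFixed(10)); // slightly below 0.6889
```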
 
  • #57
Jorrie said:
Yes, although I started to think we must use Ωm,0 (as the more directly established parameter), I guess that since we do not yet understand why the Planck collaboration have elected to present the data with this apparent inconsistency, it may be best to take their values at face value as inputs.

Since we want to keep Ω0 = 1, I suppose we can decide how to process that. Adjusting proportionately across both seems to be the more neutral scheme.

The source of our problem is that the Planck 2018 results. I data are incorrect for a flat universe:
[image attachment]

We shouldn’t use these numbers as given at the same time.

We have been using ΩΛ,0 as the input and Ωm,0 and ΩR,0 as derived quantities. We are planning to use Ωm,0 as the input and ΩΛ,0 and ΩR,0 as derived quantities. I think either one is good.

@pbuk
 
  • #58
JimJCW said:
The source of our problem is that the Planck 2018 results. I data are incorrect for a flat universe:
View attachment 305565
We shouldn’t use these numbers as given at the same time.
Yes, you are right. Trying to use all four will give inconsistencies.
For now, let's leave it as it is.
 
  • #59
If we ever find Omega_0 slightly above 1, the most likely culprit will be dark matter. Here is a graph for Omega_0 = 1.1 (an exaggerated case for visibility):
[image attachment]

Interesting how Omega will first rise (in this case slightly overshooting the "current" 1.1) and then be dragged down to unity again by dark energy dominance.
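The shape of that curve can be reproduced with a toy calculation (round numbers, not the calculator's model; here Ωm,0 = 0.4 and ΩΛ,0 = 0.7, so Ω0 = 1.1 and Ωk,0 = -0.1):

```javascript
// Toy sketch: total density parameter versus scale factor a for a closed
// model with Omega_0 = 1.1 today. Omega starts near 1 in the matter era,
// overshoots 1.1, then is pulled back to 1 as dark energy dominates.
function omegaTotal(a, omegaM0, omegaLambda0) {
  const omegaK0 = 1 - omegaM0 - omegaLambda0; // negative for a closed model
  const density = omegaM0 / (a * a * a) + omegaLambda0;
  return density / (density + omegaK0 / (a * a));
}

for (const a of [0.01, 0.7, 1, 100]) {
  console.log(a, omegaTotal(a, 0.4, 0.7).toFixed(4));
}
// 0.01 → 1.0025, 0.7 → 1.1228 (overshoot), 1 → 1.1000, 100 → 1.0000
```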
 
  • #60
Jorrie said:
If we ever find Omega_0 slightly above 1, the most likely culprit will be dark matter. Here is a graph for Omega_0 = 1.1 (an exaggerated case for visibility):
View attachment 305573
Interesting how Omega will first rise (in this case slightly overshooting the "current" 1.1) and then be dragged down to unity again by dark energy dominance.

The expansion of space in the Big Bang model makes many situations very complicated. I often use the LightCone calculator to help me get pictures in my mind. The following two pictures are similar to yours:

For Ω0 = 1:

[image attachment]


For Ω0 = 0.9:

[image attachment]
 
Last edited:
  • #61
Cool! - it looks like we've got ourselves a workable calculator. :smile:
I have just pushed a small update that shows ##\Omega_M## and ##\Omega_R## on the conversion side, as discussed before.
 
  • #62
Discrepancy between LightCone8 and LightCone7:

When comparing the outputs of LightCone8 and LightCone7, I noticed a discrepancy in calculated event horizon:

[image attachment]


I think the value calculated with LightCone8 is questionable.

@Jorrie, @pbuk
 
  • #63
JimJCW said:
I think the value calculated with LightCone8 is questionable
It seems to be exactly the same as R; I'll have a look tonight (UK).
 
  • #64
JimJCW said:
Discrepancy between LightCone8 and LightCone7:

When comparing the outputs of LightCone8 and LightCone7, I noticed a discrepancy in calculated event horizon:
Interesting, I had started with the code from the last version of LightCone7 http://jorrie.epizy.com/lightcone7/2022-05-14/LightCone7.html and it gives exactly the same incorrect results. But when I use an older version of LightCone7 e.g. http://jorrie.epizy.com/Lightcone7-2021-03-12/LightCone_Ho7.html I get the expected results.

I have now added the correct calculation in the back end development branch, I will push this to the live site over the weekend along with the new UI.
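For reference, the quantity under discussion can be sketched by direct integration (a standalone toy, not the back-end code; the Hubble radius of 14.45 Gly for H0 ≈ 67.7 km/s/Mpc and the Planck densities are assumed values for illustration):

```javascript
// Toy numerical integration of the cosmic event horizon: the proper distance
// D_hor(a) = a * integral from a to infinity of da' / (a'^2 E(a')), in units
// of the Hubble radius c/H0, converted to Gly.
function E(a, Om, Or, Ol) {
  const Ok = 1 - Om - Or - Ol;
  return Math.sqrt(Om / a ** 3 + Or / a ** 4 + Ok / a ** 2 + Ol);
}

function eventHorizonGly(a, Om, Or, Ol, hubbleRadiusGly) {
  const aMax = 1000; // stands in for the infinite future; the tail is tiny
  const n = 200000;
  const da = (aMax - a) / n;
  let sum = 0;
  for (let i = 0; i < n; i++) {
    const am = a + (i + 0.5) * da; // midpoint rule
    sum += da / (am * am * E(am, Om, Or, Ol));
  }
  return a * sum * hubbleRadiusGly;
}

// With Planck-like densities this lands near the familiar ~16.7 Gly.
console.log(eventHorizonGly(1, 0.3111, 9.18e-5, 0.6889, 14.45).toFixed(1));
```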
 
  • #65
pbuk said:
Interesting, I had started with the code from the last version of LightCone7 http://jorrie.epizy.com/lightcone7/2022-05-14/LightCone7.html and it gives exactly the same incorrect results. But when I use an older version of LightCone7 e.g. http://jorrie.epizy.com/Lightcone7-2021-03-12/LightCone_Ho7.html I get the expected results.
No idea how that happened, except that lots of experimentation happened around that time.
Anyway, thanks for the excellent work done by JimJCW and yourself.
 
Last edited:
  • #66
Jorrie said:
No idea how that happened, except that lots of experimentation happened around that time.
@pbuk It happens in line 47 of your new calculate.js, where the mapping for Dhor is incorrect. Note Y (legacy) is the same as R later.
44 Y: entry.r,
---
47 Dhor: entry.r,
 
  • #67
Jorrie said:
@pbuk It happens in line 47 of your new calculate.js, where the mapping for Dhor is incorrect. Note Y (legacy) is the same as R later.
44 Y: entry.r,
---
47 Dhor: entry.r,

Yes, I picked this up from line 267 of the 2022-05-14 version of LightCone7
JavaScript:
          if (Dhor < Y)
              Dhor = Y ;
I removed the test because Dhor was never being set above 0 anywhere else in the code, so it was always being set to Y (r). This was because the line
JavaScript:
          Dhor =  a * (Dte-Dc);
which was there in LightCone7 2021-03-12 disappeared in the later version.
 
  • #68
Yes, correct. Is it something that you can fix on the calculate side, or must I fix it outside of calculate?
 
  • #69
pbuk said:
Interesting, I had started with the code from the last version of LightCone7 http://jorrie.epizy.com/lightcone7/2022-05-14/LightCone7.html and it gives exactly the same incorrect results. But when I use an older version of LightCone7 e.g. http://jorrie.epizy.com/Lightcone7-2021-03-12/LightCone_Ho7.html I get the expected results.

Let’s call
A: http://jorrie.epizy.com/Lightcone7-2021-03-12/LightCone_Ho7.html
B: http://jorrie.epizy.com/lightcone7/2022-05-14/LightCone7.html
and use PLANCK Data (2015) for the present discussion.

Result from A:

[image attachment]


Result from B:

[image attachment]

Note that Ro and Dhor overlap with each other.

Comparing the Calculation.js files of A and B, I noticed that some statements in A are not in B: near Line 180 and Line 245. If we put the missing statements back into B:

JavaScript:
pa = a_ic * sf;
// *** Add the following missing line (this was the problem):
a = a_ic + (pa / 2);
qa = Math.sqrt((Om / a) + Ok + (Or / (a * a)) + (Ol * a * a));
Dthen = 0;
}
// *** Add the following missing lines:
else
{
    Dnow = Math.abs(Dtc - Dc);
    Dthen = a * Dnow;
}
Dhor = a * (Dte - Dc);
// ***
Dpar = a * Dc;

the modified B gives the following result:

[image attachment]


@Jorrie
 
  • #70
Jorrie said:
Yes, correct. Is it something that you can fix on the calculate side, or must I fix it outside of calculate?
Fixed now. I've also tidied up the html (there were some extra empty <td>s and unmatched </tr>s) and bumped the version to 8.3 for the added CSV functionality.

I've pulled all these changes into the 'main' branch and changed the source for the live site back to 'main' from 'develop' for good practice.
 
Last edited:
