In mathematics, a low-discrepancy sequence is a sequence with the property that for all values of N, its subsequence x_1, ..., x_N has a low discrepancy.
Roughly speaking, the discrepancy of a sequence is low if the proportion of points in the sequence falling into an arbitrary set B is close to proportional to the measure of B, as would happen on average (but not for particular samples) in the case of an equidistributed sequence. Specific definitions of discrepancy differ regarding the choice of B (hyperspheres, hypercubes, etc.) and how the discrepancy for every B is computed (usually normalized) and combined (usually by taking the worst value).
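As a rough illustration of these ideas (not part of the article's formal treatment), the following Python sketch generates the classic van der Corput sequence in base 2 and compares its star discrepancy to that of a pseudorandom sample, using the standard one-dimensional closed-form expression for the star discrepancy. The function names `van_der_corput` and `star_discrepancy_1d` are chosen here for illustration.

```python
import random

def van_der_corput(n, base=2):
    """n-th term of the van der Corput sequence in the given base:
    the radical inverse of n, obtained by reflecting its base-b digits
    about the radix point."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, digit = divmod(n, base)
        q += digit * bk
        bk /= base
    return q

def star_discrepancy_1d(points):
    """Star discrepancy D*_N of a sample in [0,1), using the 1-D closed form
    D*_N = max_i max( i/N - x_(i), x_(i) - (i-1)/N ) over the sorted points."""
    xs = sorted(points)
    N = len(xs)
    return max(max((i + 1) / N - x, x - i / N) for i, x in enumerate(xs))

N = 1000
low_disc = [van_der_corput(i) for i in range(1, N + 1)]
pseudo = [random.random() for _ in range(N)]

print("van der Corput D*_N:", star_discrepancy_1d(low_disc))  # roughly O(log N / N)
print("pseudorandom  D*_N:", star_discrepancy_1d(pseudo))     # roughly O(1/sqrt(N))
```

Running this typically shows the van der Corput discrepancy an order of magnitude or more below that of the pseudorandom sample at the same N, which is exactly the sense in which the sequence is "low-discrepancy".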
Low-discrepancy sequences are also called quasirandom sequences, due to their common use as a replacement for uniformly distributed random numbers.
The "quasi" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random nor pseudorandom. Such sequences nevertheless share some properties of random variables, and in certain applications, such as the quasi-Monte Carlo method, their lower discrepancy is an important advantage.