Testing whether an integer is prime

In summary, the conversation discusses finding an efficient algorithm for checking whether an odd number is prime. The algorithm tests the congruence $(X+a)^n \equiv X^n + a$ modulo the polynomial $X^r-1$, where $r$ is chosen in a clever way. By reducing modulo $X^r-1$, the computational cost stays polynomial as long as $r$ is $O((\log{n})^c)$ for some constant $c$. The conversation also covers the cost of computing $(X+a)^n \pmod{X^r-1}$ using fast exponentiation and the FFT.
  • #36
In C we can either pass a pointer to return an additional value (an output parameter), or we can return a structure with the values.
In this case I recommend passing a pointer as an input-output parameter to track the upper bound. (Thinking)
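A minimal sketch of the output-parameter style (the function name and the trial-division body are illustrative assumptions, not the thread's actual code):

```c
/* Output-parameter style: the return value reports primality, while
   *upper_bound is updated in place (an input-output parameter).
   Names are hypothetical, for illustration only. */
int is_prime_tracked(int n, int *upper_bound) {
    if (n > *upper_bound)
        *upper_bound = n;            /* track the largest n tested so far */
    if (n < 2)
        return 0;
    for (int d = 2; d * d <= n; d++) /* trial division, enough for a sketch */
        if (n % d == 0)
            return 0;
    return 1;
}
```

Called as `is_prime_tracked(13, &bound)`, it returns 1 and leaves `bound` at 13. The alternative would be returning a `struct { int is_prime; int bound; }` instead.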
 
  • #37
I like Serena said:
In C we can either pass a pointer to return an additional value (an output parameter), or we can return a structure with the values.
In this case I recommend passing a pointer as an input-output parameter to track the upper bound. (Thinking)

I also took a look at the algorithm mentioned in my notes. The algorithm is the following:

View attachment 7608

It says the following about it: View attachment 7609

The lemma says that the algorithm makes $O((\log{n})^2 \log{\log{n}})$ multiplications of numbers from $\{1, \dots,n\}$. How do we deduce this? (Thinking)
 

  • #38
evinda said:
The lemma says that the algorithm makes $O((\log{n})^2 \log{\log{n}})$ multiplications of numbers from $\{1, \dots,n\}$. How do we deduce this? (Thinking)

How many times does the outer loop run?
The inner loop?
And what is the complexity of fast exponentiation? (Wondering)
 
  • #39
I like Serena said:
How many times does the outer loop run?
The inner loop?

The outer loop runs $O(\log{n})$ times. Since the inner loop performs a binary search, it is executed $O(\log{n})$ times, too. Right?
I like Serena said:
And what is the complexity of fast exponentiation? (Wondering)

For the computation of $m^b$, the time required is $O(\log^3{b})$ using fast exponentiation, right? (Thinking)
 
  • #40
evinda said:
The outer loop runs $O(\log{n})$ times. Since the inner loop performs a binary search, it is executed $O(\log{n})$ times, too. Right?

Yep. (Nod)

evinda said:
For the computation of $m^b$, the time required is $O(\log^3{b})$ using fast exponentiation, right?

Isn't the fast exponentiation $O(\log{b})$?
And we have $b=O(\log n)$ don't we? (Wondering)
 
  • #41
I like Serena said:
Isn't the fast exponentiation $O(\log{b})$?

How is it justified that the fast exponentiation is $O(\log{b})$?

I like Serena said:
And we have $b=O(\log n)$ don't we? (Wondering)

Yes. (Smile)
 
  • #42
evinda said:
How is it justified that the fast exponentiation is $O(\log{b})$?

Isn't the computational complexity of calculating $a^n$ with fast exponentiation:
$$\begin{aligned}g(n)&=\begin{cases}
0, & n=1\\
g\left(\frac{n}{2}\right )+1, & n \text{ even } \\
g(n-1)+1, & n \text{ odd}>1
\end{cases} \\
&=\begin{cases}
0, & n=1\\
g\left(\frac{n}{2} \right )+1, & n \text{ even } \\
g(\frac{n-1}2)+1 + 1, & n \text{ odd}>1
\end{cases} \\
&=\begin{cases}
0, & n=1\\
g\left(\lfloor\frac{n}{2}\rfloor \right )+O(1), & n >1
\end{cases} \\
&= O(\log n)
\end{aligned}$$
(Wondering)
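The recurrence can be realized iteratively. A hedged C sketch of the square-and-multiply scheme it describes (the function name is an assumption):

```c
#include <stdint.h>

/* Binary (fast) exponentiation: computes a^n mod m using O(log n)
   multiplications, matching the recurrence for g(n): one squaring per
   halving step, plus one extra multiplication when the exponent is odd.
   Assumes m < 2^32 so that products fit in 64 bits. */
uint64_t pow_mod(uint64_t a, uint64_t n, uint64_t m) {
    uint64_t result = 1 % m;
    a %= m;
    while (n > 0) {
        if (n & 1)                    /* odd case: g(n-1) + 1 */
            result = result * a % m;
        a = a * a % m;                /* even case: g(n/2) + 1 */
        n >>= 1;
    }
    return result;
}
```

The loop runs once per bit of $n$, which is exactly the $O(\log n)$ bound of the recurrence.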
 
  • #43
I like Serena said:
Isn't the computational complexity of calculating $a^n$ with fast exponentiation:
$$\begin{aligned}g(n)&=\begin{cases}
0, & n=1\\
g\left(\frac{n}{2}\right )+1, & n \text{ even } \\
g(n-1)+1, & n \text{ odd}>1
\end{cases} \\
&=\begin{cases}
0, & n=1\\
g\left(\frac{n}{2} \right )+1, & n \text{ even } \\
g(\frac{n-1}2)+1 + 1, & n \text{ odd}>1
\end{cases} \\
&=\begin{cases}
0, & n=1\\
g\left(\lfloor\frac{n}{2}\rfloor \right )+O(1), & n >1
\end{cases} \\
&= O(\log n)
\end{aligned}$$
(Wondering)

Why is it plus $1$ in the last two cases?

$$g(n)=\begin{cases}
0, n=1\\
g\left(\frac{n}{2}\right )+1, n \text{ even } \\
g(n-1)+1, n \text{ odd}>1
\end{cases}$$
 
  • #44
evinda said:
Why is it plus $1$ in the last two cases?

$$g(n)=\begin{cases}
0, n=1\\
g\left(\frac{n}{2}\right )+1, n \text{ even } \\
g(n-1)+1, n \text{ odd}>1
\end{cases}$$

We have $a^{2k}=(a^k)^2$ respectively $a^{2k+1}=(a^{2k})\cdot a$, and each takes 1 multiplication. (Thinking)
 
  • #45
I like Serena said:
We have $a^{2k}=(a^k)^2$ respectively $a^{2k+1}=(a^{2k})\cdot a$, and each takes 1 multiplication. (Thinking)

I see... So the computational complexity is the number of multiplications done?
It differs from the time complexity, right? (Thinking)

Furthermore, it says that the number $\lceil \log{n} \rceil$ may be calculated in $O(\log{n})$ arithmetic operations (by repeated halving).
How do we calculate $\lceil \log{n} \rceil$ by repeated halving? (Thinking)
 
  • #46
evinda said:
I see... So the computational complexity is the number of multiplications done?
It differs from the time complexity, right?

It depends on how we measure time complexity.
Typically this is the number of multiplications/divisions and (possibly separately) the number of additions/subtractions.
Either way, we can lump it all into one big number with the $O$ operator.
So instead of adding $1\text{ multiplication}$, we might add $O(1)$ to eliminate the type of operation. (Nerd)

evinda said:
Furthermore, it says that the number $\lceil \log{n} \rceil$ may be calculated in $O(\log{n})$ arithmetic operations (by repeated halving).
How do we calculate $\lceil \log{n} \rceil$ by repeated halving?

If we want to know what $\log_2 n$ is, we can simply count the number of bits, can't we? (Wondering)
 
  • #47
I like Serena said:
It depends on how we measure time complexity.
Typically this is the number of multiplications/divisions and (possibly separately) the number of additions/subtractions.
Either way, we can lump it all into one big number with the $O$ operator.
So instead of adding $1\text{ multiplication}$, we might add $O(1)$ to eliminate the type of operation. (Nerd)

I thought that for the time complexity we usually multiply the number of arithmetic operations by the corresponding time needed... (Thinking)

I like Serena said:
If we want to know what $\log_2 n$ is, we can simply count the number of bits, can't we? (Wondering)

How do we count the number of bits? (Thinking)
 
  • #48
evinda said:
I thought that for the time complexity we usually multiply the number of arithmetic operations by the corresponding time needed...

As far as I know, time complexity is an abstract measure, which can be anything, as long as it's (more or less) related to time.
And saying it's $O(f(n))$ says that it's less than some constant times $f(n)$, which covers any amount of time that a multiplication or addition (presumed to be more or less constant) may need. (Nerd)

evinda said:
How do we count the number of bits?

Don't we usually already have a bit representation of $n$?
Either way, even counting bits is effectively the same as repeated halving, which is $O(\log n)$. (Thinking)
 
  • #49
I like Serena said:
As far as I know, time complexity is an abstract measure, which can be anything, as long as it's (more or less) related to time.
And saying it's $O(f(n))$ says that it's less than some constant times $f(n)$, which covers any amount of time that a multiplication or addition (presumed to be more or less constant) may need. (Nerd)

So we could consider that one multiplication of some integers $a,b$ takes time $O(1)$ or that it takes time $O(\log^2{n})$, right?

I like Serena said:
Don't we usually already have a bit representation of $n$?
Either way, even counting bits is effectively the same as repeated halving, which is $O(\log n)$. (Thinking)

Ok. But how can we find $\lceil \log{n}\rceil$ by repeated halving? (Thinking)
 
  • #50
evinda said:
So we could consider that one multiplication of some integers $a,b$ takes time $O(1)$ or that it takes time $O(\log^2{n})$, right?

Indeed. (Nod)

evinda said:
Ok. But how can we find $\lceil \log{n}\rceil$ by repeated halving? (Thinking)

How about:
$$
\textbf{Algorithm ILSe.1 (Count Bits)} \\
\textrm{INPUT: Integer }n \ge 0 \\
\begin{array}{rl}
1.& k \leftarrow 0\ ; \\
2.& \textbf{while }n \ne 0 \\
3.& \quad n \leftarrow \lfloor \frac n2 \rfloor\ ; \\
4.& \quad k \leftarrow k+1\ ; \\
5.& \textbf{return } k\ ; \\
\end{array}$$
(Wondering)
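A C rendition of the same bit-counting idea, as a sketch (the notes only give pseudocode, so the function name is an assumption):

```c
/* Repeated halving: counts the bits of n in O(log n) iterations.
   For n >= 1 the result is floor(log2(n)) + 1, from which
   ceil(log2(n)) follows immediately. */
int count_bits(unsigned int n) {
    int k = 0;
    while (n != 0) {
        n >>= 1;        /* n <- floor(n / 2) */
        k++;
    }
    return k;
}
```

For example, `count_bits(8)` halves 8 → 4 → 2 → 1 → 0 and returns 4.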
 
  • #51
I like Serena said:
How about:
$$
\textbf{Algorithm ILSe.1 (Count Bits)} \\
\textrm{INPUT: Integer }n \ge 0 \\
\begin{array}{rl}
1.& k \leftarrow 0\ ; \\
2.& \textbf{while }n \ne 0 \\
3.& \quad n \leftarrow \lfloor \frac n2 \rfloor\ ; \\
4.& \quad k \leftarrow k+1\ ; \\
5.& \textbf{return } k\ ; \\
\end{array}$$
(Wondering)

Nice... (Smile) The algorithm runs as long as $\frac{n}{2^i}\geq 1 \Rightarrow n \geq 2^i \Rightarrow i \leq \log{n}$.

So the time complexity of the algorithm is $O(i)=O(\log{n})$. Right? (Thinking)
 
  • #52
evinda said:
Nice... (Smile) The algorithm runs as long as $\frac{n}{2^i}\geq 1 \Rightarrow n \geq 2^i \Rightarrow i \leq \log{n}$.

Happy that you like it. (Happy)

evinda said:
So the time complexity of the algorithm is $O(i)=O(\log{n})$. Right?

Yup. (Nod)
 
  • #53
I like Serena said:
Happy that you like it. (Happy)
Yup. (Nod)

(Happy)

I have also a question for the time required for testing $r$. According to my notes:

View attachment 7611

First of all, how do we deduce that the total cost for line $4$ is $O(\rho(n))$ ?

The Sieve of Eratosthenes algorithm is the following: View attachment 7612

Why do we check what happens when $r$ gets the value $2^i+1$ for some $i$ ? Could you explain it to me? (Thinking)
 

  • #54
evinda said:
I have also a question for the time required for testing $r$. According to my notes:

First of all, how do we deduce that the total cost for line $4$ is $O(\rho(n))$ ?

Isn't line 4 executed $\rho(n)$ times, while the cost of one execution is $1\text{ division} = O(1)$? (Wondering)

evinda said:
The Sieve of Eratosthenes algorithm is the following:

Why do we check what happens when $r$ gets the value $2^i+1$ for some $i$ ? Could you explain it to me?

I believe it's to reuse the same table for a number of iterations of $r$.
When $r$ runs out of the table, the table size is doubled and recalculated for the next series of iterations of $r$. (Thinking)
 
  • #55
I like Serena said:
Isn't line 4 executed $\rho(n)$ times, while the cost of one execution is $1\text{ division} = O(1)$? (Wondering)

Oh yes, right... (Nod)
I like Serena said:
I believe it's to reuse the same table for a number of iterations of $r$.
When $r$ runs out of the table, the table size is doubled and recalculated for the next series of iterations of $r$. (Thinking)

But why do we check only what happens when $r$ is of the form $2^{i}+1$ although not every prime can be written in this form?

Also if $r=2^i+1$ for some $i$ don't we get a table of size $2^{\lceil \log{(2^{i}+1)}\rceil}-1$ ? Or am I wrong?
 
  • #56
evinda said:
But why do we check only what happens when $r$ is of the form $2^{i}+1$ although not every prime can be written in this form?

Also if $r=2^i+1$ for some $i$ don't we get a table of size $2^{\lceil \log{(2^{i}+1)}\rceil}-1$ ? Or am I wrong?

The size of the table is not related to primes.
Since we double the table size each time, we have $2^i$ entries in there at all times.
And when $r$ becomes $2^i+1$ it refers to an entry in the table that is not there yet, which is when the table gets resized, after which it will have $2^{i+1}$ entries.
Now the main algorithm can continue running $r$ from $2^i+1$ up to $2^{i+1}$ before we run out of the table again. (Thinking)
 
  • #57
I like Serena said:
The size of the table is not related to primes.
Since we double the table size each time, we have $2^i$ entries in there at all times.

Why do we double the table size each time? I haven't understood it... (Worried)
 
  • #58
evinda said:
Why do we double the table size each time? I haven't understood it... (Worried)

The Sieve of Eratosthenes does not find individual primes, but instead finds all primes in the range that we specify.
Figuring out whether some number $m$ is prime has the exact same complexity as finding all primes up to $m$.
So instead of rerunning the sieve for every $r$, we run it once for a range, and then iterate $r$ through that range.
And afterwards we rerun the sieve on a larger range.

It's an arbitrary decision to double the table every time.
We could also triple it, or each time add 1000 entries to it, or use yet some other scheme to grow the table.
However, we do not want to run the sieve for the full range up to $n$, since that is too computationally expensive. (Thinking)
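A hedged C sketch of this doubling scheme (not the notes' exact algorithm; names are assumptions and error handling is simplified):

```c
#include <stdlib.h>
#include <string.h>

/* Sieve of Eratosthenes over [2, size]: marks composite[j] = 1
   for every composite j in the table. */
static void sieve(unsigned char *composite, int size) {
    memset(composite, 0, (size_t)(size + 1));
    for (int p = 2; p * p <= size; p++)
        if (!composite[p])
            for (int j = p * p; j <= size; j += p)
                composite[j] = 1;
}

/* Doubling scheme: whenever r exceeds the current table bound 2^i,
   grow the bound to 2^{i+1} and rerun the sieve on the larger range.
   Start with *table = NULL and *bound = 1; realloc(NULL, ...) then
   acts as malloc. Allocation failures are not handled in this sketch. */
int is_prime_doubling(int r, unsigned char **table, int *bound) {
    while (r > *bound) {
        *bound *= 2;                               /* 2^i -> 2^{i+1} */
        *table = realloc(*table, (size_t)(*bound + 1));
        sieve(*table, *bound);                     /* recompute range */
    }
    return !(*table)[r];
}
```

Iterating $r$ through $2^i+1,\dots,2^{i+1}$ then only reads the table; the sieve is rerun once per doubling, not once per $r$.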
 
  • #59
I like Serena said:
The Sieve of Eratosthenes does not find individual primes, but instead finds all primes in the range that we specify.
Figuring out whether some number $m$ is prime has the exact same complexity as finding all primes up to $m$.
So instead of rerunning the sieve for every $r$, we run it once for a range, and then iterate $r$ through that range.
And afterwards we rerun the sieve on a larger range.

It's an arbitrary decision to double the table every time.
We could also triple it, or each time add 1000 entries to it, or use yet some other scheme to grow the table.
However, we do not want to run the sieve for the full range up to $n$, since that is too computationally expensive. (Thinking)

Ok.

I like Serena said:
The size of the table is not related to primes.
Since we double the table size each time, we have $2^i$ entries in there at all times.
And when $r$ becomes $2^i+1$ it refers to an entry in the table that is not there yet, which is when the table gets resized, after which it will have $2^{i+1}$ entries.
Now the main algorithm can continue running $r$ from $2^i+1$ up to $2^{i+1}$ before we run out of the table again. (Thinking)
So at the first execution of the Sieve of Eratosthenes we will get the table $m[2 \dots 2^{i}+1]$, so that it contains $2^{i}$ entries? (Thinking)
But if so, then $m[2^{i}+1]$ will already have an entry, or am I wrong? (Thinking)
 
  • #60
evinda said:
So at the first execution of the Sieve of Eratosthenes we will get the table $m[2 \dots 2^{i}+1]$, so that it contains $2^{i}$ entries?
But if so, then $m[2^{i}+1]$ will already have an entry, or am I wrong?

I think we're only talking about the upper bound instead of the actual size of the table.
In the first iteration we will have $m[2 \dots 2^{1}]$, then $m[2 \dots 2^{2}]$, and so on.
Generally we have $m[2 \dots 2^{i}]$, meaning that $2^i+1$ is not in the table. (Thinking)
 
  • #61
I like Serena said:
I think we're only talking about the upper bound instead of the actual size of the table.
In the first iteration we will have $m[2 \dots 2^{1}]$, then $m[2 \dots 2^{2}]$, and so on.
Generally we have $m[2 \dots 2^{i}]$, meaning that $2^i+1$ is not in the table. (Thinking)

Ah, ok. So when we have the table $m[2 \dots 2^{i}]$, it will have $2^i-1$ elements; at the next iteration it will have $2^{i+1}-1$ elements, and so on. Right? (Thinking)
 
  • #62
evinda said:
Ah, ok. So when we have the table $m[2 \dots 2^{i}]$, it will have $2^i-1$ elements; at the next iteration it will have $2^{i+1}-1$ elements, and so on. Right?

Yep. (Nod)
 
  • #63
I like Serena said:
Yep. (Nod)
So when we execute the algorithm and we have at the beginning $r=2$, we create the table $m[2 \dots 2^1]=m[2]$.
Then we get $r=3$ and we create the table $m[3 \dots 2^2]=m[3 \ 4]$.
Then we get $r=4$ and we just have to look at the previous table we created, and so on. Right?
 
  • #64
evinda said:
So when we execute the algorithm and we have at the beginning $r=2$, we create the table $m[2 \dots 2^1]=m[2]$.
Then we get $r=3$ and we create the table $m[3 \dots 2^2]=m[3 \ 4]$.
Then we get $r=4$ and we just have to look at the previous table we created, and so on. Right?

Wouldn't the table still start at 2 in every iteration? (Wondering)
 
  • #65
I like Serena said:
Wouldn't the table still start at 2 in every iteration? (Wondering)

Oh yes, right.
So when $r=2$ we create the table $m[2]$; when $r=3$ we create the table $m[2 \ 3 \ 4]$; when $r=4$ we verify in 3 steps, using the previous table, that it is composite. Then when $r=5$ we get the table $m[2 \ 3 \ 4 \ 5 \ 6 \ 7 \ 8]$.
We check what holds for the numbers $k=6,7,8$ using the last table that we got, with $k-1$ steps. Right?
So the number of arithmetic operations done is $\sum_{1 \leq i \leq \lceil \log{(\rho(n))}\rceil} O(i \cdot 2^i)+O(i)= \sum_{1 \leq i \leq \lceil \log{(\rho(n))}\rceil} O(i \cdot 2^i)$.

Right? (Thinking)
 
  • #66
evinda said:
Oh yes, right.
So when $r=2$ we create the table $m[2]$; when $r=3$ we create the table $m[2 \ 3 \ 4]$; when $r=4$ we verify in 3 steps, using the previous table, that it is composite. Then when $r=5$ we get the table $m[2 \ 3 \ 4 \ 5 \ 6 \ 7 \ 8]$.
We check what holds for the numbers $k=6,7,8$ using the last table that we got, with $k-1$ steps. Right?
So the number of arithmetic operations done is $\sum_{1 \leq i \leq \lceil \log{(\rho(n))}\rceil} O(i \cdot 2^i)+O(i)= \sum_{1 \leq i \leq \lceil \log{(\rho(n))}\rceil} O(i \cdot 2^i)$.

Right?

That looks about right yes. (Nod)
 
  • #67
I like Serena said:
That looks about right yes. (Nod)

Nice. (Happy) Then we calculate $n^i \mod{r}$ for $i=1,2, \dots, 4 \lceil \log{n}\rceil^2$.
Why is the number of multiplications modulo $r$ equal to $O((\log{n})^2)$ for one $r$, and not $O(\log{i})$?
How do we deduce this? (Thinking)
 
  • #68
evinda said:
Nice. (Happy) Then we calculate $n^i \mod{r}$ for $i=1,2, \dots, 4 \lceil \log{n}\rceil^2$.
Why is the number of multiplications modulo $r$ equal to $O((\log{n})^2)$ for one $r$, and not $O(\log{i})$?
How do we deduce this? (Thinking)

$O(\log{i})$ would be for fast exponentiation for a single $i$, wouldn't it?
But don't we need it for all $i$?
And can't we use the previous result $n^{i-1}$ to find $n^i$? (Wondering)
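A C sketch of that incremental computation (the function name, and its framing as an order check, are assumptions based on the context of testing $r$): each $n^i \bmod r$ costs one multiplication given $n^{i-1} \bmod r$, so all of $i=1,\dots,\text{count}$ cost $O(\text{count})$ multiplications rather than $\sum O(\log i)$.

```c
/* Computes n^i mod r for i = 1..count, one multiplication per step,
   by reusing the previous power. Returns the smallest i with
   n^i = 1 (mod r), i.e. the multiplicative order of n modulo r when
   gcd(n, r) = 1, or 0 if no such i <= count exists. */
int order_at_most(unsigned long n, unsigned long r, int count) {
    unsigned long power = 1;
    for (int i = 1; i <= count; i++) {
        power = power * n % r;   /* n^i = n^{i-1} * n : 1 multiplication */
        if (power == 1)
            return i;            /* ord_r(n) = i */
    }
    return 0;                    /* ord_r(n) > count (or gcd(n,r) > 1) */
}
```

With $\text{count} = 4\lceil\log n\rceil^2$ this gives the $O((\log n)^2)$ multiplications per $r$ discussed above.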
 
  • #69
I like Serena said:
$O(\log{i})$ would be for fast exponentiation for a single $i$, wouldn't it?

Yes, right... (Nod)

I like Serena said:
But don't we need it for all $i$?
And can't we use the previous result $n^{i-1}$ to find $n^i$? (Wondering)

Yes, so we make $4 \lceil \log{n}\rceil^2-1$ multiplications, right? (Thinking)

And $4 \lceil \log{n}\rceil^2-1 \leq 4 (\log{n}+1)^2-1=O(\log^2{n})$, right?
 
  • #70
evinda said:
Yes, so we make $4 \lceil \log{n}\rceil^2-1$ multiplications, right? (Thinking)

And $4 \lceil \log{n}\rceil^2-1 \leq 4 (\log{n}+1)^2-1=O(\log^2{n})$, right?

Right. (Nod)
 
