Generator redefined more rigorously

  • Thread starter: synergy
  • Tags: Generator
In summary: I think I found a new way to define my generator. Let p(n) be the nth prime, and let p(n)# be the primorial p(n)*p(n-1)*p(n-2)*...*5*3*2. Split the primes p(1),...,p(n) between two coprime numbers A and B whose product is divisible by p(n)# and by no larger prime. A contradiction argument using the distributive law shows that q = abs(A +/- B) has none of p(1), p(2),..., p(n) as factors, and if q < [p(n+1)]^2 its square root is less than p(n+1), so q must be prime.
  • #1
synergy
In case you don't like the WAY I defined my generator, I'll redefine it here for you:

Let p(n) be the nth prime
Let p(n)# be p(n)*p(n-1)*p(n-2)*...*5*3*2
Let abs() be absolute value
Let +/- be "plus or minus", though the theory works with just minus

Let q(x1,x2,x3,...,xn): positive integers --> positive integers be given by
q(x1,x2,x3,...,xn) = abs(A +/- B)
where A and B satisfy:
p(n)# divides A*B
p(n+r) does not divide A*B for any positive integer r
gcd(A,B) = 1
and each xi is the (positive integer) exponent of p(i) in A*B.

Then if q(x1,x2,x3,...,xn) < [p(n+1)]^2, q(x1,x2,x3,...,xn) is prime.

This is because a short contradiction argument using the distributive law shows that q(x1,x2,x3,...,xn) has none of p(1), p(2),..., p(n) as a factor: since gcd(A,B) = 1 and p(n)# divides A*B, each p(i) divides exactly one of A and B, so it cannot divide A +/- B. And if q(x1,x2,x3,...,xn) < [p(n+1)]^2 were composite, it would have a prime factor no greater than its square root, which is less than p(n+1) -- a contradiction.

As I said before, to get all primes < [p(n+1)]^2, it's important to use ALL POSSIBLE disjoint partitions of {p(1),...,p(n)} into the two sets to be put into A and B.

This will produce primes without testing any composites, because the elimination of those composites is intrinsic to the definition of A and B, the condition gcd(A,B) = 1, and the "less than" test. This suggests that this algorithm may still be in the running as the most efficient way to find primes, given that it doesn't waste time testing a large number of BIG composites. Prove me wrong, if you dare!
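For concreteness, here is a minimal Python sketch of the procedure just described, with every exponent xi fixed at 1 for brevity (varying the exponents produces more values); the helper names generate and is_prime are illustrative only, not part of the definition:

```python
from itertools import combinations
from math import isqrt

def is_prime(m):
    """Trial division; used only to verify the generator's output."""
    return m >= 2 and all(m % d for d in range(2, isqrt(m) + 1))

def generate(primes, next_prime):
    """For every split of `primes` into two sets, form the coprime
    products A and B and keep q = |A - B| and A + B whenever
    1 < q < next_prime**2; the argument above says q is then prime."""
    results = set()
    bound = next_prime ** 2
    for k in range(len(primes) + 1):
        for subset in combinations(primes, k):
            A = 1
            for p in subset:
                A *= p
            B = 1
            for p in primes:
                if p not in subset:
                    B *= p
            for q in (abs(A - B), A + B):
                if 1 < q < bound:
                    results.add(q)
    return sorted(results)

qs = generate([2, 3, 5], 7)   # p(1)..p(3), with p(4) = 7
print(qs)                     # [7, 11, 13, 17, 29, 31] -- every one prime
print(all(is_prime(q) for q in qs))   # True
```

Note that with the exponents fixed at 1 the output misses primes such as 19, 23, 37, 41, 43, and 47; reaching them requires varying the exponents, which is exactly the representability question raised later in the thread.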
 
  • #2
synergy said:
it's important to use ALL POSSIBLE disjoint partitions of {p(1),...,p(n)} into the two sets to be put into A and B.

Your algorithm can be ruled inefficient from this requirement alone.


If you're seeking primes less than N, you need to look at partitions of all primes that are less than sqrt(N).

Asymptotically, the number of primes less than sqrt(N) is greater than sqrt(N) / ln sqrt(N) = 2 * sqrt(N) / ln N.


The number of ways to split a set S into an ordered pair of complementary subsets is 2^|S| (each element independently goes to one side or the other). So, the number of partitions to consider is larger than 2^(2 * sqrt(N) / ln N).

For comparison, the most naive way to find the primes less than N is to simply trial divide each number less than N by every number less than N -- N^2 divisions.


This ultra-naive algorithm runs in time polynomial in N, whereas yours does not, so yours cannot be efficient.

For example, if N is 10^6, then the naive algorithm requires 10^12 divisions, whereas your algorithm needs to consider over 10^43 partitions.
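(A quick sketch to check that arithmetic; the exact constants don't matter for the comparison:)

```python
from math import log

N = 10**6
num_primes = 2 * N**0.5 / log(N)   # lower bound on the primes below sqrt(N)
print(num_primes)                  # ~144.8 primes to partition
print(num_primes * log(2, 10))     # ~43.6, i.e. over 10^43 partitions
```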
 
  • #3
And the efficiency of a naive sieving method: the time it takes to cross out all multiples of all numbers less than N is:

N/2 multiples of 2
N/3 multiples of 3
N/4 multiples of 4
...
------------------

which sums to approximately N(1/2 + 1/3 + ... + 1/N), which is approximately N ln N.
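(For reference, a minimal Python sketch of that sieve, crossing off multiples of every number, prime or not, exactly as counted above:)

```python
def naive_sieve(N):
    """Cross out all multiples of every k < N; the survivors are the primes."""
    is_prime = [True] * N
    is_prime[0] = is_prime[1] = False
    for k in range(2, N):
        for multiple in range(2 * k, N, k):   # about N/k multiples of each k
            is_prime[multiple] = False
    return [m for m in range(N) if is_prime[m]]

print(naive_sieve(50))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```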
 
  • #4
Hurkyl said:
The number of ways to split a set S into an ordered pair of complementary subsets is 2^|S|. So, the number of partitions to consider is larger than 2^(2 * sqrt(N) / ln N).


So we need to take this number: 2^(2 * sqrt(N) / ln N) and divide by the number of primes in the range to see how many partitions we will need for each prime we are seeking, correct?
 
  • #5
What if we found a "smart algorithm" that could eliminate most partitions as unlikely to produce outputs small enough? Then we could run just a single iteration, using most of the known consecutive primes and just a few partitions, and get many primes - not all the primes in the range, but many, some of which might not be known.

It's recently come to my attention, though, that the multiplications/exponentiations involved would be too big to even be doable for primes of a size that haven't been found yet. Is there, in fact, any kind of program that could handle these big numbers? A good test: if, for example, 10^40 were the largest prime below which all previous primes had been found (I'm not sure of the actual figure), do we have a program that can multiply EVERY OTHER prime [p(1), p(3), p(5), p(7), p(9), ... up to 10^20, the square root of 10^40] together reasonably quickly? If not, then the efficiency question is pretty well trashed. If so, then we might be able to get SOME primes this way. Whattaya think?

Btw, any more ideas on how I can show whether or not any primes are skipped, i.e. whether every prime p can be represented in this way as abs(A +/- B), where A and B between them have all of the primes less than the square root of p as factors, no other prime factors, and gcd(A,B) = 1? Even if it isn't efficient, I would like to close this last chapter on the subject before too many more years go by, but I just haven't seen the one clear path to a proof. Yet. You all know pretty much everything I've found on the subject; any ideas?
Aaron
 
  • #6
synergy said:
Is there, in fact, any kind of program that could handle these big numbers?

Java has a "BigInteger" class, and there's the GNU Multiple Precision library (GMP) for C/C++. Those are the only freely available packages I know off the top of my head for manipulating very large integers.
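(As an illustration of the scale involved: Python's built-in integers are also arbitrary precision, so a primorial over thousands of primes is routine. This sketch assumes sympy only as a convenient source of primes:)

```python
from math import prod
from sympy import primerange   # assumed here just to list primes

primes = list(primerange(2, 10**5))   # the 9,592 primes below 100,000
primorial = prod(primes)              # exact product -- no overflow
print(primorial.bit_length())         # an integer of roughly 144,000 bits
```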



Another thing to consider is how much work you spend at each step. The naive sieving algorithm I mentioned runs in N ln N time, and finds all N / ln N primes. On average it spends (ln N)^2 work per prime.

However, for each prime your algorithm finds, you have to have at least multiplied all of your primes together (plus or minus 2 multiplies) -- since you have about 2 sqrt(N) / ln N primes to multiply, you are doing 2 sqrt(N) / ln N work, on average... ignoring every step you take that gives you a duplicate prime, or a prime in the wrong range.

However,

(ln N)^2 << 2 sqrt(N) / ln N

so even with absolutely perfect operation of your algorithm, sieving is still faster.
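(A sketch of that comparison numerically -- the inequality is asymptotic, and the sieve's advantage is already decisive by N = 10^9:)

```python
from math import log, sqrt

for N in (10**6, 10**9, 10**12):
    sieve_work = log(N) ** 2               # work per prime, naive sieve
    generator_work = 2 * sqrt(N) / log(N)  # multiplications per prime, best case
    print(N, round(sieve_work), round(generator_work))
# 1000000 191 145
# 1000000000 429 3052
# 1000000000000 763 72382
```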
 
  • #7
Thanks

Thanks, now at least I can put to rest the idea that it's efficient, so I can concentrate on the theoretical aspect of the problem I was originally interested in, i.e. whether this is a valid representation of the set of primes. In other words, could EVERY prime be represented this way? It may not be efficient, but it might give us a greater understanding of Goldbach's conjecture or the Riemann hypothesis or some such thing. So, any ideas to prove/disprove that all primes have such a representation? Just one idea may be all I need to spark my creativity toward a complete proof. I have been too close to this theory for too long, and I have "writer's block" as to how to proceed next. Help? Anyone?
Aaron
 
  • #8
By the way, I have an online friend who has developed her own theory, which I am sworn not to give away. It is a test, not a generator, and it involves approximately (p^2)/4 sums in the worst case, with all of the terms being added less than sqrt(p), to prove p is prime. Could you tell me how efficient that would be with respect to other tests? It seems slow for small primes, but not much slower as the primes get bigger, so I'm thinking at some point it may be faster for large primes than other tests, if they slow down more as the primes get bigger. If you find this post first, please don't ignore the one directly above, as I am really interested in the question I ask there. Thank you.
Aaron
 
  • #9
So, your (or your friend's) test to see if P is prime takes P^2/4 operations? Even the simplest method - trial dividing by all numbers up to sqrt(P) - takes only sqrt(P) operations, or sqrt(P)/2 if we omit the evens, so your friend's method is slower by a *power* of 4 (P^2 versus P^(1/2)).
 
  • #10
I guess I didn't know whether being a sum rather than a division would make it quicker or not. After all, long division requires a bunch of subtractions - but I guess computers may do division in a way different from long division?
Aaron
 
  • #11
Well, if you can look up the running time of an addition vs. a division, you can compare them... but we can just look at the stupidest approach here: I can test whether N divides M by repeatedly subtracting N from M. That requires at most M subtractions (actually, more like M/N), and each subtraction takes around lg M time (again overestimating). So, this operation runs in time O(M lg M).

So if we're testing P for divisibility by every number less than sqrt(P), that can take at most O(P^(3/2) lg P) time... less than O(P^2) (and we're ignoring the fact that your friend's additions will take time). A reasonable division algorithm would be orders of magnitude faster.
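(A sketch of that deliberately stupid subtraction-based test, just to make the comparison concrete:)

```python
def divides_by_subtraction(n, m):
    """Test whether n divides m by repeated subtraction (about m/n steps)."""
    while m >= n:
        m -= n
    return m == 0

def is_prime_slow(p):
    """Trial 'division' of p by every d up to sqrt(p), done by subtraction."""
    d = 2
    while d * d <= p:
        if divides_by_subtraction(d, p):
            return False
        d += 1
    return p > 1

print([p for p in range(2, 50) if is_prime_slow(p)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```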
 
  • #12
Thanks Hurkyl,
I just got word a little bit ago that she is going to post her theory elsewhere. I haven't asked her if I can post it here yet, but I'm sure she'll let me. You guys are a bit more receptive than the other forum we've been working at, at least about my idea, so I'm sure you'll help her along when the time comes. By the way, there is another way to implement her test, but I think it is probably slower, as it requires the quadratic formula. Still, who knows, there might be a way to speed it up. I also forgot to mention that the test is quicker when it's testing a composite, so maybe it can help sieve probable primes quickly. It's a unique idea; it doesn't generate primes like mine did, but it does what it does without needing any other primes to be known. Probably other tests work that way too; I just think it's cool how this one works. You'll see, if I get her to let me post it.
Aaron
 
  • #13
One last question on my friend's test

Okay, after thinking about it I'm about certain my friend's theory is not faster COMPUTATIONALLY than the sieve. One last question: given the types of summations I talked about before, would it become *significantly* faster if you knew that the computer only has to store 4 variables, not counting the prospective prime? After all, the sieve has to store all the composites in a range until each is struck off, it has to update its memory to strike them off, and at each turn it has to load up a prime to strike off its multiples. Her test doesn't require storing any primes or composites that haven't been struck off yet - only partial sums of two bounded sequences. I'm not sure these factors are usually taken into account when defining the efficiency of an algorithm; the usual measure may only count the time for each computation, but memory swaps, loads, etc. all take some time, too. Whattaya think?
Aaron
 
  • #14
The naive method I mentioned requires even less storage. :-p Remember, I was just trial dividing N by every number less than sqrt(N) (whether it was prime or not)


To be considered efficient, a number-theoretic algorithm applied to numbers on the order of N = 2^n has to run in time that is polynomial in n (not N).


The AKS primality test, for example, runs in time less than n^6.1. While that's a high exponent for a polynomial time algorithm, it's still polynomial.

Even factoring, an inefficient operation in the worst case [1], is still subexponential; it runs in time faster than a^n for any a > 1.

Your friend's algorithm runs in time N^2/4 = 2^(2n)/4 -- fully exponential in n. It is simply not efficient.



As far as running times go, issues of inefficient memory storage, et al. are irrelevant. It might mean that for very small numbers the algorithm with worse time complexity but better memory usage will run faster... but that's only for small numbers. The computationally faster algorithm will always beat out the computationally slower algorithm for large enough numbers.

(note: asymptotic analysis ignores constant factors; it treats something that takes X operations the same as one that takes 2X operations)
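(A sketch of the n-versus-N distinction in numbers; the n^6 line just echoes the AKS-style exponent mentioned above:)

```python
N = 10**6
n = N.bit_length()   # 20 bits of input: the honest measure of input size
print(N ** 2)        # naive trial division: 10^12 steps, about 2^(2n)
print(n ** 6)        # a poly(n) bound like AKS: 64,000,000 steps
```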


[1]: Actually, the time complexity of factoring is an open question. We simply have not found an algorithm that can do it in polynomial time.
 
  • #15
Hurkyl said:
Even factoring, an inefficient operation in the worst case [1], is still subexponential; it runs in time faster than a^n for any a > 1.

Your friend's algorithm runs in time N^2/4 = 2^(2n)/4 -- fully exponential in n. It is simply not efficient.

[1]: Actually, the time complexity of factoring is an open question. We simply have not found an algorithm that can do it in polynomial time.

So I suppose her method isn't efficient at factoring, either. It does always give two factors, but 2^(2n) = 4^n grows faster than any subexponential bound, so I guess I've answered my own question.
 
