I was recently writing a program that needed to encode a permutation
into a single integer (an index, from 0 to n! - 1).
The program transformed the permutation (e.g., of 5 numbers) into
a "series of indexes" in the ranges [0..4], [0..3], [0..2], [0..1],
4 indexes in total (one fewer than the permutation size, since
the last member of the permutation is completely determined by the choices
of the previous members, and thus need not be represented).
The first index directly represents the first number;
the second index is the *relative position* of the second number
in the set 1..5 with the first number removed (the "remaining numbers");
the third index is the position in the set 1..5 with the first and second
removed, and so on. (That's why the indexes' ranges get smaller
with each new "digit".) Then the digits are multiplied by factorials
to assemble the final number:
[0..4]·4! + [0..3]·3! + [0..2]·2! + [0..1]·1!
(The factorials arise the same way as when you assemble a polynomial in
Horner form, except that "x" increases from 1 to n-1 at each step;
so you get factorials instead of powers of x.)
This produces a nice sequence of indexes from 0 to n! - 1, running over
all permutations (1,2,3,4,5), (1,2,3,5,4), ..., (5,4,3,2,1) in order.
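For concreteness, here is a minimal Python sketch of that encoding (the
function name perm_to_index and the list-based bookkeeping are my own
choices for illustration, not the original program):

from math import factorial

def perm_to_index(perm):
    # perm is a permutation of 1..n, e.g. (3, 1, 5, 2, 4)
    n = len(perm)
    remaining = sorted(perm)  # the "remaining numbers", initially 1..n
    index = 0
    for i, x in enumerate(perm[:-1]):     # the last member is forced, skip it
        digit = remaining.index(x)        # relative position, in [0 .. n-1-i]
        remaining.pop(digit)              # remove it from the remaining set
        index += digit * factorial(n - 1 - i)  # weight the digit by (n-1-i)!
    return index

# perm_to_index((1,2,3,4,5)) == 0, perm_to_index((1,2,3,5,4)) == 1,
# ..., perm_to_index((5,4,3,2,1)) == 119 == 5! - 1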
What struck me was the fact that the greatest index, n! - 1,
was represented as 4·4! + 3·3! + 2·2! + 1·1!.
How in heaven would you prove that sum_{i=1}^{n-1} i·i! = n! - 1 ??
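(A quick numeric sanity check, just a throwaway Python sketch with an
arbitrary upper bound, agrees for small n:

from math import factorial

for n in range(2, 9):
    lhs = sum(i * factorial(i) for i in range(1, n))  # sum_{i=1}^{n-1} i·i!
    assert lhs == factorial(n) - 1  # holds for every n tested

but that's evidence, not a proof.)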
(hey! programmer here! - if you had a homomorphism mapping people to
math knowledge, you'd call its kernel "computer programmers" :P )