#1 (iScience)
Is this accurate?
$$x = some\_number$$
$$bits(x) = \frac{\log(x)}{\log(2)}$$
iScience said:
Is this accurate?
$$x = some\_number$$
$$bits(x) = \frac{\log(x)}{\log(2)}$$

No, not quite. The fraction you have on the right is the same as ##log_2(x)##.
Mark44 said:
As to why your formula isn't correct, consider x = 4, and that ##log_2(4) = 2##. It takes 3 bits (##100_2##) to represent 4.

If you want to represent positive integers only, you can drop the leading 1. It will be there for every number anyway.
mfb said:
If you want to represent positive integers only, you can drop the leading 1.

I don't understand what you're saying. I was only considering positive integers. The binary representation of 4 as an unsigned number is ##100_2##. Are you interpreting the 1 digit to mean the number is negative?
mfb said:
It will be there for every number anyway.

?
Mark44 said:
?

The binary representation for every positive integer starts with a 1. If you are interested in saving bits, as the title suggests, you do not have to store this 1. Floating point numbers do exactly this: they just do not store the 1 because it would be fully redundant.
mfb said:
The binary representation for every positive integer starts with a 1. If you are interested in saving bits, as the title suggests, you do not have to store this 1.

The trouble is, negative integers are stored with a 1 digit in the most significant bit (MSB).
mfb said:
Floating point numbers do exactly this: they just do not store the 1 because it would be fully redundant.

Some floating point formats do this. The ones that don't follow the older x87 Extended Precision format.
From the description of that format: "In contrast to the single and double-precision formats, this format does not utilize an implicit/hidden bit."

The Borland C/C++ compiler back in the early 90's had an 80-bit long double type based on this format. The Microsoft compilers never did, to the best of my knowledge.
Mark44 said:
No, not quite. The fraction you have on the right is the same as ##log_2(x)##. As to why your formula isn't correct, consider x = 4, and that ##log_2(4) = 2##. It takes 3 bits (##100_2##) to represent 4.

If you want to represent integer numbers between 0 and x (inclusive), the correct equation is ##ceiling(log_2(x+1))##.
Mark44 said:
The title of the thread is "Number of bits it takes to represent a number". With "bits" which is short for "binary digits," it's reasonable to assume that we're talking about a binary representation.

I don't think so, and even if we would, storing them does not have to happen in the same way we would write those numbers down on paper.
Mark44 said:
Some floating point numbers do this.

Well, most do. A few do not.
mfb said:
I don't think so, and even if we would, storing them does not have to happen in the same way we would write those numbers down on paper.

Of course, but this thread is about how they are represented, not how they can be stored. Whether one bit is implied or not, it takes three bits to represent 4, so the OP's formula as stated doesn't give the right result.
Tom.G said:
Represent to a human or to a computer?

Both.
mfb said:
I can represent 4 as "00", and no one can stop me. "Number of bits it takes" implies that we want to minimize the number we need. Using binary representation would need one bit more than necessary.

I think that you are arguing just to be arguing. Given that this thread is in the Programming and Computer Science section, can you give us an example of one computer system or programming language or standard in which 4 is represented as "00"? I'll stipulate that the IEEE 754 standard for single-precision and double-precision floating point numbers does use an implied bit that is not stored, but can you show a similar standard for integer values?
mfb said:
I'm not aware of any computer system which stores 4 as "100" either. 00000100 and similar - sure, and you cannot drop a leading 1 here because there is none.

I wrote ##100_2## as an abbreviation for the full byte representation ##00000100_2##.
Mark44 said:
I think that you are arguing just to be arguing. Given that this thread is in the Programming and Computer Science section, can you give us an example of one computer system or programming language or standard in which 4 is represented as "00"?

Data compression. A 4 can be represented and stored in lots of ways. Those ways can depend on context. If I have a long string of 4's, I can represent those 4's pretty cheaply.
mfb said:
I'm not aware of any computer system which stores 4 as "100" either. 00000100 and similar - sure, and you cannot drop a leading 1 here because there is none. But a storage format that uses exactly the number of bits it needs (plus one)?

There are many formats that compress all needed data down to only a few bytes; I've run across them mostly when doing serial communications. They try to pack as much information as possible in those bits.
struct packedStructure {
    unsigned int threeBits : 3;  // 3-bit field: can hold 0 through 7
};

int main(int, char**) {
    packedStructure mybits;
    mybits.threeBits = 4;  // Okay: 4 fits in 3 bits
    mybits.threeBits = 8;  // Compile with -Wall and this will warn you of an overflow
    return 0;
}
Mark44 said:
The Borland C/C++ compiler back in the early 90's had an 80-bit long double type based on this format. The Microsoft compilers never did, to the best of my knowledge.

Most if not all 16-bit Microsoft compilers supported 80-bit long doubles. This was dropped in the 32-bit and 64-bit Microsoft compilers, and long double is now treated the same as double (64 bits).
glappkaeft said:
That has never been true, the latest VS compilers from 2015 still supports 80 bit floating point and often still default to it.
https://msdn.microsoft.com/en-us/library/9cx8xs15.aspx

Did you read the page you linked to? From that page:
"Previous 16-bit versions of Microsoft C/C++ and Microsoft Visual C++ supported the long double, 80-bit precision data type. In Win32 programming, however, the long double data type maps to the double, 64-bit precision data type."
Per the quote from the MSDN page whose link you provided, Microsoft compilers do not support 80-bit floating point numbers. You can declare a variable of type long double, but that's equivalent to declaring it double; i.e., 64-bit floating point.
glappkaeft said:
The point most people participating in the thread should remember is that the Shannon information theorem does not consider the value of the largest number you want to represent but instead only the number of symbols you want to represent.

In the spirit of "you know what I mean", the answer is one of these:
- x = some number: the set is {x}, thus ##log_2(1) = 0## bits
- {1, 2, ..., x} uses ##log_2(x)## bits
- {0, 1, ..., x} uses ##log_2(x+1)## bits
- and many other possibilities
mfb said:
The binary representation is 100. Who said we are limited to the binary representation? We can save bits with the following scheme, assuming we know where digits end:
1: ""
2: "0"
3: "1"
4: "00"
5: "01"
6: "10"
and so on.
mfb said:
I can represent 4 as "00", and no one can stop me.

If we are playing this game, then the answer is always 0. Just use the null bit "".
Jarvis323 said:
If we are playing this game, then the answer is always 0. Just use the null bit "".

If it is given that we already know it's a four, then you don't need to store that information. You see that with optimizing compilers - they'll deal with the "4" at compile time and no "4" will exist in the actual executable code. For example, with "float area=4*3.14159*(radius*radius);", don't expect to see a four in the executable code.
mfb said:
Which number does it represent, and what about all the others?

Depends on what game we are playing. One game we could play is:
The number of bits needed to represent every integer from 0 up to some number x is ##ceiling(log_2(x+1))##, as discussed above. For example, to cover everything up to 10 we need ##ceiling(log_2(11)) = 4## bits.
The number of bits determines the range of numbers that can be represented. For example, with 4 bits, we can represent ##2^4 = 16## different numbers, ranging from 0 to 15.
No, not all numbers can be represented with a finite number of bits. Some numbers, such as irrational numbers, have an infinite, non-repeating expansion and cannot be represented exactly with any finite number of bits.
The number of bits directly affects the precision of a number: the more bits used, the more distinct values can be told apart. With 4 bits we can distinguish only ##2^4 = 16## values, whereas with 8 bits we can distinguish ##2^8 = 256## values (0 to 255 as unsigned integers); in a fixed-point or floating-point format, extra fraction bits translate into finer resolution.
The maximum unsigned number that can be represented with a given number of bits is ##2^n - 1##, where n is the number of bits. For example, with 8 bits, the maximum number that can be represented is ##2^8 - 1 = 255##.