jackson6612
I have read somewhere: "Any decimal number in the range 0 to 2^(n-1) can be represented in binary form as an n-bit number."
I suspect that's wrong. Shouldn't it instead be 0 to (2^n) - 1?
Please guide me. Thanks.
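A quick sketch to check the suspicion: with n bits you have 2^n distinct bit patterns, so the representable unsigned values run from 0 through (2^n) - 1, not 2^(n-1). For example, with the assumed value n = 4:

```python
# With n bits there are 2**n distinct patterns, covering 0 .. 2**n - 1.
n = 4
values = list(range(2 ** n))        # every value a 4-bit unsigned number can hold
print(len(values))                  # 16 patterns in total, i.e. 2**n
print(min(values), max(values))     # 0 and 15, i.e. 0 and 2**n - 1
print(format(max(values), "b"))     # '1111' -- n ones, the largest n-bit value
```

Note that 2^(n-1) = 8 for n = 4, which is clearly not the top of the range, so the quoted statement as parenthesized is incorrect.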