July 15th, 2013, 08:42 AM
Unable to understand one line of code.
I am unable to understand the meaning of the following line of code.
- The code is from the book "Fundamentals of Embedded Software" by D.W. Lewis, page 44, example 3-1. It is about displaying the size and range of C integer types.
I am not writing the full code here; in short, it is a function-like macro definition, and the following line is a bit confusing for me. I think I have included everything necessary.
#define RANGE(type, name)
type minival, maxval, bits;
bits = 1;
bit = 1;
while (bit <<= 1) bits++; /* <-- Here is my confusion. I know "<<=" involves the bitwise left-shift operator, but I am unable to get its meaning in this line. */
Thanks in advance.
July 15th, 2013, 09:19 AM
What's bit? I don't see it declared anywhere.
Assuming bit is either unsigned int or unsigned char, what is it that you don't understand?
Is it bit <<= 1 ?
It's just like writing
n += 42;
, which is equivalent to
n = n + 42;
; likewise, bit <<= 1 is equivalent to bit = bit << 1. Was that it?
It is customary to write that statement on two lines for readability:
while (bit <<= 1)
    bits++;
Was that it?
When the last 1-bit gets shifted out of bit, its value will be zero, which is false, so while will drop out of the loop. Was that it?
Is your question what that's supposed to accomplish? Given that bit was initialized to 1, bit won't become zero until that 1 has been shifted out, so I'm guessing that it's to count the number of bits in whatever type bit was declared as, which we do not know.
OK, I can't waste any more time with guessing games.
July 15th, 2013, 11:31 AM
If you are going to post code, at least post real code that is syntactically and semantically correct! Ignoring the fact that this is neither a valid macro nor a valid function definition, and that you declare bits but then use an undeclared bit, I am going to make some assumptions about the real code you did not post.
The variable bit is initialised to 1 (the LSB contains a 1 bit). Regardless of what type is specified by the type macro argument, when bit is shifted left, the LSB is replaced with zero and the 1 moves up. The loop increments the counter bits until bit becomes zero (when the 1 is shifted out of the integer altogether), so that when the loop exits, bits is equal to the number of bits in type.
All that said, the simpler solution is simply:
int bit_count = sizeof(bits) * CHAR_BIT;
where CHAR_BIT is defined in the standard header limits.h. sizeof(bits) * CHAR_BIT is a compile-time constant, so there is no code to execute to determine the number of bits. The point is that the compiler already knows the size of every data type; you never need run-time code to determine that.
Further, limits.h also defines the ranges of the fundamental integer types, rendering this whole exercise somewhat pointless.
Last edited by clifford; July 15th, 2013 at 11:38 AM.
July 15th, 2013, 11:42 AM
Fine, but it should at least be real and accurate code! :rolleyes:
Originally Posted by jaysinhp
You are confused by this code, so by definition you are not really qualified to decide what is or is not sufficient. What is so hard about copying and pasting the entire macro verbatim and letting those who know decide what is relevant!?
July 16th, 2013, 09:00 AM
Sorry, it's a typo. Let me make it clear with the first line of code.
The code is about finding the total number of bits in any variable. I hope this will clear up the confusion. The reason for not writing the entire code is that I checked twice and there is no need to put the whole code here. I am sorry for writing "bits" instead of "bit".
type minval, maxval, bit; /* In the previous code I declared this as "bits", but that was a typo; it is "bit" */