November 27th, 2012, 03:17 AM
Why the variation in data type sizes?
How is the size of the integer, char, etc. data types determined in C programming? Is it based on the compiler/processor in use, or are there other factors involved?
A few websites say int takes 2 or 4 bytes on a 32-bit machine and 4 or 8 bytes on a 64-bit machine. I am confused by this.
If a proper link is available, please share it with me.
November 27th, 2012, 04:48 AM
C and C++ were intended to be a higher-level, more human-readable version of assembler, which is itself little more than a human-readable (sort of) form of machine language. As such, there is an intimate connection between C/C++ and the machine, and each machine is built with different capabilities. sizeof(int) is generally expected to be a 'word' on the hardware, and a word is generally expected to be the block of bits the processor handles fastest. C/C++ runs on everything from 8-bit microcontrollers all the way up to massively parallel 64-bit supercomputers (and everything in between), so that sort of flexibility is absolutely necessary to allow compiler writers to produce the most efficient code possible for each particular architecture.
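If you want to see what your own compiler picked, a small program along these lines will print it (the output will differ between compilers and target architectures, which is exactly the point):

#include <stdio.h>

int main(void)
{
    /* These sizes are implementation-defined; expect different
       numbers on different compilers and architectures. */
    printf("char      : %zu byte(s)\n", sizeof(char));
    printf("short     : %zu byte(s)\n", sizeof(short));
    printf("int       : %zu byte(s)\n", sizeof(int));
    printf("long      : %zu byte(s)\n", sizeof(long));
    printf("long long : %zu byte(s)\n", sizeof(long long));
    return 0;
}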
November 27th, 2012, 04:52 AM
About the C standards
The C standards give minimum ranges for the standard data types: for example, CHAR_BIT must be at least 8, INT_MAX must be at least +32767, and INT_MIN must be at most -32767 (i.e. at least 32767 in magnitude).
The appropriate constants (for your compiler) are listed in limits.h, and reflect what the compiler writer chose for your machine. If you want to know what the minimum guarantee from the standard is, then you need to read the standard.
Beyond that, the compiler writer is free to choose whatever is most sensible / efficient for the particular machine architecture. So 2-, 4-, or 8-byte integers are all allowed, because all of them meet the minimum guarantee.
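As a quick illustration, you can print the limits.h constants your implementation actually chose with something like this (values will vary with the compiler and target):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* These constants come from limits.h and reflect what the
       compiler writer chose for this particular implementation. */
    printf("CHAR_BIT : %d\n", CHAR_BIT);
    printf("INT_MIN  : %d\n", INT_MIN);
    printf("INT_MAX  : %d\n", INT_MAX);
    printf("LONG_MAX : %ld\n", LONG_MAX);
    return 0;
}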
Since C99 there is stdint.h as well, which declares types like int32_t, an integer type with exactly 32 bits (regardless of what a bare int is implemented as).
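For example, a sketch like the following uses a fixed-width type (assuming your implementation provides int32_t, which is optional but available on any common platform with 8-bit bytes):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* int32_t is exactly 32 bits wherever it is provided,
       no matter what a bare int happens to be on this machine. */
    int32_t fixed = 100000;
    printf("sizeof(int32_t) = %zu, value = %" PRId32 "\n",
           sizeof(fixed), fixed);
    return 0;
}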
November 28th, 2012, 04:46 PM
It is defined by the compiler within certain constraints set by the ISO language standard. The ISO standard is the definitive place to look, but here is a simple summary.
Where one is defined, you can reasonably expect a compiler to follow the ABI of the particular architecture and/or operating system that it targets.