#1
    Contributing User
    Devshed Newbie (0 - 499 posts)
    Join Date: Oct 2012
    Posts: 30
    Rep Power: 2

    Why the variation in data type sizes?


    How is the size of int, char, and the other data types determined in C? Is it based on the compiler/processor in use, or are there other factors involved?

    Some websites say an int takes 2 or 4 bytes on a 32-bit machine and 4 or 8 bytes on a 64-bit machine. I am confused by this.

    If a proper link is available, please share it.
#2
    I'm Baaaaaaack!
    Devshed God 1st Plane (5500 - 5999 posts)
    Join Date: Jul 2003
    Location: Maryland
    Posts: 5,538
    Rep Power: 244

    C and C++ were intended to be a higher level, more human-readable version of assembler, which is itself nothing more than (sort of) human-readable machine language. As such, there is an intimate connection between C/C++ and the machine, and each machine is built with different capabilities. sizeof(int) is generally expected to be a 'word' on the hardware, and a word is generally expected to be the fastest processed block of bits. C/C++ runs on everything from 8-bit microcontrollers all the way up to massively parallel 64-bit supercomputers (and everything in between), so that sort of flexibility is absolutely necessary to allow compiler writers to produce the most efficient code possible for each particular architecture.
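
    If you just want to see what your own compiler picked, a quick sketch like the one below will print it; the numbers are whatever your particular compiler and architecture use, not fixed answers.
    Code:
    #include <stdio.h>

    int main(void)
    {
        /* sizeof reports how many bytes this compiler chose for each type */
        printf("char:      %zu\n", sizeof(char));   /* always 1 by definition */
        printf("short:     %zu\n", sizeof(short));
        printf("int:       %zu\n", sizeof(int));
        printf("long:      %zu\n", sizeof(long));
        printf("long long: %zu\n", sizeof(long long));
        printf("pointer:   %zu\n", sizeof(void *));
        return 0;
    }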

    My blog, The Fount of Useless Information http://sol-biotech.com/wordpress/
    Free code: http://sol-biotech.com/code/.
    Secure Programming: http://sol-biotech.com/code/SecProgFAQ.html.
    Performance Programming: http://sol-biotech.com/code/PerformanceProgramming.html.
    LinkedIn Profile: http://www.linkedin.com/in/keithoxenrider

    It is not that old programmers are any smarter or code better, it is just that they have made the same stupid mistake so many times that it is second nature to fix it.
    --Me, I just made it up

    The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.
    --George Bernard Shaw
#3
    Contributed User
    Devshed Specialist (4000 - 4499 posts)
    Join Date: Jun 2005
    Posts: 4,385
    Rep Power: 1871

    About the C standards
    The C standards give minimum ranges for the standard data types: for example, CHAR_BIT must be at least 8, INT_MAX must be at least +32767, and INT_MIN must be at most -32767.

    The appropriate constants (for your compiler) are listed in limits.h, and reflect what the compiler writer chose for your machine. If you want to know what the minimum guarantee from the standard is, then you need to read the standard.

    Beyond that, the compiler writer is free to choose whatever is most sensible / efficient for the particular machine architecture. So 2-, 4- or 8-byte integers are all allowed, because all of them meet the minimum guarantee.
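
    For example, here is a small sketch that prints the actual values your implementation put in limits.h; your numbers may differ, subject only to those minimum guarantees.
    Code:
    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* these macros are filled in by the implementation for the target machine */
        printf("CHAR_BIT = %d\n",  CHAR_BIT);
        printf("INT_MIN  = %d\n",  INT_MIN);
        printf("INT_MAX  = %d\n",  INT_MAX);
        printf("LONG_MIN = %ld\n", LONG_MIN);
        printf("LONG_MAX = %ld\n", LONG_MAX);
        return 0;
    }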

    Since C99, there is stdint.h as well, which declares types like int32_t, an integer with exactly 32 bits (regardless of how a bare int is implemented).
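
    A rough sketch of the difference (note that the exact-width types such as int32_t are optional, although virtually every mainstream platform provides them; the least-width types are always there):
    Code:
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        int           plain    = 42;   /* whatever width the compiler chose */
        int32_t       exact    = 42;   /* exactly 32 bits, if provided */
        int_least32_t at_least = 42;   /* at least 32 bits, always provided */

        printf("sizeof(int)           = %zu\n", sizeof plain);
        printf("sizeof(int32_t)       = %zu\n", sizeof exact);
        printf("sizeof(int_least32_t) = %zu\n", sizeof at_least);
        printf("exact = %" PRId32 "\n", exact);
        return 0;
    }
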
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper
#4
    Contributing User
    Join Date: Aug 2003
    Location: UK
    Posts: 5,110
    Rep Power: 1803

    It is defined by the compiler within certain constraints set by the ISO language standard. The ISO standard is the definitive place to look, but here is a simple summary.

    Where one is defined, you can reasonably expect a compiler to follow the ABI of the particular architecture and/or operating system it targets.
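
    If your code depends on particular sizes from a given ABI, one way to make that assumption explicit is a compile-time check. This is only a sketch using C11's _Static_assert, and the specific sizes asserted are just an example of what one ABI might promise, not a universal rule.
    Code:
    #include <limits.h>

    /* Sketch: document the sizes this code assumes from the target ABI.
       The build fails loudly if it is compiled for a platform that differs. */
    _Static_assert(CHAR_BIT == 8,       "this code assumes 8-bit bytes");
    _Static_assert(sizeof(int) == 4,    "this code assumes a 32-bit int");
    _Static_assert(sizeof(void *) == 8, "this code assumes 64-bit pointers");

    int main(void)
    {
        return 0;
    }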
