#1
  1. No Profile Picture
    Registered User
    Devshed Newbie (0 - 499 posts)

    Join Date
    May 2014
    Posts
    2
    Rep Power
    0

    Confusion about the different sizes of variable types


    Hi.

    I'm a beginner, of course. I find it hard to memorize the sizes of all the variable types: short, int, float, long, and so on. Is it necessary to learn all of these details? By details, I mean the exact size of each type.

    And if possible, could anyone explain why being aware of this is important for future coding?

    Thank you in advance to anyone who can help.
  2. #2
  3. Lord of the Dance
    Devshed Specialist (4000 - 4499 posts)

    Join Date
    Oct 2003
    Posts
    4,131
    Rep Power
    2011
    The important thing with sizes is to know what number range each type supports. Also, there is a difference between integer types and floating-point types.
    This can, for example, be seen here.
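    The integer/floating-point difference shows up even in plain division: integer division truncates, while floating-point division keeps the fraction. A minimal sketch:
    Code:
    #include <stdio.h>

    int main(void)
    {
        printf("%d\n", 5 / 2);     /* integer division: prints 2 */
        printf("%f\n", 5.0 / 2.0); /* floating-point division: prints 2.500000 */
        return 0;
    }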
  4. #3
  5. No Profile Picture
    Registered User
    Devshed Newbie (0 - 499 posts)

    Join Date
    May 2014
    Posts
    2
    Rep Power
    0
    Thanks for the prompt reply!!!
    So this means I really do need to practice memorizing this...

    BTW, do you have any recommendations for practice problem sets?

    Originally Posted by MrFujin
    The important thing with sizes is to know what number range each type supports. Also, there is a difference between integer types and floating-point types.
    This can, for example, be seen here.
  6. #4
  7. Contributing User
    Devshed God 1st Plane (5500 - 5999 posts)

    Join Date
    Aug 2011
    Posts
    5,888
    Rep Power
    509
    In practice I've not needed to memorize all that stuff.
    Firstly, the integer ranges are trivial, being powers of 256 (one factor of 256 per byte), with appropriate adjustments for signedness.
    The sizeof operator tells you how many bytes a type occupies.
    limits.h provides the ranges.
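    A minimal sketch showing both, assuming a hosted C99 compiler:
    Code:
    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* sizeof reports the storage size in bytes on this compiler */
        printf("short: %zu, int: %zu, long: %zu bytes\n",
               sizeof(short), sizeof(int), sizeof(long));

        /* limits.h reports the corresponding ranges */
        printf("int:  %d to %d\n", INT_MIN, INT_MAX);
        printf("long: %ld to %ld\n", LONG_MIN, LONG_MAX);
        return 0;
    }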

    For floating point I almost always choose double; however, for a long time now, thirty years or so, I've recognized that float usually has sufficient precision (though maybe not enough range for scientific work), whereas the choice of floating-point algorithm is absolutely critical.

    Summary: know the approximate ranges of the data your program will handle. If something feels like it might be extreme, look it up.
    [code]Code tags[/code] are essential for Python code and Makefiles!
  8. #5
  9. Lord of the Dance
    Devshed Specialist (4000 - 4499 posts)

    Join Date
    Oct 2003
    Posts
    4,131
    Rep Power
    2011
    Experience is your best friend. :)

    But a very basic rule list could be to use:
    - unsigned if you only care about positive integer values
    - int for general integer values
    - long only if you work on a system where int is 2 bytes in size
    - double when you need a floating-point number (a number with a decimal part)
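    Put together, a minimal sketch of those rules (the variable names are just for illustration):
    Code:
    #include <stdio.h>

    int main(void)
    {
        unsigned count = 42;       /* only positive integer values */
        int      delta = -3;       /* general integer values */
        long     big   = 100000L;  /* safe even where int is 2 bytes */
        double   ratio = 0.75;     /* floating-point number */

        printf("%u %d %ld %f\n", count, delta, big, ratio);
        return 0;
    }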
  10. #6
  11. Banned ;)
    Devshed Supreme Being (6500+ posts)

    Join Date
    Nov 2001
    Location
    Woodland Hills, Los Angeles County, California, USA
    Posts
    9,782
    Rep Power
    4301
    No, you don't have to memorize it, and that URL has several mistakes anyway. For instance, it does NOT mention that the size of the different variable types is compiler-specific. The C standard only specifies the minimum sizes of certain types. It does NOT specify the maximum size of any type, and it has some loose rules about each type as well. For instance, the "short" type is defined to be at least 16 bits in size, but must not exceed the size of int (though it can equal it). Similarly, an int has to be at least 16 bits in size and must not exceed the size of long (though it can equal it). A char has to be at least 8 bits long, but can be longer.

    Therefore, in some compilers (e.g. Turbo C, Quick C, Lattice C, etc.), short and int are 16 bits (-32,768 to 32,767) and long is 32 bits (-2,147,483,648 to 2,147,483,647).
    In gcc for 32-bit x86 Linux and in Visual C++ (both the 32- and 64-bit versions), short is 16 bits, and int and long are both 32 bits.
    In gcc for 64-bit Linux, int is 32 bits, but long is 64 bits.
    In some embedded C compilers, a char is 16 bits, as is short.

    Note that all these compilers are obeying the C standard for data type sizes correctly. In fact, a compiler can make char, short, int and long all 64 bits and still be compliant with the standard.

    Therefore, memorizing that int and long can only hold values between -2,147,483,648 and 2,147,483,647 is wrong, because with gcc on many 64-bit *nix systems, long can hold larger numbers than this, and on many 16-bit compilers, int holds a lot less. You should pick your variable types based on the compiler.

    This is why a lot of software written for cross-platform purposes uses #ifdef to adjust the variable sizes properly according to the compiler.
    Code:
    #ifdef __TURBOC__
    typedef long MYINT;   /* int is only 16 bits on Turbo C, so use long */
    #elif defined(__GNUC__)
    typedef int MYINT;    /* int is 32 bits on gcc for x86 and x86-64 */
    #elif defined(_MSC_VER)
    typedef int MYINT;    /* int is 32 bits on Visual C++ */
    #endif
    
    MYINT a, b;
    Another way (at least for compilers that support C99) is to use the fixed-width types from stdint.h: int8_t, int16_t, int32_t, etc. (and their unsigned counterparts uint8_t, uint16_t, etc.) to declare variables:
    Code:
    #include <stdint.h>

    uint8_t foo; // Guaranteed 8-bit unsigned
    int32_t bar; // Guaranteed 32-bit signed
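    A related detail: when printing these fixed-width types with printf, the matching format macros come from inttypes.h. A minimal sketch:
    Code:
    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t foo = 200;
        int32_t bar = -123456;

        /* PRIu8 / PRId32 expand to the correct printf conversion for each type */
        printf("foo = %" PRIu8 ", bar = %" PRId32 "\n", foo, bar);
        return 0;
    }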

    Comments on this post

    • kicken agrees
    Last edited by Scorpions4ever; May 3rd, 2014 at 06:42 PM.
    Up the Irons
    What Would Jimi Do? Smash amps. Burn guitar. Take the groupies home.
    "Death Before Dishonour, my Friends!!" - Bruce D ickinson, Iron Maiden Aug 20, 2005 @ OzzFest
    Down with Sharon Osbourne

    "I wouldn't hire a butcher to fix my car. I also wouldn't hire a marketing firm to build my website." - Nilpo
