Thread: C programming

    #1
    Registered User · Devshed Newbie (0 - 499 posts) · Join Date: Sep 2012 · Posts: 1 · Rep Power: 0

    C programming


    I have seen that the size of an integer (int) is 2 bytes on Windows but 4 bytes in a Linux terminal.
    Can anyone give a reason why that happens?

    And why do the sizes of float and char stay the same on both OSes?
    rply fast...
    #2
    I'm Baaaaaaack!
    Devshed God 1st Plane (5500 - 5999 posts) · Join Date: Jul 2003 · Location: Maryland · Posts: 5,538 · Rep Power: 244
    I don't imagine you Googled, did you? Windows started as a 16-bit system on top of 16-bit DOS, hence a 'word' was 2 bytes. It then got 'promoted' to 32 bits, so a 'word' was now 4 bytes, but there was a whole lot of legacy code that required support, so the size of 'int' has drifted around during the interval. Linux has always been (at least) 32-bit, so there is a wee bit less confusion, though the transition to 64 bits has introduced some issues. If you look at the macros in limits.h you can find out exactly what range (and hence size) each data type has on your platform.

    My blog, The Fount of Useless Information http://sol-biotech.com/wordpress/
    Free code: http://sol-biotech.com/code/.
    Secure Programming: http://sol-biotech.com/code/SecProgFAQ.html.
    Performance Programming: http://sol-biotech.com/code/PerformanceProgramming.html.
    LinkedIn Profile: http://www.linkedin.com/in/keithoxenrider

    It is not that old programmers are any smarter or code better, it is just that they have made the same stupid mistake so many times that it is second nature to fix it.
    --Me, I just made it up

    The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.
    --George Bernard Shaw
    #3
    Banned ;)
    Devshed Supreme Being (6500+ posts) · Join Date: Nov 2001 · Location: Woodland Hills, Los Angeles County, California, USA · Posts: 9,643 · Rep Power: 4247
    Originally Posted by vikas.boundless
    I have seen that the size of an integer (int) is 2 bytes on Windows but 4 bytes in a Linux terminal. Can anyone give a reason why that happens?

    And why do the sizes of float and char stay the same on both OSes?
    rply fast...
    Lemme guess: you're using Turbo C (an ancient DOS compiler) on a Windows machine; it's the only compiler I know of that still uses 16-bit ints. If you use a more modern C compiler (such as Visual C++ or gcc or whatever) on the very same machine, you'll see that the size of an int is 4 bytes.

    The size of an int is compiler-dependent per the C standards. There is a lower limit on how many bits an int needs (the standard says 16 bits minimum), but it doesn't say what the maximum is. So it can be 32 bits, 64 bits, 80 bits, 128 bits or whatever; it is left up to the compiler vendor to decide what it should be.

    Also, by the C standard, sizeof(char) == 1 always, regardless of whether a char is 8-bit, 16-bit or wider.
    Up the Irons
    What Would Jimi Do? Smash amps. Burn guitar. Take the groupies home.
    "Death Before Dishonour, my Friends!!" - Bruce Dickinson, Iron Maiden Aug 20, 2005 @ OzzFest
    Down with Sharon Osbourne

    "I wouldn't hire a butcher to fix my car. I also wouldn't hire a marketing firm to build my website." - Nilpo
    #4
    Contributing User · Devshed Newbie (0 - 499 posts) · Join Date: Sep 2012 · Posts: 62 · Rep Power: 4
    Originally Posted by mitakeet
    Linux has always been (at least) 32 bit, so a wee bit less confusion, though the transition to 64 bit has introduced some issues.
    Linus Torvalds did cut his teeth on the Sinclair QL, which, according to Sir Clive Sinclair, was a 32-bit 'serious' computer. Although it only had an 8-bit data bus, its 68008 CPU was internally 32-bit, and some of the upgrades, such as the Super Gold Card, made it pack a real punch. Linux itself, though, began life on the 32-bit Intel 386.

    Regards,

    Shaun.
    #5
    Contributing User · Join Date: Aug 2003 · Location: UK · Posts: 5,116 · Rep Power: 1803
    Originally Posted by mitakeet
    If you access the macros in limits.h you can find out exactly what size each data type is.
    Or just use sizeof(<datatype>).
    #6
    Contributing User · Join Date: Aug 2003 · Location: UK · Posts: 5,116 · Rep Power: 1803
    Originally Posted by Scorpions4ever
    The size of integer is compiler-dependent per the C standards.
    It is, but for the sake of interoperability most compilers also comply with a specific ABI, which is architecture- and/or OS-dependent.
    #7
    Contributing User · Join Date: Aug 2003 · Location: UK · Posts: 5,116 · Rep Power: 1803
    Originally Posted by vikas.boundless
    and why size of float and char remain same in both os...
    The sizes of float and double are determined by the representation and precision required. Most commonly IEEE 754 is used, not least because that is the representation used by most hardware floating-point units, including the x86's.

    The size of a char is intended to be the size of a character in the execution environment, but in practice it is the size of the smallest separately addressable memory location, which on most processors is 8 bits. The official definition of char would cause problems in Unicode execution environments, where, depending on the encoding, a character can be variable in width. Even in environments such as Windows, where the Unicode encoding is 16-bit, making char that size would make 8-bit data unaddressable without introducing a new data type name. C# overcomes this by having a 16-bit char type and an 8-bit byte type.


    Originally Posted by vikas.boundless
    rply fast...
    A bit pushy. You are lucky that such rudeness has been mostly ignored; often it would result in getting no useful answer at all. Not everyone will be in the same time zone as you; 24 hours is a reasonable wait, and no one has to answer your question.
