I don't have the time or inclination to look up the proper terminology at this very moment, but you have to add a suffix to a numeric literal if it is not the default type.
The default integer type is int and the default floating-point type is double. int is only guaranteed to range over roughly +/- 32K. To make a literal a long int, append an 'L' to it. To make it unsigned, append a 'U'. To make a floating-point literal a float rather than a double, append an 'F'. The 'L', the 'U', and the 'F' can also be in lower case.
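A minimal sketch of the three suffixes side by side (the variable names here are mine, not from the original question):

    #include <stdio.h>

    int main(void)
    {
        long big = 100000L;          /* 'L' suffix: long int literal */
        unsigned int flags = 0xFFU;  /* 'U' suffix: unsigned literal */
        float ratio = 0.5F;          /* 'F' suffix: float, not the default double */

        printf("%ld %u %f\n", big, flags, ratio);  /* prints: 100000 255 0.500000 */
        return 0;
    }

Without the 'L', the literal 100000 would overflow a 16-bit int before it was ever assigned.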
Having the wrong format specifier in a printf can cause an apparent error. printf will take whatever value you give it and interpret it as whatever the format specifier says. In one example, we had a long int output and I used an 'L' instead of an 'l' in order to make it more readable (i.e., so that it not be confused with a one). It totally whacked out our output, which was a strictly formatted ASCII string requiring specific fields to start at specific places in the string. That's when I learned that an 'L' length modifier means long double; a long int must be indicated by a lower-case 'l'.
So your code should read:
/* Make the test value a long. Also make the smaller values long
   in order to eliminate the need for automatic type conversion.
   Not absolutely necessary, but a good practice. */
long i;

for (i = 0L; i <= 100000L; i += 2000L)
    /* the format specifier also needs to be long int or else
       you will still get an erroneous output */
    printf("%ld ", i);   /* "ell dee" */
All that having been said, your original error may have been directly caused by the literal not having a suffix or, more likely, by your having the wrong format specifier in your printf. Either way, both corrections should be made, not just the latter.
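For the record, the 'l' versus 'L' length-modifier distinction from the anecdote above looks like this (a minimal sketch; the values are made up for illustration):

    #include <stdio.h>

    int main(void)
    {
        long n = 123456L;      /* long int literal: suffix 'L' (or 'l') */
        long double x = 2.5L;  /* on a floating literal, 'L' means long double */

        printf("%ld\n", n);    /* lower-case 'l' modifier: long int -> 123456 */
        printf("%Lf\n", x);    /* upper-case 'L' modifier: long double -> 2.500000 */
        return 0;
    }

Mixing them up, as in "%Ld", is not merely cosmetic; the behavior is undefined, which is why the strictly formatted output got mangled.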