eramit2010 is right and his point is very important, but it needs a bit more explanation.
Humans work in decimal (base 10), whereas computers work in binary (base 2). There are many possible ways to represent decimal numbers in binary, but the industry has settled on just a few. In C (and in most programming languages now), integer values are represented in two's-complement form and floating-point values use the IEEE 754 format. Both have Wikipedia articles that are worth reading.
IEEE 754 floating-point is not decimal, but binary, and decimal fractions generally do not translate to binary exactly. 0.7 looks like a simple fraction with just one digit, but as an IEEE 754 double it looks like this (hexadecimal representation; read the article to decipher it): 3F E6 66 66 66 66 66 66. That is a repeating fraction in binary, meaning that there is an infinite string of digits that had to be rounded off.
In the expression 0.7>a, a is a float, so its version of 0.7 is 3F 33 33 33. When you compare a double to a float, the float must first be converted to a double, which the compiler does automatically. The conversion of a to a double just involves converting the exponent and filling the extra bits of precision in the mantissa with zeros. Thus the double 0.7 is greater than the converted float 0.7.
If you had instead compared a to a float constant, 0.7f, then that test, if(0.7f>a), would have failed and it would have printed "Hello". I just ran that test to confirm it.
But this has much more far-reaching implications. Your floating-point variables can only approximate the decimal fractions you enter. Furthermore, when you perform floating-point arithmetic, a certain amount of round-off error accumulates. That means that when you are working with floating-point values, you cannot depend on a calculated value being exactly equal to a constant.
I first encountered that simple fact of life in a programming assignment for school. The program depended on a calculated value being equal to 0.0. I applied a simple test problem that I had worked out by hand, so I knew what it should do, but that test for 0.0 just refused to work. The problem with testing floating-point values for equality is that they must be exactly equal: every single bit in the mantissa has to match. If they are the same except for their lowest-order bits, then they are not equal. What I learned was not to test for equality, but rather to test for the calculated value being very, very close to the test value. So if you were testing for a double, d, being equal to 0.7, (d == 0.7) would not work; instead it would have to be something like (fabs(d - 0.7) < 0.0001); i.e., test whether the absolute value of the difference between the calculated and expected values is less than some very small value.
BTW, you will get exact binary representations for fractions whose divisor is a power of two. For example, 0.75 is 3.0/4.0. Its float and double representations are:
3F 40 00 00
3F E8 00 00 00 00 00 00
Again, read the article to see how the float and double exponent fields differ, but you can plainly see that the low-order bits of the mantissas are all zeros.
That last point is mainly of academic interest with little practical use, though one day you just might find a use for it.
I know that is a lot of information, but you will need to learn it eventually.