In general, if we assign an integer value to a float variable, it is automatically converted to float by simply adding a decimal part.
For example,
- float a=3;
Here, a takes the value 3.000000. You can understand this better by referring to the concept of type casting.
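As a quick check, the following is a minimal sketch of a complete program built around that line (the %f specifier is used here only to display the stored value):
- #include <stdio.h>
- int main()
- {
-     float a = 3;        /* the integer 3 is converted to 3.000000 */
-     printf("%f", a);    /* prints 3.000000 */
-     return 0;
- }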
But what happens if we assign float to int? Does it give an error? Let us check it now.
Consider the following example,
- #include <stdio.h>
- int main()
- {
-     float a = 3.4;      /* float value with a fractional part */
-     int b = a;          /* only the integral part is stored in b */
-     printf("%d", b);
-     return 0;
- }
What will be the output of the program? Check it now.
The output of the above program is 3. This is because, when we assign a float value to an integer, it takes only the integral part and discards the fractional part.
Similarly, when a float value is returned where an int is expected, it is automatically converted and only the integer part is kept.
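For instance, here is a minimal sketch illustrating this (the function name get_value is a made-up example, not from the original):
- #include <stdio.h>
- int get_value()
- {
-     float f = 3.4;
-     return f;            /* converted to int on return, so 3 is returned */
- }
- int main()
- {
-     printf("%d", get_value());   /* prints 3 */
-     return 0;
- }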
Let us see another example, where the format specifier does not match the type of the argument.
- #include <stdio.h>
- int main()
- {
-     float a = 3.4;
-     printf("%d", a);    /* %d expects an int, but a float is passed */
-     return 0;
- }
Observe the output of the above program; you may be surprised to see a result such as 0. This is because printf does not convert the float to an integer for you; passing a float argument for the %d specifier is incorrect (the behaviour is undefined, so you may see 0 or some garbage value). In the previous example, assigning the float to an int variable performed the conversion, but here no such conversion takes place, so the result is wrong. To print the integer part, we have to type cast the variable by changing the printf statement to
printf("%d", (int)a);This converts the float value to int and the result is 3.