How numbers are interpreted by the compiler, with type-conversion examples.
This post gives you some idea of how numbers are represented inside the processor,
and the type-conversion examples should give you some practical insight.
Everything in this post is based on my own observations.
Type conversion: what happens inside?
I was dealing with fixed- and floating-point arithmetic and wondered what happens inside the registers when
the compiler translates these statements:
int a = 5;
float fv = (float)a;
When you write this, the compiler emits a built-in conversion routine (or a single instruction) that rearranges
the bit pattern so that you get the appropriate representation.
Initially, the 5 is stored in the processor's register as the binary pattern 0101.
After the conversion, the arrangement of the bits is changed.
The 5 is now arranged in the single-precision IEEE 754 format of 1+8+23 bits.
In this case:
0101 (with all leading bits zero) --> 0 10000001 01 (with all trailing bits zero)
That is one sign bit, 8 exponent bits, and 23 mantissa bits. The exponent is 10000001 because
5 = 1.01 x 2^2 in binary, and the biased exponent is 127 + 2 = 129; the mantissa stores the
fraction bits 01 followed by zeros.
That was the case for single-precision floats; if you use a double, the value is instead stored in the 64-bit
IEEE 754 format of 1+11+52 bits.
Let's see another example:
typedef unsigned long long ui64;
typedef unsigned int ui32;
ui32 x; ui64 y;
y = 0x2000000010000000ULL;
x = (ui32)y;
Now x holds 0x10000000 and, sadly, the most significant 32 bits are truncated away.
Another kind of example is sign extension.
You will notice I make heavy use of typedefs here, but believe me, they are really useful
for porting purposes, that is, when you want to reuse your existing code on another
processor. Just build this into your habit.
//some more typedefs
typedef signed int si32;
typedef unsigned char uc8;
typedef signed char sc8;
case 1:
uc8 a = 0x80;
ui32 b = (ui32)a;
case 2:
sc8 c = -128;
si32 d = (si32)c;
The first case will be the same for all compilers: the most significant bits are filled with zeroes
(zero extension). For the second case I expected sign extension, and in fact the C standard requires
that converting a signed char to a wider signed type preserve the value, which on two's-complement
machines means the sign bit is copied into the new high bits. Still, wherever plain char is involved
(its signedness is implementation-defined), or wherever other narrowing or sign conversions occur,
it is best to verify the behavior by executing the code on your compiler and, most
importantly, on your target architecture.
Ever wondered what the maximum and minimum numbers
representable by a single-precision float variable are?
Is it -3.4*pow(10,38) to 3.4*pow(10,38), as we read in books, and if so, why?
Can you represent any range of numbers, or indeed any number, with floats?
Just try to find out, and if you can't, wait for the next post for the answer.