PART I: Integer
There are two types of integers: unsigned integers (non-negative only) and signed integers (positive, negative, and 0).
So how does a computer store an integer?
1. Regarding unsigned integers:
The CPU uses binary to represent an unsigned integer directly.
eg: 2333 --> 1001 0001 1101
Notice that large integers need wider data types.
eg: the unsigned int data type can store integers ranging from 0 to 4,294,967,295
unsigned long long: 0 to 18,446,744,073,709,551,615
(assuming a 32-bit int and a 64-bit long long, as on typical platforms)
2. Regarding signed integers:
One bit must be set aside to record whether the value is positive or negative.
Many modern computers use two's-complement encoding to represent signed integers.
Assume we use 4 bits to store an integer (bits 0~3); each bit has a unique weight:
Bit: X X X X
Weight: -(2^3) (2^2) (2^1) (2^0)
That looks just like an unsigned integer, aha~.
But here is the difference: the weight of the highest bit is negative, and it also determines the sign.
Here are 3 examples:
1. 0101 (2^2)+(2^0) = 5
2. 1011 -(2^3)+(2^1)+(2^0) = -5
3. 0000 = 0
In a computer, the same binary code can stand for totally different values.
It depends on which decoding rule is applied.
eg (using 8 bits for brevity):
unsigned:    0        ...... 127      128      ...... 255
signed:      0        ...... 127      -128     ...... -1
binary:      00000000 ...... 01111111 10000000 ...... 11111111
hexadecimal: 00       ...... 7F       80       ...... FF
PS:
There are many mathematical exercises on this topic in CSAPP,
such as converting from binary to decimal, as well as decoding a bit pattern as unsigned or signed.
But in my opinion they aren't that useful :P