Computers store values in fixed-width bit strings. For signed integers we need a way to encode “−” as well as the magnitude. Three classic schemes exist: sign–magnitude, one’s complement, and two’s complement.
They agree on non-negative encodings but differ in how they encode negatives and how arithmetic works.
Rule: The leading bit is the sign (sign 0 = “+”, sign 1 = “−”); the remaining n−1 bits hold the magnitude.
Example (8-bit):
+21 = 0 | 0010101₂
−21 = 1 | 0010101₂
Cons: Two representations of zero (+0 = 0|000…0, −0 = 1|000…0), and hardware must treat the sign separately for addition/subtraction.
Value range (n bits): −(2^(n−1)−1) … +(2^(n−1)−1) (note: both +0 and −0 exist)
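For concreteness, here is a minimal Python sketch of the sign–magnitude rule (the function names and the 8-bit default width are illustrative, not prescribed by the scheme):

```python
def encode_sign_magnitude(x: int, n: int = 8) -> int:
    """Return the n-bit sign-magnitude pattern of x as an unsigned int."""
    magnitude = abs(x)
    assert magnitude < (1 << (n - 1)), "magnitude needs at most n-1 bits"
    sign = 1 if x < 0 else 0
    return (sign << (n - 1)) | magnitude

def decode_sign_magnitude(bits: int, n: int = 8) -> int:
    """Interpret an n-bit sign-magnitude pattern as a signed value."""
    magnitude = bits & ((1 << (n - 1)) - 1)
    return -magnitude if (bits >> (n - 1)) & 1 else magnitude

print(f"{encode_sign_magnitude(+21):08b}")  # 00010101
print(f"{encode_sign_magnitude(-21):08b}")  # 10010101
print(decode_sign_magnitude(0b10000000))    # 0  (the "-0" pattern)
```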
Rule: Represent a negative number by bitwise inverting the positive pattern.
Example (8-bit):
+21 = 0001 0101
−21 = ~0001 0101 = 1110 1010
Pros: Subtraction can be implemented as “add the one’s complement”.
Cons: Still has −0 = 1111 1111, so there are two zeros; arithmetic needs end-around carry fix-ups.
Value range (n bits): −(2^(n−1)−1) … + (2^(n−1)−1) (with ±0)
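The same rule as a short Python sketch (the helper name `ones_complement` is illustrative); the n-bit pattern is returned as an unsigned integer:

```python
def ones_complement(x: int, n: int = 8) -> int:
    """Return the n-bit one's-complement pattern of x as an unsigned int."""
    mask = (1 << n) - 1        # 2^n - 1, e.g. 0xFF for n = 8
    if x >= 0:
        return x & mask
    return mask - (-x)         # bitwise inversion of |x| within n bits

print(f"{ones_complement(+21):08b}")  # 00010101
print(f"{ones_complement(-21):08b}")  # 11101010
```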
Rule: Represent −x by inverting the bits of x and adding 1 (modulo 2^n).
Example (8-bit):
+21 = 0001 0101
~ = 1110 1010
+1 = 1110 1011 → −21
Pros: A single zero, and the same adder does both addition & subtraction. Simple sign extension.
Cons: Asymmetric range (one extra negative value).
Value range (n bits): −2^(n−1) … + (2^(n−1)−1)
(e.g., 8-bit: −128 … +127)
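A minimal Python sketch of the invert-and-add-1 rule (helper names are illustrative; Python integers are arbitrary-precision, so the result is masked to n bits):

```python
def twos_complement(x: int, n: int = 8) -> int:
    """Return the n-bit two's-complement pattern of x as an unsigned int."""
    mask = (1 << n) - 1
    if x >= 0:
        return x & mask
    return (~(-x) + 1) & mask       # invert the bits of |x|, add 1, keep n bits

def from_twos_complement(bits: int, n: int = 8) -> int:
    """Interpret an n-bit pattern as a signed two's-complement value."""
    return bits - (1 << n) if bits & (1 << (n - 1)) else bits

print(f"{twos_complement(-21):08b}")      # 11101011
print(from_twos_complement(0b11101011))   # -21
print(from_twos_complement(0b10000000))   # -128 (the one extra negative value)
```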
| Scheme | +21 (binary) | −21 (binary) | Notes |
|---|---|---|---|
| Sign–Magnitude | `0 \| 0010101` | `1 \| 0010101` | sign bit + 7-bit magnitude |
| One’s Complement | 00010101 | 11101010 | invert for negatives |
| Two’s Complement | 00010101 | 11101011 | invert + 1 |
Tip: Sign extension (two’s complement): when widening, copy the sign bit into the new upper bits.
Example: 1110 1011 (−21, 8-bit) → 1111 1111 1110 1011 (−21, 16-bit).
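A sketch of that widening step in Python (the helper name `sign_extend` is illustrative):

```python
def sign_extend(bits: int, n: int, m: int) -> int:
    """Widen an n-bit two's-complement pattern to m bits (m > n)."""
    if (bits >> (n - 1)) & 1:                   # sign bit set -> negative
        bits |= ((1 << (m - n)) - 1) << n       # fill the new high bits with 1s
    return bits

print(f"{sign_extend(0b11101011, 8, 16):016b}")  # 1111111111101011 (-21 as 16 bits)
```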
| Representation | Zero(s) | Integer Range (n bits) | Arithmetic Hardware | Notes |
|---|---|---|---|---|
| Sign–Magnitude | +0 and −0 | −(2^(n−1)−1) … +(2^(n−1)−1) | Complex (separate sign) | Conceptually simple; not used in CPUs |
| One’s Complement | +0 and −0 | −(2^(n−1)−1) … +(2^(n−1)−1) | Needs end-around carry | Historical systems (e.g., early CDC) |
| Two’s Complement | Single 0 | −2^(n−1) … +(2^(n−1)−1) | Simple, unified adder | Ubiquitous in modern hardware |
Worked example: let n = 24 and encode −39 (39 = …000100111₂).

+39 = 00000000 00000000 00100111
Sign–Magnitude: 1 | 00000000000000000100111 (sign bit + 23-bit magnitude)
One’s Complement: 11111111 11111111 11011000 (hex 0xFFFFD8)
Two’s Complement: 11111111 11111111 11011001 (hex 0xFFFFD9)
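The three 24-bit encodings above can be checked with a few lines of Python (a quick verification sketch; the expressions mirror the formulas used earlier):

```python
n, x = 24, 39
mask = (1 << n) - 1                       # 2^24 - 1 = 0xFFFFFF

sign_magnitude = (1 << (n - 1)) | x       # sign bit + 23-bit magnitude
ones_comp      = mask - x                 # (2^n - 1) - x
twos_comp      = (-x) & mask              # 2^n - x

print(f"{sign_magnitude:024b}")           # 100000000000000000100111
print(hex(ones_comp))                     # 0xffffd8
print(hex(twos_comp))                     # 0xffffd9
```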
Computers also need fractions. Two main approaches exist: fixed point and floating point; IEEE 754 floating point is described below.

value = (−1)^sign × 1.fraction × 2^(exponent − bias)   (for normalized numbers)

Single precision (32-bit)
[ sign (1) | exponent (8, bias=127) | fraction (23) ]
Double precision (64-bit)
[ sign (1) | exponent (11, bias=1023) | fraction (52) ]
The implicit leading `1.` is not stored for normalized numbers; the fraction field holds the bits after the point.
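To make the field layout concrete, here is a small Python sketch that unpacks a 32-bit pattern by hand and applies the formula above (normalized numbers only; the function name is illustrative):

```python
def decode_float32(bits: int) -> float:
    """Decode a 32-bit IEEE 754 pattern (normalized numbers only)."""
    sign     = (bits >> 31) & 0x1          # 1 bit
    exponent = (bits >> 23) & 0xFF         # 8 bits, bias 127
    fraction =  bits        & 0x7FFFFF     # 23 bits after the implicit "1."
    return (-1) ** sign * (1 + fraction / 2**23) * 2.0 ** (exponent - 127)

print(decode_float32(0x420A8000))   # 34.625 = 100010.101 in binary
```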
Example: convert 100010.101₂ to single & double precision.
Normalize so the leading bit is 1:
100010.101₂ = 1.00010101₂ × 2^5
Sign: 0 (positive)
Exponent:
Single: 5 + 127 = 132 = 1000 0100₂
Double: 5 + 1023 = 1028 = 100 0000 0100₂
Fraction (drop the leading 1. and fill right with zeros):
1.00010101₂ → fraction bits 00010101…
Final encodings
Single (32-bit)
Bits: 0 | 10000100 | 00010101000000000000000
Hex: 0x420A8000
Double (64-bit)
Bits: 0 | 10000000100 | 0001010100000000000000000000000000000000000000000000
Hex: 0x4041500000000000
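Both encodings can be double-checked with Python’s struct module, which packs floats in IEEE 754 format on CPython (34.625 is the decimal value of 100010.101₂):

```python
import struct

x = 34.625                                  # 100010.101₂ = 1.00010101₂ × 2^5

print(struct.pack(">f", x).hex())           # 420a8000          (single, big-endian)
print(struct.pack(">d", x).hex())           # 4041500000000000  (double, big-endian)
```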
One’s complement (n bits)
encode(+x) = binary(x)
encode(−x) = (2^n − 1) − binary(x)
Two’s complement (n bits)
encode(+x) = binary(x)
encode(−x) = 2^n − binary(x) = (~binary(x) + 1) mod 2^n
Sign extension (two’s complement)
from n→m bits (m>n): replicate sign bit into the new high (m−n) bits