Why did Intel choose to split the base and limit of a segment into different parts in the segment descriptor rather than using contiguous bits?
See figure 5-3 of http://css.csail.mit.edu/6.858/2014/readings/i386/s05_01.htm
Why did they not store the base address in bits 0 through 31, the limit in bits 32 through 51, and use the remaining bits for the other fields (or some similar layout)?
Raymond Chen already answered this question in the comments:
For compatibility with the 80286. The 80286 had a maximum segment size of 2^16 and a maximum base of 2^24, so the limit and base fields were 16 and 24 bits wide, respectively. When the base and limit were later expanded for 32-bit addressing, the new bits had to be placed somewhere else because the good places were already taken.
Here is a scan of a segment descriptor (of a code or data type) from the Intel 80286 Programmer's Reference Manual:
For comparison, here is a screenshot from the Intel® 64 and IA-32 Architectures Software Developer’s Manual (Volume 3A):
The format is exactly the same, except that the previously reserved bits are now used: the base was extended from 24 to 32 bits, the segment limit was extended from 16 to 20 bits, and some additional flags were added. (The "Accessed" bit is folded into the "Type" field in the second screenshot.)
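To make the split concrete, here is a minimal sketch in C of how software typically reassembles the base and limit from a 32-bit descriptor. The struct field names (limit_low, base_mid, etc.) are illustrative, not Intel's, and the layout follows the figure referenced above: the 80286 fields stay where they were, and the extra base and limit bits occupy what used to be reserved.

```c
#include <stdint.h>
#include <stdio.h>

/* One 8-byte segment descriptor, with base and limit split as on the 80386.
 * Field names are illustrative; only the bit positions matter. */
struct seg_descriptor {
    uint16_t limit_low;   /* limit bits 0..15  (same position as on the 80286) */
    uint16_t base_low;    /* base  bits 0..15  (same position as on the 80286) */
    uint8_t  base_mid;    /* base  bits 16..23 (same position as on the 80286) */
    uint8_t  access;      /* P, DPL, S, type (including the Accessed bit)      */
    uint8_t  limit_flags; /* limit bits 16..19 in the low nibble; G, D/B, L,
                             AVL in the high nibble (reserved on the 80286)    */
    uint8_t  base_high;   /* base  bits 24..31 (reserved on the 80286)         */
};

/* Reassemble the contiguous 32-bit base from its three pieces. */
static uint32_t descriptor_base(const struct seg_descriptor *d)
{
    return (uint32_t)d->base_low
         | ((uint32_t)d->base_mid  << 16)
         | ((uint32_t)d->base_high << 24);
}

/* Reassemble the 20-bit limit from its two pieces. */
static uint32_t descriptor_limit(const struct seg_descriptor *d)
{
    return (uint32_t)d->limit_low
         | (((uint32_t)d->limit_flags & 0x0F) << 16);
}

int main(void)
{
    /* Example: a flat code segment with base 0 and limit 0xFFFFF (G=1). */
    struct seg_descriptor d = {
        .limit_low = 0xFFFF, .base_low = 0x0000, .base_mid = 0x00,
        .access = 0x9A, .limit_flags = 0xCF, .base_high = 0x00,
    };
    printf("base=0x%08x limit=0x%05x\n",
           descriptor_base(&d), descriptor_limit(&d));
    return 0;
}
```

Note how an 80286 descriptor is simply the same 8 bytes with base_high and limit_flags set to zero, which is exactly why the new bits could not be placed contiguously with the old ones.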
So, in short: The layout is only weird because it is a backwards-compatible extension of an older layout designed for a 16-bit processor.