Why is the memory address printed with {:p} much bigger than my RAM specs?

Tags:

memory

rust

I want to print the memory location (address) of a variable with:

let x = 1;
println!("{:p}", &x);

This prints the hex value 0x7fff51ef6380, which in decimal is 140734568031104.
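For reference, the same address can also be shown as a plain decimal number by casting the reference to a raw pointer and then to usize (just a small sketch using the same variable as above):

fn main() {
    let x = 1;
    // {:p} prints the address in hexadecimal...
    println!("{:p}", &x);
    // ...and casting the reference to a raw pointer and then to usize
    // shows the same address as an ordinary decimal integer.
    println!("{}", &x as *const i32 as usize);
}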

My computer has 16 GB of RAM, so why is this number so huge? Does the x64 architecture use some large stride instead of simply incrementing addresses by 1 for each memory location?

In x86, the first location usually starts at 0, then 1, 2, etc., so the highest number you can have is around 4 billion; the address was always equal to or less than about 4 billion.

Why is this not the case with x64?

asked Oct 11 '17 by John Smith

2 Answers

What you see here is an effect of virtual memory. Memory management is hard, and it becomes even harder when the operating system and dozens or hundreds of processes have to share the memory. To handle this huge complexity, the concept of virtual memory was introduced. I'll just briefly explain the basics here; the topic is far more complex and you should read about it elsewhere, too.

On most modern computers, each process thinks that it owns (almost) the complete memory space. But processes never deal with physical addresses, only with virtual ones. These virtual addresses are mapped to physical ones each time the process actually accesses memory. This translation of addresses is done by the so-called MMU (memory management unit). The rules for how to map the addresses are set up by the operating system.

When you boot your PC, the operating system creates an initial mapping. Every time you start a process, the operating system adds a few slices of physical memory to the process and modifies the mapping appropriately. That way, the process has memory to play with.
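To make this concrete: on Linux you can inspect the mapping the kernel maintains for a process by reading /proc/self/maps. This is only a sketch (Linux-specific; the regions you see will differ from machine to machine):

use std::fs;

fn main() {
    // /proc/self/maps lists the virtual memory regions mapped into *this*
    // process: the executable, the heap, the stack, shared libraries, etc.
    // All addresses shown are virtual, not physical.
    let maps = fs::read_to_string("/proc/self/maps")
        .expect("reading /proc/self/maps only works on Linux");
    for line in maps.lines() {
        println!("{}", line);
    }
}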

On x86_64, the address space is 64 bits wide, so each process thinks it owns all of those 2^64 addresses. This is not true, of course:

  1. There isn't a single PC in the world with that much memory. (In fact, most CPUs today can only address about 280 TB of RAM, since they internally use just 48 bits for addressing physical memory. And even those 280 TB are apparently enough for now.)
  2. Even if you had that much memory, there are other processes which use part of that memory, too.

So what happens when you try to read an address which isn't mapped (which, in 64-bit land, is the vast majority of addresses)? The MMU triggers a page fault, which makes the CPU hand control to the operating system so it can deal with the situation.
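As a sketch of what an unmapped access looks like in practice (the address 0x42 is an arbitrary choice that is essentially never mapped into a user process; this program is expected to crash):

fn main() {
    // Reading from an address with no mapping makes the MMU raise a page
    // fault; the kernel finds no valid mapping and terminates the process
    // with a segmentation fault (SIGSEGV).
    let unmapped = 0x42 as *const u8;
    unsafe {
        println!("{}", *unmapped); // crashes here
    }
}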

What I mean is that in x86, usually the first location starts at 0, then 1, 2, etc., so the highest number you can have is around 4 billion.

That is true, but it is also true if your x86 system has less than 4 GB of RAM. Virtual memory has been around for quite some time already.


So that's a short summary of why you see such big addresses. Again, please note that I glossed over many details here.

answered Sep 22 '22 by Lukas Kalbertodt


The pointers your program works with are in virtual address space. x86-64 uses 64-bit pointers; this was one of the major goals of AMD64, along with adding more integer and XMM registers. You are correct that i386 only has 32-bit pointers, which cover just 4 GB of address space in each process.
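A quick way to see the pointer width from Rust itself (a sketch; the sizes shown are what you get on an x86-64 target):

use std::mem::size_of;

fn main() {
    // On x86-64 both of these are 8 bytes (64-bit pointers);
    // on a 32-bit target such as i686 they would be 4 bytes.
    println!("*const i32: {} bytes", size_of::<*const i32>());
    println!("usize:      {} bytes", size_of::<usize>());
}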

0x7fff51ef6380 looks like a stack pointer, which I guess makes sense for that code.

Linux on x86-64 (for example) puts the stack near the top of the lower canonical address range: current x86-64 hardware only implements 48-bit virtual addresses, and canonical-address checking is the mechanism that keeps software from depending on the unimplemented upper bits. This allows the address space to be extended in the future without breaking software.

The amount of physical RAM in your system has nothing to do with this. You'd see (approximately) the same number on an x86-64 system with 128 MB of RAM, give or take stack address-space layout randomization (ASLR).
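To see both effects yourself, print a stack address, compare it with 2^47 (the top of the lower canonical half on current 48-bit hardware), and run the binary a few times to watch ASLR move it around. A sketch, assuming a typical Linux x86-64 setup:

fn main() {
    let x = 1;
    let addr = &x as *const i32 as usize;

    // On typical Linux x86-64 the stack sits just below 2^47,
    // which is why the address prints as 0x7fff....
    println!("stack address: {:#x}", addr);
    println!("2^47:          {:#x}", 1usize << 47);

    // Run the binary several times: the address changes from run to run
    // because of ASLR, but the amount of physical RAM never enters into it.
}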

answered Sep 18 '22 by Peter Cordes