Why does the code below work without any crash at runtime?
Also, how far you can go seems completely dependent on the machine/platform/compiler: on a 64-bit machine I can even use an index as high as 200. How would a segmentation fault in the main function get detected by the OS?
int main(int argc, char* argv[])
{
    int arr[3];
    arr[4] = 99;
}
Where does this buffer space come from? Is it part of the stack allocated to the process?
When a program uses more space than is available on the stack, the stack is said to overflow, which crashes the program. The most common cause is infinite recursion, as in a program whose factorial() never stops calling itself.
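The factorial listing itself is not included here; a minimal sketch of what such a program might look like (the missing base case is my assumption) is:
#include <stdio.h>

/* Sketch only: factorial() has no terminating condition, so every call
 * recurses again and each call consumes another stack frame. Compiled
 * without optimization, this keeps growing the stack until it overflows
 * and the process is killed (typically with SIGSEGV). */
static unsigned long factorial(unsigned long n)
{
    return n * factorial(n - 1);   /* no base case: infinite recursion */
}

int main(void)
{
    printf("%lu\n", factorial(5)); /* never returns normally */
    return 0;
}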
A crash can occur for a number of reasons, but is mostly the result of a buffer overflow, incorrect memory addressing, or an illegal instruction.
Something I wrote some time ago for educational purposes...
Consider the following C program:
int q[200];

int main(void) {
    int i;
    for (i = 0; i < 2000; i++) {
        q[i] = i;
    }
}
After compiling and executing it, a core dump is produced:
$ gcc -ggdb3 segfault.c
$ ulimit -c unlimited
$ ./a.out
Segmentation fault (core dumped)
Now use gdb to perform a post-mortem analysis:
$ gdb -q ./a.out core
Program terminated with signal 11, Segmentation fault.
[New process 7221]
#0 0x080483b4 in main () at s.c:8
8 q[i]=i;
(gdb) p i
$1 = 1008
(gdb)
Huh, the program didn't segfault when we first wrote outside the 200 allocated items; instead it crashed at i=1008. Why?
Enter pages.
One can determine the page size in several ways on UNIX/Linux. One way is to use the library function sysconf() like this:
#include <stdio.h>
#include <unistd.h> // sysconf(3)

int main(void) {
    printf("The page size for this system is %ld bytes.\n",
           sysconf(_SC_PAGESIZE));
    return 0;
}
which gives the output:
The page size for this system is 4096 bytes.
Or one can use the command-line utility getconf like this:
$ getconf PAGESIZE
4096
post mortem
It turns out that the segfault occurs not at i=200 but at i=1008. Let's figure out why. Start gdb to do some post-mortem analysis:
$ gdb -q ./a.out core
Core was generated by `./a.out'.
Program terminated with signal 11, Segmentation fault.
[New process 4605]
#0 0x080483b4 in main () at seg.c:6
6 q[i]=i;
(gdb) p i
$1 = 1008
(gdb) p &q
$2 = (int (*)[200]) 0x804a040
(gdb) p &q[199]
$3 = (int *) 0x804a35c
q ended at address 0x804a35c; or rather, the last element q[199] starts at that address, and its last byte sits at 0x804a35f. The page size, as we saw earlier, is 4096 bytes, and the 32-bit word size of the machine means a virtual address breaks down into a 20-bit page number and a 12-bit offset.
q[] ended in virtual page number 0x804a (= 32842). The first byte past q[199] is at offset 0x360 (= 860 + 4 = 864) within that page, so there were still:
4096 - 864 = 3232 bytes left on the page on which the end of q[] was allocated. That space can hold:
3232 / 4 = 808 integers, and the code treated it as if it contained elements of q at positions 200 to 1007.
We all know that those elements don't exist, but the compiler didn't complain, and neither did the hardware, since we have write permission to that page. Only when i=1008 did q[i] refer to an address on a different page, one for which we did not have write permission; the virtual-memory hardware detected this and triggered a segfault.
An integer is stored in 4 bytes, meaning that this page has room for 808 (3232/4) additional fake elements, so accessing q[200], q[201], all the way up to element 199+808=1007 (q[1007]) does not trigger a segfault. When accessing q[1008] you enter a new page for which the permissions are different.
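The same arithmetic can be reproduced in a few lines of C. The addresses and page size below are hard-coded from the gdb session above, so they are assumptions about this particular run, not portable facts:
#include <stdio.h>

int main(void)
{
    unsigned long page_size  = 4096;                       /* from getconf PAGESIZE */
    unsigned long end_of_q   = 0x804a35cUL + sizeof(int);  /* first byte past q[199] */
    unsigned long offset     = end_of_q % page_size;       /* 864 */
    unsigned long bytes_left = page_size - offset;         /* 3232 */
    unsigned long extra_ints = bytes_left / sizeof(int);   /* 808 */

    printf("bytes left on q's last page: %lu\n", bytes_left);
    printf("writable 'fake' elements:    %lu (q[200]..q[%lu])\n",
           extra_ints, 199 + extra_ints);
    return 0;
}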
Since you're writing outside the boundaries of your array, the behaviour of your code is undefined.
It is the nature of undefined behaviour that anything can happen, including lack of segfaults (the compiler is under no obligation to perform bounds checking).
You're writing to memory you haven't allocated but that happens to be there and that -- probably -- is not being used for anything else. Your code might behave differently if you make changes to seemingly unrelated parts of the code, to your OS, compiler, optimization flags etc.
In other words, once you're in that territory, all bets are off.
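To make "anything can happen" slightly more concrete, here is a hedged illustration (the variable name canary is mine, and none of the outcomes listed is guaranteed):
#include <stdio.h>

int main(void)
{
    int canary = 42;
    int arr[3];

    arr[4] = 99;   /* out-of-bounds write: undefined behaviour */

    /* Depending on how the compiler lays out the stack and which
     * optimization level is used, 'canary' may now print as 99, still
     * print as 42, or the program may already have crashed. */
    printf("canary = %d\n", canary);
    return 0;
}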
Exactly when and where a local-variable buffer overflow crashes depends on a few factors:
Remember that stacks grow downwards. That is, process execution starts with the stack pointer close to the end of the memory to be used as stack. It doesn't start at the last mapped word, though, because the system's initialization code may decide to pass some sort of "startup info" to the process at creation time, and it often does so on the stack.
If the total amount of data written into a buffer on the stack is larger than the total amount of stack space used previously (by callers / initialization code / other variables), then you'll get a crash at whatever memory access first runs beyond the top (beginning) of the stack. The crashing address will be just past a page boundary - SIGSEGV due to accessing memory beyond the top of the stack, where nothing is mapped.
If that total is less than the size of the used part of the stack at this point, then it'll appear to work just fine and crash later - in fact, on platforms that store return addresses on the stack (which is true for x86/x64), when returning from your function. That's because the CPU instruction ret actually takes a word from the stack (the return address) and redirects execution there. If, instead of the expected code location, this address contains garbage, an exception occurs and your program dies. That is the usual failure mode - a crash when returning from the function that contained the overflow code.
To illustrate this: when main() is called, the stack looks like this (in a 32-bit x86 UNIX program):
[ esp ] <return addr to caller> (which exits/terminates process)
[ esp + 4 ] argc
[ esp + 8 ] argv
[ esp + 12 ] envp <third arg to main() on UNIX - environment variables>
[ ... ]
[ ... ] <other things - like actual strings in argv[], envp[]>
[ END ] PAGE_SIZE-aligned stack top - unmapped beyond
When main() starts, it will allocate space on the stack for various purposes, amongst others to host your to-be-overflowed array. This will make it look like:
[ esp ] <current bottom end of stack>
[ ... ] <possibly local vars of main()>
[ esp + X ] arr[0]
[ esp + X + 4 ] arr[1]
[ esp + X + 8 ] arr[2]
[ esp + X + 12 ] <possibly other local vars of main()>
[ ... ] <possibly other things (saved regs)>
[ old esp ] <return addr to caller> (which exits/terminates process)
[ old esp + 4 ] argc
[ old esp + 8 ] argv
[ old esp + 12 ] envp <third arg to main() on UNIX - environment variables>
[ ... ]
[ ... ] <other things - like actual strings in argv[], envp[]>
[ END ] PAGE_SIZE-aligned stack top - unmapped beyond
This means you can happily access way beyond arr[2].
For a taster of different crashes resulting from buffer overflows, attempt this one:
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int i, arr[3];

    /* deliberately write past the end of arr[] - argv[1] says how far */
    for (i = 0; i < atoi(argv[1]); i++)
        arr[i] = i;

    /* walk argv[] backwards; argc and argv may themselves have been clobbered */
    do {
        printf("argv[%d] = %s\n", argc, argv[argc]);
    } while (--argc);

    return 0;
}
and see how different the crash will be when you overflow the buffer by a little (say, 10 elements) compared to when you overflow it beyond the end of the stack. Try it with different optimization levels and different compilers. It's quite illustrative, as it shows both misbehaviour (it won't always print all of argv[] correctly) as well as crashes in various places, maybe even endless loops (if, e.g., the compiler places i or argc on the stack and the code overwrites it during the loop).
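One possible way to experiment (the file name overflow.c is just a placeholder, and the exact symptoms will differ per system, compiler and flags):
$ gcc -O0 -o overflow overflow.c
$ ./overflow 10        # small overflow: may print argv[] fine, may crash on return
$ ./overflow 100000    # large overflow: usually SIGSEGV once the writes leave the stack
$ gcc -O2 -o overflow overflow.c
$ ./overflow 10        # different stack layout, possibly completely different behaviour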