My question is about how Linux handles the stack: why is the point at which I get a segmentation fault not deterministic when running this code?
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

void step(int n) {
    printf("#%d\n", n);
    step(n + 1);
}

int main() {
    step(1);
    return 0;
}
2 Answers.
Use debuggers to diagnose segfaults. Start your debugger with the command gdb ./program core, and then use the backtrace command to see where the program was when it crashed. This simple trick will let you focus on that part of the code.
The following are some typical causes of a segmentation fault:
- Attempting to access a nonexistent memory address (outside the process's address space)
- Attempting to access memory the program does not have rights to (such as kernel structures in process context)
- Attempting to write to read-only memory (such as the code segment)
It looks like the non-deterministic result is a consequence of the address-space randomization policy the kernel applies when it starts a new program. Let's try the following code:
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

int main(int argc, char **argv) {
    char c;
    uintptr_t addr = (uintptr_t)&c;
    unsigned pagesize = (unsigned)sysconf(_SC_PAGE_SIZE);
    printf("in-page offset: %u\n", (unsigned)(addr % pagesize));
    return 0;
}
On my 64-bit Linux machine it gives the following output:
$ ./a.out
in-page offset: 3247
$ ./a.out
in-page offset: 2063
$ ./a.out
in-page offset: 863
$ ./a.out
in-page offset: 1871
Each time, c gets a new offset within its stack page. Knowing that the kernel always allocates a whole number of pages for the stack, it is easy to see that each run of the program has a slightly different amount of usable stack. Thus the program described in the question has a non-constant amount of stack available for its frames on each invocation.
Frankly, I'm not sure whether it is the kernel that tunes the initial value of the stack pointer or whether it is some trick done by the dynamic linker. Either way, user code runs in a randomized environment each time.
Because a stack overflow is undefined behaviour. An implementation is free to check that it does not occur, in which case the program should end with an error when the stack is full. But the environment could also provide a stack whose size depends on the free memory. Or, more probably, you could get various memory-overwriting problems in interaction with the I/O system, which could be non-deterministic. Or... (essentially, UB means that anything could happen).