
If I run this program repeatedly, why does the last number printed before the seg-fault vary?

The question is about how Linux handles the stack: why is the point at which I get a segmentation fault running this code not deterministic?

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

void step(int n) {
    printf("#%d\n", n);
    step(n + 1);
}

int main() {
    step(1);
    return 0;
}
asked Jul 31 '17 by João Vitor Barbosa



2 Answers

The non-deterministic result looks like a consequence of the address-randomization policy the kernel applies when it starts a new program. Let's try the following code:

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

int main(int argc, char **argv) {
    char c;
    uintptr_t addr = (uintptr_t)&c;
    unsigned pagesize = (unsigned)sysconf(_SC_PAGE_SIZE);
    printf("in-page offset: %u\n", (unsigned)(addr % pagesize));
    return 0;
}

On my 64-bit Linux it gives the following output:

$ ./a.out
in-page offset: 3247
$ ./a.out
in-page offset: 2063
$ ./a.out
in-page offset: 863
$ ./a.out
in-page offset: 1871

Each time, c gets a new offset within its stack page. Since the kernel always allocates a whole number of pages for the stack, it is easy to see that the program gets a slightly different amount of usable stack on each run. Thus the program described in the question has a non-constant amount of stack available for its frames on each invocation.

Frankly, I'm not sure whether it is the kernel that tunes the initial value of the stack pointer or whether it is some trick of the dynamic linker. Either way, user code runs in a randomized environment each time.
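One way to test the randomization hypothesis (assuming a Linux system with the `setarch` utility available) is to disable ASLR for a single process and re-run the program above; with randomization off, the printed in-page offset should be identical across runs:

```shell
# Check whether address-space layout randomization is enabled
# (2 = full randomization, 1 = partial, 0 = disabled)
cat /proc/sys/kernel/randomize_va_space

# Run the program with ASLR disabled for this process only;
# the in-page offset should now repeat across invocations
setarch "$(uname -m)" -R ./a.out
setarch "$(uname -m)" -R ./a.out
```

With ASLR disabled the recursive program from the question should also crash at (nearly) the same depth every time, which supports the randomization explanation.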

answered Sep 27 '22 by Sergio

Because a stack overflow is undefined behaviour. An implementation is free to check that it does not occur, in which case the program should end with an error when the stack is full. But the environment could also provide a stack whose size depends on the free memory. Or, more likely, you could get various memory-overwriting problems in interaction with the I/O system, which could be non-deterministic. Or... (essentially, UB means that anything could happen).

answered Sep 27 '22 by Serge Ballesta