I wrote the code below to test shellcode (for unlinking /tmp/passwd) for an assignment in a security class.
When I compile with gcc -o test -g test.c, I get a segfault on the jump into the shellcode.
When I postprocess the binary with execstack -s test, I no longer get a segfault and the shellcode executes correctly, removing /tmp/passwd.
I am running gcc 4.7.2. It seems like it is a bad idea to require the stack to be executable in order to make the heap executable, since there are many more legitimate use cases for the latter than the former.
Is this expected behavior? If so, what is the rationale?
#include <stdio.h>
#include <stdlib.h>

char *shellcode;

int main() {
    /* Allocate a heap buffer and load the raw shellcode into it. */
    shellcode = malloc(67);
    FILE *code = fopen("shellcode.bin", "rb");
    fread(shellcode, 1, 67, code);
    fclose(code);

    /* Cast the heap buffer to a function pointer and jump into it. */
    int (*fp)(void) = (int (*)(void)) shellcode;
    fp();
    return 0;
}
Here is the output of xxd shellcode.bin:
0000000: eb28 5e89 760c 31c0 8846 0bfe c0fe c0fe .(^.v.1..F......
0000010: c0fe c0fe c0fe c0fe c0fe c0fe c0fe c089 ................
0000020: f3cd 8031 db89 d840 cd80 e8d3 ffff ff2f ...1...@......./
0000030: 746d 702f 7061 7373 7764 tmp/passwd
The real "unexpected" behavior is that setting the flag makes the heap executable as well as the stack. The flag is intended for use with executables that generate stack-based thunks (such as gcc when you take the address of a nested function) and shouldn't really affect the heap. But Linux implements this by globally making ALL readable pages executable.
If you want finer-grained control, you could instead use the mprotect system call to control executable permissions on a per-page basis. Add code like:
#include <stdint.h>   /* uintptr_t */
#include <unistd.h>   /* sysconf */
#include <sys/mman.h> /* mprotect */

uintptr_t pagesize = sysconf(_SC_PAGE_SIZE);
/* mprotect works on whole pages, so round the start down and the end up. */
#define PAGE_START(P) ((uintptr_t)(P) & ~(pagesize-1))
#define PAGE_END(P)   (((uintptr_t)(P) + pagesize - 1) & ~(pagesize-1))
mprotect((void *)PAGE_START(shellcode), PAGE_END(shellcode + 67) - PAGE_START(shellcode),
         PROT_READ|PROT_WRITE|PROT_EXEC);
Is this expected behavior?
Looking at the Linux kernel code, I think that the kernel-internal name for this flag is READ_IMPLIES_EXEC. So yes, I think that it's expected.
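The flag is also exposed to userspace through the personality(2) syscall, so a process can check whether it is running with it set. A small sketch:

#include <stdio.h>
#include <sys/personality.h>

int main(void) {
    /* Passing 0xffffffff queries the current persona without changing it. */
    unsigned long persona = personality(0xffffffff);
    printf("READ_IMPLIES_EXEC is %s\n",
           (persona & READ_IMPLIES_EXEC) ? "set" : "not set");
    return 0;
}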
It seems like it is a bad idea to require the stack to be executable in order to make the heap executable, since there are many more legitimate use cases for the latter than the former.
Why would you need the complete heap to be executable? If you really need to dynamically generate machine code and run it, you can explicitly allocate executable memory using the mmap syscall.
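For example, something along these lines (a minimal sketch, assuming x86/x86-64 Linux; a real JIT would typically map the region writable first and flip it to read+execute with mprotect once the code is in place):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* x86 machine code for: mov eax, 42; ret */
    unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    /* Ask the kernel for an anonymous mapping that is explicitly
       executable, independent of any stack-flag markings on the binary. */
    void *buf = mmap(NULL, sizeof code, PROT_READ|PROT_WRITE|PROT_EXEC,
                     MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(buf, code, sizeof code);
    int (*fn)(void) = (int (*)(void))buf;
    printf("returned %d\n", fn()); /* prints 42 */

    munmap(buf, sizeof code);
    return 0;
}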
what is the rationale?
I think that the idea is that this flag can be used for legacy programs that expect that everything that's readable is also executable. Those programs might try to run code from the stack or from the heap, so it's all permitted.