 

Is Swift vulnerable to code injection?

I was reading about Cycript and Cydia Substrate and how they can be used for code injection attacks on an iOS app. Code like this should scare you if you are working in a high-security environment. (Ignore the /etc/passwd part; just consider the ability to replace originalMessage with crackedMessage.)

cy# MS.hookFunction(fopen, function(path, mode) {
cy>     if (path == "/etc/passwd")
cy>         path = "/var/passwd-fake";
cy>     var file = (*oldf)(path, mode);
cy>     log.push([path, mode, file]);
cy>     return file;
cy> }, oldf)

I read one blog (which I didn't save) that said that Swift was not as vulnerable as Objective-C since it wasn't as dynamic. Then again, I've also read that you can do method swizzling in Swift so it isn't clear to me if Swift offers any protections against code injection attacks.

So, is Swift vulnerable to code injection attacks?

asked Feb 16 '15 by Paul Cezanne




1 Answer

Ultimately, there is no way to prevent someone from hijacking your program if you let it run on their device. There are ways to make it harder, but there is no way to make it impossible.

I can think of these major ways of injecting code into an application:

  • swizzling Objective-C methods with the runtime;
  • swizzling virtual Swift methods by parsing out the executable and figuring the right bits to change;
  • modifying call targets;
  • swizzling imported symbols by changing symbol stub targets;
  • using dyld to force-load libraries or change which libraries your program loads;
  • replacing the libraries that your program links against.

And there's no 100% effective way to prevent any of these in an environment that the user fully controls. You should decide whether to be worried or not depending on your threat model.

Swizzling Objective-C methods with the runtime

Method swizzling is a technique where you replace the implementation of a method at runtime with arbitrary, different code. Common use cases are bypassing checks or logging parameters.

Swizzling in Objective-C was a huge thing because the runtime needs metadata that identifies every method and every instance field. I don't know any other language that compiles to native machine code and that keeps this much metadata around. If you have something like -[AccessControl validatePassword:], you're just making it really easy for the bad guys. With method_setImplementation, this is just begging to happen.
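For illustration, here is a minimal sketch of that kind of swizzle done from Swift with the public Objective-C runtime API. The AccessControl class and validatePassword(_:) method are made up for the example:

import Foundation

// Hypothetical class exposed to the Objective-C runtime.
class AccessControl: NSObject {
    @objc dynamic func validatePassword(_ password: String) -> Bool {
        return password == "secret"
    }
}

// Replace validatePassword(_:)'s implementation with a block that always
// returns true. The block receives `self` first, then the method arguments.
let method = class_getInstanceMethod(AccessControl.self,
                                     #selector(AccessControl.validatePassword(_:)))!
let alwaysTrue: @convention(block) (AnyObject, AnyObject) -> Bool = { _, _ in true }
_ = method_setImplementation(method, imp_implementationWithBlock(alwaysTrue))

print(AccessControl().validatePassword("wrong")) // prints "true" after the swizzle

Because the method is dynamic, even Swift callers go through objc_msgSend and pick up the replacement implementation.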

As Swift classes can inherit from Objective-C classes, this is still something to look for. However, new methods on classes that inherit from an Objective-C class are only exposed to the Objective-C runtime if they have the @objc attribute (or if the class itself has the @objc attribute), so this limits the attack surface compared to Objective-C.

Additionally, the Swift compiler may bypass the Objective-C runtime to call, devirtualize or inline Swift methods that were not marked dynamic, even if they were marked @objc. This means that in some cases, swizzling could be possible only for calls dispatched through Objective-C.

And of course, it's entirely impossible if your class or method is not exposed to the Objective-C runtime.
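Roughly, the three situations look like this in Swift (the class and method names are invented for the example):

import Foundation

class Session: NSObject {
    // Exposed to the Objective-C runtime *and* always message-dispatched:
    // a swizzle here affects every caller, including Swift code.
    @objc dynamic func token() -> String { return "token" }

    // Exposed to the runtime, but Swift-to-Swift calls may be devirtualized
    // or inlined, so a swizzle may only catch calls made through objc_msgSend.
    @objc func flags() -> Int { return 0 }

    // Not exposed to the Objective-C runtime at all: the runtime's swizzling
    // APIs can't even see this method.
    func secret() -> String { return "hunter2" }
}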

Swizzling virtual Swift methods by parsing out the executable and figuring the right bits to change

However, you don't need the Objective-C runtime to swap method implementations. Swift still has virtual tables for its virtual methods, and as of February 2015, they are located in the __DATA segment of the executable. It is writable, so it should be possible to swizzle Swift virtual methods if you can figure out the right bits to change. There is no convenient API for this.

C++ classes can be modified in a similar way, but because Swift methods are virtual by default, the attack surface is much larger. The compiler is allowed to devirtualize methods as an optimization if it finds no override, but relying on compiler optimizations as a security feature is not responsible.

By default, deployed Swift executables are stripped. Information for non-public/open symbols is discarded, which makes identifying the symbols that you want to change that much harder compared to Objective-C. Public/open symbols are not stripped because it is assumed that other, external code clients may need them.

However, if someone figures out which function implementation they want to swap out, all they have to do is write the address of the new implementation in the correct virtual table slot. They will probably need to make their own Mach-O parser, but this is certainly not out of the range of the people who make things like Cycript.
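To give a rough idea of how little groundwork this takes, here is a hedged sketch (Apple platforms only, illustrative rather than an attack) that uses dyld and the MachO module to walk the main executable's load commands and print its segments, the kind of parsing you'd need before locating a vtable in __DATA:

import Foundation
import MachO

// Image 0 is normally the main executable.
guard let header = _dyld_get_image_header(0) else { exit(1) }

// Load commands start right after the 64-bit Mach-O header.
var cursor = UnsafeRawPointer(header) + MemoryLayout<mach_header_64>.size

for _ in 0..<Int(header.pointee.ncmds) {
    let command = cursor.assumingMemoryBound(to: load_command.self).pointee
    if command.cmd == UInt32(LC_SEGMENT_64) {
        let segment = cursor.assumingMemoryBound(to: segment_command_64.self).pointee
        let name = withUnsafeBytes(of: segment.segname) { bytes in
            String(decoding: bytes.prefix(while: { $0 != 0 }), as: UTF8.self)
        }
        print(name, String(format: "0x%llx", segment.vmaddr)) // e.g. __TEXT, __DATA, ...
    }
    cursor += Int(command.cmdsize)
}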

Note that final methods reduce this risk because the compiler doesn't need to call them through the vtable, and struct methods are never virtual in the first place.
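Concretely, dispatch differs by declaration (types invented for the example):

class Account {
    // Virtual by default: dispatched through the class vtable, so its slot
    // is a potential patch target.
    func balance() -> Int { return 42 }

    // final: the compiler can call this directly, so there is no vtable slot
    // to overwrite.
    final func identifier() -> String { return "acct-001" }
}

// Struct methods are never virtual; calls are always direct (and may be inlined).
struct Wallet {
    func total() -> Int { return 7 }
}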

Modifying call targets

If all else fails, your attacker can still walk through your machine code and change the operands of bl or call instructions to point anywhere they like. This is more involved and fairly hard/impossible to get 100% right with an automated method, especially if symbols are missing, but someone determined enough will be able to do it. You decide if someone will eventually find it worth the trouble to do it for your application.

This works for virtual and non-virtual methods. It is, however, extremely difficult to do when the compiler inlines calls.

Swizzling imported symbols by changing symbol stub targets

Any imported symbol, regardless of the language it was written in and the language it's being called from, is vulnerable to swizzling. This is because external symbols are bound at runtime. Whenever you use a function from an external library, the compiler generates an entry in a lookup table. Here is an example of what a call to fopen could look like if you translated your executable back into C code:

#include <stdio.h>

// The stub pointer, forward-declared so the resolver below can update it.
FILE* (*fopen_stub)(const char*, const char*);

// Lazy resolver: runs the first time the stub is called.
FILE* locate_fopen(const char* path, const char* mode) {
    fopen_stub = dyld->locate("fopen"); // pseudocode: ask dyld for the real fopen and repoint the stub at it
    return fopen_stub(path, mode);
}

// The stub initially points at the resolver.
FILE* (*fopen_stub)(const char*, const char*) = &locate_fopen;

int main() {
    FILE* x = fopen_stub("hello.txt", "r"); // every call to fopen really goes through the stub
    return 0;
}

The first call through fopen_stub finds the real fopen and stores its address in fopen_stub. That way, dyld doesn't need to resolve the thousands of external symbols that your program and its libraries use before the program even starts running. However, it also means that an attacker can overwrite fopen_stub with the address of any function that they'd like to call instead. This is exactly what your Cycript example does.

Short of writing your own linker and dynamic linker, your only protection against this kind of attack is to not use shared libraries or frameworks. This is not a viable solution in a modern development environment, so you will probably have to deal with it.

There could be ways to ensure that stubs go where you expect them to be, but it would be kind of flaky, and these checks can always be noped out by a determined attacker. Additionally, you wouldn't be able to insert these checks before shared libraries you have no control over call imported symbols. These checks would also be useless if the attacker decided to just replace the shared library with one they control.

As an aside, launch closures allow dyld 3 to replace these lookup tables with pre-bound information. I don't think that launch closures are currently read-only, but it looks like they could eventually be. If they are, then swizzling symbols will become harder.

Using dyld to force-load libraries or change which libraries your program loads

Dyld supports force-loading libraries into your process, for instance through the DYLD_INSERT_LIBRARIES environment variable. This capability can be used to replace just about any imported symbol that your executable uses. Don't like the normal fopen? Write a dylib that redefines it!

Dyld will not cooperate with this method if the executable is marked as restricted. There are three ways to achieve this status (look for pruneEnvironmentVariables):

  • enable the setuid bit or the setgid bit on your executable;
  • be code-signed and have the "Restricted" OS X-only entitlement;
  • have a section called __restrict in a segment called __RESTRICT.

You can create the __restrict section and the __RESTRICT segment using the following "Other Linker Flags":

-Wl,-sectcreate,__RESTRICT,__restrict,/dev/null

Note that all of these are pretty easy to break. The setuid and setgid bits are trivial to clear when the user controls the execution environment, a code signature is easy to remove, and the section or segment just has to be renamed to get rid of the restricted status as well.

Replacing the libraries that your program links against

If all else fails, an attacker can still replace the shared libraries that your executable uses to make it do whatever they like. You have no control over that.

tl;dr

Injecting code in a Swift application is harder than it was for an Objective-C application, but it's still possible. Most of the methods that can be used to inject code are language-independent, meaning that no language will make you safer.

For the most part, there is nothing that you can do to protect yourself against this. As long as the user controls the execution environment, your code is running as a guest on their system, and they can do almost whatever they want with it.

answered by zneak