CVE-2021-26084, an OGNL injection flaw in Atlassian Confluence Server, quickly became one of the most routinely exploited vulnerabilities after a proof-of-concept (PoC) was released within a week of its disclosure.
Definition(s): A security flaw, glitch, or weakness found in software code that could be exploited by an attacker (threat source).
As we've written before, a vulnerability is a weakness in a software system. And an exploit is an attack that leverages that vulnerability. So while vulnerable means there is theoretically a way to exploit something (i.e., a vulnerability exists), exploitable means that there is a definite path to doing so in the wild.
My favorite, and the most impressive I've seen so far, is a class of cryptographic techniques known as side-channel attacks.
One type of side-channel attack uses power monitoring. Encryption keys have been recovered from smart-card devices by carefully analyzing how much power is drawn from the power supply: the embedded processor uses different amounts of power to execute different instructions. Using this tiny bit of information, it's possible to recover protected data, completely passively.
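To make the idea concrete, here is a toy sketch (all names and costs are made up; this is an illustration of the principle, not a real attack). A naive square-and-multiply exponentiation branches on each secret key bit, so 1-bits cost measurably more power than 0-bits, and the key can be read straight off a power trace:

#include <stdint.h>
#include <stdio.h>

// Toy model of WHY simple power analysis works: the amount of work
// (and therefore the power drawn) depends on the secret key's bits.
// A real attack measures supply current with an oscilloscope; here a
// counter of "work units" stands in for the power trace.

static unsigned long work_done = 0;

static void square_op(void)   { work_done += 1;  }  // cheap: always runs
static void multiply_op(void) { work_done += 10; }  // costly: key-dependent

// Naive square-and-multiply exponentiation: branches on each key bit.
static void naive_exponentiate(uint8_t key)
{
    for (int bit = 7; bit >= 0; bit--) {
        square_op();                  // happens for every bit
        if ((key >> bit) & 1)
            multiply_op();            // happens only for 1-bits
    }
}

int main(void)
{
    // An observer watching the power trace sees a distinct spike for
    // every 1-bit, reading the secret key straight off the trace.
    naive_exponentiate(0xB5);         // binary 10110101: five spikes
    printf("work units consumed: %lu\n", work_done);
    return 0;
}

This is why real cryptographic implementations go to great lengths to be constant-time and constant-power regardless of key material.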
Everyone knows about SQL injection, but one of the most surprising exploits I recently heard about was embedding SQL injection payloads in barcodes. Testers should be checking ALL inputs for malicious SQL. An attacker could show up at an event and crash its registration system, change prices at stores, etc. Barcode hacking in general was just surprising to me. No wow factor here, just something else to be aware of.
EDIT: I just had a discussion where the idea of putting the SQL injection on a magnetic card stripe was brought up. You can put one anywhere, so test any and all input, especially from users and from these kinds of data-storage devices.
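As a sketch of what the scanner-side bug looks like (the table and column names here are hypothetical, using SQLite's C API as a stand-in for whatever database the registration system uses):

#include <stdio.h>
#include <sqlite3.h>

// Hypothetical event-registration lookup. The scanned barcode is just
// untrusted input and must be treated like any other form field.

// VULNERABLE: splices the scanned text into the SQL string. A badge
// encoding  '; DROP TABLE attendees; --  executes as SQL.
void lookup_unsafe(sqlite3 *db, const char *barcode)
{
    char sql[256];
    snprintf(sql, sizeof sql,
             "SELECT name FROM attendees WHERE badge = '%s';", barcode);
    sqlite3_exec(db, sql, NULL, NULL, NULL);   // injectable
}

// SAFE: a parameterized query; the barcode can never become SQL.
void lookup_safe(sqlite3 *db, const char *barcode)
{
    sqlite3_stmt *stmt;
    if (sqlite3_prepare_v2(db, "SELECT name FROM attendees WHERE badge = ?;",
                           -1, &stmt, NULL) != SQLITE_OK)
        return;
    sqlite3_bind_text(stmt, 1, barcode, -1, SQLITE_TRANSIENT);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
    sqlite3_finalize(stmt);
}

The parameterized version is immune no matter what the badge encodes, because the scanned bytes are bound as data and never parsed as SQL.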
I think a relatively recent Linux kernel vulnerability qualifies for your description of exploiting code that seems safe (though slightly misordered).
This was specifically the piece of code in the Linux kernel:
struct sock *sk = tun->sk;  // dereferences tun before the check below
…
if (!tun)
    return POLLERR;         // if tun is NULL, return an error
Due to a GCC optimization, the if statement and its body are removed: since tun was already dereferenced, the compiler concludes it cannot be NULL and deletes the check. That is reasonable for userland code, where a NULL dereference faults, but not for kernel code, where an attacker may be able to map page zero. Through some cleverness a person was able to build an exploit out of this.
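The fix is simply to perform the check before the dereference; a minimal sketch of the safe ordering (the actual kernel patch is more involved):

if (!tun)
    return POLLERR;             // check first: nothing has dereferenced
struct sock *sk = tun->sk;      // tun yet, so GCC cannot delete the test

The kernel has since also adopted GCC's -fno-delete-null-pointer-checks flag to disable this class of optimization outright.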
A summary:
http://isc.sans.org/diary.html?storyid=6820
A posted exploit:
http://lists.grok.org.uk/pipermail/full-disclosure/2009-July/069714.html
EDIT: Here is a much more in-depth summary of how this code was exploited. It's a short read, but a very good explanation of the mechanisms used for the exploit.
http://lwn.net/SubscriberLink/342330/f66e8ace8a572bcb/
A classic exploit was Ken Thompson's hack that gave him root access to every Unix system on Earth.
Back when Bell Labs was the sole supplier of Unix, they distributed the source code so each installation could build and customize its own OS. This source included the Unix login command. Ken modified the C compiler to recognize when it was compiling the login command and, if so, to add an extra password check. This password was his own magic one and granted root access.
Of course, anyone reading the C compiler's source would see this and take it out. So Ken modified the C compiler again so that if it was compiling a C compiler, it would put the login hack back in.
Now comes the mind-bending part: Ken compiled the C compiler with his hacked compiler, then deleted all traces of his hack from the source, the backups, and source control. It existed only in the compiled binary that was part of the Unix distribution.
Anyone who got this Unix from Bell Labs got a hacked login command and a hacked C compiler. If they looked at the source, it was safe. If they rebuilt the OS, the hacked compiler would hack the rebuilt compiler, which would re-insert the hack into the login command.
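Schematically, the two-stage mechanism looks something like this (a toy sketch with hypothetical names throughout; Thompson's actual code was never published):

#include <stdio.h>
#include <string.h>

// A schematic of the two self-propagating hacks. compile_honestly()
// stands in for the legitimate compiler's normal behavior.

static void compile_honestly(const char *src)
{
    printf("compiling normally: %.25s...\n", src);
}

static void compile(const char *src)
{
    if (strstr(src, "login")) {
        // Stage 1: when compiling login, emit the magic-password backdoor.
        printf("emitting login with backdoor password\n");
    } else if (strstr(src, "cc_main")) {
        // Stage 2: when compiling the compiler, re-insert this very
        // check, so the hack survives a rebuild from clean source.
        printf("emitting compiler that still contains this hack\n");
    } else {
        compile_honestly(src);
    }
}

int main(void)
{
    compile("int login(void) { ... }");    // gets the backdoor
    compile("int cc_main(void) { ... }");  // regenerates the hack
    compile("int ls_main(void) { ... }");  // compiled honestly
    return 0;
}

Stage 2 is what makes the hack self-sustaining: once the binary is trojaned, no amount of rebuilding from pristine source will remove it.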
The lesson I take from this is that you cannot guarantee security through any amount of static analysis (inspecting the source code, the OS, the applications).
Ken revealed this in his 1984 Turing Award lecture, published by the ACM as "Reflections on Trusting Trust".
Years ago I took a look at a program (on the Acorn Archimedes) that was protected with a complex system of encryption (just to see how it was done and learn from it). It was very cleverly done: the decryption code itself was used as part of the decryption key, so any attempt to tamper with it would break the decryption and leave you with garbage in memory.
After two days of trying to work out how it was done and how to get around it, a friend visited. Using an operating-system tool (a click and a drag to max out the RMA memory allocation), he limited the RAM available to the process to just slightly more than the executable's size. Then he ran it. Immediately after decrypting itself it tried to allocate memory, failed, and crashed. He then saved the decrypted program straight from memory. Total crack time: about 2 minutes, using only a mouse drag and a command-line save.
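The self-keying trick looks roughly like this (a toy sketch under my own assumptions; the real program derived its key from its machine code in a far more tangled way):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

// Toy model of the protection described above: the decryptor's own
// bytes double as key material, so patching the decryptor corrupts
// the output. All names and values are illustrative.

static uint8_t decryptor_code[] = { 0x13, 0x37, 0xC0, 0xDE };  // stand-in

static uint8_t key_from_code(void)
{
    uint8_t k = 0;
    for (size_t i = 0; i < sizeof decryptor_code; i++)
        k ^= decryptor_code[i];       // the code bytes ARE the key
    return k;
}

int main(void)
{
    uint8_t secret[] = "HELLO";
    size_t n = 5;

    uint8_t k = key_from_code();
    for (size_t i = 0; i < n; i++) secret[i] ^= k;   // ship it encrypted

    decryptor_code[0] ^= 0xFF;        // a cracker patches the decryptor...

    uint8_t k2 = key_from_code();     // ...so the derived key changes...
    for (size_t i = 0; i < n; i++) secret[i] ^= k2;
    printf("%.5s\n", (char *)secret); // ...and decryption yields garbage
    return 0;
}

Note that none of this matters once the program has decrypted itself in memory, which is exactly what the memory-limit trick exploited.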
I learned from this that it isn't worth putting too much time and effort into protecting your software - if someone wants to crack it they will, and probably they'll do it by a simple means that never occurred to you.
(Disclaimer: We had both bought legal copies of this program, and never used the cracked code in any way. It truly was just an intellectual programming exercise)
Ok, this isn't a software vulnerability or exploit, but even so:
"Van Eck Phreaking is the process of eavesdropping on the contents of a CRT and LCD display by detecting its electromagnetic emissions." (From Wikipedia)
Just... wow...
I read about a clever way to steal your browser history just yesterday: adding JavaScript that inspects the color of the links on a page (browsers render links to sites you have visited in a different color).
This can be used to attack sites that add a security token to the URL: if the token is not too long, an attacker can generate a link for every possible token value and check which one renders as visited.