Is this practical on a small supercomputer?

I'm investigating WEP and as part of that, I'm toying with the RC4 algorithm. I'm trying to decide whether an inverse table would be feasible to build (it would be huge... I don't have the space and I don't actually intend to build one). To that end, I've decided to check how many matching outputs there are in the first 10 bytes. That will help me judge how well an inverse table would work.

Of course, 64-bit RC4 has 2^64 possible keys, so comparing every pair of outputs would mean ~2^128 comparisons. Plus, 10 bytes have to be generated for each key, which is approximately 266 loop iterations (256 for RC4 initialization, 10 for the bytes themselves).
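To make the per-key cost concrete, here's a minimal RC4 sketch in Java (the question's language); the class and method names are mine. Generating 10 keystream bytes really does cost 256 key-scheduling swaps plus 10 output steps:

```java
public class Rc4First10 {
    // First `count` RC4 keystream bytes for `key`: 256 KSA swaps,
    // then one PRGA step per output byte -- the ~266 iterations
    // per key mentioned above for 10 bytes.
    static byte[] keystream(byte[] key, int count) {
        int[] s = new int[256];
        for (int i = 0; i < 256; i++) s[i] = i;          // identity permutation
        for (int i = 0, j = 0; i < 256; i++) {           // key scheduling (KSA)
            j = (j + s[i] + (key[i % key.length] & 0xFF)) & 0xFF;
            int t = s[i]; s[i] = s[j]; s[j] = t;
        }
        byte[] out = new byte[count];
        int i = 0, j = 0;
        for (int k = 0; k < count; k++) {                // output generation (PRGA)
            i = (i + 1) & 0xFF;
            j = (j + s[i]) & 0xFF;
            int t = s[i]; s[i] = s[j]; s[j] = t;
            out[k] = (byte) s[(s[i] + s[j]) & 0xFF];
        }
        return out;
    }

    public static void main(String[] args) {
        StringBuilder hex = new StringBuilder();
        for (byte b : keystream("Key".getBytes(), 10))
            hex.append(String.format("%02X", b));
        // Published test vector: keystream for key "Key" begins EB 9F 77 81 B7 34 CA 72 A7 ...
        System.out.println(hex);
    }
}
```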

Down to business:

On a supercomputer with around 100 cores, would it be possible to perform around 2^135 calculations in 20 days?

(20 days is the limit until I am kicked off. I could end up with only 8, or I could end up with 400+, but I'm guessing 100 cores.)

If it means anything, my program is written in Java. http://pastie.org/2118864

Ryan Amos asked Jun 25 '11


2 Answers

Interesting question, and very hard to answer properly. Scalability is one of those "try it and see" things most of the time.

One thing to note is that because of other factors, you're going to get less-than-linear scaling with multi-core systems.

Let's say your program can compare n keys per second. So an ideal (i.e. linear) 100-core system will compute 100n keys per second. To compare all keys (a worst-case scenario, realistic would be half of that) would take (2^135/100n)/86400 days.

If n is 1000, it would take 5041220250680569829087031221211 days, which is about 5 × 10^30 days, or roughly 10^18 times some estimates of the age of the universe.
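That figure is easy to check with exact integer arithmetic; this hypothetical snippet (class name mine) redoes the division above with BigInteger:

```java
import java.math.BigInteger;

public class BruteForceDays {
    public static void main(String[] args) {
        BigInteger ops = BigInteger.valueOf(2).pow(135);          // total comparisons
        BigInteger keysPerSec = BigInteger.valueOf(100L * 1000);  // 100 cores, n = 1000 each
        BigInteger days = ops.divide(keysPerSec)
                             .divide(BigInteger.valueOf(86400)); // seconds -> days
        System.out.println(days); // a 31-digit number, ~5.04e30 days
    }
}
```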

So I'm going to say... no :) Cryptographic algorithms are designed to resist exactly these kinds of attacks. Also, Java would be about the last language to choose for such an application :p

bcoughlan answered Oct 13 '22


In an ideal world, it's around:

2^135 operations ÷ 20 days ÷ 24 hours/day ÷ 60 min/hour ÷ 60 sec/min ÷ 100 cores ≈ 10^32 operations per second per core (Hz/core), assuming my math isn't off.

You'd need 10^32 Hz cores, each doing a single calculation per clock; normally one calculation takes multiple clocks. That's... not very reachable at the moment, to say the least. The best you'd reach with a supercomputer is probably around the general area of ~10 GHz = 10^10 Hz per core, if you're lucky.

(And this is all ignoring Amdahl's law...)
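Redoing that division exactly (a sketch, class name mine) shows the size of the gap: each core would have to be about 22 orders of magnitude faster than a 10 GHz clock.

```java
import java.math.BigInteger;

public class PerCoreRate {
    public static void main(String[] args) {
        BigInteger ops = BigInteger.valueOf(2).pow(135);
        BigInteger seconds = BigInteger.valueOf(20L * 24 * 60 * 60); // 20 days in seconds
        BigInteger rate = ops.divide(seconds)
                             .divide(BigInteger.valueOf(100));      // split across 100 cores
        System.out.println(rate); // ~2.5e32 ops/s per core, vs ~1e10 Hz for a real core
    }
}
```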

user541686 answered Oct 13 '22