Commercial-grade randomization for Poker game

I need some advice on how to tackle an algorithmic problem (i.e., not programming per se). What follows are my needs and how I tried to meet them. Any comments for improvement would be welcome.

Let me first start off by explaining my goal. I would like to play some poker about a billion times. Maybe I'm trying to create the next PokerStars.net, maybe I'm just crazy.

I would like to create a program that produces better-randomized decks of cards than, say, the typical program calling random(). These need to be production-quality decks created from high-quality random numbers. I've heard that commercial-grade poker servers use 64-bit vectors for every card, thus ensuring randomness across the millions of poker games played daily.

I'd like to keep whatever I write simple. To that end, the program should only need one input to achieve the stated goal. I have decided that whenever the program begins, it will record the current time and use that as the starting point. I realize that this approach would not be feasible for commercial environments, but as long as it can hold up for a few billion games, better than simpler alternatives, I'll be happy.

I began to write pseudo-code to solve this problem, but ran into a thorny issue. It's clear to me, but it might not be to you, so please let me know.

Pseudo-code below:

    Start by noting the system time.
    Hash the current time (with MD5) around ten times (I chose ten arbitrarily).
    Take the resulting hash and use it as the seed to the language-dependent random() function.
    Call random() 52 times and store the results.
    Take the values produced by random() and hash them.
    Any hash function that produces at least 64 bits of output will do for this.
    Truncate (if the hash is too big) so each hash fits inside a 64-bit integer.
    Find a way to map the 52 values (which should be random now, according to my calculations) onto 52 different cards, so we can play some poker.
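
For concreteness, here is a rough Python sketch of those steps exactly as written (Python and the helper name questionable_deal are just illustrative choices; this reproduces the scheme in question, flaws and all, with the final mapping step left open):

    import hashlib
    import random
    import time

    def questionable_deal():
        # Steps 1-2: note the system time and hash it (with MD5) ten times.
        digest = str(time.time()).encode()
        for _ in range(10):
            digest = hashlib.md5(digest).digest()

        # Step 3: use the resulting hash as the seed to random().
        random.seed(int.from_bytes(digest, "big"))

        # Steps 4-7: call random() 52 times, hash each value, truncate to 64 bits.
        values = []
        for _ in range(52):
            h = hashlib.md5(str(random.random()).encode()).digest()
            values.append(int.from_bytes(h[:8], "big"))  # keep 64 of MD5's 128 bits

        # The open question: map these 52 values onto 52 distinct cards.
        return values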

My issue is with the last step. I cannot think of a way to properly map each 64-bit value to a corresponding card, without having to worry about two numbers being the same (unlikely) or losing any randomness (likely).

My first idea was to break 0x0000000000000000 - 0xFFFFFFFFFFFFFFFF into four even sections (to represent the suits). But there is no guarantee that we will find exactly thirteen cards per section, which would be bad.

Now that you know where I am stuck, how would you overcome this challenge?

-- Edited --

Reading bytes from /dev/random would work well, actually. But that still leaves me lost on how to do the conversion (assuming I read enough bytes for 52 cards).

My real desire is to take something simple and predictable, like the system time, and transform it into a randomized deck of cards. Seeding random() with the system time is a BAD way to go about this. Hence the hashing of the time and of the values that come out of random().

Hell, if I wanted to, I could hash the bytes from /dev/random, just for shizzles and giggles. Hashing improves the randomness of things, doesn't it? Isn't that why modern systems store passwords that have been hashed thousands of times?

-- Edit 2 --

So I've read your answers and I find myself confused by the conclusion many of you are implying. I hinted at it in my first edit, but it's really throwing me for a loop. I'd just like to point it out and move on.

Rainbow tables exist that do funky math and clever magic to act, in effect, as a lookup table from common hashes back to particular passwords. It is my understanding that longer, better passwords are unlikely to show up in these rainbow tables. But the fact still stands that, however common many user passwords are, the hashed passwords remain safe after being hashed thousands of times.

So is that a case where many deterministic operations have increased the randomness of the original password (or seem to)? I'm not saying I'm right, I'm just saying that's my feeling.

The second thing I want to point out is that I'm doing this backwards.

What I mean is that you all are suggesting I take a sorted, predictable, non-random deck of cards and use the Fisher-Yates shuffle on it. I'm sure Fisher-Yates is a fine algorithm, but let's say you couldn't use it for whatever reason.

Could you take a random stream of bytes, say in the neighborhood of 416 bytes (52 cards with 8 bytes per card), and BAM, produce an already-random deck of cards? The bytes were random, so it shouldn't be too hard to do this.

Most people would start with a deck of 52 cards (random or not) and swap them around a bunch of times (by picking a random index to swap). If you can do that, then you can take 52 random numbers, run through them once, and produce the randomized deck.

As simply as I can describe it: the algorithm accepts a stream of randomized bytes and looks at each 8-byte chunk, mapping each chunk to a card.

Ex. 0x123 maps to the Ace of Spades
Ex. 0x456 maps to the King of Diamonds
Ex. 0x789 maps to the 3 of Clubs
... and so on.

As long as we choose a good model for the mapping, this is fine. No shuffling required. The program will be reduced to two steps.

Step 1: Obtain a sufficient quantity of random bytes from a good source.
Step 2: Split this stream of bytes into 52 chunks, one for each card in the deck.
Step 2a: Run through the 52 chunks, converting them into card values according to our map (sketched below).
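
Sketched in Python, one way to realize those steps without the collision worry is to treat each 8-byte chunk as a sort key and order a fixed deck by those keys (a "sort-by-random-key" shuffle; the card-label scheme below is just one illustrative choice of map):

    import os

    RANKS = "23456789TJQKA"
    SUITS = "cdhs"
    DECK = [r + s for s in SUITS for r in RANKS]  # 52 labels, e.g. "Ac" = Ace of clubs

    def deal_from_bytes(stream):
        # Step 2: split the stream into 52 chunks of 8 bytes each.
        keys = [int.from_bytes(stream[i * 8:(i + 1) * 8], "big") for i in range(52)]
        # Step 2a: order the deck by the random keys; distinct 64-bit keys give
        # a uniformly random permutation, and ties are astronomically rare.
        return [card for _, card in sorted(zip(keys, DECK))]

    deck = deal_from_bytes(os.urandom(52 * 8))  # Step 1: 416 bytes from a good source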

Does that make sense?

asked Nov 27 '22 by Tommy Fisk


2 Answers

You are massively overcomplicating the problem. You need two components to solve it:

  1. A shuffling algorithm
  2. A sufficiently high-quality random number generator for the shuffling algorithm to use.

The first is easy: just use the Fisher-Yates shuffle algorithm.
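
In Python, a minimal sketch might look like this (the rng parameter stands in for whatever generator comes out of the second component):

    import random

    def fisher_yates_shuffle(deck, rng):
        # Walk the deck back to front, swapping each position with a
        # uniformly chosen position at or before it.
        for i in range(len(deck) - 1, 0, -1):
            j = rng.randrange(i + 1)  # uniform index in [0, i]
            deck[i], deck[j] = deck[j], deck[i]

    deck = list(range(52))
    fisher_yates_shuffle(deck, random.Random())  # see below for proper seeding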

For the second, if you want sufficient degrees of freedom to be able to generate every possible permutation (of the 52! possibilities) then you need at least 226 bits of entropy. Using the system clock won't give you more than 32 or 64 bits of entropy (in practice far fewer as most of the bits are predictable), regardless of how many redundant hashes you perform. Find an RNG that uses a 256-bit seed and seed it with 256 random bits (a bootstrapping problem, but you can use /dev/random or a hardware RNG device for this).
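
For illustration, the seeding step might look like this in Python, assuming the standard library's Mersenne Twister for concreteness (its 19937-bit internal state comfortably absorbs a 256-bit seed; it is not cryptographically secure, so a real-money site would want a CSPRNG instead, but the bootstrapping idea is the same):

    import os
    import random

    seed = int.from_bytes(os.urandom(32), "big")  # 256 random bits from the OS
    rng = random.Random(seed)

    deck = list(range(52))
    rng.shuffle(deck)  # random.shuffle is itself a Fisher-Yates pass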

answered Jan 01 '23 by Dan Dyer


You don't mention which OS you're on, but most modern OSes have ready-made sources of high-quality entropy. On Linux, they are /dev/random and /dev/urandom, from which you can read as many random bytes as you want.
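
For example, reading enough bytes for a whole deck is a few lines of Python:

    # Read 416 bytes (8 per card, as in the question) from the kernel's entropy source.
    with open("/dev/urandom", "rb") as f:
        random_bytes = f.read(52 * 8)
    # Portable equivalent: random_bytes = os.urandom(52 * 8)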

Writing your own random number generator is highly non-trivial if you want good randomness. Any homebrew solution is likely to be flawed, and its output could potentially be predicted.

answered Jan 01 '23 by Marc B