I did go through this, but I don't understand it.
Sudoku is NP-complete when generalized to an n × n grid; however, a standard 9 × 9 Sudoku is not NP-complete.
The generalised Sudoku problem is an NP-complete problem which, effectively, requests a Latin square that satisfies some additional constraints. In addition to the standard requirement that each row and column of the Latin square contains each symbol precisely once, Sudoku also demands block constraints.
Sudoku is actually what's called an NP-complete problem. That is, computer scientists have found reductions linking the Sudoku problem to every other problem in NP.
Proof sketch that Sudoku is in NP (the easier half of NP-completeness): given a proposed solution, we simply go through every column, row, and box to check whether there are duplicate integers. If there are, we reject. Since this verifier runs in O(n²) time for an n × n grid, the Sudoku problem is in NP.
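A minimal sketch of such a verifier in Python (assuming, as usual, that n is a perfect square so the grid divides into √n × √n boxes; the function name is illustrative, not from the original post):

```python
from math import isqrt

def verify(grid):
    """Check that a completed n x n Sudoku grid has no duplicate
    in any row, column, or box. Runs in O(n^2) for an n x n grid."""
    n = len(grid)
    b = isqrt(n)  # box side length; assumes n is a perfect square
    for i in range(n):
        row = grid[i]
        col = [grid[r][i] for r in range(n)]
        # cells of the i-th box, reading boxes left-to-right, top-to-bottom
        box = [grid[(i // b) * b + r][(i % b) * b + c]
               for r in range(b) for c in range(b)]
        for group in (row, col, box):
            if sorted(group) != list(range(1, n + 1)):
                return False  # duplicate or missing symbol: reject
    return True
```

Each of the n rows, n columns, and n boxes is scanned once, touching n cells per group, which is where the O(n²) bound comes from.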
The analogy with Sudoku is apt, as 'trial and error' uses very much the same idea. When navigating a maze, an experienced human can make educated guesses about the most promising path rather than slavishly trying each option in turn.
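That 'educated guess' idea can be sketched as a backtracking solver that always branches on the most constrained empty cell first (shown here on a 4 × 4 grid for brevity; the function names are hypothetical, not from the original post):

```python
from math import isqrt

def candidates(grid, r, c):
    """Symbols that can legally go in empty cell (r, c)."""
    n = len(grid)
    b = isqrt(n)
    used = set(grid[r]) | {grid[i][c] for i in range(n)}
    br, bc = r - r % b, c - c % b
    used |= {grid[br + i][bc + j] for i in range(b) for j in range(b)}
    return [v for v in range(1, n + 1) if v not in used]

def solve(grid):
    """Backtracking with an 'educated guess': always branch on the
    empty cell with the fewest legal candidates. Mutates grid in
    place; returns True if a solution was found."""
    n = len(grid)
    empties = [(r, c) for r in range(n) for c in range(n)
               if grid[r][c] == 0]
    if not empties:
        return True
    r, c = min(empties, key=lambda rc: len(candidates(grid, *rc)))
    for v in candidates(grid, r, c):
        grid[r][c] = v
        if solve(grid):
            return True
        grid[r][c] = 0  # undo and try the next option
    return False
```

Branching on the most constrained cell is the programmatic version of the experienced human's guess: it keeps the search tree small compared to trying cells in a fixed order.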
You are analyzing a problem in O(N) as N goes towards infinity. But your input does not vary with N towards infinity: it has a finite upper bound, and that upper bound is a constant.
The reason for this is that there is a finite set of solutions. You can enumerate each and every 9x9 Sudoku, and index all solutions into a dictionary keyed by the known input values. Finding a solution is then just a constant-time lookup in your pre-generated dictionary. It does not matter that the table is huge, only that it is finite.
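A toy illustration of that lookup argument (the real table for 9x9 would be astronomically large; the single 4x4 entry and the '.'-for-blank string encoding below are made up purely to show the idea):

```python
# Hypothetical pre-generated table. In the argument above it would
# hold every solvable 9x9 puzzle; here it holds one toy 4x4 entry.
# Puzzles and solutions are encoded row by row as strings,
# with '.' marking a blank cell.
SOLUTIONS = {
    "12..34..21..43..": "1234341221434321",
}

def solve_by_lookup(puzzle):
    """Constant time: a single dictionary lookup, no search at all."""
    return SOLUTIONS[puzzle]
```

The size of the table affects memory, not asymptotic running time: an average-case O(1) hash lookup stays O(1) however many entries the (finite) table holds.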
In fact, another solution is to generate all possible Sudoku grids until you find one that solves your input. This might seem like a linear search at first sight, but since there is a finite upper limit on the number of grids, it is actually a constant-time algorithm.
Correct; any 9x9 Sudoku can be solved in O(1) time (as can a 1x1 Sudoku, or a 4x4 Sudoku, or even a 1000x1000 Sudoku) because the input size is fixed. NP-completeness is a concept that applies to decision problems with variable input size, so that you can analyze the running time of an algorithm as that input size grows asymptotically.
The difference is whether the algorithm can assume the size of the input, or has to wait until it receives the input to see how big it is.
The input doesn't have to be encoded in binary; it just has to use some finite-sized alphabet. For fixed-sized Sudoku, you can choose an alphabet that has one unique symbol for each possible puzzle. (In practical terms, you can encode the theoretical alphabet in binary, with a fixed-sized binary string for each alphabet symbol. This is how ASCII works. The input size is still constant; it's just a constant that is bigger than one.) The algorithm then uses a hard-coded table that pairs each symbol in the input alphabet with its solution. The constant-time algorithm that solves the puzzle is just a table lookup.
Now consider the problem where the puzzles do not have a fixed size. There are an infinite number of possible puzzles, so the algorithm has to specify some encoding scheme that can describe an infinite number of puzzles using a finite-sized alphabet. This has two immediate consequences.
You cannot store the solutions to all possible inputs in a finite amount of space, so your algorithm needs to do actual work to solve a puzzle once it sees the input.
Not all the inputs will have the same size, since a fixed-length string of symbols from a finite alphabet can only encode a finite number of puzzles. Once the inputs have different sizes, you can consider how much work your algorithm has to do as a function of the input size. (Just reading the input is now an O(n) operation; the work needed to solve the problem may be, and usually is, greater.)
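A small sketch of why even reading the input is O(n) once sizes vary: any fixed-alphabet encoding of an n × n board necessarily grows with n. The particular encoding below (digits separated by commas and semicolons, '0' for blank) is a made-up example, not a standard format:

```python
def encode(grid):
    """Encode a puzzle over a fixed alphabet: cell values separated
    by commas, rows separated by semicolons, '0' for a blank cell."""
    return ";".join(",".join(str(v) for v in row) for row in grid)

def blank(n):
    """An n x n board of all blanks, for comparing encoding lengths."""
    return [[0] * n for _ in range(n)]
```

A blank 9x9 board encodes to a longer string than a blank 4x4 board, so an algorithm that must work for every n spends time proportional to the input length before it can even begin solving.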