Please bear with me on this one. I am first going to describe an example from the book, and then ask my question at the end.
According to the text on Programming Language Paradigms:
Inductive specification is a powerful method of specifying a set of values. To illustrate this method, we use it to describe a certain subset S of the natural numbers N = {0, 1, 2, . . .}.
Top-down definition:
A natural number n is in S if and only if
1. n = 0, or
2. n − 3 ∈ S.
We know that 0 ∈ S. Therefore 3 ∈ S, since (3 − 3) = 0 and 0 ∈ S. Similarly 6 ∈ S, since (6−3) = 3 and 3 ∈ S. Continuing in this way, we can conclude that all multiples of 3 are in S.
What about other natural numbers? Is 1 ∈ S? We know that 1 != 0, so the first condition is not satisfied. Furthermore, (1−3) = −2, which is not a natural number and thus is not a member of S. Therefore the second condition is not satisfied.
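Read operationally, the top-down definition is already a recursive membership test. A minimal sketch in Haskell (my own transcription; the name inS is assumed, not from the book):

```haskell
-- n is in S iff n = 0, or n - 3 is a natural number that is in S.
inS :: Integer -> Bool
inS 0 = True
inS n
  | n >= 3    = inS (n - 3)   -- second condition: (n - 3) ∈ S
  | otherwise = False         -- 1 and 2 satisfy neither condition
```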
Bottom-up definition:
Define the set S to be the smallest set contained in N and satisfying the following two properties:
1. 0 ∈ S, and
2. if n ∈ S, then n + 3 ∈ S.
A “smallest set” is the one that satisfies properties 1 and 2 and that is a subset of any other set satisfying properties 1 and 2. It is easy to see that there can be only one such set: if S1 and S2 both satisfy properties 1 and 2, and both are smallest, then S1 ⊆ S2 (since S1 is smallest), and S2 ⊆ S1 (since S2 is smallest), hence S1 = S2. We need this extra condition because otherwise there are many sets that satisfy properties 1 and 2 (for example, N itself satisfies both).
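The bottom-up reading also says how to generate S: start with 0 and keep adding 3. A minimal sketch (again my own, not from the book):

```haskell
-- 0 is in S, and whenever n is in S, so is n + 3.
s :: [Integer]
s = iterate (+ 3) 0   -- 0, 3, 6, 9, ...
```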
Rules of Inference:
-----
0 ∈ S

n ∈ S
-----------
(n + 3) ∈ S
This is simply a shorthand notation for the preceding version of the definition. Each entry is called a rule of inference, or just a rule; the horizontal line is read as an “if-then.” The part above the line is called the hypothesis or the antecedent; the part below the line is called the conclusion or the consequent. When there are two or more hypotheses listed, they are connected by an implicit “and”.
Now here come the questions.
I want to find the top-down, bottom-up, and inference definitions for the two problems below. You don't have to give me the answer, but I do want to know how to go about deriving the inductive definition. Where do I start? Is there a systematic approach (a recipe) for doing these types of problems?
1. {2n + 3m + 1 | n, m ∈ N}
2. {(n, 2n + 1) | n ∈ N}
You've asked a lot of questions here, so hopefully this reply answers all of them. If there's something you'd like clarified, please let me know.
Your first question - why do we need inductive definitions? - is easiest to answer. In computer science, a huge number of structures are defined inductively. For example, your simple linked list has the inductive definition: a list is either the empty list, or a node holding one element followed by a smaller list.
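A sketch of that definition as a Haskell data type (the names List, Nil, and Cons are placeholders; this is essentially the built-in list type spelled out):

```haskell
-- A list is either empty, or one element followed by a (smaller) list.
data List a = Nil | Cons a (List a)
```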
Or a binary tree: a binary tree is either the empty tree, or a node with a value and left and right binary subtrees.
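As a sketch in Haskell (again, the constructor names are just placeholders):

```haskell
-- A binary tree is either the empty tree, or a node carrying a value
-- together with a left and a right binary subtree.
data Tree a = Leaf | Node (Tree a) a (Tree a)
```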
Or regular expressions: a regular expression is either the empty language, the empty string, a single character, the concatenation of two regular expressions, the union of two regular expressions, or the Kleene star of a regular expression.
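A sketch in Haskell (one common way to carve up the cases; the constructor names are mine):

```haskell
-- A regular expression is one of six cases, four of which are built
-- from smaller regular expressions.
data Regex
  = Empty              -- matches nothing
  | Epsilon            -- matches only the empty string
  | Lit Char           -- matches a single character
  | Cat Regex Regex    -- concatenation
  | Alt Regex Regex    -- union / alternation
  | Star Regex         -- Kleene star
```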
These definitions have many nice properties. First, they're amenable to recursive algorithms. If you want to visit every node in a binary tree, you can use the inductive definition of binary trees to build a simple visiting algorithm:
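One way to write that down, as a sketch in Haskell (an in-order traversal; the Tree type repeats the earlier sketch so this block stands alone):

```haskell
-- Same Tree type as the earlier sketch.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Visit every node: nothing to do for the empty tree; for a node, visit
-- the left subtree, the node itself, then the right subtree (in-order).
visit :: Tree a -> [a]
visit Leaf         = []
visit (Node l x r) = visit l ++ [x] ++ visit r
```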
Similarly, if we want to manipulate the structure of a regular expression - perhaps to build a matching automaton for it - then the inductive definition lets us build up the automaton out of the regular expression piecewise.
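As a sketch of that piecewise style (my own, in Haskell, using Brzozowski derivatives rather than an explicit automaton construction - each constructor of the definition gets its own case in the code):

```haskell
-- Same Regex type as the earlier sketch, repeated so this block stands alone.
data Regex = Empty | Epsilon | Lit Char
           | Cat Regex Regex | Alt Regex Regex | Star Regex

-- Does the regex accept the empty string?
nullable :: Regex -> Bool
nullable Empty     = False
nullable Epsilon   = True
nullable (Lit _)   = False
nullable (Cat r s) = nullable r && nullable s
nullable (Alt r s) = nullable r || nullable s
nullable (Star _)  = True

-- Brzozowski derivative: the regex matching whatever may follow once
-- the character c has been consumed.
deriv :: Char -> Regex -> Regex
deriv _ Empty     = Empty
deriv _ Epsilon   = Empty
deriv c (Lit d)   = if c == d then Epsilon else Empty
deriv c (Cat r s)
  | nullable r    = Alt (Cat (deriv c r) s) (deriv c s)
  | otherwise     = Cat (deriv c r) s
deriv c (Alt r s) = Alt (deriv c r) (deriv c s)
deriv c (Star r)  = Cat (deriv c r) (Star r)

-- Match a whole string: differentiate once per character, then ask
-- whether the resulting regex accepts the empty string.
matches :: Regex -> String -> Bool
matches r = nullable . foldl (flip deriv) r
```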
Inductive definitions can also be used to formally prove properties of structures. For example, here's a formal definition of an AVL tree: an AVL tree is either the empty tree, or a node whose left and right subtrees are AVL trees with heights that differ by at most one.
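A sketch of that definition in Haskell (my names; the height is cached at each node purely for convenience, and key ordering is left out since only balance matters here):

```haskell
-- An AVL tree is either empty, or a node whose subtrees are AVL trees
-- with heights that differ by at most one.
data AVL a = Tip | Branch Int (AVL a) a (AVL a)   -- the Int caches the height

height :: AVL a -> Int
height Tip              = 0
height (Branch h _ _ _) = h

-- The invariant the inductive definition builds in:
isBalanced :: AVL a -> Bool
isBalanced Tip = True
isBalanced (Branch h l _ r) =
     isBalanced l
  && isBalanced r
  && abs (height l - height r) <= 1
  && h == 1 + max (height l) (height r)
```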
Using this definition, it's possible to formally prove that AVL trees are balanced.
We can similarly use these sorts of definitions to prove properties about programming languages. Most languages have some sort of inductive definition, so by proving that each part of a program preserves some piece of information we can build up correctness proofs from scratch.
Your second question - why doesn't Google turn up any examples for "inductive definition"? - I think comes down to Google interpreting the query as "define induction" rather than as a term in its own right. If you look up "recursive definition" instead, you'll find plenty of examples, since inductive definitions and recursive definitions are very similar to one another.
Your third question - why do we need multiple ways of expressing the same thing? - is also easy. Inductive definitions are excellent if you want to prove something about a system: if you prove that a property holds for the basic elements and then show that building larger elements preserves it, you can conclude that it holds everywhere. Recursive definitions are good for writing programs, since recursive functions tend to run induction backwards. Rules of inference connect with logical proof systems and give a starting point for using classical logic to prove properties of your system.
Your fourth question - why can't you find any examples? - is easily addressed by taking a minute to look at the classical data structures and algorithms you already know. How many data structures could you define inductively? Try looking at linked lists, binary trees, red-black trees, AVL trees, etc. for inspiration. How would you define a graph? A DAG? Similarly, try looking at syntactic structures. For example, how would you inductively define strings of balanced parentheses? How about function prototypes in C?
Your final question - is there a systematic way of solving these problems? - has a negative answer. You can define recurrences that are equivalent to simulating arbitrary Turing machines on inputs, and since the halting problem is undecidable there is no general procedure for solving these sorts of problems. Numerous approaches do exist, though. Try looking at "Concrete Mathematics" by Graham, Knuth, and Patashnik for inspiration about how to work through recurrences.
Hope this helps!