 

What does the term "reason about" mean in computer science?

While learning functional programming, I keep coming across the term "reason about", especially in the context of pure functions and/or referential transparency. Can somebody explain what exactly this means?

asked Sep 06 '13 by user2456976



2 Answers

Typically when writing a program, your job doesn't end with merely writing the code: you also want to know some of the properties your code exhibits. You can arrive at these properties by two means: logical analysis or empirical observation.

Examples of such properties include:

  • correctness (does the program do what it is supposed to)
  • performance (how long does it take)
  • scalability (how performance is affected by the size of the input)
  • security (can the algorithm be maliciously misused)

When you measure these properties empirically, you get results with limited precision. Mathematically proving these properties is therefore far superior; however, it is not always easy to do. Functional languages typically have, as one of their design goals, making mathematical proofs of their properties more tractable. This is what is typically meant by reasoning about programs.


In terms of functions or lesser units, the above applies, but also sometimes the author simply means thinking about the algorithm or designing the algorithm. It depends on the particular usage.


As an aside, some examples of how one might reason about some of these things and how empirical observation can be made:

Correctness: We can prove that code is correct if we can show equationally that it does what it is supposed to do. So for a sort function, if we can show that any list we give it comes out sorted, we know our code is correct. Empirically, we can create a unit test suite where we give our code examples of input and check that it produces the desired output.
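A minimal sketch of the empirical side in Python (`my_sort` is just a placeholder standing in for whatever implementation is under test):

```python
# Empirical correctness check: feed the sort function sample inputs
# and assert that the "output is sorted" property holds each time.

def my_sort(xs):
    # Placeholder for the implementation under test.
    return sorted(xs)

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

for sample in [[], [1], [3, 1, 2], [5, 5, -1, 0]]:
    assert is_sorted(my_sort(sample))
```

This only checks the inputs we happened to pick; an equational proof would cover every possible list, which is exactly the gap between empirical observation and mathematical reasoning.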

Performance & Scalability: We can analyze our code and prove performance bounds of the algorithm, so that we know how the time it takes depends on the size of the input. Empirically, we can benchmark our code and see how quickly it actually runs on a particular machine. We can perform load testing and see how much input our machine/algorithm can take before it becomes impractical.
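A quick Python sketch of the empirical benchmarking approach (using the standard `timeit` module against the built-in `sorted`, which stands in for the algorithm being measured):

```python
import timeit

# Time the function on growing inputs to observe how runtime
# scales with input size.
for n in [1_000, 10_000, 100_000]:
    data = list(range(n, 0, -1))  # a reversed list as test input
    t = timeit.timeit(lambda: sorted(data), number=10)
    print(f"n={n:>7}: {t:.4f}s for 10 runs")
```

Plotting those timings against n gives an empirical scalability curve, as opposed to the O(n log n) bound you would get from analysis.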

answered Oct 03 '22 by Jakub Hampl


Reasoning about code, in the loosest sense of the word, means thinking about your code and what it really does (not what you think it should do). It means

  • being aware of the way your code behaves when you throw data at it,
  • knowing what things you can refactor without breaking it, and
  • keeping tabs on what optimisations can be performed,

among other things. For me, the reasoning part plays the biggest role when I'm debugging or refactoring.

To use an example you mentioned: referential transparency helps me a lot when I'm trying to figure out what's wrong with a function. Referential transparency guarantees that as I poke around with the function, giving it different arguments, it will react the same way inside my program. It doesn't depend on anything other than its arguments. That makes the function easier to reason about – as opposed to an imperative language, where the function may depend on some external variable that can change under my nose.
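A tiny Python illustration of that difference (the names here are made up for the example):

```python
# An impure function reads module-level state; a pure one depends
# only on its arguments.

surcharge = 5  # external mutable state

def total_impure(price):
    # Depends on `surcharge`, which can change between calls.
    return price + surcharge

def total_pure(price, surcharge):
    # Same arguments in, same result out -- every time.
    return price + surcharge

print(total_impure(100))   # 105 right now...
surcharge = 7
print(total_impure(100))   # ...107 after someone else touched the state
print(total_pure(100, 5))  # 105, regardless of what changed elsewhere
```

With `total_pure` I can test the function in isolation and substitute any call with its result; with `total_impure` I first have to know the state of the whole program.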

Another way of looking at it (and this is more helpful when refactoring) is that the more you know your code satisfies certain properties, the easier it becomes to reason about. I know, for example, that

map f (map g xs) === map (f . g) xs 

This is a useful property I can just apply directly when I'm refactoring. The fact that I can state such properties of Haskell code makes it easier to reason about. I could try to state this property in a Python program, but I would be much, much less confident of it, because if I'm unlucky in my choices of f and g the results may vary wildly.
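You can at least spot-check that property in Python for particular pure f and g (a check on sample data, not the equational guarantee the Haskell property gives):

```python
# Spot-checking the map-fusion property for pure functions:
#   map f (map g xs) === map (f . g) xs

def f(x):
    return x + 1

def g(x):
    return x * 2

xs = [1, 2, 3, 4]

lhs = list(map(f, map(g, xs)))   # map f (map g xs)
rhs = [f(g(x)) for x in xs]      # map (f . g) xs

assert lhs == rhs == [3, 5, 7, 9]
```

The answer's point stands: this only holds because f and g are pure. If either one read or wrote outside state, fusing the two traversals into one could change the program's behaviour.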

answered Oct 04 '22 by kqr