In most SQL implementations, as opposed to standard programming languages, why doesn't x != null return true?

Tags:

sql

null

Let's suppose that x is some variable that has any value other than null, say 4, as an example. What should the following expression return?

x != null

In just about every programming language I have ever worked with (C#, JavaScript, PHP, Python), this expression, or its equivalent in that language, evaluates to true.

SQL implementations, on the other hand, all seem to handle this quite differently. If one or both operands of the inequality operator are NULL, either NULL or False will be returned. This is basically the opposite of the behavior that most programming languages use, and it is extremely unintuitive to me.

Why is the behavior in SQL like this? What is it about relational database logic that makes null behave so differently than it does in general-purpose programming?
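The difference is easy to see side by side. Here is a minimal sketch using SQLite through Python's `sqlite3` module (other SQL databases behave the same way for the comparison itself, though the value returned to the client may be represented differently):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# In SQL, comparing a non-NULL value with NULL yields NULL (unknown), not true.
# sqlite3 surfaces that NULL to Python as None.
result = conn.execute("SELECT 4 != NULL").fetchone()[0]
print(result)  # None

# The same comparison in Python itself evaluates to True:
print(4 != None)  # True
```

So the SQL comparison does not "fail"; it simply produces the third truth value, unknown.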

asked Jun 21 '12 by Peter Olson

People also ask

Why NULL does not work in SQL?

NULL can be assigned, but comparing with ` = NULL `, ` <> NULL `, or any other comparison operator never yields true: an expression that compares against NULL always evaluates to NULL (unknown). To test for NULL, use ` IS NULL ` or ` IS NOT NULL ` instead.
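A short demonstration of why this matters in a `WHERE` clause, again sketched with SQLite via Python's `sqlite3` module (the table and column names here are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (NULL), (4)")

# WHERE x = NULL is never true, so it matches nothing -- not even the NULL row.
eq_count = conn.execute("SELECT COUNT(*) FROM t WHERE x = NULL").fetchone()[0]

# IS NULL is the dedicated predicate for the marker itself.
is_count = conn.execute("SELECT COUNT(*) FROM t WHERE x IS NULL").fetchone()[0]

print(eq_count, is_count)  # 0 1
```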

Why we use NULL?

In SQL, null or NULL is a special marker used to indicate that a data value does not exist in the database. It was introduced by E. F. Codd, the creator of the relational database model.

Why is it IS NULL instead of = NULL?

NULL means that the value is unknown or absent. Two NULL results cannot be compared with `=`, because one unknown value cannot be said to equal another unknown value; only ` IS NULL ` tests for the marker itself.

What is nullable in database?

In database management, a field that is allowed to hold no value is called nullable. Depending on the application, a missing value may also be represented as a null reference or null object.


1 Answer

The null in most programming languages is considered "known", while NULL in SQL is considered "unknown".

  • So X == null compares X with a known value and the result is known (true or false).
  • But X = NULL compares X with an unknown value and the result is unknown (i.e. NULL, again). As a consequence, we need a special operator IS [NOT] NULL to test for it.
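The unknown-vs-known distinction shows up even when comparing NULL with itself. A minimal sketch, again using SQLite through Python's `sqlite3` module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Comparing an unknown with anything -- even another unknown -- stays unknown:
print(conn.execute("SELECT NULL = NULL").fetchone()[0])   # None (i.e. NULL)

# IS NULL tests for the marker itself and returns a definite true (1):
print(conn.execute("SELECT NULL IS NULL").fetchone()[0])  # 1
```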

I'm guessing at least part of the motivation for such NULLs would be the behavior of foreign keys. When a child endpoint of a foreign key is NULL, it shouldn't match any parent, even if the parent is NULL (which is possible if parent is UNIQUE instead of primary key). Unfortunately, this brings many more gotchas than it solves and I personally think SQL should have gone the route of the "known" null and avoided this monkey business altogether.
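The foreign-key behavior described above can be observed directly. The sketch below uses SQLite via Python's `sqlite3` module (with foreign-key enforcement enabled, since SQLite leaves it off by default); the `parent`/`child` table names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE parent (id INTEGER UNIQUE)")
conn.execute("CREATE TABLE child (pid INTEGER REFERENCES parent(id))")

# A UNIQUE (non-primary-key) column may itself contain NULL:
conn.execute("INSERT INTO parent VALUES (NULL)")

# A NULL foreign key is accepted even though it matches no parent row:
conn.execute("INSERT INTO child VALUES (NULL)")

# And a join on the key finds no match between the two NULLs:
n = conn.execute(
    "SELECT COUNT(*) FROM child JOIN parent ON child.pid = parent.id"
).fetchone()[0]
print(n)  # 0
```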

Even E. F. Codd, inventor of the relational model, later indicated that the traditional NULL is not optimal. But for historical reasons, we are pretty much stuck with it.

answered Oct 12 '22 by Branko Dimitrijevic