
Native JSON support in MySQL 5.7: what are the pros and cons of the JSON data type in MySQL?

People also ask

Does MySQL 5.7 support JSON data type?

As of MySQL 5.7.8, MySQL supports a native JSON data type defined by RFC 7159 that enables efficient access to data in JSON (JavaScript Object Notation) documents.

What is the drawback of JSON in MySQL?

The drawback? If your JSON has multiple fields with the same key, only one of them is retained (the first one before MySQL 8.0.3, the last one from 8.0.3 on). The other drawback is that MySQL doesn't support indexing JSON columns directly, which means that searching through your JSON documents can result in a full table scan unless you index a generated column extracted from the JSON.

Can you store JSON in MySQL?

MySQL has supported the native JSON data type since version 5.7.8. The native JSON data type allows you to store JSON documents more efficiently than the JSON text format used in previous versions. MySQL stores JSON documents in an internal format that allows quick read access to document elements.

Can MySQL read JSON data?

MySQL supports a native JSON data type that supports automatic validation and optimized storage and access of the JSON documents. Although JSON data should preferably be stored in a NoSQL database such as MongoDB, you may still encounter tables with JSON data from time to time.


For example, suppose you search for rows by a field inside the JSON document:

SELECT * FROM t1
WHERE JSON_EXTRACT(data, '$.series') IN ...

Using a column inside an expression or function like this spoils any chance of the query using an index to help optimize the query. The query shown above is forced to do a table-scan.

The claim about "efficient access" is misleading. It means that after the query examines a row with a JSON document, it can extract a field without having to parse the text of the JSON syntax. But it still takes a table-scan to search for rows. In other words, the query must examine every row.

By analogy, if I'm searching a telephone book for people with first name "Bill", I still have to read every page in the phone book, even if the first names have been highlighted to make it slightly quicker to spot them.
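
You can see this in the query plan. A minimal sketch, assuming a table t1 with a JSON column data (the values are illustrative):

EXPLAIN SELECT * FROM t1
WHERE JSON_EXTRACT(data, '$.series') IN ('a', 'b', 'c');
-- type: ALL and key: NULL mean every row is examined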

MySQL 5.7 allows you to define a virtual column in the table, and then create an index on the virtual column.

ALTER TABLE t1
  -- the column needs a concrete type; a JSON-typed column cannot be indexed
  ADD COLUMN series VARCHAR(30) AS (JSON_UNQUOTE(JSON_EXTRACT(data, '$.series'))),
  ADD INDEX (series);

Then if you query the virtual column, it can use the index and avoid the table-scan.

SELECT * FROM t1
WHERE series IN ...
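
To verify that the index is actually used, a quick sketch (values illustrative):

EXPLAIN SELECT * FROM t1
WHERE series IN ('a', 'b');
-- key: series shows the optimizer using the index instead of a table-scan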

This is nice, but it kind of misses the point of using JSON. The attractive part of using JSON is that it allows you to add new attributes without having to do ALTER TABLE. But it turns out you have to define an extra (virtual) column anyway, if you want to search JSON fields with the help of an index.

But you don't have to define virtual columns and indexes for every field in the JSON document, only those you want to search or sort on. There could be other attributes in the JSON that you only need to extract in the select-list, like the following:

SELECT JSON_EXTRACT(data, '$.series') AS series FROM t1
WHERE <other conditions>

I would generally say that this is the best way to use JSON in MySQL. Only in the select-list.

When you reference columns in other clauses (JOIN, WHERE, GROUP BY, HAVING, ORDER BY), it's more efficient to use conventional columns, not fields within JSON documents.
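
A minimal sketch of that hybrid approach (table and column names are hypothetical): promote the attributes you filter, join, or sort on to real columns, and keep the rest in JSON:

CREATE TABLE events (
  id         INT PRIMARY KEY,
  series     VARCHAR(30) NOT NULL,  -- searched/sorted: a conventional, indexed column
  created_at DATETIME NOT NULL,     -- likewise
  attributes JSON,                  -- everything you only ever put in the select-list
  INDEX (series),
  INDEX (created_at)
);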

I presented a talk called How to Use JSON in MySQL Wrong at the Percona Live conference in April 2018. I'll update and repeat the talk at Oracle Code One in the fall.

There are other issues with JSON. For example, in my tests it required 2-3 times as much storage space for JSON documents compared to conventional columns storing the same data.

MySQL is promoting their new JSON capabilities aggressively, largely to dissuade people from migrating to MongoDB. But document-oriented data storage like MongoDB is fundamentally a non-relational way of organizing data. I'm not saying one is better than the other; it's just a different technique, suited to different types of queries.

You should choose to use JSON when JSON makes your queries more efficient.

Don't choose a technology just because it's new, or for the sake of fashion.


Edit: The virtual column implementation in MySQL is supposed to use the index if your WHERE clause uses exactly the same expression as the definition of the virtual column. That is, the following should use the index on the virtual column, since the virtual column is defined AS (JSON_UNQUOTE(JSON_EXTRACT(data, '$.series'))):

SELECT * FROM t1
WHERE JSON_UNQUOTE(JSON_EXTRACT(data, '$.series')) IN ...

Except that, when I tested this feature, I found it does NOT work for some reason if the expression is a JSON-extraction function. It works for other types of expressions, just not JSON functions. UPDATE: this reportedly works, finally, in MySQL 5.7.33.
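
One way to check this on your own version is to look at the plan for a query that uses the exact expression (the literal 'abc' is illustrative):

EXPLAIN SELECT * FROM t1
WHERE JSON_UNQUOTE(JSON_EXTRACT(data, '$.series')) = 'abc';
-- if key shows the index on the virtual column, the optimizer matched the
-- expression; if type is ALL, it fell back to a table-scan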


The following, from "MySQL 5.7 brings sexy back with JSON", sounds good to me:

Using the JSON Data Type in MySQL comes with two advantages over storing JSON strings in a text field:

  • Data validation. JSON documents will be automatically validated, and invalid documents will produce an error.
  • Improved internal storage format. The JSON data is converted to a format that allows quick read access to the data in a structured form. The server is able to look up subobjects or nested values by key or index, allowing added flexibility and performance.

...

Specialised flavours of NoSQL stores (Document DBs, Key-value stores and Graph DBs) are probably better options for their specific use cases, but the addition of this datatype might allow you to reduce complexity of your technology stack. The price is coupling to MySQL (or compatible) databases. But that is a non-issue for many users.

Note the language about document validation, as it is an important factor. I guess a battery of tests needs to be performed to compare the two approaches. Those two being:

  1. MySQL with JSON datatypes
  2. MySQL without
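
As a quick illustration of the validation point, a minimal sketch (table and values are hypothetical):

CREATE TABLE j (doc JSON);
INSERT INTO j VALUES ('{"valid": true}');  -- accepted
INSERT INTO j VALUES ('{oops');            -- rejected with an "Invalid JSON text" error
-- a TEXT column would accept both strings without complaint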

From what I'm seeing, the net has only shallow slideshares on the topic of MySQL / JSON / performance as of now.

Perhaps your post can be a hub for it. Or perhaps performance is an afterthought, not sure, and you are just excited not to have to create a bunch of tables.


I ran into this problem recently, and I can sum up my experience as follows:

  1. There is no single approach that solves every problem.
  2. You should use JSON appropriately.

One case:

I have a table named CustomField, and it has two columns: name and fields. name is a localized string, and its content should look like:

{
  "en":"this is English name",
  "zh":"this is Chinese name"
   ...(other languages)
}

And fields should look like this:

[
  {
    "field1":"value",
    "field2":"value"
    ...
  },
  {
    "field1":"value",
    "field2":"value"
    ...
  }
  ...
]

As you can see, both the name and the fields can be saved as JSON, and it works!
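
For example, reading the localized name and one field back out (the paths assume the shapes shown above):

SELECT
  JSON_UNQUOTE(JSON_EXTRACT(name, '$.en'))          AS name_en,
  JSON_UNQUOTE(JSON_EXTRACT(fields, '$[0].field1')) AS first_field1
FROM CustomField;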

However, what if I use name to search this table very frequently? What should I do? Use JSON_CONTAINS, JSON_EXTRACT, ...? Obviously, it's not a good idea to keep it as JSON anymore; we should move it to an independent table: CustomFieldName.
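
A sketch of what that independent table could look like (column names and types are hypothetical):

CREATE TABLE CustomFieldName (
  custom_field_id INT NOT NULL,
  lang            CHAR(2) NOT NULL,      -- e.g. 'en', 'zh'
  name            VARCHAR(255) NOT NULL,
  PRIMARY KEY (custom_field_id, lang),
  INDEX (name)                           -- name lookups can now use an index
);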

From the above case, I think you should keep these ideas in mind:

  1. Why does MySQL support JSON?
  2. Why do you want to use JSON? Does your business logic actually need it, or is there something else?
  3. Never be lazy.

Thanks


From my experience, the JSON implementation, at least in MySQL 5.7, is not very useful due to its poor performance. Well, it is not so bad for reading data and validation. However, JSON modification is 10-20 times slower with MySQL than with Python or PHP. Let's imagine a very simple JSON document:

{ "name": "value" }

Let's suppose we have to convert it to something like this:

{ "name": "value", "newName": "value" }

You can create a simple script in Python or PHP that selects all rows and updates them one by one. You are not forced to wrap it in one huge transaction, so other applications can use the table in parallel. Of course, you can also use one huge transaction if you want, to get a guarantee that MySQL performs "all or nothing", but then other applications will most probably not be able to use the database while the transaction executes.

I have a table with 40 million rows, and the Python script updates it in 3-4 hours.

Now that we have MySQL JSON, we don't need Python or PHP anymore; we can do something like this:

UPDATE `JsonTable`
SET `JsonColumn` = JSON_SET(`JsonColumn`, '$.newName', JSON_EXTRACT(`JsonColumn`, '$.name'));

It looks simple and excellent. However, it runs 10-20 times slower than the Python version, and it is a single transaction, so other applications cannot modify the table's data in parallel.

So, if we just want to duplicate a JSON key in a table with 40 million rows, the table cannot be used at all for 30-40 hours. That makes no sense.
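
If the migration does have to run in SQL, one mitigation, sketched here assuming an integer primary key id, is to update in primary-key batches so other sessions can get in between them:

UPDATE `JsonTable`
SET `JsonColumn` = JSON_SET(`JsonColumn`, '$.newName',
                            JSON_EXTRACT(`JsonColumn`, '$.name'))
WHERE `id` BETWEEN 1 AND 100000;  -- repeat with the next range until done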

About reading data: in my experience, direct access to a JSON field via JSON_EXTRACT in WHERE is also extremely slow (much slower than TEXT with LIKE on a non-indexed column). Virtual generated columns perform much faster; however, if we know our data structure beforehand, we don't need JSON, we can use traditional columns instead. And when we use JSON where it is really useful, i.e. when the data structure is unknown or changes often (for example, custom plugin settings), creating virtual columns on a regular basis for every possible new field doesn't look like a good idea.

Python and PHP handle JSON validation like a charm, so it is questionable whether we need JSON validation on the MySQL side at all. Why not also validate XML, Microsoft Office documents, or check spelling? ;)


I strongly disagree with some of the things said in the other answers (which, to be fair, were written a few years ago).

We started adopting JSON fields very carefully, with a healthy skepticism. Over time we've been using them more and more.

This generally describes the situation we are in:

  • Like 99% of applications out there, we are not doing things at a massive scale. We work with many different applications and databases, the majority of these are capable of running on modest hardware.
  • We have processes and know-how in place to make changes if performance does become a problem.
  • We have a general idea of which tables are going to be large and think carefully about how we optimize queries for them.
  • We also know in which cases this is not really needed.
  • We're pretty good at data validation and static typing at the application layer.

Lastly,

When we use JSON for storing complex data, that data is never referenced directly by other tables. We also tend never to need it in WHERE clauses on hot paths.

So with all this in mind, using a small JSON field instead of one or more extra tables vastly reduces the complexity of our queries and data model. Removing this complexity makes it easier to write certain queries, makes our code simpler, and just generally saves time.

Complexity and performance need to be carefully balanced. JSON fields should not be applied blindly, but where they fit, they're fantastic.

'JSON fields don't perform well' is a valid reason to not use JSON fields, if you are at a place where that performance difference matters.

One specific example is that we have a table where we store settings for video transcoding. The settings table has 1 'profile' per row, and the settings themselves have a maximum nesting level of 4 (arrays and objects).

Despite this being a large database overall, there are only a few hundred of these records. Splitting this into 5 tables would yield no benefit and plenty of pain.
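
For illustration, a sketch of such a table (names are hypothetical):

CREATE TABLE transcoding_profile (
  id       INT AUTO_INCREMENT PRIMARY KEY,
  name     VARCHAR(100) NOT NULL,
  settings JSON NOT NULL  -- nested arrays/objects, up to 4 levels deep
);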

This is an extreme example, but we have plenty of others (with more rows) where the decision to use JSON fields is a few years in the past, and hasn't yet caused an issue.

Last point: it is now possible to index JSON fields directly, via functional indexes (MySQL 8.0.13+) and multi-valued indexes on JSON arrays (MySQL 8.0.17+).
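
For example, a multi-valued index on a JSON array, assuming MySQL 8.0.17+ (table and path are hypothetical):

CREATE TABLE tagged (
  id   INT PRIMARY KEY,
  data JSON,
  INDEX idx_tags ((CAST(data->'$.tags' AS UNSIGNED ARRAY)))
);
SELECT * FROM tagged WHERE 42 MEMBER OF (data->'$.tags');
-- the MEMBER OF predicate can use idx_tags instead of scanning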