After recreating a BigQuery table, streaming inserts are not working?

I just came across an interesting issue with BigQuery.

Essentially, there is a batch job that recreates a table in BigQuery - to delete the data - and then immediately starts to feed in a new set through the streaming interface.

It worked like this for quite a while - successfully.

Lately it started to lose data.

A small test case has confirmed the situation – if the data feed starts immediately after recreating (successfully!) the table, parts of the dataset will be lost. I.e. out of 4000 records being fed in, only 2100 - 3500 would make it through.

I suspect that table creation might be returning success before the table operations (deletion and creation) have been fully propagated throughout the environment, so the first parts of the dataset are being fed to the old replicas of the table (speculating here).

To confirm this, I put a timeout between the table creation and the start of the data feed. Indeed, if the timeout is less than 120 seconds, parts of the dataset are lost.

If it is more than 120 seconds, it seems to work OK.

There used to be no requirement for this timeout. We are using US BigQuery. Am I missing something obvious here?
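For reference, a minimal sketch of the test case in Python with the google-cloud-bigquery client (the project, dataset, table name and schema are made up, and the 120-second sleep is the empirical workaround described above):

```python
import time
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.my_dataset.my_table"  # hypothetical table

schema = [
    bigquery.SchemaField("id", "INTEGER"),
    bigquery.SchemaField("payload", "STRING"),
]

# Recreate the table: delete it (to drop the old data) and create it again.
client.delete_table(table_id, not_found_ok=True)
client.create_table(bigquery.Table(table_id, schema=schema))

# Empirical workaround: without a pause of roughly 120 seconds here,
# part of the streamed dataset is silently lost.
time.sleep(120)

# Stream in the new dataset (in practice you would batch this into
# smaller requests).
rows = [{"id": i, "payload": "row-%d" % i} for i in range(4000)]
errors = client.insert_rows_json(table_id, rows)
assert not errors, errors
```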

EDIT: From the comment provided by Sean Chen below and a few other sources - the behaviour is expected due to the way tables are cached and the internal table id is propagated throughout the system. BigQuery has been built for append-only operations. Rewrites are not something that can easily be accommodated into the design and should be avoided.

Evgeny Minkevich asked Apr 05 '16


1 Answer

This is more or less expected due to the way that BigQuery streaming servers cache the table generation id (an internal name for the table).

Can you provide more information about the use case? It seems strange to delete the table and then write to the same table again.

One workaround could be to truncate the table instead of deleting it. You can do this by running SELECT * FROM <table> LIMIT 0 with the table set as the destination table (you might want to use allow_large_results = true and disable flattening, which helps if you have nested data), using write_disposition=WRITE_TRUNCATE. This will empty out the table but preserve the schema. Any rows streamed afterwards will then be applied to the same table.
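A rough sketch of that workaround with the Python google-cloud-bigquery client (the table name is made up; allow_large_results and flatten_results only take effect for legacy SQL queries):

```python
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.my_dataset.my_table"  # hypothetical table

# Run a zero-row query and write the empty result back over the same
# table with WRITE_TRUNCATE: the table is emptied but never deleted,
# so its schema and internal id are preserved.
job_config = bigquery.QueryJobConfig(
    destination=table_id,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    allow_large_results=True,   # relevant for legacy SQL
    flatten_results=False,      # keep nested and repeated fields intact
)
client.query(
    "SELECT * FROM `my-project.my_dataset.my_table` LIMIT 0",
    job_config=job_config,
).result()  # wait for the truncation to finish

# Rows streamed afterwards land in the same, now empty, table.
errors = client.insert_rows_json(table_id, [{"id": 1, "payload": "row-1"}])
assert not errors, errors
```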

Jordan Tigani answered Nov 27 '22