
BigQueryIO Read vs fromQuery

Say in a Dataflow/Apache Beam program, I am trying to read a table whose data is growing exponentially. I want to improve the performance of the read. Which of these should I use?

BigQueryIO.Read.from("projectid:dataset.tablename")

or

BigQueryIO.Read.fromQuery("SELECT A, B FROM [projectid:dataset.tablename]")

Will the performance of my read improve if I select only the required columns (as in the second option), rather than reading the entire table?

I am aware that selecting fewer columns reduces cost, but I would like to know how it affects read performance.

Roshan Fernando asked Dec 11 '22

1 Answer

You're right that selecting only the columns you need, instead of referencing all of them in the query, will reduce cost. Also, when you use from() instead of fromQuery(), you don't pay for any table scans in BigQuery at all, since no query is run. I'm not sure if you were aware of that or not.

Under the hood, whenever Dataflow reads from BigQuery, it actually calls BigQuery's export API and instructs BigQuery to dump the table(s) to GCS as sharded files. Dataflow then reads those files in parallel into your pipeline. It does not read "directly" from BigQuery.
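As a rough sketch of what that export-based read looks like in the Beam Java SDK (the table name is from the question; note that newer SDK versions use BigQueryIO.readTableRows() rather than the older BigQueryIO.Read style, and expose the export path explicitly as Method.EXPORT):

```java
// Sketch, assuming a recent Beam Java SDK: Method.EXPORT is the classic
// path described above -- BigQuery exports sharded files to GCS, and the
// Dataflow workers read those files in parallel.
PCollection<TableRow> rows = pipeline.apply(
    BigQueryIO.readTableRows()
        .from("projectid:dataset.tablename")
        .withMethod(BigQueryIO.TypedRead.Method.EXPORT));
```

This fragment needs a Beam pipeline and GCP credentials to actually run; it is only meant to show where the export step sits in the API.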

As such, yes, this should improve performance, because less data needs to be exported to GCS under the hood and read into your pipeline, i.e. fewer columns = less data.
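To make the column pruning concrete, here is a sketch of two ways to read only columns A and B (names taken from the question; the second option assumes a newer Beam SDK where the Storage Read API is available as Method.DIRECT_READ):

```java
// Option 1: prune columns with a query (standard SQL table syntax).
// You pay for the query scan, but export and read less data.
BigQueryIO.readTableRows()
    .fromQuery("SELECT A, B FROM `projectid.dataset.tablename`")
    .usingStandardSql();

// Option 2 (newer SDKs): prune columns without running a query at all,
// by reading directly via the BigQuery Storage Read API.
BigQueryIO.readTableRows()
    .from("projectid:dataset.tablename")
    .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
    .withSelectedFields(Arrays.asList("A", "B"));
```

Option 2 avoids both the query cost and the intermediate GCS export, which can matter on a table that keeps growing.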

However, I'd also consider using partitioned tables, and then even think about clustering them too. Also, use WHERE clauses to reduce the amount of data to be exported and read even further.

Graham Polley answered Feb 17 '23