I am trying to run the query "select * from tablename", but it throws the error "Error: Response too large to return".
I was able to process other tables containing TBs of data, but I get this error for a table that is only 294 MB.
I can run the query if I select only some of the columns, but not all of them. The table has 26 columns in total, yet I can only select 16 of them without the error: "select column1,column2,column3,....column16 from tablename".
Is there any relation between the number of columns and the size of the result?
Please help me fix this issue.
BigQuery table details:
Total records: 683,038
Table size: 294 MB
Number of columns: 26
Set allowLargeResults to true in your job configuration. When allowLargeResults is set, you must also specify a destination table.
If querying via the API, note that destinationTable is an object with projectId, datasetId, and tableId fields:
"configuration":
{
  "query":
  {
    "allowLargeResults": true,
    "query": "select uid from [project:dataset.table]",
    "destinationTable": {
      "projectId": "project",
      "datasetId": "dataset",
      "tableId": "table"
    }
  }
}
If using the bq command line tool,
$ bq query --allow_large_results --destination_table "dataset.table" "select uid from [project:dataset.table]"
If using the browser tool,
- Click 'Enable Options'
- Select 'Allow Large Results'
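As a minimal sketch, the API configuration above can be built and submitted from Python. The project, dataset, and table names below are placeholders, and the submission line assumes the google-api-python-client library:

```python
# Sketch: building the jobs.insert request body for a query whose
# results exceed the normal response size limit. All resource names
# here are placeholders, not real projects or tables.

def build_job_body(sql, project_id, dataset_id, table_id):
    """Return a BigQuery v2 jobs.insert body with allowLargeResults set."""
    return {
        "configuration": {
            "query": {
                "query": sql,
                "allowLargeResults": True,   # required for large results
                "destinationTable": {        # required when allowLargeResults is set
                    "projectId": project_id,
                    "datasetId": dataset_id,
                    "tableId": table_id,
                },
            }
        }
    }

body = build_job_body("select uid from [project:dataset.table]",
                      "projectXYZ", "datasetXYZ", "tableXYZ")

# With an authorized google-api-python-client service object, this body
# would then be submitted as:
#   service.jobs().insert(projectId="projectXYZ", body=body).execute()
```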
jobData = {
    'configuration': {
        'query': {
            'query': sql,
            'allowLargeResults': True,  # boolean, not the string 'true'
            'destinationTable': {
                'projectId': 'projectXYZ',
                'datasetId': 'datasetXYZ',
                'tableId': 'tableXYZ',
            }
        }
    }
}
You can use 'writeDisposition' to specify whether to overwrite the destination table:
'writeDisposition': 'WRITE_TRUNCATE'  # If the table already exists,
                                      # BigQuery overwrites the table data.
'writeDisposition': 'WRITE_APPEND'    # If the table already exists,
                                      # BigQuery appends the data to the table.