guillaume blaquiere
1 min read · Sep 22, 2023

Your questions are great and very technical.

Firstly, using Cloud Run (especially Cloud Run jobs) to perform the extract and the load in a single step is fine, as long as the workload fits within Cloud Run's constraints (especially memory and timeout).
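For the Cloud Run jobs option, a minimal deployment sketch. The job name, image path, and region are placeholders I made up; the flags are the ones that set the memory and timeout constraints mentioned above:

```shell
# Hypothetical job name, image, and region; adjust to your project.
# --memory and --task-timeout are the Cloud Run constraints to watch.
gcloud run jobs create extract-and-load \
  --image=europe-docker.pkg.dev/my-project/my-repo/etl:latest \
  --region=europe-west1 \
  --memory=2Gi \
  --task-timeout=30m \
  --max-retries=1

# Run the job on demand (or trigger it with Cloud Scheduler).
gcloud run jobs execute extract-and-load --region=europe-west1
```

Unlike a Cloud Run service, a job has no request deadline, so the task timeout is the only clock you have to fit the extract and the load into.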

Secondly, the Storage Read API is a great solution to go faster and to avoid running a query in BigQuery. However, it's a bit more technical, and you can't perform aggregations while you are reading the data (views and materialized views are also not supported). But if you have the courage to implement it, you will skip the BigQuery warmup (the slot reservation needed for your export) and go faster.
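If you want to try the Storage Read API, here is a hedged sketch. It assumes the google-cloud-bigquery-storage package and default application credentials; the project, dataset, and table names are placeholders:

```python
def table_path(project: str, dataset: str, table: str) -> str:
    """Build the fully qualified table resource name the Read API expects."""
    return f"projects/{project}/datasets/{dataset}/tables/{table}"


def read_table(project: str, dataset: str, table: str):
    """Stream raw row blocks from a table, bypassing the query engine."""
    # Imported here so the pure helper above stays usable without the package.
    from google.cloud.bigquery_storage import BigQueryReadClient, types

    client = BigQueryReadClient()
    session = client.create_read_session(
        parent=f"projects/{project}",
        read_session=types.ReadSession(
            table=table_path(project, dataset, table),
            data_format=types.DataFormat.AVRO,
        ),
        max_stream_count=1,  # raise this to read in parallel across workers
    )
    # Each response is a block of serialized rows: no server-side
    # aggregation, no views -- exactly the limitation mentioned above.
    for response in client.read_rows(session.streams[0].name):
        yield response
```

The read session is created instantly, without slot reservation, which is where the warmup saving comes from.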

Finally, a last option that can be great in some situations: did you try BigQuery BI Engine? With all your tables and processing, I'm sure you have a slot reservation, and you get free BI Engine capacity with it. Have a try on a subset of data. If it fits your requirements, increase the size and pin your tables in the cache to speed up the queries. With any luck, it will let you avoid using Cloud SQL to serve data at low latency!
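To check whether BI Engine actually accelerated a query during your test on a subset of data, a hedged sketch. It assumes a recent google-cloud-bigquery package (the `bi_engine_stats` attribute on a query job is an assumption about the client version) and an existing BI Engine reservation in the dataset's location:

```python
def acceleration_mode(stats) -> str:
    """Map the (possibly missing) BI Engine stats to a readable label."""
    # stats is None when BI Engine was not attempted at all.
    return "not attempted" if stats is None else str(stats.mode)


def run_and_check(sql: str) -> str:
    """Run a query and report whether BI Engine accelerated it."""
    # Imported here so the pure helper above stays usable without the package.
    from google.cloud import bigquery

    client = bigquery.Client()
    job = client.query(sql)
    job.result()  # wait for completion
    # Expected modes include FULL, PARTIAL, and DISABLED.
    return acceleration_mode(job.bi_engine_stats)
```

If the mode comes back PARTIAL or DISABLED, that is the signal to grow the reservation or simplify the query before giving up on the low-latency serving idea.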


Written by guillaume blaquiere

GDE cloud platform, Group Data Architect @Carrefour, speaker, writer and polyglot developer, Google Cloud platform 3x certified, serverless addict and Go fan.
