Stack Overflow Asked by Counter10000 on December 17, 2020
Is there a way to drop a BigQuery table from Spark by using Scala?
I have only found ways to read and write a BigQuery table from Spark using Scala, following the example here:
https://cloud.google.com/dataproc/docs/tutorials/bigquery-connector-spark-example
Can someone provide an example of dropping a BigQuery table? For example, in the BigQuery console I can drop a table with the statement "drop table if exists projectid1.dataset1.table1".
Please note that my purpose is NOT to overwrite the existing table. I simply want to remove it. Please help. Thanks.
Please refer to the BigQuery API:
// The spark-bigquery-connector shades the BigQuery client library under the
// "repackaged" namespace, so it can be used directly from the connector jar.
import com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.{BigQueryOptions, TableId}

// Build a BigQuery client using the default credentials and project
val bq = BigQueryOptions.getDefaultInstance().getService()

// getTable returns null if the table does not exist
val table = bq.getTable(TableId.of("projectid1", "dataset1", "table1"))
if (table != null) {
  table.delete()
}
Notice, this should work out of the box in Dataproc. On other clusters you will need to properly set the credentials.
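For running outside Dataproc, a minimal sketch of supplying credentials explicitly might look like the following. The service-account key path and project id are assumptions for illustration, and the exact repackaged import paths can vary with the connector version:

```scala
import java.io.FileInputStream
// Shaded auth and client classes from the spark-bigquery-connector jar
import com.google.cloud.spark.bigquery.repackaged.com.google.auth.oauth2.ServiceAccountCredentials
import com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.{BigQueryOptions, TableId}

// Hypothetical path to a service-account key with BigQuery permissions
val credentials = ServiceAccountCredentials.fromStream(
  new FileInputStream("/path/to/service-account-key.json"))

val bq = BigQueryOptions.newBuilder()
  .setCredentials(credentials)
  .setProjectId("projectid1")
  .build()
  .getService()

// delete() returns true if the table was deleted, false if it did not exist,
// so no separate existence check is needed
bq.delete(TableId.of("projectid1", "dataset1", "table1"))
```

Alternatively, setting the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the key file path lets `BigQueryOptions.getDefaultInstance()` pick up the credentials without code changes.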
Correct answer by David Rabinowitz on December 17, 2020