Prism
v0.3.0

BigQueryConnector



Configuration

The required BigQueryConnector arguments are:

  • id: a unique identifier for the connector. Tasks use this ID to retrieve the connection via CurrentRun.conn.

  • creds: the path to the Google authentication credentials file. The default is the value of the GOOGLE_APPLICATION_CREDENTIALS environment variable.

bigquery_connector = BigQueryConnector(
    id="bigquery_connector_id",
    creds="/example_path/creds.json"
)

Under the hood, Prism interacts with the BigQuery Python API (google-cloud-bigquery) to create the SQL engine. For more information, see the Google BigQuery documentation.
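Conceptually, the connector's behavior maps onto the underlying google-cloud-bigquery client roughly as follows. This is an illustrative sketch, not Prism's actual implementation, and it is not runnable without a valid service-account JSON file and a GCP project:

# Sketch of the underlying google-cloud-bigquery calls (assumes valid
# credentials exist at the given path; not runnable without GCP access).
from google.cloud import bigquery

# Roughly equivalent to BigQueryConnector(creds="/example_path/creds.json")
client = bigquery.Client.from_service_account_json("/example_path/creds.json")

# Roughly equivalent to conn.execute_sql(sql="SELECT * FROM table"):
# client.query() submits a QueryJob, and .result() waits for completion
# and returns an iterator of Row objects.
rows = list(client.query("SELECT * FROM table").result())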

execute_sql

You can run queries against the BigQuery engine using the execute_sql function:

from prism.decorators import task
from prism.runtime import CurrentRun

@task()
def bigquery_task(self):
    conn = CurrentRun.conn("bigquery_connector_id")
    data = conn.execute_sql(
        sql="SELECT * FROM table"
    )
    return data

Note that when return_type = None, the result will be a list of Row objects containing the query data.
