BigQuery

Overview

Google BigQuery is a fully managed, serverless data warehouse that enables scalable analysis over large datasets. In Zenskar, you can use BigQuery as a data source for aggregates and dashboards.

Zenskar connects to BigQuery using service account credentials that grant secure access to your project and dataset.

Prerequisites

Enable the BigQuery API in your Google Cloud Platform (GCP) project.

The BigQuery API must be enabled before you connect BigQuery to Zenskar. Follow GCP's guide to enabling and disabling APIs, or enable the BigQuery API directly from your project.
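If you prefer the command line, the API can also be enabled with the gcloud CLI. A minimal sketch, assuming gcloud is installed and authenticated, and with a placeholder project ID:

# Point gcloud at the project that will host your BigQuery datasets (placeholder ID).
gcloud config set project my-test-project-84030

# Enable the BigQuery API for that project.
gcloud services enable bigquery.googleapis.com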

Create a Service Account.

Create a Service Account in Google Cloud Project

A service account is a special kind of account typically used by an application or compute workload, such as a Compute Engine instance, rather than a person. A service account is identified by its email address, which is unique to the account.

Create a Service Account with BigQuery User and BigQuery Data Editor roles.

  • Follow GCP's guide to creating a Service Account. Once you've created the Service Account, keep its ID handy, as you will need it when granting role-based permissions. Service Account IDs typically take the form <account-name>@<project-name>.iam.gserviceaccount.com.

  • Add the Service Account as a member of your GCP project with the BigQuery User role. To do this, follow the instructions for granting access in the Google documentation. The email address of the member you are adding is the Service Account ID you just created.

  • At this point you should have a Service Account with the BigQuery User project-level permission. Similarly, assign the BigQuery Data Editor role to the Service Account.

Zenskar needs credentials for a Service Account with the BigQuery User and BigQuery Data Editor roles. These roles grant Zenskar the following permissions:

  • Run BigQuery jobs
  • Write to BigQuery Datasets
  • Read table metadata
📖

We highly recommend that you create a Service Account exclusive to Zenskar for ease of permission management and auditing. However, you can use a pre-existing Service Account that has the correct permissions.
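As a command-line alternative to the console steps above, the following gcloud sketch creates a dedicated Service Account and grants it the two required roles; the account name and project ID are placeholders:

# Create a Service Account dedicated to Zenskar (the name is illustrative).
gcloud iam service-accounts create zenskar-bigquery \
  --display-name="Zenskar BigQuery connector"

# Grant the BigQuery User role at the project level.
gcloud projects add-iam-policy-binding my-test-project-84030 \
  --member="serviceAccount:zenskar-bigquery@my-test-project-84030.iam.gserviceaccount.com" \
  --role="roles/bigquery.user"

# Grant the BigQuery Data Editor role at the project level.
gcloud projects add-iam-policy-binding my-test-project-84030 \
  --member="serviceAccount:zenskar-bigquery@my-test-project-84030.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataEditor"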

Generate a Service Account Key in JSON format.

Create and manage a Service Account Key in Google Cloud Project

  • Service Account keys are used to authenticate as Google Service Accounts. Zenskar requires a Service Account key to leverage the role-based permissions you granted to the Service Account in the previous section.
  • Follow the Google documentation to create and manage a key. Currently, Zenskar supports only JSON keys. Ensure that:
    • You create the key in JSON format.
    • You download the key immediately. Google does not let you view the contents of the key again once you navigate away.
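If you are working from the command line instead of the console, a JSON key can also be generated with gcloud. A sketch, assuming the Service Account created earlier (the email and file name are placeholders):

# Create a JSON-format key for the Service Account and save it locally.
gcloud iam service-accounts keys create zenskar-key.json \
  --iam-account=zenskar-bigquery@my-test-project-84030.iam.gserviceaccount.com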
⚠️

For security, delete the downloaded key file from your machine once the setup is complete.


Set up the BigQuery data source in Zenskar dashboard

  1. Log into your Zenskar dashboard.
  2. In the left sidebar, click Usage > Data Sources.
  3. In the top-right corner, click + ADD DATA SOURCE.
  4. On the Add New Data Source page, select BigQuery from the Source Type drop-down menu.
  5. Configure the BigQuery connector:

General details:

| Field       | Description                               | Required |
| ----------- | ----------------------------------------- | -------- |
| Source Name | Enter a unique name for this data source. | Yes      |
| Source Type | Select BigQuery from the dropdown menu.   | Yes      |

Connector configuration:

| Field            | Description                                                                | Required |
| ---------------- | -------------------------------------------------------------------------- | -------- |
| Project ID       | The ID of your Google Cloud project that contains the BigQuery datasets.   | Yes      |
| Dataset ID       | The dataset you want to query.                                             | Yes      |
| Location         | The geographic location of your BigQuery dataset (for example, US or EU).  | Yes      |
| Credentials JSON | Paste the contents of your Service Account Key (JSON format).              | Yes      |

Data source access mode (read-only):

Zenskar queries data directly from BigQuery without syncing it to Zenskar's data infrastructure. This mode is:

  • Ideal for large databases (more than 30 GB)
  • Suitable for real-time data access
  • Free of sync waiting time

You will be able to browse and query tables from this BigQuery source in the Data Navigator while creating aggregates.

  6. Click the SAVE SOURCE button.

Create a BigQuery data source connector via API

Request example

curl --location 'https://api.zenskar.com/datasources' \
  -H 'Authorization: Bearer <your_token>' \
  -H 'apiversion: 20240301' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "abc",
    "connector_type": "BigQuery",
    "destination": "BigQuery",
    "status": "active",
    "connector_config": {
      "project_id": "my-test-project-84030",
      "dataset_id": "abc123",
      "location": "US",
      "credentials_json": "{ \"type\": \"service_account\", \"project_id\": \"my-test-project-84030\", \"private_key_id\": \"0533306f55057\", \"private_key\": \"-----BEGIN PRIVATE KEY-----\\t/YgHKB4tH\\n-----END PRIVATE KEY-----\\n\", \"client_email\": \"[email protected]\", \"token_uri\": \"https://oauth2.googleapis.com/token\" }"
    },
    "source_definition_id": "bfd1ddf8-ae8a-4620-b1d7-55597d2ba08c",
    "remote_conn": true
  }'

BigQuery-specific connector configuration

💡

Note

The connector_config object is the only part of the request that differs across connector types (such as BigQuery, Snowflake, or Redshift). The create data-source connector API reference provides a generic overview, while this document explains the BigQuery-specific structure of connector_config.

The connector_config object contains configuration fields specific to the BigQuery data source.

| Field            | Description                                                                                                                                                  | Required |
| ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------- |
| project_id       | The unique ID of your Google Cloud project that contains the BigQuery datasets.                                                                              | Yes      |
| dataset_id       | The dataset name (for example, abc123). Do not include the project ID prefix (such as my-test-project-84030.abc123); the project ID is already supplied separately. | Yes      |
| location         | The geographic location of your BigQuery dataset (for example, US or EU).                                                                                     | Yes      |
| credentials_json | The complete contents of your Service Account key, formatted as a stringified JSON (escaped quotes and newlines).                                             | Yes      |

💡

Important

  • Some users may specify dataset_id in the format <project_id>.<dataset_id> (for example, my-test-project-84030.abc123), which is valid in BigQuery but not supported in Zenskar. Pass only the dataset name (abc123), as project_id is already handled separately.
  • Ensure that the Service Account used has the BigQuery User and BigQuery Data Editor roles.
  • Zenskar supports only JSON-format Service Account keys.
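Escaping the key by hand is error-prone. The sketch below uses jq to build the connector_config object with the key embedded as a stringified JSON value; the file name zenskar-key.json and the project, dataset, and location values are placeholders taken from the request example above:

# Compact the key file and embed it as an escaped string in connector_config.
# jq handles the quote and newline escaping for you.
jq -n --arg creds "$(jq -c . zenskar-key.json)" \
  '{
     project_id: "my-test-project-84030",
     dataset_id: "abc123",
     location: "US",
     credentials_json: $creds
   }'

Paste the resulting object into the connector_config field of the request body.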


Data-type mapping

The following table shows how BigQuery data types map to types in Zenskar:

| BigQuery Type | Resulting Type | Notes              |
| ------------- | -------------- | ------------------ |
| BOOL          | Boolean        |                    |
| INT64         | Number         |                    |
| FLOAT64       | Number         |                    |
| NUMERIC       | Number         |                    |
| BIGNUMERIC    | Number         |                    |
| STRING        | String         |                    |
| BYTES         | String         |                    |
| DATE          | String         | In ISO 8601 format |
| DATETIME      | String         | In ISO 8601 format |
| TIMESTAMP     | String         | In ISO 8601 format |
| TIME          | String         |                    |
| ARRAY         | Array          |                    |
| STRUCT        | Object         |                    |
| GEOGRAPHY     | String         |                    |

Notes

  • Zenskar does not modify your existing BigQuery data or schema.
  • Queries are executed using your service account’s permissions.
  • For best performance, use partitioned or clustered tables when working with large datasets.
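For example, a usage table partitioned by event date and clustered by customer can be created with the bq CLI. This is a generic BigQuery sketch; the dataset, table, and column names are made up for illustration:

# Partition by event date and cluster by customer so that date-bounded
# aggregate queries scan only the partitions they need.
bq query --use_legacy_sql=false '
CREATE TABLE abc123.usage_events (
  event_id    STRING,
  customer_id STRING,
  quantity    NUMERIC,
  event_time  TIMESTAMP
)
PARTITION BY DATE(event_time)
CLUSTER BY customer_id'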