
Scheduled Cloud Asset Inventory Export to BigQuery

This blueprint shows how to leverage the Cloud Asset Inventory Export to BigQuery feature to keep track of your project-wide assets over time, storing the information in BigQuery.

The data stored in BigQuery can then be used for different purposes:

  • dashboarding
  • analysis

The blueprint uses export resources at the project level for ease of testing; in actual use a few changes are needed to operate at the resource hierarchy level:

  • the export should be set at the folder or organization level
  • the roles/cloudasset.viewer role should be granted to the service account at the folder or organization level

The resources created in this blueprint are shown in the high-level diagram below:

Prerequisites

Ensure that you grant your account one of the following roles on your project, folder, or organization:

  • Cloud Asset Viewer role (roles/cloudasset.viewer)
  • Owner primitive role (roles/owner)

Running the blueprint

Clone this repository, specify your variables in a terraform.tfvars file, and then go through the following steps to create resources:

  • terraform init
  • terraform apply
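For example, a minimal terraform.tfvars could look like the following sketch. All values are placeholders, and the target_node format shown is an assumption for a project-level export; the cai_config fields match the variables documented below:

```hcl
# Placeholder values: adjust to your environment.
project_id = "my-project"
cai_config = {
  bq_dataset         = "my_dataset"
  bq_table           = "my_table"
  bq_table_overwrite = "true"
  # Assumed project-level target; use a folder or organization
  # resource name to export at the hierarchy level.
  target_node = "projects/my-project"
}
```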

Once done testing, you can clean up resources by running terraform destroy. To persist state, check out the backend.tf.sample file.

Testing the blueprint

Once resources are created, you can run queries on the data exported to BigQuery. Here you can find some examples of queries you can run.
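As a sketch, a simple query could count exported assets by type. The dataset and table names below are placeholders taken from your cai_config; the asset_type column is part of the standard Cloud Asset Inventory export schema:

```sql
-- Count exported assets by type; replace the project, dataset
-- and table names with your own.
SELECT
  asset_type,
  COUNT(*) AS asset_count
FROM `my-project.my_dataset.my_table`
GROUP BY asset_type
ORDER BY asset_count DESC
```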

You can also create a dashboard by connecting Looker Studio, or any other BI tool of your choice, to your BigQuery dataset.

File exporter for JSON, CSV (optional).


Regular file-based exports of data from Cloud Asset Inventory may be useful, e.g. for scale-out network dependency discovery tools like Planet Exporter, or to update legacy workload tracking or configuration management systems. BigQuery supports multiple export formats, and the provided Cloud Function can upload the resulting objects to a Cloud Storage bucket. Specify job.DestinationFormat as defined in the documentation, e.g. NEWLINE_DELIMITED_JSON.
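As a minimal sketch (not the blueprint's actual function code), the destination format maps to the configuration.extract.destinationFormat field of a BigQuery extract job; the helper below only builds that configuration fragment:

```python
# Sketch: building the `configuration.extract` fragment of a BigQuery
# job request. The format values below are the ones the BigQuery export
# feature accepts; the helper itself is illustrative, not part of the
# blueprint.
SUPPORTED_FORMATS = {"CSV", "NEWLINE_DELIMITED_JSON", "AVRO", "PARQUET"}

def extract_job_config(destination_format: str, field_delimiter: str = ",") -> dict:
    """Return the extract-job configuration for the chosen format."""
    if destination_format not in SUPPORTED_FORMATS:
        raise ValueError(f"unsupported format: {destination_format}")
    config = {"destinationFormat": destination_format}
    if destination_format == "CSV":
        # A field delimiter only applies to CSV exports.
        config["fieldDelimiter"] = field_delimiter
    return config
```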

It helps to create a custom scheduled query from the CAI export tables, and to write the results out to a dedicated table (with overwrites). Define the query's output columns to comply with the field requirements of downstream systems, and schedule its execution right after the CAI export into BigQuery for freshness. See the sample queries.
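A sketch of such a scheduled query might select only the columns downstream systems need; the column names follow the standard CAI export schema, the table name is a placeholder, and the destination table and overwrite behavior live in the scheduled query settings rather than in the SQL:

```sql
-- Sketch of a scheduled query feeding the file exporter. The destination
-- table and overwrite (WRITE_TRUNCATE) disposition are configured in the
-- scheduled query itself, not in this statement.
SELECT
  name,
  asset_type,
  update_time
FROM `my-project.my_dataset.my_table`
```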

This is an optional part, created if the cai_gcs_export variable is set to true. The high-level diagram extends to the following:

Variables

| name | description | type | required | default |
|---|---|---|:---:|:---:|
| cai_config | Cloud Asset Inventory export config. | object({…}) | ✓ | |
| project_id | Project id that references existing project. | string | ✓ | |
| billing_account | Billing account id used as default for new projects. | string | | null |
| bundle_path | Path used to write the intermediate Cloud Function code bundle. | string | | "./bundle.zip" |
| bundle_path_cffile | Path used to write the intermediate Cloud Function code bundle. | string | | "./bundle_cffile.zip" |
| cai_gcs_export | Enable optional part to export tables to GCS. | bool | | false |
| file_config | Optional BQ table as a file export function config. | object({…}) | | {…} |
| location | App Engine location used in the example. | string | | "europe-west" |
| name | Arbitrary string used to name created resources. | string | | "asset-inventory" |
| name_cffile | Arbitrary string used to name created resources. | string | | "cffile-exporter" |
| project_create | Create project instead of using an existing one. | bool | | true |
| region | Compute region used in the example. | string | | "europe-west1" |
| root_node | The resource name of the parent folder or organization for project creation, in 'folders/folder_id' or 'organizations/org_id' format. | string | | null |

Outputs

| name | description | sensitive |
|---|---|:---:|
| bq-dataset | BigQuery instance details. | |
| cloud-function | Cloud Function instance details. | |

Test

```hcl
module "test" {
  source          = "./fabric/blueprints/cloud-operations/scheduled-asset-inventory-export-bq"
  billing_account = "1234-ABCD-1234"
  cai_config = {
    bq_dataset         = "my_dataset"
    bq_table           = "my_table"
    bq_table_overwrite = "true"
    target_node        = "organizations/1234567890"
  }
  cai_gcs_export = true
  file_config = {
    bucket     = "my-bucket"
    filename   = "my-folder/myfile.json"
    format     = "NEWLINE_DELIMITED_JSON"
    bq_dataset = "my_dataset"
    bq_table   = "my_table"
  }
  project_create = true
  project_id     = "project-1"
}
# tftest modules=8 resources=34
```