# Troubleshooting Guide
If you have problems with code that uses the Databricks Terraform provider, follow these steps to solve them:
- Check symptoms and solutions in the Typical problems section below.
- Upgrade the provider to the latest version. The bug might have already been fixed.
- In case of authentication problems, see the Data resources and Authentication is not configured errors section below.
- Collect debug information using the following command:

```sh
TF_LOG=DEBUG DATABRICKS_DEBUG_TRUNCATE_BYTES=250000 terraform apply > tf-debug.log 2>&1
```

- Open a new GitHub issue providing all information described in the issue template: debug logs, your Terraform code, Terraform and plugin versions, and so on.
## Typical problems

### Data resources and Authentication is not configured errors

In Terraform 0.13 and later, data resources have the same dependency resolution behavior as managed resources. Most data resources make an API call to a workspace. If the workspace doesn't exist yet, an `authentication is not configured for provider` error is raised. To work around this issue and guarantee proper lazy authentication with data resources, add `depends_on = [azurerm_databricks_workspace.this]` or `depends_on = [databricks_mws_workspaces.this]` to the body of the data resource. This issue doesn't occur if the workspace is created in one module and the resources within the workspace are created in another. We do not recommend using Terraform 0.12 and earlier if your usage involves data resources.
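For example, a minimal sketch using the `databricks_current_user` data source (the data source choice and the `azurerm_databricks_workspace.this` resource name are just for illustration):

```hcl
data "databricks_current_user" "me" {
  # Defer the API call until the workspace actually exists,
  # so provider authentication can be resolved lazily.
  depends_on = [azurerm_databricks_workspace.this]
}
```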
### Multiple Provider Configurations

The most common reason for technical difficulties is a missing `alias` attribute in `provider "databricks" {}` blocks, or a missing `provider` attribute in `resource "databricks_..." {}` blocks, when using multiple provider configurations. Please make sure to read the alias: Multiple Provider Configurations documentation article.
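As an illustration, a minimal sketch of two provider configurations (the `mws` alias, hostnames, and variables are assumptions for the example):

```hcl
# Default (workspace-level) provider configuration.
provider "databricks" {
  host = "https://adb-1234567890123456.7.azuredatabricks.net"
}

# Second, account-level configuration, distinguished by an alias.
provider "databricks" {
  alias      = "mws"
  host       = "https://accounts.cloud.databricks.com"
  account_id = var.databricks_account_id
}

# Resources must select a non-default configuration explicitly.
resource "databricks_group" "admins" {
  provider     = databricks.mws
  display_name = "admins"
}
```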
### Error while installing hashicorp/databricks: provider registry registry.terraform.io does not have a provider named registry.terraform.io/hashicorp/databricks

If you notice the above error, it might be because the `required_providers` block is not defined in *every* module that uses the Databricks Terraform provider. Create a `versions.tf` file with the following contents:
```hcl
# versions.tf
terraform {
  required_providers {
    databricks = {
      source  = "databricks/databricks"
      version = "1.0.1"
    }
  }
}
```
... and copy the file to every module in your codebase. Our recommendation is to skip the `version` field in the `versions.tf` file at the module level and keep it only at the environment level.
```
├── environments
│   ├── sandbox
│   │   ├── README.md
│   │   ├── main.tf
│   │   └── versions.tf
│   └── production
│       ├── README.md
│       ├── main.tf
│       └── versions.tf
└── modules
    ├── first-module
    │   ├── ...
    │   └── versions.tf
    └── second-module
        ├── ...
        └── versions.tf
```
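For the module level, a minimal sketch of such a `versions.tf` without the `version` field might look like this:

```hcl
# modules/first-module/versions.tf
# No version constraint here; the environment-level versions.tf pins it.
terraform {
  required_providers {
    databricks = {
      source = "databricks/databricks"
    }
  }
}
```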
### Error: Failed to install provider

Running the `terraform init` command, you may see a `Failed to install provider` error if you didn't check `.terraform.lock.hcl` into source code version control:

```
Error: Failed to install provider

Error while installing databricks/databricks: v1.0.0: checksum list has no SHA-256 hash for "https://github.com/databricks/terraform-provider-databricks/releases/download/v1.0.0/terraform-provider-databricks_1.0.0_darwin_amd64.zip"
```
You can fix it by following three simple steps (consolidated in a sketch below):

- Replace `databrickslabs/databricks` with `databricks/databricks` in all your `.tf` files with the `python3 -c "$(curl -Ls https://dbricks.co/updtfns)"` command.
- Run the `terraform state replace-provider databrickslabs/databricks databricks/databricks` command and approve the changes. See the Terraform CLI docs for more information.
- Run `terraform init` to verify everything is working.

The `terraform apply` command should work as expected now.
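Taken together, a sketch of the whole sequence as shell commands (run from the root of your Terraform project):

```sh
# Rewrite the provider source address in all *.tf files (official helper script).
python3 -c "$(curl -Ls https://dbricks.co/updtfns)"

# Point existing state at the new provider source address.
terraform state replace-provider databrickslabs/databricks databricks/databricks

# Re-initialize to verify the provider installs cleanly.
terraform init
```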
Alternatively, you can find the hashes of the last 30 provider versions in `versions-lock.hcl`. As a temporary measure, you can lock on a prior version by following these steps, sketched below:

- Copy `versions-lock.hcl` to the root folder of your Terraform project.
- Rename it to `.terraform.lock.hcl`.
- Run `terraform init` and verify the provider is installed.
- Be sure to commit the new `.terraform.lock.hcl` file to your source code repository.
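A sketch of those steps as shell commands, assuming `versions-lock.hcl` has been downloaded into the current directory:

```sh
# Use the pre-built hash list as the dependency lock file
# (copy and rename in one step).
cp versions-lock.hcl .terraform.lock.hcl

# Verify that the provider installs against the locked hashes.
terraform init

# Keep the lock file under version control.
git add .terraform.lock.hcl
git commit -m "Add Terraform dependency lock file"
```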
### Error: Failed to query available provider packages

See the same steps as in Error: Failed to install provider.
### Error: Deployment name cannot be used until a deployment name prefix is defined

You can get this error during provisioning of a Databricks workspace. It arises when you're trying to set `deployment_name` but no deployment prefix was set on the Databricks side (you can't set it yourself). The problem can be solved by one of the following methods:

- Contact your Databricks representative, such as a Solutions Architect, Customer Success Engineer, Account Executive, or Partner Solutions Architect, to set a deployment prefix for your account.
- Comment out the `deployment_name` parameter to create a workspace with the default URL `dbc-XXXXXX.cloud.databricks.com` (see the sketch below).
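For illustration, a minimal sketch of the second option (all other argument values and resource names are placeholders, trimmed for brevity):

```hcl
resource "databricks_mws_workspaces" "this" {
  account_id     = var.databricks_account_id
  workspace_name = "my-workspace"
  aws_region     = "us-east-1"

  # Requires a deployment prefix set by Databricks for your account;
  # leave it commented out to get the default dbc-XXXXXX URL.
  # deployment_name = "my-deployment"

  credentials_id           = databricks_mws_credentials.this.credentials_id
  storage_configuration_id = databricks_mws_storage_configurations.this.storage_configuration_id
}
```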
### Cannot create Azure Key Vault-based secret scope with a service principal

This is a well-known limitation of Azure Databricks: currently, you cannot create an Azure Key Vault-based secret scope with a service principal, because the OBO (on-behalf-of) flow is not supported yet for service principals on the Azure Active Directory side. Use azure-cli authentication with a user principal to create an AKV-based secret scope.
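A minimal sketch of that workaround, assuming you've run `az login` as a user and an `azurerm_key_vault.this` resource already exists:

```hcl
# Provider authenticated through the Azure CLI as a user (not a service principal).
provider "databricks" {
  host      = azurerm_databricks_workspace.this.workspace_url
  auth_type = "azure-cli"
}

resource "databricks_secret_scope" "kv" {
  name = "keyvault-managed"

  # Backs the secret scope with an existing Azure Key Vault.
  keyvault_metadata {
    resource_id = azurerm_key_vault.this.id
    dns_name    = azurerm_key_vault.this.vault_uri
  }
}
```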