update dbt-core to v1.7.3 (#60)
* update dbt-core to v1.7.3

* add dbt-databricks

* update databricks to v1.7.3 and add http_path
as a user input
update ghcr instructions

* updates from Leo (#61)

* fix optional parameter INPUT_HTTP_PATH

* fix databricks token profiles.yml file

* fix INPUT_HTTP_PATH escape slash in sed

* check if profiles.yml exist

---------

Co-authored-by: Leo Schick <[email protected]>

---------

Co-authored-by: Leo Schick <[email protected]>
mwhitaker and leo-schick authored Jan 17, 2024
1 parent a38e25d commit 551e391
Showing 6 changed files with 60 additions and 36 deletions.
20 changes: 14 additions & 6 deletions Docker_build/Dockerfile
@@ -16,12 +16,13 @@ FROM --platform=$build_for python:3.10.7-slim-bullseye as base
# N.B. The refs updated automagically every release via bumpversion
# N.B. dbt-postgres is currently found in the core codebase so a value of dbt-core@<some_version> is correct

ARG [email protected]
ARG [email protected]
ARG [email protected]
ARG [email protected]
ARG [email protected]
ARG [email protected]
ARG [email protected]
ARG [email protected]
ARG [email protected]
ARG [email protected]
ARG [email protected]
ARG [email protected]
ARG [email protected]
# special case args
ARG dbt_spark_version=all
ARG dbt_third_party
@@ -106,6 +107,12 @@ RUN apt-get update \
RUN python -m pip install --no-cache-dir "git+https://github.com/dbt-labs/${dbt_spark_ref}#egg=dbt-spark[${dbt_spark_version}]"


##
# dbt-databricks
##
FROM base as dbt-databricks
RUN python -m pip install --no-cache-dir "git+https://github.com/databricks/${dbt_databricks_ref}#egg=dbt-databricks"

##
# dbt-third-party
##
@@ -131,5 +138,6 @@ RUN apt-get update \
RUN python -m pip install --no-cache "git+https://github.com/dbt-labs/${dbt_redshift_ref}#egg=dbt-redshift"
RUN python -m pip install --no-cache "git+https://github.com/dbt-labs/${dbt_bigquery_ref}#egg=dbt-bigquery"
RUN python -m pip install --no-cache "git+https://github.com/dbt-labs/${dbt_snowflake_ref}#egg=dbt-snowflake"
RUN python -m pip install --no-cache "git+https://github.com/databricks/${dbt_databricks_ref}#egg=dbt-databricks"
RUN python -m pip install --no-cache "git+https://github.com/dbt-labs/${dbt_spark_ref}#egg=dbt-spark[${dbt_spark_version}]"
RUN python -m pip install --no-cache "git+https://github.com/dbt-labs/${dbt_postgres_ref}#egg=dbt-postgres&subdirectory=plugins/postgres"
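If you want to exercise the new stage in isolation, a minimal sketch (run from `Docker_build/`; the tag is illustrative, and the `--build-arg` simply repeats the default ref already set above):

```sh
# Build only the dbt-databricks stage; the tag name is hypothetical.
docker build \
  --build-arg [email protected] \
  --target dbt-databricks \
  --tag ghcr.io/mwhitaker/dbt_databricks:v1.7.3 \
  .
```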
7 changes: 5 additions & 2 deletions Docker_build/README.md
@@ -4,6 +4,9 @@ update the references in the Dockerfile to current ones.

using the dbt [Dockerfile](https://github.com/dbt-labs/dbt-core/blob/main/docker/Dockerfile) as a template.

`docker build --tag mwhitaker/dbt_all:v1.6.3 --target dbt-all .`
`docker build --tag ghcr.io/mwhitaker/dbt_all:v1.7.3 --target dbt-all .`

`docker push mwhitaker/dbt_all:v1.6.3`
export CR_PAT=ghp_xxxx
echo $CR_PAT | docker login ghcr.io -u mwhitaker --password-stdin

`docker push ghcr.io/mwhitaker/dbt_all:v1.7.3`
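As an optional sanity check after the push, you could pull the image back and ask the bundled dbt for its version; the `--entrypoint dbt` override is an assumption about how you want to invoke it, since dbt is installed onto the image's PATH by pip:

```sh
# Pull the published image and print the dbt version it ships (illustrative check).
docker pull ghcr.io/mwhitaker/dbt_all:v1.7.3
docker run --rm --entrypoint dbt ghcr.io/mwhitaker/dbt_all:v1.7.3 --version
```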
2 changes: 1 addition & 1 deletion Dockerfile
@@ -1,4 +1,4 @@
ARG DBT_VERSION=v1.6.3
ARG DBT_VERSION=v1.7.3
FROM ghcr.io/mwhitaker/dbt_all:${DBT_VERSION}

COPY entrypoint.sh /entrypoint.sh
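Since the base image tag is parameterised, a local test build of the action image could pin a different `DBT_VERSION` at build time; the local tag below is hypothetical:

```sh
# Build the action image against a specific dbt_all base tag.
docker build --build-arg DBT_VERSION=v1.7.3 --tag dbt-action:local .
```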
13 changes: 5 additions & 8 deletions README.md
@@ -3,7 +3,7 @@
A GitHub Action to run [dbt](https://www.getdbt.com) commands in a Docker container. It uses the official images provided by [Fishtown Analytics](https://hub.docker.com/r/fishtownanalytics/dbt/tags). You can use [dbt commands](https://docs.getdbt.com/reference/dbt-commands) such as `run`, `test` and `debug`. This action captures the dbt console output for use in subsequent steps.

### dbt version
The current version of dbt is **1.6.3**. Please note that from dbt v1.0.0. you may have to change your dbt project structure compared to v0.x.x. See the [migration](https://docs.getdbt.com/docs/guides/migration-guide/upgrading-to-1-0-0) docs.
The current version of dbt is **1.7.3**. Please note that from dbt v1.0.0 you may have to change your dbt project structure compared to v0.x.x. See the [migration](https://docs.getdbt.com/docs/guides/migration-guide/upgrading-to-1-0-0) docs.

dbt updates their [docker images](https://hub.docker.com/r/fishtownanalytics/dbt/tags?page=1&ordering=last_updated) on a frequent basis and the main branch of this Github Action should be close to the last stable tag. If you need to use an earlier version of dbt, you can call this action with a specific [release](https://github.com/mwhitaker/dbt-action/releases), eg `mwhitaker/[email protected]` or `mwhitaker/[email protected]`.

@@ -113,23 +113,20 @@ default:
target: dev
outputs:
dev:
type: spark
method: http
type: databricks
schema: dev_user
host: abc-12345-3cc5.cloud.databricks.com
port: 443
schema: abc
token: _token_ # this will be substituted during build time
cluster: 1234-56789-abc233
connect_timeout: 30
connect_retries: 15
threads: 5
http_path: _http_path_ # this will be substituted during build time
```
Create a secret for `DBT_TOKEN` and reference it in your workflow.
```yml
- name: dbt-action
uses: mwhitaker/dbt-action@master
with:
dbt_command: "dbt run --profiles-dir ."
http_path: "sql/protocol/"
env:
DBT_TOKEN: ${{ secrets.DBT_TOKEN }}
```
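One way to create the `DBT_TOKEN` secret referenced in the workflow above is the GitHub CLI; the repository and token value below are placeholders:

```sh
# Store the Databricks token as a repository secret (placeholder values).
gh secret set DBT_TOKEN --repo your-org/your-repo --body "dapiXXXXXXXXXXXX"
```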
3 changes: 3 additions & 0 deletions action.yml
@@ -11,6 +11,9 @@ inputs:
description: "dbt project folder. Defaults to ."
default: "."
required: false
http_path:
description: "http_path for databricks"
required: false
outputs:
result:
description: "Success or failure of the dbt command"
51 changes: 32 additions & 19 deletions entrypoint.sh
@@ -5,31 +5,44 @@ set -o pipefail
echo "dbt project folder set as: \"${INPUT_DBT_PROJECT_FOLDER}\""
cd ${INPUT_DBT_PROJECT_FOLDER}

if [ -n "${DBT_BIGQUERY_TOKEN}" ]
export PROFILES_FILE="${DBT_PROFILES_DIR:-.}/profiles.yml"
if [ -e "${PROFILES_FILE}" ] # check if file exist
then
echo trying to parse bigquery token
$(echo ${DBT_BIGQUERY_TOKEN} | base64 -d > ./creds.json 2>/dev/null)
if [ $? -eq 0 ]
if [ -n "${DBT_BIGQUERY_TOKEN}" ]
then
echo success parsing base64 encoded token
elif $(echo ${DBT_BIGQUERY_TOKEN} > ./creds.json)
echo trying to parse bigquery token
$(echo ${DBT_BIGQUERY_TOKEN} | base64 -d > ./creds.json 2>/dev/null)
if [ $? -eq 0 ]
then
echo success parsing base64 encoded token
elif $(echo ${DBT_BIGQUERY_TOKEN} > ./creds.json)
then
echo success parsing plain token
else
echo cannot parse bigquery token
exit 1
fi
elif [ -n "${DBT_USER}" ] && [ -n "$DBT_PASSWORD" ]
then
echo success parsing plain token
echo trying to use user/password
sed -i "s/_user_/${DBT_USER}/g" $PROFILES_FILE
sed -i "s/_password_/${DBT_PASSWORD}/g" $PROFILES_FILE
elif [ -n "${DBT_TOKEN}" ]
then
echo trying to use DBT_TOKEN/databricks
sed -i "s/_token_/${DBT_TOKEN}/g" $PROFILES_FILE
else
echo cannot parse bigquery token
exit 1
echo no tokens or credentials supplied
fi

if [ -n "${INPUT_HTTP_PATH}" ]
then
echo trying to use http_path for databricks
sed -i "s/_http_path_/$(echo $INPUT_HTTP_PATH | sed 's/\//\\\//g')/g" $PROFILES_FILE
fi
elif [ -n "${DBT_USER}" ] && [ -n "$DBT_PASSWORD" ]
then
echo trying to use user/password
sed -i "s/_user_/${DBT_USER}/g" ./profiles.yml
sed -i "s/_password_/${DBT_PASSWORD}/g" ./profiles.yml
elif [ -n "${DBT_TOKEN}" ]
then
echo trying to use DBT_TOKEN/databricks
sed -i "s/_token_/${DBT_TOKEN}/g" ./datab.yml
else
echo no tokens or credentials supplied
echo "profiles.yml not found"
exit 1
fi

DBT_ACTION_LOG_FILE=${DBT_ACTION_LOG_FILE:="dbt_console_output.txt"}
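The nested `sed` for `INPUT_HTTP_PATH` is there because a Databricks `http_path` contains forward slashes, which would otherwise terminate the outer `s/.../.../g` expression early. A standalone sketch of the same escaping, using a made-up path value:

```sh
# Hypothetical value; in the action it comes from the http_path input.
INPUT_HTTP_PATH="/sql/1.0/warehouses/abc123"

# Escape each "/" so the value is safe inside the outer sed replacement.
ESCAPED_PATH=$(echo "$INPUT_HTTP_PATH" | sed 's/\//\\\//g')
echo "$ESCAPED_PATH"   # prints \/sql\/1.0\/warehouses\/abc123

# Substitute the placeholder into a copy of profiles.yml.
sed "s/_http_path_/${ESCAPED_PATH}/g" profiles.yml > profiles.resolved.yml
```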
