Repository: databrickslabs/cicd-templates
Branch: master
Commit: bac0422ccd6e
Files: 36
Total size: 59.8 KB

Directory structure:
cicd-templates/

├── .github/
│   └── workflows/
│       ├── onpush.yml
│       └── onrelease.yml
├── .gitignore
├── CHANGELOG.md
├── CONTRIBUTING.md
├── LICENSE
├── Makefile
├── NOTICE
├── README.md
├── cookiecutter.json
├── hooks/
│   └── post_gen_project.py
├── pytest.ini
├── requirements.txt
├── tests/
│   ├── __init__.py
│   ├── test_e2e_local.py
│   └── utils.py
├── tox.ini
├── utils/
│   └── profile_creator.py
└── {{cookiecutter.project_name}}/
    ├── .coveragerc
    ├── .github/
    │   └── workflows/
    │       ├── onpush.yml
    │       └── onrelease.yml
    ├── .gitignore
    ├── .gitlab-ci.yml
    ├── README.md
    ├── azure-pipelines.yml
    ├── conf/
    │   └── test/
    │       └── sample.json
    ├── pytest.ini
    ├── setup.py
    ├── tests/
    │   ├── integration/
    │   │   └── sample_test.py
    │   └── unit/
    │       └── sample_test.py
    ├── unit-requirements.txt
    └── {{cookiecutter.project_slug}}/
        ├── __init__.py
        ├── common.py
        └── jobs/
            ├── __init__.py
            └── sample/
                ├── __init__.py
                └── entrypoint.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/workflows/onpush.yml
================================================
name: test

on:
  push:
    branches-ignore:
      - 'master' # the branch will be tested before push to master
    tags-ignore:
      - '*.*'
    paths-ignore:
      - 'README.md' # if we change documentation only, we don't need to run tests

jobs:
  test-e2e-azure-github-actions:

    runs-on: ubuntu-latest
    strategy:
      max-parallel: 4
      matrix:
        python-version: [3.7.5]

    steps:
    - uses: actions/checkout@v1

    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v2
      with:
        python-version: ${{ matrix.python-version }}

    - name: Install pip
      run: |
        python -m pip install --upgrade pip

    - name: Install test dependencies
      run: |
        pip install -r requirements.txt

================================================
FILE: .github/workflows/onrelease.yml
================================================
on:
  push:
    # Sequence of patterns matched against refs/tags
    tags:
      - 'v*' # Push events to matching v*, i.e. v1.0, v20.15.10

name: release

jobs:
  build:
    name: Create Release
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Create Release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # This token is provided by Actions, you do not need to create your own token
        with:
          tag_name: ${{ github.ref }}
          release_name: Release ${{ github.ref }}
          body: |
            Release for version ${{ github.ref }}. Please check CHANGELOG.md for more information.
          draft: false
          prerelease: false


================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# Distribution / packaging
*.egg-info/
build
dist

# Unit test / coverage reports
.coverage
coverage.xml
junit/*
htmlcov/*

# Caches
.pytest_cache/

# VSCode
.vscode/

# Idea
.idea/
*.iml

# MacOS
.DS_Store

# aux mlruns
mlruns/

# secrets
.env

================================================
FILE: CHANGELOG.md
================================================
# Changelog

## Release 1.0.1

This release provides the `init_adapter` method to make `dbx execute` more user-friendly.

## Release 1.0.2

This release provides a hotfix for an incorrect reference to `init_conf` and adds proper testing checks for GitHub workflows.

## Release 1.0.3

This release adds more compatibility with win-based development environments, as well as extensive tests for win-based launches.

## Release 1.0.4

Added support for picking configuration properties from environment variables. 
Fixed issue with non-existent lockfile. 

## Release 1.0.5

Minor fixes in the dbx behaviour.

## Release 1.0.6

Fixed multiple issues with run status checks:
- status check code is now unified into one method
- fixed `--existing-runs=cancel` instabilities
- fixed `--trace` getting stuck when a run status was skipped
- proper exit code for failed integration tests

## Release 1.0.7

- Since dbx is now public, the bundled whl file is no longer needed. It has been deleted from the repository, along with all references to it.

## Release 1.0.8

- Introduced support for Google Cloud
- project template code is formatted with black for better readability

## Release 1.0.9

- Add installation of `dbx` to the `unit-requirements.txt` file
- Switch generated top-level folder name to project_name variable, not project_slug.
- Add `.coveragerc` to the generated project
- Explicitly include support for `dbutils` in the `common.py` file

## Release 1.0.10

- Remove `init_adapter` logic, now all parameters are directly passed from the `deployment.json` file.

## Release 1.0.11

- Fixed issue with project name in Gitlab CI.

================================================
FILE: CONTRIBUTING.md
================================================
# Contributing

## Legal

Thank you for your interest in contributing to the `cicd-templates` project (the “Project”). In order to clarify the intellectual property license granted with Contributions from any person or entity who contributes to the Project, Databricks, Inc. ("Databricks") must have a Contributor License Agreement (CLA) on file that has been signed by each such Contributor (or if an entity, an authorized representative of such entity). This license is for your protection as a Contributor as well as the protection of Databricks and its users; it does not change your rights to use your own Contributions for any other purpose.
You may sign this CLA either on your own behalf (with respect to any Contributions that are owned by you) and/or on behalf of an entity (the "Corporation") (with respect to any Contributions that are owned by such Corporation (e.g., those Contributions you make during the performance of your employment duties to the Corporation)).  Please mark the corresponding box below.
You accept and agree to the following terms and conditions for Your present and future Contributions submitted to Databricks. Except for the licenses granted herein to Databricks, You reserve all right, title, and interest in and to Your Contributions.
1. Definitions.
"You" (or "Your") shall mean the copyright owner or legal entity authorized by the copyright owner that is making this Agreement with Databricks. For legal entities, the entity making a Contribution and all other entities that control, are controlled by, or are under common control with that entity are considered to be a single Contributor. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"Contribution" shall mean the code, documentation or any original work of authorship, including any modifications or additions to an existing work, that is submitted by You to Databricks for inclusion in, or documentation of, any of the products owned or managed by Databricks, including the Project, whether on, before or after the date You sign this CLA. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to Databricks or its representatives, including but not limited to communication on electronic mailing lists, source code control systems (e.g., Github), and issue tracking systems that are managed by, or on behalf of, Databricks for the purpose of discussing and improving the Project, but excluding communication that is conspicuously marked or otherwise designated in writing by You as "Not a Contribution."
2. Grant of Copyright License. Subject to the terms and conditions of this Agreement, You hereby grant to Databricks a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense (through multiple tiers), and distribute Your Contributions and such derivative works.  For the avoidance of doubt, and without limitation, this includes, at our option, the right to sublicense this license to recipients or users of any products or services (including software) distributed or otherwise made available (e.g., by SaaS offering) by Databricks (each, a “Downstream Recipient”).
3. Grant of Patent License. Subject to the terms and conditions of this Agreement, You hereby grant to Databricks a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable, sublicensable (through multiple tiers) (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Contribution in whole or in part, alone or in combination with any other products or services (including for the avoidance of doubt the Project), where such license applies only to those patent claims licensable by You that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s) with the Project to which such Contribution(s) was submitted.  For the avoidance of doubt, and without limitation, this includes, at our option, the right to sublicense this license to Downstream Recipients.
4. Authorized Users. If you are signing this CLA on behalf of a Corporation, you may also add additional designated employees of the Corporation who will be covered by this CLA without the need to separately sign it (“Authorized Users”).  Your Primary Point of Contact (you or the individual specified below) may add additional Authorized Users at any time by contacting Databricks at cla@databricks.com (or such other method as Databricks informs you).
5. Representations. You represent that:
   1. You are legally entitled to grant the above licenses, and, if You are signing on behalf of a Corporation and have added any Authorized Users, You represent further that each employee of the Corporation designated by You is authorized to submit Contributions on behalf of the Corporation;
   2. each of Your Contributions is Your original creation;
   3. to your knowledge, Your Contributions do not infringe or otherwise misappropriate the intellectual property rights of a third person; and
   4. you will not assert any moral rights in your Contribution against us or any Downstream Recipients.
6. Support. You are not expected to provide support for Your Contributions, except to the extent You desire to provide support. You may provide support for free, for a fee, or not at all. Unless required by applicable law or agreed to in writing, and except as specified above, You provide Your Contributions on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
7. Notification. It is your responsibility to notify Databricks when any change is required to the list of Authorized Users, or to the Corporation's Primary Point of Contact with Databricks.  You agree to notify Databricks of any facts or circumstances of which you become aware that would make the representations or warranties herein inaccurate in any respect.
8. This CLA is governed by the laws of the State of California and applicable U.S. Federal law. Any choice of law rules will not apply.

Please check one of the applicable statement below. Please do NOT mark both statements:
* ___ I am signing on behalf of myself as an individual and no other person or entity, including my employer, has or will have rights with respect to my Contributions.
* ___ I am signing on behalf of my employer or a legal entity and I have the actual authority to contractually bind such entity (the Corporation).


| Name*:                                                                                                    |   |
|-----------------------------------------------------------------------------------------------------------|---|
| Corporation Entity Name (if applicable):                                                                  |   |
| Title or Role (if applicable):                                                                            |   |
| Mailing Address*:                                                                                         |   |
| Email*:                                                                                                   |   |
| Signature*:                                                                                               |   |
| Date*:                                                                                                    |   |
| Github Username (if applicable):                                                                          |   |
| Primary Point of Contact (if not you) (please provide name and email and Github username, if applicable): |   |
| Authorized Users (please list Github usernames):**                                                        |   |

- \* Required field
- ** Please note that Authorized Users may not immediately be granted authorization to submit Contributions; should more than one individual attempt to sign a CLA on behalf of a Corporation, the first such CLA will apply and later CLAs will be deemed void.


================================================
FILE: LICENSE
================================================
Databricks Labs CI/CD Templates

Copyright (2020) Databricks, Inc.

This library (the "Software") may not be used except in connection with the Licensee's use of the Databricks Platform Services pursuant 
to an Agreement (defined below) between Licensee (defined below) and Databricks, Inc. ("Databricks"). The Object Code version of the 
Software shall be deemed part of the Downloadable Services under the Agreement, or if the Agreement does not define Downloadable Services, 
Subscription Services, or if neither are defined then the term in such Agreement that refers to the applicable Databricks Platform 
Services (as defined below) shall be substituted herein for "Downloadable Services."  Licensee's use of the Software must comply at 
all times with any restrictions applicable to the Downloadable Services and Subscription Services, generally, and must be used in 
accordance with any applicable documentation. For the avoidance of doubt, the Software constitutes Databricks Confidential Information
under the Agreement.
Additionally, and notwithstanding anything in the Agreement to the contrary: 
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 
  OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE 
  LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR 
  IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
* you may view, make limited copies of, and may compile the Source Code version of the Software into an Object Code version of the
  Software.  For the avoidance of doubt, you may not make derivative works of the Software (or make any changes to the Source Code 
  version of the Software unless you have agreed to separate terms with Databricks permitting such modifications (e.g., a contribution
  license agreement)).
If you have not agreed to an Agreement or otherwise do not agree to these terms, you may not use the Software or view, copy or compile
the Source Code of the Software.
This license terminates automatically upon the termination of the Agreement or Licensee's breach of these terms.  Additionally, 
Databricks may terminate this license at any time on notice.  Upon termination, you must permanently delete the Software and all
copies thereof (including the Source Code).
Agreement: the agreement between Databricks and Licensee governing the use of the Databricks Platform Services, which shall be, with
respect to Databricks, the Databricks Terms of Service located at www.databricks.com/termsofservice, and with respect to Databricks
Community Edition, the Community Edition Terms of Service located at www.databricks.com/ce-termsofuse, in each case unless Licensee 
has entered into a separate written agreement with Databricks governing the use of the applicable Databricks Platform Services.
Databricks Platform Services: the Databricks services or the Databricks Community Edition services, according to where the Software is used.
Licensee: the user of the Software, or, if the Software is being used on behalf of a company, the company.
Object Code: is version of the Software produced when an interpreter or a compiler translates the Source Code into recognizable and 
executable machine code.
Source Code: the human readable portion of the Software.


================================================
FILE: Makefile
================================================
VERSION = 1.0.11

release:
	git add .
	git tag -a v$(VERSION) -m "Release tag for version $(VERSION)"
	git push origin --tags

================================================
FILE: NOTICE
================================================
mlflow-deployments

Copyright (2020) Databricks, Inc.


This Software includes software developed at Databricks (https://www.databricks.com/) and its use is subject to the included LICENSE file.


Additionally, this Software contains code from the following open source projects:

1) MLflow - https://mlflow.org/
2) Databricks CLI - https://github.com/databricks/databricks-cli
3) Cookiecutter Data Science - https://github.com/drivendata/cookiecutter-data-science

[mlflow-deployments - License]


================================================
FILE: README.md
================================================
# [DEPRECATED] Databricks Labs CI/CD Templates

This repository provides a template for automated Databricks CI/CD pipeline creation and deployment.

> **_NOTE:_**  This repository is **deprecated** and provided for maintenance purposes only. Please use the [dbx init](https://dbx.readthedocs.io/en/latest/templates/python_basic.html) functionality instead.



Table of Contents
=================

   * [Databricks Labs CI/CD Templates](#databricks-labs-cicd-templates)
      * [Table of Contents](#table-of-contents)
      * [Sample project structure (with GitHub Actions)](#sample-project-structure-with-github-actions)
      * [Sample project structure (with Azure DevOps)](#sample-project-structure-with-azure-devops)
      * [Sample project structure (with GitLab)](#sample-project-structure-with-gitlab)
      * [Note on dbx](#note-on-dbx)
      * [Quickstart](#quickstart)
         * [Local steps](#local-steps)
         * [Setting up CI/CD pipeline on GitHub Actions](#setting-up-cicd-pipeline-on-github-actions)
         * [Setting up CI/CD pipeline on Azure DevOps](#setting-up-cicd-pipeline-on-azure-devops)
         * [Setting up CI/CD pipeline on Gitlab](#setting-up-cicd-pipeline-on-gitlab)
      * [Deployment file structure](#deployment-file-structure)
      * [Different deployment types](#different-deployment-types)
         * [Deployment for Run Submit API](#deployment-for-run-submit-api)
         * [Deployment for Run Now API](#deployment-for-run-now-api)
      * [Troubleshooting](#troubleshooting)
      * [FAQ](#faq)
      * [Legal Information](#legal-information)
      * [Feedback](#feedback)
      * [Contributing](#contributing)
      * [Kudos](#kudos)
    
## Sample project structure (with GitHub Actions)
```
.
├── .dbx
│   └── project.json
├── .github
│   └── workflows
│       ├── onpush.yml
│       └── onrelease.yml
├── .gitignore
├── README.md
├── conf
│   ├── deployment.json
│   └── test
│       └── sample.json
├── pytest.ini
├── sample_project
│   ├── __init__.py
│   ├── common.py
│   └── jobs
│       ├── __init__.py
│       └── sample
│           ├── __init__.py
│           └── entrypoint.py
├── setup.py
├── tests
│   ├── integration
│   │   └── sample_test.py
│   └── unit
│       └── sample_test.py
└── unit-requirements.txt
```

Some explanations regarding structure:
- `.dbx` folder is an auxiliary folder, where metadata about environments and execution context is located.
- `sample_project` - Python package with your code (the directory name will follow your project name)
- `tests` - directory with your package tests
- `conf/deployment.json` - deployment configuration file. Please read the [following section](#deployment-file-structure) for a full reference.
- `.github/workflows/` - workflow definitions for GitHub Actions

## Sample project structure (with Azure DevOps)
```
.
├── .dbx
│   └── project.json
├── .gitignore
├── README.md
├── azure-pipelines.yml
├── conf
│   ├── deployment.json
│   └── test
│       └── sample.json
├── pytest.ini
├── sample_project_azure_dev_ops
│   ├── __init__.py
│   ├── common.py
│   └── jobs
│       ├── __init__.py
│       └── sample
│           ├── __init__.py
│           └── entrypoint.py
├── setup.py
├── tests
│   ├── integration
│   │   └── sample_test.py
│   └── unit
│       └── sample_test.py
└── unit-requirements.txt
```

Some explanations regarding structure:
- `.dbx` folder is an auxiliary folder, where metadata about environments and execution context is located.
- `sample_project_azure_dev_ops` - Python package with your code (the directory name will follow your project name)
- `tests` - directory with your package tests
- `conf/deployment.json` - deployment configuration file. Please read the [following section](#deployment-file-structure) for a full reference.
- `azure-pipelines.yml` - Azure DevOps Pipelines workflow definition

## Sample project structure (with GitLab)
```
.
├── .dbx
│   └── project.json
├── .gitignore
├── README.md
├── .gitlab-ci.yml
├── conf
│   ├── deployment.json
│   └── test
│       └── sample.json
├── pytest.ini
├── sample_project_gitlab
│   ├── __init__.py
│   ├── common.py
│   └── jobs
│       ├── __init__.py
│       └── sample
│           ├── __init__.py
│           └── entrypoint.py
├── setup.py
├── tests
│   ├── integration
│   │   └── sample_test.py
│   └── unit
│       └── sample_test.py
└── unit-requirements.txt
```

Some explanations regarding structure:
- `.dbx` folder is an auxiliary folder, where metadata about environments and execution context is located.
- `sample_project_gitlab` - Python package with your code (the directory name will follow your project name)
- `tests` - directory with your package tests
- `conf/deployment.json` - deployment configuration file. Please read the [following section](#deployment-file-structure) for a full reference.
- `.gitlab-ci.yml` - GitLab CI/CD workflow definition

## Note on dbx

> **_NOTE:_**  
[dbx](https://github.com/databrickslabs/dbx) is a CLI tool for advanced Databricks jobs management. 
It can be used separately from cicd-templates, and if you would like to preserve your own project structure, please refer to the dbx documentation on how to use it with a customized project structure.


## Quickstart

> **_NOTE:_**  
As a prerequisite, you need to install [databricks-cli](https://docs.databricks.com/dev-tools/cli/index.html) with a [configured profile](https://docs.databricks.com/dev-tools/cli/index.html#set-up-authentication).
These instructions are based on [Databricks Runtime 7.3 LTS ML](https://docs.databricks.com/release-notes/runtime/7.3ml.html). 
If you don't need to use ML libraries, we still recommend using the ML-based version due to its [`%pip` magic support](https://docs.databricks.com/libraries/notebooks-python-libraries.html).

### Local steps
Perform the following actions in your development environment:
- Create a new conda environment and activate it:
```bash
conda create -n <your-environment-name> python=3.7.5
conda activate <your-environment-name>
```
- If you would like to run unit tests locally, you'll need a JDK. If you don't have one, it can be installed via:
```
conda install -c anaconda "openjdk=8.0.152"
```
- Install cookiecutter and path:
```bash
pip install cookiecutter path
```
- Create a new project using the cookiecutter template:
```
cookiecutter https://github.com/databrickslabs/cicd-templates
```
- Install development dependencies:
```bash
pip install -r unit-requirements.txt
```
- Install generated package in development mode:
```
pip install -e .
```
- In the generated directory you'll have a sample job with testing and launch configurations around it.
- Launch and debug your code on an interactive cluster via the following command. The job name can be found in `conf/deployment.json`:
```
dbx execute --cluster-name=<my-cluster> --job=<job-name>
```
- Make your first deployment from the local machine:
```
dbx deploy
```
- Launch your first pipeline as a new separate job, and trace the job status. The job name can be found in `conf/deployment.json`:
```
dbx launch --job <your-job-name> --trace
```
- For in-depth local development and unit testing guidance, please refer to the generated `README.md` in the root of the project.

### Setting up CI/CD pipeline on GitHub Actions

- Create a new repository on GitHub
- Configure `DATABRICKS_HOST` and `DATABRICKS_TOKEN` secrets for your project in [GitHub UI](https://docs.github.com/en/free-pro-team@latest/actions/reference/encrypted-secrets)
- Add a remote origin to the local repo
- Push the code 
- Open the GitHub Actions for your project to verify the state of the deployment pipeline

### Setting up CI/CD pipeline on Azure DevOps

- Create a new repository on GitHub
- Connect the repository to Azure DevOps
- Configure `DATABRICKS_HOST` and `DATABRICKS_TOKEN` secrets for your project in [Azure DevOps](https://docs.microsoft.com/en-us/azure/devops/pipelines/release/azure-key-vault?view=azure-devops). Note that secret variables must be mapped to environment variables as mentioned [here](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#secret-variables), using the `env:` syntax, for example:
```
variables:
- group: Databricks-environment
stages:
...
...
    - script: |
        dbx deploy
      env:
        DATABRICKS_TOKEN: $(DATABRICKS_TOKEN)
```
- Add a remote origin to the local repo
- Push the code 
- Open the Azure DevOps UI to check the deployment status 

### Setting up CI/CD pipeline on Gitlab

- Create a new repository on Gitlab
- Configure `DATABRICKS_HOST` and `DATABRICKS_TOKEN` secrets for your project in [GitLab UI](https://docs.gitlab.com/ee/ci/variables/#create-a-custom-variable-in-the-ui)
- Add a remote origin to the local repo
- Push the code 
- Open the GitLab CI/CD UI to check the deployment status 

 
## Deployment file structure
A sample deployment file can be found in a generated project.

The general file structure looks like this:
```json
{
    "<environment-name>": {
        "jobs": [
            {
                "name": "sample_project-sample",
                "existing_cluster_id": "some-cluster-id", 
                "libraries": [],
                "max_retries": 0,
                "spark_python_task": {
                    "python_file": "sample_project/jobs/sample/entrypoint.py",
                    "parameters": [
                        "--conf-file",
                        "conf/test/sample.json"
                    ]
                }
            }
        ]
    }
}
```
For each environment you can describe any number of jobs. Job descriptions should follow the [Databricks Jobs API](https://docs.databricks.com/dev-tools/api/latest/jobs.html#create). 

However, `dbx deploy` adds some behaviour on top of this.

When you run `dbx deploy` with a given deployment file, the following actions are performed:
- Find the deployment configuration in `--deployment-file` (default: `conf/deployment.json`)
- Build a .whl package in the given project directory (can be disabled via the `--no-rebuild` option)
- Add this .whl package to the job definition
- Add all requirements from `--requirements-file` (default: `requirements.txt`). This step is skipped if the requirements file does not exist.
- Create a new job, or update the existing job if the given job name already exists. Jobs are matched by name.

An important point about referencing: you can also reference arbitrary local files, which is very handy for the `python_file` section.
In the example above, the entrypoint file and the job configuration are added to the job definition and uploaded to `dbfs` automatically. No explicit file upload is needed.
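The environment-to-jobs mapping described above can be sketched with a short, self-contained Python snippet. The job definition is taken from the example in this section; `list_jobs` is an illustrative helper, not part of dbx:

```python
def list_jobs(deployment: dict) -> dict:
    """Return a mapping of environment name -> list of job names."""
    return {
        env: [job["name"] for job in body.get("jobs", [])]
        for env, body in deployment.items()
    }

# Deployment structure as shown in the example above (one environment, one job).
deployment = {
    "default": {
        "jobs": [
            {
                "name": "sample_project-sample",
                "spark_python_task": {
                    "python_file": "sample_project/jobs/sample/entrypoint.py"
                },
            }
        ]
    }
}

print(list_jobs(deployment))
```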

## Different deployment types

Databricks Jobs API provides two methods for launching a particular workload:
- [Run Submit API](https://docs.databricks.com/dev-tools/api/latest/jobs.html#runs-submit)
- [Run Now API](https://docs.databricks.com/dev-tools/api/latest/jobs.html#run-now)

The main logical difference between these methods is that the Run Submit API allows you to submit a workload directly without creating a job.
Therefore, we have two deployment types - one for Run Submit API, and one for Run Now API. 

### Deployment for Run Submit API

To deploy only the files without overriding the job definitions, run the following:

```bash
dbx deploy --files-only
```

To launch the file-based deployment:
```
dbx launch --as-run-submit --trace
```

This type of deployment is handy for working in different branches, and it won't affect the job definition.

### Deployment for Run Now API

To deploy files and update the job definitions:

```bash
dbx deploy
```

To launch the deployed job:
```
dbx launch --job=<job-name>
```

This type of deployment should mainly be used in an automated way during a new release. 
`dbx deploy` will change the job definition (unless the `--files-only` option is provided).

## Troubleshooting

###
*Q*: When running ```dbx deploy``` I'm getting the following exception ```json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)``` and stack trace:
```
...
  File ".../lib/python3.7/site-packages/dbx/utils/common.py", line 215, in prepare_environment
    experiment = mlflow.get_experiment_by_name(environment_data["workspace_dir"])
...

json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```

What could be causing it and what is the potential fix?

*A*:  
We've seen this exception when the profile uses the ```host=https://{domain}/?o={orgid}``` format for Azure. That format is valid for the databricks cli, but not for the API. If that's the cause, the problem should be gone once the `?o={orgid}` suffix is removed.
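As an illustrative sketch (not part of dbx), a small standard-library Python helper can strip such a query suffix from a host value; the function name and host values are placeholders:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_host(host: str) -> str:
    """Drop the query string (e.g. '?o={orgid}') and trailing slash from a host URL."""
    parts = urlsplit(host)
    # Keep only scheme://netloc/path, discarding query and fragment.
    return urlunsplit((parts.scheme, parts.netloc, parts.path.rstrip("/"), "", ""))

print(normalize_host("https://adb-123.azuredatabricks.net/?o=456"))
```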

## FAQ

###
*Q*: I'm using [poetry](https://python-poetry.org/) for package management. Is it possible to use poetry together with this template?

*A*:  
    Yes, it's also possible, but library management during cluster execution should be performed via the `libraries` section of the job description. 
    You also might need to disable the automatic rebuild for `dbx deploy` and `dbx execute` via the `--no-rebuild` option. Finally, the built package should be in wheel format and located in the `dist/` directory.
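As a hedged sketch, a job entry referencing a pre-built wheel via the `libraries` section (following the Jobs API libraries format) might look like this; the DBFS path is a placeholder:

```json
{
    "name": "sample_project-sample",
    "libraries": [
        {"whl": "dbfs:/FileStore/wheels/sample_project-0.1.0-py3-none-any.whl"}
    ]
}
```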

###
*Q*: How can I add my Databricks Notebook to the `deployment.json`, so I can create a job out of it?
 
*A*:  
    Please follow [this](https://docs.databricks.com/dev-tools/api/latest/jobs.html#notebooktask) documentation section and add a notebook task definition into the deployment file.
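Following the Jobs API `notebook_task` format, a notebook-based job entry in the deployment file might look like this; the cluster id and notebook path are placeholders:

```json
{
    "name": "sample_project-notebook",
    "existing_cluster_id": "some-cluster-id",
    "notebook_task": {
        "notebook_path": "/Users/some.user@example.com/sample_notebook"
    }
}
```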

###
*Q*: Is it possible to use `dbx` for non-Python based projects, for example Scala-based projects?

*A*:  
    Yes, it's possible, but the interactive mode `dbx execute` is not yet supported. However, you can simply install the `dbx` wheel in your Scala-based project and reference your jar files in the deployment file, so the `dbx deploy` and `dbx launch` commands will be available to you.

###
*Q*: I have a lot of interdependent jobs, and using solely JSON seems like a giant code duplication. What could solve this problem?

*A*:  
    You can implement any configuration logic, write the output into a custom `deployment.json` file, and then pass it via the `--deployment-file` option. 
    As an example, you can generate your configuration using Python script, or [Jsonnet](https://jsonnet.org/).
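
For instance, a minimal Python generator that shares one cluster definition across several jobs could look like this (the job names, cluster settings, and file paths are illustrative):

```python
import json

# shared cluster definition, reused by every job to avoid duplication
CLUSTER = {
    "spark_version": "7.3.x-cpu-ml-scala2.12",
    "node_type_id": "Standard_F4s",
    "num_workers": 2,
}


def make_job(name: str, python_file: str) -> dict:
    """Build a single job entry around the shared cluster definition."""
    return {
        "name": name,
        "new_cluster": CLUSTER,
        "libraries": [],
        "max_retries": 0,
        "spark_python_task": {"python_file": python_file},
    }


jobs = [
    make_job("my-project-etl", "my_project/jobs/etl/entrypoint.py"),
    make_job("my-project-ml", "my_project/jobs/ml/entrypoint.py"),
]

# environment name matches the "default" environment in .dbx/project.json
deployment = {"default": {"jobs": jobs}}

with open("custom-deployment.json", "w") as f:
    json.dump(deployment, f, indent=4)
```

The generated file can then be passed to `dbx deploy --deployment-file=custom-deployment.json`.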

###
*Q*: How can I secure the project environment?

*A*:  
From the state serialization perspective, your code and deployments are stored in two separate storages:
- workspace directory - this directory is stored in your workspace, described per environment and defined in the `workspace_dir` field of `.dbx/project.json`.
        To control access to this directory, please use [Workspace ACLs](https://docs.databricks.com/security/access-control/workspace-acl.html).  
- artifact location - this location is stored in DBFS, described per environment and defined in the `artifact_location` field of `.dbx/project.json`.
        To control access to this location, please use credential passthrough (docs for [ADLS](https://docs.microsoft.com/en-us/azure/databricks/security/credential-passthrough/adls-passthrough) and for [S3](https://docs.databricks.com/security/credential-passthrough/index.html)).
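
For orientation, a per-environment `.dbx/project.json` generated by this template looks roughly like this (the profile name and paths depend on your cookiecutter answers):

```json
{
  "environments": {
    "default": {
      "profile": "DEFAULT",
      "workspace_dir": "/Shared/dbx/my_project",
      "artifact_location": "dbfs:/Shared/dbx/projects/my_project"
    }
  }
}
```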

###
*Q*: I would like to use self-hosted (private) pypi repository. How can I configure my deployment and CI/CD pipeline?

*A*:  
To set up this scenario, the following settings need to be applied:
- the Databricks driver should have network access to your pypi repository
- an additional step to deploy your package to the pypi repository should be configured in the CI/CD pipeline
- package rebuild and generation should be disabled via the `--no-rebuild --no-package` arguments (as in the `dbx deploy` sample below)
- the package reference should be configured in the job description

Here is a sample for `dbx deploy` command:
```
dbx deploy --no-rebuild --no-package
```

Sample addition to the `libraries` configuration:
```json
{
    "pypi": {"package": "my-package-name==1.0.0", "repo": "my-repo.com"}
}
```

###
*Q*: What is the purpose of `init_adapter` method in SampleJob?

*A*: 
This method should primarily be used for adapting the configuration for a `dbx execute`-based run. 
By using this method, you can provide an initial configuration in case the `--conf-file` option is not provided.  
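
As a stand-alone sketch of this pattern (the class body here is a hypothetical, trimmed-down stand-in without the real `Job` base class):

```python
class SampleJob:
    """Trimmed stand-in for a Job subclass; only the adapter logic is shown."""

    def __init__(self, init_conf=None):
        self.conf = init_conf or {}
        self.init_adapter()

    def init_adapter(self):
        # fallback configuration for `dbx execute` runs
        # where no --conf-file option was passed
        if not self.conf:
            self.conf = {
                "output_format": "parquet",
                "output_path": "dbfs:/tmp/sample",
            }
```

With this in place, an interactive run without `--conf-file` still gets a usable configuration, while an explicitly provided configuration is left untouched.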

###
*Q*: I don't like the idea of storing the host and token variables in the `~/.databrickscfg` file inside the CI pipeline. How can I make this setup more secure?

*A*:  
`dbx` now supports environment variables, provided via `DATABRICKS_HOST` and `DATABRICKS_TOKEN`. 
If these variables are defined in the environment, no `~/.databrickscfg` file is needed.

## Legal Information
This software is provided as-is and is not officially supported by Databricks through customer technical support channels. 
Support, questions, and feature requests can be communicated through the Issues page of this repo. 
Please see the legal agreement and understand that issues with the use of this code will not be answered or investigated by Databricks Support.

## Feedback
Issues with template? Found a bug? Have a great idea for an addition? Feel free to file an issue.

## Contributing
Have a great idea that you want to add? Fork the repo and submit a PR!

## Kudos
- Project based on the [cookiecutter datascience project](https://drivendata.github.io/cookiecutter-data-science)
- README.md ToC generated via [gh-md-toc](https://github.com/ekalinin/github-markdown-toc)


================================================
FILE: cookiecutter.json
================================================
{
  "project_name": "cicd-sample-project",
  "version": "0.0.1",
  "description": "Databricks Labs CICD Templates Sample Project",
  "author": "",
  "cloud": [
    "AWS",
    "Azure",
    "Google Cloud"
  ],
  "cicd_tool": [
    "GitHub Actions",
    "Azure DevOps",
    "GitLab"
  ],
  "project_slug": "{{ cookiecutter.project_name.lower().replace(' ', '_').replace('-', '_') }}",
  "workspace_dir": "/Shared/dbx/{{cookiecutter.project_slug}}",
  "artifact_location": "dbfs:/Shared/dbx/projects/{{cookiecutter.project_slug}}",
  "profile": "DEFAULT",
  "_copy_without_render": [
    "*.github"
  ]
}


================================================
FILE: hooks/post_gen_project.py
================================================
import json
import os
import shutil

from path import Path

cicd_tool = "{{cookiecutter.cicd_tool}}"
cloud = "{{cookiecutter.cloud}}"
project_slug = "{{cookiecutter.project_slug}}"
project_name = "{{cookiecutter.project_name}}"
environment = "default"
profile = "{{cookiecutter.profile}}"
workspace_dir = "{{cookiecutter.workspace_dir}}"
artifact_location = "{{cookiecutter.artifact_location}}"

PROJECT_FILE_CONTENT = {
    "environments": {
        environment: {
            "profile": profile,
            "workspace_dir": workspace_dir,
            "artifact_location": artifact_location,
        }
    }
}

DEPLOYMENT = {
    "AWS": {
        environment: {
            "jobs": [
                {
                    "name": "%s-sample" % project_name,
                    "new_cluster": {
                        "spark_version": "7.3.x-cpu-ml-scala2.12",
                        "node_type_id": "i3.xlarge",
                        "aws_attributes": {
                            "first_on_demand": 0,
                            "availability": "SPOT",
                        },
                        "num_workers": 2,
                    },
                    "libraries": [],
                    "email_notifications": {
                        "on_start": [],
                        "on_success": [],
                        "on_failure": [],
                    },
                    "max_retries": 0,
                    "spark_python_task": {
                        "python_file": "%s/jobs/sample/entrypoint.py" % project_slug,
                        "parameters": ["--conf-file", "conf/test/sample.json"],
                    },
                },
                {
                    "name": "%s-sample-integration-test" % project_name,
                    "new_cluster": {
                        "spark_version": "7.3.x-cpu-ml-scala2.12",
                        "node_type_id": "i3.xlarge",
                        "aws_attributes": {
                            "first_on_demand": 0,
                            "availability": "SPOT",
                        },
                        "num_workers": 1,
                    },
                    "libraries": [],
                    "email_notifications": {
                        "on_start": [],
                        "on_success": [],
                        "on_failure": [],
                    },
                    "max_retries": 0,
                    "spark_python_task": {
                        "python_file": "tests/integration/sample_test.py"
                    },
                },
            ]
        }
    },
    "Azure": {
        environment: {
            "jobs": [
                {
                    "name": "%s-sample" % project_name,
                    "new_cluster": {
                        "spark_version": "7.3.x-cpu-ml-scala2.12",
                        "node_type_id": "Standard_F4s",
                        "num_workers": 2,
                    },
                    "libraries": [],
                    "email_notifications": {
                        "on_start": [],
                        "on_success": [],
                        "on_failure": [],
                    },
                    "max_retries": 0,
                    "spark_python_task": {
                        "python_file": "%s/jobs/sample/entrypoint.py" % project_slug,
                        "parameters": ["--conf-file", "conf/test/sample.json"],
                    },
                },
                {
                    "name": "%s-sample-integration-test" % project_name,
                    "new_cluster": {
                        "spark_version": "7.3.x-cpu-ml-scala2.12",
                        "node_type_id": "Standard_F4s",
                        "num_workers": 1,
                    },
                    "libraries": [],
                    "email_notifications": {
                        "on_start": [],
                        "on_success": [],
                        "on_failure": [],
                    },
                    "max_retries": 0,
                    "spark_python_task": {
                        "python_file": "tests/integration/sample_test.py"
                    },
                },
            ]
        }
    },
    "Google Cloud": {
        environment: {
            "jobs": [
                {
                    "name": "%s-sample" % project_name,
                    "new_cluster": {
                        "spark_version": "7.3.x-scala2.12",
                        "node_type_id": "n1-standard-4",
                        "num_workers": 2,
                        "spark_env_vars": {},
                        "cluster_source": "JOB",
                        "gcp_attributes": {"use_preemptible_executors": True},
                    },
                    "libraries": [],
                    "email_notifications": {
                        "on_start": [],
                        "on_success": [],
                        "on_failure": [],
                    },
                    "max_retries": 0,
                    "spark_python_task": {
                        "python_file": "%s/jobs/sample/entrypoint.py" % project_slug,
                        "parameters": ["--conf-file", "conf/test/sample.json"],
                    },
                },
                {
                    "name": "%s-sample-integration-test" % project_name,
                    "new_cluster": {
                        "spark_version": "7.3.x-scala2.12",
                        "node_type_id": "n1-standard-4",
                        "num_workers": 2,
                        "spark_env_vars": {},
                        "cluster_source": "JOB",
                        "gcp_attributes": {"use_preemptible_executors": True},
                    },
                    "libraries": [],
                    "email_notifications": {
                        "on_start": [],
                        "on_success": [],
                        "on_failure": [],
                    },
                    "max_retries": 0,
                    "spark_python_task": {
                        "python_file": "tests/integration/sample_test.py"
                    },
                },
            ]
        }
    },
}


def replace_vars(file_path: str):
    _path = Path(file_path)
    content = _path.read_text().format(
        project_name=project_name, environment=environment, profile=profile
    )
    _path.write_text(content)


class PostProcessor:
    @staticmethod
    def process():

        if cicd_tool == "GitHub Actions":
            os.remove("azure-pipelines.yml")
            os.remove(".gitlab-ci.yml")

            replace_vars(".github/workflows/onpush.yml")
            replace_vars(".github/workflows/onrelease.yml")

        if cicd_tool == "Azure DevOps":
            shutil.rmtree(".github")
            os.remove(".gitlab-ci.yml")

        if cicd_tool == "GitLab":
            shutil.rmtree(".github")
            os.remove("azure-pipelines.yml")

        deployment = json.dumps(DEPLOYMENT[cloud], indent=4)
        deployment_file = Path("conf/deployment.json")
        if not deployment_file.parent.exists():
            deployment_file.parent.mkdir()
        deployment_file.write_text(deployment)
        project_file = Path(".dbx/project.json")
        if not project_file.parent.exists():
            project_file.parent.mkdir()
        project_file.write_text(json.dumps(PROJECT_FILE_CONTENT, indent=2))
        Path(".dbx/lock.json").write_text("{}")
        os.system("git init")


if __name__ == "__main__":
    post_processor = PostProcessor()
    post_processor.process()


================================================
FILE: pytest.ini
================================================
[pytest]
addopts = -s -p no:warnings
log_cli = 1
log_cli_level = INFO
log_cli_format = [pytest][%(asctime)s][%(levelname)s][%(module)s][%(funcName)s] %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
log_level = INFO

================================================
FILE: requirements.txt
================================================
setuptools
wheel
cookiecutter
pathlib
pytest
path
python-dotenv
pygithub
requests
pynacl
click
databricks-cli

================================================
FILE: tests/__init__.py
================================================


================================================
FILE: tests/test_e2e_local.py
================================================
import unittest
from .utils import CicdTemplatesTest
import logging


class LocalExecuteTest(CicdTemplatesTest):
    def tearDown(self) -> None:
        logging.info(self.project_path)

    def test_local_execute_azure(self):
        with self.project_path:
            self.execute_command("pip install dbx")
            self.execute_command("dbx deploy")
            self.execute_command(
                f"dbx execute --cluster-name=cicd-templates.testing --job={self.project_name}-sample"
            )


if __name__ == "__main__":
    unittest.main()


================================================
FILE: tests/utils.py
================================================
import logging
import os
import pathlib
import shutil
import tempfile
import unittest
from uuid import uuid4

from cookiecutter.main import cookiecutter
from path import Path


class CicdTemplatesTest(unittest.TestCase):
    TEMPLATE_PATH = str(pathlib.Path(".").absolute())

    def setUp(self) -> None:
        self.test_dir = tempfile.mkdtemp()
        self.project_name = "cicd_templates_%s" % str(uuid4()).split("-")[0]
        logging.info(
            f"Launching test with project name {self.project_name} and test dir {self.test_dir}"
        )
        self.prepare_repository()
        self.project_path = Path(self.test_dir).joinpath(self.project_name)

    def tearDown(self) -> None:
        logging.info(f"Deleting test directory: {self.test_dir}")
        shutil.rmtree(self.test_dir)

    def prepare_repository(self):
        cookiecutter(
            template=self.TEMPLATE_PATH,
            no_input=True,
            output_dir=self.test_dir,
            extra_context={
                "project_name": self.project_name,
                "cloud": "Azure",
                "cicd_tool": "GitHub Actions",
                "profile": "dbx-dev-azure",
            },
        )

    def execute_command(self, cmd):
        exit_code = os.system(cmd)
        self.assertEqual(exit_code, 0)


================================================
FILE: tox.ini
================================================
[flake8]
max-line-length = 79
max-complexity = 10


================================================
FILE: utils/profile_creator.py
================================================
# platform independent profile configurator
import click
from databricks_cli.configure.cli import DatabricksConfig, update_and_persist_config


@click.command()
@click.option("--profile", required=True, type=str)
@click.option("--host", required=True, type=str)
@click.option("--token", required=True, type=str)
def configure(profile: str, host: str, token: str):
    new_config = DatabricksConfig.from_token(host, token, False)
    update_and_persist_config(profile, new_config)


if __name__ == "__main__":
    configure()


================================================
FILE: {{cookiecutter.project_name}}/.coveragerc
================================================
[run]
branch = True
source = {{cookiecutter.project_slug}}

[report]
exclude_lines =
    if self.debug:
    pragma: no cover
    raise NotImplementedError
    if __name__ == .__main__.:

ignore_errors = True
omit =
    tests/*
    setup.py
    # this file is autogenerated by cicd-templates
    {{cookiecutter.project_slug}}/common.py

================================================
FILE: {{cookiecutter.project_name}}/.github/workflows/onpush.yml
================================================
name: Test pipeline

on:
  push:
    branches:
      - '**'
    tags-ignore:
      - 'v*' # this tag type is used for release pipelines

jobs:
  test-pipeline:

    runs-on: ubuntu-latest
    strategy:
      max-parallel: 4

    env:
      DATABRICKS_HOST: ${{{{ secrets.DATABRICKS_HOST }}}}
      DATABRICKS_TOKEN:  ${{{{ secrets.DATABRICKS_TOKEN }}}}

    steps:
      - uses: actions/checkout@v1

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.7.5

      - name: Install pip
        run: |
          python -m pip install --upgrade pip

      - name: Install dependencies and project in dev mode
        run: |
          pip install -r unit-requirements.txt
          pip install -e .

      - name: Run unit tests
        run: |
          echo "Launching unit tests"
          pytest tests/unit

      - name: Deploy integration test
        run: |
          dbx deploy --jobs={project_name}-sample-integration-test --files-only

      - name: Run integration test
        run: |
          dbx launch --job={project_name}-sample-integration-test --as-run-submit --trace





================================================
FILE: {{cookiecutter.project_name}}/.github/workflows/onrelease.yml
================================================
name: Release pipeline

on:
  push:
    tags:
      - 'v*' # Push events to matching v*, i.e. v1.0, v20.15.10


jobs:
  release-pipeline:

    runs-on: ubuntu-latest
    strategy:
      max-parallel: 4
      matrix:
        python-version: [ 3.7 ]

    env:
      DATABRICKS_HOST: ${{{{ secrets.DATABRICKS_HOST }}}}
      DATABRICKS_TOKEN:  ${{{{ secrets.DATABRICKS_TOKEN }}}}

    steps:
      - uses: actions/checkout@v1

      - name: Set up Python
        uses: actions/setup-python@v1
        with:
          python-version: 3.7

      - name: Install pip
        run: |
          python -m pip install --upgrade pip

      - name: Install dependencies and project in dev mode
        run: |
          pip install -r unit-requirements.txt

      - name: Deploy the job
        run: |
          dbx deploy --jobs={project_name}-sample

      - name: Create Release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{{{ secrets.GITHUB_TOKEN }}}} # This token is provided by Actions
        with:
          tag_name: ${{{{ github.ref }}}}
          release_name: Release ${{{{ github.ref }}}}
          body: |
            Release for version ${{{{ github.ref }}}}.
          draft: false
          prerelease: false




================================================
FILE: {{cookiecutter.project_name}}/.gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# Distribution / packaging
*.egg-info/
build
dist

# Unit test / coverage reports
.coverage
coverage.xml
junit/*
htmlcov/*

# Caches
.pytest_cache/

# VSCode
.vscode/

# Idea
.idea/
*.iml

# MacOS
.DS_Store

# Databricks eXtensions
.dbx/lock.json

# local mlflow files
mlruns/

================================================
FILE: {{cookiecutter.project_name}}/.gitlab-ci.yml
================================================
stages:
    - unit-testing
    - integration-testing
    - release

unit-testing:
  stage: unit-testing
  image: python:3.7-stretch
  script:
    - echo "Install dependencies"
    - apt-get update
    - apt-get install default-jdk -y
    - pip install -r unit-requirements.txt
    - pip install -e .
    - echo "Launching unit tests"
    - pytest tests/unit

integration-testing:
  stage: integration-testing
  image: python:3.7-stretch
  script:
    - echo "Install dependencies"
    - pip install -r unit-requirements.txt
    - pip install -e .
    - echo "Deploying tests"
    - dbx deploy --jobs={{cookiecutter.project_name}}-integration-test --files-only
    - echo "Running tests"
    - dbx launch --job={{cookiecutter.project_name}}-integration-test --as-run-submit --trace

release:
  stage: release
  image: python:3.7-stretch
  only:
    refs:
      - master
  script:
    - echo "Install dependencies"
    - pip install -r unit-requirements.txt
    - pip install -e .
    - echo "Deploying Job"
    - dbx deploy --jobs={{cookiecutter.project_name}}-sample


================================================
FILE: {{cookiecutter.project_name}}/README.md
================================================
# {{cookiecutter.project_name}}

This is a sample project for Databricks, generated via cookiecutter.

While using this project, you need Python 3.X and `pip` or `conda` for package management.

## Installing project requirements

```bash
pip install -r unit-requirements.txt
```

## Install project package in a developer mode

```bash
pip install -e .
```

## Testing

For local unit testing, please use `pytest`:
```
pytest tests/unit --cov
```

For an integration test on interactive cluster, use the following command:
```
dbx execute --cluster-name=<name of interactive cluster> --job={{cookiecutter.project_name}}-sample-integration-test
```

For a test on an automated job cluster, deploy the job files and then launch:
```
dbx deploy --jobs={{cookiecutter.project_name}}-sample-integration-test --files-only
dbx launch --job={{cookiecutter.project_name}}-sample-integration-test --as-run-submit --trace
```

## Interactive execution and development

1. `dbx` expects that cluster for interactive execution supports `%pip` and `%conda` magic [commands](https://docs.databricks.com/libraries/notebooks-python-libraries.html).
2. Please configure your job in the `conf/deployment.json` file. 
3. To execute the code interactively, provide either `--cluster-id` or `--cluster-name`.
```bash
dbx execute \
    --cluster-name="<some-cluster-name>" \
    --job=job-name
```

Multiple users can also use the same cluster for development. Libraries will be isolated per execution context.

## Preparing deployment file

The next step is to configure your deployment objects. To make this process easy and flexible, JSON is used for configuration.

By default, deployment configuration is stored in `conf/deployment.json`.
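
For reference, the Azure variant of this file generated by the template contains entries similar to the following (trimmed for brevity):

```json
{
    "default": {
        "jobs": [
            {
                "name": "{{cookiecutter.project_name}}-sample",
                "new_cluster": {
                    "spark_version": "7.3.x-cpu-ml-scala2.12",
                    "node_type_id": "Standard_F4s",
                    "num_workers": 2
                },
                "libraries": [],
                "max_retries": 0,
                "spark_python_task": {
                    "python_file": "{{cookiecutter.project_slug}}/jobs/sample/entrypoint.py",
                    "parameters": ["--conf-file", "conf/test/sample.json"]
                }
            }
        ]
    }
}
```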

## Deployment for Run Submit API

To deploy only the files, without overriding the job definitions, run the following:

```bash
dbx deploy --files-only
```

To launch the file-based deployment:
```
dbx launch --as-run-submit --trace
```

This type of deployment is handy for working in different branches, as it doesn't affect the main job definition.

## Deployment for Run Now API

To deploy files and update the job definitions:

```bash
dbx deploy
```

To launch the file-based deployment:
```
dbx launch --job=<job-name>
```

This type of deployment should mainly be used from the CI pipeline in an automated way during a new release.


## CICD pipeline settings

Please set the following secrets or environment variables for your CI provider:
- `DATABRICKS_HOST`
- `DATABRICKS_TOKEN`

## Testing and releasing via CI pipeline

- To trigger the CI pipeline, simply push your code to the repository. If the CI provider is correctly configured, this will trigger the general testing pipeline.
- To trigger the release pipeline, get the current version from the `{{cookiecutter.project_slug}}/__init__.py` file and tag the current code version:
```
git tag -a v<your-project-version> -m "Release tag for version <your-project-version>"
git push origin --tags
```


================================================
FILE: {{cookiecutter.project_name}}/azure-pipelines.yml
================================================
variables:
- group: Databricks-environment

trigger:
  batch: true
  branches:
    include:
    - '*'

  tags:
    include:
      - v*.*
      - prod

stages:
- stage: onPush
  jobs:
  - job: onPushJob
    pool:
      vmImage: 'ubuntu-18.04'

    steps:
    - script: env | sort
      displayName: 'Environment / Context'

    - task: UsePythonVersion@0
      displayName: 'Use Python 3.7'
      inputs:
        versionSpec: 3.7

    - checkout: self
      persistCredentials: true
      clean: true
      displayName: 'Checkout & Build.Reason: $(Build.Reason) & Build.SourceBranchName: $(Build.SourceBranchName)'

    - script: |
        python -m pip install --upgrade pip
        pip install -r unit-requirements.txt
        pip install -e .
      displayName: 'Install dependencies'

    - script: |
        pytest tests/unit --junitxml=test-unit.xml
      displayName: 'Run Unit tests'

    - script: |
        dbx deploy --jobs={{cookiecutter.project_name}}-sample-integration-test --files-only
      displayName: 'Deploy integration test'

    - script: |
        dbx launch --job={{cookiecutter.project_name}}-sample-integration-test --as-run-submit --trace
      displayName: 'Launch integration on test'

    - task: PublishTestResults@2
      condition: succeededOrFailed()
      inputs:
        testResultsFormat: 'JUnit'
        testResultsFiles: '**/test-*.xml' 
        failTaskOnFailedTests: true

- stage: onRelease
  condition: |
    or(
      startsWith(variables['Build.SourceBranch'], 'refs/heads/releases'),
      startsWith(variables['Build.SourceBranch'], 'refs/tags/v')
    )
  jobs:
  - job: onReleaseJob
    pool:
      vmImage: 'ubuntu-18.04'

    steps:
      - script: env | sort
        displayName: 'Environment / Context'

      - task: UsePythonVersion@0
        displayName: 'Use Python 3.7'
        inputs:
          versionSpec: 3.7

      - checkout: self
        persistCredentials: true
        clean: true
        displayName: 'Checkout & Build.Reason: $(Build.Reason) & Build.SourceBranchName: $(Build.SourceBranchName)'

      - script: |
          python -m pip install --upgrade pip
          pip install -r unit-requirements.txt
          pip install -e .
        displayName: 'Install dependencies'

      - script: |
          pytest tests/unit --junitxml=test-unit.xml
        displayName: 'Run Unit tests'

      - script: |
          dbx deploy --jobs={{cookiecutter.project_name}}-sample
        displayName: 'Deploy the job'

      - task: PublishTestResults@2
        condition: succeededOrFailed()
        inputs:
          testResultsFormat: 'JUnit'
          testResultsFiles: '**/test-*.xml' 
          failTaskOnFailedTests: true


================================================
FILE: {{cookiecutter.project_name}}/conf/test/sample.json
================================================
{
  "output_format": "delta",
  "output_path": "dbfs:/dbx/tmp/test/{{cookiecutter.project_slug}}"
}

================================================
FILE: {{cookiecutter.project_name}}/pytest.ini
================================================
[pytest]
addopts = -s -p no:warnings
log_cli = 1
log_cli_level = INFO
log_cli_format = [pytest][%(asctime)s][%(levelname)s][%(module)s][%(funcName)s] %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
log_level = INFO

================================================
FILE: {{cookiecutter.project_name}}/setup.py
================================================
from setuptools import find_packages, setup
from {{cookiecutter.project_slug}} import __version__

setup(
    name="{{cookiecutter.project_slug}}",
    packages=find_packages(exclude=["tests", "tests.*"]),
    setup_requires=["wheel"],
    version=__version__,
    description="{{cookiecutter.description}}",
    author="{{cookiecutter.author}}",
)


================================================
FILE: {{cookiecutter.project_name}}/tests/integration/sample_test.py
================================================
import unittest

from {{cookiecutter.project_slug}}.jobs.sample.entrypoint import SampleJob
from uuid import uuid4
from pyspark.dbutils import DBUtils  # noqa


class SampleJobIntegrationTest(unittest.TestCase):
    def setUp(self):

        self.test_dir = "dbfs:/tmp/tests/sample/%s" % str(uuid4())
        self.test_config = {"output_format": "delta", "output_path": self.test_dir}

        self.job = SampleJob(init_conf=self.test_config)
        self.dbutils = DBUtils(self.job.spark)
        self.spark = self.job.spark

    def test_sample(self):

        self.job.launch()

        output_count = (
            self.spark.read.format(self.test_config["output_format"])
            .load(self.test_config["output_path"])
            .count()
        )

        self.assertGreater(output_count, 0)

    def tearDown(self):
        self.dbutils.fs.rm(self.test_dir, True)


if __name__ == "__main__":
    # please don't change the logic of test result checks here
    # it's intentionally done in this way to comply with jobs run result checks
    # for other tests, please simply replace the SampleJobIntegrationTest with your custom class name
    loader = unittest.TestLoader()
    tests = loader.loadTestsFromTestCase(SampleJobIntegrationTest)
    runner = unittest.TextTestRunner(verbosity=2)
    result = runner.run(tests)
    if not result.wasSuccessful():
        raise RuntimeError(
            "One or multiple tests failed. Please check job logs for additional information."
        )


================================================
FILE: {{cookiecutter.project_name}}/tests/unit/sample_test.py
================================================
import unittest
import tempfile
import os
import shutil

from {{cookiecutter.project_slug}}.jobs.sample.entrypoint import SampleJob
from pyspark.sql import SparkSession
from unittest.mock import MagicMock

class SampleJobUnitTest(unittest.TestCase):
    def setUp(self):
        self.test_dir = tempfile.TemporaryDirectory().name
        self.spark = SparkSession.builder.master("local[1]").getOrCreate()
        self.test_config = {
            "output_format": "parquet",
            "output_path": os.path.join(self.test_dir, "output"),
        }
        self.job = SampleJob(spark=self.spark, init_conf=self.test_config)

    def test_sample(self):
        # feel free to add new methods to this magic mock to mock some particular functionality
        self.job.dbutils = MagicMock()

        self.job.launch()

        output_count = (
            self.spark.read.format(self.test_config["output_format"])
            .load(self.test_config["output_path"])
            .count()
        )

        self.assertGreater(output_count, 0)

    def tearDown(self):
        shutil.rmtree(self.test_dir)


if __name__ == "__main__":
    unittest.main()


================================================
FILE: {{cookiecutter.project_name}}/unit-requirements.txt
================================================
setuptools
wheel
pyspark==3.0.0
pytest
pytest-cov
dbx

================================================
FILE: {{cookiecutter.project_name}}/{{cookiecutter.project_slug}}/__init__.py
================================================
__version__ = "0.0.1"


================================================
FILE: {{cookiecutter.project_name}}/{{cookiecutter.project_slug}}/common.py
================================================
import json
from abc import ABC, abstractmethod
from argparse import ArgumentParser
from logging import Logger
from typing import Dict, Any

from pyspark.sql import SparkSession
import sys


# abstract class for jobs
class Job(ABC):

    def __init__(self, spark=None, init_conf=None):
        self.spark = self._prepare_spark(spark)
        self.logger = self._prepare_logger()
        self.dbutils = self.get_dbutils()
        if init_conf:
            self.conf = init_conf
        else:
            self.conf = self._provide_config()
        self._log_conf()

    @staticmethod
    def _prepare_spark(spark) -> SparkSession:
        if not spark:
            return SparkSession.builder.getOrCreate()
        else:
            return spark

    @staticmethod
    def _get_dbutils(spark: SparkSession):
        try:
            # pyspark.dbutils is only importable on a Databricks runtime
            from pyspark.dbutils import DBUtils  # noqa
            return DBUtils(spark)
        except ImportError:
            return None

    def get_dbutils(self):
        utils = self._get_dbutils(self.spark)

        if not utils:
            self.logger.warn("No DBUtils defined in the runtime")
        else:
            self.logger.info("DBUtils class initialized")

        return utils

    def _provide_config(self):
        self.logger.info("Reading configuration from --conf-file job option")
        conf_file = self._get_conf_file()
        if not conf_file:
            self.logger.info(
                "No conf file was provided, setting configuration to empty dict. "
                "Please override configuration in subclass init method"
            )
            return {}
        else:
            self.logger.info(
                f"Conf file was provided, reading configuration from {conf_file}"
            )
            return self._read_config(conf_file)

    @staticmethod
    def _get_conf_file():
        p = ArgumentParser()
        p.add_argument("--conf-file", required=False, type=str)
        namespace = p.parse_known_args(sys.argv[1:])[0]
        return namespace.conf_file

    def _read_config(self, conf_file) -> Dict[str, Any]:
        raw_content = "".join(
            self.spark.read.format("text").load(conf_file).toPandas()["value"].tolist()
        )
        config = json.loads(raw_content)
        return config

    def _prepare_logger(self) -> Logger:
        log4j_logger = self.spark._jvm.org.apache.log4j # noqa
        return log4j_logger.LogManager.getLogger(self.__class__.__name__)

    def _log_conf(self):
        # log parameters
        self.logger.info("Launching job with configuration parameters:")
        for key, item in self.conf.items():
            self.logger.info("\t Parameter: %-30s with value => %-30s" % (key, item))

    @abstractmethod
    def launch(self):
        """
        Main method of the job.
        :return:
        """
        pass
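
# Note on _get_conf_file: it uses parse_known_args rather than parse_args so
# that extra launcher arguments (for example, flags injected by dbx or
# spark-submit) do not cause a parsing error. A minimal standalone sketch of
# that behaviour (the function name and argument values below are
# illustrative, not part of the project):

```python
from argparse import ArgumentParser


def get_conf_file(argv):
    # Mirrors Job._get_conf_file: parse_known_args returns (namespace, leftovers)
    # and silently ignores unrecognised arguments instead of raising
    p = ArgumentParser()
    p.add_argument("--conf-file", required=False, type=str)
    namespace = p.parse_known_args(argv)[0]
    return namespace.conf_file


# Unknown flags are tolerated; --conf-file is still picked up
print(get_conf_file(["--conf-file", "conf/test/sample.json", "--unknown", "x"]))
# -> conf/test/sample.json
print(get_conf_file([]))
# -> None
```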


================================================
FILE: {{cookiecutter.project_name}}/{{cookiecutter.project_slug}}/jobs/__init__.py
================================================


================================================
FILE: {{cookiecutter.project_name}}/{{cookiecutter.project_slug}}/jobs/sample/__init__.py
================================================


================================================
FILE: {{cookiecutter.project_name}}/{{cookiecutter.project_slug}}/jobs/sample/entrypoint.py
================================================
from {{cookiecutter.project_slug}}.common import Job


class SampleJob(Job):

    def launch(self):
        self.logger.info("Launching sample job")

        listing = self.dbutils.fs.ls("dbfs:/")

        for file_info in listing:
            self.logger.info(f"DBFS directory: {file_info}")

        df = self.spark.range(0, 1000)

        df.write.format(self.conf["output_format"]).mode("overwrite").save(
            self.conf["output_path"]
        )

        self.logger.info("Sample job finished!")


if __name__ == "__main__":
    job = SampleJob()
    job.launch()
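
# Note: outside a Databricks runtime, Job.get_dbutils returns None, so the
# dbutils.fs.ls call in launch() would fail. The unit test in
# tests/unit/sample_test.py works around this by swapping in a MagicMock; the
# same idea in isolation (the return value below is illustrative):

```python
from unittest.mock import MagicMock

# Hypothetical stand-in for the dbutils handle available on Databricks
dbutils = MagicMock()
dbutils.fs.ls.return_value = ["dbfs:/tmp/", "dbfs:/user/"]

# launch() can now call dbutils.fs.ls without a real DBFS behind it
listing = dbutils.fs.ls("dbfs:/")
print(listing)
# -> ['dbfs:/tmp/', 'dbfs:/user/']
```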
