Full Code of confluentinc/ducktape for AI

Repository: confluentinc/ducktape
Branch: master
Commit: 7b1e2132b337
Files: 193
Total size: 650.7 KB

Directory structure:
gitextract_muj9xuou/

├── .coveragerc
├── .dockerignore
├── .github/
│   └── ISSUE_TEMPLATE/
│       └── bug_report.md
├── .gitignore
├── .readthedocs.yaml
├── .semaphore/
│   └── semaphore.yml
├── CODEOWNERS
├── Dockerfile
├── Jenkinsfile.disabled
├── README.md
├── Vagrantfile
├── docs/
│   ├── Makefile
│   ├── README.md
│   ├── _static/
│   │   └── theme_overrides.css
│   ├── api/
│   │   ├── clusters.rst
│   │   ├── remoteaccount.rst
│   │   ├── services.rst
│   │   ├── templates.rst
│   │   └── test.rst
│   ├── api.rst
│   ├── changelog.rst
│   ├── conf.py
│   ├── debug_tests.rst
│   ├── index.rst
│   ├── install.rst
│   ├── make.bat
│   ├── misc.rst
│   ├── new_services.rst
│   ├── new_tests.rst
│   ├── requirements.txt
│   ├── run_tests.rst
│   └── test_clusters.rst
├── ducktape/
│   ├── __init__.py
│   ├── __main__.py
│   ├── cluster/
│   │   ├── __init__.py
│   │   ├── cluster.py
│   │   ├── cluster_node.py
│   │   ├── cluster_spec.py
│   │   ├── consts.py
│   │   ├── finite_subcluster.py
│   │   ├── json.py
│   │   ├── linux_remoteaccount.py
│   │   ├── localhost.py
│   │   ├── node_container.py
│   │   ├── node_spec.py
│   │   ├── remoteaccount.py
│   │   ├── vagrant.py
│   │   └── windows_remoteaccount.py
│   ├── command_line/
│   │   ├── __init__.py
│   │   ├── defaults.py
│   │   ├── main.py
│   │   └── parse_args.py
│   ├── errors.py
│   ├── json_serializable.py
│   ├── jvm_logging.py
│   ├── mark/
│   │   ├── __init__.py
│   │   ├── _mark.py
│   │   ├── consts.py
│   │   ├── mark_expander.py
│   │   └── resource.py
│   ├── services/
│   │   ├── __init__.py
│   │   ├── background_thread.py
│   │   ├── service.py
│   │   └── service_registry.py
│   ├── template.py
│   ├── templates/
│   │   └── report/
│   │       ├── report.css
│   │       └── report.html
│   ├── tests/
│   │   ├── __init__.py
│   │   ├── event.py
│   │   ├── loader.py
│   │   ├── loggermaker.py
│   │   ├── reporter.py
│   │   ├── result.py
│   │   ├── runner.py
│   │   ├── runner_client.py
│   │   ├── scheduler.py
│   │   ├── serde.py
│   │   ├── session.py
│   │   ├── status.py
│   │   ├── test.py
│   │   └── test_context.py
│   └── utils/
│       ├── __init__.py
│       ├── http_utils.py
│       ├── local_filesystem_utils.py
│       ├── persistence.py
│       ├── terminal_size.py
│       └── util.py
├── requirements-test.txt
├── requirements.txt
├── ruff.toml
├── service.yml
├── setup.cfg
├── setup.py
├── systests/
│   ├── __init__.py
│   └── cluster/
│       ├── __init__.py
│       ├── test_debug.py
│       ├── test_no_cluster.py
│       ├── test_remote_account.py
│       └── test_runner_operations.py
├── tests/
│   ├── __init__.py
│   ├── cluster/
│   │   ├── __init__.py
│   │   ├── check_cluster.py
│   │   ├── check_cluster_spec.py
│   │   ├── check_finite_subcluster.py
│   │   ├── check_json.py
│   │   ├── check_localhost.py
│   │   ├── check_node_container.py
│   │   ├── check_remoteaccount.py
│   │   └── check_vagrant.py
│   ├── command_line/
│   │   ├── __init__.py
│   │   ├── check_main.py
│   │   └── check_parse_args.py
│   ├── ducktape_mock.py
│   ├── loader/
│   │   ├── __init__.py
│   │   ├── check_loader.py
│   │   └── resources/
│   │       ├── __init__.py
│   │       ├── loader_test_directory/
│   │       │   ├── README
│   │       │   ├── __init__.py
│   │       │   ├── invalid_test_suites/
│   │       │   │   ├── empty_file.yml
│   │       │   │   ├── malformed_test_suite.yml
│   │       │   │   ├── not_yaml.yml
│   │       │   │   ├── test_suite_refers_to_non_existent_file.yml
│   │       │   │   ├── test_suite_with_malformed_params.yml
│   │       │   │   └── test_suites_with_no_tests.yml
│   │       │   ├── name_does_not_match_pattern.py
│   │       │   ├── sub_dir_a/
│   │       │   │   ├── __init__.py
│   │       │   │   ├── test_c.py
│   │       │   │   └── test_d.py
│   │       │   ├── sub_dir_no_tests/
│   │       │   │   ├── __init__.py
│   │       │   │   └── just_some_file.py
│   │       │   ├── test_a.py
│   │       │   ├── test_b.py
│   │       │   ├── test_decorated.py
│   │       │   ├── test_suite_cyclic_a.yml
│   │       │   ├── test_suite_cyclic_b.yml
│   │       │   ├── test_suite_decorated.yml
│   │       │   ├── test_suite_import_malformed.yml
│   │       │   ├── test_suite_import_py.yml
│   │       │   ├── test_suite_malformed.yml
│   │       │   ├── test_suite_multiple.yml
│   │       │   ├── test_suite_single.yml
│   │       │   ├── test_suite_with_self_import.yml
│   │       │   ├── test_suite_with_single_import.yml
│   │       │   └── test_suites/
│   │       │       ├── refers_to_parent_dir.yml
│   │       │       ├── sub_dir_a_test_c.yml
│   │       │       ├── sub_dir_a_test_c_via_class.yml
│   │       │       ├── sub_dir_a_with_exclude.yml
│   │       │       ├── sub_dir_test_import.yml
│   │       │       └── test_suite_glob.yml
│   │       └── report.json
│   ├── logger/
│   │   ├── __init__.py
│   │   └── check_logger.py
│   ├── mark/
│   │   ├── __init__.py
│   │   ├── check_cluster_use_metadata.py
│   │   ├── check_env.py
│   │   ├── check_ignore.py
│   │   ├── check_parametrize.py
│   │   └── resources/
│   │       └── __init__.py
│   ├── reporter/
│   │   └── check_symbol_reporter.py
│   ├── runner/
│   │   ├── __init__.py
│   │   ├── check_runner.py
│   │   ├── check_runner_memory.py
│   │   ├── check_sender_receiver.py
│   │   ├── fake_remote_account.py
│   │   └── resources/
│   │       ├── __init__.py
│   │       ├── test_bad_actor.py
│   │       ├── test_failing_tests.py
│   │       ├── test_fails_to_init.py
│   │       ├── test_fails_to_init_in_setup.py
│   │       ├── test_memory_leak.py
│   │       ├── test_thingy.py
│   │       └── test_various_num_nodes.py
│   ├── scheduler/
│   │   ├── __init__.py
│   │   └── check_scheduler.py
│   ├── services/
│   │   ├── __init__.py
│   │   ├── check_background_thread_service.py
│   │   ├── check_jvm_logging.py
│   │   └── check_service.py
│   ├── templates/
│   │   ├── __init__.py
│   │   ├── service/
│   │   │   ├── __init__.py
│   │   │   ├── check_render.py
│   │   │   └── templates/
│   │   │       └── sample
│   │   └── test/
│   │       ├── __init__.py
│   │       ├── check_render.py
│   │       └── templates/
│   │           └── sample
│   ├── test_utils.py
│   ├── tests/
│   │   ├── __init__.py
│   │   ├── check_session.py
│   │   ├── check_test.py
│   │   └── check_test_context.py
│   └── utils/
│       ├── __init__.py
│       └── check_util.py
└── tox.ini

================================================
FILE CONTENTS
================================================

================================================
FILE: .coveragerc
================================================
[run]
omit =
    .git/*
    .tox/*
    docs/*
    setup.py
    test/*
    tests/*


================================================
FILE: .dockerignore
================================================
*


================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.md
================================================
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''

---
@confluentinc/devprod-apac-n-frameworks-eng is tagged for visibility

**Describe the bug**
A clear and concise description of what the bug is.


**To Reproduce**
Steps to reproduce the behavior

**Expected behavior**
A clear and concise description of what you expected to happen.

**Additional context**
Add any other context about the problem here.


================================================
FILE: .gitignore
================================================
# Ducktape
systests/.ducktape/
systests/results/
results
.ducktape
.vagrant

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
textcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
.virtualenvs/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

.idea
/.vagrant/
.vscode


================================================
FILE: .readthedocs.yaml
================================================
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

build:
  os: ubuntu-20.04
  tools:
    python: "3.13"

sphinx:
  configuration: docs/conf.py

python:
  install:
    - requirements: docs/requirements.txt
    - method: setuptools
      path: .


================================================
FILE: .semaphore/semaphore.yml
================================================
version: v1.0
name: pr-test-job
agent:
  machine:
    type: s1-prod-ubuntu24-04-amd64-1

execution_time_limit:
  hours: 1

global_job_config:
  prologue:
    commands:
      - checkout


blocks:
  - name: Test
    dependencies: []
    task:
      jobs:
        - name: Test Python 3.8
          commands:
            - sem-version python 3.8
            - pip install tox
            - export PYTESTARGS='--junitxml=test/results-py38.xml'
            - tox -e py38
        - name: Test Python 3.9
          commands:
            - sem-version python 3.9
            - pip install tox
            - export PYTESTARGS='--junitxml=test/results-py39.xml'
            - tox -e py39
        - name: Test Python 3.10
          commands:
            - sem-version python 3.10
            - pip install tox
            - export PYTESTARGS='--junitxml=test/results-py310.xml'
            - tox -e py310
        - name: Test Python 3.11
          commands:
            - sem-version python 3.11
            - pip install tox
            - export PYTESTARGS='--junitxml=test/results-py311.xml'
            - tox -e py311
        - name: Test Python 3.12
          commands:
            - sem-version python 3.12
            - pip install tox
            - export PYTESTARGS='--junitxml=test/results-py312.xml'
            - tox -e py312
        - name: Test Python 3.13
          commands:
            - sem-version python 3.13
            - pip install tox
            - export PYTESTARGS='--junitxml=test/results-py313.xml'
            - tox


================================================
FILE: CODEOWNERS
================================================
* @confluentinc/cp-test-frameworks-and-readiness


================================================
FILE: Dockerfile
================================================
# An image of ducktape that can be used to set up a Docker cluster where ducktape runs inside the container.

FROM ubuntu:14.04

RUN apt-get update && \
    apt-get install -y libffi-dev libssl-dev openssh-server python-dev python-pip python-virtualenv && \
    virtualenv /opt/ducktape && \
    . /opt/ducktape/bin/activate && \
    pip install -U pip setuptools wheel && \
    pip install bcrypt cryptography==2.2.2 pynacl && \
    mkdir /var/run/sshd && \
    mkdir /root/.ssh && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

ARG DUCKTAPE_VERSION=0.7.3

RUN . /opt/ducktape/bin/activate && \
    pip install ducktape==$DUCKTAPE_VERSION && \
    ln -s /opt/ducktape/bin/ducktape /usr/local/bin/ducktape && \
    deactivate && \
    /usr/local/bin/ducktape --version

EXPOSE 22

CMD    ["/usr/sbin/sshd", "-D"]


================================================
FILE: Jenkinsfile.disabled
================================================
python {
    publish = false  // Release is done manually to PyPI and not supported yet.
}


================================================
FILE: README.md
================================================
[![Documentation Status](https://readthedocs.org/projects/ducktape/badge/?version=latest)](https://ducktape.readthedocs.io/en/latest/?badge=latest)


Distributed System Integration & Performance Testing Library
============================================================

Overview
--------

Ducktape contains tools for running system integration and performance tests. It provides the following features:

* Isolation by default so system tests are as reliable as possible.
* Utilities for pulling up and tearing down services easily in clusters in different environments
  (e.g. local, custom cluster, Vagrant, K8s, Mesos, Docker, cloud providers, etc.)
* Easy to write unit tests for distributed systems
* Trigger special events (e.g. bouncing a service)
* Collect results (e.g. logs, console output)
* Report results (e.g. expected conditions met, performance results, etc.)

Documentation
-------------

For detailed documentation on how to install, run, and create new tests, please refer to: http://ducktape.readthedocs.io/

Contribute
----------

- Source Code: https://github.com/confluentinc/ducktape
- Issue Tracker: https://github.com/confluentinc/ducktape/issues

License
-------
The project is licensed under the Apache 2 license.


================================================
FILE: Vagrantfile
================================================
# Copyright 2015 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'socket'

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

# General config
enable_dns = false
num_workers = 3
ram_megabytes = 300
base_box = "ubuntu/focal64"

local_config_file = File.join(File.dirname(__FILE__), "Vagrantfile.local")
if File.exists?(local_config_file) then
  eval(File.read(local_config_file), binding, "Vagrantfile.local")
end

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.hostmanager.enabled = true
  config.hostmanager.manage_host = enable_dns
  config.hostmanager.include_offline = false

  ## Provider-specific global configs
  config.vm.provider :virtualbox do |vb,override|
    override.vm.box = base_box

    override.hostmanager.ignore_private_ip = false

    # Brokers started with the standard script currently set Xms and Xmx to 1G,
    # plus we need some extra head room.
    vb.customize ["modifyvm", :id, "--memory", ram_megabytes.to_s]
  end

  ## Cluster definition
  (1..num_workers).each { |i|
    name = "ducktape" + i.to_s
    config.vm.define name do |worker|
      worker.vm.hostname = name
      worker.vm.network :private_network, ip: "192.168.56." + (150 + i).to_s
    end
  }

end


================================================
FILE: docs/Makefile
================================================
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SPHINXPROJ    = ducktape
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

================================================
FILE: docs/README.md
================================================
Ducktape documentation quick start guide
========================================


Build the documentation
-----------------------

To render the pages, run:
```shell
tox -e docs
```

The rendered pages will be in `docs/_build/html`.


Specify documentation format
----------------------------

Documentation is built using the [sphinx-build](https://www.sphinx-doc.org/en/master/man/sphinx-build.html) command.
You can select which builder to use with the `SPHINX_BUILDER` environment variable:
```shell
SPHINX_BUILDER=man tox -e docs
```
All available values: https://www.sphinx-doc.org/en/master/man/sphinx-build.html#cmdoption-sphinx-build-M


Pass options to sphinx-build
----------------------------
Any argument after `--` will be passed to the 
[sphinx-build](https://www.sphinx-doc.org/en/master/man/sphinx-build.html) command directly:
```shell
tox -e docs -- -E
```




================================================
FILE: docs/_static/theme_overrides.css
================================================
/* override table width restrictions */
@media screen and (min-width: 767px) {

   .wy-table-responsive table td {
      /* !important prevents the common CSS stylesheets from overriding
         this as on RTD they are loaded after this stylesheet */
      white-space: normal !important;
   }

   .wy-table-responsive {
      overflow: visible !important;
   }
}

================================================
FILE: docs/api/clusters.rst
================================================
Clusters
========

.. autoclass:: ducktape.cluster.cluster.Cluster
    :members:

.. autoclass:: ducktape.cluster.vagrant.VagrantCluster
    :members:

.. autoclass:: ducktape.cluster.localhost.LocalhostCluster
    :members:

.. autoclass:: ducktape.cluster.json.JsonCluster
    :members:


================================================
FILE: docs/api/remoteaccount.rst
================================================
Remote Account
==============

.. autoclass:: ducktape.cluster.remoteaccount.RemoteAccount
    :members:

.. autoclass:: ducktape.cluster.remoteaccount.LogMonitor
    :members:

.. autoclass:: ducktape.cluster.linux_remoteaccount.LinuxRemoteAccount
    :members:

.. autoclass:: ducktape.cluster.windows_remoteaccount.WindowsRemoteAccount
    :members:



================================================
FILE: docs/api/services.rst
================================================
Services
========

.. autoclass:: ducktape.services.service.Service
    :members:

.. autoclass:: ducktape.services.background_thread.BackgroundThreadService
    :members:




================================================
FILE: docs/api/templates.rst
================================================
Template
========

.. autoclass:: ducktape.template.TemplateRenderer
    :members:

================================================
FILE: docs/api/test.rst
================================================
Test
====

.. autoclass:: ducktape.tests.test.Test
    :members:

.. autoclass:: ducktape.tests.test.TestContext
    :members:


================================================
FILE: docs/api.rst
================================================
.. _topics-api:

=======
API Doc
=======

.. toctree::
    api/test
    api/services
    api/remoteaccount
    api/clusters
    api/templates


================================================
FILE: docs/changelog.rst
================================================
.. _topics-changelog:

=========
Changelog
=========

0.14.0
======
Tuesday, March 10th, 2026
-------------------------
- Ensure log collection in case of runner client unresponsive issue
- Enable jvm logging for java based services
- Graceful shutdown and reporting fix for runner client unresponsive issue
- Add heterogeneous cluster support
- Add types to ducktape
- Call JUnitReporter after each test completes
- Update dependency requests to v2.32.4 [security]


0.13.0
======
Monday, June 09th, 2025
-----------------------
- Report expected test count in the summary
- Add python 3.13 support and run pr test job with all supported python versions
- Upgrade PyYAML and fix style error
- Add support for historical report in loader
- Update dependency requests to v2.32.2
- Update dependency pycryptodome to v3.19.1
- Removing generated internal project.yml, public project.yml


0.12.0
======
Friday, October 04th, 2024
--------------------------
- Store summary of previous runs when deflaking
- Runner Client Minor Refactor and Test
- Adding nodes used in test summary for HTML report
- Parse args from config files independently
- add support to python 3.10, 3.11 and 3.12


0.11.4
======
Friday, August 18th, 2023
-------------------------
- Updated `requests` version to 2.31.0

0.11.3
======
Wednesday, November 30th, 2022
------------------------------
- Bugfix: fixed an edge case when BackgroundThread wait() method errors out if start() method has never been called.

0.11.2
======
Wednesday, November 30th, 2022
------------------------------
- Bugfix: fixed an edge case when BackgroundThread wait() method errors out if start() method has never been called.

0.11.1
======
- Removed `tox` from requirements. It was not used, but was breaking our builds due to recent pushes to `virtualenv`.
- Bumped `jinja2` to `3.0.x`

0.11.0
======
- Option to fail tests without `@cluster` annotation. Deprecate ``min_cluster_spec()`` method in the ``Test`` class - `#336 <https://github.com/confluentinc/ducktape/pull/336>`_

0.10.3
======
Friday, August 18th, 2023
-------------------------
- Updated `requests` version to 2.31.0

0.10.2
======
- Removed `tox` from requirements. It was not used, but was breaking our builds due to recent pushes to `virtualenv`.

0.10.1
======
- Disable health checks for nodes, effectively disabling `#325 <https://github.com/confluentinc/ducktape/pull/325>`_. See github issue for details - `#339 <https://github.com/confluentinc/ducktape/issues/339>`_

0.10.0
======
- **DO NOT USE**, this release has a nasty bug - `#339 <https://github.com/confluentinc/ducktape/issues/339>`_
- Do not schedule tests on unresponsive nodes - `#325 <https://github.com/confluentinc/ducktape/pull/325>`_

0.9.4
=====
Friday, August 18th, 2023
-------------------------
- Updated `requests` version to 2.31.0

0.9.3
=====
- Removed `tox` from requirements. It was not used, but was breaking our builds due to recent pushes to `virtualenv`.

0.9.2
=====
- Service release, no ducktape changes, simply fixed readthedocs configs.

0.9.1
=====
- use a generic network device based on the devices found on the remote machine rather than a hardcoded one - `#314 <https://github.com/confluentinc/ducktape/pull/314>`_ and `#328 <https://github.com/confluentinc/ducktape/pull/328>`_
- clean up process properly after an exception during test runner execution - `#323 <https://github.com/confluentinc/ducktape/pull/323>`_
- log ssh errors - `#319 <https://github.com/confluentinc/ducktape/pull/319>`_
- update vagrant tests to use ubuntu20 - `#328 <https://github.com/confluentinc/ducktape/pull/328>`_
- added command to print the total number of nodes the tests run will require - `#320 <https://github.com/confluentinc/ducktape/pull/320>`_
- drop support for python 3.6 and add support for python 3.9 - `#317 <https://github.com/confluentinc/ducktape/pull/317>`_

0.9.0
=====
- Upgrade paramiko version to 2.10.0 - `#312 <https://github.com/confluentinc/ducktape/pull/312>`_
- Support SSH timeout - `#311 <https://github.com/confluentinc/ducktape/pull/311>`_

0.8.18
======
- Updated `requests` version to `2.31.0`

0.8.17
======
- Removed `tox` from requirements. It was not used, but was breaking our builds due to recent pushes to `virtualenv`.

0.8.x
=====
- Support test suites
- Easier way to rerun failed tests: generates a test suite with all the failed tests and also prints them in the log so that the user can copy and paste them as ducktape command line arguments
- Python 2 is no longer supported, minimum supported version is 3.6
- Added `--deflake N` flag - if provided, it will attempt to rerun each failed test up to N times, and if it eventually passes, it will be marked as Flaky - `#299 <https://github.com/confluentinc/ducktape/pull/299>`_
- [backport, also in 0.9.1] - use a generic network device based on the devices found on the remote machine rather than a hardcoded one - `#314 <https://github.com/confluentinc/ducktape/pull/314>`_ and `#328 <https://github.com/confluentinc/ducktape/pull/328>`_
- [backport, also in 0.9.1] - clean up process properly after an exception during test runner execution - `#323 <https://github.com/confluentinc/ducktape/pull/323>`_
- [backport, also in 0.9.1] - log ssh errors - `#319 <https://github.com/confluentinc/ducktape/pull/319>`_
- [backport, also in 0.9.1] - update vagrant tests to use ubuntu20 - `#328 <https://github.com/confluentinc/ducktape/pull/328>`_
- [backport, also in 0.9.1] - added command to print the total number of nodes the tests run will require - `#320 <https://github.com/confluentinc/ducktape/pull/320>`_


================================================
FILE: docs/conf.py
================================================
# -*- coding: utf-8 -*-
#
# ducktape documentation build configuration file, created by
# sphinx-quickstart on Mon Mar 13 14:06:07 2017.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
import sphinx_rtd_theme
from ducktape import __version__

sys.path.insert(0, os.path.abspath(".."))


# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ["sphinx.ext.viewcode", "sphinx.ext.autodoc", "sphinxarg.ext"]

# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = ".rst"

# The master toctree document.
master_doc = "index"

# General information about the project.
project = "Ducktape"
copyright = "2017, Confluent Inc."
author = "Confluent Inc."

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#

# The short X.Y version.
version = __version__
# The full version, including alpha/beta/rc tags.
release = version

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages.  See the documentation for
# a list of builtin themes.
#
html_theme = "sphinx_rtd_theme"

# Theme options are theme-specific and customize the look and feel of a theme
# further.  For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']


# -- Options for HTMLHelp output ------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = "ducktapedoc"


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',
    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, "ducktape.tex", "Ducktape Documentation", "Confluent Inc.", "manual"),
]


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [(master_doc, "ducktape", "Ducktape Documentation", [author], 1)]


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (
        master_doc,
        "ducktape",
        "Ducktape Documentation",
        author,
        "ducktape",
        "One line description of project.",
        "Miscellaneous",
    ),
]


# ---- Options for Autodoc ------------------------------------------------

autodoc_default_flags = ["show-inheritance"]


def skip(app, what, name, obj, skip, options):
    if name == "__init__":
        return False
    return skip


def setup(app):
    app.connect("autodoc-skip-member", skip)


================================================
FILE: docs/debug_tests.rst
================================================
.. _topics-debug_tests:

===========
Debug Tests
===========

The test results go in ``results/<date>--<test_number>``. For results from a particular test, look in the ``results/<date>--<test_number>/test_class_name/<test_method_name>/`` directory. The ``test_log.debug`` file contains the log output from the Python driver, and the logs of services used in the test are in the ``service_name/node_name`` sub-directory.

If there is not enough information in the logs, you can re-run the test with the ``--no-teardown`` argument.

.. code-block:: bash

    ducktape dir/tests/my_test.py::TestA.test_a --no-teardown


This will run the test but will not kill any running processes or remove log files when the test finishes. You can then examine the state of a running service or machine by logging into it while the service process is still running. Suppose you suspect that a particular service is the cause of the test failure. You can find out which machine was allocated to that service either by looking at ``test_log.debug`` or at the directory names under ``results/<date>--<test_number>/test_class_name/<test_method_name>/service_name/``. It can be useful to add an explicit debug log to the ``start_node`` method with the node ID and the node's hostname for easy debugging:

.. code-block:: python

    def start_node(self, node):
        idx = self.idx(node)
        self.logger.info("Starting ZK node %d on %s", idx, node.account.hostname)

The log statement will look something like this::

    [INFO  - 2017-03-28 22:07:25,222 - zookeeper - start_node - lineno:50]: Starting ZK node 1 on worker1

If you are using Vagrant, for example, you can then log into that node via:

.. code-block:: bash

    vagrant ssh worker1



Use Logging
===========

Distributed system tests can be difficult to debug, so add plenty of logging for debugging and for tracking the progress of the test. A good approach is to log the intention of an operation, along with any useful context, before any operation that can fail. It is also a good idea to use a higher logging level than you would in production so more information is available; for example, make your log levels default to DEBUG instead of INFO. Finally, put enough information into ``assert`` messages, as well as log messages, to help figure out what went wrong. Consider this example of testing an ElasticSearch service:

.. code-block:: python

        res = es.search(index="test-index", body={"query": {"match_all": {}}})
        self.logger.debug("result: %s" % res['hits'])
        assert res['hits']['total'] == 1, "Expected total 1 hit, but got %d" % res['hits']['total']
        for hit in res['hits']['hits']:
            assert 'kimchy' == hit['_source']['author'], "Expected author kimchy but got %s" % hit['_source']['author']
            assert 'Elasticsearch: cool.' == hit['_source']['text'], "Expected text 'Elasticsearch: cool.' but got %s" % hit['_source']['text']

First, the test outputs the result of a search, so that if any of the following assertions fail, we can see the whole result in ``test_log.debug``. The assertion messages help to quickly see the difference between expected and actual results.


Fail early
==========

Try to avoid a situation where a test fails because of an uncaught failure earlier in the test. Suppose we write a ``start_node`` method that does not check whether the service started successfully. The service fails to start, but the test failure we get indicates a problem querying the service. It would be much faster to debug the issue if the test failure pointed to the problem with starting the service. So make sure to add checks for operations that may fail, and fail the test earlier rather than later.


Flaky tests
============

Flaky tests are hard to debug because of their non-determinism; they waste time, and they sometimes hide real bugs, since developers tend to ignore intermittent failures. Flakiness can come from the test itself, the system under test, or environmental issues.

Waiting on Conditions
^^^^^^^^^^^^^^^^^^^^^

A common cause of a flaky test is an improper wait on an asynchronous condition: the test makes an asynchronous call and does not properly wait for its result to become available before using it::

    node.account.kill_process("zookeeper", allow_fail=False)
    time.sleep(2)
    assert not self.alive(node), "Expected Zookeeper service to stop"

In this example, the test terminates a Zookeeper service via ``kill_process`` and then uses ``time.sleep`` to wait for it to stop. If terminating the process takes longer than the sleep, the test fails, so the test may fail intermittently depending on how fast the process terminates. Of course, there should be a timeout on termination to ensure that the test does not run indefinitely. You could increase the sleep time, but that also increases the test run length. A more explicit way to express this condition is to use :meth:`~ducktape.utils.util.wait_until` with a timeout::

    node.account.kill_process("zookeeper", allow_fail=False)
    wait_until(lambda: not self.alive(node),
               timeout_sec=5,
               err_msg="Timed out waiting for zookeeper to stop.")

The test will proceed as soon as the condition is met, and the timeout ensures that the test does not run indefinitely if the process never terminates.
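
Under the hood, such a helper is just a poll loop against a deadline. Below is a minimal sketch of the idea, not ducktape's actual implementation (the real helper lives in ``ducktape.utils.util``); the demo "process" and its 0.3-second timer are made up for illustration:

```python
import threading
import time


def wait_until(condition, timeout_sec, backoff_sec=0.1, err_msg=""):
    """Poll condition() until it returns truthy, or raise after timeout_sec."""
    deadline = time.time() + timeout_sec
    while time.time() < deadline:
        if condition():
            return
        time.sleep(backoff_sec)
    raise TimeoutError(err_msg or "Condition not met within %s sec" % timeout_sec)


# Demo: a background "process" whose stopped flag flips after 0.3 seconds.
# wait_until returns as soon as the condition holds, instead of always
# sleeping for a fixed worst-case interval.
state = {"stopped": False}
timer = threading.Timer(0.3, lambda: state.update(stopped=True))
timer.start()

wait_until(lambda: state["stopped"], timeout_sec=5,
           err_msg="Timed out waiting for process to stop.")
print("stopped:", state["stopped"])
```

The key property is that the total wait adapts to how long the condition actually takes, while the timeout bounds the worst case.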

Think carefully about the condition to check. A common source of issues is an incorrect choice of the condition that signals a successful service start in a ``start_node`` implementation. One way to check that a service started successfully is to wait for some specific log output. However, make sure that this log message is only ever printed once the service is actually up: if the service may still fail to start after the message is printed, this can cause race conditions and flaky tests. It is sometimes better to check that the service is running by querying it, or by checking some metrics if they are available.


Test Order Dependency
^^^^^^^^^^^^^^^^^^^^^

Make sure that your services properly clean up their state in the ``clean_node`` implementation. Failure to do so can cause the next run of the test to fail, or to fail intermittently if other tests happen to clean the same directories. One of the benefits of the isolation that ducktape assumes is that you have complete control of the machine: it is OK to delete the entire working space, and it is safe to kill all Java processes you can find rather than being more targeted. So, clean up aggressively.

Incorrect Assumptions
^^^^^^^^^^^^^^^^^^^^^

It is possible that our assumptions about how the system under test works are incorrect. One way to help debug this is to add detailed comments explaining why certain checks are made.


Tools for Managing Logs
=======================

Analyzing and matching up logs from a distributed service can be time consuming. There are many good tools for working with logs, for example http://lnav.org/, http://list.xmodulo.com/multitail.html, and http://glogg.bonnefon.org/.

Validating SSH Issues
=====================

Ducktape supports running a custom validator when an SSH error occurs, allowing you to run your own validation against a host. To use it, run ducktape with the ``--ssh-checker-function`` flag, followed by the module path to your function, for instance::

    ducktape my-test.py --ssh-checker-function my.module.validator.validate_ssh

The function takes the raised SSH error as its first argument and the remote account object as its second.
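
A checker might look like the sketch below. The module path ``my.module.validator`` and the diagnostics are hypothetical; only the argument order (the raised error, then the remote account) comes from the contract above, and what the function does with those arguments is up to you:

```python
# Hypothetical contents of my/module/validator.py. The function receives the
# SSH error that was raised and the remote account for the host; here it just
# builds a diagnostic message, but it could also ping the host, check DNS, etc.
def validate_ssh(error, remote_account):
    hostname = getattr(remote_account, "hostname", "<unknown host>")
    return "ssh to %s failed: %s" % (hostname, error)


# Quick sanity check with stub objects standing in for the real arguments.
from types import SimpleNamespace

message = validate_ssh(OSError("connection refused"),
                       SimpleNamespace(hostname="worker1"))
print(message)
```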


================================================
FILE: docs/index.rst
================================================
.. _topics-index:

============================================================
Distributed System Integration & Performance Testing Library
============================================================
Ducktape contains tools for running system integration and performance tests. It provides the following features:

   * Write tests for distributed systems in a simple unit test-like style
   * Isolation by default so system tests are as reliable as possible
   * Utilities for pulling up and tearing down services easily in clusters in different environments (e.g. local, custom cluster, Vagrant, K8s, Mesos, Docker, cloud providers, etc.)
   * Trigger special events (e.g. bouncing a service)
   * Collect results (e.g. logs, console output)
   * Report results (e.g. expected conditions met, performance results, etc.)

.. toctree::
   install
   test_clusters
   run_tests
   new_tests
   new_services
   debug_tests
   api
   misc
   changelog

Contribute
==========

- Source Code: https://github.com/confluentinc/ducktape
- Issue Tracker: https://github.com/confluentinc/ducktape/issues

License
=======

The project is licensed under the Apache 2 license.


================================================
FILE: docs/install.rst
================================================
.. _topics-install:

=======
Install
=======

1. Ducktape requires python 3.7 or later.

2. Install `cryptography`_ (used by `paramiko`, which ducktape depends on); this may have non-Python external requirements

.. _cryptography: https://cryptography.io/en/latest/installation

    * OSX (if needed)::

        brew install openssl

    * Ubuntu::

        sudo apt-get install build-essential libssl-dev libffi-dev python-dev

    * Fedora and RHEL-derivatives::

        sudo yum install gcc libffi-devel python-devel openssl-devel


3. As a general rule, it's recommended to use an isolation tool such as ``virtualenv``

4. Install Ducktape::

    pip install ducktape

.. note::

    On OSX you may need to::

        C_INCLUDE_PATH=/usr/local/opt/openssl/include LIBRARY_PATH=/usr/local/opt/openssl/lib pip install ducktape

    If you are not using a virtualenv and get the error message `failed with error code 1`, you may need to install ducktape to your user directory instead with ::

        pip install --user ducktape


================================================
FILE: docs/make.bat
================================================
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build
set SPHINXPROJ=ducktape

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.http://sphinx-doc.org/
	exit /b 1
)

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%

:end
popd


================================================
FILE: docs/misc.rst
================================================
.. _topics-misc:

====
Misc
====

Developer Install
=================

If you are a ducktape developer, consider using the ``develop`` command instead of ``install``. This allows you to make code changes without constantly reinstalling ducktape (see http://stackoverflow.com/questions/19048732/python-setup-py-develop-vs-install for more information)::

    cd ducktape
    python setup.py develop

To uninstall::

    cd ducktape
    python setup.py develop --uninstall


Unit Tests
==========

You can run the tests with code coverage and style check using `tox <https://tox.readthedocs.io/en/latest/>`_::

    tox

Alternatively, you can activate the virtualenv and run pytest and ruff directly::

    source ~/.virtualenvs/ducktape/bin/activate
    pytest tests
    ruff check
    ruff format --check


System Tests
============

System tests are included under the `systests/` directory. These are end-to-end tests that run across multiple VMs, exercising ducktape in an environment similar to how it would be used in practice to test other projects.

The system tests run against virtual machines managed by `Vagrant <https://www.vagrantup.com/>`_. With Vagrant installed, start the VMs (3 by default)::

  vagrant up

From a developer install, running the system tests now looks the same as using ducktape on your own project::

  ducktape systests/

You should see the tests running, and then results and logs will be in the default directory, `results/`. By using a developer install, you can make modifications to the ducktape code and iterate on system tests without having to re-install after each modification.

When you're done running tests, you can destroy the VMs::

  vagrant destroy


Windows
=======

Ducktape supports Services that run on Windows, but only in EC2.

When a ``Service`` requires a Windows machine, AWS credentials must be configured on the machine running ducktape.

Ducktape uses the `boto3`_ Python module to connect to AWS, and ``boto3`` supports many different `configuration options`_.

.. _boto3: https://aws.amazon.com/sdk-for-python/
.. _configuration options: https://boto3.readthedocs.io/en/latest/guide/configuration.html#guide-configuration

Here's an example bare minimum configuration using environment variables::

    export AWS_ACCESS_KEY_ID="ABC123"
    export AWS_SECRET_ACCESS_KEY="secret"
    export AWS_DEFAULT_REGION="us-east-1"

The region can be any AWS region, not just ``us-east-1``.


================================================
FILE: docs/new_services.rst
================================================
.. _topics-new_services:

===================
Create New Services
===================

Writing ducktape services
=============================

``Service`` refers generally to multiple processes, possibly long-running, which you
want to run on the test cluster.

These can be services you would actually deploy (e.g., Kafka brokers, ZK servers, REST proxy) or processes used during testing (e.g. producer/consumer performance processes). Services that are distributed systems can support a variable number of nodes which allow them to handle a variety of tests.

Each service is implemented as a class and should at least implement the following:

    * :meth:`~ducktape.services.service.Service.start_node` - start the service (possibly waiting to ensure it started successfully)

    * :meth:`~ducktape.services.service.Service.stop_node` - kill processes on the given node

    * :meth:`~ducktape.services.service.Service.clean_node` - remove persistent state leftover from testing, e.g. log files

These may block to ensure services start or stop properly, but must *not* block for the full lifetime of the service. If you need to run a blocking process (e.g. run a process via SSH and iterate over its output), this should be done in a background thread. For services that exit after completing a fixed operation (e.g. produce N messages to topic foo), you should also implement ``wait``, which will usually just wait for background worker threads to exit. The ``Service`` base class provides a helper method ``run`` which wraps ``start``, ``wait``, and ``stop`` for tests that need to start a service and wait for it to finish. You can also provide additional helper methods for common test functionality. Normal services might provide a ``bounce`` method.

Most of the code you'll write for a service will just be a series of SSH commands and tests of output. You should request the number of nodes you'll need using the ``num_nodes`` or ``cluster_spec`` parameter to the Service base class's constructor. Then, in your Service's methods you'll have access to ``self.nodes`` to access the nodes allocated to your service. Each node has an associated :class:`~ducktape.cluster.remoteaccount.RemoteAccount` instance which lets you easily perform remote operations such as running commands via SSH or creating files. By default, these operations try to hide output (but provide it to you if you need to extract some subset of it) and *check status codes for errors* so any operations that fail cause an obvious failure of the entire test.

.. _service-example-ref:

New Service Example
===================

Let’s walk through an example of writing a simple Zookeeper service.

.. code-block:: python

    class ZookeeperService(Service):
        PERSISTENT_ROOT = "/mnt"
        LOG_FILE = os.path.join(PERSISTENT_ROOT, "zk.log")
        DATA_DIR = os.path.join(PERSISTENT_ROOT, "zookeeper")
        CONFIG_FILE = os.path.join(PERSISTENT_ROOT, "zookeeper.properties")

        logs = {
            "zk_log": {
                "path": LOG_FILE,
                "collect_default": True},
            "zk_data": {
                "path": DATA_DIR,
                "collect_default": False}
        }

        def __init__(self, context, num_nodes):
            super(ZookeeperService, self).__init__(context, num_nodes)


``logs`` is a member of ``Service`` that provides a mechanism for locating and collecting log files produced by the service on its nodes. ``logs`` is a dict with entries that look like ``log_name: {"path": log_path, "collect_default": boolean}``. In our example, log files will be collected on both successful and failed test runs, while files from the data directory will be collected only on failed test runs. The Zookeeper service requests the number of nodes it needs by passing the ``num_nodes`` parameter through to the ``Service`` base class's constructor.

.. code-block:: python

        def start_node(self, node):
            idx = self.idx(node)
            self.logger.info("Starting ZK node %d on %s", idx, node.account.hostname)

            node.account.ssh("mkdir -p %s" % self.DATA_DIR)
            node.account.ssh("echo %d > %s/myid" % (idx, self.DATA_DIR))

            prop_file = """\n dataDir=%s\n clientPort=2181""" % self.DATA_DIR
            for i, n in enumerate(self.nodes, 1):
                prop_file += "\n server.%d=%s:2888:3888" % (i, n.account.hostname)
            self.logger.info("zookeeper.properties: %s" % prop_file)
            node.account.create_file(self.CONFIG_FILE, prop_file)

            start_cmd = "/opt/kafka/bin/zookeeper-server-start.sh %s 1>> %s 2>> %s &" % \
                    (self.CONFIG_FILE, self.LOG_FILE, self.LOG_FILE)

            with node.account.monitor_log(self.LOG_FILE) as monitor:
                node.account.ssh(start_cmd)
                monitor.wait_until(
                    "binding to port",
                    timeout_sec=100,
                    backoff_sec=7,
                    err_msg="Zookeeper service didn't finish startup"
                )
            self.logger.debug("Zookeeper service is successfully started.")


The ``start_node`` method first creates directories and the config file on the given node, and then invokes the start script to start a Zookeeper service. In this simple example, the config file is created from a manually constructed ``prop_file`` string, because it has only a couple of easy-to-construct lines. More complex config files can be created with templates, as described in :ref:`using-templates-ref`.

A service may take time to start and reach a usable state. Using sleeps to wait for a service to start often leads to a flaky test: the sleep time may be too short, or the service may fail to start altogether. It is useful to verify that the service started properly before returning from ``start_node``, and to fail the test if it did not. Otherwise, the test will likely fail later, and it will be harder to find the root cause of the failure. One way to check that the service started successfully is to check that the service's process is alive, plus one additional check that the service is usable, such as querying it or checking some metrics if they are available. Our example checks that the Zookeeper service started successfully by searching for particular output in a log file.

The :class:`~ducktape.cluster.remoteaccount.RemoteAccount` instance associated with each node provides a :class:`~ducktape.cluster.remoteaccount.LogMonitor` that lets you check or wait for a pattern to appear in a log. Our example waits up to 100 seconds for the "binding to port" string to appear in the ``self.LOG_FILE`` log file, and raises an exception if it does not.

.. code-block:: python

    def pids(self, node):
        try:
            cmd = "ps ax | grep -i zookeeper | grep java | grep -v grep | awk '{print $1}'"
            pid_arr = [pid for pid in node.account.ssh_capture(cmd, allow_fail=True, callback=int)]
            return pid_arr
        except (RemoteCommandError, ValueError):
            return []

    def alive(self, node):
        return len(self.pids(node)) > 0

    def stop_node(self, node):
        idx = self.idx(node)
        self.logger.info("Stopping %s node %d on %s" % (type(self).__name__, idx, node.account.hostname))
        node.account.kill_process("zookeeper", allow_fail=False)

    def clean_node(self, node):
        self.logger.info("Cleaning Zookeeper node %d on %s", self.idx(node), node.account.hostname)
        if self.alive(node):
            self.logger.warn("%s %s was still alive at cleanup time. Killing forcefully..." %
                             (self.__class__.__name__, node.account))
        node.account.kill_process("zookeeper", clean_shutdown=False, allow_fail=True)
        node.account.ssh("rm -rf /mnt/zookeeper /mnt/zookeeper.properties /mnt/zk.log",
                         allow_fail=False)


The ``stop_node`` method uses :meth:`~ducktape.cluster.remoteaccount.RemoteAccount.kill_process` to terminate the service process on the given node. If the remote command to terminate the process fails, :meth:`~ducktape.cluster.remoteaccount.RemoteAccount.kill_process` will raise a ``RemoteCommandError`` exception.

The ``clean_node`` method forcefully kills the process if it is still alive, and then removes persistent state left over from testing. Make sure to properly clean up this state to avoid test order dependencies and flaky tests. You can assume complete control of the machine, so it is safe to delete the entire temporary working space, kill all Java processes, etc.

.. _using-templates-ref:


Using Templates
===============

Both ``Service`` and ``Test`` subclass :class:`~ducktape.template.TemplateRenderer`, which lets you render templates directly from strings or from files loaded from the *templates/* directory relative to the class. A template contains variables and/or expressions, which are replaced with values when the template is rendered. :class:`~ducktape.template.TemplateRenderer` renders templates using the `Jinja2 <http://jinja.pocoo.org/docs/2.9/>`_ template engine. A good use case for templates is a properties file that needs to be passed to a service process. In :ref:`service-example-ref`, the properties file is created by building a string and using it as the contents as follows::

        prop_file = """\n dataDir=%s\n clientPort=2181""" % self.DATA_DIR
        for i, n in enumerate(self.nodes, 1):
            prop_file += "\n server.%d=%s:2888:3888" % (i, n.account.hostname)
        node.account.create_file(self.CONFIG_FILE, prop_file)

An alternative, template-based approach is to add a properties file in the *templates/* directory relative to the ZookeeperService class:

.. code-block:: jinja

    dataDir={{ DATA_DIR }}
    clientPort=2181
    {% for node in nodes %}
    server.{{ loop.index }}={{ node.account.hostname }}:2888:3888
    {% endfor %}


Suppose we named the file ``zookeeper.properties``. The creation of the config file then looks like this:

.. code-block:: python

        prop_file = self.render('zookeeper.properties')
        node.account.create_file(self.CONFIG_FILE, prop_file)
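
As a sketch of what the renderer does with this template, the same expansion can be reproduced with Jinja2 directly. The stub nodes and hostnames below are made up for illustration; a real service would call ``self.render`` as shown above:

```python
from types import SimpleNamespace

from jinja2 import Template  # the engine TemplateRenderer uses

# The zookeeper.properties template from above, inlined as a string here.
TEMPLATE = """dataDir={{ DATA_DIR }}
clientPort=2181
{% for node in nodes %}
server.{{ loop.index }}={{ node.account.hostname }}:2888:3888
{% endfor %}"""

# Stub nodes standing in for self.nodes; hostnames are invented.
nodes = [SimpleNamespace(account=SimpleNamespace(hostname="worker%d" % i))
         for i in (1, 2, 3)]

rendered = Template(TEMPLATE).render(DATA_DIR="/mnt/zookeeper", nodes=nodes)
print(rendered)
```

Note that ``loop.index`` is 1-based, so the generated ``server.N`` lines line up with the 1-based IDs written to each node's ``myid`` file.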



================================================
FILE: docs/new_tests.rst
================================================
.. _topics-new_tests:

================
Create New Tests
================

Writing ducktape Tests
======================

Subclass :class:`~ducktape.tests.test.Test` and implement as many ``test`` methods as you
want. The name of each test method must start or end with ``test``,
e.g. ``test_functionality`` or ``example_test``. Typically, a test will
start a few services, collect and/or validate some data, and then finish.

If the test method finishes with no exceptions, the test is recorded as successful, otherwise it is recorded as a failure.


Here is an example of a test that just starts a Zookeeper cluster with 2 nodes, and a
Kafka cluster with 3 nodes::

    class StartServicesTest(Test):
        """Make sure we can start Kafka and Zookeeper services."""
        def __init__(self, test_context):
            super(StartServicesTest, self).__init__(test_context=test_context)
            self.zk = ZookeeperService(test_context, num_nodes=2)
            self.kafka = KafkaService(test_context, num_nodes=3, zk=self.zk)

        def test_services_start(self):
            self.zk.start()
            self.kafka.start()

Test Parameters
===============

Use test decorators to parametrize tests; examples are provided below.

.. autofunction:: ducktape.mark.parametrize
.. autofunction:: ducktape.mark.matrix
.. autofunction:: ducktape.mark.resource.cluster
.. autofunction:: ducktape.mark.ignore

Logging
=======

The :class:`~ducktape.tests.test.Test` base class sets up a logger you can use, tagged with the class name,
so adding some logging for debugging or to track the progress of tests is easy::

    self.logger.debug("End-to-end latency %d: %s", idx, line.strip())

These types of tests can be difficult to debug, so err toward more rather than
less logging.

.. note:: Logs are collected at multiple log levels, and only higher log levels are displayed to the console while the test runs. Make sure you log at the appropriate level.

JVM Logging
-----------

For Java-based services, ducktape can automatically collect JVM diagnostic logs without requiring any code changes to services or tests. Enable it with the ``--enable-jvm-logs`` flag::

    ducktape --enable-jvm-logs <test_path>

When enabled, ducktape wraps the service's ``start_node`` and ``clean_node`` methods to:

- Create a log directory (``/mnt/jvm_logs``) on each worker node before the service starts.
- Prepend ``JDK_JAVA_OPTIONS`` with the JVM logging flags to every SSH command sent to the node, so the options are inherited by any Java process the service launches.
- Remove the log directory after ``clean_node`` runs and restore the original SSH methods.

The following JVM options are injected automatically:

.. list-table::
   :header-rows: 1
   :widths: 40 60

   * - Option
     - Purpose
   * - ``-Xlog:disable``
     - Suppress default JVM console output to avoid polluting test logs
   * - ``-Xlog:gc*:file=<log_dir>/gc.log``
     - GC activity with timestamps, uptime, level, and tags
   * - ``-XX:+HeapDumpOnOutOfMemoryError``
     - Generate a heap dump when an OOM error occurs
   * - ``-XX:HeapDumpPath=<log_dir>/heap_dump.hprof``
     - Location for the heap dump file
   * - ``-Xlog:safepoint=info:file=<log_dir>/jvm.log``
     - Safepoint pause events
   * - ``-Xlog:class+load=info:file=<log_dir>/jvm.log``
     - Class loading events
   * - ``-XX:ErrorFile=<log_dir>/hs_err_pid%p.log``
     - Fatal error log (JVM crashes)
   * - ``-XX:NativeMemoryTracking=summary``
     - Native memory usage tracking
   * - ``-Xlog:jit+compilation=info:file=<log_dir>/jvm.log``
     - JIT compilation events

The following log files are collected from each node:

.. list-table::
   :header-rows: 1
   :widths: 30 20 50

   * - File
     - Collected by default
     - Contents
   * - ``gc.log``
     - Yes
     - Garbage collection activity
   * - ``jvm.log``
     - Yes
     - Safepoint, class loading, and JIT compilation events
   * - ``heap_dump.hprof``
     - No (failure only)
     - Heap dump generated on OutOfMemoryError

.. note:: If a service or test injects its own ``-Xlog`` options as part of the command, those options will override the ones injected by JVM logging, since ducktape prepends ``JDK_JAVA_OPTIONS`` before the command. In practice, services should behave as expected.

New test example
================

Let's expand on the StartServicesTest example. The test starts a Zookeeper cluster with 2 nodes and a
Kafka cluster with 3 nodes, and then bounces a Kafka broker node, which is either the controller node or a non-controller node, depending on the ``bounce_controller_broker`` test parameter.

.. code-block:: python

    class StartServicesTest(Test):
        def __init__(self, test_context):
            super(StartServicesTest, self).__init__(test_context=test_context)
            self.zk = ZookeeperService(test_context, num_nodes=2)
            self.kafka = KafkaService(self.test_context, num_nodes=3, zk=self.zk)

        def setUp(self):
            self.zk.start()
            self.kafka.start()

        @matrix(bounce_controller_broker=[True, False])
        def test_broker_bounce(self, bounce_controller_broker=False):
            controller_node = self.kafka.controller()
            self.logger.debug("Found controller broker %s", controller_node.account)
            if bounce_controller_broker:
                bounce_node = controller_node
            else:
                bounce_node = self.kafka.nodes[(self.kafka.idx(controller_node) + 1) % self.kafka.num_nodes]

            self.logger.debug("Will hard kill broker %s", bounce_node.account)
            self.kafka.signal_node(bounce_node, sig=signal.SIGKILL)

            wait_until(lambda: not self.kafka.is_registered(bounce_node),
                       timeout_sec=self.kafka.zk_session_timeout + 5,
                       err_msg="Failed to see timely deregistration of hard-killed broker %s"
                               % bounce_node.account)

            self.kafka.start_node(bounce_node)

This will run two tests, one with ``bounce_controller_broker=True`` and another with ``bounce_controller_broker=False``. We moved the start of the Zookeeper and Kafka services to :meth:`~ducktape.tests.test.Test.setUp`, which is called before every test run.

The test finds which of the Kafka broker nodes is the special controller node via the ``controller`` method provided by KafkaService, which raises an exception if the controller node is not found. Make sure to check the behavior of methods provided by a service or other helper classes, and fail the test as soon as an issue is found. That way, it will be much easier to find the cause of the test failure.

The test then finds the node to bounce based on the ``bounce_controller_broker`` test parameter, and forcefully terminates the service process on that node via the ``signal_node`` method of KafkaService. This method just sends a signal to kill the process and does not do any further checks. Thus, our test needs to check that the hard-killed Kafka broker is no longer part of the Kafka cluster before restarting the killed broker process. We do this by waiting, with a timeout, for the ``is_registered`` method provided by KafkaService to return False, since de-registering the broker may take some time. Notice the use of ``wait_until`` instead of a check after ``time.sleep``; this allows the test to continue as soon as de-registration happens.

We don't check whether the restarted broker is registered, because this is already done in KafkaService's ``start_node`` implementation, which will raise an exception if the service does not start successfully on the given node.


================================================
FILE: docs/requirements.txt
================================================
Sphinx~=8.2.3
sphinx-argparse~=0.5.2
sphinx-rtd-theme~=3.0.2
boto3==1.33.13
pycryptodome==3.23.0
pywinrm==0.4.3
jinja2~=3.1.6
MarkupSafe~=2.1.5


================================================
FILE: docs/run_tests.rst
================================================
.. _topics-run_tests:

=========
Run Tests
=========

Running Tests
=============

ducktape discovers and runs tests in the path(s) provided.
You can specify a folder with tests (all tests in Python modules named with a "test\_" prefix or "_test" suffix will be
run), a specific test file (with any name), or even a specific class or test method, via absolute or relative paths.
You can optionally specify a specific set of parameters for tests with ``@parametrize`` or ``@matrix`` annotations::

    ducktape <relative_path_to_testdirectory>                   # e.g. ducktape dir/tests
    ducktape <relative_path_to_file>                            # e.g. ducktape dir/tests/my_test.py
    ducktape <path_to_test>[::SomeTestClass]                    # e.g. ducktape dir/tests/my_test.py::TestA
    ducktape <path_to_test>[::SomeTestClass[.test_method]]      # e.g. ducktape dir/tests/my_test.py::TestA.test_a
    ducktape <path_to_test>[::TestClass[.method[@params_json]]] # e.g. ducktape 'dir/tests/my_test.py::TestA.test_a@{"x": 100}'
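
The module-name discovery rule described above can be sketched as a simple predicate. This is illustrative only; ducktape's actual loader does more than match filenames:

```python
def looks_like_test_module(filename):
    """Matches the naming rule described above: Python modules named with
    a "test_" prefix or an "_test" suffix are considered test modules."""
    if not filename.endswith(".py"):
        return False
    base = filename[:-len(".py")]
    return base.startswith("test_") or base.endswith("_test")
```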


Excluding Tests
===============

Pass the ``--exclude`` flag to exclude certain tests from the run, using the same syntax::

    ducktape ./my_tests_dir --exclude ./my_tests_dir/test_a.py ./my_tests_dir/test_b.py::TestB.test_b



Test Suites
===========

A test suite is a collection of tests to run, optionally also specifying which tests to exclude. Test suites are specified
via a YAML file:

.. code-block:: yaml

    # list all tests that are part of the suite under the test suite name:
    my_test_suite:
        - ./my_tests_dir/  # paths are relative to the test suite file location
        - ./another_tests_dir/test_file.py::TestClass.test_method  # same syntax as passing tests directly to ducktape
        - './another_tests_dir/test_file.py::TestClass.parametrized_method@{"x": 100}'  # params are supported too
        - ./third_tests_dir/prefix_*.py  # basic globs are supported (* and ? characters)

    # each YAML file can contain one or more test suites:
    another_test_suite:
        # you can optionally specify excluded tests in the suite as well using the following syntax:
        included:
            - ./some_tests_dir/
        excluded:
            - ./some_tests_dir/*_large_test.py


Running Test Suites
===================

Test suites are run in the same fashion as individual tests.

Run a single test suite::

    ducktape ./path/to/test_suite.yml

Run multiple test suites::

    ducktape ./path/to/test_suite_1.yml ./test_suite_2.yml

You can specify both tests and test suites at the same time::

    ducktape ./my_test.py ./my_test_suite.yml ./another_test.py::TestClass.test_method

If the same test method is effectively specified more than once, it will only be executed once.

For example, if ``test_suite.yml`` lists ``test_a.py`` then running the following command
will execute ``test_a.py`` only once::

    ducktape test_suite.yml test_a.py

If you specify a folder, all tests (i.e. Python files) under that folder will be discovered, but test suites will not be.

For example, if ``test_dir`` contains ``my_test.py`` and ``my_test_suite.yml``, then running::

    ducktape ./test_dir

will execute ``my_test.py`` but skip ``my_test_suite.yml``.

To execute both ``my_test.py`` and ``my_test_suite.yml``, you need to specify the test suite path explicitly::

    ducktape ./test_dir/ ./test_dir/my_test_suite.yml



Exclude and Test Suites
=======================

The ``excluded`` section in a test suite applies only to that test suite. The ``--exclude`` parameter passed to ducktape
applies to all loaded tests and test suites.

For example, if ``test_dir`` contains ``test_a.py``, ``test_b.py`` and ``test_c.py``, and ``test_suite.yml`` is:

.. code-block:: yaml

    suite_one:
        included:
            - ./test_dir/*.py
        excluded:
            - ./test_dir/test_a.py
    suite_two:
        included:
            - ./test_dir/
        excluded:
            - ./test_dir/test_b.py

Then running::

    ducktape test_suite.yml

runs each of ``test_a.py``, ``test_b.py`` and ``test_c.py`` once.


But running::

    ducktape test_suite.yml --exclude test_dir/test_a.py

runs only ``test_b.py`` and ``test_c.py`` once, and skips ``test_a.py``.


Options
=======

To see a complete listing of options run::

    ducktape --help

.. argparse::
   :module: ducktape.command_line.parse_args
   :func: create_ducktape_parser
   :prog: ducktape

Configuration File
==================

You can configure options in three locations: on the command line (highest priority), in a user configuration file at
``~/.ducktape/config``, and in a project-specific configuration file at ``<project_dir>/.ducktape/config`` (lowest priority).
Configuration files use the same syntax as command-line arguments and may split arguments across multiple lines::

    --debug
    --exit-first
    --cluster=ducktape.cluster.json.JsonCluster

Output
======

Test results go in ``results/<session_id>``, where ``<session_id>`` looks like ``<date>--<test_number>``. For example: ``results/2015-03-28--002``

ducktape does its best to group test results and log files in a sensible way. The output directory is
structured like so::

    <session_id>
        session_log.info
        session_log.debug
        report.txt   # Summary report of all tests run in this session
        report.html  # Open this to see summary report in a browser
        report.css

        <test_class_name>
            <test_method_name>
                test_log.info
                test_log.debug
                report.txt   # Report on this single test
                [data.json]  # Present if the test returns data

                <service_1>
                    <node_1>
                        some_logs
                    <node_2>
                        some_logs
        ...


To see an example of the output structure, go `here`_ and click on one of the details links.

.. _here: http://testing.confluent.io/confluent-kafka-system-test-results/


================================================
FILE: docs/test_clusters.rst
================================================
.. _topics-test_clusters:

===================
Test Clusters
===================

Ducktape runs on a test cluster with several nodes.  Ducktape will take ownership of the nodes and handle starting, stopping, and running services on them.

Many test environments are possible.  The nodes may be local nodes running inside Docker, or they may be virtual machines running in a public cloud.

Cluster Specifications
======================

A cluster specification (also called a ClusterSpec) describes a particular
cluster configuration.  Originally, cluster specifications could only express the
number of nodes of each operating system.  Now, with heterogeneous cluster support,
specifications can also include node types (e.g., "small", "large") for more
fine-grained resource allocation.  See `Heterogeneous Clusters`_ for more details.

Cluster specifications give us a vocabulary to express what a particular
service or test needs to run.  For example, a service might require a cluster
with three Linux nodes and one Windows node.  We could express that with a
ClusterSpec containing three Linux NodeSpec objects and one Windows NodeSpec
object.
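As a concrete illustration, the three-Linux-plus-one-Windows requirement above could be expressed like this. The classes below are minimal stand-ins that only mirror the shape of ducktape's real ``NodeSpec``/``ClusterSpec`` (defined under ``ducktape/cluster/``); they are not the actual implementation:

```python
# Stand-ins for ducktape.cluster.consts, node_spec, and cluster_spec,
# kept just detailed enough to show how a spec is assembled.
LINUX, WINDOWS = "linux", "windows"


class NodeSpec:
    def __init__(self, operating_system, node_type=None):
        self.operating_system = operating_system
        self.node_type = node_type


class ClusterSpec:
    def __init__(self, nodes=None):
        self.nodes = list(nodes or [])

    def size(self):
        return len(self.nodes)


# A service that requires three Linux nodes and one Windows node:
spec = ClusterSpec([NodeSpec(LINUX)] * 3 + [NodeSpec(WINDOWS)])
```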

Heterogeneous Clusters
======================

Ducktape supports heterogeneous clusters where nodes can have different types
(e.g., "small", "large", "arm64"). This allows tests to request specific node
types while maintaining backward compatibility with existing tests.

Using Node Types in Tests
-------------------------

Use the ``@cluster`` decorator with ``node_type`` to request specific node types::

    from ducktape.mark.resource import cluster

    @cluster(num_nodes=3, node_type="large")
    def test_with_large_nodes(self):
        # This test requires 3 large nodes
        pass

    @cluster(num_nodes=5)
    def test_any_nodes(self):
        # This test accepts any 5 nodes (backward compatible)
        pass

Cluster Configuration
---------------------

Node types are defined in your ``cluster.json`` file::

    {
      "nodes": [
        {
          "ssh_config": {"host": "worker1", ...},
          "node_type": "small"
        },
        {
          "ssh_config": {"host": "worker2", ...},
          "node_type": "large"
        }
      ]
    }

Backward Compatibility
----------------------

- Tests without ``node_type`` will match **any available node**
- Existing tests and cluster configurations continue to work unchanged
- Node type is optional in both test annotations and cluster configuration
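
The backward-compatibility rule above amounts to a simple matching predicate, sketched here for illustration (this is not the actual scheduler code, and it assumes a typed request matches only nodes labeled with exactly that type):

```python
def node_matches(requested_type, node_type):
    """A request with no node_type accepts any node; a typed request
    accepts only nodes labeled with exactly that type (assumption)."""
    return requested_type is None or requested_type == node_type
```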


================================================
FILE: ducktape/__init__.py
================================================
__version__ = "0.14.0"


================================================
FILE: ducktape/__main__.py
================================================
# Copyright 2015 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from ducktape.command_line import main

if __name__ == "__main__":
    main.main()


================================================
FILE: ducktape/cluster/__init__.py
================================================
from .json import JsonCluster  # NOQA
from .localhost import LocalhostCluster  # NOQA
from .vagrant import VagrantCluster  # NOQA


================================================
FILE: ducktape/cluster/cluster.py
================================================
# Copyright 2014 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import collections
from typing import Iterable, List, Union

from ducktape.cluster.cluster_node import ClusterNode
from ducktape.cluster.cluster_spec import ClusterSpec


class Cluster(object):
    """Interface for a cluster -- a collection of nodes with login credentials.
    This interface doesn't define any mapping of roles/services to nodes. It only interacts with some underlying
    system that can describe available resources and mediates reservations of those resources.
    """

    def __init__(self):
        self.max_used_nodes = 0

    def __len__(self) -> int:
        """Size of this cluster object. I.e. number of 'nodes' in the cluster."""
        return self.available().size() + self.used().size()

    def alloc(self, cluster_spec) -> Union[ClusterNode, List[ClusterNode], "Cluster"]:
        """
        Allocate some nodes.

        :param cluster_spec:                    A ClusterSpec describing the nodes to be allocated.
        :throws InsufficientResources:          If the nodes cannot be allocated.
        :return:                                Allocated nodes spec
        """
        allocated = self.do_alloc(cluster_spec)
        self.max_used_nodes = max(self.max_used_nodes, len(self.used()))
        return allocated

    def do_alloc(self, cluster_spec) -> Union[ClusterNode, List[ClusterNode], "Cluster"]:
        """
        Subclasses should implement actual allocation here.

        :param cluster_spec:                    A ClusterSpec describing the nodes to be allocated.
        :throws InsufficientResources:          If the nodes cannot be allocated.
        :return:                                Allocated nodes spec
        """
        raise NotImplementedError

    def free(self, nodes: Union[Iterable[ClusterNode], ClusterNode]) -> None:
        """Free the given node or list of nodes"""
        if isinstance(nodes, collections.abc.Iterable):
            for s in nodes:
                self.free_single(s)
        else:
            self.free_single(nodes)

    def free_single(self, node: ClusterNode) -> None:
        raise NotImplementedError()

    def __eq__(self, other):
        return other is not None and self.__dict__ == other.__dict__

    def __hash__(self):
        return hash(tuple(sorted(self.__dict__.items())))

    def num_available_nodes(self) -> int:
        return self.available().size()

    def available(self) -> ClusterSpec:
        """
        Return a ClusterSpec object describing the currently available nodes.
        """
        raise NotImplementedError

    def used(self) -> ClusterSpec:
        """
        Return a ClusterSpec object describing the currently in use nodes.
        """
        raise NotImplementedError

    def max_used(self) -> int:
        return self.max_used_nodes

    def all(self):
        """
        Return a ClusterSpec object describing all nodes.
        """
        return self.available().clone().add(self.used())


================================================
FILE: ducktape/cluster/cluster_node.py
================================================
from typing import Optional

from ducktape.cluster.remoteaccount import RemoteAccount


class ClusterNode(object):
    def __init__(self, account: RemoteAccount, **kwargs):
        self.account = account
        for k, v in kwargs.items():
            setattr(self, k, v)

    @property
    def name(self) -> Optional[str]:
        return self.account.hostname

    @property
    def operating_system(self) -> Optional[str]:
        return self.account.operating_system


================================================
FILE: ducktape/cluster/cluster_spec.py
================================================
# Copyright 2017 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import

import json
import typing

from ducktape.cluster.node_container import NodeContainer

from .consts import LINUX
from .node_spec import NodeSpec


class ClusterSpec(object):
    """
    The specification for a ducktape cluster.
    """

    nodes: typing.Optional[NodeContainer] = None

    @staticmethod
    def empty():
        return ClusterSpec([])

    @staticmethod
    def simple_linux(num_nodes, node_type=None):
        """
        Create a ClusterSpec for Linux nodes, optionally of a specific type.

        Examples:
            ClusterSpec.simple_linux(5)              # 5 nodes, any type
            ClusterSpec.simple_linux(3, "large")     # 3 large nodes

        :param num_nodes: Number of Linux nodes
        :param node_type: Optional node type label (e.g., "large", "small")
        """
        node_specs = [NodeSpec(LINUX, node_type)] * num_nodes
        return ClusterSpec(node_specs)

    @staticmethod
    def from_nodes(nodes):
        """
        Create a ClusterSpec describing a list of nodes.
        """
        return ClusterSpec([NodeSpec(node.operating_system, getattr(node, "node_type", None)) for node in nodes])

    def __init__(self, nodes=None):
        """
        Initialize the ClusterSpec.

        :param nodes:           A collection of NodeSpecs, or None to create an empty cluster spec.
        """
        self.nodes = NodeContainer(nodes)

    def __len__(self):
        return self.size()

    def __iter__(self):
        return self.nodes.elements()

    def size(self):
        """Return the total size of this cluster spec, including all types of nodes."""
        return self.nodes.size()

    def add(self, other):
        """
        Add another ClusterSpec to this one.

        :param other:           The other cluster spec.  This will not be modified.
        :return:                This ClusterSpec.
        """
        for node_spec in other.nodes:
            self.nodes.add_node(node_spec)
        return self

    def clone(self):
        """
        Returns a deep copy of this object.
        """
        return ClusterSpec(self.nodes.clone())

    def __str__(self):
        node_spec_to_num = {}
        for node_spec in self.nodes.elements():
            node_spec_str = str(node_spec)
            node_spec_to_num[node_spec_str] = node_spec_to_num.get(node_spec_str, 0) + 1
        rval = []
        for node_spec_str in sorted(node_spec_to_num.keys()):
            node_spec = json.loads(node_spec_str)
            node_spec["num_nodes"] = node_spec_to_num[node_spec_str]
            rval.append(node_spec)
        return json.dumps(rval, sort_keys=True)


================================================
FILE: ducktape/cluster/consts.py
================================================
# Copyright 2017 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

LINUX = "linux"

WINDOWS = "windows"

SUPPORTED_OS_TYPES = [LINUX, WINDOWS]


================================================
FILE: ducktape/cluster/finite_subcluster.py
================================================
# Copyright 2016 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import typing

from ducktape.cluster.cluster import Cluster, ClusterNode
from ducktape.cluster.cluster_spec import ClusterSpec
from ducktape.cluster.node_container import NodeContainer


class FiniteSubcluster(Cluster):
    """This cluster class gives us a mechanism for allocating finite blocks of nodes from another cluster."""

    def __init__(self, nodes: typing.Iterable[ClusterNode]):
        super(FiniteSubcluster, self).__init__()
        self.nodes = nodes
        self._available_nodes = NodeContainer(nodes)
        self._in_use_nodes = NodeContainer()

    def do_alloc(self, cluster_spec) -> typing.List[ClusterNode]:
        # there cannot be any bad nodes here,
        # since FiniteSubcluster operates on ClusterNode objects,
        # which are not checked for health by NodeContainer.remove_spec
        # however there could be an error, specifically if a test decides to alloc more nodes than are available
        # in a previous ducktape version this exception was raised by remove_spec
        # in this one, for consistency, we let the cluster itself deal with allocation errors
        good_nodes, bad_nodes = self._available_nodes.remove_spec(cluster_spec)
        self._in_use_nodes.add_nodes(good_nodes)
        return good_nodes

    def free_single(self, node):
        self._in_use_nodes.remove_node(node)
        self._available_nodes.add_node(node)

    def available(self):
        return ClusterSpec.from_nodes(self._available_nodes)

    def used(self):
        return ClusterSpec.from_nodes(self._in_use_nodes)


================================================
FILE: ducktape/cluster/json.py
================================================
# Copyright 2014 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import

import json
import os
import traceback

from ducktape.cluster.cluster_spec import ClusterSpec
from ducktape.cluster.consts import WINDOWS
from ducktape.cluster.linux_remoteaccount import LinuxRemoteAccount
from ducktape.cluster.node_container import InsufficientHealthyNodesError, NodeContainer
from ducktape.cluster.windows_remoteaccount import WindowsRemoteAccount
from ducktape.command_line.defaults import ConsoleDefaults

from .cluster import Cluster
from .cluster_node import ClusterNode
from .remoteaccount import RemoteAccountSSHConfig


def make_remote_account(ssh_config, *args, **kwargs):
    """Factory function for creating the correct RemoteAccount implementation."""

    if ssh_config.host and WINDOWS in ssh_config.host:
        return WindowsRemoteAccount(ssh_config, *args, **kwargs)
    else:
        return LinuxRemoteAccount(ssh_config, *args, **kwargs)


class JsonCluster(Cluster):
    """An implementation of Cluster that uses static settings specified in a cluster file or json-serializeable dict"""

    def __init__(
        self,
        cluster_json=None,
        *args,
        make_remote_account_func=make_remote_account,
        **kwargs,
    ):
        """Initialize JsonCluster

        JsonCluster can be initialized from:
            - a json-serializable dict
            - a "cluster_file" containing json

        :param cluster_json: a json-serializable dict containing node information. If ``cluster_json`` is None,
               load from file
        :param cluster_file (optional): Overrides the default location of the json cluster file

        Example json with a local Vagrant cluster::

            {
              "nodes": [
                {
                  "externally_routable_ip": "192.168.50.151",

                  "ssh_config": {
                    "host": "worker1",
                    "hostname": "127.0.0.1",
                    "identityfile": "/path/to/private_key",
                    "password": null,
                    "port": 2222,
                    "user": "vagrant"
                  }
                },
                {
                  "externally_routable_ip": "192.168.50.151",

                  "ssh_config": {
                    "host": "worker2",
                    "hostname": "127.0.0.1",
                    "identityfile": "/path/to/private_key",
                    "password": null,
                    "port": 2223,
                    "user": "vagrant"
                  }
                }
              ]
            }

        """
        super(JsonCluster, self).__init__()
        self._available_accounts: NodeContainer = NodeContainer()
        self._bad_accounts: NodeContainer = NodeContainer()
        self._in_use_nodes: NodeContainer = NodeContainer()
        if cluster_json is None:
            # This is a direct instantiation of JsonCluster rather than one from a subclass (e.g. VagrantCluster)
            cluster_file = kwargs.get("cluster_file")
            if cluster_file is None:
                cluster_file = ConsoleDefaults.CLUSTER_FILE
            cluster_json = json.load(open(os.path.abspath(cluster_file)))
        try:
            for ninfo in cluster_json["nodes"]:
                ssh_config_dict = ninfo.get("ssh_config")
                assert ssh_config_dict is not None, (
                    "Cluster json has a node without a ssh_config field: %s\n Cluster json: %s" % (ninfo, cluster_json)
                )

                ssh_config = RemoteAccountSSHConfig(**ninfo.get("ssh_config", {}))

                # Extract node_type from JSON (optional field)
                node_type = ninfo.get("node_type")

                remote_account = make_remote_account_func(
                    ssh_config,
                    ninfo.get("externally_routable_ip"),
                    node_type=node_type,
                    ssh_exception_checks=kwargs.get("ssh_exception_checks"),
                )
                if remote_account.externally_routable_ip is None:
                    remote_account.externally_routable_ip = self._externally_routable_ip(remote_account)
                self._available_accounts.add_node(remote_account)
        except BaseException as e:
            msg = "JSON cluster definition invalid: %s: %s" % (
                e,
                traceback.format_exc(limit=16),
            )
            raise ValueError(msg)
        self._id_supplier = 0

    def do_alloc(self, cluster_spec):
        try:
            good_nodes, bad_nodes = self._available_accounts.remove_spec(cluster_spec)
        except InsufficientHealthyNodesError as e:
            self._bad_accounts.add_nodes(e.bad_nodes)
            raise e

        # even in case of no exceptions, we can still run into bad nodes, so let's track them
        if bad_nodes:
            self._bad_accounts.add_nodes(bad_nodes)

        # now let's gather all the good ones and convert them into ClusterNode objects
        allocated_nodes = []
        for account in good_nodes:
            allocated_nodes.append(ClusterNode(account, slot_id=self._id_supplier))
            self._id_supplier += 1
        self._in_use_nodes.add_nodes(allocated_nodes)

        return allocated_nodes

    def free_single(self, node):
        self._in_use_nodes.remove_node(node)
        self._available_accounts.add_node(node.account)
        node.account.close()

    def _externally_routable_ip(self, account):
        return None

    def available(self):
        return ClusterSpec.from_nodes(self._available_accounts)

    def used(self):
        return ClusterSpec.from_nodes(self._in_use_nodes)


================================================
FILE: ducktape/cluster/linux_remoteaccount.py
================================================
# Copyright 2014 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Optional

from paramiko import SFTPClient, SSHClient

from ducktape.cluster.consts import LINUX
from ducktape.cluster.remoteaccount import RemoteAccount, RemoteAccountError


class LinuxRemoteAccount(RemoteAccount):
    def __init__(self, *args, **kwargs):
        super(LinuxRemoteAccount, self).__init__(*args, **kwargs)
        self._ssh_client: Optional[SSHClient] = None
        self._sftp_client: Optional[SFTPClient] = None
        self.os = LINUX

    @property
    def local(self):
        """Returns True if this 'remote' account is probably local.
        This is an imperfect heuristic, but should work for simple local testing."""
        return self.hostname == "localhost" and self.user is None and self.ssh_config is None

    def get_network_devices(self):
        """
        Utility to get all network devices on a linux account
        """
        return [device for device in self.sftp_client.listdir("/sys/class/net")]

    def get_external_accessible_network_devices(self):
        """
        Gets the subset of devices accessible through an external connection.
        """
        return [
            device
            for device in self.get_network_devices()
            if device != "lo"  # do not include local device
            and (device.startswith("en") or device.startswith("eth"))  # filter out other devices; "en" means ethernet
            # eth0 can also sometimes happen, see https://unix.stackexchange.com/q/134483
        ]

    # deprecated, please use the self.externally_routable_ip that is set in your cluster,
    # not explicitly deprecating it as it's used by vagrant cluster
    def fetch_externally_routable_ip(self, is_aws=None):
        if is_aws is not None:
            self.logger.warning("fetch_externally_routable_ip: is_aws is a deprecated flag, and does nothing")

        devices = self.get_external_accessible_network_devices()

        self.logger.debug("found devices: {}".format(devices))

        if not devices:
            raise RemoteAccountError(self, "Couldn't find any network devices")

        fmt_cmd = (
            "/sbin/ifconfig {device} | " "grep 'inet ' | " "tail -n 1 | " r"egrep -o '[0-9\.]+' | " "head -n 1 2>&1"
        )

        ips = ["".join(self.ssh_capture(fmt_cmd.format(device=device))).strip() for device in devices]
        self.logger.debug("found ips: {}".format(ips))
        self.logger.debug("returning the first ip found")
        return next(iter(ips))


================================================
FILE: ducktape/cluster/localhost.py
================================================
# Copyright 2014 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from ducktape.cluster.cluster_spec import ClusterSpec
from ducktape.cluster.node_container import NodeContainer

from .cluster import Cluster
from .cluster_node import ClusterNode
from .linux_remoteaccount import LinuxRemoteAccount
from .remoteaccount import RemoteAccountSSHConfig


class LocalhostCluster(Cluster):
    """
    A "cluster" that runs entirely on localhost using default credentials. This doesn't require any user
    configuration and is equivalent to the old defaults in cluster_config.json. There are no constraints
    on the resources available.
    """

    def __init__(self, *args, **kwargs):
        super(LocalhostCluster, self).__init__()
        num_nodes = kwargs.get("num_nodes", 1000)
        self._available_nodes = NodeContainer()
        for i in range(num_nodes):
            ssh_config = RemoteAccountSSHConfig("localhost%d" % i, hostname="localhost", port=22)
            self._available_nodes.add_node(
                ClusterNode(
                    LinuxRemoteAccount(
                        ssh_config,
                        ssh_exception_checks=kwargs.get("ssh_exception_checks"),
                    )
                )
            )
        self._in_use_nodes = NodeContainer()

    def do_alloc(self, cluster_spec):
        # there shouldn't be any bad nodes in localhost cluster
        # since ClusterNode object does not implement `available()` method
        good_nodes, bad_nodes = self._available_nodes.remove_spec(cluster_spec)
        self._in_use_nodes.add_nodes(good_nodes)
        return good_nodes

    def free_single(self, node):
        self._in_use_nodes.remove_node(node)
        self._available_nodes.add_node(node)
        node.account.close()

    def available(self):
        return ClusterSpec.from_nodes(self._available_nodes)

    def used(self):
        return ClusterSpec.from_nodes(self._in_use_nodes)


================================================
FILE: ducktape/cluster/node_container.py
================================================
# Copyright 2017 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations

from typing import (
    TYPE_CHECKING,
    Any,
    Dict,
    Iterable,
    Iterator,
    List,
    Optional,
    Tuple,
    Union,
)

from ducktape.cluster.cluster_node import ClusterNode
from ducktape.cluster.remoteaccount import RemoteAccount

if TYPE_CHECKING:
    from ducktape.cluster.cluster_spec import ClusterSpec

NodeType = Union[ClusterNode, RemoteAccount]
# Key for node grouping: (operating_system, node_type)
NodeGroupKey = Tuple[Optional[str], Optional[str]]


class NodeNotPresentError(Exception):
    pass


class InsufficientResourcesError(Exception):
    pass


class InsufficientHealthyNodesError(InsufficientResourcesError):
    def __init__(self, bad_nodes: List, *args):
        self.bad_nodes = bad_nodes
        super().__init__(*args)


def _get_node_key(node: NodeType) -> NodeGroupKey:
    """Extract the (os, node_type) key from a node."""
    os = getattr(node, "operating_system", None)
    node_type = getattr(node, "node_type", None)
    return (os, node_type)


class NodeContainer(object):
    """
    Container for cluster nodes, grouped by (operating_system, node_type).

    This enables efficient lookup and allocation of nodes matching specific
    requirements. Nodes with node_type=None are grouped under (os, None) and
    can match any request when no specific type is required.
    """

    # Key: (os, node_type) tuple, Value: list of nodes
    node_groups: Dict[NodeGroupKey, List[NodeType]]

    def __init__(self, nodes: Optional[Iterable[NodeType]] = None) -> None:
        """
        Create a NodeContainer with the given nodes.

        Node objects should implement at least an operating_system property,
        and optionally a node_type property.

        :param nodes:           A collection of node objects to add, or None to add nothing.
        """
        self.node_groups = {}
        if nodes is not None:
            for node in nodes:
                self.add_node(node)

    def size(self) -> int:
        """
        Returns the total number of nodes in the container.
        """
        return sum(len(val) for val in self.node_groups.values())

    def __len__(self):
        return self.size()

    def __iter__(self) -> Iterator[NodeType]:
        return self.elements()

    def elements(self, operating_system: Optional[str] = None, node_type: Optional[str] = None) -> Iterator[NodeType]:
        """
        Yield the elements in this container.

        :param operating_system:    If this is non-None, we will iterate only over elements
                                    which have this operating system.
        :param node_type:           If this is non-None, we will iterate only over elements
                                    which have this node type.
        """
        for (os, nt), node_list in self.node_groups.items():
            # Filter by OS if specified
            if operating_system is not None and os != operating_system:
                continue
            # Filter by node_type if specified
            if node_type is not None and nt != node_type:
                continue
            for node in node_list:
                yield node

    def grouped_by_os_and_type(self) -> Dict[Tuple[Optional[str], Optional[str]], int]:
        """
        Returns nodes grouped by (operating_system, node_type) with counts.

        This is a pure data method that groups nodes without any ordering.
        The caller is responsible for determining processing order.

        :return: Dictionary mapping (os, node_type) tuples to counts
        """
        result: Dict[Tuple[Optional[str], Optional[str]], int] = {}
        for node in self.elements():
            key = (getattr(node, "operating_system", None), getattr(node, "node_type", None))
            result[key] = result.get(key, 0) + 1
        return result

    def add_node(self, node: Union[ClusterNode, RemoteAccount]) -> None:
        """
        Add a node to this collection, grouping by (os, node_type).

        :param node:                        The node to add.
        """
        key = _get_node_key(node)
        self.node_groups.setdefault(key, []).append(node)

    def add_nodes(self, nodes):
        """
        Add a collection of nodes to this collection.

        :param nodes:                       The nodes to add.
        """
        for node in nodes:
            self.add_node(node)

    def remove_node(self, node):
        """
        Removes a node from this collection.

        :param node:                        The node to remove.
        :returns:                           The node which has been removed.
        :throws NodeNotPresentError:        If the node is not in the collection.
        """
        key = _get_node_key(node)
        try:
            self.node_groups.get(key, []).remove(node)
        except ValueError:
            raise NodeNotPresentError
        return node

    def remove_nodes(self, nodes):
        """
        Remove a collection of nodes from this collection.

        :param nodes:                       The nodes to remove.
        """
        for node in nodes:
            self.remove_node(node)

    def _find_matching_nodes(
        self, required_os: str, required_node_type: Optional[str], num_needed: int
    ) -> Tuple[List[NodeType], List[NodeType], int]:
        """
        Find nodes that match the required OS and node_type.

        Matching rules:
            - OS must match exactly
            - If required_node_type is None, match nodes of ANY type for this OS
            - If required_node_type is specified, match only nodes with that exact type

        :param required_os: The required operating system
        :param required_node_type: The required node type (None means any)
        :param num_needed: Number of nodes needed
        :return: Tuple of (good_nodes, bad_nodes, shortfall) where shortfall is how many more we need
        """
        good_nodes: List[NodeType] = []
        bad_nodes: List[NodeType] = []

        # Collect candidate keys - keys in node_groups that can satisfy this requirement
        candidate_keys: List[NodeGroupKey] = []
        for os, nt in self.node_groups.keys():
            if os != required_os:
                continue
            # If no specific type required, any node of this OS matches
            # If specific type required, only exact match
            if required_node_type is None or nt == required_node_type:
                candidate_keys.append((os, nt))

        # Try to allocate from candidate pools
        for key in candidate_keys:
            if len(good_nodes) >= num_needed:
                break

            avail_nodes = self.node_groups.get(key, [])
            while avail_nodes and len(good_nodes) < num_needed:
                node = avail_nodes.pop(0)
                if isinstance(node, RemoteAccount):
                    if node.available():
                        good_nodes.append(node)
                    else:
                        bad_nodes.append(node)
                else:
                    good_nodes.append(node)

        shortfall = max(0, num_needed - len(good_nodes))
        return good_nodes, bad_nodes, shortfall

    def remove_spec(self, cluster_spec: ClusterSpec) -> Tuple[List[NodeType], List[NodeType]]:
        """
        Remove nodes matching a ClusterSpec from this NodeContainer.

        Allocation strategy:
            - Specific node_type requirements are allocated BEFORE any-type (None) requirements
            - For each (os, node_type) in the spec:
                - If node_type is specified, allocate from that exact pool
                - If node_type is None, allocate from any pool matching the OS

        :param cluster_spec:                    The cluster spec.  This will not be modified.
        :returns:                               Tuple of (good_nodes, bad_nodes).
        :raises:                                InsufficientResourcesError when there aren't enough total nodes
                                                InsufficientHealthyNodesError when there aren't enough healthy nodes
        """
        err = self.attempt_remove_spec(cluster_spec)
        if err:
            raise InsufficientResourcesError(err)

        good_nodes: List[NodeType] = []
        bad_nodes: List[NodeType] = []
        msg = ""

        # Get requirements grouped by (os, node_type) with counts
        grouped_counts = cluster_spec.nodes.grouped_by_os_and_type()

        # Sort so specific types (node_type != None) are allocated before any-type (node_type == None)
        # This prevents any-type requests from "stealing" nodes needed by specific types
        def allocation_order(item: Tuple[Tuple[str, Optional[str]], int]) -> Tuple[int, str, str]:
            (os, node_type), _ = item
            # Specific types first (0), any-type last (1)
            type_order = 1 if node_type is None else 0
            return (type_order, os or "", node_type or "")

        sorted_requirements = sorted(grouped_counts.items(), key=allocation_order)

        for (os, node_type), num_needed in sorted_requirements:
            found_good, found_bad, shortfall = self._find_matching_nodes(os, node_type, num_needed)

            good_nodes.extend(found_good)
            bad_nodes.extend(found_bad)

            if shortfall > 0:
                type_desc = f"{os}" if node_type is None else f"{os}/{node_type}"
                msg += f"{type_desc} nodes requested: {num_needed}. Healthy nodes available: {len(found_good)}. "

        if msg:
            # Return good nodes back to the container
            for node in good_nodes:
                self.add_node(node)
            raise InsufficientHealthyNodesError(bad_nodes, msg)

        return good_nodes, bad_nodes

    def can_remove_spec(self, cluster_spec: ClusterSpec) -> bool:
        """
        Determine if we can remove nodes matching a ClusterSpec from this NodeContainer.
        This container will not be modified.

        :param cluster_spec:                    The cluster spec.  This will not be modified.
        :returns:                               True if we could remove the nodes; false otherwise
        """
        msg = self.attempt_remove_spec(cluster_spec)
        return len(msg) == 0

    def _count_nodes_by_os(self, target_os: str) -> int:
        """
        Count total nodes available for a given OS (regardless of node_type).

        :param target_os: The operating system to count nodes for
        :return: Total number of nodes with the given OS
        """
        count = 0
        for (os, _), nodes in self.node_groups.items():
            if os == target_os:
                count += len(nodes)
        return count

    def _count_nodes_by_os_and_type(self, target_os: str, target_type: str) -> int:
        """
        Count nodes available for a specific (os, node_type) combination.

        :param target_os: The operating system
        :param target_type: The specific node type (not None)
        :return: Number of nodes matching both OS and type
        """
        return len(self.node_groups.get((target_os, target_type), []))

    def attempt_remove_spec(self, cluster_spec: ClusterSpec) -> str:
        """
        Attempt to remove a cluster_spec from this node container.

        Uses holistic per-OS validation to correctly handle mixed typed and untyped
        requirements without double-counting shared capacity.

        Validation strategy:
            1. Check total OS capacity >= total OS demand
            2. Check each specific type has enough nodes
            3. Check remaining capacity (after specific types) >= any-type demand

        :param cluster_spec:                    The cluster spec.  This will not be modified.
        :returns:                               An empty string if we can remove the nodes;
                                                an error string otherwise.
        """
        # If cluster_spec is None, the test cannot be run at all: e.g. the user didn't
        # put a `@cluster` annotation on it and the session context is configured to
        # fail such tests, or the test otherwise deems its cluster spec invalid.
        if cluster_spec is None:
            return "Invalid or missing cluster spec"
        # cluster spec may be empty and that's ok, shortcut to returning no error messages
        elif len(cluster_spec) == 0 or cluster_spec.nodes is None:
            return ""

        msg = ""

        # Build requirements_by_os: {os -> {node_type -> count}} in a single pass
        requirements_by_os: Dict[str, Dict[Optional[str], int]] = {}
        for node_spec in cluster_spec.nodes.elements():
            os = node_spec.operating_system
            node_type = node_spec.node_type
            requirements_by_os.setdefault(os, {})
            requirements_by_os[os][node_type] = requirements_by_os[os].get(node_type, 0) + 1

        # Validate each OS holistically
        for os, type_requirements in requirements_by_os.items():
            total_available = self._count_nodes_by_os(os)
            total_required = sum(type_requirements.values())

            # Check 1: Total capacity for this OS
            if total_available < total_required:
                msg += f"{os} nodes requested: {total_required}. {os} nodes available: {total_available}. "
                continue  # Already failed, no need for detailed checks

            # Check 2: Each specific type has enough nodes
            for node_type, count_needed in type_requirements.items():
                if node_type is None:
                    continue  # Handle any-type separately
                type_available = self._count_nodes_by_os_and_type(os, node_type)
                if type_available < count_needed:
                    msg += f"{os}/{node_type} nodes requested: {count_needed}. {os}/{node_type} nodes available: {type_available}. "

            # Check 3: After reserving specific types, is there capacity for any-type?
            any_type_demand = type_requirements.get(None, 0)
            if any_type_demand > 0:
                specific_demand = sum(c for t, c in type_requirements.items() if t is not None)
                remaining_capacity = total_available - specific_demand
                if remaining_capacity < any_type_demand:
                    msg += (
                        f"{os} (any type) nodes requested: {any_type_demand}. "
                        f"{os} nodes remaining after typed allocations: {remaining_capacity}. "
                    )

        return msg

    def clone(self) -> "NodeContainer":
        """
        Returns a copy of this container with fresh group lists.
        The node objects themselves are shared, not copied.
        """
        container = NodeContainer()
        for key, nodes in self.node_groups.items():
            for node in nodes:
                container.node_groups.setdefault(key, []).append(node)
        return container
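The three-step validation described in `attempt_remove_spec` can be restated without the surrounding classes. A standalone sketch; `validate` and its dict-based inputs are illustrative, not ducktape APIs — `available` and `required` simply map `(os, node_type)` pairs to counts:

```python
# Standalone sketch of the holistic per-OS validation performed by
# attempt_remove_spec above: (1) total OS capacity, (2) per-type capacity,
# (3) leftover capacity for any-type (node_type=None) demand.

def validate(available, required):
    """Return "" if `required` fits into `available`, else an error string."""
    msg = ""
    # Group requirements by OS: {os: {node_type: count}}
    by_os = {}
    for (os_name, node_type), count in required.items():
        by_os.setdefault(os_name, {})[node_type] = count

    for os_name, type_reqs in by_os.items():
        total_avail = sum(c for (o, _), c in available.items() if o == os_name)
        total_req = sum(type_reqs.values())
        # Check 1: total capacity for this OS
        if total_avail < total_req:
            msg += f"{os_name}: need {total_req}, have {total_avail}. "
            continue
        # Check 2: each specific type has enough nodes
        for node_type, need in type_reqs.items():
            if node_type is None:
                continue
            have = available.get((os_name, node_type), 0)
            if have < need:
                msg += f"{os_name}/{node_type}: need {need}, have {have}. "
        # Check 3: capacity left after typed demand covers any-type demand
        any_need = type_reqs.get(None, 0)
        specific = sum(c for t, c in type_reqs.items() if t is not None)
        if any_need and total_avail - specific < any_need:
            msg += f"{os_name} (any): need {any_need}, remaining {total_avail - specific}. "
    return msg


avail = {("linux", "large"): 2, ("linux", None): 1}
print(repr(validate(avail, {("linux", "large"): 2, ("linux", None): 1})))  # '' (fits)
print(repr(validate(avail, {("linux", "large"): 3})))  # non-empty: only 2 large nodes
```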


================================================
FILE: ducktape/cluster/node_spec.py
================================================
# Copyright 2017 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import

import json
from typing import Optional

from .consts import LINUX, SUPPORTED_OS_TYPES


class NodeSpec(object):
    """
    Specification for a single cluster node.

    The node_type field is a generic label that can represent size, architecture,
    or any classification scheme defined by the cluster configuration. When None,
    it matches any available node (backward compatible behavior).

    :param operating_system:    The operating system of the node.
    :param node_type:           Node type label (e.g., "large", "small"). None means "match any".
    """

    def __init__(self, operating_system: str = LINUX, node_type: Optional[str] = None):
        self.operating_system = operating_system
        self.node_type = node_type
        if self.operating_system not in SUPPORTED_OS_TYPES:
            raise RuntimeError("Unsupported os type %s" % self.operating_system)

    def matches(self, available_node_spec: "NodeSpec") -> bool:
        """
        Check if this requirement can be satisfied by an available node.

        Matching rules:
            - OS must match exactly
            - If requested node_type is None, match any type
            - If requested node_type is specified, must match exactly

        :param available_node_spec: The specification of an available node
        :return: True if this requirement matches the available node
        """
        if self.operating_system != available_node_spec.operating_system:
            return False
        if self.node_type is None:
            return True  # Requestor doesn't care about type
        return self.node_type == available_node_spec.node_type

    def __str__(self):
        d = {"os": self.operating_system}
        if self.node_type is not None:
            d["node_type"] = self.node_type
        return json.dumps(d, sort_keys=True)

    def __eq__(self, other):
        if not isinstance(other, NodeSpec):
            return False
        return self.operating_system == other.operating_system and self.node_type == other.node_type

    def __hash__(self):
        return hash((self.operating_system, self.node_type))
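The matching rules in `NodeSpec.matches` can be shown with a simplified re-statement. This free function is illustrative only, not the ducktape method itself:

```python
# Standalone illustration of the matching rules implemented by
# NodeSpec.matches above: exact OS match, with node_type=None acting
# as a wildcard on the requesting side.

def matches(requested_os, requested_type, avail_os, avail_type):
    if requested_os != avail_os:
        return False  # OS must match exactly
    if requested_type is None:
        return True   # requestor doesn't care about type
    return requested_type == avail_type


print(matches("linux", None, "linux", "large"))     # True: any-type request
print(matches("linux", "large", "linux", "large"))  # True: exact type match
print(matches("linux", "large", "linux", "small"))  # False: type mismatch
print(matches("linux", None, "windows", None))      # False: OS mismatch
```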


================================================
FILE: ducktape/cluster/remoteaccount.py
================================================
# Copyright 2014 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import os
import shutil
import signal
import socket
import stat
import tempfile
import warnings
from contextlib import contextmanager
from typing import Callable, List, Optional, Union

from paramiko import MissingHostKeyPolicy, SFTPClient, SSHClient, SSHConfig
from paramiko.ssh_exception import NoValidConnectionsError, SSHException

from ducktape.errors import DucktapeError
from ducktape.utils.http_utils import HttpMixin
from ducktape.utils.util import wait_until


def check_ssh(method: Callable) -> Callable:
    def wrapper(self, *args, **kwargs):
        try:
            return method(self, *args, **kwargs)
        except (SSHException, NoValidConnectionsError, socket.error) as e:
            if self._custom_ssh_exception_checks:
                self._log(logging.DEBUG, "caught ssh error", exc_info=True)
                self._log(logging.DEBUG, "starting ssh checks:")
                self._log(
                    logging.DEBUG,
                    "\n".join(repr(f) for f in self._custom_ssh_exception_checks),
                )
                for func in self._custom_ssh_exception_checks:
                    func(e, self)
            raise

    return wrapper
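The pattern behind `check_ssh` above — run optional diagnostic hooks on failure, then re-raise the original error — can be sketched without paramiko. `ConnectionError` stands in for the SSH/socket exceptions, and `Account` is a hypothetical stand-in for RemoteAccount:

```python
# Standalone sketch of the check_ssh decorator pattern: on failure, call
# each registered diagnostic hook with the exception and the instance,
# then propagate the original exception unchanged.
import functools


def check_conn(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        try:
            return method(self, *args, **kwargs)
        except ConnectionError as e:
            for hook in self._checks:  # diagnostic hooks, if any
                hook(e, self)
            raise                      # always re-raise the original error
    return wrapper


class Account:
    def __init__(self, checks=None):
        self._checks = checks or []

    @check_conn
    def run(self):
        raise ConnectionError("boom")


seen = []
acct = Account(checks=[lambda e, a: seen.append(str(e))])
try:
    acct.run()
except ConnectionError:
    pass
print(seen)  # ['boom']
```

Note that the hooks only observe the failure; the exception still propagates, so callers see the original error exactly as `check_ssh` arranges.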


class RemoteAccountSSHConfig(object):
    def __init__(
        self,
        host: Optional[str] = None,
        hostname: Optional[str] = None,
        user: Optional[str] = None,
        port: Optional[int] = None,
        password: Optional[str] = None,
        identityfile: Optional[str] = None,
        connecttimeout: Optional[Union[int, float]] = None,
        **kwargs,
    ) -> None:
        """Wrapper for ssh configs used by ducktape to connect to remote machines.

        The fields in this class are lowercase versions of a small selection of ssh config properties
        (see man page: "man ssh_config")
        """
        self.host = host
        self.hostname: str = hostname or "localhost"
        self.user = user
        self.port = int(port or 22)
        self.password = password
        self.identityfile = identityfile
        # None is default, and it means default TCP timeout will be used.
        self.connecttimeout = int(connecttimeout) if connecttimeout is not None else None

    @staticmethod
    def from_string(config_str):
        """Construct RemoteAccountSSHConfig object from a string that looks like

        Host the-host
            Hostname the-hostname
            Port 22
            User ubuntu
            IdentityFile /path/to/key
        """
        config = SSHConfig()
        config.parse(config_str.split("\n"))

        hostnames = config.get_hostnames()
        if "*" in hostnames:
            hostnames.remove("*")
        assert len(hostnames) == 1, "Expected hostnames to have single entry: %s" % hostnames
        host = hostnames.pop()

        config_dict = config.lookup(host)
        if config_dict.get("identityfile") is not None:
            # paramiko.SSHConfig parses this in as a list, but we only want a single string
            config_dict["identityfile"] = config_dict["identityfile"][0]

        return RemoteAccountSSHConfig(host, **config_dict)

    def to_json(self):
        return self.__dict__

    def __repr__(self):
        return str(self.to_json())

    def __eq__(self, other):
        return other is not None and other.__dict__ == self.__dict__

    def __hash__(self):
        return hash(tuple(sorted(self.__dict__.items())))


class RemoteAccountError(DucktapeError):
    """This exception is raised when an attempted action on a remote node fails."""

    def __init__(self, account, msg):
        self.account_str = str(account)
        self.msg = msg

    def __str__(self):
        return "%s: %s" % (self.account_str, self.msg)


class RemoteCommandError(RemoteAccountError):
    """This exception is raised when a process run by ssh*() returns a non-zero exit status."""

    def __init__(self, account, cmd, exit_status, msg):
        self.account_str = str(account)
        self.exit_status = exit_status
        self.cmd = cmd
        self.msg = msg

    def __str__(self):
        msg = "%s: Command '%s' returned non-zero exit status %d." % (
            self.account_str,
            self.cmd,
            self.exit_status,
        )
        if self.msg:
            msg += " Remote error message: %s" % self.msg
        return msg


class RemoteAccount(HttpMixin):
    """RemoteAccount is the heart of interaction with cluster nodes,
    and every allocated cluster node has a reference to an instance of RemoteAccount.

    It wraps metadata such as ssh configs, and provides methods for file system manipulation and shell commands.

    Each operating system has its own RemoteAccount implementation.

    The node_type attribute stores the classification label from the cluster
    configuration, enabling type-aware node allocation.
    """

    def __init__(
        self,
        ssh_config: RemoteAccountSSHConfig,
        externally_routable_ip: Optional[str] = None,
        node_type: Optional[str] = None,
        logger: Optional[logging.Logger] = None,
        ssh_exception_checks: Optional[List[Callable]] = None,
    ) -> None:
        # Instance of RemoteAccountSSHConfig - use this instead of a dict, because we need the entire object to
        # be hashable
        self.ssh_config = ssh_config

        # We don't want to rely on the hostname (e.g. 'worker1') having been added to the driver host's /etc/hosts file.
        # But that means we need to distinguish between the hostname and the value of hostname we use for SSH commands.
        # We try to satisfy all use cases and keep things simple by
        #   a) storing the hostname the user probably expects (the "Host" value in .ssh/config)
        #   b) saving the real value we use for running the SSH command
        self.hostname = ssh_config.host
        self.ssh_hostname = ssh_config.hostname

        self.user = ssh_config.user
        self.externally_routable_ip = externally_routable_ip
        self.node_type = node_type  # Node type label (e.g., "large", "small")
        self._logger = logger
        self.os: Optional[str] = None
        self._ssh_client: Optional[SSHClient] = None
        self._sftp_client: Optional[SFTPClient] = None
        self._custom_ssh_exception_checks = ssh_exception_checks or []

    @property
    def operating_system(self) -> Optional[str]:
        return self.os

    @property
    def logger(self):
        if self._logger:
            return self._logger
        else:
            return logging.getLogger(__name__)

    @logger.setter
    def logger(self, logger):
        self._logger = logger

    def _log(self, level, msg, *args, **kwargs):
        msg = "%s: %s" % (str(self), msg)
        self.logger.log(level, msg, *args, **kwargs)

    @check_ssh
    def _set_ssh_client(self):
        client = SSHClient()
        client.set_missing_host_key_policy(IgnoreMissingHostKeyPolicy())

        self._log(logging.DEBUG, "ssh_config: %s" % str(self.ssh_config))

        client.connect(
            hostname=self.ssh_config.hostname,
            port=self.ssh_config.port,
            username=self.ssh_config.user,
            password=self.ssh_config.password,
            key_filename=self.ssh_config.identityfile,
            look_for_keys=False,
            timeout=self.ssh_config.connecttimeout,
        )

        if self._ssh_client:
            self._ssh_client.close()
        self._ssh_client = client
        self._set_sftp_client()

    @property
    def ssh_client(self):
        if self._ssh_client and self._ssh_client.get_transport() and self._ssh_client.get_transport().is_active():
            try:
                transport = self._ssh_client.get_transport()
                transport.send_ignore()
            except Exception as e:
                self._log(
                    logging.DEBUG,
                    "exception getting ssh_client (creating new client): %s" % str(e),
                )
                self._set_ssh_client()
        else:
            self._set_ssh_client()

        return self._ssh_client

    def _set_sftp_client(self):
        if self._sftp_client:
            self._sftp_client.close()
        self._sftp_client = self.ssh_client.open_sftp()

    @property
    def sftp_client(self):
        if not self._sftp_client:
            self._set_sftp_client()
        else:
            self.ssh_client  # test connection

        return self._sftp_client

    def close(self) -> None:
        """Close/release any outstanding network connections to remote account."""

        if self._ssh_client:
            self._ssh_client.close()
            self._ssh_client = None
        if self._sftp_client:
            self._sftp_client.close()
            self._sftp_client = None

    def __str__(self):
        r = ""
        if self.user:
            r += self.user + "@"
        r += self.hostname
        return r

    def __repr__(self):
        return str(self.__dict__)

    def __eq__(self, other):
        return other is not None and self.__dict__ == other.__dict__

    def __hash__(self):
        return hash(tuple(sorted(self.__dict__.items())))

    def wait_for_http_service(self, port, headers, timeout=20, path="/"):
        """Wait until this service node is available/awake."""
        url = "http://%s:%s%s" % (self.externally_routable_ip, str(port), path)

        err_msg = (
            "Timed out trying to contact service on %s. " % url
            + "Either the service failed to start, or there is a problem with the url."
        )
        wait_until(
            lambda: self._can_ping_url(url, headers),
            timeout_sec=timeout,
            backoff_sec=0.25,
            err_msg=err_msg,
        )

    def _can_ping_url(self, url, headers):
        """See if we can successfully issue a GET request to the given url."""
        try:
            self.http_request(url, "GET", None, headers, timeout=0.75)
            return True
        except Exception:
            return False

    def available(self):
        # TODO: https://github.com/confluentinc/ducktape/issues/339
        # try:
        #     self.ssh_client
        # except Exception:
        #     return False
        # else:
        #     return True
        # finally:
        #     self.close()
        return True

    @check_ssh
    def ssh(self, cmd, allow_fail=False):
        """Run the given command on the remote host, and block until the command has finished running.

        :param cmd: The remote ssh command
        :param allow_fail: If True, ignore nonzero exit status of the remote command,
               else raise a ``RemoteCommandError``

        :return: The exit status of the command.
        :raise RemoteCommandError: If allow_fail is False and the command returns a non-zero exit status
        """
        self._log(logging.DEBUG, "Running ssh command: %s" % cmd)

        client = self.ssh_client
        stdin, stdout, stderr = client.exec_command(cmd)

        # Unfortunately we need to read over the channel to ensure that recv_exit_status won't hang. See:
        # http://docs.paramiko.org/en/2.0/api/channel.html#paramiko.channel.Channel.recv_exit_status
        stdout.read()
        exit_status = stdout.channel.recv_exit_status()
        try:
            if exit_status != 0:
                if not allow_fail:
                    raise RemoteCommandError(self, cmd, exit_status, stderr.read())
                else:
                    self._log(
                        logging.DEBUG,
                        "Running ssh command '%s' exited with status %d and message: %s"
                        % (cmd, exit_status, stderr.read()),
                    )
        finally:
            stdin.close()
            stdout.close()
            stderr.close()

        return exit_status

    @check_ssh
    def ssh_capture(
        self,
        cmd,
        allow_fail=False,
        callback=None,
        combine_stderr=True,
        timeout_sec=None,
    ):
        """Run the given command asynchronously via ssh, and return an SSHOutputIter object.

        This does *not* block.

        :param cmd: The remote ssh command
        :param allow_fail: If True, ignore nonzero exit status of the remote command,
               else raise a ``RemoteCommandError``
        :param callback: If set, the iterator returns ``callback(line)``
               for each line of output instead of the raw output
        :param combine_stderr: If True, return output from both stderr and stdout of the remote process.
        :param timeout_sec: Set timeout on blocking reads/writes. Default None. For more details see
            http://docs.paramiko.org/en/2.0/api/channel.html#paramiko.channel.Channel.settimeout

        :return SSHOutputIter: object which allows iteration through each line of output.
        :raise RemoteCommandError: If ``allow_fail`` is False and the command returns a non-zero exit status
        """
        self._log(logging.DEBUG, "Running ssh command: %s" % cmd)

        client = self.ssh_client
        chan = client.get_transport().open_session(timeout=timeout_sec)

        chan.settimeout(timeout_sec)
        chan.exec_command(cmd)
        chan.set_combine_stderr(combine_stderr)

        stdin = chan.makefile("wb", -1)  # set bufsize to -1
        stdout = chan.makefile("r", -1)
        stderr = chan.makefile_stderr("r", -1)

        def output_generator():
            for line in iter(stdout.readline, ""):
                if callback is None:
                    yield line
                else:
                    yield callback(line)
            try:
                exit_status = stdout.channel.recv_exit_status()
                if exit_status != 0:
                    if not allow_fail:
                        raise RemoteCommandError(self, cmd, exit_status, stderr.read())
                    else:
                        self._log(
                            logging.DEBUG,
                            "Running ssh command '%s' exited with status %d and message: %s"
                            % (cmd, exit_status, stderr.read()),
                        )
            finally:
                stdin.close()
                stdout.close()
                stderr.close()

        return SSHOutputIter(output_generator, stdout)

    @check_ssh
    def ssh_output(self, cmd, allow_fail=False, combine_stderr=True, timeout_sec=None):
        """Runs the command via SSH and captures the output, returning it as a string.

        :param cmd: The remote ssh command.
        :param allow_fail: If True, ignore nonzero exit status of the remote command,
               else raise a ``RemoteCommandError``
        :param combine_stderr: If True, return output from both stderr and stdout of the remote process.
        :param timeout_sec: Set timeout on blocking reads/writes. Default None. For more details see
            http://docs.paramiko.org/en/2.0/api/channel.html#paramiko.channel.Channel.settimeout

        :return: The stdout output from the ssh command.
        :raise RemoteCommandError: If ``allow_fail`` is False and the command returns a non-zero exit status
        """
        self._log(logging.DEBUG, "Running ssh command: %s" % cmd)

        client = self.ssh_client
        chan = client.get_transport().open_session(timeout=timeout_sec)

        chan.settimeout(timeout_sec)
        chan.exec_command(cmd)
        chan.set_combine_stderr(combine_stderr)

        stdin = chan.makefile("wb", -1)  # set bufsize to -1
        stdout = chan.makefile("r", -1)
        stderr = chan.makefile_stderr("r", -1)

        try:
            stdoutdata = stdout.read()
            exit_status = stdin.channel.recv_exit_status()
            if exit_status != 0:
                if not allow_fail:
                    raise RemoteCommandError(self, cmd, exit_status, stderr.read())
                else:
                    self._log(
                        logging.DEBUG,
                        "Running ssh command '%s' exited with status %d and message: %s"
                        % (cmd, exit_status, stderr.read()),
                    )
        finally:
            stdin.close()
            stdout.close()
            stderr.close()
        self._log(logging.DEBUG, "Returning ssh command output:\n%s" % stdoutdata)
        return stdoutdata

    def alive(self, pid):
        """Return True if and only if process with given pid is alive."""
        try:
            self.ssh("kill -0 %s" % str(pid), allow_fail=False)
            return True
        except Exception:
            return False

    def signal(self, pid, sig, allow_fail=False):
        cmd = "kill -%d %s" % (int(sig), str(pid))
        self.ssh(cmd, allow_fail=allow_fail)

    def kill_process(self, process_grep_str, clean_shutdown=True, allow_fail=False):
        cmd = """ps ax | grep -i """ + process_grep_str + """ | grep -v grep | awk '{print $1}'"""
        pids = [pid for pid in self.ssh_capture(cmd, allow_fail=True)]

        if clean_shutdown:
            sig = signal.SIGTERM
        else:
            sig = signal.SIGKILL

        for pid in pids:
            self.signal(pid, sig, allow_fail=allow_fail)

    def java_pids(self, match):
        """
        Get all the Java process IDs matching 'match'.

        :param match:               The AWK expression to match
        """
        cmd = """jcmd | awk '/%s/ { print $1 }'""" % match
        return [int(pid) for pid in self.ssh_capture(cmd, allow_fail=True)]

    def kill_java_processes(self, match, clean_shutdown=True, allow_fail=False):
        """
        Kill all the java processes matching 'match'.

        :param match:               The AWK expression to match
        :param clean_shutdown:      True if we should shut down cleanly with SIGTERM;
                                    false if we should shut down with SIGKILL.
        :param allow_fail:          True if we should throw exceptions if the ssh commands fail.
        """
        cmd = """jcmd | awk '/%s/ { print $1 }'""" % match
        pids = [pid for pid in self.ssh_capture(cmd, allow_fail=True)]

        if clean_shutdown:
            sig = signal.SIGTERM
        else:
            sig = signal.SIGKILL

        for pid in pids:
            self.signal(pid, sig, allow_fail=allow_fail)

    def copy_between(self, src, dest, dest_node):
        """Copy src to dest on dest_node

        :param src: Path to the file or directory we want to copy
        :param dest: The destination path
        :param dest_node: The node to which we want to copy the file/directory

        Note that if src is a directory, this will automatically copy recursively.

        """
        # TODO: if dest is an existing file, what is the behavior?

        temp_dir = tempfile.mkdtemp()

        try:
            # TODO: deal with very unlikely case that src_name matches temp_dir name?
            # TODO: I think this actually works
            local_dest = self._re_anchor_basename(src, temp_dir)

            self.copy_from(src, local_dest)

            dest_node.account.copy_to(local_dest, dest)

        finally:
            if os.path.isdir(temp_dir):
                shutil.rmtree(temp_dir)

    def scp_from(self, src, dest, recursive=False):
        warnings.warn("scp_from is now deprecated. Please use copy_from")
        self.copy_from(src, dest)

    def _re_anchor_basename(self, path, directory):
        """Anchor the basename of path onto the given directory

        Helper for the various copy_* methods.

        :param path: Path to a file or directory. Could be on the driver machine or a worker machine.
        :param directory: Path to a directory. Could be on the driver machine or a worker machine.

        Example::

            path/to/the_basename, another/path/ -> another/path/the_basename
        """
        path_basename = path

        # trim off path separator from end of path
        # this is necessary because os.path.basename of a path ending in a separator is an empty string
        # For example:
        #   os.path.basename("the/path/") == ""
        #   os.path.basename("the/path") == "path"
        if path_basename.endswith(os.path.sep):
            path_basename = path_basename[: -len(os.path.sep)]
        path_basename = os.path.basename(path_basename)

        return os.path.join(directory, path_basename)

    @check_ssh
    def copy_from(self, src, dest):
        if os.path.isdir(dest):
            # dest is an existing directory, so assuming src looks like path/to/src_name,
            # in this case we'll copy as:
            #   path/to/src_name -> dest/src_name
            dest = self._re_anchor_basename(src, dest)

        if self.isfile(src):
            self.sftp_client.get(src, dest)
        elif self.isdir(src):
            # we can now assume dest path looks like: path_that_exists/new_directory
            os.mkdir(dest)

            # for each obj in the remote src directory: if it's a file or a directory, recurse via copy_from
            for obj in self.sftp_client.listdir(src):
                obj_path = os.path.join(src, obj)
                if self.isfile(obj_path) or self.isdir(obj_path):
                    self.copy_from(obj_path, dest)
                else:
                    # TODO what about uncopyable file types?
                    pass

    def scp_to(self, src, dest, recursive=False):
        warnings.warn("scp_to is now deprecated. Please use copy_to")
        self.copy_to(src, dest)

    @check_ssh
    def copy_to(self, src, dest):
        if self.isdir(dest):
            # dest is an existing directory, so assuming src looks like path/to/src_name,
            # in this case we'll copy as:
            #   path/to/src_name -> dest/src_name
            dest = self._re_anchor_basename(src, dest)

        if os.path.isfile(src):
            # local to remote
            self.sftp_client.put(src, dest)
        elif os.path.isdir(src):
            # we can now assume dest path looks like: path_that_exists/new_directory
            self.mkdir(dest)

            # for each obj in the local src directory: if it's a file or a directory, recurse via copy_to
            for obj in os.listdir(src):
                obj_path = os.path.join(src, obj)
                if os.path.isfile(obj_path) or os.path.isdir(obj_path):
                    self.copy_to(obj_path, dest)
                else:
                    # TODO what about uncopyable file types?
                    pass

    @check_ssh
    def islink(self, path):
        try:
            # lstat does not follow symlinks, so we can detect the link itself
            path_stat = self.sftp_client.lstat(path)
            return stat.S_ISLNK(path_stat.st_mode)
        except Exception:
            return False

    @check_ssh
    def isdir(self, path):
        try:
            # stat should follow symlinks
            path_stat = self.sftp_client.stat(path)
            return stat.S_ISDIR(path_stat.st_mode)
        except Exception:
            return False

    @check_ssh
    def exists(self, path):
        """Test that the path exists, but don't follow symlinks."""
        try:
            # lstat does not follow symlinks, so even a broken symlink counts as existing
            self.sftp_client.lstat(path)
            return True
        except IOError:
            return False

    @check_ssh
    def isfile(self, path):
        """Imitates semantics of os.path.isfile

        :param path: Path to the thing to check
        :return: True if path is a file or a symlink to a file, else False. Note False can mean path does not exist.
        """
        try:
            # stat should follow symlinks
            path_stat = self.sftp_client.stat(path)
            return stat.S_ISREG(path_stat.st_mode)
        except Exception:
            return False

    def open(self, path, mode="r"):
        return self.sftp_client.open(path, mode)

    @check_ssh
    def create_file(self, path, contents):
        """Create file at path, with the given contents.

        If the path already exists, it will be overwritten.
        """
        # TODO: what should semantics be if path exists? what actually happens if it already exists?
        # TODO: what happens if the base part of the path does not exist?

        with self.sftp_client.open(path, "w") as f:
            f.write(contents)

    _DEFAULT_PERMISSIONS = 0o755

    @check_ssh
    def mkdir(self, path, mode=_DEFAULT_PERMISSIONS):
        self.sftp_client.mkdir(path, mode)

    def mkdirs(self, path, mode=_DEFAULT_PERMISSIONS):
        self.ssh("mkdir -p %s && chmod %o %s" % (path, mode, path))

    def remove(self, path, allow_fail=False):
        """Remove the given file or directory"""

        if allow_fail:
            cmd = "rm -rf %s" % path
        else:
            cmd = "rm -r %s" % path

        self.ssh(cmd, allow_fail=allow_fail)

    @contextmanager
    def monitor_log(self, log):
        """
        Context manager that returns an object that helps you wait for events to
        occur in a log. This checks the size of the log at the beginning of the
        block and makes a helper object available with convenience methods for
        checking or waiting for a pattern to appear in the log. This will commonly
        be used to start a process, then wait for a log message indicating the
        process is in a ready state.

        See ``LogMonitor`` for more usage information.
        """
        try:
            offset = int(self.ssh_output("wc -c %s" % log).split()[0])
        except Exception:
            offset = 0
        yield LogMonitor(self, log, offset)


class SSHOutputIter(object):
    """Helper class that wraps around an iterable object to provide has_next() in addition to next()"""

    def __init__(self, iter_obj_func, channel_file=None):
        """
        :param iter_obj_func: A function returning a generator that yields lines of stdout from the remote process
        :param channel_file: A paramiko ``ChannelFile`` object
        """
        self.iter_obj_func = iter_obj_func
        self.iter_obj = iter_obj_func()
        self.channel_file = channel_file

        # sentinel is used as an indicator that there is currently nothing cached
        # If self.cached is self.sentinel, then the next object from iter_obj is not yet cached.
        self.sentinel = object()
        self.cached = self.sentinel

    def __iter__(self):
        return self

    def next(self):
        if self.cached is self.sentinel:
            return next(self.iter_obj)
        next_obj = self.cached
        self.cached = self.sentinel
        return next_obj

    __next__ = next

    def has_next(self, timeout_sec=None):
        """Return True if next(iter_obj) would return another object within timeout_sec, else False.

        If timeout_sec is None, next(iter_obj) may block indefinitely.
        """
        assert timeout_sec is None or self.channel_file is not None, "should have descriptor to enforce timeout"

        prev_timeout = None
        if self.cached is self.sentinel:
            if self.channel_file is not None:
                prev_timeout = self.channel_file.channel.gettimeout()

                # when timeout_sec is None, next(iter_obj) will block indefinitely
                self.channel_file.channel.settimeout(timeout_sec)
            try:
                self.cached = next(self.iter_obj, self.sentinel)
            except socket.timeout:
                self.iter_obj = self.iter_obj_func()
                self.cached = self.sentinel
            finally:
                if self.channel_file is not None:
                    # restore preexisting timeout
                    self.channel_file.channel.settimeout(prev_timeout)

        return self.cached is not self.sentinel


class LogMonitor(object):
    """
    Helper class returned by monitor_log. Should be used as::

        with remote_account.monitor_log("/path/to/log") as monitor:
            remote_account.ssh("/command/to/start")
            monitor.wait_until("pattern.*to.*grep.*for", timeout_sec=5)

    to run the command and then wait for the pattern to appear in the log.
    """

    def __init__(self, acct, log, offset):
        self.acct = acct
        self.log = log
        self.offset = offset

    def wait_until(self, pattern, **kwargs):
        """
        Wait until the specified pattern is found in the log, after the initial
        offset recorded when the LogMonitor was created. Additional keyword args
        are passed directly to ``ducktape.utils.util.wait_until``
        """
        return wait_until(
            lambda: self.acct.ssh(
                "tail -c +%d %s | grep '%s'" % (self.offset + 1, self.log, pattern),
                allow_fail=True,
            )
            == 0,
            **kwargs,
        )


class IgnoreMissingHostKeyPolicy(MissingHostKeyPolicy):
    """Policy for ignoring missing host keys.
    Many examples show use of AutoAddPolicy, but this clutters up the known_hosts file unnecessarily.
    """

    def missing_host_key(self, client, hostname, key):
        return


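The look-ahead caching in ``SSHOutputIter.has_next()`` above can be illustrated without any SSH machinery: ``has_next()`` pulls one item from the underlying iterator and parks it until ``__next__`` is called. A minimal local sketch (``PeekIter`` is a hypothetical stand-in; there is no paramiko channel here, so the timeout handling is omitted):

```python
# Minimal sketch of SSHOutputIter's look-ahead caching, using a plain
# local iterator instead of a paramiko channel (no timeout support).
class PeekIter:
    def __init__(self, iterable):
        self._it = iter(iterable)
        self._sentinel = object()  # marks "nothing cached"
        self._cached = self._sentinel

    def __iter__(self):
        return self

    def __next__(self):
        if self._cached is self._sentinel:
            return next(self._it)
        # hand back the parked item and clear the cache
        nxt, self._cached = self._cached, self._sentinel
        return nxt

    def has_next(self):
        if self._cached is self._sentinel:
            # pull one item ahead and park it until __next__ is called
            self._cached = next(self._it, self._sentinel)
        return self._cached is not self._sentinel


lines = PeekIter(["a\n", "b\n"])
print(lines.has_next())  # True; "a\n" is now cached
print(next(lines))       # "a\n" comes from the cache
print(list(lines))       # ["b\n"]
print(lines.has_next())  # False
```

The real class additionally swaps the paramiko channel timeout around the look-ahead read, so a `socket.timeout` makes `has_next()` return False instead of blocking forever.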
================================================
FILE: ducktape/cluster/vagrant.py
================================================
# Copyright 2014 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import absolute_import

import json
import os
import subprocess

from ducktape.json_serializable import DucktapeJSONEncoder

from .json import JsonCluster, make_remote_account
from .remoteaccount import RemoteAccountSSHConfig


class VagrantCluster(JsonCluster):
    """
    An implementation of Cluster that uses a set of VMs created by Vagrant. Because we need hostnames that can be
    advertised, this assumes that the Vagrant VM's name is a routable hostname on all the hosts.

    - If cluster_file is specified in the constructor's kwargs (i.e. passed via command line argument --cluster-file)
      - If cluster_file exists on the filesystem, read cluster info from the file
      - Otherwise, retrieve cluster info via "vagrant ssh-config" from vagrant and write cluster info to cluster_file
    - Otherwise, retrieve cluster info via "vagrant ssh-config" from vagrant
    """

    def __init__(self, *args, make_remote_account_func=make_remote_account, **kwargs) -> None:
        is_read_from_file = False
        self.ssh_exception_checks = kwargs.get("ssh_exception_checks")
        cluster_file = kwargs.get("cluster_file")
        if cluster_file is not None:
            try:
                with open(os.path.abspath(cluster_file)) as fd:
                    cluster_json = json.load(fd)
                is_read_from_file = True
            except IOError:
                # It is OK if the file is not found. Fall back to "vagrant ssh-config" to read the cluster info.
                pass

        if not is_read_from_file:
            cluster_json = {"nodes": self._get_nodes_from_vagrant(make_remote_account_func)}

        super(VagrantCluster, self).__init__(
            cluster_json,
            *args,
            make_remote_account_func=make_remote_account_func,
            **kwargs,
        )

        # If cluster file is specified but the cluster info is not read from it, write the cluster info into the file
        if not is_read_from_file and cluster_file is not None:
            nodes = [
                {
                    "ssh_config": node_account.ssh_config,
                    "externally_routable_ip": node_account.externally_routable_ip,
                }
                for node_account in self._available_accounts
            ]
            cluster_json["nodes"] = nodes
            with open(cluster_file, "w+") as fd:
                json.dump(
                    cluster_json,
                    fd,
                    cls=DucktapeJSONEncoder,
                    indent=2,
                    separators=(",", ": "),
                    sort_keys=True,
                )

        # Release any ssh clients used in querying the nodes for metadata
        for node_account in self._available_accounts:
            node_account.close()

    def _get_nodes_from_vagrant(self, make_remote_account_func):
        ssh_config_info, error = self._vagrant_ssh_config()

        nodes = []
        node_info_arr = ssh_config_info.split("\n\n")
        node_info_arr = [ninfo.strip() for ninfo in node_info_arr if ninfo.strip()]

        for ninfo in node_info_arr:
            ssh_config = RemoteAccountSSHConfig.from_string(ninfo)

            account = None
            try:
                account = make_remote_account_func(ssh_config, ssh_exception_checks=self.ssh_exception_checks)
                externally_routable_ip = account.fetch_externally_routable_ip()
            finally:
                if account:
                    account.close()
                    del account

            nodes.append(
                {
                    "ssh_config": ssh_config.to_json(),
                    "externally_routable_ip": externally_routable_ip,
                }
            )

        return nodes

    def _vagrant_ssh_config(self):
        ssh_config_info, error = subprocess.Popen(
            "vagrant ssh-config",
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            close_fds=True,
            # Force to text mode in py2/3 compatible way
            universal_newlines=True,
        ).communicate()
        return ssh_config_info, error


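`_get_nodes_from_vagrant` above splits the text of `vagrant ssh-config` into per-host blocks on blank lines and parses each block. That split can be sketched standalone; the sample text below is illustrative, not real `vagrant ssh-config` output, and `parse_ssh_config_blocks` is a simplified stand-in for `RemoteAccountSSHConfig.from_string`:

```python
# Illustrative "vagrant ssh-config"-style output: host blocks separated
# by blank lines, each block a series of "Key value" lines.
SAMPLE = """\
Host worker1
  HostName 127.0.0.1
  User vagrant
  Port 2222

Host worker2
  HostName 127.0.0.2
  User vagrant
  Port 2200
"""


def parse_ssh_config_blocks(text):
    # split into host blocks on blank lines, dropping empty chunks
    blocks = [b.strip() for b in text.split("\n\n") if b.strip()]
    hosts = []
    for block in blocks:
        entry = {}
        for line in block.splitlines():
            # each line is "Key value"; normalize keys to lowercase
            key, _, value = line.strip().partition(" ")
            entry[key.lower()] = value
        hosts.append(entry)
    return hosts


hosts = parse_ssh_config_blocks(SAMPLE)
print([h["host"] for h in hosts])  # ['worker1', 'worker2']
```

The real code hands each block to `RemoteAccountSSHConfig.from_string` and then probes the resulting account for its externally routable IP.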
================================================
FILE: ducktape/cluster/windows_remoteaccount.py
================================================
# Copyright 2014 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import base64
import logging
import os

import boto3
import winrm
from botocore.exceptions import ClientError
from Crypto.Cipher import PKCS1_v1_5
from Crypto.PublicKey import RSA

from ducktape.cluster.consts import WINDOWS
from ducktape.cluster.remoteaccount import RemoteAccount, RemoteCommandError


class WindowsRemoteAccount(RemoteAccount):
    """
    Windows remote accounts are currently only supported in EC2. See ``_setup_winrm()`` for how the WinRM password
    is fetched, which is currently specific to AWS.

    The Windows AMI needs to also have an SSH server running to support SSH commands, SCP, and rsync.
    """

    WINRM_USERNAME = "Administrator"

    def __init__(self, *args, **kwargs):
        super(WindowsRemoteAccount, self).__init__(*args, **kwargs)
        self.os = WINDOWS
        self._winrm_client = None

    @property
    def winrm_client(self):
        # TODO: currently this only works in AWS EC2 provisioned by Vagrant. Add support for other environments.

        # check if winrm has already been setup. If yes, return immediately.
        if self._winrm_client:
            return self._winrm_client

        # first get the instance ID of this machine from Vagrant's metadata.
        ec2_instance_id_path = os.path.join(os.getcwd(), ".vagrant", "machines", self.ssh_config.host, "aws", "id")
        try:
            with open(ec2_instance_id_path, "r") as instance_id_file:
                ec2_instance_id = instance_id_file.read().strip()
            if not ec2_instance_id:
                raise Exception
        except Exception:
            raise Exception("Could not extract EC2 instance ID from local file: %s" % ec2_instance_id_path)

        self._log(logging.INFO, "Found EC2 instance id: %s" % ec2_instance_id)

        # then get the encrypted password.
        client = boto3.client("ec2")
        try:
            response = client.get_password_data(InstanceId=ec2_instance_id)
        except ClientError as ce:
            if "InvalidInstanceID.NotFound" in str(ce):
                raise Exception(
                    "The instance id '%s' couldn't be found. Is the correct AWS region configured?" % ec2_instance_id
                )
            else:
                raise ce

        self._log(
            logging.INFO,
            "Fetched encrypted winrm password and will decrypt with private key: %s" % self.ssh_config.identityfile,
        )

        # then decrypt the password using the private key.
        with open(self.ssh_config.identityfile, "r") as key_file:
            key = key_file.read()
        rsa_key = RSA.importKey(key)
        cipher = PKCS1_v1_5.new(rsa_key)
        winrm_password = cipher.decrypt(base64.b64decode(response["PasswordData"]), None)
        self._winrm_client = winrm.Session(
            self.ssh_config.hostname,
            auth=(WindowsRemoteAccount.WINRM_USERNAME, winrm_password),
        )

        return self._winrm_client

    def fetch_externally_routable_ip(self, is_aws=None):
        # EC2 windows machines aren't given an externally routable IP. Use the hostname instead.
        return self.ssh_config.hostname

    def run_winrm_command(self, cmd, allow_fail=False):
        self._log(logging.DEBUG, "Running winrm command: %s" % cmd)
        result = self.winrm_client.run_cmd(cmd)
        if not allow_fail and result.status_code != 0:
            raise RemoteCommandError(self, cmd, result.status_code, result.std_err)
        return result.status_code


================================================
FILE: ducktape/command_line/__init__.py
================================================


================================================
FILE: ducktape/command_line/defaults.py
================================================
# Copyright 2015 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os


class ConsoleDefaults(object):
    # Directory for project-specific ducktape configs and runtime data
    DUCKTAPE_DIR = ".ducktape"

    # Store various bookkeeping data here
    METADATA_DIR = os.path.join(DUCKTAPE_DIR, "metadata")

    # Default path, relative to current project directory, to the project's ducktape config file
    PROJECT_CONFIG_FILE = os.path.join(DUCKTAPE_DIR, "config")

    # Default path to the user-specific config file
    USER_CONFIG_FILE = os.path.join("~", DUCKTAPE_DIR, "config")

    # Default cluster implementation
    CLUSTER_TYPE = "ducktape.cluster.vagrant.VagrantCluster"

    # Default path, relative to current project directory, to the cluster file
    CLUSTER_FILE = os.path.join(DUCKTAPE_DIR, "cluster.json")

    # Track the last-used session_id here
    SESSION_ID_FILE = os.path.join(METADATA_DIR, "session_id")

    # Folders with test reports, logs, etc all are created in this directory
    RESULTS_ROOT_DIRECTORY = "./results"

    SESSION_LOG_FORMATTER = "[%(levelname)s:%(asctime)s]: %(message)s"
    TEST_LOG_FORMATTER = "[%(levelname)-5s - %(asctime)s - %(module)s - %(funcName)s - lineno:%(lineno)s]: %(message)s"

    # Log this to indicate a test is misbehaving to help end user find which test is at fault
    BAD_TEST_MESSAGE = "BAD_TEST"

    # Ducktape will try to pick a random open port in the range [TEST_DRIVER_PORT_MIN, TEST_DRIVER_PORT_MAX]
    # this range is *inclusive*
    TEST_DRIVER_MIN_PORT = 5556
    TEST_DRIVER_MAX_PORT = 5656


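The comment above stresses that the `[TEST_DRIVER_MIN_PORT, TEST_DRIVER_MAX_PORT]` range is inclusive, which matters because `random.randint` is also inclusive on both ends (unlike `range` or `random.randrange`). A small sketch, where `pick_driver_port` is a hypothetical helper, not part of ducktape:

```python
import random

# Mirror the defaults above; the range is *inclusive* on both ends,
# which matches random.randint's semantics exactly.
TEST_DRIVER_MIN_PORT = 5556
TEST_DRIVER_MAX_PORT = 5656


def pick_driver_port():
    # randint(a, b) returns N with a <= N <= b, so both endpoints are eligible
    return random.randint(TEST_DRIVER_MIN_PORT, TEST_DRIVER_MAX_PORT)


port = pick_driver_port()
print(TEST_DRIVER_MIN_PORT <= port <= TEST_DRIVER_MAX_PORT)  # True
```

Using `random.randrange(TEST_DRIVER_MIN_PORT, TEST_DRIVER_MAX_PORT)` here would silently exclude port 5656.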
================================================
FILE: ducktape/command_line/main.py
================================================
# Copyright 2015 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function

import importlib
import json
import os
import random
import sys
import traceback

from ducktape.command_line.defaults import ConsoleDefaults
from ducktape.command_line.parse_args import parse_args
from ducktape.tests.loader import LoaderException, TestLoader
from ducktape.tests.loggermaker import close_logger
from ducktape.tests.reporter import (
    FailedTestSymbolReporter,
    HTMLSummaryReporter,
    JSONReporter,
    JUnitReporter,
    SimpleFileSummaryReporter,
    SimpleStdoutSummaryReporter,
)
from ducktape.tests.runner import TestRunner
from ducktape.tests.session import (
    SessionContext,
    SessionLoggerMaker,
    generate_results_dir,
    generate_session_id,
)
from ducktape.utils import persistence
from ducktape.utils.local_filesystem_utils import mkdir_p
from ducktape.utils.util import load_function


def get_user_defined_globals(globals_str):
    """Parse user-defined globals into an immutable dict using globals_str

    :param globals_str: Either a file, in which case, attempt to open the file and parse the contents as JSON,
        or a JSON string representing a JSON object. The parsed JSON must represent a collection of key-value pairs,
        i.e. a python dict.
    :return: dict containing user-defined global variables
    """
    if globals_str is None:
        return persistence.make_dict()

    from_file = False
    if os.path.isfile(globals_str):
        # The string appears to be a file, so try loading JSON from the file.
        # This may raise an IOError if the file can't be read or a ValueError if the contents of the file
        # cannot be parsed.
        with open(globals_str, "r") as fp:
            user_globals = json.load(fp)
        from_file = True
    else:
        try:
            # try parsing directly as json if it doesn't seem to be a file
            user_globals = json.loads(globals_str)
        except ValueError as ve:
            message = str(ve)
            message += "\nglobals parameter %s is neither valid JSON nor a valid path to a JSON file." % globals_str
            raise ValueError(message)

    # Now check that the parsed JSON is a dictionary
    if not isinstance(user_globals, dict):
        if from_file:
            message = "The JSON contained in file %s must parse to a dict. " % globals_str
        else:
            message = "JSON string referred to by globals parameter must parse to a dict. "
        message += "I.e. the contents of the JSON must be an object, not an array or primitive. "
        message += "Instead found %s, which parsed to %s" % (
            str(user_globals),
            type(user_globals),
        )

        raise ValueError(message)

    # create the immutable dict
    return persistence.make_dict(**user_globals)


def setup_results_directory(new_results_dir):
    """Make directory in which results will be stored"""
    if os.path.exists(new_results_dir):
        raise Exception("A file or directory at %s already exists. Exiting without overwriting." % new_results_dir)
    mkdir_p(new_results_dir)


def update_latest_symlink(results_root, new_results_dir):
    """Create or update symlink "latest" which points to the new test results directory"""
    latest_test_dir = os.path.join(results_root, "latest")
    if os.path.islink(latest_test_dir):
        os.unlink(latest_test_dir)
    os.symlink(new_results_dir, latest_test_dir)


def main():
    """Ducktape entry point. This contains top level logic for ducktape command-line program which does the following:

    Discover tests
    Initialize cluster for distributed services
    Run tests
    Report a summary of all results
    """
    args_dict = parse_args(sys.argv[1:])

    injected_args = None
    if args_dict["parameters"]:
        try:
            injected_args = json.loads(args_dict["parameters"])
        except ValueError as e:
            print("parameters are not valid json: " + str(e))
            sys.exit(1)

    args_dict["globals"] = get_user_defined_globals(args_dict.get("globals"))

    # Make .ducktape directory where metadata such as the last used session_id is stored
    if not os.path.isdir(ConsoleDefaults.METADATA_DIR):
        os.makedirs(ConsoleDefaults.METADATA_DIR)

    # Generate a shared 'global' identifier for this test run and create the directory
    # in which all test results will be stored
    session_id = generate_session_id(ConsoleDefaults.SESSION_ID_FILE)
    results_dir = generate_results_dir(args_dict["results_root"], session_id)
    setup_results_directory(results_dir)

    session_context = SessionContext(session_id=session_id, results_dir=results_dir, **args_dict)
    session_logger = SessionLoggerMaker(session_context).logger
    for k, v in args_dict.items():
        session_logger.debug("Configuration: %s=%s", k, v)

    # Discover and load tests to be run
    loader = TestLoader(
        session_context,
        session_logger,
        repeat=args_dict["repeat"],
        injected_args=injected_args,
        subset=args_dict["subset"],
        subsets=args_dict["subsets"],
        historical_report=args_dict["historical_report"],
    )
    try:
        tests = loader.load(args_dict["test_path"], excluded_test_symbols=args_dict["exclude"])
    except LoaderException as e:
        print("Failed while trying to discover tests: {}".format(e))
        sys.exit(1)

    expected_test_count = len(tests)
    session_logger.info(f"Discovered {expected_test_count} tests to run")

    if args_dict["collect_only"]:
        print("Collected %d tests:" % expected_test_count)
        for test in tests:
            print("    " + str(test))
        sys.exit(0)

    if args_dict["collect_num_nodes"]:
        total_nodes = sum(test.expected_num_nodes for test in tests)
        print(total_nodes)
        sys.exit(0)

    if args_dict["sample"]:
        print("Running a sample of %d tests" % args_dict["sample"])
        try:
            tests = random.sample(tests, args_dict["sample"])
        except ValueError as e:
            if args_dict["sample"] > len(tests):
                print(
                    "sample size %d greater than number of tests %d; running all tests"
                    % (args_dict["sample"], len(tests))
                )
            else:
                print("invalid sample size (%s), running all tests" % e)

    # Initializing the cluster is slow, so do so only if
    # tests are sure to be run
    try:
        (cluster_mod_name, cluster_class_name) = args_dict["cluster"].rsplit(".", 1)
        cluster_mod = importlib.import_module(cluster_mod_name)
        cluster_class = getattr(cluster_mod, cluster_class_name)

        cluster_kwargs = {"cluster_file": args_dict["cluster_file"]}
        checker_function_names = args_dict["ssh_checker_function"]
        if checker_function_names:
            checkers = [load_function(func_path) for func_path in checker_function_names]
            if checkers:
                cluster_kwargs["ssh_exception_checks"] = checkers
        cluster = cluster_class(**cluster_kwargs)
        for ctx in tests:
            # Note that we're attaching a reference to cluster
            # only after test context objects have been instantiated
            ctx.cluster = cluster
    except Exception:
        print("Failed to load cluster: ", str(sys.exc_info()[0]))
        print(traceback.format_exc(limit=16))
        sys.exit(1)

    # Run the tests
    deflake_num = args_dict["deflake"]
    if deflake_num < 1:
        session_logger.warning("specified number of deflake runs is less than 1; running without deflake.")
    deflake_num = max(1, deflake_num)
    runner = TestRunner(cluster, session_context, session_logger, tests, deflake_num)
    test_results = runner.run_all_tests()

    # Report results
    reporters = [
        SimpleStdoutSummaryReporter(test_results),
        SimpleFileSummaryReporter(test_results),
        HTMLSummaryReporter(test_results, expected_test_count),
        JSONReporter(test_results),
        JUnitReporter(test_results),
        FailedTestSymbolReporter(test_results),
    ]

    for r in reporters:
        r.report()

    update_latest_symlink(args_dict["results_root"], results_dir)

    if len(test_results) < expected_test_count:
        session_logger.warning(
            f"All tests were NOT run. Expected {expected_test_count} tests, only {len(test_results)} were run."
        )
        close_logger(session_logger)
        sys.exit(1)

    close_logger(session_logger)
    if not test_results.get_aggregate_success():
        # Non-zero exit if at least one test failed
        sys.exit(1)


================================================
FILE: ducktape/command_line/parse_args.py
================================================
# Copyright 2016 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function

import argparse
import itertools
import os
import sys
from typing import Dict, List, Optional, Union

from ducktape.command_line.defaults import ConsoleDefaults
from ducktape.utils.util import ducktape_version


def create_ducktape_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Discover and run your tests")
    parser.add_argument(
        "test_path",
        metavar="test_path",
        type=str,
        nargs="*",
        default=[os.getcwd()],
        help="One or more test identifiers or test suite paths to execute",
    )
    parser.add_argument(
        "--exclude",
        type=str,
        nargs="*",
        default=None,
        help="one or more space-delimited strings indicating which tests to exclude",
    )
    parser.add_argument(
        "--collect-only",
        action="store_true",
        help="display collected tests, but do not run.",
    )
    parser.add_argument(
        "--collect-num-nodes",
        action="store_true",
        help="display total number of nodes requested by all tests, but do not run anything.",
    )
    parser.add_argument("--debug", action="store_true", help="pipe more verbose test output to stdout.")
    parser.add_argument(
        "--config-file",
        action="store",
        default=ConsoleDefaults.USER_CONFIG_FILE,
        help="path to project-specific configuration file.",
    )
    parser.add_argument(
        "--compress",
        action="store_true",
        help="compress remote logs before collection.",
    )
    parser.add_argument(
        "--cluster",
        action="store",
        default=ConsoleDefaults.CLUSTER_TYPE,
        help="cluster class to use to allocate nodes for tests.",
    )
    parser.add_argument(
        "--default-num-nodes",
        action="store",
        type=int,
        default=None,
        help="Global hint for cluster usage. A test without the @cluster annotation will "
        "default to this value for expected cluster usage.",
    )
    parser.add_argument(
        "--cluster-file",
        action="store",
        default=None,
        help="path to a json file which provides information needed to initialize a json cluster. "
        "The file is used to read/write cached cluster info if "
        "cluster is ducktape.cluster.vagrant.VagrantCluster.",
    )
    parser.add_argument(
        "--results-root",
        action="store",
        default=ConsoleDefaults.RESULTS_ROOT_DIRECTORY,
        help="path to custom root results directory. Running ducktape with this root "
        "specified will result in new test results being stored in a subdirectory of "
        "this root directory.",
    )
    parser.add_argument("--exit-first", action="store_true", help="exit after first failure")
    parser.add_argument(
        "--no-teardown",
        action="store_true",
        help="don't kill running processes or remove log files when a test has finished running. "
        "This is primarily useful for test developers who want to interact with running "
        "services after a test has run.",
    )
    parser.add_argument("--version", action="store_true", help="display version")
    parser.add_argument(
        "--parameters",
        action="store",
        help="inject these arguments into the specified test(s). Specify parameters as a JSON string.",
    )
    parser.add_argument(
        "--globals",
        action="store",
        help="user-defined globals go here. "
        "This can be a file containing a JSON object, or a string representing a JSON object.",
    )
    parser.add_argument(
        "--max-parallel",
        action="store",
        type=int,
        default=1,
        help="Upper bound on number of tests run simultaneously.",
    )
    parser.add_argument(
        "--repeat",
        action="store",
        type=int,
        default=1,
        help="Use this flag to repeat all discovered tests the given number of times.",
    )
    parser.add_argument(
        "--subsets",
        action="store",
        type=int,
        default=1,
        help="Number of subsets of tests to statically break the tests into to allow for parallel "
        "execution without coordination between test runner processes.",
    )
    parser.add_argument(
        "--subset",
        action="store",
        type=int,
        default=0,
        help="Which subset of the tests to run, based on the breakdown using the parameter for --subsets",
    )
    parser.add_argument(
        "--historical-report",
        action="store",
        type=str,
        help="URL of a JSON report file containing stats from a previous test run. If specified, "
        "this will be used when creating subsets of tests to divide evenly by total run time "
        "instead of by number of tests.",
    )
    parser.add_argument(
        "--skip-nodes-allocation",
        action="store_true",
        help="Use this flag to skip allocating "
        "nodes for services. Can be used when running specific tests against an already-running platform",
    )
    parser.add_argument(
        "--sample",
        action="store",
        type=int,
        help="The size of a random test sample to run",
    )
    parser.add_argument(
        "--fail-bad-cluster-utilization",
        action="store_true",
        help="Fail a test if the test declared that it needs more nodes than it actually used. "
        "E.g. if the test had `@cluster(num_nodes=10)` annotation, "
        "but never used more than 5 nodes during its execution.",
    )
    parser.add_argument(
        "--fail-greedy-tests",
        action="store_true",
        help="Fail a test if it has no @cluster annotation "
        "or if @cluster annotation is empty. "
        "You can still specify 0-sized cluster explicitly using either num_nodes=0 "
        "or cluster_spec=ClusterSpec.empty()",
    )
    parser.add_argument(
        "--test-runner-timeout",
        action="store",
        type=int,
        default=1800000,
        help="Maximum time in milliseconds the test runner waits for communication from a test "
        "before a timeout error occurs. Default is 30 minutes (1800000 ms)",
    )
    parser.add_argument(
        "--ssh-checker-function",
        action="store",
        type=str,
        nargs="+",
        help="Python module path(s) to a function that takes an exception and a remote account, "
        "called when an ssh error occurs; this can provide extra "
        "validation or better logging. Specify any "
        "number of module paths after this flag to be called.",
    )
    parser.add_argument(
        "--deflake",
        action="store",
        type=int,
        default=1,
        help="the total number of times a failed test should be run (including its initial run) "
        "to determine flakiness. When not present, deflake will not be used, "
        "and a test will be marked as either passed or failed. "
        "When enabled, a test will be marked as flaky if it passes on any of the reruns",
    )
    parser.add_argument(
        "--enable-jvm-logs",
        action="store_true",
        help="Enable automatic JVM log collection for Java-based services (Kafka, ZooKeeper, Connect, etc.)",
    )
    return parser


def get_user_config_file(args: List[str]) -> str:
    """Helper function to get the specified (or default) user config file.
    :return: path to the config file.
    """
    parser = create_ducktape_parser()
    config_file = vars(parser.parse_args(args))["config_file"]
    assert config_file is not None
    return os.path.expanduser(config_file)


def config_file_to_args_list(config_file):
    """Parse in contents of config file, and return a list of command-line options parseable by the ducktape parser.

    Skip whitespace lines and comments (lines prefixed by "#")
    """
    if config_file is None:
        raise RuntimeError("config_file is None")

    # Read in configuration, but ignore empty lines and comments
    with open(config_file) as f:
        config_lines = [line for line in f if len(line.strip()) > 0 and line.lstrip()[0] != "#"]

    return list(itertools.chain(*[line.split() for line in config_lines]))
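
The comment-and-whitespace filtering above reduces to a small transformation over raw lines. A minimal sketch using an in-memory list instead of a file (`lines_to_args` is an illustrative name, not ducktape API):

```python
import itertools


def lines_to_args(lines):
    # Keep non-empty, non-comment lines and flatten their whitespace-split
    # tokens, mirroring config_file_to_args_list above.
    kept = [ln for ln in lines if ln.strip() and ln.lstrip()[0] != "#"]
    return list(itertools.chain(*[ln.split() for ln in kept]))
```

So a config file containing a comment, a blank line, `--debug`, and `--cluster foo.Bar` becomes the argv-style list `["--debug", "--cluster", "foo.Bar"]`.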


def parse_non_default_args(parser: argparse.ArgumentParser, defaults: dict, args: list) -> dict:
    """
    Parse and remove default args from a list of args, and return the dict of the parsed args.
    """
    parsed_args = vars(parser.parse_args(args))

    # remove defaults
    for key, value in defaults.items():
        if parsed_args[key] == value:
            del parsed_args[key]

    return parsed_args


def parse_args(
    args: List[str],
) -> Dict[str, Optional[Union[List[str], bool, str, int]]]:
    """Parse in command-line and config file options.

    Command line arguments have the highest priority, then user configs specified in ~/.ducktape/config, and finally
    project configs specified in <ducktape_dir>/config.
    """

    parser = create_ducktape_parser()

    if len(args) == 0:
        # Show help if there are no arguments
        parser.print_help()
        sys.exit(0)

    # Collect arguments from project config file, user config file, and command line
    # later arguments supersede earlier arguments
    parsed_args_list = []

    # First collect all the default values
    defaults = vars(parser.parse_args([]))

    project_config_file = ConsoleDefaults.PROJECT_CONFIG_FILE
    # Load all non-default args from project config file.
    if os.path.exists(project_config_file):
        parsed_args_list.append(parse_non_default_args(parser, defaults, config_file_to_args_list(project_config_file)))

    # Load all non-default args from user config file.
    user_config_file = get_user_config_file(args)
    if os.path.exists(user_config_file):
        parsed_args_list.append(parse_non_default_args(parser, defaults, config_file_to_args_list(user_config_file)))

    # Load all non-default args from the command line.
    parsed_args_list.append(parse_non_default_args(parser, defaults, args))

    # Don't need to copy, done with the defaults dict.
    # Start with the default args, and layer on changes.
    parsed_args_dict = defaults
    for parsed_args in parsed_args_list:
        parsed_args_dict.update(parsed_args)

    if parsed_args_dict["version"]:
        print(ducktape_version())
        sys.exit(0)
    return parsed_args_dict
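
The precedence described in the docstring (defaults, then project config, then user config, then command line) is plain dict layering. A minimal sketch with hypothetical values:

```python
# Hypothetical layered argument sources, lowest to highest priority.
defaults = {"debug": False, "cluster": "default.Cluster", "repeat": 1}
project_cfg = {"cluster": "project.Cluster"}   # from <ducktape_dir>/config
user_cfg = {"repeat": 3}                       # from ~/.ducktape/config
cli = {"debug": True}                          # from the command line

# Start from the defaults and let each later layer overwrite earlier ones,
# as parse_args does with its parsed_args_list loop.
merged = dict(defaults)
for layer in (project_cfg, user_cfg, cli):
    merged.update(layer)
```

Each key ends up with the value from the highest-priority layer that set it.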


================================================
FILE: ducktape/errors.py
================================================
# Copyright 2015 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


class DucktapeError(RuntimeError):
    pass


class TimeoutError(DucktapeError):
    pass


================================================
FILE: ducktape/json_serializable.py
================================================
# Copyright 2016 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


from json import JSONEncoder


class DucktapeJSONEncoder(JSONEncoder):
    def default(self, obj):
        if hasattr(obj, "to_json"):
            # to_json may return a dict or array or other naturally json serializable object
            # this allows serialization to work correctly on nested items
            return obj.to_json()
        else:
            # Let the base class default method raise the TypeError
            return JSONEncoder.default(self, obj)
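
A self-contained sketch of how the `to_json` hook is exercised: any object exposing a `to_json` method serializes through it, even when nested inside ordinary dicts. The encoder body is reproduced here so the example runs standalone; `TestResultStub` is a hypothetical stand-in for ducktape's result objects:

```python
import json
from json import JSONEncoder


class DucktapeJSONEncoder(JSONEncoder):
    def default(self, obj):
        # Delegate to the object's own to_json if it has one;
        # otherwise let the base class raise TypeError.
        if hasattr(obj, "to_json"):
            return obj.to_json()
        return JSONEncoder.default(self, obj)


class TestResultStub:
    # Hypothetical result object; real ducktape classes define to_json similarly.
    def to_json(self):
        return {"status": "PASS"}


serialized = json.dumps({"result": TestResultStub()}, cls=DucktapeJSONEncoder)
```

Passing `cls=DucktapeJSONEncoder` to `json.dumps` is what makes non-serializable objects fall through to `default`.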


================================================
FILE: ducktape/jvm_logging.py
================================================
# Copyright 2024 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
JVM logging support for Ducktape.

This module provides automatic JVM log collection for Java-based services without requiring any
code changes to services or tests.

Please note: JVM options are prepended to the SSH command. If an option is also set in the command itself, it
overrides the option injected by this module.
For example, if a test or service injects its own -Xlog options, they may override the GC logging options injected
by this module. In practice, services should work as expected.
"""

import os
import types


class JVMLogger:
    """Handles JVM logging configuration and enablement for services."""

    def __init__(self, log_dir="/mnt/jvm_logs"):
        """
        Initialize JVM logger.

        :param log_dir: Directory for JVM logs on worker nodes
        """
        self.log_dir = log_dir

    def enable_for_service(self, service):
        """
        Enable JVM logging for a service instance. Adds JVM log definitions and helper methods to the service.
        Wraps start_node so that when nodes are started, JVM logging is set up and JVM options are injected via SSH,
        and wraps clean_node to clean up JVM logs after node cleanup.
        :param service: Service instance to enable JVM logging for
        """
        # Store reference to JVMLogger instance for use in closures
        jvm_logger = self

        # Add JVM log definitions
        jvm_logs = {
            "jvm_gc_log": {"path": os.path.join(jvm_logger.log_dir, "gc.log"), "collect_default": True},
            "jvm_stdout_stderr": {"path": os.path.join(jvm_logger.log_dir, "jvm.log"), "collect_default": True},
            "jvm_heap_dump": {
                "path": os.path.join(jvm_logger.log_dir, "heap_dump.hprof"),
                "collect_default": False,  # Only on failure
            },
        }

        # Initialize logs dict if needed
        if not hasattr(service, "logs") or service.logs is None:
            service.logs = {}

        # Merge with existing logs
        service.logs.update(jvm_logs)

        # Add helper methods
        service.JVM_LOG_DIR = jvm_logger.log_dir
        service.jvm_options = lambda node: jvm_logger._get_jvm_options()
        service.setup_jvm_logging = lambda node: jvm_logger._setup_on_node(node)
        service.clean_jvm_logs = lambda node: jvm_logger._cleanup_on_node(node)

        # Wrap start_node to automatically setup JVM logging and wrap SSH
        original_start_node = service.start_node

        def wrapped_start_node(self, node, *args, **kwargs):
            jvm_logger._setup_on_node(node)  # Setup JVM log directory

            # Wrap all SSH methods to inject JDK_JAVA_OPTIONS, wrap once and keep active for entire service lifecycle
            if not hasattr(node.account, "original_ssh"):
                original_ssh = node.account.ssh
                original_ssh_capture = node.account.ssh_capture
                original_ssh_output = node.account.ssh_output

                node.account.original_ssh = original_ssh
                node.account.original_ssh_capture = original_ssh_capture
                node.account.original_ssh_output = original_ssh_output

                jvm_opts = jvm_logger._get_jvm_options()
                # Use env command and append to existing JDK_JAVA_OPTIONS
                env_prefix = f'env JDK_JAVA_OPTIONS="${{JDK_JAVA_OPTIONS:-}} {jvm_opts}" '

                def wrapped_ssh(cmd, allow_fail=False):
                    return original_ssh(env_prefix + cmd, allow_fail=allow_fail)

                def wrapped_ssh_capture(cmd, allow_fail=False, callback=None, combine_stderr=True, timeout_sec=None):
                    return original_ssh_capture(
                        env_prefix + cmd,
                        allow_fail=allow_fail,
                        callback=callback,
                        combine_stderr=combine_stderr,
                        timeout_sec=timeout_sec,
                    )

                def wrapped_ssh_output(cmd, allow_fail=False, combine_stderr=True, timeout_sec=None):
                    return original_ssh_output(
                        env_prefix + cmd, allow_fail=allow_fail, combine_stderr=combine_stderr, timeout_sec=timeout_sec
                    )

                node.account.ssh = wrapped_ssh
                node.account.ssh_capture = wrapped_ssh_capture
                node.account.ssh_output = wrapped_ssh_output

            return original_start_node(node, *args, **kwargs)

        # Bind the wrapper function to the service object
        service.start_node = types.MethodType(wrapped_start_node, service)

        # Wrap clean_node to cleanup JVM logs and restore SSH methods
        original_clean_node = service.clean_node

        def wrapped_clean_node(self, node, *args, **kwargs):
            result = original_clean_node(node, *args, **kwargs)
            jvm_logger._cleanup_on_node(node)

            # Restore original SSH methods
            if hasattr(node.account, "original_ssh"):
                node.account.ssh = node.account.original_ssh
                node.account.ssh_capture = node.account.original_ssh_capture
                node.account.ssh_output = node.account.original_ssh_output
                del node.account.original_ssh
                del node.account.original_ssh_capture
                del node.account.original_ssh_output

            return result

        # Bind the wrapper function to the service instance
        service.clean_node = types.MethodType(wrapped_clean_node, service)

    def _get_jvm_options(self):
        """Generate JVM options string for logging."""
        gc_log = os.path.join(self.log_dir, "gc.log")
        heap_dump = os.path.join(self.log_dir, "heap_dump.hprof")
        error_log = os.path.join(self.log_dir, "hs_err_pid%p.log")
        jvm_log = os.path.join(self.log_dir, "jvm.log")

        jvm_logging_opts = [
            "-Xlog:disable",  # Suppress all default JVM console logging to prevent output pollution
            f"-Xlog:gc*:file={gc_log}:time,uptime,level,tags",  # GC activity with timestamps
            "-XX:+HeapDumpOnOutOfMemoryError",  # Generate heap dump on OOM
            f"-XX:HeapDumpPath={heap_dump}",  # Heap dump file location
            f"-Xlog:safepoint=info:file={jvm_log}:time,uptime,level,tags",  # Safepoint pause events
            f"-Xlog:class+load=info:file={jvm_log}:time,uptime,level,tags",  # Class loading events
            f"-XX:ErrorFile={error_log}",  # Fatal error log location (JVM crashes)
            "-XX:NativeMemoryTracking=summary",  # Track native memory usage
            f"-Xlog:jit+compilation=info:file={jvm_log}:time,uptime,level,tags",  # JIT compilation events
        ]

        return " ".join(jvm_logging_opts)

    def _setup_on_node(self, node):
        """Create JVM log directory on worker node."""
        node.account.ssh(f"mkdir -p {self.log_dir}")
        node.account.ssh(f"chmod 755 {self.log_dir}")

    def _cleanup_on_node(self, node):
        """Clean JVM logs from worker node."""
        node.account.ssh(f"rm -rf {self.log_dir}", allow_fail=True)
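
The SSH wrapping performed in `wrapped_start_node` boils down to prepending an `env` prefix to every command. A minimal sketch with a stub account (`StubAccount` and `wrap_ssh_with_env` are illustrative names, not ducktape API):

```python
class StubAccount:
    """Stand-in for a remote account; records commands instead of running them."""

    def __init__(self):
        self.commands = []

    def ssh(self, cmd, allow_fail=False):
        self.commands.append(cmd)


def wrap_ssh_with_env(account, jvm_opts):
    # Mirror the wrapping above: keep a reference to the original method,
    # then replace it with one that appends to any JDK_JAVA_OPTIONS
    # already set on the remote side before running the command.
    original_ssh = account.ssh
    prefix = f'env JDK_JAVA_OPTIONS="${{JDK_JAVA_OPTIONS:-}} {jvm_opts}" '
    account.original_ssh = original_ssh
    account.ssh = lambda cmd, allow_fail=False: original_ssh(prefix + cmd, allow_fail=allow_fail)


account = StubAccount()
wrap_ssh_with_env(account, "-Xlog:gc*:file=/mnt/jvm_logs/gc.log")
account.ssh("bin/kafka-server-start.sh")
```

Keeping `original_ssh` on the account is what lets `wrapped_clean_node` restore the unwrapped methods later.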


================================================
FILE: ducktape/mark/__init__.py
================================================
# Copyright 2015 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from ._mark import matrix  # NOQA
from ._mark import defaults, env, ignore, ignored, is_env, parametrize, parametrized

__all__ = [
    "matrix",
    "defaults",
    "env",
    "ignore",
    "ignored",
    "is_env",
    "parametrize",
    "parametrized",
]


================================================
FILE: ducktape/mark/_mark.py
================================================
# Copyright 2015 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import functools
import itertools
import os

from ducktape.errors import DucktapeError


class Mark(object):
    """Common base class for "marks" which may be applied to test functions/methods."""

    @staticmethod
    def mark(fun, mark):
        """Attach a tag indicating that fun has been marked with the given mark

        Marking fun updates it with two attributes:

        - marks:      a list of mark objects applied to the function. These may be strings or objects subclassing Mark.
                      We use a list because, in some cases, it is useful to preserve ordering.
        - mark_names: a set of names of marks applied to the function
        """
        # Update fun.marks
        if hasattr(fun, "marks"):
            fun.marks.append(mark)
        else:
            fun.__dict__["marks"] = [mark]

        # Update fun.mark_names
        if hasattr(fun, "mark_names"):
            fun.mark_names.add(mark.name)
        else:
            fun.__dict__["mark_names"] = {mark.name}

    @staticmethod
    def marked(f, mark):
        if f is None:
            return False

        if not hasattr(f, "mark_names"):
            return False

        return mark.name in f.mark_names

    @staticmethod
    def clear_marks(f):
        if not hasattr(f, "marks"):
            return

        del f.__dict__["marks"]
        del f.__dict__["mark_names"]

    @property
    def name(self):
        return "MARK"

    def apply(self, seed_context, context_list):
        raise NotImplementedError("Subclasses should implement apply")

    def __eq__(self, other):
        if not isinstance(self, type(other)):
            return False

        return self.name == other.name
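
The two attributes described in `Mark.mark`'s docstring can be seen with a toy mark attached to a plain function. This is a standalone sketch of the same bookkeeping, not using ducktape's classes (`attach_mark` is an illustrative name):

```python
def attach_mark(fun, mark_name):
    # Same bookkeeping as Mark.mark above: an ordered list of marks
    # plus a set of names for fast membership tests.
    fun.__dict__.setdefault("marks", []).append(mark_name)
    fun.__dict__.setdefault("mark_names", set()).add(mark_name)


def my_test():
    pass


attach_mark(my_test, "PARAMETRIZE")
attach_mark(my_test, "IGNORE")
```

After this, `"IGNORE" in my_test.mark_names` is the cheap membership check that `Mark.marked` relies on.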


class Ignore(Mark):
    """Ignore a specific parametrization of test."""

    def __init__(self, **kwargs):
        # Ignore tests with injected_args matching self.injected_args
        self.injected_args = kwargs

    @property
    def name(self):
        return "IGNORE"

    def apply(self, seed_context, context_list):
        assert len(context_list) > 0, "ignore annotation is not being applied to any test cases"
        for ctx in context_list:
            ctx.ignore = ctx.ignore or self.injected_args is None or self.injected_args == ctx.injected_args
        return context_list

    def __eq__(self, other):
        return super(Ignore, self).__eq__(other) and self.injected_args == other.injected_args


class IgnoreAll(Ignore):
    """This mark signals to ignore all parametrizations of a test."""

    def __init__(self):
        super(IgnoreAll, self).__init__()
        self.injected_args = None


class Matrix(Mark):
    """Parametrize with a matrix of arguments.
    Assume each value in self.injected_args is iterable.
    """

    def __init__(self, **kwargs):
        self.injected_args = kwargs
        for k in self.injected_args:
            try:
                iter(self.injected_args[k])
            except TypeError as te:
                raise DucktapeError("Expected all values in @matrix decorator to be iterable: " + str(te))

    @property
    def name(self):
        return "MATRIX"

    def apply(self, seed_context, context_list):
        for injected_args in cartesian_product_dict(self.injected_args):
            injected_fun = _inject(**injected_args)(seed_context.function)
            context_list.insert(0, seed_context.copy(function=injected_fun, injected_args=injected_args))

        return context_list

    def __eq__(self, other):
        return super(Matrix, self).__eq__(other) and self.injected_args == other.injected_args
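
`cartesian_product_dict`, used in `Matrix.apply`, is defined elsewhere in this module; a minimal sketch of the expansion it performs, built on `itertools.product` (the function name here is illustrative):

```python
import itertools


def cartesian_product_dict_sketch(d):
    # Expand a dict of iterables into one dict per combination, e.g.
    # {"x": [1, 2], "y": ["a"]} -> {"x": 1, "y": "a"}, {"x": 2, "y": "a"}.
    keys = list(d)
    for combo in itertools.product(*(d[k] for k in keys)):
        yield dict(zip(keys, combo))


combos = list(cartesian_product_dict_sketch({"x": [1, 2], "y": ["a"]}))
```

Each yielded dict plays the role of one `injected_args` set for a parametrized test context.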


class Defaults(Mark):
    """Parametrize with a default matrix of arguments on top of existing parametrizations.
    Assume each value in self.injected_args is iterable.
    """

    def __init__(self, **kwargs):
        self.injected_args = kwargs
        for k in self.injected_args:
            try:
                iter(self.injected_args[k])
            except TypeError as te:
                raise DucktapeError("Expected all values in @defaults decorator to be iterable: " + str(te))

    @property
    def name(self):
        return "DEFAULTS"

    def apply(self, seed_context, context_list):
        new_context_list = []
        if context_list:
            for ctx in context_list:
                for injected_args in cartesian_product_dict(
                    {arg: self.injected_args[arg] for arg in self.injected_args if arg not in ctx.injected_args}
                ):
                    injected_args.update(ctx.injected_args)
                    injected_fun = _inject(**injected_args)(seed_context.function)
                    new_context = seed_context.copy(
                        function=injected_fun,
                        injected_args=injected_args,
                        cluster_use_metadata=ctx.cluster_use_metadata,
                    )
                    new_context_list.insert(0, new_context)
        else:
            for injected_args in cartesian_product_dict(self.injected_args):
                injected_fun = _inject(**injected_args)(seed_context.function)
                new_context_list.insert(
                    0,
                    seed_context.copy(function=injected_fun, injected_args=injected_args),
                )

        return new_context_list

    def __eq__(self, other):
        return super(Defaults, self).__eq__(other) and self.injected_args == other.injected_args


class Parametrize(Mark):
    """Parametrize a test function"""

    def __init__(self, **kwargs):
        self.injected_args = kwargs

    @property
    def name(self):
        return "PARAMETRIZE"

    def apply(self, seed_context, context_list):
        injected_fun = _inject(**self.injected_args)(seed_context.function)
        context_list.insert(
            0,
            seed_context.copy(function=injected_fun, injected_args=self.injected_args),
        )
        return context_list

    def __eq__(self, other):
        return super(Parametrize, self).__eq__(other) and self.injected_args == other.injected_args


class Env(Mark):
    def __init__(self, **kwargs):
        self.injected_args = kwargs
        self.should_ignore = any(os.environ.get(key) != value for key, value in kwargs.items())

    @property
    def name(self):
        return "ENV"

    def apply(self, seed_context, context_list):
        for ctx in context_list:
            ctx.ignore = ctx.ignore or self.should_ignore

        return context_list

    def __eq__(self, other):
        return super(Env, self).__eq__(other) and self.injected_args == other.injected_args


PARAMETRIZED = Parametrize()
MATRIX = Matrix()
DEFAULTS = Defaults()
IGNORE = Ignore()
ENV = Env()


def _is_parametrize_mark(m):
    return m.name == PARAMETRIZED.name or m.name == MATRIX.name or m.name == DEFAULTS.name


def parametrized(f):
    """Is this function or object decorated with @parametrize or @matrix?"""
    return Mark.marked(f, PARAMETRIZED) or Mark.marked(f, MATRIX) or Mark.marked(f, DEFAULTS)


def ignored(f):
    """Is this function or object decorated with @ignore?"""
    return Mark.marked(f, IGNORE)


def is_env(f):
    return Mark.marked(f, ENV)


def cartesian_product_dict(d):
    """Return the "cartesian product" of this dictionary's values.
    d is assumed to be a dictionary, where each value in the dict is a list of values

    Example::

        {
            "x": [1, 2],
            "y": ["a", "b"]
        }

        expand this into a list of dictionaries like so:

        [
            {
                "x": 1,
                "y": "a"
            },
            {
                "x": 1,
                "y": "b"
            },
            {
                "x": 2,
                "y": "a"
            },
            {
                "x": 2,
                "y": "b"
            }
        ]
    """
    # Establish an ordering of the keys
    key_list = [k for k in d.keys()]

    expanded = []
    values_list = [d[k] for k in key_list]  # list of lists
    for v in itertools.product(*values_list):
        # Iterate through the cartesian product of the lists of values
        # One dictionary per element in this cartesian product
        new_dict = {}
        for i in range(len(key_list)):
            new_dict[key_list[i]] = v[i]
        expanded.append(new_dict)
    return expanded


def matrix(**kwargs):
    """Function decorator used to parametrize with a matrix of values.
    Decorating a function or method with ``@matrix`` marks it with the Matrix mark. When expanded using the
    ``MarkedFunctionExpander``, it yields a list of TestContext objects, one for every possible combination
    of arguments.

    Example::

        @matrix(x=[1, 2], y=[-1, -2])
        def g(x, y):
            print("x = %s, y = %s" % (x, y))

        for ctx in MarkedFunctionExpander(..., function=g, ...).expand():
            ctx.function()

        # output:
        # x = 1, y = -1
        # x = 1, y = -2
        # x = 2, y = -1
        # x = 2, y = -2
    """

    def parametrizer(f):
        Mark.mark(f, Matrix(**kwargs))
        return f

    return parametrizer


def defaults(**kwargs):
    """Function decorator used to parametrize with a default matrix of values.
    Decorating a function or method with ``@defaults`` marks it with the Defaults mark. When expanded using the
    ``MarkedFunctionExpander``, it yields a list of TestContext objects, one for every possible combination
    of defaults combined with ``@matrix`` and ``@parametrize``. If an argument appears in both the defaults
    and an existing parametrization, the parametrized value wins and the default for that argument is not applied.

    Example::

        @defaults(z=[1, 2])
        @matrix(x=[1], y=[1, 2])
        @parametrize(x=3, y=4)
        @parametrize(x=3, y=4, z=999)
        def g(x, y, z):
            print("x = %s, y = %s, z = %s" % (x, y, z))

        for ctx in MarkedFunctionExpander(..., function=g, ...).expand():
            ctx.function()

        # output:
        # x = 1, y = 1, z = 1
        # x = 1, y = 1, z = 2
        # x = 1, y = 2, z = 1
        # x = 1, y = 2, z = 2
        # x = 3, y = 4, z = 1
        # x = 3, y = 4, z = 2
        # x = 3, y = 4, z = 999
    """

    def parametrizer(f):
        Mark.mark(f, Defaults(**kwargs))
        return f

    return parametrizer


def parametrize(**kwargs):
    """Function decorator used to parametrize its arguments.
    Decorating a function or method with ``@parametrize`` marks it with the Parametrize mark.

    Example::

        @parametrize(x=1, y=2, z=-1)
        @parametrize(x=3, y=4, z=5)
        def g(x, y, z):
            print("x = %s, y = %s, z = %s" % (x, y, z))

        for ctx in MarkedFunctionExpander(..., function=g, ...).expand():
            ctx.function()

        # output:
        # x = 1, y = 2, z = -1
        # x = 3, y = 4, z = 5
    """

    def parametrizer(f):
        Mark.mark(f, Parametrize(**kwargs))
        return f

    return parametrizer


def ignore(*args, **kwargs):
    """
    Test method decorator which signals to the test runner to ignore a given test.

    Example::

        When no parameters are provided to the @ignore decorator, ignore all parametrizations of the test function

        @ignore  # Ignore all parametrizations
        @parametrize(x=1, y=0)
        @parametrize(x=2, y=3)
        def the_test(...):
            ...

    Example::

        If parameters are supplied to the @ignore decorator, only ignore the parametrization with matching parameter(s)

        @ignore(x=2, y=3)
        @parametrize(x=1, y=0)  # This test will run as usual
        @parametrize(x=2, y=3)  # This test will be ignored
        def the_test(...):
            ...
    """
    if len(args) == 1 and len(kwargs) == 0:
        # this corresponds to the usage of the decorator with no arguments
        # @ignore
        # def test_function():
        #   ...
        Mark.mark(args[0], IgnoreAll())
        return args[0]

    # this corresponds to usage of @ignore with arguments
    def ignorer(f):
        Mark.mark(f, Ignore(**kwargs))
        return f

    return ignorer


def env(**kwargs):
    def environment(f):
        Mark.mark(f, Env(**kwargs))
        return f

    return environment


def _inject(*args, **kwargs):
    """Inject variables into the arguments of a function or method.
    This is almost identical to decorating with functools.partial, except we also propagate the wrapped
    function's __name__.
    """

    def injector(f):
        assert callable(f)

        @functools.wraps(f)
        def wrapper(*w_args, **w_kwargs):
            return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)

        wrapper.args = args
        wrapper.kwargs = kwargs
        wrapper.function = f

        return wrapper

    return injector
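
To illustrate the two helpers above, here is a self-contained sketch (reimplementing simplified versions of them outside ducktape) showing that `cartesian_product_dict` preserves key-to-value associations and that `_inject`-style wrapping keeps the wrapped function's `__name__`, unlike a bare `functools.partial`:

```python
import functools
import itertools


def cartesian_product_dict(d):
    # One output dict per element of the cartesian product of the value lists.
    keys = list(d)
    return [dict(zip(keys, combo)) for combo in itertools.product(*(d[k] for k in keys))]


def _inject(*args, **kwargs):
    # Like functools.partial, but propagates __name__ via functools.wraps.
    def injector(f):
        @functools.wraps(f)
        def wrapper(*w_args, **w_kwargs):
            return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)

        wrapper.args = args
        wrapper.kwargs = kwargs
        wrapper.function = f
        return wrapper

    return injector


expanded = cartesian_product_dict({"x": [1, 2], "y": ["a", "b"]})
assert expanded == [
    {"x": 1, "y": "a"},
    {"x": 1, "y": "b"},
    {"x": 2, "y": "a"},
    {"x": 2, "y": "b"},
]


def g(x, y):
    return (x, y)


injected = _inject(x=1, y=2)(g)
assert injected() == (1, 2)       # injected args are applied
assert injected.__name__ == "g"   # name propagated, unlike bare partial
```

The `__name__` propagation matters because the test loader derives test identifiers from function names.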


================================================
FILE: ducktape/mark/consts.py
================================================
CLUSTER_SPEC_KEYWORD = "cluster_spec"
CLUSTER_SIZE_KEYWORD = "num_nodes"
CLUSTER_NODE_TYPE_KEYWORD = "node_type"


================================================
FILE: ducktape/mark/mark_expander.py
================================================
# Copyright 2016 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


from ducktape.tests.test_context import TestContext

from ._mark import Parametrize, _is_parametrize_mark, parametrized


class MarkedFunctionExpander(object):
    """This class helps expand decorated/marked functions into a list of test context objects."""

    def __init__(
        self,
        session_context=None,
        module=None,
        cls=None,
        function=None,
        file=None,
        cluster=None,
    ):
        self.seed_context = TestContext(
            session_context=session_context,
            module=module,
            cls=cls,
            function=function,
            file=file,
            cluster=cluster,
        )

        if parametrized(function):
            self.context_list = []
        else:
            self.context_list = [self.seed_context]

    def expand(self, test_parameters=None):
        """Inspect self.function for marks, and expand into a list of test context objects usable by the test runner."""
        f = self.seed_context.function

        # If the user has specified that they want to run tests with specific parameters, apply the parameters first,
        # then subsequently strip any parametrization decorators. Otherwise, everything gets applied normally.
        if test_parameters is not None:
            self.context_list = Parametrize(**test_parameters).apply(self.seed_context, self.context_list)

        for m in getattr(f, "marks", []):
            if test_parameters is None or not _is_parametrize_mark(m):
                self.context_list = m.apply(self.seed_context, self.context_list)

        return self.context_list
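
The front-insertion in `Parametrize.apply` interacts with decorator application order. A minimal standalone sketch (using a hypothetical `Ctx` class, not ducktape's real `TestContext`) of why stacked `@parametrize` marks expand in top-down source order:

```python
class Ctx:
    # Hypothetical stand-in for ducktape's TestContext.
    def __init__(self, injected_args):
        self.injected_args = injected_args


def apply_parametrize(injected_args, context_list):
    # Mirrors Parametrize.apply: insert the new context at the front.
    context_list.insert(0, Ctx(injected_args))
    return context_list


# Decorators apply bottom-up, so marks are recorded bottom-first...
marks = [{"x": 3, "y": 4}, {"x": 1, "y": 2}]  # bottom decorator's mark first
contexts = []
for m in marks:
    contexts = apply_parametrize(m, contexts)

# ...and the insert(0) reverses them back into source (top-down) order.
assert [c.injected_args for c in contexts] == [{"x": 1, "y": 2}, {"x": 3, "y": 4}]
```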


================================================
FILE: ducktape/mark/resource.py
================================================
# Copyright 2016 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import copy
from typing import Callable, List

from ducktape.mark._mark import Mark
from ducktape.tests.test_context import TestContext


class ClusterUseMetadata(Mark):
    """Provide a hint about how a given test will use the cluster."""

    def __init__(self, **kwargs) -> None:
        # shallow copy
        self.metadata = copy.copy(kwargs)

    @property
    def name(self) -> str:
        return "RESOURCE_HINT_CLUSTER_USE"

    def apply(self, seed_context: TestContext, context_list: List[TestContext]) -> List[TestContext]:
        assert len(context_list) > 0, "cluster use annotation is not being applied to any test cases"

        for ctx in context_list:
            if not ctx.cluster_use_metadata:
                # only update if non-None and non-empty
                ctx.cluster_use_metadata = self.metadata
        return context_list


def cluster(**kwargs) -> Callable:
    """Test method decorator used to provide hints about how the test will use the given cluster.

    If this decorator is not provided, the test will either claim all cluster resources or fail immediately,
    depending on the flags passed to ducktape.


    :Keywords used by ducktape:

        - ``num_nodes`` provide hint about how many nodes the test will consume
        - ``node_type`` provide hint about what type of nodes the test needs (e.g., "large", "small")
        - ``cluster_spec`` provide hint about how many nodes of each type the test will consume


    Example::

        # basic usage with num_nodes
        @cluster(num_nodes=10)
        def the_test(...):
            ...

        # usage with num_nodes and node_type
        @cluster(num_nodes=5, node_type="large")
        def the_test(...):
            ...

        # basic usage with cluster_spec
        @cluster(cluster_spec=ClusterSpec.simple_linux(10))
        def the_test(...):
            ...

        # parametrized test:
        # both test cases will be marked with cluster_size of 200
        @cluster(num_nodes=200)
        @parametrize(x=1)
        @parametrize(x=2)
        def the_test(x):
            ...

        # test case {'x': 1} has cluster size 100, test case {'x': 2} has cluster size 200
        @cluster(num_nodes=100)
        @parametrize(x=1)
        @cluster(num_nodes=200)
        @parametrize(x=2)
        def the_test(x):
            ...

    """

    def cluster_use_metadata_adder(f):
        Mark.mark(f, ClusterUseMetadata(**kwargs))
        return f

    return cluster_use_metadata_adder
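
The per-parametrization behavior in the docstring above follows from `ClusterUseMetadata.apply` only filling in contexts whose metadata is still empty. A hypothetical stand-in (simple `Ctx` objects instead of real test contexts) sketching that logic:

```python
class Ctx:
    # Hypothetical stand-in for a test context carrying cluster metadata.
    def __init__(self):
        self.cluster_use_metadata = {}


def apply_cluster(metadata, context_list):
    # Mirrors ClusterUseMetadata.apply: only set metadata where none exists yet.
    for ctx in context_list:
        if not ctx.cluster_use_metadata:
            ctx.cluster_use_metadata = metadata
    return context_list


a, b = Ctx(), Ctx()
a.cluster_use_metadata = {"num_nodes": 100}   # already set by an inner @cluster
apply_cluster({"num_nodes": 200}, [a, b])
assert a.cluster_use_metadata == {"num_nodes": 100}  # inner hint is preserved
assert b.cluster_use_metadata == {"num_nodes": 200}  # outer hint fills the gap
```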


================================================
FILE: ducktape/services/__init__.py
================================================


================================================
FILE: ducktape/services/background_thread.py
================================================
# Copyright 2015 Confluent Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
SYMBOL INDEX (1130 symbols across 96 files)

FILE: docs/conf.py
  function skip (line 167) | def skip(app, what, name, obj, skip, options):
  function setup (line 173) | def setup(app):

FILE: ducktape/cluster/cluster.py
  class Cluster (line 22) | class Cluster(object):
    method __init__ (line 28) | def __init__(self):
    method __len__ (line 31) | def __len__(self) -> int:
    method alloc (line 35) | def alloc(self, cluster_spec) -> Union[ClusterNode, List[ClusterNode],...
    method do_alloc (line 47) | def do_alloc(self, cluster_spec) -> Union[ClusterNode, List[ClusterNod...
    method free (line 57) | def free(self, nodes: Union[Iterable[ClusterNode], ClusterNode]) -> None:
    method free_single (line 65) | def free_single(self, node: ClusterNode) -> None:
    method __eq__ (line 68) | def __eq__(self, other):
    method __hash__ (line 71) | def __hash__(self):
    method num_available_nodes (line 74) | def num_available_nodes(self) -> int:
    method available (line 77) | def available(self) -> ClusterSpec:
    method used (line 83) | def used(self) -> ClusterSpec:
    method max_used (line 89) | def max_used(self) -> int:
    method all (line 92) | def all(self):

FILE: ducktape/cluster/cluster_node.py
  class ClusterNode (line 6) | class ClusterNode(object):
    method __init__ (line 7) | def __init__(self, account: RemoteAccount, **kwargs):
    method name (line 13) | def name(self) -> Optional[str]:
    method operating_system (line 17) | def operating_system(self) -> Optional[str]:

FILE: ducktape/cluster/cluster_spec.py
  class ClusterSpec (line 26) | class ClusterSpec(object):
    method empty (line 34) | def empty():
    method simple_linux (line 38) | def simple_linux(num_nodes, node_type=None):
    method from_nodes (line 53) | def from_nodes(nodes):
    method __init__ (line 59) | def __init__(self, nodes=None):
    method __len__ (line 67) | def __len__(self):
    method __iter__ (line 70) | def __iter__(self):
    method size (line 73) | def size(self):
    method add (line 77) | def add(self, other):
    method clone (line 88) | def clone(self):
    method __str__ (line 94) | def __str__(self):

FILE: ducktape/cluster/finite_subcluster.py
  class FiniteSubcluster (line 21) | class FiniteSubcluster(Cluster):
    method __init__ (line 24) | def __init__(self, nodes: typing.Iterable[ClusterNode]):
    method do_alloc (line 30) | def do_alloc(self, cluster_spec) -> typing.List[ClusterNode]:
    method free_single (line 41) | def free_single(self, node):
    method available (line 45) | def available(self):
    method used (line 48) | def used(self):

FILE: ducktape/cluster/json.py
  function make_remote_account (line 33) | def make_remote_account(ssh_config, *args, **kwargs):
  class JsonCluster (line 42) | class JsonCluster(Cluster):
    method __init__ (line 45) | def __init__(
    method do_alloc (line 133) | def do_alloc(self, cluster_spec):
    method free_single (line 153) | def free_single(self, node):
    method _externally_routable_ip (line 158) | def _externally_routable_ip(self, account):
    method available (line 161) | def available(self):
    method used (line 164) | def used(self):

FILE: ducktape/cluster/linux_remoteaccount.py
  class LinuxRemoteAccount (line 23) | class LinuxRemoteAccount(RemoteAccount):
    method __init__ (line 24) | def __init__(self, *args, **kwargs):
    method local (line 31) | def local(self):
    method get_network_devices (line 36) | def get_network_devices(self):
    method get_external_accessible_network_devices (line 42) | def get_external_accessible_network_devices(self):
    method fetch_externally_routable_ip (line 56) | def fetch_externally_routable_ip(self, is_aws=None):

FILE: ducktape/cluster/localhost.py
  class LocalhostCluster (line 24) | class LocalhostCluster(Cluster):
    method __init__ (line 31) | def __init__(self, *args, **kwargs):
    method do_alloc (line 47) | def do_alloc(self, cluster_spec):
    method free_single (line 54) | def free_single(self, node):
    method available (line 59) | def available(self):
    method used (line 62) | def used(self):

FILE: ducktape/cluster/node_container.py
  class NodeNotPresentError (line 39) | class NodeNotPresentError(Exception):
  class InsufficientResourcesError (line 43) | class InsufficientResourcesError(Exception):
  class InsufficientHealthyNodesError (line 47) | class InsufficientHealthyNodesError(InsufficientResourcesError):
    method __init__ (line 48) | def __init__(self, bad_nodes: List, *args):
  function _get_node_key (line 53) | def _get_node_key(node: NodeType) -> NodeGroupKey:
  class NodeContainer (line 60) | class NodeContainer(object):
    method __init__ (line 72) | def __init__(self, nodes: Optional[Iterable[NodeType]] = None) -> None:
    method size (line 86) | def size(self) -> int:
    method __len__ (line 92) | def __len__(self):
    method __iter__ (line 95) | def __iter__(self) -> Iterator[Any]:
    method elements (line 98) | def elements(self, operating_system: Optional[str] = None, node_type: ...
    method grouped_by_os_and_type (line 117) | def grouped_by_os_and_type(self) -> Dict[Tuple[Optional[str], Optional...
    method add_node (line 132) | def add_node(self, node: Union[ClusterNode, RemoteAccount]) -> None:
    method add_nodes (line 141) | def add_nodes(self, nodes):
    method remove_node (line 150) | def remove_node(self, node):
    method remove_nodes (line 164) | def remove_nodes(self, nodes):
    method _find_matching_nodes (line 173) | def _find_matching_nodes(
    method remove_spec (line 221) | def remove_spec(self, cluster_spec: ClusterSpec) -> Tuple[List[NodeTyp...
    method can_remove_spec (line 275) | def can_remove_spec(self, cluster_spec: ClusterSpec) -> bool:
    method _count_nodes_by_os (line 286) | def _count_nodes_by_os(self, target_os: str) -> int:
    method _count_nodes_by_os_and_type (line 299) | def _count_nodes_by_os_and_type(self, target_os: str, target_type: str...
    method attempt_remove_spec (line 309) | def attempt_remove_spec(self, cluster_spec: ClusterSpec) -> str:
    method clone (line 375) | def clone(self) -> "NodeContainer":

FILE: ducktape/cluster/node_spec.py
  class NodeSpec (line 23) | class NodeSpec(object):
    method __init__ (line 35) | def __init__(self, operating_system: str = LINUX, node_type: Optional[...
    method matches (line 41) | def matches(self, available_node_spec: "NodeSpec") -> bool:
    method __str__ (line 59) | def __str__(self):
    method __eq__ (line 65) | def __eq__(self, other):
    method __hash__ (line 70) | def __hash__(self):

FILE: ducktape/cluster/remoteaccount.py
  function check_ssh (line 34) | def check_ssh(method: Callable) -> Callable:
  class RemoteAccountSSHConfig (line 53) | class RemoteAccountSSHConfig(object):
    method __init__ (line 54) | def __init__(
    method from_string (line 81) | def from_string(config_str):
    method to_json (line 106) | def to_json(self):
    method __repr__ (line 109) | def __repr__(self):
    method __eq__ (line 112) | def __eq__(self, other):
    method __hash__ (line 115) | def __hash__(self):
  class RemoteAccountError (line 119) | class RemoteAccountError(DucktapeError):
    method __init__ (line 122) | def __init__(self, account, msg):
    method __str__ (line 126) | def __str__(self):
  class RemoteCommandError (line 130) | class RemoteCommandError(RemoteAccountError):
    method __init__ (line 133) | def __init__(self, account, cmd, exit_status, msg):
    method __str__ (line 139) | def __str__(self):
  class RemoteAccount (line 150) | class RemoteAccount(HttpMixin):
    method __init__ (line 162) | def __init__(
    method operating_system (line 192) | def operating_system(self) -> Optional[str]:
    method logger (line 196) | def logger(self):
    method logger (line 203) | def logger(self, logger):
    method _log (line 206) | def _log(self, level, msg, *args, **kwargs):
    method _set_ssh_client (line 211) | def _set_ssh_client(self):
    method ssh_client (line 233) | def ssh_client(self):
    method _set_sftp_client (line 249) | def _set_sftp_client(self):
    method sftp_client (line 255) | def sftp_client(self):
    method close (line 263) | def close(self) -> None:
    method __str__ (line 273) | def __str__(self):
    method __repr__ (line 280) | def __repr__(self):
    method __eq__ (line 283) | def __eq__(self, other):
    method __hash__ (line 286) | def __hash__(self):
    method wait_for_http_service (line 289) | def wait_for_http_service(self, port, headers, timeout=20, path="/"):
    method _can_ping_url (line 304) | def _can_ping_url(self, url, headers):
    method available (line 312) | def available(self):
    method ssh (line 325) | def ssh(self, cmd, allow_fail=False):
    method ssh_capture (line 362) | def ssh_capture(
    method ssh_output (line 424) | def ssh_output(self, cmd, allow_fail=False, combine_stderr=True, timeo...
    method alive (line 469) | def alive(self, pid):
    method signal (line 477) | def signal(self, pid, sig, allow_fail=False):
    method kill_process (line 481) | def kill_process(self, process_grep_str, clean_shutdown=True, allow_fa...
    method java_pids (line 493) | def java_pids(self, match):
    method kill_java_processes (line 502) | def kill_java_processes(self, match, clean_shutdown=True, allow_fail=F...
    method copy_between (line 522) | def copy_between(self, src, dest, dest_node):
    method scp_from (line 549) | def scp_from(self, src, dest, recursive=False):
    method _re_anchor_basename (line 553) | def _re_anchor_basename(self, path, directory):
    method copy_from (line 579) | def copy_from(self, src, dest):
    method scp_to (line 601) | def scp_to(self, src, dest, recursive=False):
    method copy_to (line 606) | def copy_to(self, src, dest):
    method islink (line 630) | def islink(self, path):
    method isdir (line 639) | def isdir(self, path):
    method exists (line 648) | def exists(self, path):
    method isfile (line 658) | def isfile(self, path):
    method open (line 671) | def open(self, path, mode="r"):
    method create_file (line 675) | def create_file(self, path, contents):
    method mkdir (line 689) | def mkdir(self, path, mode=_DEFAULT_PERMISSIONS):
    method mkdirs (line 692) | def mkdirs(self, path, mode=_DEFAULT_PERMISSIONS):
    method remove (line 695) | def remove(self, path, allow_fail=False):
    method monitor_log (line 706) | def monitor_log(self, log):
  class SSHOutputIter (line 724) | class SSHOutputIter(object):
    method __init__ (line 727) | def __init__(self, iter_obj_func, channel_file=None):
    method __iter__ (line 741) | def __iter__(self):
    method next (line 744) | def next(self):
    method has_next (line 753) | def has_next(self, timeout_sec=None):
  class LogMonitor (line 780) | class LogMonitor(object):
    method __init__ (line 791) | def __init__(self, acct, log, offset):
    method wait_until (line 796) | def wait_until(self, pattern, **kwargs):
  class IgnoreMissingHostKeyPolicy (line 812) | class IgnoreMissingHostKeyPolicy(MissingHostKeyPolicy):
    method missing_host_key (line 817) | def missing_host_key(self, client, hostname, key):

FILE: ducktape/cluster/vagrant.py
  class VagrantCluster (line 27) | class VagrantCluster(JsonCluster):
    method __init__ (line 38) | def __init__(self, *args, make_remote_account_func=make_remote_account...
    method _get_nodes_from_vagrant (line 84) | def _get_nodes_from_vagrant(self, make_remote_account_func):
    method _vagrant_ssh_config (line 112) | def _vagrant_ssh_config(self):

FILE: ducktape/cluster/windows_remoteaccount.py
  class WindowsRemoteAccount (line 29) | class WindowsRemoteAccount(RemoteAccount):
    method __init__ (line 39) | def __init__(self, *args, **kwargs):
    method winrm_client (line 45) | def winrm_client(self):
    method fetch_externally_routable_ip (line 103) | def fetch_externally_routable_ip(self, is_aws=None):
    method run_winrm_command (line 107) | def run_winrm_command(self, cmd, allow_fail=False):

FILE: ducktape/command_line/defaults.py
  class ConsoleDefaults (line 18) | class ConsoleDefaults(object):

FILE: ducktape/command_line/main.py
  function get_user_defined_globals (line 48) | def get_user_defined_globals(globals_str):
  function setup_results_directory (line 93) | def setup_results_directory(new_results_dir):
  function update_latest_symlink (line 100) | def update_latest_symlink(results_root, new_results_dir):
  function main (line 108) | def main():

FILE: ducktape/command_line/parse_args.py
  function create_ducktape_parser (line 27) | def create_ducktape_parser() -> argparse.ArgumentParser:
  function get_user_config_file (line 218) | def get_user_config_file(args: List[str]) -> str:
  function config_file_to_args_list (line 228) | def config_file_to_args_list(config_file):
  function parse_non_default_args (line 244) | def parse_non_default_args(parser: argparse.ArgumentParser, defaults: di...
  function parse_args (line 258) | def parse_args(

FILE: ducktape/errors.py
  class DucktapeError (line 16) | class DucktapeError(RuntimeError):
  class TimeoutError (line 20) | class TimeoutError(DucktapeError):

FILE: ducktape/json_serializable.py
  class DucktapeJSONEncoder (line 19) | class DucktapeJSONEncoder(JSONEncoder):
    method default (line 20) | def default(self, obj):

FILE: ducktape/jvm_logging.py
  class JVMLogger (line 31) | class JVMLogger:
    method __init__ (line 34) | def __init__(self, log_dir="/mnt/jvm_logs"):
    method enable_for_service (line 42) | def enable_for_service(self, service):
    method _get_jvm_options (line 142) | def _get_jvm_options(self):
    method _setup_on_node (line 163) | def _setup_on_node(self, node):
    method _cleanup_on_node (line 168) | def _cleanup_on_node(self, node):

FILE: ducktape/mark/_mark.py
  class Mark (line 23) | class Mark(object):
    method mark (line 27) | def mark(fun, mark):
    method marked (line 49) | def marked(f, mark):
    method clear_marks (line 59) | def clear_marks(f):
    method name (line 67) | def name(self):
    method apply (line 70) | def apply(self, seed_context, context_list):
    method __eq__ (line 73) | def __eq__(self, other):
  class Ignore (line 80) | class Ignore(Mark):
    method __init__ (line 83) | def __init__(self, **kwargs):
    method name (line 88) | def name(self):
    method apply (line 91) | def apply(self, seed_context, context_list):
    method __eq__ (line 97) | def __eq__(self, other):
  class IgnoreAll (line 101) | class IgnoreAll(Ignore):
    method __init__ (line 104) | def __init__(self):
  class Matrix (line 109) | class Matrix(Mark):
    method __init__ (line 114) | def __init__(self, **kwargs):
    method name (line 123) | def name(self):
    method apply (line 126) | def apply(self, seed_context, context_list):
    method __eq__ (line 133) | def __eq__(self, other):
  class Defaults (line 137) | class Defaults(Mark):
    method __init__ (line 142) | def __init__(self, **kwargs):
    method name (line 151) | def name(self):
    method apply (line 154) | def apply(self, seed_context, context_list):
    method __eq__ (line 179) | def __eq__(self, other):
  class Parametrize (line 183) | class Parametrize(Mark):
    method __init__ (line 186) | def __init__(self, **kwargs):
    method name (line 190) | def name(self):
    method apply (line 193) | def apply(self, seed_context, context_list):
    method __eq__ (line 201) | def __eq__(self, other):
  class Env (line 205) | class Env(Mark):
    method __init__ (line 206) | def __init__(self, **kwargs):
    method name (line 211) | def name(self):
    method apply (line 214) | def apply(self, seed_context, context_list):
    method __eq__ (line 220) | def __eq__(self, other):
  function _is_parametrize_mark (line 231) | def _is_parametrize_mark(m):
  function parametrized (line 235) | def parametrized(f):
  function ignored (line 240) | def ignored(f):
  function is_env (line 245) | def is_env(f):
  function cartesian_product_dict (line 249) | def cartesian_product_dict(d):
  function matrix (line 296) | def matrix(**kwargs):
  function defaults (line 325) | def defaults(**kwargs):
  function parametrize (line 361) | def parametrize(**kwargs):
  function ignore (line 387) | def ignore(*args, **kwargs):
  function env (line 427) | def env(**kwargs):
  function _inject (line 435) | def _inject(*args, **kwargs):
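
The `matrix` mark listed above expands keyword arguments into one test case per combination of values, with `cartesian_product_dict` doing the expansion. A minimal stdlib sketch of what that expansion likely computes (the real implementation in `ducktape/mark/_mark.py` may differ in detail):

```python
import itertools


def cartesian_product_dict(d):
    """Yield one dict per combination of the list-valued entries in d.

    Sketch of the expansion behind the @matrix mark: each key maps to a
    list of candidate values, and every combination becomes one set of
    injected test parameters.
    """
    keys = list(d)
    for values in itertools.product(*(d[k] for k in keys)):
        yield dict(zip(keys, values))


# Example: two axes of two values each produce 2 * 2 = 4 parameter dicts.
cases = list(cartesian_product_dict({"x": [1, 2], "y": ["a", "b"]}))
```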

FILE: ducktape/mark/mark_expander.py
  class MarkedFunctionExpander (line 21) | class MarkedFunctionExpander(object):
    method __init__ (line 24) | def __init__(
    method expand (line 47) | def expand(self, test_parameters=None):

FILE: ducktape/mark/resource.py
  class ClusterUseMetadata (line 22) | class ClusterUseMetadata(Mark):
    method __init__ (line 25) | def __init__(self, **kwargs) -> None:
    method name (line 30) | def name(self) -> str:
    method apply (line 33) | def apply(self, seed_context: TestContext, context_list: List[TestCont...
  function cluster (line 43) | def cluster(**kwargs) -> Callable:

FILE: ducktape/services/background_thread.py
  class BackgroundThreadService (line 21) | class BackgroundThreadService(Service):
    method __init__ (line 22) | def __init__(self, context, num_nodes=None, cluster_spec=None, *args, ...
    method _protected_worker (line 29) | def _protected_worker(self, idx, node):
    method start_node (line 48) | def start_node(self, node):
    method wait (line 64) | def wait(self, timeout_sec=600):
    method stop (line 73) | def stop(self):
    method wait_node (line 84) | def wait_node(self, node, timeout_sec=600):
    method _propagate_exceptions (line 95) | def _propagate_exceptions(self):

FILE: ducktape/services/service.py
  class ServiceIdFactory (line 27) | class ServiceIdFactory:
    method generate_service_id (line 28) | def generate_service_id(self, service):
  class MultiRunServiceIdFactory (line 36) | class MultiRunServiceIdFactory:
    method __init__ (line 37) | def __init__(self, run_number=1):
    method generate_service_id (line 40) | def generate_service_id(self, service):
  class Service (line 52) | class Service(TemplateRenderer):
    method __init__ (line 80) | def __init__(self, context, num_nodes=None, cluster_spec=None, *args, ...
    method setup_cluster_spec (line 126) | def setup_cluster_spec(num_nodes=None, cluster_spec=None):
    method __repr__ (line 137) | def __repr__(self):
    method num_nodes (line 144) | def num_nodes(self):
    method local_scratch_dir (line 148) | def local_scratch_dir(self):
    method service_id (line 155) | def service_id(self):
    method _order (line 160) | def _order(self):
    method logger (line 191) | def logger(self):
    method cluster (line 196) | def cluster(self):
    method allocated (line 201) | def allocated(self):
    method who_am_i (line 205) | def who_am_i(self, node=None):
    method allocate_nodes (line 216) | def allocate_nodes(self):
    method start (line 245) | def start(self, **kwargs):
    method start_node (line 278) | def start_node(self, node, **kwargs):
    method wait (line 282) | def wait(self, timeout_sec=600):
    method wait_node (line 307) | def wait_node(self, node, timeout_sec=None):
    method stop (line 313) | def stop(self, **kwargs):
    method stop_node (line 325) | def stop_node(self, node, **kwargs):
    method clean (line 329) | def clean(self, **kwargs):
    method clean_node (line 339) | def clean_node(self, node, **kwargs):
    method free (line 346) | def free(self):
    method run (line 354) | def run(self):
    method get_node (line 360) | def get_node(self, idx):
    method idx (line 364) | def idx(self, node):
    method close (line 374) | def close(self):
    method run_parallel (line 381) | def run_parallel(*args):
    method to_json (line 393) | def to_json(self):

FILE: ducktape/services/service_registry.py
  class ServiceRegistry (line 21) | class ServiceRegistry(object):
    method __init__ (line 22) | def __init__(self, enable_jvm_logs=False) -> None:
    method __contains__ (line 32) | def __contains__(self, item):
    method __iter__ (line 35) | def __iter__(self):
    method __repr__ (line 38) | def __repr__(self):
    method append (line 41) | def append(self, service):
    method to_json (line 49) | def to_json(self):
    method stop_all (line 52) | def stop_all(self):
    method clean_all (line 69) | def clean_all(self):
    method free_all (line 83) | def free_all(self):
    method errors (line 99) | def errors(self):

FILE: ducktape/template.py
  class TemplateRenderer (line 23) | class TemplateRenderer(object):
    method _get_ctx (line 24) | def _get_ctx(self):
    method render_template (line 29) | def render_template(self, template, **kwargs):
    method _package_search_path (line 43) | def _package_search_path(module_name):
    method render (line 58) | def render(self, path, **kwargs):

FILE: ducktape/tests/event.py
  class ClientEventFactory (line 20) | class ClientEventFactory(object):
    method __init__ (line 33) | def __init__(self, test_id, test_index, source_id):
    method _event (line 40) | def _event(self, event_type, payload=None):
    method copy (line 68) | def copy(self, event):
    method running (line 76) | def running(self):
    method ready (line 82) | def ready(self):
    method setting_up (line 88) | def setting_up(self):
    method finished (line 91) | def finished(self, result):
    method log (line 94) | def log(self, message, level):
  class EventResponseFactory (line 101) | class EventResponseFactory(object):
    method _event_response (line 104) | def _event_response(self, client_event, payload=None):
    method running (line 122) | def running(self, client_event):
    method ready (line 125) | def ready(self, client_event, session_context, test_context, cluster):
    method setting_up (line 134) | def setting_up(self, client_event):
    method finished (line 137) | def finished(self, client_event):
    method log (line 140) | def log(self, client_event):

FILE: ducktape/tests/loader.py
  class LoaderException (line 37) | class LoaderException(Exception):
  class TestLoader (line 52) | class TestLoader(object):
    method __init__ (line 55) | def __init__(
    method load (line 88) | def load(self, symbols: List[str], excluded_test_symbols: None = None)...
    method discover (line 200) | def discover(
    method _parse_discovery_symbol (line 255) | def _parse_discovery_symbol(self, discovery_symbol: str, base_dir: Non...
    method _import_module (line 299) | def _import_module(self, file_path: str) -> Optional[ModuleAndFile]:
    method _expand_module (line 379) | def _expand_module(self, module_and_file: ModuleAndFile) -> List[TestC...
    method _expand_class (line 403) | def _expand_class(self, t_ctx: TestContext) -> List[TestContext]:
    method _expand_function (line 417) | def _expand_function(self, t_ctx: TestContext) -> List[TestContext]:
    method _find_test_files (line 428) | def _find_test_files(self, path_or_glob):
    method _is_test_file (line 470) | def _is_test_file(self, file_name):
    method _is_test_class (line 474) | def _is_test_class(self, obj: Any) -> bool:
    method _is_test_function (line 478) | def _is_test_function(self, function: Any) -> bool:
    method _load_test_suite_files (line 488) | def _load_test_suite_files(self, test_suite_files: List[Any]) -> Set[A...
    method _load_file (line 498) | def _load_file(self, suite_file_path):
    method _load_suites (line 520) | def _load_suites(self, file_path, file_content):
    method _read_test_suite_from_file (line 546) | def _read_test_suite_from_file(self, root_suite_file_paths: List[Any])...
    method _load_test_suite (line 574) | def _load_test_suite(self, **kwargs):
    method _load_test_contexts (line 592) | def _load_test_contexts(
    method _filter_by_unique_test_id (line 647) | def _filter_by_unique_test_id(self, contexts: Iterable[TestContext]) -...
    method _filter_excluded_test_contexts (line 654) | def _filter_excluded_test_contexts(
    method _add_top_level_dirs_to_sys_path (line 660) | def _add_top_level_dirs_to_sys_path(self, test_files: List[str]) -> None:

FILE: ducktape/tests/loggermaker.py
  class LoggerMaker (line 18) | class LoggerMaker(object):
    method __init__ (line 21) | def __init__(self, logger_name: str) -> None:
    method logger (line 25) | def logger(self) -> logging.Logger:
    method configured (line 36) | def configured(self) -> bool:
    method configure_logger (line 44) | def configure_logger(self):
  function close_logger (line 48) | def close_logger(logger):

FILE: ducktape/tests/reporter.py
  function format_time (line 34) | def format_time(t):
  class SingleResultReporter (line 48) | class SingleResultReporter(object):
    method __init__ (line 51) | def __init__(self, result):
    method result_string (line 55) | def result_string(self):
    method report_string (line 73) | def report_string(self):
  class SingleResultFileReporter (line 78) | class SingleResultFileReporter(SingleResultReporter):
    method report (line 79) | def report(self):
  class SummaryReporter (line 92) | class SummaryReporter(object):
    method __init__ (line 93) | def __init__(self, results):
    method report (line 97) | def report(self):
  class SimpleSummaryReporter (line 101) | class SimpleSummaryReporter(SummaryReporter):
    method header_string (line 102) | def header_string(self):
    method report_string (line 120) | def report_string(self):
  class SimpleFileSummaryReporter (line 131) | class SimpleFileSummaryReporter(SimpleSummaryReporter):
    method report (line 132) | def report(self):
  class SimpleStdoutSummaryReporter (line 139) | class SimpleStdoutSummaryReporter(SimpleSummaryReporter):
    method report (line 140) | def report(self):
  class JSONReporter (line 144) | class JSONReporter(object):
    method __init__ (line 145) | def __init__(self, results):
    method report (line 148) | def report(self):
  class JUnitReporter (line 162) | class JUnitReporter(object):
    method __init__ (line 163) | def __init__(self, results):
    method report (line 166) | def report(self):
  class HTMLSummaryReporter (line 250) | class HTMLSummaryReporter(SummaryReporter):
    method __init__ (line 251) | def __init__(self, results, expected_test_count):
    method format_test_name (line 255) | def format_test_name(self, result):
    method format_result (line 276) | def format_result(self, result):
    method test_results_dir (line 292) | def test_results_dir(self, result):
    method format_report (line 304) | def format_report(self):
    method report (line 377) | def report(self):
  class FailedTestSymbolReporter (line 381) | class FailedTestSymbolReporter(SummaryReporter):
    method __init__ (line 382) | def __init__(self, results):
    method to_symbol (line 387) | def to_symbol(self, result):
    method dump_test_suite (line 395) | def dump_test_suite(self, lines):
    method print_test_symbols_string (line 404) | def print_test_symbols_string(self, lines):
    method report (line 411) | def report(self):

FILE: ducktape/tests/result.py
  class TestResult (line 31) | class TestResult(object):
    method __init__ (line 34) | def __init__(
    method __repr__ (line 90) | def __repr__(self):
    method run_time_seconds (line 98) | def run_time_seconds(self):
    method report (line 106) | def report(self):
    method dump_json (line 114) | def dump_json(self):
    method to_json (line 119) | def to_json(self):
  class TestResults (line 148) | class TestResults(object):
    method __init__ (line 151) | def __init__(self, session_context: SessionContext, cluster: VagrantCl...
    method append (line 164) | def append(self, obj: TestResult):
    method __len__ (line 167) | def __len__(self):
    method __iter__ (line 170) | def __iter__(self):
    method num_passed (line 174) | def num_passed(self):
    method num_failed (line 178) | def num_failed(self):
    method num_ignored (line 182) | def num_ignored(self):
    method num_flaky (line 186) | def num_flaky(self):
    method run_time_seconds (line 190) | def run_time_seconds(self):
    method get_aggregate_success (line 198) | def get_aggregate_success(self):
    method _stats (line 207) | def _stats(self, num_list):
    method to_json (line 217) | def to_json(self):

FILE: ducktape/tests/runner.py
  class Receiver (line 54) | class Receiver(object):
    method __init__ (line 55) | def __init__(self, min_port: int, max_port: int) -> None:
    method start (line 69) | def start(self):
    method recv (line 79) | def recv(self, timeout=1800000):
    method send (line 90) | def send(self, event):
    method close (line 93) | def close(self):
  class TestRunner (line 101) | class TestRunner(object):
    method __init__ (line 105) | def __init__(
    method _terminate_process (line 154) | def _terminate_process(self, process: multiprocessing.Process):
    method _join_test_process (line 163) | def _join_test_process(self, process_key, timeout: int = DEFAULT_MP_JO...
    method _propagate_sigterm (line 186) | def _propagate_sigterm(self, signum, frame):
    method who_am_i (line 206) | def who_am_i(self):
    method _ready_to_trigger_more_tests (line 211) | def _ready_to_trigger_more_tests(self):
    method _expect_client_requests (line 218) | def _expect_client_requests(self):
    method _report_unschedulable (line 221) | def _report_unschedulable(self, unschedulable, err_msg=None):
    method _check_unschedulable (line 254) | def _check_unschedulable(self):
    method _report_remaining_as_failed (line 257) | def _report_remaining_as_failed(self, reason):
    method _report_active_as_failed (line 285) | def _report_active_as_failed(self, reason):
    method run_all_tests (line 323) | def run_all_tests(self):
    method _run_single_test (line 424) | def _run_single_test(self, test_context):
    method _preallocate_subcluster (line 460) | def _preallocate_subcluster(self, test_context):
    method _handle (line 478) | def _handle(self, event):
    method _handle_ready (line 496) | def _handle_ready(self, event):
    method _handle_log (line 503) | def _handle_log(self, event):
    method _handle_finished (line 507) | def _handle_finished(self, event):
    method _should_print_separator (line 558) | def _should_print_separator(self):
    method _handle_lifecycle (line 571) | def _handle_lifecycle(self, event):
    method _log (line 574) | def _log(self, log_level, msg, *args, **kwargs):

FILE: ducktape/tests/runner_client.py
  function run_client (line 39) | def run_client(*args, **kwargs):
  class Sender (line 45) | class Sender(object):
    method __init__ (line 58) | def __init__(
    method _init_socket (line 76) | def _init_socket(self):
    method send (line 81) | def send(self, event, blocking=True):
    method close (line 113) | def close(self):
  class RunnerClient (line 119) | class RunnerClient(object):
    method __init__ (line 140) | def __init__(
    method deflake_enabled (line 174) | def deflake_enabled(self) -> bool:
    method ready (line 177) | def ready(self):
    method send (line 183) | def send(self, event):
    method _kill_all_child_processes (line 190) | def _kill_all_child_processes(self, send_signal=signal.SIGTERM):
    method _sigterm_handler (line 198) | def _sigterm_handler(self, signum, frame):
    method _collect_test_context (line 208) | def _collect_test_context(self, directory, file_name, cls_name, method...
    method run (line 225) | def run(self):
    method process_run_summaries (line 314) | def process_run_summaries(self, run_summaries: List[List[str]], test_s...
    method _do_run (line 359) | def _do_run(self, num_runs):
    method _check_cluster_utilization (line 402) | def _check_cluster_utilization(self, result, summary):
    method setup_test (line 423) | def setup_test(self):
    method run_test (line 428) | def run_test(self):
    method _exc_msg (line 437) | def _exc_msg(self, e):
    method _do_safely (line 440) | def _do_safely(self, action, err_msg):
    method teardown_test (line 446) | def teardown_test(self, teardown_services=True, test_status=None):
    method log (line 478) | def log(self, log_level, msg, *args, **kwargs):
    method dump_threads (line 494) | def dump_threads(self, msg):

FILE: ducktape/tests/scheduler.py
  class TestScheduler (line 22) | class TestScheduler(object):
    method __init__ (line 29) | def __init__(self, test_contexts: List[TestContext], cluster: VagrantC...
    method __len__ (line 38) | def __len__(self) -> int:
    method __iter__ (line 42) | def __iter__(self):
    method filter_unschedulable_tests (line 45) | def filter_unschedulable_tests(self):
    method _sort_test_context_list (line 59) | def _sort_test_context_list(self) -> None:
    method peek (line 67) | def peek(self):
    method remove (line 79) | def remove(self, tc):
    method drain_remaining_tests (line 87) | def drain_remaining_tests(self):
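
The `peek`/`remove` API above suggests a scheduler that hands back the next test whose resource demand fits the currently free cluster capacity. The sketch below uses plain node counts in place of the real `ClusterSpec` matching, and sorting by largest demand first is an assumption, not confirmed behavior:

```python
class TestScheduler:
    """Minimal sketch of the peek/remove flow suggested by
    ducktape/tests/scheduler.py. Node counts stand in for ClusterSpec.
    """

    def __init__(self, tests, available_nodes):
        # tests: list of (name, nodes_needed); largest-demand-first
        # ordering is an assumption for illustration only.
        self._tests = sorted(tests, key=lambda t: t[1], reverse=True)
        self.available_nodes = available_nodes

    def __len__(self):
        return len(self._tests)

    def peek(self):
        # First pending test whose demand fits current capacity, else None.
        for t in self._tests:
            if t[1] <= self.available_nodes:
                return t
        return None

    def remove(self, t):
        self._tests.remove(t)


# Example: with 2 free nodes, the 3-node test is skipped over.
sched = TestScheduler([("small", 1), ("big", 3)], available_nodes=2)
nxt = sched.peek()
sched.remove(nxt)
remaining = len(sched)
blocked = sched.peek()
```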

FILE: ducktape/tests/serde.py
  class SerDe (line 18) | class SerDe(object):
    method serialize (line 19) | def serialize(self, obj):
    method deserialize (line 25) | def deserialize(self, bytes_obj, obj_cls=None):
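
The `SerDe` pair above is a symmetric serialize/deserialize codec used for runner events. A round-trip sketch; pickle is an assumption here, the actual encoding in `ducktape/tests/serde.py` may differ:

```python
import pickle


class SerDe:
    """Sketch of the serialize/deserialize pair in ducktape/tests/serde.py.

    Pickle is assumed for illustration; the invariant shown is only that
    deserialize(serialize(obj)) reproduces obj.
    """

    def serialize(self, obj):
        return pickle.dumps(obj)

    def deserialize(self, bytes_obj):
        return pickle.loads(bytes_obj)


sd = SerDe()
roundtrip = sd.deserialize(sd.serialize({"x": 1}))
```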

FILE: ducktape/tests/session.py
  class SessionContext (line 24) | class SessionContext(object):
    method __init__ (line 29) | def __init__(self, **kwargs) -> None:
    method globals (line 47) | def globals(self):
    method to_json (line 51) | def to_json(self):
  class SessionLoggerMaker (line 55) | class SessionLoggerMaker(LoggerMaker):
    method __init__ (line 56) | def __init__(self, session_context: SessionContext) -> None:
    method configure_logger (line 61) | def configure_logger(self) -> None:
  function generate_session_id (line 89) | def generate_session_id(session_id_file: str) -> str:
  function generate_results_dir (line 134) | def generate_results_dir(results_root: str, session_id: str) -> str:

FILE: ducktape/tests/status.py
  class TestStatus (line 16) | class TestStatus(object):
    method __init__ (line 17) | def __init__(self, status: str) -> None:
    method __eq__ (line 20) | def __eq__(self, other):
    method __str__ (line 23) | def __str__(self):
    method to_json (line 26) | def to_json(self):

FILE: ducktape/tests/test.py
  class Test (line 27) | class Test(TemplateRenderer):
    method __init__ (line 30) | def __init__(self, test_context, *args, **kwargs):
    method cluster (line 38) | def cluster(self):
    method logger (line 42) | def logger(self):
    method min_cluster_spec (line 45) | def min_cluster_spec(self):
    method min_cluster_size (line 53) | def min_cluster_size(self):
    method setup (line 61) | def setup(self):
    method teardown (line 67) | def teardown(self):
    method setUp (line 73) | def setUp(self):
    method tearDown (line 76) | def tearDown(self):
    method free_nodes (line 79) | def free_nodes(self):
    method compress_service_logs (line 86) | def compress_service_logs(self, node, service, node_logs):
    method copy_service_logs (line 109) | def copy_service_logs(self, test_status):
    method mark_for_collect (line 166) | def mark_for_collect(self, service, log_name=None):
    method mark_no_collect (line 174) | def mark_no_collect(self, service, log_name=None):
    method should_collect_log (line 177) | def should_collect_log(self, log_name, service):
  function _compress_cmd (line 184) | def _compress_cmd(log_path):
  function in_dir (line 195) | def in_dir(path):
  function in_temp_dir (line 208) | def in_temp_dir():
  function _new_temp_dir (line 216) | def _new_temp_dir():

FILE: ducktape/tests/test_context.py
  function _escape_pathname (line 36) | def _escape_pathname(s):
  class TestLoggerMaker (line 50) | class TestLoggerMaker(LoggerMaker):
    method __init__ (line 51) | def __init__(self, logger_name, log_dir, debug):
    method configure_logger (line 56) | def configure_logger(self):
  function test_logger (line 91) | def test_logger(logger_name, log_dir, debug):
  class TestContext (line 100) | class TestContext(object):
    method __init__ (line 103) | def __init__(self, **kwargs) -> None:
    method __repr__ (line 142) | def __repr__(self) -> str:
    method copy (line 149) | def copy(self, **kwargs) -> "TestContext":
    method local_scratch_dir (line 159) | def local_scratch_dir(self):
    method test_metadata (line 166) | def test_metadata(self):
    method logger_name (line 176) | def logger_name(test_context, test_index):
    method results_dir (line 183) | def results_dir(test_context, test_index):
    method expected_num_nodes (line 198) | def expected_num_nodes(self) -> int:
    method expected_cluster_spec (line 208) | def expected_cluster_spec(self) -> Optional[ClusterSpec]:
    method globals (line 230) | def globals(self):
    method module_name (line 234) | def module_name(self) -> str:
    method cls_name (line 238) | def cls_name(self) -> str:
    method function_name (line 242) | def function_name(self) -> str:
    method description (line 246) | def description(self):
    method injected_args_name (line 258) | def injected_args_name(self) -> str:
    method test_id (line 266) | def test_id(self) -> str:
    method test_name (line 270) | def test_name(self) -> str:
    method logger (line 285) | def logger(self):
    method close (line 294) | def close(self):

FILE: ducktape/utils/http_utils.py
  class HttpMixin (line 18) | class HttpMixin(object):
    method http_request (line 19) | def http_request(self, url, method, data="", headers=None, timeout=None):

FILE: ducktape/utils/local_filesystem_utils.py
  function mkdir_p (line 19) | def mkdir_p(path: str) -> None:
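
`mkdir_p` implies `mkdir -p` semantics: create intermediate directories and tolerate an already-existing target. A sketch of that behavior (the helper in `ducktape/utils/local_filesystem_utils.py` may be written differently; `os.makedirs(path, exist_ok=True)` is the modern one-liner):

```python
import errno
import os
import tempfile


def mkdir_p(path):
    """Create path and any missing parents; ignore 'already exists'."""
    try:
        os.makedirs(path)
    except OSError as e:
        # Re-raise anything except "directory already exists".
        if e.errno != errno.EEXIST or not os.path.isdir(path):
            raise


# Example: nested creation, and a second call is a harmless no-op.
base = tempfile.mkdtemp()
target = os.path.join(base, "a", "b", "c")
mkdir_p(target)
mkdir_p(target)  # idempotent: must not raise
```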

FILE: ducktape/utils/persistence.py
  function not_implemented_method (line 27) | def not_implemented_method(*args, **kwargs):
  class PDict (line 31) | class PDict(dict):
    method _as_transient (line 43) | def _as_transient(self):
    method copy (line 46) | def copy(self):
    method without (line 49) | def without(self, *keys):
    method using (line 55) | def using(self, **kwargs):
    method __reduce__ (line 60) | def __reduce__(self):
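
The `copy`/`without`/`using` methods above suggest a persistent-style dict whose helpers return new instances rather than mutating in place. A sketch of those three methods; the real `PDict` also blocks in-place mutation (note the `not_implemented_method` helper), which is omitted here:

```python
class PDict(dict):
    """Sketch of the persistent-dict API listed for
    ducktape/utils/persistence.py: every helper returns a new PDict.
    """

    def copy(self):
        return PDict(self)

    def without(self, *keys):
        # New PDict with the given keys dropped.
        return PDict((k, v) for k, v in self.items() if k not in keys)

    def using(self, **kwargs):
        # New PDict with the given entries added or overridden.
        d = dict(self)
        d.update(kwargs)
        return PDict(d)


base = PDict(a=1, b=2)
trimmed = base.without("b")
extended = base.using(c=3)
```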

FILE: ducktape/utils/terminal_size.py
  function get_terminal_size (line 26) | def get_terminal_size():
  function _get_terminal_size_windows (line 47) | def _get_terminal_size_windows():
  function _get_terminal_size_tput (line 78) | def _get_terminal_size_tput():
  function _get_terminal_size_linux (line 89) | def _get_terminal_size_linux():

FILE: ducktape/utils/util.py
  function wait_until (line 23) | def wait_until(condition, timeout_sec, backoff_sec=0.1, err_msg="", retr...
  function package_is_installed (line 62) | def package_is_installed(package_name):
  function ducktape_version (line 71) | def ducktape_version():
  function load_function (line 76) | def load_function(func_module_path) -> Callable:
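
The visible part of the `wait_until` signature (`condition, timeout_sec, backoff_sec=0.1, err_msg=""`, plus a truncated trailing parameter) suggests a poll-with-backoff loop. A sketch under that assumption; the real helper takes additional arguments not shown in the index:

```python
import time


def wait_until(condition, timeout_sec, backoff_sec=0.1, err_msg=""):
    """Poll condition() until it returns truthy or timeout_sec elapses.

    Sketch of the loop implied by the signature in ducktape/utils/util.py;
    raises TimeoutError if the condition never becomes true.
    """
    deadline = time.time() + timeout_sec
    while time.time() < deadline:
        if condition():
            return
        time.sleep(backoff_sec)
    raise TimeoutError(err_msg or "condition not met within %s seconds" % timeout_sec)


# Example: a condition that becomes true on the third poll.
state = {"calls": 0}


def ready():
    state["calls"] += 1
    return state["calls"] >= 3


wait_until(ready, timeout_sec=5, backoff_sec=0.01)
```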

FILE: setup.py
  class PyTest (line 14) | class PyTest(TestCommand):
    method initialize_options (line 17) | def initialize_options(self):
    method finalize_options (line 21) | def finalize_options(self):
    method run_tests (line 26) | def run_tests(self):

FILE: systests/cluster/test_debug.py
  class FailingTest (line 15) | class FailingTest(Test):
    method setup (line 20) | def setup(self):
    method matrix_test (line 28) | def matrix_test(self, string_param, int_param):
    method parametrized_test (line 34) | def parametrized_test(self, string_param, int_param):
    method failing_test (line 38) | def failing_test(self):
    method successful_test (line 42) | def successful_test(self):
  class DebugThisTest (line 46) | class DebugThisTest(Test):
    method one_node_test_sleep_90s (line 48) | def one_node_test_sleep_90s(self):
    method one_node_test_sleep_30s (line 55) | def one_node_test_sleep_30s(self):
    method another_one_node_test_sleep_30s (line 62) | def another_one_node_test_sleep_30s(self):
    method two_node_test (line 69) | def two_node_test(self):
    method another_two_node_test (line 74) | def another_two_node_test(self):
    method a_two_node_ignored_test (line 80) | def a_two_node_ignored_test(self):
    method yet_another_two_node_test (line 84) | def yet_another_two_node_test(self):
    method three_node_test (line 89) | def three_node_test(self):
    method three_node_test_sleeping_30s (line 94) | def three_node_test_sleeping_30s(self):
    method another_three_node_test (line 101) | def another_three_node_test(self):
    method bad_alloc_test (line 106) | def bad_alloc_test(self):

FILE: systests/cluster/test_no_cluster.py
  class NoClusterTest (line 5) | class NoClusterTest(Test):
    method test_zero_nodes (line 9) | def test_zero_nodes(self):

FILE: systests/cluster/test_remote_account.py
  function generate_tempdir_name (line 35) | def generate_tempdir_name():
  class RemoteAccountTestService (line 42) | class RemoteAccountTestService(Service):
    method __init__ (line 45) | def __init__(self, context):
    method log_file (line 57) | def log_file(self):
    method start_node (line 60) | def start_node(self, node):
    method stop_node (line 64) | def stop_node(self, node):
    method clean_node (line 67) | def clean_node(self, node):
    method write_to_log (line 70) | def write_to_log(self, msg):
  class GenericService (line 74) | class GenericService(Service):
    method __init__ (line 77) | def __init__(self, context, num_nodes):
    method stop_node (line 83) | def stop_node(self, node):
    method clean_node (line 87) | def clean_node(self, node):
  class UnderUtilizedTest (line 91) | class UnderUtilizedTest(Test):
    method setup (line 92) | def setup(self):
    method under_utilized_test (line 96) | def under_utilized_test(self):
  class FileSystemTest (line 111) | class FileSystemTest(Test):
    method setup (line 116) | def setup(self):
    method create_file_test (line 122) | def create_file_test(self):
    method mkdir_test (line 137) | def mkdir_test(self):
    method mkdirs_nested_test (line 152) | def mkdirs_nested_test(self):
    method open_test (line 162) | def open_test(self):
    method exists_file_test (line 185) | def exists_file_test(self):
    method exists_dir_test (line 202) | def exists_dir_test(self):
    method remove_test (line 217) | def remove_test(self):
  function make_dir_structure (line 253) | def make_dir_structure(base_dir, dir_structure, node=None):
  function verify_dir_structure (line 285) | def verify_dir_structure(base_dir, dir_structure, node=None):
  class CopyToAndFroTest (line 315) | class CopyToAndFroTest(Test):
    method setup (line 318) | def setup(self):
    method test_copy_to_dir_with_rename (line 329) | def test_copy_to_dir_with_rename(self):
    method test_copy_to_dir_as_subtree (line 339) | def test_copy_to_dir_as_subtree(self):
    method test_copy_from_dir_with_rename (line 350) | def test_copy_from_dir_with_rename(self):
    method test_copy_from_dir_as_subtree (line 360) | def test_copy_from_dir_as_subtree(self):
    method teardown (line 367) | def teardown(self):
  class CopyDirectTest (line 373) | class CopyDirectTest(Test):
    method setup (line 374) | def setup(self):
    method test_copy_file (line 383) | def test_copy_file(self):
    method test_copy_directory (line 400) | def test_copy_directory(self):
  class TestClusterSpec (line 412) | class TestClusterSpec(Test):
    method test_create_two_node_service (line 414) | def test_create_two_node_service(self):
    method three_nodes_test (line 428) | def three_nodes_test(self):
  class RemoteAccountTest (line 434) | class RemoteAccountTest(Test):
    method __init__ (line 435) | def __init__(self, test_context):
    method setup (line 439) | def setup(self):
    method test_flaky (line 443) | def test_flaky(self):
    method test_ssh_capture_combine_stderr (line 457) | def test_ssh_capture_combine_stderr(self):
    method test_ssh_output_combine_stderr (line 473) | def test_ssh_output_combine_stderr(self):
    method test_ssh_capture (line 487) | def test_ssh_capture(self):
    method test_ssh_output (line 497) | def test_ssh_output(self):
    method test_monitor_log (line 506) | def test_monitor_log(self):
    method test_monitor_log_exception (line 536) | def test_monitor_log_exception(self):
    method test_kill_process (line 556) | def test_kill_process(self):
  class TestIterWrapper (line 586) | class TestIterWrapper(Test):
    method setup (line 587) | def setup(self):
    method test_iter_wrapper (line 601) | def test_iter_wrapper(self):
    method test_iter_wrapper_timeout (line 612) | def test_iter_wrapper_timeout(self):
    method teardown (line 631) | def teardown(self):
  class RemoteAccountCompressedTest (line 639) | class RemoteAccountCompressedTest(Test):
    method __init__ (line 640) | def __init__(self, test_context):
    method setup (line 647) | def setup(self):
    method test_log_compression_with_non_existent_files (line 651) | def test_log_compression_with_non_existent_files(self):
  class CompressionErrorFilter (line 664) | class CompressionErrorFilter(logging.Filter):
    method __init__ (line 665) | def __init__(self, test):
    method filter (line 669) | def filter(self, record):

FILE: systests/cluster/test_runner_operations.py
  class SimpleEchoService (line 20) | class SimpleEchoService(Service):
    method __init__ (line 27) | def __init__(self, context):
    method echo (line 31) | def echo(self):
  class SimpleRunnerTest (line 36) | class SimpleRunnerTest(Test):
    method setup (line 37) | def setup(self):
    method timeout_test (line 41) | def timeout_test(self):
    method quick1_test (line 52) | def quick1_test(self):
    method quick2_test (line 63) | def quick2_test(self):

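The Service subclasses indexed here (SimpleEchoService above, GenericService and MemoryEater elsewhere in this listing) all implement the same per-node lifecycle hooks: start_node, stop_node, clean_node. A minimal self-contained sketch of that pattern follows; the `Service` base class below is a stub standing in for ducktape's real `ducktape.services.service.Service` (which also handles cluster allocation and logging), and the node names are hypothetical:

```python
# Illustrative sketch of the Service lifecycle pattern seen in this index.
# `Service` here is a stub, NOT ducktape's actual base class.

class Service:
    def __init__(self, num_nodes):
        self.num_nodes = num_nodes
        # Hypothetical stand-in for allocated cluster nodes.
        self.nodes = [f"node-{i}" for i in range(num_nodes)]

    def start(self):
        # Drive the per-node hook across all allocated nodes.
        for node in self.nodes:
            self.start_node(node)

    def stop(self):
        for node in self.nodes:
            self.stop_node(node)

    # Subclasses implement the per-node lifecycle hooks.
    def start_node(self, node):
        raise NotImplementedError

    def stop_node(self, node):
        raise NotImplementedError

    def clean_node(self, node):
        raise NotImplementedError


class EchoService(Service):
    """Mirrors the shape of SimpleEchoService from the listing above."""

    def __init__(self):
        super().__init__(num_nodes=1)
        self.log = []

    def start_node(self, node):
        self.log.append(f"started {node}")

    def stop_node(self, node):
        self.log.append(f"stopped {node}")

    def clean_node(self, node):
        self.log.append(f"cleaned {node}")


svc = EchoService()
svc.start()
svc.stop()
print(svc.log)  # ['started node-0', 'stopped node-0']
```

In real ducktape code the framework calls these hooks for you during test setup and teardown; this sketch only shows the shape of the contract the indexed subclasses fulfill.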
FILE: tests/cluster/check_cluster.py
  class CheckCluster (line 26) | class CheckCluster(object):
    method setup_method (line 27) | def setup_method(self, _):
    method spec (line 35) | def spec(self, linux_nodes, windows_nodes):
    method check_enough_capacity (line 43) | def check_enough_capacity(self):
    method check_not_enough_capacity (line 47) | def check_not_enough_capacity(self):

FILE: tests/cluster/check_cluster_spec.py
  class CheckClusterSpec (line 19) | class CheckClusterSpec(object):
    method check_cluster_spec_sizes (line 20) | def check_cluster_spec_sizes(self):
    method check_to_string (line 25) | def check_to_string(self):
    method check_simple_linux_with_node_type (line 31) | def check_simple_linux_with_node_type(self):
    method check_simple_linux_without_node_type (line 39) | def check_simple_linux_without_node_type(self):
    method check_grouped_by_os_and_type_empty (line 47) | def check_grouped_by_os_and_type_empty(self):
    method check_grouped_by_os_and_type_single_type (line 53) | def check_grouped_by_os_and_type_single_type(self):
    method check_grouped_by_os_and_type_mixed (line 59) | def check_grouped_by_os_and_type_mixed(self):

FILE: tests/cluster/check_finite_subcluster.py
  class MockFiniteSubclusterNode (line 26) | class MockFiniteSubclusterNode:
    method operating_system (line 28) | def operating_system(self):
  class CheckFiniteSubcluster (line 32) | class CheckFiniteSubcluster(object):
    method check_cluster_size (line 35) | def check_cluster_size(self):
    method check_pickleable (line 43) | def check_pickleable(self):
    method check_allocate_free (line 47) | def check_allocate_free(self):
    method check_alloc_too_many (line 69) | def check_alloc_too_many(self):
    method check_free_too_many (line 75) | def check_free_too_many(self):

FILE: tests/cluster/check_json.py
  function create_json_cluster (line 23) | def create_json_cluster(*args, **kwargs):
  class CheckJsonCluster (line 27) | class CheckJsonCluster(object):
    method check_invalid_json (line 30) | def check_invalid_json(self):
    method cluster_hostnames (line 40) | def cluster_hostnames(nodes):
    method check_cluster_size (line 43) | def check_cluster_size(self):
    method check_pickleable (line 52) | def check_pickleable(self):
    method check_allocate_free (line 65) | def check_allocate_free(self):
    method check_parsing (line 97) | def check_parsing(self):
    method check_exhausts_supply (line 126) | def check_exhausts_supply(self):
    method check_node_names (line 131) | def check_node_names(self):

FILE: tests/cluster/check_localhost.py
  class CheckLocalhostCluster (line 21) | class CheckLocalhostCluster(object):
    method setup_method (line 22) | def setup_method(self, _):
    method check_size (line 25) | def check_size(self):
    method check_pickleable (line 28) | def check_pickleable(self):
    method check_request_free (line 32) | def check_request_free(self):

FILE: tests/cluster/check_node_container.py
  function fake_account (line 32) | def fake_account(host, is_available=True, node_type=None):
  function fake_win_account (line 38) | def fake_win_account(host, is_available=True, node_type=None):
  function count_nodes_by_os (line 44) | def count_nodes_by_os(container, target_os):
  class CheckNodeContainer (line 53) | class CheckNodeContainer(object):
    method check_sizes (line 54) | def check_sizes(self):
    method check_add_and_remove (line 63) | def check_add_and_remove(self):
    method check_remove_single_node_spec (line 88) | def check_remove_single_node_spec(self):
    method check_not_enough_nodes_to_remove (line 153) | def check_not_enough_nodes_to_remove(self, cluster_spec):
    method check_not_enough_healthy_nodes (line 204) | def check_not_enough_healthy_nodes(self, accounts):
    method check_enough_healthy_but_some_bad_nodes_too (line 279) | def check_enough_healthy_but_some_bad_nodes_too(self, accounts):
    method check_empty_cluster_spec (line 318) | def check_empty_cluster_spec(self):
    method check_none_cluster_spec (line 328) | def check_none_cluster_spec(self):
    method check_node_groups_by_type (line 339) | def check_node_groups_by_type(self):
    method check_remove_spec_with_node_type (line 362) | def check_remove_spec_with_node_type(self):
    method check_remove_spec_with_node_type_not_available (line 383) | def check_remove_spec_with_node_type_not_available(self):
    method check_remove_spec_node_type_none_matches_any (line 397) | def check_remove_spec_node_type_none_matches_any(self):
    method check_remove_spec_mixed_node_types (line 413) | def check_remove_spec_mixed_node_types(self):
    method check_mixed_typed_and_untyped_double_counting_rejected (line 441) | def check_mixed_typed_and_untyped_double_counting_rejected(self):
    method check_mixed_typed_and_untyped_valid_passes (line 476) | def check_mixed_typed_and_untyped_valid_passes(self):
    method check_specific_type_shortage_detected (line 507) | def check_specific_type_shortage_detected(self):
    method check_holistic_os_capacity_check (line 536) | def check_holistic_os_capacity_check(self):
    method check_multi_os_mixed_requirements (line 563) | def check_multi_os_mixed_requirements(self):
    method check_only_any_type_requirements (line 594) | def check_only_any_type_requirements(self):
    method check_allocation_order_specific_before_any (line 629) | def check_allocation_order_specific_before_any(self):
    method check_allocation_order_with_multiple_specific_types (line 666) | def check_allocation_order_with_multiple_specific_types(self):

FILE: tests/cluster/check_remoteaccount.py
  class DummyException (line 30) | class DummyException(Exception):
  function raise_error_checker (line 34) | def raise_error_checker(error, remote_account):
  function raise_no_error_checker (line 38) | def raise_no_error_checker(error, remote_account):
  class SimpleServer (line 42) | class SimpleServer(object):
    method __init__ (line 45) | def __init__(self):
    method start (line 52) | def start(self, delay_sec=0.0):
    method stop (line 65) | def stop(self):
  class CheckRemoteAccount (line 75) | class CheckRemoteAccount(object):
    method setup (line 76) | def setup(self):
    method check_wait_for_http (line 80) | def check_wait_for_http(self):
    method check_wait_for_http_timeout (line 85) | def check_wait_for_http_timeout(self):
    method check_ssh_checker (line 110) | def check_ssh_checker(self, checkers):
    method teardown (line 125) | def teardown(self):
  class CheckRemoteAccountEquality (line 129) | class CheckRemoteAccountEquality(object):
    method check_remote_account_equality (line 130) | def check_remote_account_equality(self):

FILE: tests/cluster/check_vagrant.py
  function make_vagrant_cluster (line 51) | def make_vagrant_cluster(*args, **kwargs):
  class CheckVagrantCluster (line 55) | class CheckVagrantCluster(object):
    method setup_method (line 56) | def setup_method(self, _):
    method teardown_method (line 63) | def teardown_method(self, _):
    method _set_monkeypatch_attr (line 67) | def _set_monkeypatch_attr(self, monkeypatch):
    method check_pickleable (line 77) | def check_pickleable(self, monkeypatch):
    method check_one_host_parsing (line 82) | def check_one_host_parsing(self, monkeypatch):
    method check_cluster_file_write (line 101) | def check_cluster_file_write(self, monkeypatch):
    method check_cluster_file_read (line 132) | def check_cluster_file_read(self, monkeypatch):
    method check_no_valid_network_devices (line 196) | def check_no_valid_network_devices(self, monkeypatch):

FILE: tests/command_line/check_main.py
  class CheckSetupResultsDirectory (line 27) | class CheckSetupResultsDirectory(object):
    method setup_method (line 28) | def setup_method(self, _):
    method validate_directories (line 33) | def validate_directories(self):
    method check_creation (line 42) | def check_creation(self):
    method check_symlink (line 49) | def check_symlink(self):
  class CheckUserDefinedGlobals (line 92) | class CheckUserDefinedGlobals(object):
    method check_immutable (line 95) | def check_immutable(self):
    method check_pickleable (line 105) | def check_pickleable(self):
    method check_parseable_json_string (line 112) | def check_parseable_json_string(self):
    method check_unparseable (line 117) | def check_unparseable(self):
    method check_parse_from_file (line 122) | def check_parse_from_file(self):
    method check_bad_parse_from_file (line 135) | def check_bad_parse_from_file(self):
    method check_non_dict (line 149) | def check_non_dict(self):
    method check_non_dict_from_file (line 158) | def check_non_dict_from_file(self):

FILE: tests/command_line/check_parse_args.py
  class Capturing (line 26) | class Capturing(object):
    method __enter__ (line 34) | def __enter__(self):
    method __exit__ (line 39) | def __exit__(self, *args):
  class CheckParseArgs (line 44) | class CheckParseArgs(object):
    method check_empty_args (line 45) | def check_empty_args(self):
    method check_version (line 54) | def check_version(self):
    method check_empty_test_path (line 63) | def check_empty_test_path(self):
    method check_multiple_test_paths (line 71) | def check_multiple_test_paths(self):
    method check_multiple_exclude (line 83) | def check_multiple_exclude(self):
    method check_config_overrides (line 89) | def check_config_overrides(self, monkeypatch):
    method check_config_file_option (line 132) | def check_config_file_option(self):
    method check_config_overrides_for_n_args (line 148) | def check_config_overrides_for_n_args(self, monkeypatch):

FILE: tests/ducktape_mock.py
  function mock_cluster (line 32) | def mock_cluster():
  class FakeClusterNode (line 40) | class FakeClusterNode(object):
    method operating_system (line 42) | def operating_system(self):
  class FakeCluster (line 46) | class FakeCluster(Cluster):
    method __init__ (line 49) | def __init__(self, num_nodes):
    method do_alloc (line 56) | def do_alloc(self, cluster_spec):
    method free_single (line 61) | def free_single(self, node):
    method available (line 65) | def available(self):
    method used (line 68) | def used(self):
  function session_context (line 72) | def session_context(**kwargs):
  class TestMockTest (line 84) | class TestMockTest(Test):
    method mock_test (line 85) | def mock_test(self):
  function test_context (line 89) | def test_context(session_context=session_context(), cluster=mock_cluster...
  class MockNode (line 101) | class MockNode(object):
    method __init__ (line 104) | def __init__(self):
  class MockAccount (line 108) | class MockAccount(LinuxRemoteAccount):
    method __init__ (line 111) | def __init__(self, **kwargs):
  class MockSender (line 117) | class MockSender(MagicMock):
    method __init__ (line 120) | def __init__(self, *args, **kwargs):
    method send (line 124) | def send(self, *args, **kwargs):

FILE: tests/loader/check_loader.py
  class LocalFileAdapter (line 31) | class LocalFileAdapter(requests.adapters.HTTPAdapter):
    method build_response_from_file (line 32) | def build_response_from_file(self, request):
    method send (line 42) | def send(self, request, stream=False, timeout=None, verify=True, cert=...
  function resources_dir (line 46) | def resources_dir():
  function discover_dir (line 50) | def discover_dir():
  function sub_dir_a (line 55) | def sub_dir_a():
  function num_tests_in_file (line 59) | def num_tests_in_file(fpath):
  function num_tests_in_dir (line 73) | def num_tests_in_dir(dpath):
  function invalid_test_suites (line 88) | def invalid_test_suites():
  class CheckTestLoader (line 100) | class CheckTestLoader(object):
    method setup_method (line 101) | def setup_method(self, method):
    method check_test_loader_raises_on_invalid_test_suite (line 108) | def check_test_loader_raises_on_invalid_test_suite(self, suite_file_pa...
    method check_test_loader_with_test_suites_and_files (line 237) | def check_test_loader_with_test_suites_and_files(self, expected_count,...
    method check_test_loader_with_directory (line 248) | def check_test_loader_with_directory(self):
    method check_test_loader_with_file (line 261) | def check_test_loader_with_file(self, dir_, file_name):
    method check_test_loader_with_glob (line 269) | def check_test_loader_with_glob(self):
    method check_test_loader_multiple_files (line 275) | def check_test_loader_multiple_files(self):
    method check_test_loader_include_dir_exclude_file (line 283) | def check_test_loader_include_dir_exclude_file(self):
    method check_test_loader_exclude_subdir (line 291) | def check_test_loader_exclude_subdir(self):
    method check_test_loader_exclude_subdir_glob (line 298) | def check_test_loader_exclude_subdir_glob(self):
    method check_test_loader_raises_when_nothing_is_included (line 306) | def check_test_loader_raises_when_nothing_is_included(self):
    method check_test_loader_raises_on_include_subdir_exclude_parent_dir (line 313) | def check_test_loader_raises_on_include_subdir_exclude_parent_dir(self):
    method check_test_loader_with_nonexistent_file (line 318) | def check_test_loader_with_nonexistent_file(self):
    method check_test_loader_include_dir_without_tests (line 324) | def check_test_loader_include_dir_without_tests(self):
    method check_test_loader_include_file_without_tests (line 329) | def check_test_loader_include_file_without_tests(self):
    method check_test_loader_allow_exclude_dir_without_tests (line 334) | def check_test_loader_allow_exclude_dir_without_tests(self):
    method check_test_loader_allow_exclude_file_without_tests (line 339) | def check_test_loader_allow_exclude_file_without_tests(self):
    method check_test_loader_allow_exclude_nonexistent_file (line 347) | def check_test_loader_allow_exclude_nonexistent_file(self):
    method check_test_loader_with_class (line 355) | def check_test_loader_with_class(self):
    method check_test_loader_include_dir_exclude_class (line 365) | def check_test_loader_include_dir_exclude_class(self):
    method check_test_loader_include_class_exclude_method (line 372) | def check_test_loader_include_class_exclude_method(self):
    method check_test_loader_include_dir_exclude_method (line 380) | def check_test_loader_include_dir_exclude_method(self):
    method check_test_loader_with_matrix_params (line 387) | def check_test_loader_with_matrix_params(self):
    method check_test_loader_with_params_special_chars (line 401) | def check_test_loader_with_params_special_chars(self):
    method check_test_loader_with_multiple_matrix_params (line 419) | def check_test_loader_with_multiple_matrix_params(self):
    method check_test_loader_with_parametrize (line 436) | def check_test_loader_with_parametrize(self):
    method check_test_loader_with_parametrize_with_objects (line 448) | def check_test_loader_with_parametrize_with_objects(self):
    method check_test_loader_with_injected_args (line 461) | def check_test_loader_with_injected_args(self):
    method check_test_loader_raises_with_both_injected_args_and_parameters (line 480) | def check_test_loader_raises_with_both_injected_args_and_parameters(se...
    method check_test_loader_raises_on_params_not_found (line 495) | def check_test_loader_raises_on_params_not_found(self):
    method check_test_loader_raises_on_malformed_test_discovery_symbol (line 518) | def check_test_loader_raises_on_malformed_test_discovery_symbol(self, ...
    method check_test_loader_exclude_with_injected_args (line 524) | def check_test_loader_exclude_with_injected_args(self):
    method check_test_loader_exclude_with_params (line 539) | def check_test_loader_exclude_with_params(self):
    method check_test_loader_exclude_with_params_multiple (line 557) | def check_test_loader_exclude_with_params_multiple(self):
    method check_test_loader_with_subsets (line 576) | def check_test_loader_with_subsets(self):
    method check_test_loader_with_invalid_subsets (line 600) | def check_test_loader_with_invalid_subsets(self):
    method check_test_loader_with_time_based_subsets (line 607) | def check_test_loader_with_time_based_subsets(self):
    method check_loader_with_non_yml_file (line 639) | def check_loader_with_non_yml_file(self):
    method check_loader_with_non_suite_yml_file (line 648) | def check_loader_with_non_suite_yml_file(self):
    method check_test_loader_with_absolute_path (line 660) | def check_test_loader_with_absolute_path(self):
  function join_parsed_symbol_components (line 686) | def join_parsed_symbol_components(parsed):
  function normalize_ending_slash (line 711) | def normalize_ending_slash(dirname):
  class CheckParseSymbol (line 717) | class CheckParseSymbol(object):
    method check_parse_discovery_symbol (line 718) | def check_parse_discovery_symbol(self):

FILE: tests/loader/resources/loader_test_directory/name_does_not_match_pattern.py
  class TestNotLoaded (line 4) | class TestNotLoaded(Test):
    method test_a (line 7) | def test_a(self):

FILE: tests/loader/resources/loader_test_directory/sub_dir_a/test_c.py
  class TestC (line 20) | class TestC(Test):
    method test (line 23) | def test(self):
  class TestInvisible (line 27) | class TestInvisible(object):
    method test_invisible (line 30) | def test_invisible(self):

FILE: tests/loader/resources/loader_test_directory/sub_dir_a/test_d.py
  class TestD (line 20) | class TestD(Test):
    method test_d (line 23) | def test_d(self):
    method test_dd (line 26) | def test_dd(self):
    method ddd_test (line 29) | def ddd_test(self):
  class TestInvisible (line 33) | class TestInvisible(object):
    method test_invisible (line 36) | def test_invisible(self):

FILE: tests/loader/resources/loader_test_directory/sub_dir_no_tests/just_some_file.py
  class JustSomeClass (line 1) | class JustSomeClass(object):

FILE: tests/loader/resources/loader_test_directory/test_a.py
  class TestA (line 20) | class TestA(Test):
    method test_a (line 23) | def test_a(self):
  class TestInvisible (line 27) | class TestInvisible(object):
    method test_invisible (line 30) | def test_invisible(self):

FILE: tests/loader/resources/loader_test_directory/test_b.py
  class TestB (line 20) | class TestB(Test):
    method test_b (line 23) | def test_b(self):
  class TestBB (line 27) | class TestBB(Test):
    method test_bb_one (line 32) | def test_bb_one(self):
    method bb_two_test (line 35) | def bb_two_test(self):
    method other_method (line 38) | def other_method(self):
  class TestInvisible (line 42) | class TestInvisible(object):
    method test_invisible (line 45) | def test_invisible(self):

FILE: tests/loader/resources/loader_test_directory/test_decorated.py
  class TestMatrix (line 22) | class TestMatrix(Test):
    method test_thing (line 26) | def test_thing(self, x, y):
  class TestStackedMatrix (line 30) | class TestStackedMatrix(Test):
    method test_thing (line 34) | def test_thing(self, x, y):
  class TestParametrized (line 38) | class TestParametrized(Test):
    method test_single_decorator (line 40) | def test_single_decorator(self, x=1, y="hi"):
    method test_thing (line 46) | def test_thing(self, x, y):
  class TestParametrizdeSpecial (line 51) | class TestParametrizdeSpecial(Test):
    method test_special_characters_params (line 53) | def test_special_characters_params(self, version, chars):
  class TestObjectParameters (line 58) | class TestObjectParameters(Test):
    method test_thing (line 61) | def test_thing(self, d, lst):

FILE: tests/logger/check_logger.py
  class DummyFileLoggerMaker (line 24) | class DummyFileLoggerMaker(LoggerMaker):
    method __init__ (line 25) | def __init__(self, log_dir, n_handles):
    method logger_name (line 31) | def logger_name(self):
    method configure_logger (line 34) | def configure_logger(self):
  function open_files (line 40) | def open_files():
  class CheckLogger (line 46) | class CheckLogger(object):
    method setup_method (line 47) | def setup_method(self, _):
    method check_close_logger (line 50) | def check_close_logger(self):
    method teardown_method (line 63) | def teardown_method(self, _):

FILE: tests/mark/check_cluster_use_metadata.py
  class CheckClusterUseAnnotation (line 26) | class CheckClusterUseAnnotation(object):
    method check_basic_usage_arbitrary_metadata (line 27) | def check_basic_usage_arbitrary_metadata(self):
    method check_basic_usage_cluster_spec (line 40) | def check_basic_usage_cluster_spec(self):
    method check_basic_usage_num_nodes (line 56) | def check_basic_usage_num_nodes(self):
    method check_empty_cluster_annotation (line 71) | def check_empty_cluster_annotation(self, fail_greedy_tests, has_annota...
    method check_zero_nodes_annotation (line 99) | def check_zero_nodes_annotation(self, fail_greedy_tests):
    method check_with_parametrize (line 115) | def check_with_parametrize(self):
    method check_beneath_parametrize (line 127) | def check_beneath_parametrize(self):
    method check_with_nodes_default_parametrize_matrix (line 145) | def check_with_nodes_default_parametrize_matrix(self):
    method check_no_override (line 174) | def check_no_override(self):
    method check_parametrized_with_multiple_cluster_annotations (line 192) | def check_parametrized_with_multiple_cluster_annotations(self):
    method check_matrix_with_multiple_cluster_annotations (line 211) | def check_matrix_with_multiple_cluster_annotations(self):
    method check_with_ignore (line 233) | def check_with_ignore(self):

FILE: tests/mark/check_env.py
  class CheckEnv (line 21) | class CheckEnv(object):
    method check_does_not_raise_exception_when_key_not_exists (line 22) | def check_does_not_raise_exception_when_key_not_exists(self):
    method check_has_env_annotation (line 28) | def check_has_env_annotation(self):
    method check_is_ignored_if_env_not_correct (line 36) | def check_is_ignored_if_env_not_correct(self):
    method check_is_not_ignore_if_correct_env (line 45) | def check_is_not_ignore_if_correct_env(self):

FILE: tests/mark/check_ignore.py
  class CheckIgnore (line 21) | class CheckIgnore(object):
    method check_simple (line 22) | def check_simple(self):
    method check_simple_method (line 32) | def check_simple_method(self):
    method check_ignore_all (line 43) | def check_ignore_all(self):
    method check_ignore_all_method (line 58) | def check_ignore_all_method(self):
    method check_ignore_specific (line 77) | def check_ignore_specific(self):
    method check_ignore_specific_method (line 95) | def check_ignore_specific_method(self):
    method check_invalid_specific_ignore (line 114) | def check_invalid_specific_ignore(self):
    method check_invalid_ignore_all (line 132) | def check_invalid_ignore_all(self):

FILE: tests/mark/check_parametrize.py
  class CheckParametrize (line 19) | class CheckParametrize(object):
    method check_simple (line 20) | def check_simple(self):
    method check_simple_method (line 35) | def check_simple_method(self):
    method check_stacked (line 50) | def check_stacked(self):
    method check_stacked_method (line 69) | def check_stacked_method(self):
  class CheckMatrix (line 91) | class CheckMatrix(object):
    method check_simple (line 92) | def check_simple(self):
    method check_simple_method (line 109) | def check_simple_method(self):
    method check_stacked (line 128) | def check_stacked(self):
    method check_stacked_method (line 148) | def check_stacked_method(self):
  class CheckDefaults (line 170) | class CheckDefaults(object):
    method check_defaults (line 171) | def check_defaults(self):
    method check_defaults_method (line 199) | def check_defaults_method(self):
    method check_overlap_param (line 233) | def check_overlap_param(self):
    method check_overlap_matrix (line 253) | def check_overlap_matrix(self):
    method check_only_defaults (line 282) | def check_only_defaults(self):

FILE: tests/reporter/check_symbol_reporter.py
  function check_to_symbol_no_args (line 7) | def check_to_symbol_no_args(tmp_path):
  function check_to_symbol_relative_path (line 19) | def check_to_symbol_relative_path(tmp_path):
  function check_to_symbol_with_args (line 31) | def check_to_symbol_with_args():

FILE: tests/runner/check_runner.py
  class CheckRunner (line 54) | class CheckRunner(object):
    method check_insufficient_cluster_resources (line 55) | def check_insufficient_cluster_resources(self):
    method _do_expand (line 78) | def _do_expand(self, test_file, test_class, test_methods, cluster=None...
    method check_simple_run (line 92) | def check_simple_run(self):
    method check_deflake_run (line 122) | def check_deflake_run(self):
    method check_runner_report_junit (line 144) | def check_runner_report_junit(self):
    method check_exit_first (line 191) | def check_exit_first(self):
    method check_exits_if_failed_to_initialize (line 211) | def check_exits_if_failed_to_initialize(self):
    method check_sends_result_when_error_reporting_exception (line 245) | def check_sends_result_when_error_reporting_exception(self, exc_msg_mo...
    method check_run_failure_with_bad_cluster_allocation (line 267) | def check_run_failure_with_bad_cluster_allocation(self):
    method check_test_failure_with_too_many_nodes_requested (line 291) | def check_test_failure_with_too_many_nodes_requested(self):
    method check_runner_timeout (line 322) | def check_runner_timeout(self):
    method check_fail_greedy_tests (line 362) | def check_fail_greedy_tests(self, fail_greedy_tests):
    method check_cluster_shrink (line 386) | def check_cluster_shrink(self):
    method check_cluster_shrink_reschedule (line 424) | def check_cluster_shrink_reschedule(self):
    method check_cluster_shrink_to_zero (line 487) | def check_cluster_shrink_to_zero(self):
    method check_runner_client_report (line 518) | def check_runner_client_report(self):
    method check_report_remaining_as_failed (line 547) | def check_report_remaining_as_failed(self):
    method check_report_active_as_failed (line 576) | def check_report_active_as_failed(self):
    method check_report_active_as_failed_frees_cluster (line 617) | def check_report_active_as_failed_frees_cluster(self):
    method check_runner_client_shutdown_flag (line 661) | def check_runner_client_shutdown_flag(self):
    method check_duplicate_finished_message_handling (line 695) | def check_duplicate_finished_message_handling(self):
    method check_timeout_exception_join_timeout_param (line 762) | def check_timeout_exception_join_timeout_param(self):
  class ShrinkingLocalhostCluster (line 791) | class ShrinkingLocalhostCluster(LocalhostCluster):
    method __init__ (line 792) | def __init__(self, *args, shrink_on=1, **kwargs):
    method do_alloc (line 799) | def do_alloc(self, cluster_spec):

FILE: tests/runner/check_runner_memory.py
  class InstrumentedTestRunner (line 39) | class InstrumentedTestRunner(TestRunner):
    method __init__ (line 44) | def __init__(self, *args, **kwargs):
    method _run_single_test (line 49) | def _run_single_test(self, test_context):
  class CheckMemoryUsage (line 58) | class CheckMemoryUsage(object):
    method setup_method (line 59) | def setup_method(self, _):
    method check_for_inter_test_memory_leak (line 63) | def check_for_inter_test_memory_leak(self):
    method validate_memory_measurements (line 100) | def validate_memory_measurements(self, measurements):
    method _linear_regression_slope (line 129) | def _linear_regression_slope(self, arr):

FILE: tests/runner/check_sender_receiver.py
  class CheckSenderReceiver (line 29) | class CheckSenderReceiver(object):
    method ready_response (line 30) | def ready_response(self, client_id, port):
    method check_simple_messaging (line 40) | def check_simple_messaging(self):
    method check_timeout (line 64) | def check_timeout(self):
    method check_exponential_backoff (line 83) | def check_exponential_backoff(self):

FILE: tests/runner/fake_remote_account.py
  class FakeRemoteAccount (line 5) | class FakeRemoteAccount(RemoteAccount):
    method __init__ (line 6) | def __init__(self, *args, is_available=True, **kwargs):
    method available (line 11) | def available(self):
    method fetch_externally_routable_ip (line 14) | def fetch_externally_routable_ip(self, *args, **kwargs):
  class FakeWindowsRemoteAccount (line 18) | class FakeWindowsRemoteAccount(FakeRemoteAccount):
    method __init__ (line 19) | def __init__(self, *args, **kwargs):
  function create_fake_remote_account (line 24) | def create_fake_remote_account(*args, **kwargs):

FILE: tests/runner/resources/test_bad_actor.py
  class FakeService (line 6) | class FakeService(Service):
  class BadActorTest (line 10) | class BadActorTest(Test):
    method test_too_many_nodes (line 12) | def test_too_many_nodes(self):

FILE: tests/runner/resources/test_failing_tests.py
  class FailingTest (line 24) | class FailingTest(Test):
    method __init__ (line 25) | def __init__(self, test_context):
    method test_fail (line 30) | def test_fail(self, x):

FILE: tests/runner/resources/test_fails_to_init.py
  class FailsToInitTest (line 21) | class FailsToInitTest(Test):
    method __init__ (line 24) | def __init__(self, test_context):
    method test_nothing (line 30) | def test_nothing(self):

FILE: tests/runner/resources/test_fails_to_init_in_setup.py
  class FailsToInitInSetupTest (line 21) | class FailsToInitInSetupTest(Test):
    method __init__ (line 24) | def __init__(self, test_context):
    method setUp (line 27) | def setUp(self):
    method test_nothing (line 32) | def test_nothing(self):

FILE: tests/runner/resources/test_memory_leak.py
  class MemoryEater (line 24) | class MemoryEater(Service):
    method __init__ (line 27) | def __init__(self, context):
    method start_node (line 31) | def start_node(self, node):
    method stop_node (line 34) | def stop_node(self, node):
    method clean_node (line 37) | def clean_node(self, node):
    method num_nodes (line 41) | def num_nodes(self):
  class MemoryLeakTest (line 45) | class MemoryLeakTest(Test):
    method __init__ (line 50) | def __init__(self, test_context):
    method test_leak (line 56) | def test_leak(self, x):

FILE: tests/runner/resources/test_thingy.py
  class TestThingy (line 24) | class TestThingy(Test):
    method test_pi (line 28) | def test_pi(self):
    method test_delayed (line 32) | def test_delayed(self):
    method test_ignore1 (line 37) | def test_ignore1(self):
    method test_ignore2 (line 43) | def test_ignore2(self, x=2):
    method test_failure (line 47) | def test_failure(self):
    method test_flaky (line 51) | def test_flaky(self):
  class ClusterTestThingy (line 57) | class ClusterTestThingy(Test):
    method test_bad_num_nodes (line 61) | def test_bad_num_nodes(self):

FILE: tests/runner/resources/test_various_num_nodes.py
  class VariousNumNodesTest (line 7) | class VariousNumNodesTest(Test):
    method test_five_nodes_a (line 13) | def test_five_nodes_a(self):
    method test_five_nodes_b (line 17) | def test_five_nodes_b(self):
    method test_four_nodes (line 21) | def test_four_nodes(self):
    method test_three_nodes_asleep (line 25) | def test_three_nodes_asleep(self):
    method test_three_nodes_a (line 30) | def test_three_nodes_a(self):
    method test_three_nodes_b (line 34) | def test_three_nodes_b(self):
    method test_two_nodes_a (line 38) | def test_two_nodes_a(self):
    method test_two_nodes_b (line 42) | def test_two_nodes_b(self):
    method test_one_node_a (line 46) | def test_one_node_a(self):
    method test_one_node_b (line 50) | def test_one_node_b(self):
    method test_no_cluster_annotation (line 53) | def test_no_cluster_annotation(self):
    method test_empty_cluster_annotation (line 57) | def test_empty_cluster_annotation(self):
    method test_zero_nodes (line 64) | def test_zero_nodes(self):

FILE: tests/scheduler/check_scheduler.py
  class CheckScheduler (line 26) | class CheckScheduler(object):
    method setup_method (line 27) | def setup_method(self, _):
    method check_empty (line 42) | def check_empty(self):
    method check_non_empty_cluster_too_small (line 49) | def check_non_empty_cluster_too_small(self):
    method check_simple_usage (line 63) | def check_simple_usage(self):
    method check_with_changing_cluster_availability (line 79) | def check_with_changing_cluster_availability(self):
    method check_filter_unschedulable_tests (line 116) | def check_filter_unschedulable_tests(self):
    method check_drain_remaining_tests (line 126) | def check_drain_remaining_tests(self):
    method check_drain_remaining_tests_partial (line 148) | def check_drain_remaining_tests_partial(self):

FILE: tests/services/check_background_thread_service.py
  class DummyService (line 23) | class DummyService(BackgroundThreadService):
    method __init__ (line 26) | def __init__(self, context, run_time_sec, exc=None):
    method who_am_i (line 32) | def who_am_i(self, node=None):
    method idx (line 35) | def idx(self, node):
    method allocate_nodes (line 38) | def allocate_nodes(self):
    method _worker (line 41) | def _worker(self, idx, node):
    method stop_node (line 54) | def stop_node(self, node):
  class CheckBackgroundThreadService (line 58) | class CheckBackgroundThreadService(object):
    method setup_method (line 59) | def setup_method(self, method):
    method check_service_constructor (line 62) | def check_service_constructor(self):
    method check_service_timeout (line 74) | def check_service_timeout(self):
    method check_no_timeout (line 94) | def check_no_timeout(self):
    method check_wait_node (line 102) | def check_wait_node(self):
    method check_wait_node_no_start (line 110) | def check_wait_node_no_start(self):
    method check_background_exception (line 115) | def check_background_exception(self):

FILE: tests/services/check_jvm_logging.py
  class JavaService (line 23) | class JavaService(Service):
    method __init__ (line 26) | def __init__(self, context, num_nodes):
    method idx (line 32) | def idx(self, node):
    method start_node (line 35) | def start_node(self, node, **kwargs):
    method clean_node (line 41) | def clean_node(self, node, **kwargs):
  function create_mock_node (line 46) | def create_mock_node():
  class CheckJVMLogging (line 56) | class CheckJVMLogging(object):
    method setup_method (line 57) | def setup_method(self, _):
    method check_enable_for_service (line 63) | def check_enable_for_service(self):
    method check_jvm_options_format (line 82) | def check_jvm_options_format(self):
    method check_ssh_wrapping (line 97) | def check_ssh_wrapping(self):
    method check_ssh_methods_inject_options (line 124) | def check_ssh_methods_inject_options(self):
    method check_preserves_existing_jvm_options (line 175) | def check_preserves_existing_jvm_options(self):
    method check_ssh_wrap_idempotent (line 214) | def check_ssh_wrap_idempotent(self):
    method check_clean_node_behavior (line 244) | def check_clean_node_behavior(self):
    method check_log_paths (line 273) | def check_log_paths(self):
    method check_kwargs_preserved (line 290) | def check_kwargs_preserved(self):
    method check_setup_failure_doesnt_break_wrapping (line 304) | def check_setup_failure_doesnt_break_wrapping(self):
    method check_wrapped_ssh_failure_propagates (line 332) | def check_wrapped_ssh_failure_propagates(self):
    method check_wrapped_ssh_with_allow_fail (line 369) | def check_wrapped_ssh_with_allow_fail(self):
    method check_cleanup_failure_still_restores_ssh (line 398) | def check_cleanup_failure_still_restores_ssh(self):
    method check_double_cleanup_is_safe (line 434) | def check_double_cleanup_is_safe(self):

FILE: tests/services/check_service.py
  class DummyService (line 20) | class DummyService(Service):
    method __init__ (line 23) | def __init__(self, context, num_nodes):
    method idx (line 33) | def idx(self, node):
    method start_node (line 36) | def start_node(self, node, **kwargs):
    method clean_node (line 41) | def clean_node(self, node, **kwargs):
    method stop_node (line 46) | def stop_node(self, node, **kwargs):
  class DifferentDummyService (line 52) | class DifferentDummyService(Service):
    method __init__ (line 55) | def __init__(self, context, num_nodes):
    method idx (line 58) | def idx(self, node):
  class CheckAllocateFree (line 62) | class CheckAllocateFree(object):
    method setup_method (line 63) | def setup_method(self, _):
    method check_allocate_free (line 68) | def check_allocate_free(self):
    method check_order (line 82) | def check_order(self):
  class CheckStartStop (line 97) | class CheckStartStop(object):
    method setup_method (line 98) | def setup_method(self, _):
    method check_start_stop_clean (line 103) | def check_start_stop_clean(self):
    method check_kwargs_support (line 129) | def check_kwargs_support(self):

FILE: tests/templates/service/check_render.py
  class CheckTemplateRenderingService (line 19) | class CheckTemplateRenderingService(object):
    method new_instance (line 24) | def new_instance(self):
    method check_simple (line 27) | def check_simple(self):
    method check_single_variable (line 30) | def check_single_variable(self):
    method check_overload (line 33) | def check_overload(self):
    method check_class_template (line 36) | def check_class_template(self):
    method check_file_template (line 39) | def check_file_template(self):
  class TemplateRenderingService (line 43) | class TemplateRenderingService(Service):
    method __init__ (line 51) | def __init__(self):
    method render_simple (line 54) | def render_simple(self):
    method render_single_variable (line 58) | def render_single_variable(self):
    method render_overload (line 63) | def render_overload(self):
    method render_class_template (line 67) | def render_class_template(self):
    method render_file_template (line 72) | def render_file_template(self):

FILE: tests/templates/test/check_render.py
  class CheckTemplateRenderingTest (line 25) | class CheckTemplateRenderingTest(object):
    method setup (line 30) | def setup(self):
    method check_string_template (line 36) | def check_string_template(self):
    method check_file_template (line 42) | def check_file_template(self):
  class CheckPackageSearchPath (line 48) | class CheckPackageSearchPath(object):
    method check_package_search_path (line 53) | def check_package_search_path(self):
    method check_get_ctx (line 64) | def check_get_ctx(self):
  class TemplateRenderingTest (line 85) | class TemplateRenderingTest(Test):

FILE: tests/test_utils.py
  function find_available_port (line 18) | def find_available_port(min_port=8000, max_port=9000):

FILE: tests/tests/check_session.py
  class CheckGenerateResultsDir (line 24) | class CheckGenerateResultsDir(object):
    method setup_method (line 25) | def setup_method(self, _):
    method check_generate_results_root (line 28) | def check_generate_results_root(self):
    method check_pickleable (line 34) | def check_pickleable(self):
    method teardown_method (line 49) | def teardown_method(self, _):

FILE: tests/tests/check_test.py
  class DummyTest (line 28) | class DummyTest(Test):
    method test_class_description (line 31) | def test_class_description(self):
    method test_function_description (line 34) | def test_function_description(self):
  class DummyTestNoDescription (line 39) | class DummyTestNoDescription(Test):
    method test_this (line 40) | def test_this(self):
  class CheckLifecycle (line 44) | class CheckLifecycle(object):
    method check_test_context_double_close (line 45) | def check_test_context_double_close(self):
    method check_cluster_property (line 55) | def check_cluster_property(self):
  class CheckEscapePathname (line 67) | class CheckEscapePathname(object):
    method check_illegal_path (line 68) | def check_illegal_path(self):
    method check_negative (line 72) | def check_negative(self):
    method check_many_dots (line 77) | def check_many_dots(self):
  class CheckDescription (line 82) | class CheckDescription(object):
    method check_from_function (line 85) | def check_from_function(self):
    method check_from_class (line 94) | def check_from_class(self):
    method check_no_description (line 103) | def check_no_description(self):
  class CheckCompressCmd (line 113) | class CheckCompressCmd(object):
    method setup_method (line 116) | def setup_method(self, _):
    method _make_random_file (line 119) | def _make_random_file(self, dir, num_chars=10000):
    method _make_files (line 127) | def _make_files(self, dir, num_files=10):
    method _validate_compressed (line 132) | def _validate_compressed(self, uncompressed_path):
    method check_compress_service_logs_swallow_error (line 147) | def check_compress_service_logs_swallow_error(self):
    method check_abs_path_file (line 178) | def check_abs_path_file(self):
    method check_relative_path_file (line 191) | def check_relative_path_file(self):
    method check_abs_path_dir (line 203) | def check_abs_path_dir(self):
    method check_relative_path_dir (line 217) | def check_relative_path_dir(self):
    method teardown_method (line 230) | def teardown_method(self, _):

FILE: tests/tests/check_test_context.py
  class CheckTestContext (line 25) | class CheckTestContext(object):
    method check_copy_constructor (line 26) | def check_copy_constructor(self):
  class DummyTest (line 50) | class DummyTest(Test):
    method __init__ (line 51) | def __init__(self, test_context):
    method test_me (line 57) | def test_me(self):
  class DummyService (line 61) | class DummyService(Service):
    method __init__ (line 62) | def __init__(self, context):

FILE: tests/utils/check_util.py
  class CheckUtils (line 21) | class CheckUtils(object):
    method check_wait_until (line 22) | def check_wait_until(self):
    method check_wait_until_timeout (line 28) | def check_wait_until_timeout(self):
    method check_wait_until_timeout_callable_msg (line 40) | def check_wait_until_timeout_callable_msg(self):
    method check_wait_until_with_exception (line 52) | def check_wait_until_with_exception(self):
    method check_wait_until_with_exception_on_first_step_only_but_still_fails (line 77) | def check_wait_until_with_exception_on_first_step_only_but_still_fails...
    method check_wait_until_exception_which_succeeds_eventually (line 99) | def check_wait_until_exception_which_succeeds_eventually(self):
    method check_wait_until_breaks_early_on_exception (line 116) | def check_wait_until_breaks_early_on_exception(self):
Condensed preview — 193 files, each entry showing the file path, character count, and a content snippet (705K chars of structured content in total).
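The condensed preview that follows is a JSON array of entries with `path`, `chars`, and `preview` fields. A minimal sketch of consuming that format (the inline sample below is illustrative, not the full export):

```python
import json

# Two example entries in the same shape as the preview below:
# each object carries "path", "chars", and "preview" fields.
sample = '''
[
  {"path": ".coveragerc", "chars": 82, "preview": "[run]\\nomit ="},
  {"path": "README.md", "chars": 1240, "preview": "[![Documentation Status]"}
]
'''

entries = json.loads(sample)

# Aggregate statistics across entries, e.g. total character count
# and the list of file paths.
total_chars = sum(e["chars"] for e in entries)
paths = [e["path"] for e in entries]

print(total_chars)  # 1322
print(paths)        # ['.coveragerc', 'README.md']
```

The same loop works unchanged on the full array if it is saved to a file and read with `json.load`.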
[
  {
    "path": ".coveragerc",
    "chars": 82,
    "preview": "[run]\nomit =\n    .git/*\n    .tox/*\n    docs/*\n    setup.py\n    test/*\n    tests/*\n"
  },
  {
    "path": ".dockerignore",
    "chars": 2,
    "preview": "*\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "chars": 454,
    "preview": "---\nname: Bug report\nabout: Create a report to help us improve\ntitle: ''\nlabels: ''\nassignees: ''\n\n---\n@confluentinc/dev"
  },
  {
    "path": ".gitignore",
    "chars": 1328,
    "preview": "# Duckape\nsystests/.ducktape/\nsystests/results/\nresults\n.ducktape\n.vagrant\n\n# Byte-compiled / optimized / DLL files\n__py"
  },
  {
    "path": ".readthedocs.yaml",
    "chars": 349,
    "preview": "# .readthedocs.yaml\n# Read the Docs configuration file\n# See https://docs.readthedocs.io/en/stable/config-file/v2.html f"
  },
  {
    "path": ".semaphore/semaphore.yml",
    "chars": 1532,
    "preview": "version: v1.0\nname: pr-test-job\nagent:\n  machine:\n    type: s1-prod-ubuntu24-04-amd64-1\n\nexecution_time_limit:\n  hours: "
  },
  {
    "path": "CODEOWNERS",
    "chars": 49,
    "preview": "* @confluentinc/cp-test-frameworks-and-readiness\n"
  },
  {
    "path": "Dockerfile",
    "chars": 843,
    "preview": "# An image of ducktape that can be used to setup a Docker cluster where ducktape is run inside the container.\n\nFROM ubun"
  },
  {
    "path": "Jenkinsfile.disabled",
    "chars": 91,
    "preview": "python {\n    publish = false  // Release is done manually to PyPI and not supported yet.\n}\n"
  },
  {
    "path": "README.md",
    "chars": 1240,
    "preview": "[![Documentation Status](https://readthedocs.org/projects/ducktape/badge/?version=latest)](https://ducktape.readthedocs."
  },
  {
    "path": "Vagrantfile",
    "chars": 1833,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "docs/Makefile",
    "chars": 605,
    "preview": "# Minimal makefile for Sphinx documentation\n#\n\n# You can set these variables from the command line.\nSPHINXOPTS    =\nSPHI"
  },
  {
    "path": "docs/README.md",
    "chars": 863,
    "preview": "Ducktape documentation quick start guide\n========================================\n\n\nBuild the documentation\n------------"
  },
  {
    "path": "docs/_static/theme_overrides.css",
    "chars": 364,
    "preview": "/* override table width restrictions */\n@media screen and (min-width: 767px) {\n\n   .wy-table-responsive table td {\n     "
  },
  {
    "path": "docs/api/clusters.rst",
    "chars": 289,
    "preview": "Clusters\n========\n\n.. autoclass:: ducktape.cluster.cluster.Cluster\n    :members:\n\n.. autoclass:: ducktape.cluster.vagran"
  },
  {
    "path": "docs/api/remoteaccount.rst",
    "chars": 354,
    "preview": "Remote Account\n==============\n\n.. autoclass:: ducktape.cluster.remoteaccount.RemoteAccount\n    :members:\n\n.. autoclass::"
  },
  {
    "path": "docs/api/services.rst",
    "chars": 174,
    "preview": "Services\n========\n\n.. autoclass:: ducktape.services.service.Service\n    :members:\n\n.. autoclass:: ducktape.services.back"
  },
  {
    "path": "docs/api/templates.rst",
    "chars": 82,
    "preview": "Template\n========\n\n.. autoclass:: ducktape.template.TemplateRenderer\n    :members:"
  },
  {
    "path": "docs/api/test.rst",
    "chars": 127,
    "preview": "Test\n====\n\n.. autoclass:: ducktape.tests.test.Test\n    :members:\n\n.. autoclass:: ducktape.tests.test.TestContext\n    :me"
  },
  {
    "path": "docs/api.rst",
    "chars": 142,
    "preview": ".. _topics-api:\n\n=======\nAPI Doc\n=======\n\n.. toctree::\n    api/test\n    api/services\n    api/remoteaccount\n    api/clust"
  },
  {
    "path": "docs/changelog.rst",
    "chars": 5596,
    "preview": ".. _topics-changelog:\n\n====\nChangelog\n====\n\n0.14.0\n======\nTuesday, March 10th, 2026\n-------------------------\n- Ensure l"
  },
  {
    "path": "docs/conf.py",
    "chars": 5308,
    "preview": "# -*- coding: utf-8 -*-\n#\n# ducktape documentation build configuration file, created by\n# sphinx-quickstart on Mon Mar 1"
  },
  {
    "path": "docs/debug_tests.rst",
    "chars": 7550,
    "preview": ".. _topics-debug_tests:\n\n===========\nDebug Tests\n===========\n\nThe test results go in ``results/<date>—<test_number>``. F"
  },
  {
    "path": "docs/index.rst",
    "chars": 1167,
    "preview": ".. _topics-index:\n\n============================================================\nDistributed System Integration & Perform"
  },
  {
    "path": "docs/install.rst",
    "chars": 1031,
    "preview": ".. _topics-install:\n\n=======\nInstall\n=======\n\n1. Ducktape requires python 3.7 or later.\n\n2. Install `cryptography`_ (use"
  },
  {
    "path": "docs/make.bat",
    "chars": 812,
    "preview": "@ECHO OFF\r\n\r\npushd %~dp0\r\n\r\nREM Command file for Sphinx documentation\r\n\r\nif \"%SPHINXBUILD%\" == \"\" (\r\n\tset SPHINXBUILD=sp"
  },
  {
    "path": "docs/misc.rst",
    "chars": 2443,
    "preview": ".. _topics-misc:\n\n====\nMisc\n====\n\nDeveloper Install\n=================\n\nIf you are are a ducktape developer, consider usi"
  },
  {
    "path": "docs/new_services.rst",
    "chars": 10172,
    "preview": ".. _topics-new_services:\n\n===================\nCreate New Services\n===================\n\nWriting ducktape services\n======="
  },
  {
    "path": "docs/new_tests.rst",
    "chars": 7685,
    "preview": ".. _topics-new_tests:\n\n================\nCreate New Tests\n================\n\nWriting ducktape Tests\n======================"
  },
  {
    "path": "docs/requirements.txt",
    "chars": 144,
    "preview": "Sphinx~=8.2.3\nsphinx-argparse~=0.5.2\nsphinx-rtd-theme~=3.0.2\nboto3==1.33.13\npycryptodome==3.23.0\npywinrm==0.4.3\njinja2~="
  },
  {
    "path": "docs/run_tests.rst",
    "chars": 5965,
    "preview": ".. _topics-run_tests:\n\n=========\nRun Tests\n=========\n\nRunning Tests\n=============\n\nducktape discovers and runs tests in "
  },
  {
    "path": "docs/test_clusters.rst",
    "chars": 2481,
    "preview": ".. _topics-test_clusters:\n\n===================\nTest Clusters\n===================\n\nDucktape runs on a test cluster with s"
  },
  {
    "path": "ducktape/__init__.py",
    "chars": 23,
    "preview": "__version__ = \"0.14.0\"\n"
  },
  {
    "path": "ducktape/__main__.py",
    "chars": 658,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/cluster/__init__.py",
    "chars": 130,
    "preview": "from .json import JsonCluster  # NOQA\nfrom .localhost import LocalhostCluster  # NOQA\nfrom .vagrant import VagrantCluste"
  },
  {
    "path": "ducktape/cluster/cluster.py",
    "chars": 3522,
    "preview": "# Copyright 2014 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/cluster/cluster_node.py",
    "chars": 470,
    "preview": "from typing import Optional\n\nfrom ducktape.cluster.remoteaccount import RemoteAccount\n\n\nclass ClusterNode(object):\n    d"
  },
  {
    "path": "ducktape/cluster/cluster_spec.py",
    "chars": 3244,
    "preview": "# Copyright 2017 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/cluster/consts.py",
    "chars": 651,
    "preview": "# Copyright 2017 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/cluster/finite_subcluster.py",
    "chars": 2125,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/cluster/json.py",
    "chars": 6221,
    "preview": "# Copyright 2014 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/cluster/linux_remoteaccount.py",
    "chars": 3050,
    "preview": "# Copyright 2014 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/cluster/localhost.py",
    "chars": 2453,
    "preview": "# Copyright 2014 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/cluster/node_container.py",
    "chars": 15491,
    "preview": "# Copyright 2017 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/cluster/node_spec.py",
    "chars": 2736,
    "preview": "# Copyright 2017 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/cluster/remoteaccount.py",
    "chars": 29634,
    "preview": "# Copyright 2014 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/cluster/vagrant.py",
    "chars": 4698,
    "preview": "# Copyright 2014 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/cluster/windows_remoteaccount.py",
    "chars": 4331,
    "preview": "# Copyright 2014 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/command_line/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "ducktape/command_line/defaults.py",
    "chars": 2093,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/command_line/main.py",
    "chars": 9192,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/command_line/parse_args.py",
    "chars": 11062,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/errors.py",
    "chars": 666,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/json_serializable.py",
    "chars": 1043,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/jvm_logging.py",
    "chars": 7680,
    "preview": "# Copyright 2024 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/mark/__init__.py",
    "chars": 832,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/mark/_mark.py",
    "chars": 13295,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/mark/consts.py",
    "chars": 113,
    "preview": "CLUSTER_SPEC_KEYWORD = \"cluster_spec\"\nCLUSTER_SIZE_KEYWORD = \"num_nodes\"\nCLUSTER_NODE_TYPE_KEYWORD = \"node_type\"\n"
  },
  {
    "path": "ducktape/mark/mark_expander.py",
    "chars": 2184,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/mark/resource.py",
    "chars": 3074,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/services/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "ducktape/services/background_thread.py",
    "chars": 3795,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/services/service.py",
    "chars": 16258,
    "preview": "# Copyright 2014 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/services/service_registry.py",
    "chars": 3727,
    "preview": "# Copyright 2014 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/template.py",
    "chars": 3997,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/templates/report/report.css",
    "chars": 1269,
    "preview": "body { \n    font-family: verdana, arial, helvetica, sans-serif; \n    font-size: 80%; \n}\n\ntable { \n    font-size: 100%;\n}"
  },
  {
    "path": "ducktape/templates/report/report.html",
    "chars": 7552,
    "preview": "<!DOCTYPE html>\n<html>\n  <head>\n    <script src=\"https://fb.me/react-0.13.1.min.js\"></script>\n    <script src=\"https://f"
  },
  {
    "path": "ducktape/tests/__init__.py",
    "chars": 96,
    "preview": "from .test import Test\nfrom .test_context import TestContext\n\n__all__ = [\"Test\", \"TestContext\"]\n"
  },
  {
    "path": "ducktape/tests/event.py",
    "chars": 4655,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/tests/loader.py",
    "chars": 29796,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/tests/loggermaker.py",
    "chars": 1895,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/tests/reporter.py",
    "chars": 15501,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/tests/result.py",
    "chars": 8943,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/tests/runner.py",
    "chars": 24408,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/tests/runner_client.py",
    "chars": 18864,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/tests/scheduler.py",
    "chars": 3520,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/tests/serde.py",
    "chars": 975,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/tests/session.py",
    "chars": 4883,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/tests/status.py",
    "chars": 996,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/tests/test.py",
    "chars": 7788,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/tests/test_context.py",
    "chars": 11431,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/utils/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "ducktape/utils/http_utils.py",
    "chars": 1432,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/utils/local_filesystem_utils.py",
    "chars": 879,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/utils/persistence.py",
    "chars": 2135,
    "preview": "# Copyright (c) 2009 Jason M Baker\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of"
  },
  {
    "path": "ducktape/utils/terminal_size.py",
    "chars": 3343,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "ducktape/utils/util.py",
    "chars": 3749,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "requirements-test.txt",
    "chars": 168,
    "preview": "pytest~=6.2.0\n# 4.0 drops py27 support\nmock==4.0.2\nmemory_profiler==0.57\nstatistics==1.0.3.5\nrequests-testadapter==0.3.0"
  },
  {
    "path": "requirements.txt",
    "chars": 306,
    "preview": "jinja2~=3.1.6\nboto3==1.33.13\n# jinja2 pulls in MarkupSafe with a > constraint, but we need to constrain it for compatibi"
  },
  {
    "path": "ruff.toml",
    "chars": 178,
    "preview": "\nextend-exclude = [\n    \"docs\",\n    \".virtualenvs\"\n]\nline-length = 120\n\n[lint]\nselect = [\n    \"E4\",\n    \"E7\",\n    \"E9\",\n"
  },
  {
    "path": "service.yml",
    "chars": 360,
    "preview": "name: ducktape\nlang: python\nlang_version: '3.13'\ngit:\n  enable: true\nsonarqube:\n  enable: true\ngithub:\n  enable: true\n  "
  },
  {
    "path": "setup.cfg",
    "chars": 425,
    "preview": "# pytest configuration (can also be defined in in tox.ini or pytest.ini file)\n#\n# To ease possible confusion, prefix duc"
  },
  {
    "path": "setup.py",
    "chars": 1570,
    "preview": "from setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\nimport re\nimport sys"
  },
  {
    "path": "systests/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "systests/cluster/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "systests/cluster/test_debug.py",
    "chars": 3374,
    "preview": "\"\"\"\nThis module contains tests that are useful for developer debugging\nand can contain sleep statements or test that int"
  },
  {
    "path": "systests/cluster/test_no_cluster.py",
    "chars": 305,
    "preview": "from ducktape.mark.resource import cluster\nfrom ducktape.tests.test import Test\n\n\nclass NoClusterTest(Test):\n    \"\"\"This"
  },
  {
    "path": "systests/cluster/test_remote_account.py",
    "chars": 25036,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "systests/cluster/test_runner_operations.py",
    "chars": 2096,
    "preview": "# Copyright 2022 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/__init__.py",
    "chars": 124,
    "preview": "from ducktape.tests.test import Test\nfrom ducktape.tests.test_context import TestContext\n\n__all__ = [\"Test\", \"TestContex"
  },
  {
    "path": "tests/cluster/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/cluster/check_cluster.py",
    "chars": 2304,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/cluster/check_cluster_spec.py",
    "chars": 3058,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/cluster/check_finite_subcluster.py",
    "chars": 2821,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/cluster/check_json.py",
    "chars": 4999,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/cluster/check_localhost.py",
    "chars": 1846,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/cluster/check_node_container.py",
    "chars": 26986,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/cluster/check_remoteaccount.py",
    "chars": 4695,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/cluster/check_vagrant.py",
    "chars": 7900,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/command_line/__init__.py",
    "chars": 574,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/command_line/check_main.py",
    "chars": 5856,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/command_line/check_parse_args.py",
    "chars": 7099,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/ducktape_mock.py",
    "chars": 3759,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/loader/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/loader/check_loader.py",
    "chars": 31520,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/loader/resources/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/loader/resources/loader_test_directory/README",
    "chars": 69,
    "preview": "A dummy test directory structure for checking test discovery behavior"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/loader/resources/loader_test_directory/invalid_test_suites/empty_file.yml",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/loader/resources/loader_test_directory/invalid_test_suites/malformed_test_suite.yml",
    "chars": 109,
    "preview": "malformed_test_suite:\n  included:\n    should_have_been_a_list:\n      - but\n      - is\n      - a\n      - dict\n"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/invalid_test_suites/not_yaml.yml",
    "chars": 35,
    "preview": "this is not a yaml file\nNo, really\n"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/invalid_test_suites/test_suite_refers_to_non_existent_file.yml",
    "chars": 65,
    "preview": "non_existent_file_suit:\n  included:\n    - file_does_not_exist.py\n"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/invalid_test_suites/test_suite_with_malformed_params.yml",
    "chars": 148,
    "preview": "non_existent_file_suit:\n  included:\n    - 'test_decorated.py::TestMatrix.test_thing@[{x: 1,y: \"test \"}]'  # this json wo"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/invalid_test_suites/test_suites_with_no_tests.yml",
    "chars": 71,
    "preview": "empty_test_suite_a:\nempty_test_suite_b:\n  excluded:\n    - ../sub_dir_a\n"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/name_does_not_match_pattern.py",
    "chars": 190,
    "preview": "from ducktape.tests.test import Test\n\n\nclass TestNotLoaded(Test):\n    \"\"\"Loader should not discover this - module name d"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/sub_dir_a/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/loader/resources/loader_test_directory/sub_dir_a/test_c.py",
    "chars": 839,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/sub_dir_a/test_d.py",
    "chars": 916,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/sub_dir_no_tests/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/loader/resources/loader_test_directory/sub_dir_no_tests/just_some_file.py",
    "chars": 38,
    "preview": "class JustSomeClass(object):\n    pass\n"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_a.py",
    "chars": 841,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_b.py",
    "chars": 1066,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_decorated.py",
    "chars": 1658,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suite_cyclic_a.yml",
    "chars": 151,
    "preview": "import:\n  - test_suite_cyclic_b.yml # 1 test\n\ntest_suite_cyclic_a:\n  - ./sub_dir_a/test_c.py # 1 test\n  - ./test_b.py # "
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suite_cyclic_b.yml",
    "chars": 131,
    "preview": "\nimport:\n  - test_suite_cyclic_a.yml # 4 test\n\ntest_suite_cyclic_b:\n  - test_a.py # 1 test\n# suite total: 1 test\n\n# tota"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suite_decorated.yml",
    "chars": 266,
    "preview": "test_suite_decorated:\n  - 'test_decorated.py::TestMatrix.test_thing@[{\"x\": 1,\"y\": \"test \"}, {\"x\": 2, \"y\": \"test \"}]'  # "
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suite_import_malformed.yml",
    "chars": 70,
    "preview": "import:\n  - test_suite_malformed.yml\n\nsimple_suite:\n  - test_a.py # 1\n"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suite_import_py.yml",
    "chars": 78,
    "preview": "import:\n  - test_a.py\n# improper import\n\nsimple_suite:\n  - test_a.py\n# 1 test\n"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suite_malformed.yml",
    "chars": 118,
    "preview": "simple_malformed_suite:\n  cheese:\n    - 'fatty milk'\n    - 'salt'\n  bread:\n    - 'flower'\n    - 'water'\n    - 'yeast'\n"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suite_multiple.yml",
    "chars": 429,
    "preview": "test_suite_b:\n  - sub_dir_a/test_c.py  # 1 test\n  - sub_dir_a/test_d.py::TestD.test_d  # 1 test\n# suite total: 2 tests\n\n"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suite_single.yml",
    "chars": 321,
    "preview": "test_suite_a:\n  included:\n    - test_a.py  # 1 test\n    - test_b.py::TestBB.test_bb_one  # 1 test\n    - sub_dir_a\n    # "
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suite_with_self_import.yml",
    "chars": 265,
    "preview": "test_suite_d:\n  included:\n    - sub_dir_a  # +4 tests across all files\n    - test_b.py  # +3 tests\n  excluded:\n    - sub"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suite_with_single_import.yml",
    "chars": 256,
    "preview": "test_suite_d:\n  included:\n    - sub_dir_a  # +4 tests across all files\n    - test_b.py  # +3 tests\n  excluded:\n    - sub"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suites/refers_to_parent_dir.yml",
    "chars": 72,
    "preview": "test_suite_a:\n  included:\n    - ../test_a.py  # 1 test\n# total: 1 tests\n"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suites/sub_dir_a_test_c.yml",
    "chars": 39,
    "preview": "test_suite:\n  - ../sub_dir_a/test_c.py\n"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suites/sub_dir_a_test_c_via_class.yml",
    "chars": 46,
    "preview": "test_suite:\n  - ../sub_dir_a/test_c.py::TestC\n"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suites/sub_dir_a_with_exclude.yml",
    "chars": 89,
    "preview": "test_suite:\n  included:\n    - ../sub_dir_a/*.py\n  excluded:\n    - ../sub_dir_a/test_d.py\n"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suites/sub_dir_test_import.yml",
    "chars": 145,
    "preview": "import:\n  - ../test_suite_cyclic_a.yml # 5 tests\n  - .././test_suites/../sub_dir_no_tests/.././test_suite_cyclic_b.yml #"
  },
  {
    "path": "tests/loader/resources/loader_test_directory/test_suites/test_suite_glob.yml",
    "chars": 332,
    "preview": "test_suite_glob:\n  included:\n    - ../sub_dir_a/*\n    - ../test_?.py\n  excluded:\n    - ../sub_dir_a/*_d.py\n\n# globs expa"
  },
  {
    "path": "tests/loader/resources/report.json",
    "chars": 412,
    "preview": "{\n  \"results\": [\n    {\n      \"test_id\": \"tests.loader.resources.loader_test_directory.test_b.TestB.test_b\",\n      \"run_t"
  },
  {
    "path": "tests/logger/__init__.py",
    "chars": 574,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/logger/check_logger.py",
    "chars": 2044,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/mark/__init__.py",
    "chars": 574,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/mark/check_cluster_use_metadata.py",
    "chars": 9841,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/mark/check_env.py",
    "chars": 1706,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/mark/check_ignore.py",
    "chars": 5026,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/mark/check_parametrize.py",
    "chars": 9767,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/mark/resources/__init__.py",
    "chars": 574,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/reporter/check_symbol_reporter.py",
    "chars": 1264,
    "preview": "from pathlib import Path\nfrom unittest.mock import Mock\n\nfrom ducktape.tests.reporter import FailedTestSymbolReporter\n\n\n"
  },
  {
    "path": "tests/runner/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/runner/check_runner.py",
    "chars": 33898,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/runner/check_runner_memory.py",
    "chars": 5820,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/runner/check_sender_receiver.py",
    "chars": 4796,
    "preview": "# Copyright 2021 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/runner/fake_remote_account.py",
    "chars": 722,
    "preview": "from ducktape.cluster.consts import LINUX, WINDOWS\nfrom ducktape.cluster.remoteaccount import RemoteAccount\n\n\nclass Fake"
  },
  {
    "path": "tests/runner/resources/__init__.py",
    "chars": 574,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/runner/resources/test_bad_actor.py",
    "chars": 478,
    "preview": "from ducktape.mark.resource import cluster\nfrom ducktape.services.service import Service\nfrom ducktape.tests.test import"
  },
  {
    "path": "tests/runner/resources/test_failing_tests.py",
    "chars": 1073,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/runner/resources/test_fails_to_init.py",
    "chars": 1040,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/runner/resources/test_fails_to_init_in_setup.py",
    "chars": 1079,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/runner/resources/test_memory_leak.py",
    "chars": 1751,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/runner/resources/test_thingy.py",
    "chars": 1537,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/runner/resources/test_various_num_nodes.py",
    "chars": 1365,
    "preview": "import time\n\nfrom ducktape.mark.resource import cluster\nfrom ducktape.tests.test import Test\n\n\nclass VariousNumNodesTest"
  },
  {
    "path": "tests/scheduler/__init__.py",
    "chars": 574,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/scheduler/check_scheduler.py",
    "chars": 6127,
    "preview": "# Copyright 2016 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/services/__init__.py",
    "chars": 574,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/services/check_background_thread_service.py",
    "chars": 4559,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/services/check_jvm_logging.py",
    "chars": 17426,
    "preview": "# Copyright 2024 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/services/check_service.py",
    "chars": 4761,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/templates/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/templates/service/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/templates/service/check_render.py",
    "chars": 2564,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/templates/service/templates/sample",
    "chars": 18,
    "preview": "Sample {{a_field}}"
  },
  {
    "path": "tests/templates/test/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/templates/test/check_render.py",
    "chars": 2659,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/templates/test/templates/sample",
    "chars": 15,
    "preview": "Sample {{name}}"
  },
  {
    "path": "tests/test_utils.py",
    "chars": 1325,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/tests/__init__.py",
    "chars": 574,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/tests/check_session.py",
    "chars": 1820,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/tests/check_test.py",
    "chars": 8274,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/tests/check_test_context.py",
    "chars": 2294,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/utils/__init__.py",
    "chars": 574,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tests/utils/check_util.py",
    "chars": 4446,
    "preview": "# Copyright 2015 Confluent Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use th"
  },
  {
    "path": "tox.ini",
    "chars": 1861,
    "preview": "[tox]\nenvlist = py38, py39, py310, py311, py312, py313, cover, style, docs\n\n[testenv]\n# Consolidate all deps here instea"
  }
]
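The listing above is a JSON array of per-file records, each carrying the file's repo-relative path, its size in characters, and a truncated preview of its contents. A minimal sketch of consuming such a manifest (with a hypothetical two-entry excerpt inlined so the snippet is self-contained) might look like:

```python
import json

# Hypothetical two-entry excerpt mirroring the manifest format above:
# each record has "path", "chars", and a truncated "preview".
manifest = json.loads("""
[
  {"path": "requirements.txt", "chars": 306,
   "preview": "jinja2~=3.1.6\\nboto3==1.33.13"},
  {"path": "tox.ini", "chars": 1861,
   "preview": "[tox]\\nenvlist = py38, py39"}
]
""")

# Total extracted size is just the sum of the per-file character counts.
total_chars = sum(entry["chars"] for entry in manifest)

# Records can be filtered by path, e.g. to pick out config files.
ini_files = [e["path"] for e in manifest if e["path"].endswith(".ini")]

print(len(manifest), total_chars, ini_files)  # 2 2167 ['tox.ini']
```

Note the manifest stores sizes in characters, not bytes, so totals may differ from on-disk file sizes for non-ASCII content.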

About this extraction

This page contains the full source code of the confluentinc/ducktape GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 193 files (650.7 KB, approximately 151.6k tokens) and includes a symbol index of 1130 extracted functions, classes, methods, constants, and types.

Extracted by GitExtract. Built by Nikandr Surkov.
