` in the commit message, at the start.
3. Rebase your branch on `dev` and push your changes.
4. Once you are confident in your code changes, create a pull request in your fork to the `dev` branch in the LambdaTest/test-at-scale base repository.
5. Link the issue of the base repository in your Pull request description. [Guide](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue)
6. Fill out the [Pull Request Template](./.github/pull_request_template.md) completely within the body of the PR. If you feel some areas are not relevant add `N/A` but don’t delete those sections.
#### **Commit messages**
- The first line should be a summary of the changes, not exceeding 50
characters, followed by an optional body that has more details about the
changes. Refer to [this link](https://github.com/erlang/otp/wiki/writing-good-commit-messages)
for more information on writing good commit messages.
- Don't add a period/dot (.) at the end of the summary line.
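As an illustration (the summary and body below are hypothetical), a message that follows these rules has a short imperative summary, a blank line, and then the body; the 50-character limit can even be checked mechanically:

```shell
# Hypothetical commit message: short imperative summary, blank line, detailed body
summary="Add retry for flaky network tests"
body="Retry failed HTTP calls up to 3 times before marking the test as failed."

# Assemble the full message; note the summary has no trailing period
printf '%s\n\n%s\n' "$summary" "$body"

# The summary line stays within the 50-character limit
test "${#summary}" -le 50
```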
================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2021 LambdaTest Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: Makefile
================================================
NUCLEUS_DOCKER_FILE ?= ./build/nucleus/Dockerfile
NUCLEUS_IMAGE_NAME ?= lambdatest/nucleus:latest
SYNAPSE_DOCKER_FILE ?= ./build/synapse/Dockerfile
SYNAPSE_IMAGE_NAME ?= lambdatest/synapse:latest
REV_LIST ?= $(shell git rev-list --tags --max-count=1)
VERSION ?= $(shell git describe --tags ${REV_LIST})
usage: ## Show this help
	@fgrep -h "##" $(MAKEFILE_LIST) | fgrep -v fgrep | sed -e 's/:.*##\s*/##/g' | awk -F'##' '{ printf "%-25s -> %s\n", $$1, $$2 }'

lint: ## Runs linting
	golangci-lint run

build-nucleus-image: ## Builds nucleus docker image
	docker build --build-arg VERSION=${VERSION}-dev -t ${NUCLEUS_IMAGE_NAME} --file $(NUCLEUS_DOCKER_FILE) .

build-nucleus-bin: ## Builds nucleus binary
	bash build/nucleus/build.sh

build-synapse-image: ## Builds synapse docker image
	docker build --build-arg VERSION=${VERSION}-dev -t ${SYNAPSE_IMAGE_NAME} --file $(SYNAPSE_DOCKER_FILE) .

build-synapse-bin: ## Builds synapse binary
	bash build/synapse/build.sh

install-mockery-mac:
	brew install mockery

install-mockery-linux:
	apt update && apt install -y mockery

gen-mock-files:
	mockery --dir=./pkg --all
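To see how the `VERSION` variable above resolves, the same two git commands can be run by hand; the throwaway repository below is only for illustration:

```shell
# Sketch: reproduce the Makefile's VERSION derivation in a throwaway repo
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"
git tag v1.2.3

# REV_LIST: the commit reachable from the most recently created tag
REV_LIST=$(git rev-list --tags --max-count=1)
# VERSION: the tag name describing that commit
VERSION=$(git describe --tags "$REV_LIST")
echo "$VERSION"   # prints: v1.2.3
```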
================================================
FILE: README.md
================================================
# Test At Scale

Test Smarter, Release Faster with test-at-scale.
## Test at scale - TAS
TAS helps you accelerate your testing, shorten job times, get faster feedback on code changes, manage flaky tests, and keep master green at all times.
To learn more about TAS features and capabilities, see our [product page](https://www.lambdatest.com/test-at-scale).
## Features
- Smart test selection to run only the subset of tests which get impacted by a commit ⚡
- Smart auto grouping of tests to evenly distribute test execution across multiple containers, based on previous execution times
- Deep insights about test runs and execution metrics
- Support status checks for pull requests
- Advanced analytics to surface test performance and quality data
- YAML driven declarative workflow management
- Natively integrates with Github and Gitlab
- Flexible workflow to run pre-merge and post-merge tests
- Allows blocking and unblocking tests directly from the UI or YAML directive. No more WIP commits!
- Support for customizing the testing environment using raw commands in pre- and post-steps
- Supports JavaScript monorepos
- Smart dependency caching to speed up subsequent test runs
- Easily customizable to support all major languages and frameworks
- Available as a [hosted solution](https://lambdatest.com/test-at-scale) as well as a self-hosted open-source runner
- [Upcoming] Smart flaky test management 🪄
## Table of contents
- 🚀 [Getting Started](#getting-started)
- 💡 [Tutorials](#tutorials)
- 💖 [Contribute](#contribute)
- 📖 [Docs](https://www.lambdatest.com/support/docs/tas-overview)
## Getting Started
### Step 1 - Setting up a New Account
In order to create an account, visit the [TAS Login Page](https://tas.lambdatest.com/login/). (Or the [TAS Home Page](https://tas.lambdatest.com/))
- Log in using a suitable git provider and select the organization you want to continue with.
- Tell us your specialization and team size.

- Select **TAS Self Hosted** and click on Proceed.
- You will find your **LambdaTest Secret Key** on this page which will be required in the next steps.

### Step 2 - Creating a configuration file for self hosted setup
Before installation, we need to create a file that will be used to configure test-at-scale.
- Open any `Terminal` of your choice.
- Move to your desired directory, or create a new directory and move into it.
- Download our sample configuration file using the following commands.
```bash
mkdir ~/test-at-scale
cd ~/test-at-scale
curl https://raw.githubusercontent.com/LambdaTest/test-at-scale/main/.sample.synapse.json -o .synapse.json
```
- Open the downloaded `.synapse.json` configuration file in any editor of your choice such as `vi`, `nano`, `code`, etc.
> **NOTE**: The `.synapse.json` file is hidden by default. You can list it using the `ls -la` command.
- You will need to add the following in this file:
  1. **LambdaTest Secret Key**, which you got at the end of **Step 1**.
  2. **Git Token**, which will be required to clone the repositories after Step 3. See generating a [GitHub](https://www.lambdatest.com/support/docs/tas-how-to-guides-gh-token) or [GitLab](https://www.lambdatest.com/support/docs/tas-how-to-guides-gl-token) personal access token.
- This file is also used to store certain other parameters, such as **Repository Secrets** (optional) and **Container Registry** (optional), that might be required when configuring test-at-scale in your local/self-hosted environment. You can learn more about the configuration options [here](https://www.lambdatest.com/support/docs/tas-self-hosted-configuration#parameters).
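For orientation only, the file pairs your secret key with a git token along the lines of the sketch below. The field names here are assumptions, not authoritative; always start from the downloaded `.sample.synapse.json`, which defines the real schema.

```json
{
  "LambdatestSecretKey": "<your LambdaTest secret key from Step 1>",
  "Git": {
    "Token": "<your GitHub/GitLab personal access token>"
  }
}
```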
### Step 3 - Installation
#### Installation on Docker
##### Prerequisites
- [Docker](https://docs.docker.com/get-docker/) and [Docker-Compose](https://docs.docker.com/compose/install/) (Recommended)
##### Docker Compose
- Run the docker application.
```bash
docker info --format "CPU: {{.NCPU}}, RAM: {{.MemTotal}}"
```
- Execute the above command to ensure that the resources usable by Docker are at least `CPU: 2, RAM: 4294967296`.
> **NOTE:** In order to run test-at-scale you require a minimum configuration of 2 CPU cores and 4 GiBs of RAM.
- The `.synapse.json` configuration file made in [Step 2](#step-2---creating-a-configuration-file-for-self-hosted-setup) will be required before executing the next command.
- Download and run the docker compose file using the following command.
```bash
cd ~/test-at-scale
curl -L https://raw.githubusercontent.com/LambdaTest/test-at-scale/main/docker-compose.yml -o docker-compose.yml
docker-compose up -d
```
> **NOTE:** This docker-compose file will pull the latest version of test-at-scale and install it in your self-hosted environment.
##### Installation without Docker Compose
To get up and running quickly, you can use the following instructions to set up Test at Scale in a self-hosted environment without docker-compose.
- The `.synapse.json` configuration file made in [Step 2](#step-2---creating-a-configuration-file-for-self-hosted-setup) will be required before executing the next command.
- Execute the following command to run the Test at Scale docker container.
```bash
cd ~/test-at-scale
docker network create --internal test-at-scale
docker run --name synapse --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/synapse:/tmp/synapse \
-v ${PWD}/.synapse.json:/home/synapse/.synapse.json \
-v /etc/machine-id:/etc/machine-id \
--network=test-at-scale \
lambdatest/synapse:latest
```
> **WARNING:** We strongly recommend using docker-compose when running Test at Scale in a self-hosted environment.
#### Installation on Local Machine & Supported Cloud Platforms
- Local Machine - Set up using [Docker](#installation-on-docker).
- Setup on [Azure](https://www.lambdatest.com/support/docs/tas-self-hosted-installation#azure)
- Setup on [AWS](https://www.lambdatest.com/support/docs/tas-self-hosted-installation#aws)
- Setup on [GCP](https://www.lambdatest.com/support/docs/tas-self-hosted-installation#gcp)
- Once the installation is complete, go back to the TAS portal.
- Click the 'Test Connection' button to ensure the `test-at-scale` self-hosted environment is connected and ready.
- Hit `Proceed` to move forward to [Step 4](#step-4---importing-your-repo)
### Step 4 - Importing your repo
> **NOTE:** Currently we support Mocha, Jest and Jasmine for testing Javascript codebases.
- Click the Import button for the `JS` repository you want to integrate with TAS.
- Once imported successfully, click on `Go to Project` to proceed further.
- You will be asked to set up a `post-merge` here. We recommend proceeding with the default settings. (You can change these later.)

### Step 5 - Configuring TAS yml
A `.tas.yml` file is a basic YAML configuration file that contains the steps required for installing the necessary dependencies and executing the tests present in your repository.
- In order to configure your imported repository, follow the steps given on the `.tas.yml` configuration page.
- You can learn more about `.tas.yml` configuration parameters [here](https://www.lambdatest.com/support/docs/tas-configuring-tas-yml).

- Placing the `.tas.yml` configuration file:
  - Create a new file named **.tas.yml** at the root level of your repository.
  - **Copy** the configuration from the TAS yml configuration page and **paste** it into the **.tas.yml** file you just created.
  - **Commit and Push** the changes to your repo.
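For orientation, a minimal `.tas.yml` for a Jest project might look roughly like the sketch below. The keys and values here are assumptions for illustration only; copy the generated configuration from the TAS yml configuration page rather than this sketch.

```yaml
framework: jest        # assumption: one of jest, mocha, jasmine
preRun:
  command:
    - npm ci           # install dependencies before test discovery/execution
postMerge:
  pattern:
    - "tests/**/*.js"  # test files to run on post-merge jobs
preMerge:
  pattern:
    - "tests/**/*.js"  # test files to run on pre-merge jobs
```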

## **Language & Framework Support**
Currently we support Mocha, Jest and Jasmine for testing Javascript codebases.
## **Tutorials**
- [Setting up your first repo on TAS - Cloud](https://www.lambdatest.com/support/docs/tas-getting-started-integrating-your-first-repo/)
- [Setting up your first repo on TAS - Self Hosted](https://www.lambdatest.com/support/docs/tas-self-hosted-installation)
- Sample repos: [Mocha](https://github.com/LambdaTest/mocha-demos), [Jest](https://github.com/LambdaTest/jest-demos), [Jasmine](https://github.com/LambdaTest/jasmine-node-js-example).
- [How to configure a .tas.yml file](https://www.lambdatest.com/support/docs/tas-configuring-tas-yml)
## **Contribute**
We love our contributors! If you'd like to contribute anything from a bug fix to a feature update, start here:
- 📕 Read our [Code of Conduct](https://github.com/LambdaTest/test-at-scale/blob/main/CODE_OF_CONDUCT.md).
- 📖 Know more about [test-at-scale](https://github.com/LambdaTest/test-at-scale/blob/main/CONTRIBUTING.md#repo-overview) and contributing from our [Contribution Guide](https://github.com/LambdaTest/test-at-scale/blob/main/CONTRIBUTING.md).
- 👾 Explore some [good first issues](https://github.com/LambdaTest/test-at-scale/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
### **Join our community**
Engage with Developers, SDETs, and Testers around the world.
- Get the latest product updates.
- Discuss testing philosophies and more.
Join the Test-at-scale Community on [Discord](https://discord.gg/Wyf8srhf6K). Click [here](https://discord.com/channels/940635450509504523/941297958954102846) if you are already an existing member.
### **Support & Troubleshooting**
The documentation and community will help you troubleshoot most issues. If you have encountered a bug, you can contact us using one of the following channels:
- Help yourself with our [Documentation](https://www.lambdatest.com/support/docs/tas-overview)📚, and [FAQs](https://www.lambdatest.com/support/docs/tas-faq-and-troubleshooting/).
- In case of issues & bugs, go to [GitHub issues](https://github.com/LambdaTest/test-at-scale/issues)🐛.
- For support & feedback join our [Discord](https://discord.gg/Wyf8srhf6K) or reach out to us on our [email](mailto:hello.tas@lambdatest.com)💬.
We are committed to fostering an open and welcoming environment in the community. Please see the Code of Conduct.
## **License**
TestAtScale is available under the [Apache License 2.0](https://github.com/LambdaTest/test-at-scale/blob/main/LICENSE). Use it wisely.
================================================
FILE: build/nucleus/Dockerfile
================================================
FROM golang:latest as builder
# create a working directory
COPY . /nucleus
WORKDIR /nucleus
# Build binary
RUN GOARCH=amd64 GOOS=linux go build -ldflags="-w -s" -o nucleus cmd/nucleus/*.go
# Uncomment only when build is highly stable. Compress binary.
# RUN strip --strip-unneeded ts
# RUN upx ts
# use a slim python + node.js base image
FROM nikolaik/python-nodejs:python3.10-nodejs16-slim
ARG VERSION
ENV VERSION=$VERSION
# Installing chromium so that all linux libs get automatically installed for running puppeteer tests
RUN apt update && apt install -y git zstd chromium curl unzip zip xmlstarlet build-essential
RUN curl -LJO https://go.dev/dl/go1.18.3.linux-amd64.tar.gz
RUN tar -C /usr/local -xzf go1.18.3.linux-amd64.tar.gz
COPY bundle /usr/local/bin/bundle
RUN chmod +x /usr/local/bin/bundle
ENV SMART_BINARY=/usr/local/bin/bundle
# Install Custom Runners
RUN mkdir /custom-runners
RUN mkdir /tmp/custom-runners
WORKDIR /tmp/custom-runners
RUN npm init -y
RUN npm install -g pnpm
RUN npm i --global-style --legacy-peer-deps \
@lambdatest/test-at-scale-jasmine-runner@~0.3.0 \
@lambdatest/test-at-scale-mocha-runner@~0.3.0 \
@lambdatest/test-at-scale-jest-runner@~0.3.0
RUN npm i -g nyc@^15.1.0
RUN tar -zcf /custom-runners/custom-runners.tgz node_modules
RUN rm -rf /tmp/custom-runners
RUN mkdir /home/nucleus
RUN mkdir /home/nucleus/.nvm
ENV NVM_DIR=/home/nucleus/.nvm
ENV GOROOT /usr/local/go
ENV GOPATH /home/nucleus
ENV PATH /usr/local/go/bin:/home/nucleus/bin:$PATH
COPY ./build/nucleus/golang/server /home/nucleus
RUN chmod 744 /home/nucleus/server
# install nvm for nucleus user
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | /bin/bash
WORKDIR /home/nucleus
# copy the binary from builder
COPY --from=builder /nucleus/nucleus /usr/local/bin/
# run the binary
COPY ./build/nucleus/entrypoint.sh /
RUN apt update -y && apt upgrade -y
RUN curl -s https://get.sdkman.io | bash
RUN /bin/bash -c "source $HOME/.sdkman/bin/sdkman-init.sh;sdk install java 18.0.1-oracle"
ENV JAVA_HOME="/root/.sdkman/candidates/java/current"
ENV PATH=$JAVA_HOME:$PATH
ENV PATH=$JAVA_HOME/bin:$PATH
ARG MAVEN_VERSION=3.6.3
# Define a constant with the working directory
ARG USER_HOME_DIR="/root"
# Define the URL where maven can be downloaded from
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries
# Create the directories, download maven, validate the download, install it, remove downloaded file and set links
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
&& echo "Downloading maven" \
&& curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
\
&& echo "Unpacking maven" \
&& tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
\
&& echo "Cleaning and setting links" \
&& rm -f /tmp/apache-maven.tar.gz \
&& ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
# Define environmental variables required by Maven, like Maven_Home directory and where the maven repo is located
ENV MAVEN_HOME /usr/share/maven
RUN mkdir -p /home/nucleus/.m2
#update settings.xml file for new maven local repo location
RUN xmlstarlet ed -O --inplace -N a='http://maven.apache.org/SETTINGS/1.0.0' -s /a:settings --type elem --name "localRepository" -v /home/nucleus/.m2/repository /usr/share/maven/conf/settings.xml
COPY ./build/nucleus/java/test-at-scale-java.jar /
RUN curl -o /home/nucleus/junit-platform-console-standalone-1.8.2.jar https://repo1.maven.org/maven2/org/junit/platform/junit-platform-console-standalone/1.8.2/junit-platform-console-standalone-1.8.2.jar
COPY ./build/nucleus/entrypoint.sh /
ENTRYPOINT ["/bin/sh", "/entrypoint.sh"]
================================================
FILE: build/nucleus/build.sh
================================================
#!/usr/bin/env bash
# exit when any command fails
set -e
# keep track of the last executed command
trap 'last_command=$current_command; current_command=$BASH_COMMAND' DEBUG
# echo an error message before exiting
trap 'echo "\"${last_command}\" command failed with exit code $?."' EXIT
echo 'Building binary'
go build -o nucleus ./cmd/nucleus/*.go
echo 'Binary successfully built with the name `nucleus`'
================================================
FILE: build/nucleus/entrypoint.sh
================================================
#!/bin/sh
exec /usr/local/bin/nucleus "$@"
================================================
FILE: build/nucleus/golang/server
================================================
[File too large to display: 12.0 MB]
================================================
FILE: build/nucleus/java/test-at-scale-java.jar
================================================
================================================
FILE: build/synapse/Dockerfile
================================================
FROM golang:latest as builder
# create a working directory
COPY . /synapse
WORKDIR /synapse
# Build binary
RUN go build -o synapse cmd/synapse/*.go
# use a minimal alpine image
FROM docker:latest
ARG VERSION
ENV VERSION=$VERSION
# add ca-certificates in case you need them
RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
# Create a group and user
RUN addgroup -S synapse && adduser -S synapse -G synapse
# set working directory
WORKDIR /home/synapse
# copy the binary from builder
COPY --chown=synapse:synapse --from=builder /synapse/synapse .
COPY ./build/synapse/entrypoint.sh /
# run the binary
ENTRYPOINT ["/bin/sh", "/entrypoint.sh"]
================================================
FILE: build/synapse/build.sh
================================================
#!/usr/bin/env bash
# exit when any command fails
set -e
# keep track of the last executed command
trap 'last_command=$current_command; current_command=$BASH_COMMAND' DEBUG
# echo an error message before exiting
trap 'echo "\"${last_command}\" command failed with exit code $?."' EXIT
echo 'Building binary'
go build -o synapse ./cmd/synapse/*.go
echo 'Binary successfully built with the name `synapse`'
================================================
FILE: build/synapse/entrypoint.sh
================================================
#!/bin/sh
exec -- /home/synapse/synapse "$@"
================================================
FILE: bundle
================================================
================================================
FILE: cmd/nucleus/bin.go
================================================
package main
// this is cmd/root_cmd.go
import (
"context"
"fmt"
"log"
"os"
"os/signal"
"path/filepath"
"strings"
"sync"
"time"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/api"
"github.com/LambdaTest/test-at-scale/pkg/azure"
"github.com/LambdaTest/test-at-scale/pkg/blocktestservice"
"github.com/LambdaTest/test-at-scale/pkg/cachemanager"
"github.com/LambdaTest/test-at-scale/pkg/command"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/diffmanager"
"github.com/LambdaTest/test-at-scale/pkg/driver"
"github.com/LambdaTest/test-at-scale/pkg/gitmanager"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/listsubmoduleservice"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/payloadmanager"
"github.com/LambdaTest/test-at-scale/pkg/requestutils"
"github.com/LambdaTest/test-at-scale/pkg/secret"
"github.com/LambdaTest/test-at-scale/pkg/server"
"github.com/LambdaTest/test-at-scale/pkg/service/coverage"
"github.com/LambdaTest/test-at-scale/pkg/service/teststats"
"github.com/LambdaTest/test-at-scale/pkg/tasconfigmanager"
"github.com/LambdaTest/test-at-scale/pkg/task"
"github.com/LambdaTest/test-at-scale/pkg/testdiscoveryservice"
"github.com/LambdaTest/test-at-scale/pkg/testexecutionservice"
"github.com/LambdaTest/test-at-scale/pkg/zstd"
"github.com/cenkalti/backoff/v4"
"github.com/spf13/cobra"
)
// RootCommand will setup and return the root command
func RootCommand() *cobra.Command {
rootCmd := cobra.Command{
Use: "nucleus",
Long: `nucleus is a coordinator binary used as entrypoint in tas containers`,
Version: global.NucleusBinaryVersion,
Run: run,
}
// define flags used for this command
AttachCLIFlags(&rootCmd)
return &rootCmd
}
func run(cmd *cobra.Command, args []string) {
// create a context that we can cancel
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// timeout in seconds
const gracefulTimeout = 5000 * time.Millisecond
// a WaitGroup for the goroutines to tell us they've stopped
wg := sync.WaitGroup{}
cfg, err := config.LoadNucleusConfig(cmd)
if err != nil {
fmt.Printf("[Error] Failed to load config: %s\n", err.Error())
os.Exit(1)
}
// patch logconfig file location with root level log file location
if cfg.LogFile != "" {
cfg.LogConfig.FileLocation = filepath.Join(cfg.LogFile, "nucleus.log")
}
// You can also use logrus implementation
// by using lumber.InstanceLogrusLogger
logger, err := lumber.NewLogger(cfg.LogConfig, cfg.Verbose, lumber.InstanceZapLogger)
if err != nil {
log.Fatalf("Could not instantiate logger %s", err.Error())
}
logger.Debugf("Running on local: %t", cfg.LocalRunner)
if cfg.LocalRunner {
logger.Infof("Local runner detected, changing IP from: %s to: %s", global.NeuronHost, cfg.SynapseHost)
global.SetNeuronHost(strings.TrimSpace(cfg.SynapseHost))
logger.Infof("change neuron host to %s", global.NeuronHost)
} else {
global.SetNeuronHost(global.NeuronRemoteHost)
}
pl, err := core.NewPipeline(cfg, logger)
if err != nil {
logger.Errorf("Unable to create the pipeline: %+v\n", err)
logger.Errorf("Aborting ...")
os.Exit(1)
}
ts, err := teststats.New(cfg, logger)
if err != nil {
logger.Fatalf("failed to initialize test stats service: %v", err)
}
defaultRequests := requestutils.New(logger, global.DefaultAPITimeout, backoff.NewExponentialBackOff())
azureClient, err := azure.NewAzureBlobEnv(cfg, defaultRequests, logger)
// azure blob storage is required only when not running on a local runner
if err != nil && !cfg.LocalRunner {
logger.Fatalf("failed to initialize azure blob: %v", err)
}
// attach plugins to pipeline
pm := payloadmanager.NewPayloadManger(azureClient, logger, cfg, defaultRequests)
secretParser := secret.New(logger)
tcm := tasconfigmanager.NewTASConfigManager(logger)
execManager := command.NewExecutionManager(secretParser, azureClient, logger)
gm := gitmanager.NewGitManager(logger, execManager)
dm := diffmanager.NewDiffManager(cfg, logger)
tdResChan := make(chan core.DiscoveryResult)
tds := testdiscoveryservice.NewTestDiscoveryService(ctx, tdResChan, execManager, defaultRequests, logger)
tes := testexecutionservice.NewTestExecutionService(cfg, defaultRequests, execManager, azureClient, ts, logger)
tbs := blocktestservice.NewTestBlockTestService(cfg, defaultRequests, logger)
router := api.NewRouter(logger, ts, tdResChan)
t, err := task.New(defaultRequests, logger)
if err != nil {
logger.Fatalf("failed to initialize task: %v", err)
}
zstd, err := zstd.New(execManager, logger)
if err != nil {
logger.Fatalf("failed to initialize zstd compressor: %v", err)
}
cache, err := cachemanager.New(zstd, azureClient, logger)
if err != nil {
logger.Fatalf("failed to initialize cache manager: %v", err)
}
coverageService, err := coverage.New(execManager, azureClient, zstd, cfg, logger)
if err != nil {
logger.Fatalf("failed to initialize coverage service: %v", err)
}
listsubmodule := listsubmoduleservice.New(defaultRequests, logger)
builder := driver.Builder{
Logger: logger,
TestExecutionService: tes,
TestDiscoveryService: tds,
AzureClient: azureClient,
BlockTestService: tbs,
ExecutionManager: execManager,
TASConfigManager: tcm,
CacheStore: cache,
DiffManager: dm,
ListSubModuleService: listsubmodule,
}
pl.PayloadManager = pm
pl.TASConfigManager = tcm
pl.GitManager = gm
pl.DiffManager = dm
pl.TestDiscoveryService = tds
pl.BlockTestService = tbs
pl.TestExecutionService = tes
pl.ExecutionManager = execManager
pl.CoverageService = coverageService
pl.TestStats = ts
pl.Task = t
pl.CacheStore = cache
pl.SecretParser = secretParser
pl.Builder = &builder
logger.Infof("LambdaTest Nucleus version: %s", global.NucleusBinaryVersion)
wg.Add(1)
go func() {
defer cancel()
defer wg.Done()
// starting pipeline
pl.Start(ctx)
}()
wg.Add(1)
go func() {
defer cancel()
defer wg.Done()
server.ListenAndServe(ctx, router, cfg, logger)
}()
// listen for OS interrupt (Ctrl-C)
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt)
// create channel to mark status of waitgroup
// this is required to brutally kill application in case of
// timeout
done := make(chan struct{})
// asynchronously wait for all the go routines
go func() {
// and wait for all go routines
wg.Wait()
logger.Debugf("main: all goroutines have finished.")
close(done)
}()
// wait for signal channel
select {
case <-c:
{
logger.Debugf("main: received Ctrl-C, attempting graceful shutdown ...")
// tell the goroutines to stop
logger.Debugf("main: telling goroutines to stop")
cancel()
select {
case <-done:
logger.Debugf("Go routines exited within timeout")
case <-time.After(gracefulTimeout):
logger.Errorf("Graceful timeout exceeded. Brutally killing the application")
}
}
case <-done:
os.Exit(0)
}
}
================================================
FILE: cmd/nucleus/flags.go
================================================
package main
import (
"github.com/spf13/cobra"
)
// AttachCLIFlags attaches command-line flags to the command
func AttachCLIFlags(rootCmd *cobra.Command) error {
rootCmd.PersistentFlags().StringP("config", "c", "", "the config file to use")
rootCmd.PersistentFlags().StringP("port", "p", "", "Port for api server to run")
rootCmd.PersistentFlags().StringP("payloadAddress", "l", "", "Payload address")
rootCmd.PersistentFlags().String("subModule", "", "submodule of a repo")
rootCmd.PersistentFlags().BoolP("verbose", "", false, "Run in verbose mode")
rootCmd.PersistentFlags().BoolP("coverage", "", false, "Run coverage only mode")
rootCmd.PersistentFlags().BoolP("discover", "", false, "Run nucleus in test discovery mode")
rootCmd.PersistentFlags().BoolP("execute", "", false, "Run nucleus in test execution mode")
rootCmd.PersistentFlags().BoolP("flaky", "", false, "Run nucleus in flaky mode")
rootCmd.PersistentFlags().BoolP("collectStats", "", false, "Collect test execution metrics")
rootCmd.PersistentFlags().IntP("consecutiveRuns", "", 1, "The consecutive test execution runs")
rootCmd.PersistentFlags().StringP("env", "e", "prod", "Environment.")
rootCmd.PersistentFlags().String("taskID", "", "The unique ID for a task")
rootCmd.PersistentFlags().String("locators", "", "The test locators for a task")
rootCmd.PersistentFlags().String("locatorAddress", "", "The test locators address for a task")
rootCmd.PersistentFlags().String("buildID", "", "The unique ID for a build")
rootCmd.PersistentFlags().String("targetCommit", "", "The target commit for nucleus")
rootCmd.PersistentFlags().String("baseCommit", "", "The base commit for nucleus")
rootCmd.PersistentFlags().StringP("synapsehost", "", "", "Local IP of the proxy server")
rootCmd.PersistentFlags().BoolP("local", "", false, "local mode")
return nil
}
================================================
FILE: cmd/nucleus/main.go
================================================
package main
import (
"log"
)
// main executes the root command `nucleus`.
// The project structure is inspired by the `cobra` package.
func main() {
if err := RootCommand().Execute(); err != nil {
log.Fatal(err)
}
}
================================================
FILE: cmd/synapse/bin.go
================================================
package main
import (
"context"
"fmt"
"log"
"os"
"os/signal"
"path/filepath"
"strconv"
"sync"
"time"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/cron"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/proxyserver"
"github.com/LambdaTest/test-at-scale/pkg/runner/docker"
"github.com/LambdaTest/test-at-scale/pkg/secrets"
synapsepkg "github.com/LambdaTest/test-at-scale/pkg/synapse"
"github.com/LambdaTest/test-at-scale/pkg/tasconfigdownloader"
"github.com/LambdaTest/test-at-scale/pkg/utils"
"github.com/joho/godotenv"
"github.com/spf13/cobra"
)
// RootCommand sets up and returns the root command
func RootCommand() *cobra.Command {
rootCmd := cobra.Command{
Use: "synapse",
Long: `Synapse is an open-source runner for TAS`,
Version: global.SynapseBinaryVersion,
Run: run,
}
// define flags used for this command
if err := AttachCLIFlags(&rootCmd); err != nil {
fmt.Printf("Error attaching CLI flags: %v\n", err)
}
return &rootCmd
}
func run(cmd *cobra.Command, args []string) {
// create a context that we can cancel
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// set necessary os env
setEnv()
// a WaitGroup for the goroutines to tell us they've stopped
wg := sync.WaitGroup{}
// Load environment variables from .env if available
err := godotenv.Load()
if err != nil {
fmt.Printf("Warning: No .env file found\n")
}
cfg, err := config.LoadSynapseConfig(cmd)
if err != nil {
fmt.Printf("Failed to load config: %s\n", err.Error())
os.Exit(1)
}
err = config.LoadRepoSecrets(cmd, cfg)
if err != nil {
fmt.Printf("Error loading repository secrets: %v\n", err)
}
// patch logconfig file location with root level log file location
if cfg.LogFile != "" {
cfg.LogConfig.FileLocation = filepath.Join(cfg.LogFile, "synapse.log")
}
// You can also use logrus implementation
// by using lumber.InstanceLogrusLogger
logger, err := lumber.NewLogger(cfg.LogConfig, cfg.Verbose, lumber.InstanceZapLogger)
if err != nil {
log.Fatalf("Could not instantiate logger %s", err.Error())
}
if err := config.ValidateCfg(cfg, logger); err != nil {
logger.Fatalf("Error loading synapse config: %v", err)
}
secretsManager := secrets.New(cfg, logger)
runner, err := docker.New(secretsManager, logger, cfg)
if err != nil {
logger.Fatalf("could not instantiate docker runner %v", err)
}
tasConfigDownloader := tasconfigdownloader.New(logger)
synapse := synapsepkg.New(runner, logger, secretsManager, tasConfigDownloader)
proxyHandler, err := proxyserver.NewProxyHandler(logger)
if err != nil {
logger.Fatalf("Could not instantiate proxyhandler %v", err)
}
// setting up cron handler
wg.Add(1)
go cron.Setup(ctx, &wg, logger, runner)
// closed once all attempts to connect to the lambdatest server have failed
connectionFailed := make(chan struct{})
wg.Add(1)
go synapse.InitiateConnection(ctx, &wg, connectionFailed)
wg.Add(1)
go func() {
defer cancel()
defer wg.Done()
if err := proxyserver.ListenAndServe(ctx, proxyHandler, cfg, logger); err != nil {
logger.Fatalf("Error starting proxy server: %v", err)
}
}()
// listen for OS interrupt (Ctrl-C)
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt)
// create channel to mark status of waitgroup
// this is required to brutally kill application in case of
// timeout
done := make(chan struct{})
// asynchronously wait for all the go routines
go func() {
// and wait for all go routines
wg.Wait()
logger.Debugf("main: all goroutines have finished.")
close(done)
}()
// wait for signal channel
select {
case <-c:
{
logger.Debugf("main: received OS Interrupt signal, attempting graceful shutdown ....")
// tell the goroutines to stop
logger.Debugf("main: telling goroutines to stop")
cancel()
select {
case <-done:
logger.Debugf("Go routines exited within timeout")
case <-time.After(global.GracefulTimeout):
logger.Errorf("Graceful timeout exceeded. Brutally killing the application")
}
}
case <-connectionFailed:
{
logger.Debugf("main: all attempts to connect to lambdatest server failed ...")
// tell the goroutines to stop
logger.Debugf("main: telling goroutines to stop")
cancel()
select {
case <-done:
logger.Debugf("Go routines exited within timeout")
case <-time.After(global.GracefulTimeout):
logger.Errorf("Graceful timeout exceeded. Brutally killing the application")
}
os.Exit(0)
}
case <-done:
os.Exit(0)
}
}
func setEnv() {
os.Setenv(global.AutoRemoveEnv, strconv.FormatBool(global.AutoRemove))
os.Setenv(global.LocalEnv, strconv.FormatBool(global.Local))
os.Setenv(global.SynapseHostEnv, utils.GetOutboundIP())
os.Setenv(global.NetworkEnvName, global.NetworkName)
}
================================================
FILE: cmd/synapse/flags.go
================================================
package main
import (
"github.com/spf13/cobra"
)
// AttachCLIFlags attaches command-line flags to the command
func AttachCLIFlags(rootCmd *cobra.Command) error {
rootCmd.PersistentFlags().StringP("config", "c", "", "the config file to use")
rootCmd.PersistentFlags().BoolP("verbose", "", false, "should every proxy request be logged to stdout")
return nil
}
================================================
FILE: cmd/synapse/main.go
================================================
package main
import (
"log"
"github.com/LambdaTest/test-at-scale/pkg/global"
)
// main executes the root command `synapse`.
// The project structure is inspired by the `cobra` package.
func main() {
log.Printf("Starting synapse %s", global.SynapseBinaryVersion)
if err := RootCommand().Execute(); err != nil {
log.Fatal(err)
}
}
================================================
FILE: config/default.go
================================================
package config
import (
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/spf13/viper"
)
func setNucleusDefaultConfig() {
viper.SetDefault("LogConfig.EnableConsole", true)
viper.SetDefault("LogConfig.ConsoleJSONFormat", false)
viper.SetDefault("LogConfig.ConsoleLevel", "debug")
viper.SetDefault("LogConfig.EnableFile", true)
viper.SetDefault("LogConfig.FileJSONFormat", true)
viper.SetDefault("LogConfig.FileLevel", "debug")
viper.SetDefault("LogConfig.FileLocation", global.HomeDir+"/nucleus.log")
viper.SetDefault("Env", "prod")
viper.SetDefault("Port", "9876")
viper.SetDefault("Verbose", false)
}
func setSynapseDefaultConfig() {
viper.SetDefault("LogConfig.EnableConsole", true)
viper.SetDefault("LogConfig.ConsoleJSONFormat", false)
viper.SetDefault("LogConfig.ConsoleLevel", "info")
viper.SetDefault("LogConfig.EnableFile", true)
viper.SetDefault("LogConfig.FileJSONFormat", true)
viper.SetDefault("LogConfig.FileLevel", "debug")
viper.SetDefault("LogConfig.FileLocation", "./mould.log")
viper.SetDefault("Env", "prod")
viper.SetDefault("Verbose", false)
}
================================================
FILE: config/loader.go
================================================
package config
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"strings"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// GlobalNucleusConfig stores the config instance for global use
var GlobalNucleusConfig *NucleusConfig
// GlobalSynapseConfig store the config instance for synapse global use
var GlobalSynapseConfig *SynapseConfig
type tempSecretReader struct {
RepoSecrets map[string]map[string]string `json:"RepoSecrets" yaml:"RepoSecrets"`
}
// LoadNucleusConfig loads config from command instance to predefined config variables
func LoadNucleusConfig(cmd *cobra.Command) (*NucleusConfig, error) {
err := viper.BindPFlags(cmd.Flags())
if err != nil {
return nil, err
}
// default viper configs
viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
viper.AutomaticEnv()
// set default configs
setNucleusDefaultConfig()
if configFile, _ := cmd.Flags().GetString("config"); configFile != "" {
viper.SetConfigFile(configFile)
} else {
viper.SetConfigName(".nucleus")
viper.AddConfigPath("./")
viper.AddConfigPath("$HOME/.nucleus")
}
if err := viper.ReadInConfig(); err != nil {
fmt.Println("Warning: No configuration file found. Proceeding with defaults")
}
return populateNucleusConfig(new(NucleusConfig))
}
// LoadSynapseConfig loads config from command instance to predefined config variables
func LoadSynapseConfig(cmd *cobra.Command) (*SynapseConfig, error) {
err := viper.BindPFlags(cmd.Flags())
if err != nil {
return nil, err
}
// default viper configs
viper.SetEnvPrefix("SYN")
viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
viper.AutomaticEnv()
// set default configs
setSynapseDefaultConfig()
if configFile, _ := cmd.Flags().GetString("config"); configFile != "" {
viper.SetConfigFile(configFile)
} else {
viper.SetConfigName(".synapse")
viper.AddConfigPath("./")
viper.AddConfigPath("$HOME/.synapse")
}
if err := viper.ReadInConfig(); err != nil {
fmt.Println("Warning: No configuration file found. Proceeding with defaults")
}
return populateSynapseConfig(new(SynapseConfig))
}
// LoadRepoSecrets loads repo secrets from configuration file
func LoadRepoSecrets(cmd *cobra.Command, synapseConfig *SynapseConfig) error {
if configFile, _ := cmd.Flags().GetString("config"); configFile != "" {
viper.SetConfigFile(configFile)
} else {
viper.SetConfigName(".synapse")
viper.AddConfigPath("./")
viper.AddConfigPath("$HOME/.synapse")
}
if err := viper.ReadInConfig(); err != nil {
fmt.Println("Warning: No configuration file found. Proceeding with defaults")
}
secretFile, err := ioutil.ReadFile(viper.GetViper().ConfigFileUsed())
if err != nil {
return fmt.Errorf("error reading config file: %w", err)
}
var tempSecret tempSecretReader
if err := json.Unmarshal(secretFile, &tempSecret); err != nil {
return fmt.Errorf("error unmarshaling secrets: %w", err)
}
synapseConfig.RepoSecrets = tempSecret.RepoSecrets
return nil
}
// ValidateCfg checks the validity of the config
func ValidateCfg(cfg *SynapseConfig, logger lumber.Logger) error {
if cfg.Lambdatest.SecretKey == "" {
return errors.New("error finding lambdatest secretkey in configuration file")
}
if cfg.ContainerRegistry.Mode == "" {
return errors.New("error finding ContainerRegistry Mode in configuration file")
}
if cfg.RepoSecrets == nil {
logger.Debugf("no RepoSecrets found in configuration file.")
return nil
}
return nil
}
================================================
FILE: config/nucleusmodel.go
================================================
package config
import "github.com/LambdaTest/test-at-scale/pkg/lumber"
// NucleusConfig is the application's configuration
type NucleusConfig struct {
Config string
Port string
PayloadAddress string `json:"payloadAddress"`
CollectStats bool `json:"collectStats"`
ConsecutiveRuns int `json:"consecutiveRuns"`
LogFile string
LogConfig lumber.LoggingConfig
CoverageMode bool `json:"coverage"`
DiscoverMode bool `json:"discover"`
ExecuteMode bool `json:"execute"`
FlakyMode bool `json:"flaky"`
TaskID string `json:"taskID" env:"TASK_ID"`
BuildID string `json:"buildID" env:"BUILD_ID"`
Locators string `json:"locators"`
LocatorAddress string `json:"locatorAddress"`
Env string
Verbose bool
Azure Azure `env:"AZURE"`
LocalRunner bool `env:"local"`
SynapseHost string `env:"synapsehost"`
SubModule string `json:"subModule"`
}
// Azure provides the storage configuration.
type Azure struct {
ContainerName string `env:"CONTAINER_NAME"`
StorageAccountName string `env:"STORAGE_ACCOUNT"`
StorageAccessKey string `env:"STORAGE_ACCESS_KEY"`
}
================================================
FILE: config/parse.go
================================================
package config
import (
"errors"
"fmt"
"reflect"
"github.com/spf13/viper"
)
const tagPrefix = "viper"
// populateNucleusConfig is used to parse config read through viper
func populateNucleusConfig(config *NucleusConfig) (*NucleusConfig, error) {
err := recursivelySet(reflect.ValueOf(config), "")
if err != nil {
return nil, err
}
return config, nil
}
// populateSynapseConfig is used to parse config read through viper
func populateSynapseConfig(config *SynapseConfig) (*SynapseConfig, error) {
err := recursivelySet(reflect.ValueOf(config), "")
if err != nil {
return nil, err
}
return config, nil
}
// recursivelySet recursively copies configuration values read by viper
// into Go structs. Nested keys are addressed with period-separated paths,
// so nested struct fields must be walked recursively.
func recursivelySet(val reflect.Value, prefix string) error {
if val.Kind() != reflect.Ptr {
return errors.New("expected a pointer to a struct")
}
// dereference
val = reflect.Indirect(val)
if val.Kind() != reflect.Struct {
return errors.New("expected a struct value")
}
// grab the type for this instance
vType := reflect.TypeOf(val.Interface())
// go through child fields
for i := 0; i < val.NumField(); i++ {
thisField := val.Field(i)
thisType := vType.Field(i)
tags := getTags(thisType)
// try to fetch value for each key using multiple tags
for _, tag := range tags {
key := prefix + tag
switch thisField.Kind() {
case reflect.Struct:
if err := recursivelySet(thisField.Addr(), key+"."); err != nil {
return err
}
case reflect.Int:
fallthrough
case reflect.Int32:
fallthrough
case reflect.Int64:
// you can only set with an int64 -> int
configVal := int64(viper.GetInt(key))
// skip the update if tag is not set in viper
if viper.GetInt(key) == 0 && thisField.Int() != 0 {
continue
}
thisField.SetInt(configVal)
case reflect.String:
// skip the update if tag is not set in viper
if viper.GetString(key) == "" && thisField.String() != "" {
continue
}
thisField.SetString(viper.GetString(key))
case reflect.Bool:
// skip the update if tag is not set in viper
if !viper.GetBool(key) && thisField.Bool() {
continue
}
thisField.SetBool(viper.GetBool(key))
case reflect.Map:
continue
default:
return fmt.Errorf("unexpected type detected ~ aborting: %s", thisField.Kind())
}
}
}
return nil
}
func getTags(field reflect.StructField) []string {
// check if maybe we have a special magic tag
tag := field.Tag
values := []string{}
if tag != "" {
for _, prefix := range []string{tagPrefix, "yaml", "json", "env", "mapstructure"} {
if v := tag.Get(prefix); v != "" {
values = append(values, v)
}
}
return values
}
return []string{field.Name}
}
================================================
FILE: config/parse_test.go
================================================
package config
import (
"reflect"
"testing"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
)
func TestSimpleValues(t *testing.T) {
c := struct {
Simple string `json:"simple"`
}{}
viper.SetDefault("simple", "i am a simple string")
assert.Nil(t, recursivelySet(reflect.ValueOf(&c), ""))
assert.Equal(t, "i am a simple string", c.Simple)
}
func TestNestedValues(t *testing.T) {
c := struct {
Simple string `json:"simple"`
Nested struct {
BoolVal bool `json:"bool"`
StringVal string `json:"string"`
NumberVal int `json:"number"`
} `json:"nested"`
}{}
viper.SetDefault("simple", "simple")
viper.SetDefault("nested.bool", true)
viper.SetDefault("nested.string", "i am a simple string")
viper.SetDefault("nested.number", 4)
assert.Nil(t, recursivelySet(reflect.ValueOf(&c), ""))
assert.Equal(t, "simple", c.Simple)
assert.Equal(t, 4, c.Nested.NumberVal)
assert.Equal(t, "i am a simple string", c.Nested.StringVal)
assert.Equal(t, true, c.Nested.BoolVal)
}
================================================
FILE: config/synapsemodel.go
================================================
package config
import "github.com/LambdaTest/test-at-scale/pkg/lumber"
// SynapseConfig is the application's configuration
type SynapseConfig struct {
Name string
Config string
LogFile string
LogConfig lumber.LoggingConfig
Env string
Verbose bool
Lambdatest LambdatestConfig
Git GitConfig
ContainerRegistry ContainerRegistryConfig
RepoSecrets map[string]map[string]string
}
// LambdatestConfig contains credentials for lambdatest
type LambdatestConfig struct {
SecretKey string
}
// GitConfig contains git token
type GitConfig struct {
Token string
TokenType string
}
// PullPolicyType defines when to pull docker image
type PullPolicyType string
// ModeType defines the type of container registry
type ModeType string
// ContainerRegistryConfig contains registry configuration when a private registry is used
type ContainerRegistryConfig struct {
PullPolicy PullPolicyType
Mode ModeType
Username string
Password string
}
// constants for docker configuration
const (
PullAlways PullPolicyType = "always"
PullNever PullPolicyType = "never"
PrivateMode ModeType = "private"
PublicMode ModeType = "public"
)
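The fields above map onto the synapse configuration file (`.synapse.json` by default, per config/loader.go). A minimal illustrative sketch that would pass ValidateCfg — all values below are placeholders, and the exact key casing is flexible since viper matches keys case-insensitively:

```json
{
  "Lambdatest": {
    "SecretKey": "<your-lambdatest-secret-key>"
  },
  "Git": {
    "Token": "<git-access-token>",
    "TokenType": "<token-type>"
  },
  "ContainerRegistry": {
    "PullPolicy": "always",
    "Mode": "public"
  },
  "RepoSecrets": {
    "<repo-name>": {
      "<SECRET_NAME>": "<secret-value>"
    }
  }
}
```

ValidateCfg only requires `Lambdatest.SecretKey` and `ContainerRegistry.Mode`; `RepoSecrets` is optional and is read separately by LoadRepoSecrets via the `RepoSecrets` JSON tag.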
================================================
FILE: docker-compose.yml
================================================
version: "3.9"
services:
synapse:
image: lambdatest/synapse:latest
stop_signal: SIGINT
restart: on-failure
networks:
- test-at-scale
hostname: synapse
container_name: synapse
volumes:
# synapse needs socket access to create containers on the host
- "/var/run/docker.sock:/var/run/docker.sock"
- "/tmp/synapse:/tmp/synapse"
- ".synapse.json:/home/synapse/.synapse.json"
- "/etc/machine-id:/etc/machine-id"
- "./logs/synapse:/var/log/synapse"
networks:
test-at-scale:
external: false
name: test-at-scale
================================================
FILE: go.mod
================================================
module github.com/LambdaTest/test-at-scale
go 1.17
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v0.21.1
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.3.0
github.com/bmatcuk/doublestar/v4 v4.0.2
github.com/cenkalti/backoff/v4 v4.1.3
github.com/denisbrodbeck/machineid v1.0.1
github.com/docker/docker v20.10.12+incompatible
github.com/docker/go-units v0.4.0
github.com/gin-gonic/gin v1.7.7
github.com/go-playground/locales v0.14.0
github.com/go-playground/universal-translator v0.18.0
github.com/go-playground/validator/v10 v10.10.0
github.com/google/uuid v1.2.0
github.com/gorilla/websocket v1.4.2
github.com/joho/godotenv v1.4.0
github.com/mholt/archiver/v3 v3.5.1
github.com/robfig/cron/v3 v3.0.1
github.com/shirou/gopsutil/v3 v3.21.1
github.com/sirupsen/logrus v1.8.1
github.com/spf13/cobra v1.3.0
github.com/spf13/viper v1.10.1
github.com/stretchr/testify v1.7.0
go.uber.org/zap v1.20.0
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
gopkg.in/natefinch/lumberjack.v2 v2.0.0
gopkg.in/yaml.v3 v3.0.0
)
require (
github.com/Azure/azure-sdk-for-go/sdk/internal v0.8.3 // indirect
github.com/Microsoft/go-winio v0.4.17 // indirect
github.com/StackExchange/wmi v0.0.0-20190523213315-cbe66965904d // indirect
github.com/andybalholm/brotli v1.0.1 // indirect
github.com/containerd/containerd v1.5.10 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/distribution v2.8.0+incompatible // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5 // indirect
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/golang/snappy v0.0.3 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.11.13 // indirect
github.com/klauspost/pgzip v1.2.5 // indirect
github.com/leodido/go-urn v1.2.1 // indirect
github.com/magiconair/properties v1.8.5 // indirect
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/mitchellh/mapstructure v1.4.3 // indirect
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/nwaples/rardecode v1.1.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.2 // indirect
github.com/pelletier/go-toml v1.9.4 // indirect
github.com/pierrec/lz4/v4 v4.1.2 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/spf13/afero v1.6.0 // indirect
github.com/spf13/cast v1.4.1 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stretchr/objx v0.2.0 // indirect
github.com/subosito/gotenv v1.2.0 // indirect
github.com/ugorji/go/codec v1.1.7 // indirect
github.com/ulikunitz/xz v0.5.9 // indirect
github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 // indirect
go.uber.org/atomic v1.7.0 // indirect
go.uber.org/multierr v1.6.0 // indirect
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5 // indirect
golang.org/x/net v0.0.0-20210813160813-60bc85c4be6d // indirect
golang.org/x/sys v0.0.0-20220111092808-5a964db01320 // indirect
golang.org/x/text v0.3.7 // indirect
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa // indirect
google.golang.org/grpc v1.43.0 // indirect
google.golang.org/protobuf v1.27.1 // indirect
gopkg.in/ini.v1 v1.66.2 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gotest.tools/v3 v3.1.0 // indirect
)
================================================
FILE: go.sum
================================================
bazil.org/fuse v0.0.0-20160811212531-371fbbdaa898/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk=
cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI=
cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
cloud.google.com/go v0.83.0/go.mod h1:Z7MJUsANfY0pYPdw0lbnivPx4/vhy/e2FEkSkF7vAVY=
cloud.google.com/go v0.84.0/go.mod h1:RazrYuxIK6Kb7YrzzhPoLmCVzl7Sup4NrbKPg8KHSUM=
cloud.google.com/go v0.87.0/go.mod h1:TpDYlFy7vuLzZMMZ+B6iRiELaY7z/gJPaqbMx6mlWcY=
cloud.google.com/go v0.90.0/go.mod h1:kRX0mNRHe0e2rC6oNakvwQqzyDmg57xJ+SZU1eT2aDQ=
cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+YI=
cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW4=
cloud.google.com/go v0.97.0/go.mod h1:GF7l59pYBVlXQIBLx3a761cZ41F9bBH3JUlihCt2Udc=
cloud.google.com/go v0.98.0/go.mod h1:ua6Ush4NALrHk5QXDWnjvZHN93OuF0HfuEPq9I1X0cM=
cloud.google.com/go v0.99.0/go.mod h1:w0Xx2nLzqWJPuozYQX+hFfCSI8WioryfRDzkoI/Y2ZA=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/firestore v1.6.1/go.mod h1:asNXNOzBdyVQmEU+ggO8UPodTkEVFW5Qx+rwHnAz+EY=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-sdk-for-go v16.2.1+incompatible h1:KnPIugL51v3N3WwvaSmZbxukD1WuWXOiE9fRdu32f2I=
github.com/Azure/azure-sdk-for-go v16.2.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/azure-sdk-for-go/sdk/azcore v0.21.1 h1:qoVeMsc9/fh/yhxVaA0obYjVH/oI/ihrOoMwsLS9KSA=
github.com/Azure/azure-sdk-for-go/sdk/azcore v0.21.1/go.mod h1:fBF9PQNqB8scdgpZ3ufzaLntG0AG7C1WjPMsiFOmfHM=
github.com/Azure/azure-sdk-for-go/sdk/internal v0.8.3 h1:E+m3SkZCN0Bf5q7YdTs5lSm2CYY3CK4spn5OmUIiQtk=
github.com/Azure/azure-sdk-for-go/sdk/internal v0.8.3/go.mod h1:KLF4gFr6DcKFZwSuH8w8yEK6DpFl3LP5rhdvAb7Yz5I=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.3.0 h1:Px2UA+2RvSSvv+RvJNuUB6n7rs5Wsel4dXLe90Um2n4=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.3.0/go.mod h1:tPaiy8S5bQ+S5sOiDlINkp7+Ef339+Nz5L5XO+cnOHo=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-autorest v10.8.1+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.11.1/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw=
github.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg=
github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
github.com/Azure/go-autorest/autorest/mocks v0.4.0/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
github.com/Microsoft/go-winio v0.4.16-0.20201130162521-d1ffc52c7331/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0=
github.com/Microsoft/go-winio v0.4.16/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0=
github.com/Microsoft/go-winio v0.4.17-0.20210211115548-6eac466e5fa3/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.4.17-0.20210324224401-5516f17a5958/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.4.17 h1:iT12IBVClFevaf8PuVyi3UmZOVh4OqnaLxDTW2O6j3w=
github.com/Microsoft/go-winio v0.4.17/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/Microsoft/hcsshim v0.8.7-0.20190325164909-8abdbb8205e4/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ=
github.com/Microsoft/hcsshim v0.8.9/go.mod h1:5692vkUqntj1idxauYlpoINNKeqCiG6Sg38RRsjT5y8=
github.com/Microsoft/hcsshim v0.8.14/go.mod h1:NtVKoYxQuTLx6gEq0L96c9Ju4JbRJ4nY2ow3VK6a9Lg=
github.com/Microsoft/hcsshim v0.8.15/go.mod h1:x38A4YbHbdxJtc0sF6oIz+RG0npwSCAvn69iY6URG00=
github.com/Microsoft/hcsshim v0.8.16/go.mod h1:o5/SZqmR7x9JNKsW3pu+nqHm0MF8vbA+VxGOoXdC600=
github.com/Microsoft/hcsshim v0.8.23/go.mod h1:4zegtUJth7lAvFyc6cH2gGQ5B3OFQim01nnU2M8jKDg=
github.com/Microsoft/hcsshim/test v0.0.0-20201218223536-d3e5debf77da/go.mod h1:5hlzMzRKMLyo42nCZ9oml8AdTlq/0cvIaBv6tK1RehU=
github.com/Microsoft/hcsshim/test v0.0.0-20210227013316-43a75bb4edd3/go.mod h1:mw7qgWloBUl75W/gVH3cQszUg1+gUITj7D6NY7ywVnY=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
github.com/StackExchange/wmi v0.0.0-20190523213315-cbe66965904d h1:G0m3OIz70MZUWq3EgK3CesDbo8upS2Vm9/P3FtgI+Jk=
github.com/StackExchange/wmi v0.0.0-20190523213315-cbe66965904d/go.mod h1:3eOhrUMpNV+6aFIbp5/iudMxNCF27Vw2OZgy4xEx0Fg=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alexflint/go-filemutex v0.0.0-20171022225611-72bdc8eae2ae/go.mod h1:CgnQgUtFrFz9mxFNtED3jI5tLDjKlOM+oUF/sTk6ps0=
github.com/andybalholm/brotli v1.0.1 h1:KqhlKozYbRtJvsPrrEeXcO+N2l6NYT5A2QAFmSULpEc=
github.com/andybalholm/brotli v1.0.1/go.mod h1:loMXtMfwqflxFJPmdbJO0a3KNoPuLBgiu3qAvBg8x/Y=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-metrics v0.3.10/go.mod h1:4O98XIr/9W0sxpJ8UaYkvjk10Iff7SnFrb4QAOwNTFc=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/aws/aws-sdk-go v1.15.11/go.mod h1:mFuSZ37Z9YOHbQEwBWztmVzqXrEkub65tZoCYDt7FT0=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA=
github.com/bits-and-blooms/bitset v1.2.0/go.mod h1:gIdJ4wp64HaoK2YrL1Q5/N7Y16edYb8uY+O0FJTyyDA=
github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/bmatcuk/doublestar/v4 v4.0.2 h1:X0krlUVAVmtr2cRoTqR8aDMrDqnB36ht8wpWTiQ3jsA=
github.com/bmatcuk/doublestar/v4 v4.0.2/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
github.com/bshuster-repo/logrus-logstash-hook v0.4.1/go.mod h1:zsTqEiSzDgAa/8GZR7E1qaXrhYNDKBYy5/dWPTIflbk=
github.com/buger/jsonparser v0.0.0-20180808090653-f4dd9f5a6b44/go.mod h1:bbYlZJ7hK1yFx9hf58LP0zeX7UjIGs20ufpu3evjr+s=
github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8=
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
github.com/cenkalti/backoff/v4 v4.1.1/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
github.com/cenkalti/backoff/v4 v4.1.3 h1:cFAlzYUlVYDysBEH2T5hyJZMh3+5+WCBvSnK6Q8UtC4=
github.com/cenkalti/backoff/v4 v4.1.3/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.3.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/checkpoint-restore/go-criu/v4 v4.1.0/go.mod h1:xUQBLp4RLc5zJtWY++yjOoMoB5lihDt7fai+75m+rGw=
github.com/checkpoint-restore/go-criu/v5 v5.0.0/go.mod h1:cfwC0EG7HMUenopBsUf9d89JlCLQIfgVcNsNN0t6T2M=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/cilium/ebpf v0.0.0-20200110133405-4032b1d8aae3/go.mod h1:MA5e5Lr8slmEg9bt0VpxxWqJlO4iwu3FBdHUzV7wQVg=
github.com/cilium/ebpf v0.0.0-20200702112145-1c8d4c9ef775/go.mod h1:7cR51M8ViRLIdUjrmSXlK9pkrsDlLHbO8jiB8X8JnOc=
github.com/cilium/ebpf v0.2.0/go.mod h1:To2CFviqOWL/M0gIMsvSMlqe7em/l1ALkX1PyjrX2Qs=
github.com/cilium/ebpf v0.4.0/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs=
github.com/cilium/ebpf v0.6.2/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs=
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=
github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI=
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211130200136-a8f946100490/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/containerd/aufs v0.0.0-20200908144142-dab0cbea06f4/go.mod h1:nukgQABAEopAHvB6j7cnP5zJ+/3aVcE7hCYqvIwAHyE=
github.com/containerd/aufs v0.0.0-20201003224125-76a6863f2989/go.mod h1:AkGGQs9NM2vtYHaUen+NljV0/baGCAPELGm2q9ZXpWU=
github.com/containerd/aufs v0.0.0-20210316121734-20793ff83c97/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
github.com/containerd/aufs v1.0.0/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
github.com/containerd/btrfs v0.0.0-20201111183144-404b9149801e/go.mod h1:jg2QkJcsabfHugurUvvPhS3E08Oxiuh5W/g1ybB4e0E=
github.com/containerd/btrfs v0.0.0-20210316141732-918d888fb676/go.mod h1:zMcX3qkXTAi9GI50+0HOeuV8LU2ryCE/V2vG/ZBiTss=
github.com/containerd/btrfs v1.0.0/go.mod h1:zMcX3qkXTAi9GI50+0HOeuV8LU2ryCE/V2vG/ZBiTss=
github.com/containerd/cgroups v0.0.0-20190717030353-c4b9ac5c7601/go.mod h1:X9rLEHIqSf/wfK8NsPqxJmeZgW4pcfzdXITDrUSJ6uI=
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko=
github.com/containerd/cgroups v0.0.0-20200531161412-0dbf7f05ba59/go.mod h1:pA0z1pT8KYB3TCXK/ocprsh7MAkoW8bZVzPdih9snmM=
github.com/containerd/cgroups v0.0.0-20200710171044-318312a37340/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
github.com/containerd/cgroups v0.0.0-20200824123100-0b889c03f102/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
github.com/containerd/cgroups v0.0.0-20210114181951-8a68de567b68/go.mod h1:ZJeTFisyysqgcCdecO57Dj79RfL0LNeGiFUqLYQRYLE=
github.com/containerd/cgroups v1.0.1/go.mod h1:0SJrPIenamHDcZhEcJMNBB85rHcUsw4f25ZfBiPYRkU=
github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/console v0.0.0-20181022165439-0650fd9eeb50/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/console v0.0.0-20191206165004-02ecf6a7291e/go.mod h1:8Pf4gM6VEbTNRIT26AyyU7hxdQU3MvAvxVI0sc00XBE=
github.com/containerd/console v1.0.1/go.mod h1:XUsP6YE/mKtz6bxc+I8UiKKTP04qjQL4qcS3XoQ5xkw=
github.com/containerd/console v1.0.2/go.mod h1:ytZPjGgY2oeTkAONYafi2kSj0aYggsf8acV1PGKCbzQ=
github.com/containerd/containerd v1.2.10/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.0/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.1-0.20191213020239-082f7e3aed57/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.0-beta.2.0.20200729163537-40b22ef07410/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.1/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.3/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.4.9/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.5.0-beta.1/go.mod h1:5HfvG1V2FsKesEGQ17k5/T7V960Tmcumvqn8Mc+pCYQ=
github.com/containerd/containerd v1.5.0-beta.3/go.mod h1:/wr9AVtEM7x9c+n0+stptlo/uBBoBORwEx6ardVcmKU=
github.com/containerd/containerd v1.5.0-beta.4/go.mod h1:GmdgZd2zA2GYIBZ0w09ZvgqEq8EfBp/m3lcVZIvPHhI=
github.com/containerd/containerd v1.5.0-rc.0/go.mod h1:V/IXoMqNGgBlabz3tHD2TWDoTJseu1FGOKuoA4nNb2s=
github.com/containerd/containerd v1.5.10 h1:3cQ2uRVCkJVcx5VombsE7105Gl9Wrl7ORAO3+4+ogf4=
github.com/containerd/containerd v1.5.10/go.mod h1:fvQqCfadDGga5HZyn3j4+dx56qj2I9YwBrlSdalvJYQ=
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20190815185530-f2a389ac0a02/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20191127005431-f65d91d395eb/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20200710164510-efbc4488d8fe/go.mod h1:cECdGN1O8G9bgKTlLhuPJimka6Xb/Gg7vYzCTNVxhvo=
github.com/containerd/continuity v0.0.0-20201208142359-180525291bb7/go.mod h1:kR3BEg7bDFaEddKm54WSmrol1fKWDU1nKYkgrcgZT7Y=
github.com/containerd/continuity v0.0.0-20210208174643-50096c924a4e/go.mod h1:EXlVlkqNba9rJe3j7w3Xa924itAMLgZH4UD/Q4PExuQ=
github.com/containerd/continuity v0.1.0/go.mod h1:ICJu0PwR54nI0yPEnJ6jcS+J7CZAUXrLh8lPo2knzsM=
github.com/containerd/fifo v0.0.0-20180307165137-3d5202aec260/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
github.com/containerd/fifo v0.0.0-20200410184934-f15a3290365b/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
github.com/containerd/fifo v0.0.0-20201026212402-0724c46b320c/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
github.com/containerd/fifo v0.0.0-20210316144830-115abcc95a1d/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4=
github.com/containerd/fifo v1.0.0/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4=
github.com/containerd/go-cni v1.0.1/go.mod h1:+vUpYxKvAF72G9i1WoDOiPGRtQpqsNW/ZHtSlv++smU=
github.com/containerd/go-cni v1.0.2/go.mod h1:nrNABBHzu0ZwCug9Ije8hL2xBCYh/pjfMb1aZGrrohk=
github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
github.com/containerd/go-runc v0.0.0-20190911050354-e029b79d8cda/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
github.com/containerd/go-runc v0.0.0-20200220073739-7016d3ce2328/go.mod h1:PpyHrqVs8FTi9vpyHwPwiNEGaACDxT/N/pLcvMSRA9g=
github.com/containerd/go-runc v0.0.0-20201020171139-16b287bc67d0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
github.com/containerd/go-runc v1.0.0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
github.com/containerd/imgcrypt v1.0.1/go.mod h1:mdd8cEPW7TPgNG4FpuP3sGBiQ7Yi/zak9TYCG3juvb0=
github.com/containerd/imgcrypt v1.0.4-0.20210301171431-0ae5c75f59ba/go.mod h1:6TNsg0ctmizkrOgXRNQjAPFWpMYRWuiB6dSF4Pfa5SA=
github.com/containerd/imgcrypt v1.1.1-0.20210312161619-7ed62a527887/go.mod h1:5AZJNI6sLHJljKuI9IHnw1pWqo/F0nGDOuR9zgTs7ow=
github.com/containerd/imgcrypt v1.1.1/go.mod h1:xpLnwiQmEUJPvQoAapeb2SNCxz7Xr6PJrXQb0Dpc4ms=
github.com/containerd/nri v0.0.0-20201007170849-eb1350a75164/go.mod h1:+2wGSDGFYfE5+So4M5syatU0N0f0LbWpuqyMi4/BE8c=
github.com/containerd/nri v0.0.0-20210316161719-dbaa18c31c14/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
github.com/containerd/nri v0.1.0/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
github.com/containerd/ttrpc v0.0.0-20190828172938-92c8520ef9f8/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
github.com/containerd/ttrpc v0.0.0-20191028202541-4f1b8fe65a5c/go.mod h1:LPm1u0xBw8r8NOKoOdNMeVHSawSsltak+Ihv+etqsE8=
github.com/containerd/ttrpc v1.0.1/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
github.com/containerd/ttrpc v1.0.2/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
github.com/containerd/ttrpc v1.1.0/go.mod h1:XX4ZTnoOId4HklF4edwc4DcqskFZuvXB1Evzy5KFQpQ=
github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
github.com/containerd/typeurl v0.0.0-20190911142611-5eb25027c9fd/go.mod h1:GeKYzf2pQcqv7tJ0AoCuuhtnqhva5LNU3U+OyKxxJpk=
github.com/containerd/typeurl v1.0.1/go.mod h1:TB1hUtrpaiO88KEK56ijojHS1+NeF0izUACaJW2mdXg=
github.com/containerd/typeurl v1.0.2/go.mod h1:9trJWW2sRlGub4wZJRTW83VtbOLS6hwcDZXTn6oPz9s=
github.com/containerd/zfs v0.0.0-20200918131355-0a33824f23a2/go.mod h1:8IgZOBdv8fAgXddBT4dBXJPtxyRsejFIpXoklgxgEjw=
github.com/containerd/zfs v0.0.0-20210301145711-11e8f1707f62/go.mod h1:A9zfAbMlQwE+/is6hi0Xw8ktpL+6glmqZYtevJgaB8Y=
github.com/containerd/zfs v0.0.0-20210315114300-dde8f0fda960/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
github.com/containerd/zfs v0.0.0-20210324211415-d5c4544f0433/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
github.com/containerd/zfs v1.0.0/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
github.com/containernetworking/cni v0.7.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
github.com/containernetworking/cni v0.8.0/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
github.com/containernetworking/cni v0.8.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
github.com/containernetworking/plugins v0.8.6/go.mod h1:qnw5mN19D8fIwkqW7oHHYDHVlzhJpcY6TQxn/fUyDDM=
github.com/containernetworking/plugins v0.9.1/go.mod h1:xP/idU2ldlzN6m4p5LmGiwRDjeJr6FLK6vuiUwoH7P8=
github.com/containers/ocicrypt v1.0.1/go.mod h1:MeJDzk1RJHv89LjsH0Sp5KTY3ZYkjXO/C+bKAeWFIrc=
github.com/containers/ocicrypt v1.1.0/go.mod h1:b8AOe0YR67uU8OqfVNcznfFpAzu3rdgUV4GP9qXPfu4=
github.com/containers/ocicrypt v1.1.1/go.mod h1:Dm55fwWm1YZAjYRaJ94z2mfZikIyIN4B0oB3dj3jFxY=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-iptables v0.4.5/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
github.com/coreos/go-iptables v0.5.0/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20161114122254-48702e0da86b/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.0.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
github.com/coreos/go-systemd/v22 v22.1.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/cyphar/filepath-securejoin v0.2.2/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2G9VLXONKD9G7QGMM+4=
github.com/d2g/dhcp4 v0.0.0-20170904100407-a1d1b6c41b1c/go.mod h1:Ct2BUK8SB0YC1SMSibvLzxjeJLnrYEVLULFNiHY9YfQ=
github.com/d2g/dhcp4client v1.0.0/go.mod h1:j0hNfjhrt2SxUOw55nL0ATM/z4Yt3t2Kd1mW34z5W5s=
github.com/d2g/dhcp4server v0.0.0-20181031114812-7d4a0a7f59a5/go.mod h1:Eo87+Kg/IX2hfWJfwxMzLyuSZyxSoAug2nGa1G2QAi8=
github.com/d2g/hardwareaddr v0.0.0-20190221164911-e7d9fbe030e4/go.mod h1:bMl4RjIciD2oAxI7DmWRx6gbeqrkoLqv3MV0vzNad+I=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/denisbrodbeck/machineid v1.0.1 h1:geKr9qtkB876mXguW2X6TU4ZynleN6ezuMSRhl4D7AQ=
github.com/denisbrodbeck/machineid v1.0.1/go.mod h1:dJUwb7PTidGDeYyUBmXZ2GphQBbjJCrnectwCyxcUSI=
github.com/denverdino/aliyungo v0.0.0-20190125010748-a747050bb1ba/go.mod h1:dV8lFg6daOBZbT6/BDGIz6Y3WFGn8juu6G+CQ6LHtl0=
github.com/dgrijalva/jwt-go v0.0.0-20170104182250-a601269ab70c/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
github.com/dnaeon/go-vcr v1.1.0/go.mod h1:M7tiix8f0r6mKKJ3Yq/kqU1OYf3MnfmBWVbPx/yU9ko=
github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI=
github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ=
github.com/docker/distribution v0.0.0-20190905152932-14b96e55d84c/go.mod h1:0+TTO4EOBfRPhZXAeF1Vu+W3hHZ8eLp8PgKVZlcvtFY=
github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.8.0+incompatible h1:l9EaZDICImO1ngI+uTifW+ZYvvz7fKISBAKpg+MbWbY=
github.com/docker/distribution v2.8.0+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v20.10.12+incompatible h1:CEeNmFM0QZIsJCZKMkZx0ZcahTiewkrgiwfYD+dfl1U=
github.com/docker/docker v20.10.12+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-events v0.0.0-20170721190031-9461782956ad/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
github.com/docker/go-metrics v0.0.0-20180209012529-399ea8c73916/go.mod h1:/u0gXw0Gay3ceNrsHubL3BtdOL2fHf93USgMTe0W5dI=
github.com/docker/go-metrics v0.0.1/go.mod h1:cG1hvH2utMXtqgqqYE9plW6lDxS3/5ayHzueweSI3Vw=
github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5 h1:iFaUwBSo5Svw6L7HYpRu/0lE3e0BaElwnNO1qkNQxBY=
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5/go.mod h1:qssHWj60/X5sZFNxpG4HBPDHVqxNm4DfnCKgrbZOT+s=
github.com/dsnet/golib v0.0.0-20171103203638-1ea166775780/go.mod h1:Lj+Z9rebOhdfkVLjJ8T6VcRQv3SXugXy999NBtR9aFY=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
github.com/envoyproxy/go-control-plane v0.10.1/go.mod h1:AY7fTTXNdv/aJ2O5jwpxAPOWUZ7hQAEvzN5Pf27BkQQ=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/envoyproxy/protoc-gen-validate v0.6.2/go.mod h1:2t7qjJNvHPx8IjnBOzl9E9/baC+qXE/TeeyBRzgJDws=
github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.5.1 h1:mZcQUHVQUQWoPXXtuf9yuEXKudkV2sx1E06UadKWpgI=
github.com/fsnotify/fsnotify v1.5.1/go.mod h1:T3375wBYaZdLLcVNkcVbzGHY7f1l/uK5T5Ai1i3InKU=
github.com/fullsailor/pkcs7 v0.0.0-20190404230743-d7302db945fa/go.mod h1:KnogPXtdwXqoenmZCw6S+25EAm2MkxbG0deNDu4cbSA=
github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.7.7 h1:3DoBmSbJbZAWqXJC3SLjAPfutPJJRN1U5pALB7EeTTs=
github.com/gin-gonic/gin v1.7.7/go.mod h1:axIBovoeJpVj8S3BwE0uPMTeReE4+AfFtqpqaZ1qq1U=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-ole/go-ole v1.2.4/go.mod h1:XCwSNxSkXRo4vlyPy93sltvi/qJq0jqQhjqQNIwKuxM=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo=
github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-playground/assert/v2 v2.0.1 h1:MsBgLAaY856+nPRTKrp3/OZK38U/wa0CcBYNjji3q3A=
github.com/go-playground/assert/v2 v2.0.1/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.13.0/go.mod h1:taPMhCMXrRLJO55olJkUXHZBHCxTMfnGwq/HNwmWNS8=
github.com/go-playground/locales v0.14.0 h1:u50s323jtVGugKlcYeyzC0etD1HifMjqmJqb8WugfUU=
github.com/go-playground/locales v0.14.0/go.mod h1:sawfccIbzZTqEDETgFXqTho0QybSa7l++s0DH+LDiLs=
github.com/go-playground/universal-translator v0.17.0/go.mod h1:UkSxE5sNxxRwHyU+Scu5vgOQjsIJAF8j9muTVoKLVtA=
github.com/go-playground/universal-translator v0.18.0 h1:82dyy6p4OuJq4/CByFNOn/jYrnRPArHwAcmLoJZxyho=
github.com/go-playground/universal-translator v0.18.0/go.mod h1:UvRDBj+xPUEGrFYl+lu/H90nyDXpg0fqeB/AQUGNTVA=
github.com/go-playground/validator/v10 v10.4.1/go.mod h1:nlOn6nFhuKACm19sB/8EGNn9GlaMV7XkbRSipzJ0Ii4=
github.com/go-playground/validator/v10 v10.10.0 h1:I7mrTYv78z8k8VXa/qJlOlEXn/nBh+BF8dHX5nt/dr0=
github.com/go-playground/validator/v10 v10.10.0/go.mod h1:74x4gJWsvQexRdW8Pn3dXSGrTK4nAUsbPlLADvpJkos=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/godbus/dbus v0.0.0-20151105175453-c7fdd8b5cd55/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
github.com/godbus/dbus v0.0.0-20180201030542-885f9cc04c9c/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4=
github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/googleapis v1.2.0/go.mod h1:Njal3psf3qN6dwBtQfUmBZh2ybovJ0tlu3o/AC7HYjU=
github.com/gogo/googleapis v1.4.0/go.mod h1:5YRNX2z1oM5gXdAkurHa942MDgEJyk02w4OecKY87+c=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/snappy v0.0.2/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.3 h1:fHPg5GQYlCeLIPB9BZqMVR5nR9A+IM5zcgeTdjMYmLA=
github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.2.0 h1:qJYtXnJRWmpe7m/3XlyhrsLrEURqHRM2kxzoxXqyUDs=
github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0eJc8R6ouapiM=
github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/handlers v0.0.0-20150720190736-60c7bfde3e33/go.mod h1:Qkdc/uu4tH4g6mTK6auzZ766c4CA0Ng8+o/OAirnOIQ=
github.com/gorilla/mux v1.7.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.8.0 h1:i40aqfkR1h2SlN9hojwV5ZA91wcXFOvkdNIeFDP5koI=
github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/consul/api v1.11.0/go.mod h1:XjsvQN+RJGWI2TWy1/kqaE16HrR2J/FWgkYjdZQsX9M=
github.com/hashicorp/consul/api v1.12.0/go.mod h1:6pVBMo0ebnYdt2S3H87XhekM/HHrUoTD2XXb/VrZVy0=
github.com/hashicorp/consul/sdk v0.8.0/go.mod h1:GBvyrGALthsZObzUGsfgHZQDXjg4lOjagTIwIR1vPms=
github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
github.com/hashicorp/go-hclog v0.12.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-hclog v1.0.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-immutable-radix v1.3.1/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-multierror v0.0.0-20161216184304-ed905158d874/go.mod h1:JMRHfdO9jKNzS/+BTlxCjKNQHg/jZAft8U7LloJvN7I=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/mdns v1.0.1/go.mod h1:4gW7WsVCke5TE7EPeYliwHlRUyBtfCwuFwuMg2DmyNY=
github.com/hashicorp/mdns v1.0.4/go.mod h1:mtBihi+LeNXGtG8L9dX59gAEa12BDtBQSp4v/YAJqrc=
github.com/hashicorp/memberlist v0.2.2/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE=
github.com/hashicorp/memberlist v0.3.0/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE=
github.com/hashicorp/serf v0.9.5/go.mod h1:UWDWwZeL5cuWDJdl0C6wrvrUwEqtQ4ZKBKKENpqIUyk=
github.com/hashicorp/serf v0.9.6/go.mod h1:TXZNMjZQijwlDvp+r0b63xZ45H7JmCmgg4gpTwn9UV4=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/iancoleman/strcase v0.2.0/go.mod h1:iwCmte+B7n89clKwxIoIXy/HfoL7AsD47ZCWhYzw7ho=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.10/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/j-keck/arping v0.0.0-20160618110441-2cf9dc699c56/go.mod h1:ymszkNOg6tORTn+6F6j+Jc8TOr5osrynvN6ivFWZ2GA=
github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmespath/go-jmespath v0.0.0-20160803190731-bd40a432e4c7/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/joho/godotenv v1.4.0 h1:3l4+N6zfMWnkbPEXKng2o2/MR5mSwTrBih4ZEkkz1lg=
github.com/joho/godotenv v1.4.0/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.4.1/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.11.3/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.11.4/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.11.13 h1:eSvu8Tmq6j2psUJqJrLcWH6K3w5Dwc+qipbaA6eVEN4=
github.com/klauspost/compress v1.11.13/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/cpuid v1.2.0/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/pgzip v1.2.5 h1:qnWYvvKqedOF2ulHpMG72XQol4ILEJ8k2wwRl/Km8oE=
github.com/klauspost/pgzip v1.2.5/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/leodido/go-urn v1.2.0/go.mod h1:+8+nEpDfqqsY+g338gtMEUOtuK+4dEMhiQEgxpxOKII=
github.com/leodido/go-urn v1.2.1 h1:BqpAaACuzVSgi/VLzGZIobT2z4v53pjosyNd9Yv6n/w=
github.com/leodido/go-urn v1.2.1/go.mod h1:zt4jvISO2HfUBqxjfIshjdMTYS56ZS/qv49ictyFfxY=
github.com/lyft/protoc-gen-star v0.5.3/go.mod h1:V0xaHgaf5oCCqmcxYcWiDfTiKsZsRc87/1qhoTACD8w=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.5 h1:b6kJs+EmPFMYGkow9GiUyCyOvIwYetYJ3fSaWak/Gls=
github.com/magiconair/properties v1.8.5/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.7.0/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs=
github.com/marstr/guid v1.1.0/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-shellwords v1.0.3/go.mod h1:3xCvwCdWdlDJUrvuMn7Wuy9eWs4pE8vqg+NOMyg4B2o=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/mholt/archiver/v3 v3.5.1 h1:rDjOBX9JSF5BvoJGvjqK479aL70qh9DIpZCl+k7Clwo=
github.com/mholt/archiver/v3 v3.5.1/go.mod h1:e3dqJ7H78uzsRSEACH1joayhuSyhnonssnDhppzS1L4=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
github.com/miekg/dns v1.1.41/go.mod h1:p6aan82bvRIyn+zDIv9xYNUpwa73JcSh9BKwknJysuI=
github.com/miekg/pkcs11 v1.0.3/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4=
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.4.3 h1:OVowDSCllw/YjdLkam3/sm7wEtOy59d8ndGgCcyj8cs=
github.com/mitchellh/mapstructure v1.4.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f/go.mod h1:OkQIRizQZAeMln+1tSwduZz7+Af5oFlKirV/MSYes2A=
github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
github.com/moby/sys/mountinfo v0.4.0/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
github.com/moby/sys/mountinfo v0.4.1/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
github.com/moby/sys/symlink v0.1.0/go.mod h1:GGDODQmbFOjFsXvfLVn3+ZRxkch54RkSiGqsZeMYowQ=
github.com/moby/term v0.0.0-20200312100748-672ec06f55cd/go.mod h1:DdlQx2hp0Ss5/fLikoLlEeIYiATotOjgB//nb973jeo=
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 h1:dcztxKSvZ4Id8iPpHERQBbIJfabdt4wUm5qy3wOL2Zc=
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6/go.mod h1:E2VnQOmVuvZB6UYnnDB0qG5Nq/1tD9acaOpo6xmt0Kw=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/modocache/gover v0.0.0-20171022184752-b58185e213c5/go.mod h1:caMODM3PzxT8aQXRPkAt8xlV/e7d7w8GM5g0fa5F0D8=
github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/ncw/swift v1.0.47/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM=
github.com/nwaples/rardecode v1.1.0 h1:vSxaY8vQhOcVr4mm5e8XllHWTiM4JF507A0Katqw7MQ=
github.com/nwaples/rardecode v1.1.0/go.mod h1:5DzqNKiOdpKKBH87u8VlvAnPZMXcGRhxWkRpHbbfGS0=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
github.com/onsi/ginkgo v0.0.0-20151202141238-7f8ab55aaf3b/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.3/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/gomega v0.0.0-20151007035656-2152b45fa28a/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.3/go.mod h1:V9xEwhxec5O8UDM77eCW8vLymOMltsqPVYWrpDsH8xc=
github.com/opencontainers/go-digest v0.0.0-20170106003457-a6d0ee40d420/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0-rc1.0.20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.0/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.0.2 h1:9yCKha/T5XdGtO0q9Q9a6T5NUCsTn/DrBg0D7ufOcFM=
github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v1.0.0-rc8.0.20190926000215-3e425f80a8c9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v1.0.0-rc9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v1.0.0-rc93/go.mod h1:3NOsor4w32B2tC0Zbl8Knk4Wg84SM2ImC1fxBuqJ/H0=
github.com/opencontainers/runc v1.0.2/go.mod h1:aTaHFFwQXuA71CiyxOdFFIorAoemI04suvGRQFzWTD0=
github.com/opencontainers/runtime-spec v0.1.2-0.20190507144316-5b71a03e2700/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.2-0.20190207185410-29686dbc5559/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.3-0.20200929063507-e6143ca7d51d/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39/go.mod h1:r3f7wjNzSs2extwzU3Y+6pKfobzPh+kKFJ3ofN+3nfs=
github.com/opencontainers/selinux v1.6.0/go.mod h1:VVGKuOLlE7v4PJyT6h7mNWvq1rzqiriPsEqVhc+svHE=
github.com/opencontainers/selinux v1.8.0/go.mod h1:RScLhm78qiWa2gbVCcGkC7tCGdgk3ogry1nUQF8Evvo=
github.com/opencontainers/selinux v1.8.2/go.mod h1:MUIHuUEvKB1wtJjQdOyYRgOnLD2xAPP8dBsCoU0KuF8=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.8.1/go.mod h1:T2/BmBdy8dvIRq1a/8aqjN41wvWlN4lrapLU/GW4pbc=
github.com/pelletier/go-toml v1.9.4 h1:tjENF6MfZAg8e4ZmZTeWaWiT2vXtsoO6+iuOjFhECwM=
github.com/pelletier/go-toml v1.9.4/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pierrec/lz4/v4 v4.1.2 h1:qvY3YFXRQE/XB8MlLzJH7mSzBs74eA2gg52YTk6jUPM=
github.com/pierrec/lz4/v4 v4.1.2/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1-0.20171018195549-f15c970de5b7/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/sftp v1.10.1/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
github.com/prometheus/client_golang v0.0.0-20180209125602-c332b6f63c06/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
github.com/prometheus/client_golang v1.4.0/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20180110214958-89604d197083/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.0-20190522114515-bc1a522cf7b1/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rogpeppe/go-internal v1.8.0 h1:FCbCCtXNOY3UtUuHUYaghJg4y7Fd14rXifAYUAtL9R8=
github.com/rogpeppe/go-internal v1.8.0/go.mod h1:WmiCO8CzOY8rg0OYDC4/i/2WRWAB6poM+XZ2dLUbcbE=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/safchain/ethtool v0.0.0-20190326074333-42ed695e3de8/go.mod h1:Z0q5wiBQGYcxhMZ6gUqHn6pYNLypFAvaL3UvgZLR0U4=
github.com/sagikazarmark/crypt v0.3.0/go.mod h1:uD/D+6UF4SrIR1uGEv7bBNkNqLGqUr43MRiaGWX1Nig=
github.com/sagikazarmark/crypt v0.4.0/go.mod h1:ALv2SRj7GxYV4HO9elxH9nS6M9gW+xDNxqmyJ6RfDFM=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/seccomp/libseccomp-golang v0.9.1/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo=
github.com/shirou/gopsutil/v3 v3.21.1 h1:dA72XXj5WOXIZkAL2iYTKRVcNOOqh4yfLn9Rm7t8BMM=
github.com/shirou/gopsutil/v3 v3.21.1/go.mod h1:igHnfak0qnw1biGeI2qKQvu0ZkwvEkUcCLlYhZzdr/4=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.0.4-0.20170822132746-89742aefa4b2/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.0.6/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/afero v1.3.3/go.mod h1:5KUK8ByomD5Ti5Artl0RtHeI5pTF7MIDuXL3yY520V4=
github.com/spf13/afero v1.6.0 h1:xoax2sJ2DT8S8xA2paPFjDCScCNeWsg75VG0DLRreiY=
github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.4.1 h1:s0hze+J0196ZfEMTs80N7UlFt0BDuQ7Q+JDnHiMWKdA=
github.com/spf13/cast v1.4.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.2-0.20171109065643-2da4a54c5cee/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/cobra v1.3.0 h1:R7cSvGu+Vv+qX0gW5R/85dx2kmmJT5z5NM8ifdYjdn0=
github.com/spf13/cobra v1.3.0/go.mod h1:BrRVncBjOJa/eUcVVm9CE+oC6as8k+VYr4NY7WCi9V4=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1-0.20171106142849-4c012f6dcd95/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/spf13/viper v1.10.0/go.mod h1:SoyBPwAtKDzypXNDFKN5kzH7ppppbGZtls1UpIy5AsM=
github.com/spf13/viper v1.10.1 h1:nuJZuYpG7gTj/XqiUwg8bA0cp1+M2mC3J4g5luUYBKk=
github.com/spf13/viper v1.10.1/go.mod h1:IGlFPqhNAPKRxohIzWpI5QEy4kuI7tcl5WvR+8qy1rU=
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980/go.mod h1:AO3tvPzVZ/ayst6UlUKUv6rcPQInYe3IknH3jYhAKu8=
github.com/stretchr/objx v0.0.0-20180129172003-8a3f7159479f/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0 h1:Hbg2NidpLE8veEBkEZTL3CvlkUIVzuU9jDplZO54c48=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/testify v0.0.0-20180303142811-b89eecf5ca5d/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/tchap/go-patricia v2.2.6+incompatible/go.mod h1:bmLyhP68RS6kStMGxByiQ23RP/odRBOTVjwp2cDyi6I=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go v1.1.7 h1:/68gy2h+1mWMrwZFeD1kQialdSzAb432dtpeJ42ovdo=
github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=
github.com/ugorji/go/codec v1.1.7 h1:2SvQaVZ1ouYrrKKwoSk2pzd4A9evlKJb9oTL+OaLUSs=
github.com/ugorji/go/codec v1.1.7/go.mod h1:Ax+UKWsSmolVDwsd+7N3ZtXu+yMGCf907BLYF3GoBXY=
github.com/ulikunitz/xz v0.5.8/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14=
github.com/ulikunitz/xz v0.5.9 h1:RsKRIA2MO8x56wkkcd3LbtcE/uMszhb6DpRf+3uwa3I=
github.com/ulikunitz/xz v0.5.9/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14=
github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli v1.22.2/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/vishvananda/netlink v0.0.0-20181108222139-023a6dafdcdf/go.mod h1:+SR5DhBJrl6ZM7CoCKvpw5BKroDKQ+PJqOg65H/2ktk=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netlink v1.1.1-0.20201029203352-d40f9887b852/go.mod h1:twkDnbuQxJYemMlGd4JFIcuhgX83tXhKS2B/PRMpOho=
github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc/go.mod h1:ZjcWmFBXmLKZu9Nxj3WKYEafiSqer2rnvPr0en9UNpI=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
github.com/willf/bitset v1.1.11-0.20200630133818-d5bec3311243/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
github.com/willf/bitset v1.1.11/go.mod h1:83CECat5yLh5zVOf4P1ErAgKA5UDvKtgyUABdr3+MjI=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f/go.mod h1:5yf86TLmAcydyeJq5YvxkGPE2fm/u4myDekKRoLuqhs=
github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 h1:nIPpBwaJSVYIxUFsDv3M8ofmx9yWTog9BfvIu0q41lo=
github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8/go.mod h1:HUYIGzjTL3rfEspMxjDjgmT5uz5wzYJKVo23qUhYTos=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs=
github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50/go.mod h1:NUSPSUX/bi6SeDMUh6brw0nXpxHnc96TguQh0+r/ssA=
github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f/go.mod h1:GlGEuHIJweS1mbCqG+7vt2nvWLzLLnRHbXz5JKd/Qbg=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/etcd v0.5.0-alpha.5.0.20200910180754-dd1b699fc489/go.mod h1:yVHk9ub3CSBatqGNg7GRmsnfLWtoW60w4eDYfh7vHDg=
go.etcd.io/etcd/api/v3 v3.5.1/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.etcd.io/etcd/client/pkg/v3 v3.5.1/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/v2 v2.305.1/go.mod h1:pMEacxZW7o8pg4CrFE7pquyCJJzZvkvdD2RibOCCCGs=
go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.1.11 h1:wy28qYRKZgnJTxGxvye5/wgWr1EKjmUDGYox5mGlRlI=
go.uber.org/goleak v1.1.11/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.6.0 h1:y6IPFStTAIT5Ytl7/XYmHvzXQ7S3g/IeZW9hyZ5thw4=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
go.uber.org/zap v1.20.0 h1:N4oPlghZwYG55MlU6LXk/Zp00FVNE9X9wrYO8CEs4lc=
go.uber.org/zap v1.20.0/go.mod h1:wjWOCqI0f2ZZrJF/UufIOkiC8ii6tm1iqIsLo76RfJw=
golang.org/x/crypto v0.0.0-20171113213409-9f005a07e0d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181009213950-7c1a557ab941/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200728195943-123391ffb6de/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5 h1:HWj/xjIHfjYU5nVXpTM0s39J9CbLn7Cc5a7IC5rwsMQ=
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.5.0/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181011144130-49bb7cea24b1/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190619014844-b5b0513f8c1b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201006153459-a7d1128ccaa0/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201010224723-4f7140c49acb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210410081132-afb366fc7cd1/go.mod h1:9tjilg8BloeKEkVJvy7fQ90B1CfIiPueXVOjqfkSzI8=
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210610132358-84b48f89b13b/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210813160813-60bc85c4be6d h1:LO7XpTYMwTqxjLcGWPijK3vRXg1aWdlNOVOHRq45d7c=
golang.org/x/net v0.0.0-20210813160813-60bc85c4be6d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20211005180243-6b3c2da341f1/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190514135907-3a4b5fb9f71f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190522044717-8097e1b27ff5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190812073006-9eafafc0a87e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191210023423-ac6580df4449/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200120151820-655fe14d7479/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200217220822-9197077df867/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200728102440-3e129f6d46b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200817155316-9781c653f443/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200909081042-eff7692f9009/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200916030750-2334cc1a136f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200922070232-aee5d888a860/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201024232916-9f70ab9862d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201117170446-d9b008d0a637/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201202213521-69691e467435/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210426230700-d19ff857e887/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210816183151-1e6c022a8912/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211205182925-97ca703d548d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211210111614-af8b64212486/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220111092808-5a964db01320 h1:0jf+tOCoZ3LyutmCOWpVni1chK4VfFLhRsDK7MhqGRY=
golang.org/x/sys v0.0.0-20220111092808-5a964db01320/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e h1:EHBhcS0mlXEAVwNyO2dLfjToGsyY4j24pTs2ScHnX7s=
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190907020128-2ca718005c18/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.0.0-20160322025152-9bf6e6e569ff/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34qYtE=
google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8=
google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59tHXo=
google.golang.org/api v0.48.0/go.mod h1:71Pr1vy+TAZRPkPs/xlCf5SsU8WjuAWv1Pfjbtukyy4=
google.golang.org/api v0.50.0/go.mod h1:4bNT5pAuq5ji4SRZm+5QIkjny9JAyVD/3gaSihNefaw=
google.golang.org/api v0.51.0/go.mod h1:t4HdrdoNgyN5cbEfm7Lum0lcLDLiise1F8qDKX00sOU=
google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6z3k=
google.golang.org/api v0.55.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
google.golang.org/api v0.56.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdrMgI=
google.golang.org/api v0.59.0/go.mod h1:sT2boj7M9YJxZzgeZqXogmhfmRWDtPzT31xkieUbuZU=
google.golang.org/api v0.61.0/go.mod h1:xQRti5UdCmoCEqFxcz93fTl338AVqDgyaDRuOZ3hg9I=
google.golang.org/api v0.62.0/go.mod h1:dKmwPCydfsad4qCH08MSdgWjfHOyfpd4VtDGgRFdavw=
google.golang.org/api v0.63.0/go.mod h1:gs4ij2ffTRXwuzzgJl/56BdwJaA194ijkfn++9tDuPo=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/cloud v0.0.0-20151119220103-975617b05ea8/go.mod h1:0H1ncTHf11KCFhTc/+EFRbzSCOZx+VUbRMk55Yv5MYk=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190522204451-c2c4e71fbf69/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200117163144-32f20d992d24/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210222152913-aa3ee6e6a81c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210604141403-392c879c8b08/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210608205507-b6d2f5bf0d7d/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210624195500-8bfb893ecb84/go.mod h1:SzzZ/N+nwJDaO1kznhnlzqS8ocJICar6hYhVyhi++24=
google.golang.org/genproto v0.0.0-20210713002101-d411969a0d9a/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
google.golang.org/genproto v0.0.0-20210716133855-ce7ef5c701ea/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
google.golang.org/genproto v0.0.0-20210728212813-7823e685a01f/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
google.golang.org/genproto v0.0.0-20210805201207-89edb61ffb67/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
google.golang.org/genproto v0.0.0-20210813162853-db860fec028c/go.mod h1:cFeNkxwySK631ADgubI+/XFU/xp8FD5KIVV4rj8UC5w=
google.golang.org/genproto v0.0.0-20210821163610-241b8fcbd6c8/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210828152312-66f60bf46e71/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210903162649-d08c68adba83/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210909211513-a8c4777a87af/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211008145708-270636b82663/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211028162531-8db9c33dc351/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211118181313-81c1377c94b1/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211129164237-f09f9a12af12/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211203200212-54befc351ae9/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211206160659-862468c7d6e0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa h1:I0YcKz0I7OAhddo7ya8kMnvprhcWM045PmkBdMO9zN0=
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8=
google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/grpc v1.39.1/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.40.1/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.42.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/grpc v1.43.0 h1:Eeu7bZtDZ2DpRCsLhUlcrLnvYaMK1Gz86a+hMVvELmM=
google.golang.org/grpc v1.43.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/ini.v1 v1.66.2 h1:XfR1dOYubytKy4Shzc2LHrrGhU0lDCfDGG1yLPmpgsI=
gopkg.in/ini.v1 v1.66.2/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/natefinch/lumberjack.v2 v2.0.0 h1:1Lc07Kr7qY4U2YPouBjpCLxpiyxIVoxqXgkXLknAOE8=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/square/go-jose.v2 v2.5.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0 h1:hjy8E9ON/egN1tAYqKb61G10WtihqetD4sz2H+8nIeA=
gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
gotest.tools/v3 v3.1.0 h1:rVV8Tcg/8jHUkPUorwjaMTtemIMVXfIPKiOqnhEhakk=
gotest.tools/v3 v3.1.0/go.mod h1:fHy7eyTmJFO5bQbUsEGQ1v4m2J3Jz9eWL54TP2/ZuYQ=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.20.1/go.mod h1:KqwcCVogGxQY3nBlRpwt+wpAMF/KjaCc7RpywacvqUo=
k8s.io/api v0.20.4/go.mod h1:++lNL1AJMkDymriNniQsWRkMDzRaX2Y/POTUi8yvqYQ=
k8s.io/api v0.20.6/go.mod h1:X9e8Qag6JV/bL5G6bU8sdVRltWKmdHsFUGS3eVndqE8=
k8s.io/apimachinery v0.20.1/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
k8s.io/apimachinery v0.20.4/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
k8s.io/apimachinery v0.20.6/go.mod h1:ejZXtW1Ra6V1O5H8xPBGz+T3+4gfkTCeExAHKU57MAc=
k8s.io/apiserver v0.20.1/go.mod h1:ro5QHeQkgMS7ZGpvf4tSMx6bBOgPfE+f52KwvXfScaU=
k8s.io/apiserver v0.20.4/go.mod h1:Mc80thBKOyy7tbvFtB4kJv1kbdD0eIH8k8vianJcbFM=
k8s.io/apiserver v0.20.6/go.mod h1:QIJXNt6i6JB+0YQRNcS0hdRHJlMhflFmsBDeSgT1r8Q=
k8s.io/client-go v0.20.1/go.mod h1:/zcHdt1TeWSd5HoUe6elJmHSQ6uLLgp4bIJHVEuy+/Y=
k8s.io/client-go v0.20.4/go.mod h1:LiMv25ND1gLUdBeYxBIwKpkSC5IsozMMmOOeSJboP+k=
k8s.io/client-go v0.20.6/go.mod h1:nNQMnOvEUEsOzRRFIIkdmYOjAZrC8bgq0ExboWSU1I0=
k8s.io/component-base v0.20.1/go.mod h1:guxkoJnNoh8LNrbtiQOlyp2Y2XFCZQmrcg2n/DeYNLk=
k8s.io/component-base v0.20.4/go.mod h1:t4p9EdiagbVCJKrQ1RsA5/V4rFQNDfRlevJajlGwgjI=
k8s.io/component-base v0.20.6/go.mod h1:6f1MPBAeI+mvuts3sIdtpjljHWBQ2cIy38oBIWMYnrM=
k8s.io/cri-api v0.17.3/go.mod h1:X1sbHmuXhwaHs9xxYffLqJogVsnI+f6cPRcgPel7ywM=
k8s.io/cri-api v0.20.1/go.mod h1:2JRbKt+BFLTjtrILYVqQK5jqhI+XNdF6UiGMgczeBCI=
k8s.io/cri-api v0.20.4/go.mod h1:2JRbKt+BFLTjtrILYVqQK5jqhI+XNdF6UiGMgczeBCI=
k8s.io/cri-api v0.20.6/go.mod h1:ew44AjNXwyn1s0U4xCKGodU7J1HzBeZ1MpGrpa5r8Yc=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM=
k8s.io/kubernetes v1.13.0/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk=
k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.14/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.15/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.0.3/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
================================================
FILE: mocks/AzureClient.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
io "io"
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// AzureClient is an autogenerated mock type for the AzureClient type
type AzureClient struct {
mock.Mock
}
// Create provides a mock function with given fields: ctx, path, reader, mimeType
func (_m *AzureClient) Create(ctx context.Context, path string, reader io.Reader, mimeType string) (string, error) {
ret := _m.Called(ctx, path, reader, mimeType)
var r0 string
if rf, ok := ret.Get(0).(func(context.Context, string, io.Reader, string) string); ok {
r0 = rf(ctx, path, reader, mimeType)
} else {
r0 = ret.Get(0).(string)
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, string, io.Reader, string) error); ok {
r1 = rf(ctx, path, reader, mimeType)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// CreateUsingSASURL provides a mock function with given fields: ctx, sasURL, reader, mimeType
func (_m *AzureClient) CreateUsingSASURL(ctx context.Context, sasURL string, reader io.Reader, mimeType string) (string, error) {
ret := _m.Called(ctx, sasURL, reader, mimeType)
var r0 string
if rf, ok := ret.Get(0).(func(context.Context, string, io.Reader, string) string); ok {
r0 = rf(ctx, sasURL, reader, mimeType)
} else {
r0 = ret.Get(0).(string)
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, string, io.Reader, string) error); ok {
r1 = rf(ctx, sasURL, reader, mimeType)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Exists provides a mock function with given fields: ctx, path
func (_m *AzureClient) Exists(ctx context.Context, path string) (bool, error) {
ret := _m.Called(ctx, path)
var r0 bool
if rf, ok := ret.Get(0).(func(context.Context, string) bool); ok {
r0 = rf(ctx, path)
} else {
r0 = ret.Get(0).(bool)
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, string) error); ok {
r1 = rf(ctx, path)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Find provides a mock function with given fields: ctx, path
func (_m *AzureClient) Find(ctx context.Context, path string) (io.ReadCloser, error) {
ret := _m.Called(ctx, path)
var r0 io.ReadCloser
if rf, ok := ret.Get(0).(func(context.Context, string) io.ReadCloser); ok {
r0 = rf(ctx, path)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(io.ReadCloser)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, string) error); ok {
r1 = rf(ctx, path)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// FindUsingSASUrl provides a mock function with given fields: ctx, sasURL
func (_m *AzureClient) FindUsingSASUrl(ctx context.Context, sasURL string) (io.ReadCloser, error) {
ret := _m.Called(ctx, sasURL)
var r0 io.ReadCloser
if rf, ok := ret.Get(0).(func(context.Context, string) io.ReadCloser); ok {
r0 = rf(ctx, sasURL)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(io.ReadCloser)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, string) error); ok {
r1 = rf(ctx, sasURL)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetSASURL provides a mock function with given fields: ctx, purpose, query
func (_m *AzureClient) GetSASURL(ctx context.Context, purpose core.SASURLPurpose, query map[string]interface{}) (string, error) {
ret := _m.Called(ctx, purpose, query)
var r0 string
if rf, ok := ret.Get(0).(func(context.Context, core.SASURLPurpose, map[string]interface{}) string); ok {
r0 = rf(ctx, purpose, query)
} else {
r0 = ret.Get(0).(string)
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, core.SASURLPurpose, map[string]interface{}) error); ok {
r1 = rf(ctx, purpose, query)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
type mockConstructorTestingTNewAzureClient interface {
mock.TestingT
Cleanup(func())
}
// NewAzureClient creates a new instance of AzureClient. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewAzureClient(t mockConstructorTestingTNewAzureClient) *AzureClient {
mock := &AzureClient{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/BlockTestService.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
mock "github.com/stretchr/testify/mock"
)
// BlockTestService is an autogenerated mock type for the BlockTestService type
type BlockTestService struct {
mock.Mock
}
// GetBlockTests provides a mock function with given fields: ctx, blocklistYAML, branch
func (_m *BlockTestService) GetBlockTests(ctx context.Context, blocklistYAML []string, branch string) error {
ret := _m.Called(ctx, blocklistYAML, branch)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, []string, string) error); ok {
r0 = rf(ctx, blocklistYAML, branch)
} else {
r0 = ret.Error(0)
}
return r0
}
type mockConstructorTestingTNewBlockTestService interface {
mock.TestingT
Cleanup(func())
}
// NewBlockTestService creates a new instance of BlockTestService. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewBlockTestService(t mockConstructorTestingTNewBlockTestService) *BlockTestService {
mock := &BlockTestService{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/Builder.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// Builder is an autogenerated mock type for the Builder type
type Builder struct {
mock.Mock
}
// GetDriver provides a mock function with given fields: version
func (_m *Builder) GetDriver(version int) (core.Driver, error) {
ret := _m.Called(version)
var r0 core.Driver
if rf, ok := ret.Get(0).(func(int) core.Driver); ok {
r0 = rf(version)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(core.Driver)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(int) error); ok {
r1 = rf(version)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
type mockConstructorTestingTNewBuilder interface {
mock.TestingT
Cleanup(func())
}
// NewBuilder creates a new instance of Builder. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewBuilder(t mockConstructorTestingTNewBuilder) *Builder {
mock := &Builder{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/CacheStore.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
mock "github.com/stretchr/testify/mock"
)
// CacheStore is an autogenerated mock type for the CacheStore type
type CacheStore struct {
mock.Mock
}
// CacheWorkspace provides a mock function with given fields: ctx, subModule
func (_m *CacheStore) CacheWorkspace(ctx context.Context, subModule string) error {
ret := _m.Called(ctx, subModule)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, string) error); ok {
r0 = rf(ctx, subModule)
} else {
r0 = ret.Error(0)
}
return r0
}
// Download provides a mock function with given fields: ctx, cacheKey
func (_m *CacheStore) Download(ctx context.Context, cacheKey string) error {
ret := _m.Called(ctx, cacheKey)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, string) error); ok {
r0 = rf(ctx, cacheKey)
} else {
r0 = ret.Error(0)
}
return r0
}
// ExtractWorkspace provides a mock function with given fields: ctx, subModule
func (_m *CacheStore) ExtractWorkspace(ctx context.Context, subModule string) error {
ret := _m.Called(ctx, subModule)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, string) error); ok {
r0 = rf(ctx, subModule)
} else {
r0 = ret.Error(0)
}
return r0
}
// Upload provides a mock function with given fields: ctx, cacheKey, itemsToCompress
func (_m *CacheStore) Upload(ctx context.Context, cacheKey string, itemsToCompress ...string) error {
_va := make([]interface{}, len(itemsToCompress))
for _i := range itemsToCompress {
_va[_i] = itemsToCompress[_i]
}
var _ca []interface{}
_ca = append(_ca, ctx, cacheKey)
_ca = append(_ca, _va...)
ret := _m.Called(_ca...)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, string, ...string) error); ok {
r0 = rf(ctx, cacheKey, itemsToCompress...)
} else {
r0 = ret.Error(0)
}
return r0
}
type mockConstructorTestingTNewCacheStore interface {
mock.TestingT
Cleanup(func())
}
// NewCacheStore creates a new instance of CacheStore. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewCacheStore(t mockConstructorTestingTNewCacheStore) *CacheStore {
mock := &CacheStore{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/CoverageService.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// CoverageService is an autogenerated mock type for the CoverageService type
type CoverageService struct {
mock.Mock
}
// MergeAndUpload provides a mock function with given fields: ctx, payload
func (_m *CoverageService) MergeAndUpload(ctx context.Context, payload *core.Payload) error {
ret := _m.Called(ctx, payload)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *core.Payload) error); ok {
r0 = rf(ctx, payload)
} else {
r0 = ret.Error(0)
}
return r0
}
type mockConstructorTestingTNewCoverageService interface {
mock.TestingT
Cleanup(func())
}
// NewCoverageService creates a new instance of CoverageService. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewCoverageService(t mockConstructorTestingTNewCoverageService) *CoverageService {
mock := &CoverageService{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/DiffManager.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// DiffManager is an autogenerated mock type for the DiffManager type
type DiffManager struct {
mock.Mock
}
// GetChangedFiles provides a mock function with given fields: ctx, payload, oauth
func (_m *DiffManager) GetChangedFiles(ctx context.Context, payload *core.Payload, oauth *core.Oauth) (map[string]int, error) {
ret := _m.Called(ctx, payload, oauth)
var r0 map[string]int
if rf, ok := ret.Get(0).(func(context.Context, *core.Payload, *core.Oauth) map[string]int); ok {
r0 = rf(ctx, payload, oauth)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(map[string]int)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, *core.Payload, *core.Oauth) error); ok {
r1 = rf(ctx, payload, oauth)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
type mockConstructorTestingTNewDiffManager interface {
mock.TestingT
Cleanup(func())
}
// NewDiffManager creates a new instance of DiffManager. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewDiffManager(t mockConstructorTestingTNewDiffManager) *DiffManager {
mock := &DiffManager{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/DockerRunner.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// DockerRunner is an autogenerated mock type for the DockerRunner type
type DockerRunner struct {
mock.Mock
}
// Create provides a mock function with given fields: _a0, _a1
func (_m *DockerRunner) Create(_a0 context.Context, _a1 *core.RunnerOptions) core.ContainerStatus {
ret := _m.Called(_a0, _a1)
var r0 core.ContainerStatus
if rf, ok := ret.Get(0).(func(context.Context, *core.RunnerOptions) core.ContainerStatus); ok {
r0 = rf(_a0, _a1)
} else {
r0 = ret.Get(0).(core.ContainerStatus)
}
return r0
}
// Destroy provides a mock function with given fields: ctx, r
func (_m *DockerRunner) Destroy(ctx context.Context, r *core.RunnerOptions) error {
ret := _m.Called(ctx, r)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *core.RunnerOptions) error); ok {
r0 = rf(ctx, r)
} else {
r0 = ret.Error(0)
}
return r0
}
// GetInfo provides a mock function with given fields: _a0
func (_m *DockerRunner) GetInfo(_a0 context.Context) (float32, int64) {
ret := _m.Called(_a0)
var r0 float32
if rf, ok := ret.Get(0).(func(context.Context) float32); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(float32)
}
var r1 int64
if rf, ok := ret.Get(1).(func(context.Context) int64); ok {
r1 = rf(_a0)
} else {
r1 = ret.Get(1).(int64)
}
return r0, r1
}
// Initiate provides a mock function with given fields: _a0, _a1, _a2
func (_m *DockerRunner) Initiate(_a0 context.Context, _a1 *core.RunnerOptions, _a2 chan core.ContainerStatus) {
_m.Called(_a0, _a1, _a2)
}
// KillRunningDocker provides a mock function with given fields: ctx
func (_m *DockerRunner) KillRunningDocker(ctx context.Context) {
_m.Called(ctx)
}
// PullImage provides a mock function with given fields: containerImageConfig, r
func (_m *DockerRunner) PullImage(containerImageConfig *core.ContainerImageConfig, r *core.RunnerOptions) error {
ret := _m.Called(containerImageConfig, r)
var r0 error
if rf, ok := ret.Get(0).(func(*core.ContainerImageConfig, *core.RunnerOptions) error); ok {
r0 = rf(containerImageConfig, r)
} else {
r0 = ret.Error(0)
}
return r0
}
// Run provides a mock function with given fields: _a0, _a1
func (_m *DockerRunner) Run(_a0 context.Context, _a1 *core.RunnerOptions) core.ContainerStatus {
ret := _m.Called(_a0, _a1)
var r0 core.ContainerStatus
if rf, ok := ret.Get(0).(func(context.Context, *core.RunnerOptions) core.ContainerStatus); ok {
r0 = rf(_a0, _a1)
} else {
r0 = ret.Get(0).(core.ContainerStatus)
}
return r0
}
// WaitForCompletion provides a mock function with given fields: ctx, r
func (_m *DockerRunner) WaitForCompletion(ctx context.Context, r *core.RunnerOptions) error {
ret := _m.Called(ctx, r)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *core.RunnerOptions) error); ok {
r0 = rf(ctx, r)
} else {
r0 = ret.Error(0)
}
return r0
}
type mockConstructorTestingTNewDockerRunner interface {
mock.TestingT
Cleanup(func())
}
// NewDockerRunner creates a new instance of DockerRunner. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewDockerRunner(t mockConstructorTestingTNewDockerRunner) *DockerRunner {
mock := &DockerRunner{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/Driver.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// Driver is an autogenerated mock type for the Driver type
type Driver struct {
mock.Mock
}
// RunDiscovery provides a mock function with given fields: ctx, payload, taskPayload, oauth, coverageDir, secretMap
func (_m *Driver) RunDiscovery(ctx context.Context, payload *core.Payload, taskPayload *core.TaskPayload, oauth *core.Oauth, coverageDir string, secretMap map[string]string) error {
ret := _m.Called(ctx, payload, taskPayload, oauth, coverageDir, secretMap)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *core.Payload, *core.TaskPayload, *core.Oauth, string, map[string]string) error); ok {
r0 = rf(ctx, payload, taskPayload, oauth, coverageDir, secretMap)
} else {
r0 = ret.Error(0)
}
return r0
}
// RunExecution provides a mock function with given fields: ctx, payload, taskPayload, oauth, coverageDir, secretMap
func (_m *Driver) RunExecution(ctx context.Context, payload *core.Payload, taskPayload *core.TaskPayload, oauth *core.Oauth, coverageDir string, secretMap map[string]string) error {
ret := _m.Called(ctx, payload, taskPayload, oauth, coverageDir, secretMap)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *core.Payload, *core.TaskPayload, *core.Oauth, string, map[string]string) error); ok {
r0 = rf(ctx, payload, taskPayload, oauth, coverageDir, secretMap)
} else {
r0 = ret.Error(0)
}
return r0
}
type mockConstructorTestingTNewDriver interface {
mock.TestingT
Cleanup(func())
}
// NewDriver creates a new instance of Driver. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewDriver(t mockConstructorTestingTNewDriver) *Driver {
mock := &Driver{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/ExecutionManager.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// ExecutionManager is an autogenerated mock type for the ExecutionManager type
type ExecutionManager struct {
mock.Mock
}
// ExecuteInternalCommands provides a mock function with given fields: ctx, commandType, commands, cwd, envMap, secretData
func (_m *ExecutionManager) ExecuteInternalCommands(ctx context.Context, commandType core.CommandType, commands []string, cwd string, envMap map[string]string, secretData map[string]string) error {
ret := _m.Called(ctx, commandType, commands, cwd, envMap, secretData)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, core.CommandType, []string, string, map[string]string, map[string]string) error); ok {
r0 = rf(ctx, commandType, commands, cwd, envMap, secretData)
} else {
r0 = ret.Error(0)
}
return r0
}
// ExecuteUserCommands provides a mock function with given fields: ctx, commandType, payload, runConfig, secretData, logwriter, cwd
func (_m *ExecutionManager) ExecuteUserCommands(ctx context.Context, commandType core.CommandType, payload *core.Payload, runConfig *core.Run, secretData map[string]string, logwriter core.LogWriterStrategy, cwd string) error {
ret := _m.Called(ctx, commandType, payload, runConfig, secretData, logwriter, cwd)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, core.CommandType, *core.Payload, *core.Run, map[string]string, core.LogWriterStrategy, string) error); ok {
r0 = rf(ctx, commandType, payload, runConfig, secretData, logwriter, cwd)
} else {
r0 = ret.Error(0)
}
return r0
}
// GetEnvVariables provides a mock function with given fields: envMap, secretData
func (_m *ExecutionManager) GetEnvVariables(envMap map[string]string, secretData map[string]string) ([]string, error) {
ret := _m.Called(envMap, secretData)
var r0 []string
if rf, ok := ret.Get(0).(func(map[string]string, map[string]string) []string); ok {
r0 = rf(envMap, secretData)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]string)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(map[string]string, map[string]string) error); ok {
r1 = rf(envMap, secretData)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
type mockConstructorTestingTNewExecutionManager interface {
mock.TestingT
Cleanup(func())
}
// NewExecutionManager creates a new instance of ExecutionManager. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewExecutionManager(t mockConstructorTestingTNewExecutionManager) *ExecutionManager {
mock := &ExecutionManager{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/GitManager.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// GitManager is an autogenerated mock type for the GitManager type
type GitManager struct {
mock.Mock
}
// Clone provides a mock function with given fields: ctx, payload, oauth
func (_m *GitManager) Clone(ctx context.Context, payload *core.Payload, oauth *core.Oauth) error {
ret := _m.Called(ctx, payload, oauth)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *core.Payload, *core.Oauth) error); ok {
r0 = rf(ctx, payload, oauth)
} else {
r0 = ret.Error(0)
}
return r0
}
type mockConstructorTestingTNewGitManager interface {
mock.TestingT
Cleanup(func())
}
// NewGitManager creates a new instance of GitManager. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewGitManager(t mockConstructorTestingTNewGitManager) *GitManager {
mock := &GitManager{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/ListSubModuleService.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
mock "github.com/stretchr/testify/mock"
)
// ListSubModuleService is an autogenerated mock type for the ListSubModuleService type
type ListSubModuleService struct {
mock.Mock
}
// Send provides a mock function with given fields: ctx, buildID, totalSubmodule
func (_m *ListSubModuleService) Send(ctx context.Context, buildID string, totalSubmodule int) error {
ret := _m.Called(ctx, buildID, totalSubmodule)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, string, int) error); ok {
r0 = rf(ctx, buildID, totalSubmodule)
} else {
r0 = ret.Error(0)
}
return r0
}
type mockConstructorTestingTNewListSubModuleService interface {
mock.TestingT
Cleanup(func())
}
// NewListSubModuleService creates a new instance of ListSubModuleService. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewListSubModuleService(t mockConstructorTestingTNewListSubModuleService) *ListSubModuleService {
mock := &ListSubModuleService{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/LogWriterStrategy.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
io "io"
mock "github.com/stretchr/testify/mock"
)
// LogWriterStrategy is an autogenerated mock type for the LogWriterStrategy type
type LogWriterStrategy struct {
mock.Mock
}
// Write provides a mock function with given fields: ctx, reader
func (_m *LogWriterStrategy) Write(ctx context.Context, reader io.Reader) <-chan error {
ret := _m.Called(ctx, reader)
var r0 <-chan error
if rf, ok := ret.Get(0).(func(context.Context, io.Reader) <-chan error); ok {
r0 = rf(ctx, reader)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(<-chan error)
}
}
return r0
}
type mockConstructorTestingTNewLogWriterStrategy interface {
mock.TestingT
Cleanup(func())
}
// NewLogWriterStrategy creates a new instance of LogWriterStrategy. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewLogWriterStrategy(t mockConstructorTestingTNewLogWriterStrategy) *LogWriterStrategy {
mock := &LogWriterStrategy{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/Logger.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
lumber "github.com/LambdaTest/test-at-scale/pkg/lumber"
mock "github.com/stretchr/testify/mock"
)
// Logger is an autogenerated mock type for the Logger type
type Logger struct {
mock.Mock
}
// Debugf provides a mock function with given fields: format, args
func (_m *Logger) Debugf(format string, args ...interface{}) {
var _ca []interface{}
_ca = append(_ca, format)
_ca = append(_ca, args...)
_m.Called(_ca...)
}
// Errorf provides a mock function with given fields: format, args
func (_m *Logger) Errorf(format string, args ...interface{}) {
var _ca []interface{}
_ca = append(_ca, format)
_ca = append(_ca, args...)
_m.Called(_ca...)
}
// Fatalf provides a mock function with given fields: format, args
func (_m *Logger) Fatalf(format string, args ...interface{}) {
var _ca []interface{}
_ca = append(_ca, format)
_ca = append(_ca, args...)
_m.Called(_ca...)
}
// Infof provides a mock function with given fields: format, args
func (_m *Logger) Infof(format string, args ...interface{}) {
var _ca []interface{}
_ca = append(_ca, format)
_ca = append(_ca, args...)
_m.Called(_ca...)
}
// Panicf provides a mock function with given fields: format, args
func (_m *Logger) Panicf(format string, args ...interface{}) {
var _ca []interface{}
_ca = append(_ca, format)
_ca = append(_ca, args...)
_m.Called(_ca...)
}
// Warnf provides a mock function with given fields: format, args
func (_m *Logger) Warnf(format string, args ...interface{}) {
var _ca []interface{}
_ca = append(_ca, format)
_ca = append(_ca, args...)
_m.Called(_ca...)
}
// WithFields provides a mock function with given fields: keyValues
func (_m *Logger) WithFields(keyValues lumber.Fields) lumber.Logger {
ret := _m.Called(keyValues)
var r0 lumber.Logger
if rf, ok := ret.Get(0).(func(lumber.Fields) lumber.Logger); ok {
r0 = rf(keyValues)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(lumber.Logger)
}
}
return r0
}
type mockConstructorTestingTNewLogger interface {
mock.TestingT
Cleanup(func())
}
// NewLogger creates a new instance of Logger. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewLogger(t mockConstructorTestingTNewLogger) *Logger {
mock := &Logger{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/PayloadManager.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// PayloadManager is an autogenerated mock type for the PayloadManager type
type PayloadManager struct {
mock.Mock
}
// FetchPayload provides a mock function with given fields: ctx, payloadAddress
func (_m *PayloadManager) FetchPayload(ctx context.Context, payloadAddress string) (*core.Payload, error) {
ret := _m.Called(ctx, payloadAddress)
var r0 *core.Payload
if rf, ok := ret.Get(0).(func(context.Context, string) *core.Payload); ok {
r0 = rf(ctx, payloadAddress)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*core.Payload)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, string) error); ok {
r1 = rf(ctx, payloadAddress)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// ValidatePayload provides a mock function with given fields: ctx, payload
func (_m *PayloadManager) ValidatePayload(ctx context.Context, payload *core.Payload) error {
ret := _m.Called(ctx, payload)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *core.Payload) error); ok {
r0 = rf(ctx, payload)
} else {
r0 = ret.Error(0)
}
return r0
}
type mockConstructorTestingTNewPayloadManager interface {
mock.TestingT
Cleanup(func())
}
// NewPayloadManager creates a new instance of PayloadManager. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewPayloadManager(t mockConstructorTestingTNewPayloadManager) *PayloadManager {
mock := &PayloadManager{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/Requests.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
mock "github.com/stretchr/testify/mock"
)
// Requests is an autogenerated mock type for the Requests type
type Requests struct {
mock.Mock
}
// MakeAPIRequest provides a mock function with given fields: ctx, httpMethod, endpoint, body, params, headers
func (_m *Requests) MakeAPIRequest(ctx context.Context, httpMethod string, endpoint string, body []byte, params map[string]interface{}, headers map[string]string) ([]byte, int, error) {
ret := _m.Called(ctx, httpMethod, endpoint, body, params, headers)
var r0 []byte
if rf, ok := ret.Get(0).(func(context.Context, string, string, []byte, map[string]interface{}, map[string]string) []byte); ok {
r0 = rf(ctx, httpMethod, endpoint, body, params, headers)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]byte)
}
}
var r1 int
if rf, ok := ret.Get(1).(func(context.Context, string, string, []byte, map[string]interface{}, map[string]string) int); ok {
r1 = rf(ctx, httpMethod, endpoint, body, params, headers)
} else {
r1 = ret.Get(1).(int)
}
var r2 error
if rf, ok := ret.Get(2).(func(context.Context, string, string, []byte, map[string]interface{}, map[string]string) error); ok {
r2 = rf(ctx, httpMethod, endpoint, body, params, headers)
} else {
r2 = ret.Error(2)
}
return r0, r1, r2
}
type mockConstructorTestingTNewRequests interface {
mock.TestingT
Cleanup(func())
}
// NewRequests creates a new instance of Requests. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewRequests(t mockConstructorTestingTNewRequests) *Requests {
mock := &Requests{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/SecretParser.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// SecretParser is an autogenerated mock type for the SecretParser type
type SecretParser struct {
mock.Mock
}
// Expired provides a mock function with given fields: token
func (_m *SecretParser) Expired(token *core.Oauth) bool {
ret := _m.Called(token)
var r0 bool
if rf, ok := ret.Get(0).(func(*core.Oauth) bool); ok {
r0 = rf(token)
} else {
r0 = ret.Get(0).(bool)
}
return r0
}
// GetOauthSecret provides a mock function with given fields: filepath
func (_m *SecretParser) GetOauthSecret(filepath string) (*core.Oauth, error) {
ret := _m.Called(filepath)
var r0 *core.Oauth
if rf, ok := ret.Get(0).(func(string) *core.Oauth); ok {
r0 = rf(filepath)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*core.Oauth)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(string) error); ok {
r1 = rf(filepath)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetRepoSecret provides a mock function with given fields: _a0
func (_m *SecretParser) GetRepoSecret(_a0 string) (map[string]string, error) {
ret := _m.Called(_a0)
var r0 map[string]string
if rf, ok := ret.Get(0).(func(string) map[string]string); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(map[string]string)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(string) error); ok {
r1 = rf(_a0)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// SubstituteSecret provides a mock function with given fields: command, secretData
func (_m *SecretParser) SubstituteSecret(command string, secretData map[string]string) (string, error) {
ret := _m.Called(command, secretData)
var r0 string
if rf, ok := ret.Get(0).(func(string, map[string]string) string); ok {
r0 = rf(command, secretData)
} else {
r0 = ret.Get(0).(string)
}
var r1 error
if rf, ok := ret.Get(1).(func(string, map[string]string) error); ok {
r1 = rf(command, secretData)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
type mockConstructorTestingTNewSecretParser interface {
mock.TestingT
Cleanup(func())
}
// NewSecretParser creates a new instance of SecretParser. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewSecretParser(t mockConstructorTestingTNewSecretParser) *SecretParser {
mock := &SecretParser{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/SecretsManager.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
config "github.com/LambdaTest/test-at-scale/config"
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// SecretsManager is an autogenerated mock type for the SecretsManager type
type SecretsManager struct {
mock.Mock
}
// GetDockerSecrets provides a mock function with given fields: r
func (_m *SecretsManager) GetDockerSecrets(r *core.RunnerOptions) (core.ContainerImageConfig, error) {
ret := _m.Called(r)
var r0 core.ContainerImageConfig
if rf, ok := ret.Get(0).(func(*core.RunnerOptions) core.ContainerImageConfig); ok {
r0 = rf(r)
} else {
r0 = ret.Get(0).(core.ContainerImageConfig)
}
var r1 error
if rf, ok := ret.Get(1).(func(*core.RunnerOptions) error); ok {
r1 = rf(r)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetLambdatestSecrets provides a mock function with given fields:
func (_m *SecretsManager) GetLambdatestSecrets() *config.LambdatestConfig {
ret := _m.Called()
var r0 *config.LambdatestConfig
if rf, ok := ret.Get(0).(func() *config.LambdatestConfig); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*config.LambdatestConfig)
}
}
return r0
}
// GetSynapseName provides a mock function with given fields:
func (_m *SecretsManager) GetSynapseName() string {
ret := _m.Called()
var r0 string
if rf, ok := ret.Get(0).(func() string); ok {
r0 = rf()
} else {
r0 = ret.Get(0).(string)
}
return r0
}
// WriteGitSecrets provides a mock function with given fields: path
func (_m *SecretsManager) WriteGitSecrets(path string) error {
ret := _m.Called(path)
var r0 error
if rf, ok := ret.Get(0).(func(string) error); ok {
r0 = rf(path)
} else {
r0 = ret.Error(0)
}
return r0
}
// WriteRepoSecrets provides a mock function with given fields: repo, path
func (_m *SecretsManager) WriteRepoSecrets(repo string, path string) error {
ret := _m.Called(repo, path)
var r0 error
if rf, ok := ret.Get(0).(func(string, string) error); ok {
r0 = rf(repo, path)
} else {
r0 = ret.Error(0)
}
return r0
}
type mockConstructorTestingTNewSecretsManager interface {
mock.TestingT
Cleanup(func())
}
// NewSecretsManager creates a new instance of SecretsManager. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewSecretsManager(t mockConstructorTestingTNewSecretsManager) *SecretsManager {
mock := &SecretsManager{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/SynapseManager.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
mock "github.com/stretchr/testify/mock"
sync "sync"
)
// SynapseManager is an autogenerated mock type for the SynapseManager type
type SynapseManager struct {
mock.Mock
}
// InitiateConnection provides a mock function with given fields: ctx, wg, connectionFailed
func (_m *SynapseManager) InitiateConnection(ctx context.Context, wg *sync.WaitGroup, connectionFailed chan struct{}) {
_m.Called(ctx, wg, connectionFailed)
}
type mockConstructorTestingTNewSynapseManager interface {
mock.TestingT
Cleanup(func())
}
// NewSynapseManager creates a new instance of SynapseManager. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewSynapseManager(t mockConstructorTestingTNewSynapseManager) *SynapseManager {
mock := &SynapseManager{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/TASConfigManager.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// TASConfigManager is an autogenerated mock type for the TASConfigManager type
type TASConfigManager struct {
mock.Mock
}
// GetVersion provides a mock function with given fields: path
func (_m *TASConfigManager) GetVersion(path string) (int, error) {
ret := _m.Called(path)
var r0 int
if rf, ok := ret.Get(0).(func(string) int); ok {
r0 = rf(path)
} else {
r0 = ret.Get(0).(int)
}
var r1 error
if rf, ok := ret.Get(1).(func(string) error); ok {
r1 = rf(path)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// LoadAndValidate provides a mock function with given fields: ctx, version, path, eventType, licenseTier
func (_m *TASConfigManager) LoadAndValidate(ctx context.Context, version int, path string, eventType core.EventType, licenseTier core.Tier) (interface{}, error) {
ret := _m.Called(ctx, version, path, eventType, licenseTier)
var r0 interface{}
if rf, ok := ret.Get(0).(func(context.Context, int, string, core.EventType, core.Tier) interface{}); ok {
r0 = rf(ctx, version, path, eventType, licenseTier)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(interface{})
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, int, string, core.EventType, core.Tier) error); ok {
r1 = rf(ctx, version, path, eventType, licenseTier)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
type mockConstructorTestingTNewTASConfigManager interface {
mock.TestingT
Cleanup(func())
}
// NewTASConfigManager creates a new instance of TASConfigManager. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewTASConfigManager(t mockConstructorTestingTNewTASConfigManager) *TASConfigManager {
mock := &TASConfigManager{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/Task.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// Task is an autogenerated mock type for the Task type
type Task struct {
mock.Mock
}
// UpdateStatus provides a mock function with given fields: ctx, payload
func (_m *Task) UpdateStatus(ctx context.Context, payload *core.TaskPayload) error {
ret := _m.Called(ctx, payload)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *core.TaskPayload) error); ok {
r0 = rf(ctx, payload)
} else {
r0 = ret.Error(0)
}
return r0
}
type mockConstructorTestingTNewTask interface {
mock.TestingT
Cleanup(func())
}
// NewTask creates a new instance of Task. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewTask(t mockConstructorTestingTNewTask) *Task {
mock := &Task{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/TestDiscoveryService.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// TestDiscoveryService is an autogenerated mock type for the TestDiscoveryService type
type TestDiscoveryService struct {
mock.Mock
}
// Discover provides a mock function with given fields: ctx, args
func (_m *TestDiscoveryService) Discover(ctx context.Context, args *core.DiscoveyArgs) (*core.DiscoveryResult, error) {
ret := _m.Called(ctx, args)
var r0 *core.DiscoveryResult
if rf, ok := ret.Get(0).(func(context.Context, *core.DiscoveyArgs) *core.DiscoveryResult); ok {
r0 = rf(ctx, args)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*core.DiscoveryResult)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, *core.DiscoveyArgs) error); ok {
r1 = rf(ctx, args)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// SendResult provides a mock function with given fields: ctx, testDiscoveryResult
func (_m *TestDiscoveryService) SendResult(ctx context.Context, testDiscoveryResult *core.DiscoveryResult) error {
ret := _m.Called(ctx, testDiscoveryResult)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *core.DiscoveryResult) error); ok {
r0 = rf(ctx, testDiscoveryResult)
} else {
r0 = ret.Error(0)
}
return r0
}
type mockConstructorTestingTNewTestDiscoveryService interface {
mock.TestingT
Cleanup(func())
}
// NewTestDiscoveryService creates a new instance of TestDiscoveryService. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewTestDiscoveryService(t mockConstructorTestingTNewTestDiscoveryService) *TestDiscoveryService {
mock := &TestDiscoveryService{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/TestExecutionService.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
core "github.com/LambdaTest/test-at-scale/pkg/core"
mock "github.com/stretchr/testify/mock"
)
// TestExecutionService is an autogenerated mock type for the TestExecutionService type
type TestExecutionService struct {
mock.Mock
}
// Run provides a mock function with given fields: ctx, testExecutionArgs
func (_m *TestExecutionService) Run(ctx context.Context, testExecutionArgs *core.TestExecutionArgs) (*core.ExecutionResults, error) {
ret := _m.Called(ctx, testExecutionArgs)
var r0 *core.ExecutionResults
if rf, ok := ret.Get(0).(func(context.Context, *core.TestExecutionArgs) *core.ExecutionResults); ok {
r0 = rf(ctx, testExecutionArgs)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*core.ExecutionResults)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, *core.TestExecutionArgs) error); ok {
r1 = rf(ctx, testExecutionArgs)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// SendResults provides a mock function with given fields: ctx, payload
func (_m *TestExecutionService) SendResults(ctx context.Context, payload *core.ExecutionResults) (*core.TestReportResponsePayload, error) {
ret := _m.Called(ctx, payload)
var r0 *core.TestReportResponsePayload
if rf, ok := ret.Get(0).(func(context.Context, *core.ExecutionResults) *core.TestReportResponsePayload); ok {
r0 = rf(ctx, payload)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*core.TestReportResponsePayload)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, *core.ExecutionResults) error); ok {
r1 = rf(ctx, payload)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
type mockConstructorTestingTNewTestExecutionService interface {
mock.TestingT
Cleanup(func())
}
// NewTestExecutionService creates a new instance of TestExecutionService. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewTestExecutionService(t mockConstructorTestingTNewTestExecutionService) *TestExecutionService {
mock := &TestExecutionService{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/TestStats.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import mock "github.com/stretchr/testify/mock"
// TestStats is an autogenerated mock type for the TestStats type
type TestStats struct {
mock.Mock
}
// CaptureTestStats provides a mock function with given fields: pid, collectStats
func (_m *TestStats) CaptureTestStats(pid int32, collectStats bool) error {
ret := _m.Called(pid, collectStats)
var r0 error
if rf, ok := ret.Get(0).(func(int32, bool) error); ok {
r0 = rf(pid, collectStats)
} else {
r0 = ret.Error(0)
}
return r0
}
type mockConstructorTestingTNewTestStats interface {
mock.TestingT
Cleanup(func())
}
// NewTestStats creates a new instance of TestStats. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewTestStats(t mockConstructorTestingTNewTestStats) *TestStats {
mock := &TestStats{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: mocks/ZstdCompressor.go
================================================
// Code generated by mockery v2.14.0. DO NOT EDIT.
package mocks
import (
context "context"
mock "github.com/stretchr/testify/mock"
)
// ZstdCompressor is an autogenerated mock type for the ZstdCompressor type
type ZstdCompressor struct {
mock.Mock
}
// Compress provides a mock function with given fields: ctx, compressedFileName, preservePath, workingDirectory, filesToCompress
func (_m *ZstdCompressor) Compress(ctx context.Context, compressedFileName string, preservePath bool, workingDirectory string, filesToCompress ...string) error {
_va := make([]interface{}, len(filesToCompress))
for _i := range filesToCompress {
_va[_i] = filesToCompress[_i]
}
var _ca []interface{}
_ca = append(_ca, ctx, compressedFileName, preservePath, workingDirectory)
_ca = append(_ca, _va...)
ret := _m.Called(_ca...)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, string, bool, string, ...string) error); ok {
r0 = rf(ctx, compressedFileName, preservePath, workingDirectory, filesToCompress...)
} else {
r0 = ret.Error(0)
}
return r0
}
// Decompress provides a mock function with given fields: ctx, filePath, preservePath, workingDirectory
func (_m *ZstdCompressor) Decompress(ctx context.Context, filePath string, preservePath bool, workingDirectory string) error {
ret := _m.Called(ctx, filePath, preservePath, workingDirectory)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, string, bool, string) error); ok {
r0 = rf(ctx, filePath, preservePath, workingDirectory)
} else {
r0 = ret.Error(0)
}
return r0
}
type mockConstructorTestingTNewZstdCompressor interface {
mock.TestingT
Cleanup(func())
}
// NewZstdCompressor creates a new instance of ZstdCompressor. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewZstdCompressor(t mockConstructorTestingTNewZstdCompressor) *ZstdCompressor {
mock := &ZstdCompressor{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
================================================
FILE: pkg/api/health/health.go
================================================
package health
import (
"net/http"
"github.com/gin-gonic/gin"
)
// Handler for health API
func Handler(c *gin.Context) {
c.Data(http.StatusOK, gin.MIMEPlain, []byte(http.StatusText(http.StatusOK)))
}
================================================
FILE: pkg/api/health/health_test.go
================================================
package health
import (
"fmt"
"net/http"
"net/http/httptest"
"reflect"
"testing"
"github.com/gin-gonic/gin"
)
func TestHandler(t *testing.T) {
tests := []struct {
name string
httpRequest *http.Request
wantResponseCode int
wantStatusText string
}{
{"Test handler health route for success", httptest.NewRequest(http.MethodGet, "/health", nil), 200, http.StatusText(http.StatusOK)},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
resp := httptest.NewRecorder()
gin.SetMode(gin.TestMode)
c, _ := gin.CreateTestContext(resp)
c.Request = tt.httpRequest
router := gin.Default()
router.GET("/health", Handler)
router.ServeHTTP(resp, c.Request)
fmt.Printf("Response code: %v\n", resp.Code)
if resp.Code != tt.wantResponseCode {
t.Errorf("Router.Handler() responseCode = %v, want = %v\n", resp.Code, tt.wantResponseCode)
return
}
if !reflect.DeepEqual(resp.Body.String(), tt.wantStatusText) {
t.Errorf("Router.Handler() statusText = %v, want = %v\n", resp.Body.String(), tt.wantStatusText)
}
})
}
}
================================================
FILE: pkg/api/results/results.go
================================================
package results
import (
"net/http"
"github.com/LambdaTest/test-at-scale/pkg/service/teststats"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/gin-gonic/gin"
)
// Handler captures the test execution results from nucleus
func Handler(logger lumber.Logger, ts *teststats.ProcStats) gin.HandlerFunc {
return func(c *gin.Context) {
request := core.ExecutionResults{}
if err := c.ShouldBindJSON(&request); err != nil {
logger.Errorf("error while binding json %v", err)
c.JSON(http.StatusBadRequest, gin.H{"message": err.Error()})
return
}
go func() {
ts.ExecutionResultInputChannel <- request
}()
c.Data(http.StatusOK, gin.MIMEPlain, []byte(http.StatusText(http.StatusOK)))
}
}
================================================
FILE: pkg/api/results/results_test.go
================================================
package results
import (
"bytes"
"fmt"
"net/http"
"net/http/httptest"
"reflect"
"testing"
"github.com/LambdaTest/test-at-scale/pkg/service/teststats"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/gin-gonic/gin"
)
// NOTE: Tests in this package are meant to be run in a Linux environment
func TestHandler(t *testing.T) {
logger, _ := testutils.GetLogger()
cfg, _ := testutils.GetConfig()
ts, err := teststats.New(cfg, logger)
if err != nil {
t.Errorf("Error creating teststats service: %v", err)
}
tests := []struct {
name string
httpRequest *http.Request
wantResponseCode int
wantStatusText string
}{
{
"Test handler result route",
httptest.NewRequest(http.MethodPost, "/results", bytes.NewBuffer([]byte(`{"TaskID" : "123"}`))),
200,
http.StatusText(http.StatusOK),
},
{
"Test handler result route for error in jsonBinding and hence http.StatusBadRequest",
httptest.NewRequest(http.MethodPost, "/results", nil),
http.StatusBadRequest,
`{"message":"EOF"}`,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
resp := httptest.NewRecorder()
gin.SetMode(gin.TestMode)
c, _ := gin.CreateTestContext(resp)
c.Request = tt.httpRequest
router := gin.Default()
router.POST("/results", Handler(logger, ts))
router.ServeHTTP(resp, c.Request)
fmt.Printf("Response code: %v\n", resp.Code)
if resp.Code != tt.wantResponseCode {
t.Errorf("Router.Handler() responseCode = %v, want = %v\n", resp.Code, tt.wantResponseCode)
return
}
if !reflect.DeepEqual(resp.Body.String(), tt.wantStatusText) {
t.Errorf("Router.Handler() statusText = %v, want = %v\n", resp.Body.String(), tt.wantStatusText)
}
})
}
}
================================================
FILE: pkg/api/router.go
================================================
package api
import (
"github.com/LambdaTest/test-at-scale/pkg/api/health"
"github.com/LambdaTest/test-at-scale/pkg/api/results"
"github.com/LambdaTest/test-at-scale/pkg/api/testlist"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/service/teststats"
"github.com/gin-gonic/gin"
)
// Router for nucleus
type Router struct {
logger lumber.Logger
testStatsService *teststats.ProcStats
tdResChan chan core.DiscoveryResult
}
// NewRouter returns instance of Router
func NewRouter(logger lumber.Logger, ts *teststats.ProcStats, tdResChan chan core.DiscoveryResult) Router {
return Router{
logger: logger,
testStatsService: ts,
tdResChan: tdResChan,
}
}
// Handler registers all route handlers and returns the router
func (r Router) Handler() *gin.Engine {
r.logger.Infof("Setting up routes")
router := gin.Default()
// corsConfig := cors.DefaultConfig()
// corsConfig.AllowAllOrigins = true
// corsConfig.AddAllowHeaders("authorization", "cache-control", "pragma")
// router.Use(cors.New(corsConfig))
router.GET("/health", health.Handler)
router.POST("/results", results.Handler(r.logger, r.testStatsService))
router.POST("/test-list", testlist.Handler(r.logger, r.tdResChan))
return router
}
================================================
FILE: pkg/api/router_test.go
================================================
package api
import (
"bytes"
"net/http"
"net/http/httptest"
"reflect"
"testing"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/service/teststats"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/gin-gonic/gin"
)
// NOTE: Tests in this package are meant to be run in a Linux environment
func TestNewRouter(t *testing.T) {
logger, _ := testutils.GetLogger()
cfg, _ := testutils.GetConfig()
ts, err := teststats.New(cfg, logger)
tdResChan := make(chan core.DiscoveryResult)
if err != nil {
t.Errorf("Error creating teststats service: %v", err)
}
type args struct {
logger lumber.Logger
ts *teststats.ProcStats
tdResChan chan core.DiscoveryResult
}
tests := []struct {
name string
args args
want Router
}{
{"TestNewRouter", args{logger, ts, tdResChan}, Router{logger, ts, tdResChan}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := NewRouter(tt.args.logger, tt.args.ts, tt.args.tdResChan); !reflect.DeepEqual(got, tt.want) {
t.Errorf("NewRouter() = %v, want %v", got, tt.want)
}
})
}
}
func TestRouter_Handler(t *testing.T) {
logger, _ := testutils.GetLogger()
cfg, _ := testutils.GetConfig()
ts, err := teststats.New(cfg, logger)
tdResChan := make(chan core.DiscoveryResult)
if err != nil {
t.Errorf("Error creating teststats service: %v", err)
}
tests := []struct {
name string
httpRequest *http.Request
wantResponseCode int
wantStatusText string
}{
{
"Test handler health route for success",
httptest.NewRequest(http.MethodGet, "/health", nil),
200,
http.StatusText(http.StatusOK),
},
{
"Test handler result route",
httptest.NewRequest(http.MethodPost, "/results", bytes.NewBuffer([]byte(`{"TaskID" : "123"}`))),
200,
http.StatusText(http.StatusOK),
},
{
"Test handler result route for error in jsonBinding and hence http.StatusBadRequest",
httptest.NewRequest(http.MethodPost, "/results", nil),
http.StatusBadRequest,
`{"message":"EOF"}`,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
newRouter := NewRouter(logger, ts, tdResChan)
resp := httptest.NewRecorder()
gin.SetMode(gin.TestMode)
c, _ := gin.CreateTestContext(resp)
c.Request = tt.httpRequest
newRouter.Handler().ServeHTTP(resp, c.Request)
if resp.Code != tt.wantResponseCode {
t.Errorf("Router.Handler() responseCode = %v, want = %v\n", resp.Code, tt.wantResponseCode)
return
}
if !reflect.DeepEqual(resp.Body.String(), tt.wantStatusText) {
t.Errorf("Router.Handler() statusText = %v, want = %v\n", resp.Body.String(), tt.wantStatusText)
}
})
}
}
================================================
FILE: pkg/api/testlist/testlist.go
================================================
package testlist
import (
"net/http"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/gin-gonic/gin"
)
// Handler captures the test discovery results from nucleus
func Handler(logger lumber.Logger, tdResChan chan core.DiscoveryResult) gin.HandlerFunc {
return func(c *gin.Context) {
request := core.DiscoveryResult{}
if err := c.ShouldBindJSON(&request); err != nil {
logger.Errorf("error while binding json %v", err)
c.JSON(http.StatusBadRequest, gin.H{"message": err.Error()})
return
}
go func() {
tdResChan <- request
}()
c.Data(http.StatusOK, gin.MIMEPlain, []byte(http.StatusText(http.StatusOK)))
}
}
================================================
FILE: pkg/azure/client.go
================================================
package azure
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/utils"
)
var (
defaultBufferSize = 3 * 1024 * 1024
defaultMaxBuffers = 4
coverageContainerName = "coverage"
maxRetry = 10
)
// store represents the azure storage
type store struct {
requests core.Requests
containerClient azblob.ContainerClient
logger lumber.Logger
endpoint string
}
// request is the request body for the get-SAS-URL API.
type request struct {
Purpose core.SASURLPurpose `json:"purpose" validate:"oneof=cache workspace_cache pre_run_logs post_run_logs execution_logs"`
}
// response is the response body for the get-SAS-URL API.
type response struct {
SASURL string `json:"sas_url"`
}
// NewAzureBlobEnv returns a new Azure blob store.
func NewAzureBlobEnv(cfg *config.NucleusConfig, requests core.Requests, logger lumber.Logger) (core.AzureClient, error) {
// in non-coverage mode, use the Azure SAS token flow
if !cfg.CoverageMode {
return &store{
requests: requests,
logger: logger,
endpoint: global.NeuronHost + "/internal/sas-token",
}, nil
}
// FIXME: Hack for synapse
if cfg.LocalRunner {
cfg.Azure.StorageAccountName = "dummy-account"
cfg.Azure.StorageAccessKey = "dummy-access-key"
}
if cfg.Azure.StorageAccountName == "" || cfg.Azure.StorageAccessKey == "" {
return nil, errors.New("either the storage account or storage access key environment variable is not set")
}
credential, err := azblob.NewSharedKeyCredential(cfg.Azure.StorageAccountName, cfg.Azure.StorageAccessKey)
if err != nil {
return nil, err
}
u, err := url.Parse(fmt.Sprintf("https://%s.blob.core.windows.net/%s", cfg.Azure.StorageAccountName, cfg.Azure.ContainerName))
if err != nil {
return nil, err
}
serviceClient, err := azblob.NewServiceClientWithSharedKey(u.String(), credential, getClientOptions())
if err != nil {
logger.Errorf("Failed to create azure service client, error: %v", err)
return nil, err
}
return &store{
requests: requests,
logger: logger,
endpoint: global.NeuronHost + "/internal/sas-token",
containerClient: serviceClient.NewContainerClient(coverageContainerName),
}, nil
}
// FindUsingSASUrl downloads an object using the given SAS URL
func (s *store) FindUsingSASUrl(ctx context.Context, sasURL string) (io.ReadCloser, error) {
u, err := url.Parse(sasURL)
if err != nil {
return nil, err
}
blobClient, err := azblob.NewBlockBlobClientWithNoCredential(u.String(), &azblob.ClientOptions{})
if err != nil {
s.logger.Errorf("failed to create blob client, error: %v", err)
return nil, err
}
s.logger.Debugf("Downloading blob from %s", blobClient.URL())
out, err := blobClient.Download(ctx, &azblob.DownloadBlobOptions{})
if err != nil {
return nil, handleError(err)
}
return out.Body(&azblob.RetryReaderOptions{MaxRetryRequests: 5}), nil
}
// CreateUsingSASURL uploads an object using the given SAS URL
func (s *store) CreateUsingSASURL(ctx context.Context, sasURL string, reader io.Reader, mimeType string) (string, error) {
u, err := url.Parse(sasURL)
if err != nil {
return "", err
}
blobClient, err := azblob.NewBlockBlobClientWithNoCredential(u.String(), getClientOptions())
if err != nil {
s.logger.Errorf("failed to create blob client, error: %v", err)
return "", err
}
s.logger.Debugf("Uploading blob to %s", blobClient.URL())
_, err = blobClient.UploadStreamToBlockBlob(ctx, reader, azblob.UploadStreamToBlockBlobOptions{
HTTPHeaders: &azblob.BlobHTTPHeaders{BlobContentType: &mimeType},
BufferSize: defaultBufferSize,
MaxBuffers: defaultMaxBuffers,
})
return blobClient.URL(), err
}
// Find downloads a blob at the given path
func (s *store) Find(ctx context.Context, path string) (io.ReadCloser, error) {
blobClient := s.containerClient.NewBlockBlobClient(path)
out, err := blobClient.Download(ctx, &azblob.DownloadBlobOptions{})
if err != nil {
return nil, handleError(err)
}
defer out.RawResponse.Body.Close()
return out.Body(&azblob.RetryReaderOptions{MaxRetryRequests: 5}), nil
}
// Create uploads a blob to the given path
func (s *store) Create(ctx context.Context, path string, reader io.Reader, mimeType string) (string, error) {
blobClient := s.containerClient.NewBlockBlobClient(path)
_, err := blobClient.UploadStreamToBlockBlob(ctx, reader, azblob.UploadStreamToBlockBlobOptions{
HTTPHeaders: &azblob.BlobHTTPHeaders{BlobContentType: &mimeType},
BufferSize: defaultBufferSize,
MaxBuffers: defaultMaxBuffers,
})
return blobClient.URL(), err
}
// GetSASURL requests neuron for the SAS URL
func (s *store) GetSASURL(ctx context.Context, purpose core.SASURLPurpose, query map[string]interface{}) (string, error) {
reqPayload := &request{Purpose: purpose}
reqBody, err := json.Marshal(reqPayload)
if err != nil {
s.logger.Errorf("failed to marshal request body %v", err)
return "", err
}
defaultQuery, headers := utils.GetDefaultQueryAndHeaders()
for key, val := range defaultQuery {
if query == nil {
query = make(map[string]interface{})
}
query[key] = val
}
rawBytes, _, err := s.requests.MakeAPIRequest(ctx, http.MethodPost, s.endpoint, reqBody, query, headers)
if err != nil {
return "", err
}
payload := new(response)
err = json.Unmarshal(rawBytes, payload)
if err != nil {
s.logger.Errorf("Error while unmarshalling json, error %v", err)
return "", err
}
return payload.SASURL, nil
}
// Exists checks whether the blob exists
func (s *store) Exists(ctx context.Context, path string) (bool, error) {
blobClient := s.containerClient.NewBlockBlobClient(path)
get, err := blobClient.GetProperties(ctx, &azblob.GetBlobPropertiesOptions{})
if err != nil {
return false, fmt.Errorf("check if object exists, %w", err)
}
statusCode := get.RawResponse.StatusCode
defer get.RawResponse.Body.Close()
return statusCode == http.StatusOK, nil
}
func handleError(err error) error {
if err == nil {
return nil
}
var errResp *azblob.StorageError
if internalErr, ok := err.(*azblob.InternalError); ok && internalErr.As(&errResp) {
if errResp.ErrorCode == azblob.StorageErrorCodeBlobNotFound {
return errs.ErrNotFound
}
}
return err
}
func getClientOptions() *azblob.ClientOptions {
return &azblob.ClientOptions{
Retry: policy.RetryOptions{
MaxRetries: int32(maxRetry),
TryTimeout: global.DefaultAPITimeout,
},
}
}
================================================
FILE: pkg/blocktestservice/setup.go
================================================
// Package blocktestservice is used for creating the blocklist file
package blocktestservice
import (
"context"
"encoding/json"
"io/ioutil"
"net/http"
"strings"
"sync"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/utils"
)
const (
delimiter = "##"
)
// blocktest represents the blocked test suites and test cases.
type blocktest struct {
Source string `json:"source"`
Locator string `json:"locator"`
Status string `json:"status"`
}
// blocktestAPIResponse represents the blocked test cases fetched from the neuron API
type blocktestAPIResponse struct {
Name string `json:"test_name"`
TestLocator string `json:"test_locator"`
Status string `json:"status"`
}
// blocktestLocator stores locator and its status info
type blocktestLocator struct {
Locator string `json:"locator"`
Status string `json:"status"`
}
// TestBlockTestService represents an instance of the block-test service
type TestBlockTestService struct {
cfg *config.NucleusConfig
requests core.Requests
logger lumber.Logger
endpoint string
blockTestEntities map[string][]blocktest
once sync.Once
errChan chan error
}
// NewTestBlockTestService creates and returns a new TestBlockTestService instance
func NewTestBlockTestService(cfg *config.NucleusConfig, requests core.Requests, logger lumber.Logger) *TestBlockTestService {
return &TestBlockTestService{
cfg: cfg,
logger: logger,
requests: requests,
endpoint: global.NeuronHost + "/blocktest",
blockTestEntities: make(map[string][]blocktest),
errChan: make(chan error, 1),
}
}
func (tbs *TestBlockTestService) fetchBlockListFromNeuron(ctx context.Context, branch string) error {
var inp []blocktestAPIResponse
query, headers := utils.GetDefaultQueryAndHeaders()
query["branch"] = branch
rawBytes, statusCode, err := tbs.requests.MakeAPIRequest(ctx, http.MethodGet, tbs.endpoint, nil, query, headers)
if statusCode == http.StatusNotFound {
return nil
}
if err != nil {
return err
}
if jsonErr := json.Unmarshal(rawBytes, &inp); jsonErr != nil {
tbs.logger.Errorf("Unable to fetch blocklist response: %v", jsonErr)
return jsonErr
}
// populate the blocklist from the API response
blocktestLocators := make([]*blocktestLocator, 0, len(inp))
for i := range inp {
blockLocator := new(blocktestLocator)
blockLocator.Locator = inp[i].TestLocator
blockLocator.Status = inp[i].Status
blocktestLocators = append(blocktestLocators, blockLocator)
}
tbs.populateBlockList("api", blocktestLocators)
return nil
}
// GetBlockTests provides list of blocked test cases
func (tbs *TestBlockTestService) GetBlockTests(ctx context.Context, blocklistYAML []string, branch string) error {
tbs.once.Do(func() {
blocktestLocators := make([]*blocktestLocator, 0, len(blocklistYAML))
for _, locator := range blocklistYAML {
blockLocator := new(blocktestLocator)
blockLocator.Locator = locator
blockLocator.Status = string(core.Blocklisted)
blocktestLocators = append(blocktestLocators, blockLocator)
}
tbs.populateBlockList("yml", blocktestLocators)
if err := tbs.fetchBlockListFromNeuron(ctx, branch); err != nil {
tbs.logger.Errorf("Unable to fetch remote blocklist: %v. Ignoring remote response", err)
tbs.errChan <- err
return
}
tbs.logger.Infof("Block tests: %+v", tbs.blockTestEntities)
// write blocklisted tests to disk
marshalledBlocklist, err := json.Marshal(tbs.blockTestEntities)
if err != nil {
tbs.logger.Errorf("Unable to json marshal blocklist: %+v", err)
tbs.errChan <- err
return
}
if err = ioutil.WriteFile(global.BlockTestFileLocation, marshalledBlocklist, 0644); err != nil {
tbs.logger.Errorf("Unable to write blocklist file: %+v", err)
tbs.errChan <- err
return
}
tbs.blockTestEntities = nil
})
select {
case err := <-tbs.errChan:
return err
default:
return nil
}
}
func (tbs *TestBlockTestService) populateBlockList(blocktestSource string, blocktestLocators []*blocktestLocator) {
i := 0
for _, test := range blocktestLocators {
// locators must end with delimiter
if !strings.HasSuffix(test.Locator, delimiter) {
test.Locator += delimiter
}
i = strings.Index(test.Locator, delimiter)
// TODO: handle duplicate entries and ignore its individual suites or testcases in blocklist if file is blocklisted
entity := blocktest{Source: blocktestSource, Locator: test.Locator, Status: test.Status}
if val, ok := tbs.blockTestEntities[test.Locator[:i]]; ok {
tbs.blockTestEntities[test.Locator[:i]] = append(val, entity)
} else {
tbs.blockTestEntities[test.Locator[:i]] = append([]blocktest{},
blocktest{Source: blocktestSource, Locator: test.Locator, Status: test.Status})
}
}
}
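The keying logic in populateBlockList can be sketched with the standard library alone: locators are normalized to end with the `##` delimiter, then grouped by the file-path segment before the first delimiter. The helper names `normalizeLocator` and `keyOf` below are illustrative, not part of the package:

```go
package main

import (
	"fmt"
	"strings"
)

// delimiter separates the file path from suite/test segments in a locator.
const delimiter = "##"

// normalizeLocator ensures a locator ends with the delimiter,
// mirroring the suffix check in populateBlockList.
func normalizeLocator(loc string) string {
	if !strings.HasSuffix(loc, delimiter) {
		loc += delimiter
	}
	return loc
}

// keyOf returns the file-path portion of a normalized locator,
// i.e. everything before the first delimiter.
func keyOf(loc string) string {
	return loc[:strings.Index(loc, delimiter)]
}

func main() {
	entities := map[string][]string{}
	for _, loc := range []string{"src/test/api1.js", "src/test/api1.js##suiteA##"} {
		norm := normalizeLocator(loc)
		key := keyOf(norm)
		entities[key] = append(entities[key], norm)
	}
	// both locators land under the same file-path key
	fmt.Println(entities["src/test/api1.js"])
	// prints: [src/test/api1.js## src/test/api1.js##suiteA##]
}
```

Grouping by file path is what lets a whole-file block entry coexist with suite- or test-level entries for the same file.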
================================================
FILE: pkg/blocktestservice/setup_test.go
================================================
// Package blocktestservice is used for creating the blocklist file
package blocktestservice
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"reflect"
"sync"
"testing"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/requestutils"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/cenkalti/backoff/v4"
)
const buildID = "buildID"
func TestBlockListService_fetchBlockListFromNeuron(t *testing.T) {
server := httptest.NewServer( // mock server
http.FileServer(http.Dir("../../testutils/testdata/testblocklistdata/")), // mock data stored at testutils/testdata
)
defer server.Close()
server2 := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/non200" {
t.Errorf("Expected to request '/non200', got: %v", r.URL)
return
}
w.WriteHeader(503)
_, err := w.Write([]byte(`{"value":"fixed"}`))
if err != nil {
fmt.Printf("Could not write data in httptest server, error: %v", err)
}
}))
defer server2.Close()
cfg := new(config.NucleusConfig)
cfg.BuildID = buildID
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
blocklistedEntities := make(map[string][]blocktest)
type args struct {
ctx context.Context
endpoint string
repoID string
branch string
}
tests := []struct {
name string
args args
wantErr bool
}{
{"Test fetchBlocklistFromNeuron",
args{
ctx: context.TODO(),
endpoint: server.URL + "/testBlocklist.json",
repoID: "repoID",
branch: "branch",
},
false,
},
{"Test fetchBlocklistFromNeuron for wrong request endpoint",
args{
ctx: context.TODO(),
endpoint: "/dne.json",
repoID: "repoID",
branch: "branch",
},
true,
},
{"Test fetchBlocklistFromNeuron for non 200 response",
args{
ctx: context.TODO(),
endpoint: server2.URL + "/non200",
repoID: "repoID",
branch: "branch",
},
true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tbs := &TestBlockTestService{
cfg: cfg,
logger: logger,
endpoint: tt.args.endpoint,
blockTestEntities: blocklistedEntities,
once: sync.Once{},
errChan: make(chan error, 1),
requests: requestutils.New(logger, global.DefaultAPITimeout, &backoff.StopBackOff{}),
}
if err := tbs.fetchBlockListFromNeuron(tt.args.ctx, tt.args.branch); (err != nil) != tt.wantErr {
t.Errorf("TestBlockListService.fetchBlockListFromNeuron() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
func TestBlockListService_GetBlockListedTests(t *testing.T) {
server := httptest.NewServer( // mock server
http.FileServer(http.Dir("../../testutils/testdata/testblocklistdata/")), // mock data stored at testutils/testdata
)
defer server.Close()
cfg := new(config.NucleusConfig)
cfg.BuildID = buildID
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
requests := requestutils.New(logger, global.DefaultAPITimeout, &backoff.StopBackOff{})
tbs := NewTestBlockTestService(cfg, requests, logger)
tbs.endpoint = server.URL + "/testBlocklist.json"
type args struct {
ctx context.Context
tasConfig *core.TASConfig
branch string
}
tests := []struct {
name string
args args
wantErr bool
}{
{"Test GetBlockListedTests",
args{
ctx: context.TODO(),
tasConfig: &core.TASConfig{
SmartRun: false,
Framework: "jest",
Blocklist: []string{"src/test/f1.spec.js", "src/test/f2.spec.js"},
SplitMode: core.TestSplit,
Tier: "small"},
},
false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
blYML := tt.args.tasConfig.Blocklist
if err := tbs.GetBlockTests(tt.args.ctx, blYML, tt.args.branch); (err != nil) != tt.wantErr {
t.Errorf("TestBlockListService.GetBlockListedTests() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
func TestBlockListService_populateBlockList(t *testing.T) {
cfg := config.GlobalNucleusConfig
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
blocklistLocators := []*blocktestLocator{}
firstLocator := &blocktestLocator{
Locator: "src/test/api1.js",
Status: "quarantined",
}
secondLocator := &blocktestLocator{
Locator: "src/test/api2.js",
Status: "blocklisted",
}
blocklistLocators = append(blocklistLocators, firstLocator, secondLocator)
type fields struct {
cfg *config.NucleusConfig
logger lumber.Logger
endpoint string
blocklistedEntities map[string][]blocktest
errChan chan error
}
type args struct {
blocklistSource string
blocktestLocators []*blocktestLocator
}
tests := []struct {
name string
fields fields
args args
}{
{"Test populateBlockList",
fields{
cfg: cfg,
logger: logger,
endpoint: "/blocktest",
blocklistedEntities: map[string][]blocktest{
"src/test/api1.js": {
blocktest{
Source: "src",
Locator: "loc",
Status: "blocklisted",
},
},
},
errChan: make(chan error, 1)},
args{
blocklistSource: "./",
blocktestLocators: blocklistLocators,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tbs := &TestBlockTestService{
cfg: tt.fields.cfg,
logger: tt.fields.logger,
endpoint: tt.fields.endpoint,
blockTestEntities: tt.fields.blocklistedEntities,
once: sync.Once{},
errChan: tt.fields.errChan,
}
tbs.populateBlockList(tt.args.blocklistSource, tt.args.blocktestLocators)
expected := map[string][]blocktest{"src/test/api1.js": {{"src", "loc", "blocklisted"}, {"./", "src/test/api1.js##", "quarantined"}},
"src/test/api2.js": {{"./", "src/test/api2.js##", "blocklisted"}}}
got := tbs.blockTestEntities
if !reflect.DeepEqual(expected, got) {
t.Errorf("\nexpected: %v\ngot: %v", expected, got)
}
})
}
}
================================================
FILE: pkg/cachemanager/cachemanager.go
================================================
package cachemanager
import (
"context"
"errors"
"fmt"
"io"
"os"
"path/filepath"
"sync"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/fileutils"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
)
const (
pnpmLock = "pnpm-lock.yaml"
yarnLock = "yarn.lock"
packageLock = "package-lock.json"
npmShrinkwrap = "npm-shrinkwrap.json"
nodeModules = "node_modules"
defaultCompressedFileName = "cache.tzst"
workspaceCompressedFilenameV1 = "workspace.tzst"
workspaceCompressedFilenameV2 = "workspace-%s.tzst"
)
// cache represents the files/dirs that will be cached
type cache struct {
azureClient core.AzureClient
logger lumber.Logger
once sync.Once
zstd core.ZstdCompressor
skipUpload bool
homeDir string
}
var cacheBlobURL string
var apiErr error
// New returns a new CacheStore
func New(z core.ZstdCompressor, azureClient core.AzureClient, logger lumber.Logger) (core.CacheStore, error) {
homeDir, err := os.UserHomeDir()
if err != nil {
return nil, err
}
return &cache{
azureClient: azureClient,
zstd: z,
logger: logger,
homeDir: homeDir,
}, nil
}
func (c *cache) getCacheSASURL(ctx context.Context, cacheKey string) (string, error) {
c.once.Do(func() {
query := map[string]interface{}{"key": cacheKey}
cacheBlobURL, apiErr = c.azureClient.GetSASURL(ctx, core.PurposeCache, query)
})
return cacheBlobURL, apiErr
}
func (c *cache) Download(ctx context.Context, cacheKey string) error {
sasURL, err := c.getCacheSASURL(ctx, cacheKey)
if err != nil {
c.logger.Errorf("Error while generating SAS Token, error %v", err)
return err
}
resp, err := c.azureClient.FindUsingSASUrl(ctx, sasURL)
if err != nil {
if errors.Is(err, errs.ErrNotFound) {
c.logger.Infof("Cache not found for key: %s", cacheKey)
return nil
}
c.logger.Errorf("Error while downloading cache for key: %s, error %v", cacheKey, err)
return err
}
c.skipUpload = true
defer resp.Close()
cachedFilePath := filepath.Join(os.TempDir(), defaultCompressedFileName)
out, err := os.Create(cachedFilePath)
if err != nil {
return err
}
defer out.Close()
if _, err := io.Copy(out, resp); err != nil {
return err
}
return c.zstd.Decompress(ctx, cachedFilePath, true, global.RepoDir)
}
func (c *cache) Upload(ctx context.Context, cacheKey string, itemsToCompress ...string) error {
if c.skipUpload {
c.logger.Infof("Cache hit occurred on the key %s, not saving cache.", cacheKey)
return nil
}
validatedItems := make([]string, 0, len(itemsToCompress))
if len(itemsToCompress) == 0 {
dirs, err := c.getDefaultDirs()
c.logger.Debugf("Dirs: %+v", dirs)
if err != nil {
c.logger.Errorf("failed to get default cache directories, error %v", err)
return nil
}
itemsToCompress = append(itemsToCompress, dirs...)
}
// validate that each file or dir path exists.
for _, item := range itemsToCompress {
exists, err := fileutils.CheckIfExists(item)
if err != nil {
return err
}
if exists {
validatedItems = append(validatedItems, item)
} else {
c.logger.Debugf("%s does not exist, skipping upload", item)
}
}
if len(validatedItems) == 0 {
c.logger.Debugf("No valid files/dirs found to cache")
return nil
}
err := c.zstd.Compress(ctx, defaultCompressedFileName, true, global.RepoDir, validatedItems...)
if err != nil {
c.logger.Errorf("error while compressing files with key %s, error: %v", cacheKey, err)
return err
}
f, err := os.Open(filepath.Join(global.RepoDir, defaultCompressedFileName))
if err != nil {
c.logger.Errorf("error while opening compressed file with key %s, error: %v", cacheKey, err)
return err
}
defer f.Close()
sasURL, err := c.getCacheSASURL(ctx, cacheKey)
if err != nil {
c.logger.Errorf("Error while generating SAS Token, error %v", err)
return err
}
_, err = c.azureClient.CreateUsingSASURL(ctx, sasURL, f, "application/zstd")
if err != nil {
c.logger.Errorf("error while uploading cached file %s with key %s, error: %v", defaultCompressedFileName, cacheKey, err)
return err
}
return nil
}
func (c *cache) CacheWorkspace(ctx context.Context, subModule string) error {
tmpDir := os.TempDir()
workspaceCompressedFilename := workspaceCompressedFilenameV1
if subModule != "" {
workspaceCompressedFilename = fmt.Sprintf(workspaceCompressedFilenameV2, subModule)
}
if err := c.zstd.Compress(ctx, workspaceCompressedFilename, true, tmpDir, global.HomeDir); err != nil {
return err
}
src := filepath.Join(tmpDir, workspaceCompressedFilename)
dst := filepath.Join(global.WorkspaceCacheDir, workspaceCompressedFilename)
if err := fileutils.CopyFile(src, dst, false); err != nil {
return err
}
return nil
}
func (c *cache) ExtractWorkspace(ctx context.Context, subModule string) error {
tmpDir := os.TempDir()
workspaceCompressedFilename := workspaceCompressedFilenameV1
if subModule != "" {
workspaceCompressedFilename = fmt.Sprintf(workspaceCompressedFilenameV2, subModule)
}
src := filepath.Join(global.WorkspaceCacheDir, workspaceCompressedFilename)
dst := filepath.Join(tmpDir, workspaceCompressedFilename)
if err := fileutils.CopyFile(src, dst, false); err != nil {
return err
}
if err := c.zstd.Decompress(ctx, filepath.Join(tmpDir, workspaceCompressedFilename), true, global.HomeDir); err != nil {
return err
}
return nil
}
func (c *cache) getDefaultDirs() ([]string, error) {
defaultDirs := []string{}
f, err := os.Open(global.RepoDir)
if err != nil {
return defaultDirs, err
}
dirs, err := f.ReadDir(-1)
if err != nil {
return defaultDirs, err
}
defaultDirs = append(defaultDirs, global.RepoCacheDir)
for _, d := range dirs {
// if yarn.lock is present, cache the yarn cache folder
if d.Name() == yarnLock {
defaultDirs = append(defaultDirs, filepath.Join(c.homeDir, ".cache", "yarn"))
return defaultDirs, nil
}
// if package-lock.json or npm-shrinkwrap.json is present, cache the .npm directory
if d.Name() == packageLock || d.Name() == npmShrinkwrap {
defaultDirs = append(defaultDirs, filepath.Join(c.homeDir, ".npm"))
return defaultDirs, nil
}
// if pnpm-lock.yaml is present, cache the pnpm store
if d.Name() == pnpmLock {
defaultDirs = append(defaultDirs, filepath.Join(c.homeDir, ".local", "share", "pnpm", "store"))
return defaultDirs, nil
}
}
// if no lockfile is present, cache node_modules
defaultDirs = append(defaultDirs, nodeModules)
return defaultDirs, nil
}
================================================
FILE: pkg/command/run.go
================================================
package command
import (
"context"
"fmt"
"io"
"os"
"os/exec"
"strings"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/logstream"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
)
type manager struct {
logger lumber.Logger
secretParser core.SecretParser
azureClient core.AzureClient
}
// NewExecutionManager returns a new instance of manager
func NewExecutionManager(secretParser core.SecretParser,
azureClient core.AzureClient,
logger lumber.Logger) core.ExecutionManager {
return &manager{logger: logger,
secretParser: secretParser,
azureClient: azureClient}
}
// ExecuteUserCommands executes user commands
func (m *manager) ExecuteUserCommands(ctx context.Context,
commandType core.CommandType,
payload *core.Payload,
runConfig *core.Run,
secretData map[string]string,
logwriter core.LogWriterStrategy,
cwd string) error {
script, err := m.createScript(runConfig.Commands, secretData)
if err != nil {
return err
}
envVars, err := m.GetEnvVariables(runConfig.EnvMap, secretData)
if err != nil {
return err
}
azureReader, azureWriter := io.Pipe()
defer azureWriter.Close()
errChan := logwriter.Write(ctx, azureReader)
defer m.closeAndWriteLog(azureWriter, errChan, commandType)
logWriter := lumber.NewWriter(m.logger)
defer logWriter.Close()
multiWriter := io.MultiWriter(logWriter, azureWriter)
maskWriter := logstream.NewMasker(multiWriter, secretData)
cmd := exec.CommandContext(ctx, "/bin/bash", "-c", script)
cmd.Dir = cwd
cmd.Env = envVars
cmd.Stdout = maskWriter
cmd.Stderr = maskWriter
if startErr := cmd.Start(); startErr != nil {
m.logger.Errorf("failed to start command: %s, error: %v", commandType, startErr)
return startErr
}
if execErr := cmd.Wait(); execErr != nil {
m.logger.Errorf("command %s, exited with error: %v", commandType, execErr)
return execErr
}
azureWriter.Close()
if uploadErr := <-errChan; uploadErr != nil {
m.logger.Errorf("failed to upload logs for command %s, error: %v", commandType, uploadErr)
return uploadErr
}
return nil
}
// ExecuteInternalCommands executes internal commands
func (m *manager) ExecuteInternalCommands(ctx context.Context,
commandType core.CommandType,
commands []string,
cwd string,
envMap, secretData map[string]string) error {
bashCommands := strings.Join(commands, " && ")
cmd := exec.CommandContext(ctx, "/bin/bash", "-c", bashCommands)
if cwd != "" {
cmd.Dir = cwd
}
logWriter := lumber.NewWriter(m.logger)
defer logWriter.Close()
cmd.Stderr = logWriter
cmd.Stdout = logWriter
m.logger.Debugf("Executing command of type %s", commandType)
if err := cmd.Run(); err != nil {
m.logger.Errorf("command of type %s failed with error: %v", commandType, err)
return err
}
return nil
}
// GetEnvVariables returns the environment variables with secrets substituted
func (m *manager) GetEnvVariables(envMap, secretData map[string]string) ([]string, error) {
envVars := os.Environ()
for k, v := range envMap {
val, err := m.secretParser.SubstituteSecret(v, secretData)
if err != nil {
return nil, err
}
envVars = append(envVars, fmt.Sprintf("%s=%s", k, val))
}
return envVars, nil
}
func (m *manager) closeAndWriteLog(azureWriter *io.PipeWriter, errChan <-chan error, commandType core.CommandType) {
azureWriter.Close()
if uploadErr := <-errChan; uploadErr != nil {
m.logger.Errorf("failed to upload logs for command %s, error: %v", commandType, uploadErr)
}
}
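The log plumbing in ExecuteUserCommands (io.Pipe feeding a concurrent upload consumer, io.MultiWriter fanning command output to both a local writer and that pipe) can be sketched with the standard library alone. `runWithFanout` and its in-memory "uploader" are illustrative stand-ins for the real azureWriter/logwriter pair:

```go
package main

import (
	"fmt"
	"io"
	"os/exec"
	"strings"
)

// runWithFanout mirrors the log fanout in ExecuteUserCommands: command output
// goes through io.MultiWriter to a local sink and, via io.Pipe, to a
// concurrent consumer that drains the reader (the "uploader").
func runWithFanout(script string) (local, uploaded string, err error) {
	var localBuf strings.Builder
	pr, pw := io.Pipe()

	// the uploader goroutine consumes the pipe, like logwriter.Write does
	done := make(chan string)
	go func() {
		b, _ := io.ReadAll(pr)
		done <- string(b)
	}()

	cmd := exec.Command("/bin/sh", "-c", script)
	mw := io.MultiWriter(&localBuf, pw)
	cmd.Stdout = mw
	cmd.Stderr = mw
	err = cmd.Run()
	pw.Close() // closing the writer unblocks the uploader's ReadAll
	uploaded = <-done
	return localBuf.String(), uploaded, err
}

func main() {
	local, uploaded, err := runWithFanout("echo hello")
	fmt.Printf("local=%q uploaded=%q err=%v\n", local, uploaded, err)
	// prints: local="hello\n" uploaded="hello\n" err=<nil>
}
```

Closing the pipe writer after the command exits is the key ordering constraint: without it, the consumer's read never terminates, which is why the real code closes azureWriter both on the happy path and in a deferred cleanup.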
================================================
FILE: pkg/command/run_test.go
================================================
package command
import (
"fmt"
"os"
"reflect"
"sort"
"testing"
"github.com/LambdaTest/test-at-scale/mocks"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/secret"
"github.com/LambdaTest/test-at-scale/testutils"
)
func TestNewExecutionManager(t *testing.T) {
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
azureClient := new(mocks.AzureClient)
secretParser := secret.New(logger)
type args struct {
secretParser core.SecretParser
azureClient core.AzureClient
logger lumber.Logger
}
tests := []struct {
name string
args args
want core.ExecutionManager
}{
{"Test initialisation func",
args{secretParser: secretParser,
azureClient: azureClient,
logger: logger,
},
&manager{
logger: logger,
secretParser: secretParser,
azureClient: azureClient,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := NewExecutionManager(tt.args.secretParser, tt.args.azureClient, tt.args.logger); !reflect.DeepEqual(got, tt.want) {
t.Errorf("NewExecutionManager() = %v, want %v", got, tt.want)
}
})
}
}
func Test_manager_GetEnvVariables(t *testing.T) {
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
secretParser := secret.New(logger)
azureClient := new(mocks.AzureClient)
envVars := os.Environ()
type fields struct {
logger lumber.Logger
secretParser core.SecretParser
azureClient core.AzureClient
}
type args struct {
envMap map[string]string
secretData map[string]string
}
tests := []struct {
name string
fields fields
args args
want []string
wantErr bool
}{
{"Test GetEnvVariables for success",
fields{
logger: logger,
secretParser: secretParser,
azureClient: azureClient,
},
args{
envMap: map[string]string{"os": "linux", "arch": "amd64", "ver": "1.15"},
secretData: map[string]string{"key1": "abc", "key2": "xyz", "key3": "123"},
},
append(envVars, "arch=amd64", "os=linux", "ver=1.15"),
false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
m := &manager{
logger: tt.fields.logger,
secretParser: tt.fields.secretParser,
azureClient: tt.fields.azureClient,
}
got, err := m.GetEnvVariables(tt.args.envMap, tt.args.secretData)
if (err != nil) != tt.wantErr {
t.Errorf("manager.GetEnvVariables() error = %v, wantErr %v", err, tt.wantErr)
return
}
sort.Strings(got)
sort.Strings(tt.want)
received := fmt.Sprintf("%v", got)
want := fmt.Sprintf("%v", tt.want)
if len(received) != len(want) || received != want {
t.Errorf("manager.GetEnvVariables() = \n%v, \nwant \n%v", got, tt.want)
}
})
}
}
================================================
FILE: pkg/command/script.go
================================================
package command
import (
"bytes"
"fmt"
"strings"
)
// createScript converts a slice of individual shell commands into
// a shell script.
func (m *manager) createScript(commands []string, secretData map[string]string) (string, error) {
buf := new(bytes.Buffer)
fmt.Fprintln(buf)
fmt.Fprint(buf, optionScript)
fmt.Fprintln(buf)
var err error
for _, command := range commands {
escaped := fmt.Sprintf("%q", command)
escaped = strings.Replace(escaped, "$", `\$`, -1)
if len(secretData) > 0 {
command, err = m.secretParser.SubstituteSecret(command, secretData)
if err != nil {
return "", err
}
}
buf.WriteString(fmt.Sprintf(
traceScript,
escaped,
command,
))
}
return buf.String(), nil
}
// optionScript is a helper script that is added to the build
// script to set shell options, in this case, to exit on error.
const optionScript = `
set -e
`
// traceScript is a helper script that is added to
// the build script to trace a command.
const traceScript = `
echo + %s
%s
`
================================================
FILE: pkg/command/script_test.go
================================================
package command
import (
"testing"
"github.com/LambdaTest/test-at-scale/mocks"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/stretchr/testify/mock"
)
func Test_manager_createScript(t *testing.T) {
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
var azureClient core.AzureClient
commands := []string{"cmd1", "cmd2", "cmd3"}
secretData := map[string]string{"secret1": "s1", "secret2": "s2", "secret3": "s3"}
secretParser := new(mocks.SecretParser)
secretParserErr := new(mocks.SecretParser)
want := `
set -e
echo + "cmd1"
fakecommand
echo + "cmd2"
fakecommand
echo + "cmd3"
fakecommand
`
type fields struct {
logger lumber.Logger
secretParser core.SecretParser
azureClient core.AzureClient
}
type args struct {
commands []string
secretData map[string]string
}
tests := []struct {
name string
fields fields
args args
want string
wantErr bool
}{
{
"Test for success",
fields{logger: logger, secretParser: secretParser, azureClient: azureClient},
args{commands: commands, secretData: secretData},
want,
false,
},
{
"This should throw an error",
fields{logger: logger, secretParser: secretParserErr, azureClient: azureClient},
args{commands: commands, secretData: secretData},
"",
true,
},
}
secretParser.On("SubstituteSecret", mock.AnythingOfType("string"), secretData).Return(
func(command string, secretData map[string]string) string {
return "fakecommand"
},
func(command string, secretData map[string]string) error {
return nil
})
secretParserErr.On("SubstituteSecret", mock.AnythingOfType("string"), secretData).Return(
func(command string, secretData map[string]string) string {
return ""
},
func(command string, secretData map[string]string) error {
return errs.New("error from mocked interface")
})
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
m := &manager{
logger: tt.fields.logger,
secretParser: tt.fields.secretParser,
azureClient: tt.fields.azureClient,
}
got, err := m.createScript(tt.args.commands, tt.args.secretData)
if (err != nil) != tt.wantErr {
t.Errorf("manager.createScript() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("manager.createScript() = %v, want %v", got, tt.want)
}
})
}
}
================================================
FILE: pkg/core/interfaces.go
================================================
package core
import (
"context"
"io"
)
// PayloadManager defines operations for payload
type PayloadManager interface {
// ValidatePayload validates the nucleus payload
ValidatePayload(ctx context.Context, payload *Payload) error
// FetchPayload fetches the payload used for running nucleus
FetchPayload(ctx context.Context, payloadAddress string) (*Payload, error)
}
// TASConfigManager defines operations for tas config
type TASConfigManager interface {
// LoadAndValidate loads and returns the tas config
LoadAndValidate(ctx context.Context, version int, path string, eventType EventType, licenseTier Tier,
tasFilePathInRepo string) (interface{}, error)
// GetVersion returns TAS yml version
GetVersion(path string) (int, error)
// GetTasConfigFilePath returns file path of tas config
GetTasConfigFilePath(payload *Payload) (string, error)
}
// GitManager manages the cloning of git repositories
type GitManager interface {
// Clone clones the repository specified in the payload
Clone(ctx context.Context, payload *Payload, oauth *Oauth) error
// DownloadFileByCommit downloads a file from the repo for the given commit
DownloadFileByCommit(ctx context.Context, gitProvider, repoSlug, commitID, filePath string, oauth *Oauth) (string, error)
}
// DiffManager manages the diff findings for the given payload
type DiffManager interface {
GetChangedFiles(ctx context.Context, payload *Payload, oauth *Oauth) (map[string]int, error)
}
// TestDiscoveryService services discovery of tests
type TestDiscoveryService interface {
// Discover executes the test discovery scripts.
Discover(ctx context.Context, args *DiscoveyArgs) (*DiscoveryResult, error)
// SendResult sends discovery result to TAS server
SendResult(ctx context.Context, testDiscoveryResult *DiscoveryResult) error
}
// BlockTestService is used for fetching blocklisted tests
type BlockTestService interface {
GetBlockTests(ctx context.Context, blocklistYAML []string, branch string) error
}
// TestExecutionService services execution of tests
type TestExecutionService interface {
// Run executes the test execution scripts
Run(ctx context.Context, testExecutionArgs *TestExecutionArgs) (results *ExecutionResults, err error)
// SendResults sends the test execution results to the TAS server.
SendResults(ctx context.Context, payload *ExecutionResults) (resp *TestReportResponsePayload, err error)
}
// CoverageService services coverage of tests
type CoverageService interface {
MergeAndUpload(ctx context.Context, payload *Payload) error
}
// TestStats is used for servicing stat collection
type TestStats interface {
CaptureTestStats(pid int32, collectStats bool) error
}
// Task is a service to update task status at neuron
type Task interface {
// UpdateStatus updates status of the task
UpdateStatus(ctx context.Context, payload *TaskPayload) error
}
// NotifMessage defines struct for notification message
type NotifMessage struct {
Type string
Value string
Status string
Error string
}
// AzureClient defines operation for working with azure store
type AzureClient interface {
FindUsingSASUrl(ctx context.Context, sasURL string) (io.ReadCloser, error)
Find(ctx context.Context, path string) (io.ReadCloser, error)
Create(ctx context.Context, path string, reader io.Reader, mimeType string) (string, error)
CreateUsingSASURL(ctx context.Context, sasURL string, reader io.Reader, mimeType string) (string, error)
GetSASURL(ctx context.Context, purpose SASURLPurpose, query map[string]interface{}) (string, error)
Exists(ctx context.Context, path string) (bool, error)
}
// ZstdCompressor performs zstd compression and decompression
type ZstdCompressor interface {
Compress(ctx context.Context, compressedFileName string, preservePath bool, workingDirectory string, filesToCompress ...string) error
Decompress(ctx context.Context, filePath string, preservePath bool, workingDirectory string) error
}
// CacheStore defines operation for working with the cache
//go:generate mockery --name CacheStore --keeptree --output ../mocks/CacheStore.go
type CacheStore interface {
// Download downloads cache present at cacheKey
Download(ctx context.Context, cacheKey string) error
// Upload creates, compresses and uploads cache at cacheKey
Upload(ctx context.Context, cacheKey string, itemsToCompress ...string) error
// CacheWorkspace caches the workspace onto a mounted volume
CacheWorkspace(ctx context.Context, subModule string) error
// ExtractWorkspace extracts the workspace cache from mounted volume
ExtractWorkspace(ctx context.Context, subModule string) error
}
// SecretParser defines operation for parsing the vault secrets in given path
type SecretParser interface {
// GetOauthSecret parses the oauth secret for given path
GetOauthSecret(filepath string) (*Oauth, error)
// GetRepoSecret parses the repo secret for given path
GetRepoSecret(string) (map[string]string, error)
// SubstituteSecret replaces secret placeholders with their respective values
SubstituteSecret(command string, secretData map[string]string) (string, error)
// Expired reports whether the token is expired.
Expired(token *Oauth) bool
}
// ExecutionManager has responsibility for executing the preRun, postRun and internal commands
type ExecutionManager interface {
// ExecuteUserCommands executes the preRun or postRun commands specified by the user in the yaml config.
ExecuteUserCommands(ctx context.Context,
commandType CommandType,
payload *Payload,
runConfig *Run,
secretData map[string]string,
logwriter LogWriterStrategy,
cwd string) error
// ExecuteInternalCommands executes the commands like installing runners and test discovery.
ExecuteInternalCommands(ctx context.Context,
commandType CommandType,
commands []string,
cwd string, envMap,
secretData map[string]string) error
// GetEnvVariables gets the environment variables from the env map given by the user.
GetEnvVariables(envMap, secretData map[string]string) ([]string, error)
}
// Requests is a util interface for making API Requests
type Requests interface {
// MakeAPIRequest makes an HTTP request with auth
MakeAPIRequest(ctx context.Context, httpMethod, endpoint string, body []byte, params map[string]interface{},
headers map[string]string) (rawbody []byte, statusCode int, err error)
}
// ListSubModuleService sends the submodule count to the TAS server
type ListSubModuleService interface {
// Send sends count of submodules to TAS server
Send(ctx context.Context, buildID string, totalSubmodule int) error
}
// Driver has the responsibility to run discovery and test execution
type Driver interface {
// RunDiscovery runs the test discovery
RunDiscovery(ctx context.Context, payload *Payload,
taskPayload *TaskPayload, oauth *Oauth, coverageDir string, secretMap map[string]string) error
// RunExecution runs the test execution
RunExecution(ctx context.Context, payload *Payload,
taskPayload *TaskPayload, oauth *Oauth, coverageDir string, secretMap map[string]string) error
}
// LogWriterStrategy interface is used to tag all log writing strategy
type LogWriterStrategy interface {
// Write reads data from the io.Reader and writes it to the underlying data stream
Write(ctx context.Context, reader io.Reader) <-chan error
}
// Builder builds the driver for given tas yml version
type Builder interface {
// GetDriver returns driver for use
GetDriver(version int, ymlFilePath string) (Driver, error)
}
================================================
FILE: pkg/core/lifecycle.go
================================================
package core
import (
"context"
"errors"
"fmt"
"os"
"path/filepath"
"runtime/debug"
"time"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/fileutils"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
)
const (
endpointPostTestResults = "http://localhost:9876/results"
endpointPostTestList = "http://localhost:9876/test-list"
languageJs = "javascript"
)
// NewPipeline creates and returns a new Pipeline instance
func NewPipeline(cfg *config.NucleusConfig, logger lumber.Logger) (*Pipeline, error) {
return &Pipeline{
Cfg: cfg,
Logger: logger,
}, nil
}
// Start starts pipeline lifecycle
func (pl *Pipeline) Start(ctx context.Context) (err error) {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
startTime := time.Now()
pl.Logger.Debugf("Starting pipeline.....")
pl.Logger.Debugf("Fetching config")
// fetch configuration
payload, err := pl.PayloadManager.FetchPayload(ctx, pl.Cfg.PayloadAddress)
if err != nil {
pl.Logger.Fatalf("error while fetching payload: %v", err)
}
err = pl.PayloadManager.ValidatePayload(ctx, payload)
if err != nil {
pl.Logger.Fatalf("error while validating payload %v", err)
}
pl.Logger.Debugf("Payload for current task: %+v \n", *payload)
if pl.Cfg.CoverageMode {
if err = pl.CoverageService.MergeAndUpload(ctx, payload); err != nil {
pl.Logger.Fatalf("error while merging and uploading coverage files %v", err)
}
os.Exit(0)
}
// set payload on pipeline object
pl.Payload = payload
taskPayload := pl.getTaskPayload(payload, startTime)
payload.TaskType = taskPayload.Type
pl.Logger.Infof("Running nucleus in %s mode", taskPayload.Type)
go func() {
// marking task to running state
if err = pl.Task.UpdateStatus(context.Background(), taskPayload); err != nil {
pl.Logger.Fatalf("failed to update task status %v", err)
}
}()
// update task status when pipeline exits
defer func() {
taskPayload.EndTime = time.Now()
if p := recover(); p != nil {
pl.Logger.Errorf("panic stack trace: %v\n%s", p, string(debug.Stack()))
taskPayload.Status = Error
taskPayload.Remark = errs.GenericErrRemark.Error()
} else if err != nil {
if errors.Is(err, context.Canceled) {
taskPayload.Status = Aborted
taskPayload.Remark = "Task aborted"
} else {
if _, ok := err.(*errs.StatusFailed); ok {
taskPayload.Status = Failed
} else {
taskPayload.Status = Error
}
taskPayload.Remark = err.Error()
}
}
if err = pl.Task.UpdateStatus(context.Background(), taskPayload); err != nil {
pl.Logger.Fatalf("failed to update task status %v", err)
}
}()
oauth, err := pl.SecretParser.GetOauthSecret(global.OauthSecretPath)
if err != nil {
pl.Logger.Errorf("failed to get oauth secret %v", err)
return err
}
// read secrets
secretMap, err := pl.SecretParser.GetRepoSecret(global.RepoSecretPath)
if err != nil {
pl.Logger.Errorf("Error in fetching Repo secrets %v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
if pl.Cfg.DiscoverMode {
pl.Logger.Infof("Cloning repo ...")
err = pl.GitManager.Clone(ctx, pl.Payload, oauth)
if err != nil {
pl.Logger.Errorf("Unable to clone repo '%s': %s", payload.RepoLink, err)
err = &errs.StatusFailed{Remark: fmt.Sprintf("Unable to clone repo: %s", payload.RepoLink)}
return err
}
} else {
pl.Logger.Debugf("Extracting workspace")
// Replicate workspace
// TODO this will be changed after parallel discovery support
if err = pl.CacheStore.ExtractWorkspace(ctx, ""); err != nil {
pl.Logger.Errorf("Error replicating workspace: %+v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
}
coverageDir := filepath.Join(global.CodeCoverageDir, payload.OrgID, payload.RepoID, payload.BuildTargetCommit)
if payload.CollectCoverage {
if err = fileutils.CreateIfNotExists(coverageDir, true); err != nil {
pl.Logger.Errorf("failed to create coverage directory %v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
}
filePath, err := pl.TASConfigManager.GetTasConfigFilePath(pl.Payload)
if err != nil {
return err
}
version, err := pl.TASConfigManager.GetVersion(filePath)
if err != nil {
pl.Logger.Errorf("Unable to load tas yaml file, error: %v", err)
err = &errs.StatusFailed{Remark: err.Error()}
return err
}
pl.Logger.Infof("TAS Version %d", version)
pl.setEnv(payload, coverageDir)
newDriver, err := pl.Builder.GetDriver(version, filePath)
if err != nil {
pl.Logger.Errorf("error creating driver, error %v", err)
return err
}
if pl.Cfg.DiscoverMode {
err = newDriver.RunDiscovery(ctx, payload, taskPayload, oauth, coverageDir, secretMap)
} else {
err = newDriver.RunExecution(ctx, payload, taskPayload, oauth, coverageDir, secretMap)
}
return err
}
func (pl *Pipeline) getTaskPayload(payload *Payload, startTime time.Time) *TaskPayload {
taskPayload := &TaskPayload{
TaskID: payload.TaskID,
BuildID: payload.BuildID,
RepoSlug: payload.RepoSlug,
RepoLink: payload.RepoLink,
OrgID: payload.OrgID,
RepoID: payload.RepoID,
GitProvider: payload.GitProvider,
StartTime: startTime,
Status: Running,
}
if pl.Cfg.DiscoverMode {
taskPayload.Type = DiscoveryTask
} else if pl.Cfg.FlakyMode {
taskPayload.Type = FlakyTask
} else {
taskPayload.Type = ExecutionTask
}
return taskPayload
}
func (pl *Pipeline) setEnv(payload *Payload, coverageDir string) {
// set testing taskID, orgID and buildID as environment variable
os.Setenv("TASK_ID", payload.TaskID)
os.Setenv("ORG_ID", payload.OrgID)
os.Setenv("BUILD_ID", payload.BuildID)
// set target commit_id as environment variable
os.Setenv("COMMIT_ID", payload.BuildTargetCommit)
// set repo_id as environment variable
os.Setenv("REPO_ID", payload.RepoID)
// set coverage_dir as environment variable
os.Setenv("CODE_COVERAGE_DIR", coverageDir)
os.Setenv("BRANCH_NAME", payload.BranchName)
os.Setenv("ENV", pl.Cfg.Env)
os.Setenv("ENDPOINT_POST_TEST_LIST", endpointPostTestList)
os.Setenv("ENDPOINT_POST_TEST_RESULTS", endpointPostTestResults)
os.Setenv("REPO_ROOT", global.RepoDir)
os.Setenv("BLOCK_TESTS_FILE", global.BlockTestFileLocation)
os.Setenv(global.SubModuleName, pl.Cfg.SubModule)
// set MODULE_PATH to empty as env variable
os.Setenv(global.ModulePath, "")
}
================================================
FILE: pkg/core/models.go
================================================
// Package core is the backbone of nucleus,
// it defines the pipeline lifecycle and allows attaching hooks for functionality
// as plugins.
package core
import (
"time"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
)
// ExecutionID type
type ExecutionID string
// SASURLPurpose defines the purposes for which a SAS URL is required
type SASURLPurpose string
// SASURLPurpose values
const (
PurposeCache SASURLPurpose = "cache"
PurposeWorkspaceCache SASURLPurpose = "workspace_cache"
PurposePreRunLogs SASURLPurpose = "pre_run_logs"
PurposePostRunLogs SASURLPurpose = "post_run_logs"
PurposeExecutionLogs SASURLPurpose = "execution_logs"
)
// Tier represents the synapse tier
type Tier string
// Tier values.
const (
Internal Tier = "internal"
XSmall Tier = "xsmall"
Small Tier = "small"
Medium Tier = "medium"
Large Tier = "large"
XLarge Tier = "xlarge"
)
// PostMergeStrategyName type
type PostMergeStrategyName string
// All const of type PostMergeStrategyName
const (
AfterNCommitStrategy PostMergeStrategyName = "after_n_commits"
)
// SplitMode is the mode for splitting tests
type SplitMode string
// list of supported test splitting modes
const (
FileSplit SplitMode = "file"
TestSplit SplitMode = "test"
)
// CommandType defines type of command
type CommandType string
// Types of Command string
const (
PreRun CommandType = "prerun"
PostRun CommandType = "postrun"
InstallRunners CommandType = "installrunners"
Execution CommandType = "execution"
Discovery CommandType = "discovery"
Zstd CommandType = "zstd"
CoverageMerge CommandType = "coveragemerge"
InstallNodeVer CommandType = "installnodeversion"
InitGit CommandType = "initgit"
RenameCloneFile CommandType = "renameclonefile"
)
// EventType represents the webhook event
type EventType string
const (
// EventPush represents the push event.
EventPush EventType = "push"
// EventPullRequest represents the pull request event.
EventPullRequest EventType = "pull-request"
)
// CommitChangeList defines information related to commits
type CommitChangeList struct {
Sha string `json:"Sha"`
Link string `json:"Link"`
Added []string `json:"added"`
Removed []string `json:"removed"`
Modified []string `json:"modified"`
Message string `json:"message"`
}
// Payload defines structure of payload
type Payload struct {
RepoSlug string `json:"repo_slug"`
ForkSlug string `json:"fork_slug"`
RepoLink string `json:"repo_link"`
BuildTargetCommit string `json:"build_target_commit"`
BuildBaseCommit string `json:"build_base_commit"`
TaskID string `json:"task_id"`
BranchName string `json:"branch_name"`
BuildID string `json:"build_id"`
RepoID string `json:"repo_id"`
OrgID string `json:"org_id"`
GitProvider string `json:"git_provider"`
PrivateRepo bool `json:"private_repo"`
EventType EventType `json:"event_type"`
Diff string `json:"diff_url"`
PullRequestNumber int `json:"pull_request_number"`
Commits []CommitChangeList `json:"commits"`
TasFileName string `json:"tas_file_name"`
Locators string `json:"locators"`
LocatorAddress string `json:"locator_address"`
ParentCommitCoverageExists bool `json:"parent_commit_coverage_exists"`
LicenseTier Tier `json:"license_tier"`
CollectCoverage bool `json:"collect_coverage"`
TaskType TaskType `json:"-"`
}
// Pipeline defines all attributes of Pipeline
type Pipeline struct {
Cfg *config.NucleusConfig
Payload *Payload
Logger lumber.Logger
PayloadManager PayloadManager
TASConfigManager TASConfigManager
GitManager GitManager
ExecutionManager ExecutionManager
DiffManager DiffManager
CacheStore CacheStore
TestDiscoveryService TestDiscoveryService
BlockTestService BlockTestService
TestExecutionService TestExecutionService
CoverageService CoverageService
TestStats TestStats
Task Task
SecretParser SecretParser
Builder Builder
}
type DiscoveryResult struct {
Tests []TestPayload `json:"tests"`
ImpactedTests []string `json:"impactedTests"`
TestSuites []TestSuitePayload `json:"testSuites"`
ExecuteAllTests bool `json:"executeAllTests"`
Parallelism int `json:"parallelism"`
SplitMode SplitMode `json:"splitMode"`
RepoID string `json:"repoID"`
BuildID string `json:"buildID"`
CommitID string `json:"commitID"`
TaskID string `json:"taskID"`
OrgID string `json:"orgID"`
Branch string `json:"branch"`
SubModule string `json:"subModule"`
}
// ExecutionResult represents the request body for test and test suite execution
type ExecutionResult struct {
TestPayload []TestPayload `json:"testResults"`
TestSuitePayload []TestSuitePayload `json:"testSuiteResults"`
}
// ExecutionResults represents collection of execution results
type ExecutionResults struct {
TaskID string `json:"taskID"`
BuildID string `json:"buildID"`
RepoID string `json:"repoID"`
OrgID string `json:"orgID"`
CommitID string `json:"commitID"`
TaskType TaskType `json:"taskType"`
Results []ExecutionResult `json:"results"`
}
// TestReportResponsePayload represents the response body for test and test suite report api.
type TestReportResponsePayload struct {
TaskID string `json:"taskID"`
TaskStatus Status `json:"taskStatus"`
Remark string `json:"remark,omitempty"`
}
// TestPayload represents the request body for test execution
type TestPayload struct {
TestID string `json:"testID"`
Detail string `json:"_detail"`
SuiteID string `json:"suiteID"`
Suites []string `json:"_suites"`
Title string `json:"title"`
FullTitle string `json:"fullTitle"`
Name string `json:"name"`
Duration int `json:"duration"`
FilePath string `json:"file"`
Line string `json:"line"`
Col string `json:"col"`
CurrentRetry int `json:"currentRetry"`
Status string `json:"status"`
DAG []string `json:"dependsOn"`
Filelocator string `json:"locator"`
BlocklistSource string `json:"blocklistSource"`
Blocklisted bool `json:"blocklist"`
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time"`
Stats []TestProcessStats `json:"stats"`
FailureMessage string `json:"failureMessage"`
}
// TestSuitePayload represents the request body for test suite execution
type TestSuitePayload struct {
SuiteID string `json:"suiteID"`
SuiteName string `json:"suiteName"`
ParentSuiteID string `json:"parentSuiteID"`
BlocklistSource string `json:"blocklistSource"`
Blocklisted bool `json:"blocklist"`
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time"`
Duration int `json:"duration"`
Status string `json:"status"`
Stats []TestProcessStats `json:"stats"`
TotalTests int `json:"totalTests"`
}
// TestProcessStats represents the process stats associated with each test
type TestProcessStats struct {
Memory uint64 `json:"memory_consumed,omitempty"`
CPU float64 `json:"cpu_percentage,omitempty"`
Storage uint64 `json:"storage,omitempty"`
RecordTime time.Time `json:"record_time"`
}
// Status represents the task status
type Status string
// Const related to task status
const (
Initiating Status = "initiating"
Running Status = "running"
Failed Status = "failed"
Aborted Status = "aborted"
Passed Status = "passed"
Error Status = "error"
)
// TaskPayload represents the task response given by nucleus to neuron
type TaskPayload struct {
TaskID string `json:"task_id"`
Status Status `json:"status"`
RepoSlug string `json:"repo_slug"`
RepoLink string `json:"repo_link"`
RepoID string `json:"repo_id"`
OrgID string `json:"org_id"`
GitProvider string `json:"git_provider"`
CommitID string `json:"commit_id,omitempty"`
BuildID string `json:"build_id"`
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time,omitempty"`
Remark string `json:"remark,omitempty"`
Type TaskType `json:"type"`
}
// CoverageManifest for post processing coverage job
type CoverageManifest struct {
Removedfiles []string `json:"removed_files"`
AllFilesExecuted bool `json:"all_files_executed"`
CoverageThreshold *CoverageThreshold `json:"coverage_threshold,omitempty"`
}
const (
// FileAdded file added in commit
FileAdded int = iota + 1
// FileRemoved file removed in commit
FileRemoved
// FileModified file modified in commit
FileModified
)
const (
// GitHub as git provider
GitHub string = "github"
// GitLab as git provider
GitLab string = "gitlab"
// Bitbucket as git provider
Bitbucket string = "bitbucket"
)
type TokenType string
const (
// Bearer as token type
Bearer TokenType = "Bearer"
// Basic as token type
Basic TokenType = "Basic"
)
// Oauth represents the structure of Oauth
type Oauth struct {
AccessToken string `json:"access_token"`
Expiry time.Time `json:"expiry"`
RefreshToken string `json:"refresh_token"`
Type TokenType `json:"token_type,omitempty"`
}
// TASConfig represents the .tas.yml file
type TASConfig struct {
SmartRun bool `yaml:"smartRun"`
Framework string `yaml:"framework" validate:"required,oneof=jest mocha jasmine golang junit"`
Blocklist []string `yaml:"blocklist"`
Postmerge *Merge `yaml:"postMerge" validate:"omitempty"`
Premerge *Merge `yaml:"preMerge" validate:"omitempty"`
Cache *Cache `yaml:"cache" validate:"omitempty"`
Prerun *Run `yaml:"preRun" validate:"omitempty"`
Postrun *Run `yaml:"postRun" validate:"omitempty"`
Parallelism int `yaml:"parallelism"`
SplitMode SplitMode `yaml:"splitMode" validate:"oneof=test file"`
SkipCache bool `yaml:"skipCache"`
ConfigFile string `yaml:"configFile" validate:"omitempty"`
CoverageThreshold *CoverageThreshold `yaml:"coverageThreshold" validate:"omitempty"`
Tier Tier `yaml:"tier" validate:"oneof=xsmall small medium large xlarge"`
NodeVersion string `yaml:"nodeVersion" validate:"omitempty,semver"`
ContainerImage string `yaml:"containerImage"`
FrameworkVersion int `yaml:"frameworkVersion" validate:"omitempty"`
Version string `yaml:"version" validate:"required"`
}
// CoverageThreshold represents the code coverage threshold
type CoverageThreshold struct {
Branches float64 `yaml:"branches" json:"branches" validate:"number,min=0,max=100"`
Lines float64 `yaml:"lines" json:"lines" validate:"number,min=0,max=100"`
Functions float64 `yaml:"functions" json:"functions" validate:"number,min=0,max=100"`
Statements float64 `yaml:"statements" json:"statements" validate:"number,min=0,max=100"`
PerFile bool `yaml:"perFile" json:"perFile"`
}
// Cache represents the user's cached directories
type Cache struct {
Key string `yaml:"key" validate:"required"`
Paths []string `yaml:"paths" validate:"required"`
}
// Modifier defines struct for modifier
type Modifier struct {
Type string
Config string
Cli string
}
// Run represents pre and post runs
type Run struct {
Commands []string `yaml:"command" validate:"omitempty,gt=0"`
EnvMap map[string]string `yaml:"env" validate:"omitempty,gt=0"`
}
// Merge represents pre and post merge
type Merge struct {
Patterns []string `yaml:"pattern" validate:"required,gt=0"`
EnvMap map[string]string `yaml:"env" validate:"omitempty,gt=0"`
}
// Stability defines struct for stability
type Stability struct {
ConsecutiveRuns int `yaml:"consecutive_runs"`
}
// TaskType specifies the type of a Task
type TaskType string
// Task Type values.
const (
DiscoveryTask TaskType = "discover"
ExecutionTask TaskType = "execute"
FlakyTask TaskType = "flaky"
)
// TestStatus stores tests status
type TestStatus string
const (
Blocklisted TestStatus = "blocklisted"
Quarantined TestStatus = "quarantined"
)
// TASConfigV2 represents the TASConfig for version 2 and above
type TASConfigV2 struct {
SmartRun bool `yaml:"smartRun"`
Cache *Cache `yaml:"cache" validate:"omitempty"`
Tier Tier `yaml:"tier" validate:"oneof=xsmall small medium large xlarge"`
PostMerge *MergeV2 `yaml:"postMerge" validate:"omitempty"`
PreMerge *MergeV2 `yaml:"preMerge" validate:"omitempty"`
SkipCache bool `yaml:"skipCache"`
CoverageThreshold *CoverageThreshold `yaml:"coverageThreshold" validate:"omitempty"`
Parallelism int `yaml:"parallelism"` // TODO: will be supported later
Version string `yaml:"version" validate:"required"`
SplitMode SplitMode `yaml:"splitMode" validate:"oneof=test file"`
ContainerImage string `yaml:"containerImage"`
NodeVersion string `yaml:"nodeVersion" validate:"omitempty,semver"`
}
// MergeV2 represents the MergeConfig for version 2 and above
type MergeV2 struct {
PreRun *Run `yaml:"preRun" validate:"omitempty"`
SubModules []SubModule `yaml:"subModules" validate:"required,gt=0"`
EnvMap map[string]string `yaml:"env" validate:"omitempty,gt=0"`
}
// SubModule represents the structure of a subModule in yaml v2
type SubModule struct {
Name string `yaml:"name" validate:"required"`
Path string `yaml:"path" validate:"required"`
Patterns []string `yaml:"pattern" validate:"required,gt=0"`
Framework string `yaml:"framework" validate:"required,oneof=jest mocha jasmine"`
Blocklist []string `yaml:"blocklist"`
Prerun *Run `yaml:"preRun" validate:"omitempty"`
Postrun *Run `yaml:"postRun" validate:"omitempty"`
RunPrerunEveryTime bool `yaml:"runPreRunEveryTime"`
Parallelism int `yaml:"parallelism"` // TODO: will be supported later
ConfigFile string `yaml:"configFile" validate:"omitempty"`
}
// TasVersion is used to identify the yaml version
type TasVersion struct {
Version string `yaml:"version" validate:"required"`
}
// SubModuleList represents the submodule list API payload
type SubModuleList struct {
BuildID string `json:"buildID"`
TotalSubModule int `json:"totalSubModule"`
}
// DiscoveyArgs specifies the arguments for test discovery
type DiscoveyArgs struct {
TestPattern []string
Payload *Payload
EnvMap map[string]string
SecretData map[string]string
TestConfigFile string
FrameWork string
SmartRun bool
Diff map[string]int
DiffExists bool
FrameWorkVersion int
CWD string
}
// TestExecutionArgs specifies the arguments for test execution
type TestExecutionArgs struct {
Payload *Payload
CoverageDir string
LogWriterStrategy LogWriterStrategy
TestPattern []string
EnvMap map[string]string
TestConfigFile string
FrameWork string
SecretData map[string]string
FrameWorkVersion int
CWD string
}
// YMLParsingRequestMessage defines yml parsing request received from TAS server
type YMLParsingRequestMessage struct {
GitProvider string `json:"gitProvider"`
CommitID string `json:"commitID"`
Event EventType `json:"eventType"`
RepoSlug string `json:"repoSlug"`
TasFileName string `json:"tasFilePath"`
LicenseTier Tier `json:"license_tier"`
OrgID string `json:"orgID"`
BuildID string `json:"buildID"`
}
// TASConfigDownloaderOutput represents the output returned by the TAS config downloader
type TASConfigDownloaderOutput struct {
Version int `json:"version"`
TASConfig interface{} `json:"tasConfig"`
}
// YMLParsingResultMessage represents the message sent to the TAS server in response to a yml parsing request
type YMLParsingResultMessage struct {
ErrorMsg string `json:"ErrorMsg"`
OrgID string `json:"orgID"`
BuildID string `json:"buildID"`
YMLOutput TASConfigDownloaderOutput `json:"ymlOutput"`
}
================================================
FILE: pkg/core/runner.go
================================================
package core
import (
"context"
"time"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/errs"
)
// Specs denotes system specification
type Specs struct {
CPU float32
RAM int64
}
// TierOpts is a const map which maps each tier to its specs
var TierOpts = map[Tier]Specs{
Internal: {CPU: 0.5, RAM: 256},
XSmall: {CPU: 1, RAM: 2048},
Small: {CPU: 2, RAM: 4096},
Medium: {CPU: 4, RAM: 8192},
Large: {CPU: 8, RAM: 16384},
XLarge: {CPU: 16, RAM: 32768},
}
// ContainerStatus contains status of container
type ContainerStatus struct {
Done bool
Error errs.Err
}
// ContainerImageConfig contains registry config for docker
type ContainerImageConfig struct {
AuthRegistry string
Image string
Mode config.ModeType
PullPolicy config.PullPolicyType
}
// DockerRunner defines operations for docker
type DockerRunner interface {
// Create creates the execution engine
Create(context.Context, *RunnerOptions) ContainerStatus
// Run runs the execution engine
Run(context.Context, *RunnerOptions) ContainerStatus
// WaitForCompletion waits for the runner to complete
WaitForCompletion(ctx context.Context, r *RunnerOptions) error
// Destroy the execution engine
Destroy(ctx context.Context, r *RunnerOptions) error
// GetInfo gets resource details of the infra
GetInfo(context.Context) (float32, int64)
// Initiate runs docker containers
Initiate(context.Context, *RunnerOptions, chan ContainerStatus)
// PullImage will pull image from remote
PullImage(containerImageConfig *ContainerImageConfig, r *RunnerOptions) error
// KillRunningDocker kills containers spawned by synapse
KillRunningDocker(ctx context.Context)
// KillContainerForBuildID kills the synapse container running for the given buildID
KillContainerForBuildID(buildID string) error
// CreateVolume creates a docker volume
CreateVolume(ctx context.Context, r *RunnerOptions) error
// RemoveOldVolumes removes volumes that are older than X hours
RemoveOldVolumes(ctx context.Context)
// CopyFileToContainer copies content into a file inside the container
CopyFileToContainer(ctx context.Context, path, fileName, containerID string, content []byte) error
// FindVolumes checks if docker volume is available
FindVolumes(volumeName string) (bool, error)
// RemoveVolume removes volume
RemoveVolume(ctx context.Context, volumeName string) error
}
// VolumeDetails holds docker volume options
type VolumeDetails struct {
CreatedAt time.Time `json:"CreatedAt,omitempty"`
Driver string `json:"Driver"`
Labels map[string]string `json:"Labels"`
Mountpoint string `json:"Mountpoint"`
Name string `json:"Name"`
Options map[string]string `json:"Options"`
Scope string `json:"Scope"`
Status map[string]interface{} `json:"Status,omitempty"`
}
// RunnerOptions provides the required instructions for the execution engine.
type RunnerOptions struct {
ContainerID string `json:"container_id"`
DockerImage string `json:"docker_image"`
ContainerPort int `json:"container_port"`
HostPort int `json:"host_port"`
Label map[string]string `json:"label"`
NameSpace string `json:"name_space"`
ServiceAccount string `json:"service_account"`
PodName string `json:"pod_name"`
ContainerName string `json:"container_name"`
ContainerArgs []string `json:"container_args"`
ContainerCommands []string `json:"container_commands"`
HostVolumePath string `json:"host_volume_path"`
Env []string `json:"env"`
OrgID string `json:"org_id"`
Vault *VaultOpts `json:"vault"`
LogfilePath string `json:"logfile_path"`
PodType PodType `json:"pod_type"`
Tier Tier `json:"tier"`
}
// VaultOpts provides the vault path options
type VaultOpts struct {
// SecretPath path of the repo secrets.
SecretPath string
// TokenPath path of the user token.
TokenPath string
// RoleName vault role name
RoleName string
// Namespace is the default vault namespace
Namespace string
}
// PodType specifies the type of pod
type PodType string
// Values that PodType can take
const (
NucleusPod PodType = "nucleus"
CoveragePod PodType = "coverage"
)
================================================
FILE: pkg/core/secrets.go
================================================
package core
import "github.com/LambdaTest/test-at-scale/config"
// Secret struct for holding secret data
type Secret map[string]string
// VaultSecret holds secrets in vault format
type VaultSecret struct {
Secrets Secret `json:"data"`
}
// SecretsManager defines operations for secrets
type SecretsManager interface {
// GetLambdatestSecrets returns lambdatest config
GetLambdatestSecrets() *config.LambdatestConfig
// GetDockerSecrets returns Mode, RegistryAuth, and URL for pulling a remote docker image
GetDockerSecrets(r *RunnerOptions) (ContainerImageConfig, error)
// GetSynapseName returns synapse name mentioned in config
GetSynapseName() string
// GetOauthToken returns oauth token
GetOauthToken() *Oauth
// GetGitSecretBytes returns git secrets as bytes
GetGitSecretBytes() ([]byte, error)
// GetRepoSecretBytes returns repo secrets as bytes
GetRepoSecretBytes(repo string) ([]byte, error)
}
================================================
FILE: pkg/core/synapse.go
================================================
package core
import (
"context"
"sync"
)
// SynapseManager defines operations for the synapse client
type SynapseManager interface {
// InitiateConnection initiates the connection with LT cloud
InitiateConnection(ctx context.Context, wg *sync.WaitGroup, connectionFailed chan struct{})
}
================================================
FILE: pkg/core/wsproto.go
================================================
package core
// MessageType defines type of message
type MessageType string
// StatusType defines the type of job status
type StatusType string
// StatType defines type of resource status
type StatType string
// types of messages
const (
MsgLogin MessageType = "login"
MsgLogout MessageType = "logout"
MsgTask MessageType = "task"
MsgInfo MessageType = "info"
MsgError MessageType = "error"
MsgResourceStats MessageType = "resourcestats"
MsgJobInfo MessageType = "jobinfo"
MsgBuildAbort MessageType = "build_abort"
MsgYMLParsingRequest MessageType = "yml_parsing_request"
MsgYMLParsingResult MessageType = "yml_parsing_result"
)
// JobInfo types
const (
JobCompleted StatusType = "complete"
JobStarted StatusType = "started"
JobFailed StatusType = "failed"
JobAborted StatusType = "aborted"
)
// ResourceStats types
const (
ResourceRelease StatType = "release"
ResourceCapture StatType = "capture"
)
// Message struct
type Message struct {
Type MessageType `json:"type"`
Content []byte `json:"content"`
Success bool `json:"success"`
}
// LoginDetails struct
type LoginDetails struct {
Name string `json:"name"`
SynapseID string `json:"synapse_id"`
SecretKey string `json:"secret_key"`
CPU float32 `json:"cpu"`
RAM int64 `json:"ram"`
SynapseVersion string `json:"synapse_version"`
}
// ResourceStats struct for CPU, RAM details
type ResourceStats struct {
Status StatType `json:"status"`
CPU float32 `json:"cpu"`
RAM int64 `json:"ram"`
}
// JobInfo struct for job update info
type JobInfo struct {
Status StatusType `json:"status"`
JobID string `json:"job_id"`
ID string `json:"id"`
Mode string `json:"mode"`
BuildID string `json:"build_id"`
Message string `json:"message"`
}
// BuildAbortMsg struct defines message for aborting a build
type BuildAbortMsg struct {
BuildID string `json:"build_id"`
}
================================================
FILE: pkg/cron/setup.go
================================================
package cron
import (
"context"
"sync"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/robfig/cron/v3"
)
// Setup initializes all crons on service startup
func Setup(ctx context.Context, wg *sync.WaitGroup, logger lumber.Logger, runner core.DockerRunner) {
defer wg.Done()
c := cron.New()
if _, err := c.AddFunc("@every 5m", func() { cleanupBuildCache(runner) }); err != nil {
logger.Errorf("error setting up cron")
return
}
c.Start()
<-ctx.Done()
c.Stop()
logger.Infof("Caller has requested graceful shutdown. Returning.....")
}
func cleanupBuildCache(runner core.DockerRunner) {
runner.RemoveOldVolumes(context.Background())
}
================================================
FILE: pkg/diffmanager/setup.go
================================================
// Package diffmanager figures out the changed files for a build
package diffmanager
import (
"bufio"
"context"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"strings"
"time"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/urlmanager"
)
//TODO: add logger
type diffManager struct {
cfg *config.NucleusConfig
client http.Client
logger lumber.Logger
}
type gitLabDiffList struct {
CommitDiff []gitLabDiff `json:"diffs"`
PRDiff []gitLabDiff `json:"changes"`
}
type gitLabDiff struct {
OldPath string `json:"old_path"`
NewPath string `json:"new_path"`
NewFile bool `json:"new_file"`
RenamedFile bool `json:"renamed_file"`
DeletedFile bool `json:"deleted_file"`
}
// NewDiffManager instantiates a DiffManager
func NewDiffManager(cfg *config.NucleusConfig, logger lumber.Logger) *diffManager {
return &diffManager{
cfg: cfg,
logger: logger,
client: http.Client{
Timeout: 30 * time.Second,
Transport: &http.Transport{
DisableKeepAlives: true,
},
},
}
}
// updateWithOr ORs value into the existing map entry for key
func (dm *diffManager) updateWithOr(m map[string]int, key string, value int) {
// a missing key reads as the zero value, so a plain |= suffices
m[key] |= value
}
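The bitwise-OR update lets one file accumulate several change types (for example, a rename appears in a unified diff as a removal plus an addition). A standalone sketch, using illustrative flag values rather than the real `pkg/core` constants:

```go
package main

import "fmt"

// Illustrative change-type bitmask flags; the actual values live in
// pkg/core and may differ.
const (
	fileAdded    = 1 << 0
	fileRemoved  = 1 << 1
	fileModified = 1 << 2
)

// updateWithOr mirrors diffManager.updateWithOr: a missing key reads as 0,
// so ORing into the map entry accumulates every change type seen so far.
func updateWithOr(m map[string]int, key string, value int) {
	m[key] |= value
}

func main() {
	m := map[string]int{}
	// a rename shows up as a removal of the old path plus an addition
	updateWithOr(m, "src/app.ts", fileRemoved)
	updateWithOr(m, "src/app.ts", fileAdded)
	fmt.Println(m["src/app.ts"] == fileAdded|fileRemoved) // true
}
```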
func (dm *diffManager) getCommitDiff(gitprovider, repoURL string, oauth *core.Oauth, baseCommit, targetCommit, forkSlug string) ([]byte, error) {
if baseCommit == "" {
dm.logger.Debugf("basecommit is empty for gitprovider %v error %v", gitprovider, errs.ErrGitDiffNotFound)
return nil, errs.ErrGitDiffNotFound
}
url, err := url.Parse(repoURL)
if err != nil {
return nil, err
}
apiURLString, err := urlmanager.GetCommitDiffURL(gitprovider, url.Path, baseCommit, targetCommit, forkSlug)
if err != nil {
dm.logger.Errorf("failed to get api url for gitprovider: %v error: %v", gitprovider, err)
return nil, err
}
apiURL, err := url.Parse(apiURLString)
if err != nil {
return nil, err
}
req, err := http.NewRequest(http.MethodGet, apiURL.String(), nil)
if err != nil {
return nil, err
}
if oauth.AccessToken != "" {
req.Header.Add("Authorization", fmt.Sprintf("%s %s", oauth.Type, oauth.AccessToken))
}
req.Header.Add("Accept", "application/vnd.github.v3.diff")
resp, err := dm.client.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
//TODO: Handle initial commit case
if resp.StatusCode != http.StatusOK {
return nil, errs.ErrGitDiffNotFound
}
return ioutil.ReadAll(resp.Body)
}
func (dm *diffManager) getPRDiff(gitprovider, repoURL string, prNumber int, oauth *core.Oauth) ([]byte, error) {
parsedUrl, err := url.Parse(repoURL)
if err != nil {
return nil, err
}
diffURL, err := urlmanager.GetPullRequestDiffURL(gitprovider, parsedUrl.Path, prNumber)
if err != nil {
dm.logger.Errorf("failed to get diff url error: %v", err)
return nil, err
}
changeListURL, err := url.Parse(diffURL)
if err != nil {
dm.logger.Errorf("failed to get changelist url error: %v", err)
return nil, err
}
req, err := http.NewRequest(http.MethodGet, changeListURL.String(), nil)
if err != nil {
dm.logger.Errorf("failed to create http request for changelist url error: %v", err)
return nil, err
}
req.Header.Add("Authorization", fmt.Sprintf("%s %s", oauth.Type, oauth.AccessToken))
req.Header.Set("Accept", "application/vnd.github.v3.diff")
resp, err := dm.client.Do(req)
if err != nil {
dm.logger.Errorf("failed to get changedlist url api error: %v", err)
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, errors.New("non 200 response")
}
return ioutil.ReadAll(resp.Body)
}
func (dm *diffManager) parseDiff(diff string) map[string]int {
m := make(map[string]int)
scanner := bufio.NewScanner(strings.NewReader(diff))
for scanner.Scan() {
line := scanner.Text()
if strings.HasPrefix(line, "--- a/") {
// removed
dm.updateWithOr(m, line[6:], core.FileRemoved)
} else if strings.HasPrefix(line, "+++ b/") {
// added or updated
dm.updateWithOr(m, line[6:], core.FileAdded)
}
}
return m
}
func (dm *diffManager) parseGitLabDiff(eventType core.EventType, diff []byte) (map[string]int, error) {
m := make(map[string]int)
var diffList gitLabDiffList
err := json.Unmarshal(diff, &diffList)
if err != nil {
dm.logger.Errorf("failed to unmarshall diff %v error %v", string(diff), err)
return nil, err
}
diffs := diffList.PRDiff
if eventType == core.EventPush {
diffs = diffList.CommitDiff
}
for _, diff := range diffs {
if diff.DeletedFile {
// removed
dm.updateWithOr(m, diff.OldPath, core.FileRemoved)
} else if diff.NewFile {
// added
dm.updateWithOr(m, diff.NewPath, core.FileAdded)
} else {
// updated
dm.updateWithOr(m, diff.NewPath, core.FileModified)
}
}
return m, nil
}
func (dm *diffManager) parseGitDiff(gitprovider string, eventType core.EventType, diff []byte) (map[string]int, error) {
switch gitprovider {
case core.GitHub, core.Bitbucket:
return dm.parseDiff(string(diff)), nil
case core.GitLab:
return dm.parseGitLabDiff(eventType, diff)
default:
return nil, errs.ErrUnsupportedGitProvider
}
}
// GetChangedFiles figures out the changed files
func (dm *diffManager) GetChangedFiles(ctx context.Context, payload *core.Payload, oauth *core.Oauth) (map[string]int, error) {
// map to store file and type of change (added, removed, modified)
var m map[string]int
var diff []byte
var err error
if payload.EventType == core.EventPullRequest {
diff, err = dm.getPRDiff(payload.GitProvider, payload.RepoLink, payload.PullRequestNumber, oauth)
if err != nil {
dm.logger.Errorf("failed to parse pr diff for gitprovider: %s error: %v", payload.GitProvider, err)
return nil, err
}
} else {
diff, err = dm.getCommitDiff(payload.GitProvider, payload.RepoLink, oauth, payload.BuildBaseCommit, payload.BuildTargetCommit, payload.ForkSlug)
if err != nil {
dm.logger.Errorf("failed to get commit diff for gitprovider: %s error: %v", payload.GitProvider, err)
return nil, err
}
}
m, err = dm.parseGitDiff(payload.GitProvider, payload.EventType, diff)
if err != nil {
dm.logger.Errorf("failed to parse gitdiff for gitprovider: %s error: %v", payload.GitProvider, err)
return nil, err
}
return m, nil
}
================================================
FILE: pkg/diffmanager/setup_test.go
================================================
// Package diffmanager figures out the changed files for a build
package diffmanager
import (
"context"
"math/rand"
"net/http"
"net/http/httptest"
"reflect"
"testing"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/testutils"
)
func Test_updateWithOr(t *testing.T) {
check := func(t *testing.T) {
dm := &diffManager{}
m := make(map[string]int)
key := "key"
val := rand.Intn(1000) // nolint:gosec
dm.updateWithOr(m, key, val)
if ans, exists := m[key]; !exists || ans != val {
t.Errorf("Expected: %v, received: %v", val, m[key])
}
newVal := rand.Intn(1000) // nolint:gosec
dm.updateWithOr(m, key, newVal)
if ans, exists := m[key]; !exists || ans != (val|newVal) {
t.Errorf("Expected: %v, received: %v", val|newVal, m[key])
}
}
t.Run("Test_updateWithOr", func(t *testing.T) {
check(t)
})
}
func Test_diffManager_GetChangedFiles_PRDiff(t *testing.T) {
server := httptest.NewServer( // mock server
http.FileServer(http.Dir("../../testutils")), // mock data stored at testutils/testdata
)
defer server.Close()
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Can't get logger, received: %s", err)
}
config, err := testutils.GetConfig()
if err != nil {
t.Errorf("Can't get config, received: %s", err)
}
dm := NewDiffManager(config, logger)
type args struct {
ctx context.Context
payload *core.Payload
oauth *core.Oauth
}
tests := []struct {
name string
args args
want map[string]int
wantErr bool
}{
// expects to hit Server.URL/testdata/pulls/2
{"Test GetChangedFile for PRdiff for github gitprovider",
args{ctx: context.TODO(),
payload: &core.Payload{RepoSlug: "/testdata", RepoLink: server.URL + "/testdata",
GitProvider: "github", PrivateRepo: false, EventType: "pull-request", Diff: "xyz", PullRequestNumber: 2},
oauth: &core.Oauth{}}, map[string]int{}, false},
// expects to hit Server.URL/testdata/merge_requests/2/changes
{"Test GetChangedFile for PRdiff for gitlab gitprovider",
args{ctx: context.TODO(),
payload: &core.Payload{RepoSlug: "/testdata", RepoLink: server.URL + "/testdata",
GitProvider: "gitlab", PrivateRepo: false, EventType: "pull-request", Diff: "xyz", PullRequestNumber: 2},
oauth: &core.Oauth{}},
map[string]int{}, false},
{"Test GetChangedFile for Commitdiff for unsupported gitprovider", args{ctx: context.TODO(),
payload: &core.Payload{GitProvider: "unsupported"},
oauth: &core.Oauth{}},
map[string]int{}, true},
{"Test GetChangedFile for PRdiff for unsupported gitprovider", args{ctx: context.TODO(),
payload: &core.Payload{GitProvider: "unsupported", EventType: "pull-request"},
oauth: &core.Oauth{}}, map[string]int{}, true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
global.APIHostURLMap[tt.args.payload.GitProvider] = server.URL
resp, err := dm.GetChangedFiles(tt.args.ctx, tt.args.payload, tt.args.oauth)
if tt.wantErr {
if err == nil {
t.Errorf("GetChangedFiles() error = %v, wantErr %v", err, tt.wantErr)
}
return
}
expResp := map[string]int{"src/steps/resource.ts": 3}
if err != nil {
t.Errorf("error in getting changed files, error %v", err.Error())
} else if tt.args.payload.GitProvider == "github" && !reflect.DeepEqual(resp, expResp) {
t.Errorf("Expected: %+v, received: %+v", expResp, resp)
} else if tt.args.payload.GitProvider == "gitlab" && len(resp) != 17 {
t.Errorf("Expected map entries: 17, received: %v, received map: %v", len(resp), resp)
}
})
}
}
func Test_diffManager_GetChangedFiles_CommitDiff_Github(t *testing.T) {
server := httptest.NewServer( // mock server
http.FileServer(http.Dir("../../testutils")),
)
defer server.Close()
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Can't get logger, received: %s", err)
}
config, err := testutils.GetConfig()
if err != nil {
t.Errorf("Can't get config, received: %s", err)
}
dm := NewDiffManager(config, logger)
type args struct {
ctx context.Context
payload *core.Payload
oauth *core.Oauth
}
tests := []struct {
name string
args args
want map[string]int
wantErr bool
}{
// expects to hit serverURL/testdata/compare/abc...xyz
{"Test GetChangedFile for CommitDiff for github gitprovider",
args{ctx: context.TODO(),
payload: &core.Payload{RepoSlug: "/testdata", RepoLink: server.URL + "/testdata", BuildTargetCommit: "xyz", BuildBaseCommit: "abc",
GitProvider: "github", EventType: "push", Diff: "xyz", PullRequestNumber: 2},
oauth: &core.Oauth{}},
map[string]int{}, false},
{"Test GetChangedFile for CommitDiff for github provider and empty base commit",
args{ctx: context.TODO(),
payload: &core.Payload{RepoSlug: "/testdata", RepoLink: server.URL + "/testdata", BuildBaseCommit: "",
GitProvider: "gitlab", EventType: "push"}, oauth: &core.Oauth{}}, map[string]int{}, true},
{"Test GetChangedFile for CommitDiff for github provider for non 200 response",
args{ctx: context.TODO(), payload: &core.Payload{RepoLink: server.URL + "/notfound/", BuildTargetCommit: "xyz", BuildBaseCommit: "abc",
GitProvider: "gitlab", EventType: "push"}, oauth: &core.Oauth{}}, map[string]int{}, true},
{"Test GetChangedFile for CommitDiff for non supported git provider",
args{ctx: context.TODO(),
payload: &core.Payload{RepoSlug: "/notfound/", RepoLink: server.URL + "/notfound/", BuildTargetCommit: "xyz", BuildBaseCommit: "abc",
GitProvider: "gittest", EventType: "push"}, oauth: &core.Oauth{}}, map[string]int{}, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
global.APIHostURLMap[tt.args.payload.GitProvider] = server.URL
resp, err := dm.GetChangedFiles(tt.args.ctx, tt.args.payload, tt.args.oauth)
if tt.args.payload.GitProvider == "gittest" {
if resp != nil || err == nil {
t.Errorf("Expected error: 'unsupported git provider', received: %v\nexpected response: nil, received: %v", err, resp)
}
return
}
if tt.wantErr {
if err == nil {
t.Errorf("Expected error: %v, Received error: %v, response: %v", tt.wantErr, err, resp)
}
return
}
expResp := make(map[string]int)
if err != nil {
t.Errorf("error in getting changed files, error %v", err.Error())
} else if !reflect.DeepEqual(resp, expResp) {
t.Errorf("Expected: %+v, received: %+v", expResp, resp)
}
})
}
}
func Test_diffManager_GetChangedFiles_CommitDiff_Gitlab(t *testing.T) {
data, err := testutils.GetGitlabCommitDiff()
if err != nil {
t.Errorf("Received error in getting test gitlab commit diff, error: %v", err)
}
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/testdata/repository/compare" {
t.Errorf("Expected request path /testdata/repository/compare, got: %v", r.URL.Path)
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, err2 := w.Write(data)
if err2 != nil {
t.Errorf("Error in writing response data, error: %v", err2)
}
}))
defer server.Close()
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Can't get logger, received: %s", err)
}
config, err := testutils.GetConfig()
if err != nil {
t.Errorf("Can't get config, received: %s", err)
}
dm := NewDiffManager(config, logger)
type args struct {
ctx context.Context
payload *core.Payload
oauth *core.Oauth
}
tests := []struct {
name string
args args
want map[string]int
}{
// expects to hit serverURL/testdata/repository/compare?from=abc&to=abcd
{"Test GetChangedFile for CommitDiff for gitlab gitprovider",
args{ctx: context.TODO(),
payload: &core.Payload{RepoSlug: "/testdata", RepoLink: server.URL + "/testdata",
BuildTargetCommit: "abcd", BuildBaseCommit: "abc", TaskID: "taskid", BranchName: "branchname", BuildID: "buildid", RepoID: "repoid",
OrgID: "orgid", GitProvider: "gitlab", PrivateRepo: false, EventType: "push", Diff: "xyz", PullRequestNumber: 2},
oauth: &core.Oauth{}}, map[string]int{}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
global.APIHostURLMap[tt.args.payload.GitProvider] = server.URL
resp, err := dm.GetChangedFiles(tt.args.ctx, tt.args.payload, tt.args.oauth)
if err != nil {
t.Errorf("error in getting changed files, error %v", err.Error())
} else if len(resp) != 202 {
t.Errorf("Expected map length: 202, received: %v\nreceived map: %v", len(resp), resp)
}
})
}
}
================================================
FILE: pkg/driver/builder.go
================================================
package driver
import (
"context"
"fmt"
"os"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
)
const (
firstVersion = 1
secondVersion = 2
)
type (
Builder struct {
Logger lumber.Logger
TestExecutionService core.TestExecutionService
TestDiscoveryService core.TestDiscoveryService
AzureClient core.AzureClient
BlockTestService core.BlockTestService
ExecutionManager core.ExecutionManager
TASConfigManager core.TASConfigManager
CacheStore core.CacheStore
DiffManager core.DiffManager
ListSubModuleService core.ListSubModuleService
}
NodeInstaller struct {
logger lumber.Logger
ExecutionManager core.ExecutionManager
}
)
func (b *Builder) GetDriver(version int, filePath string) (core.Driver, error) {
switch version {
case firstVersion:
return &driverV1{
logger: b.Logger,
TestExecutionService: b.TestExecutionService,
TestDiscoveryService: b.TestDiscoveryService,
AzureClient: b.AzureClient,
BlockTestService: b.BlockTestService,
ExecutionManager: b.ExecutionManager,
TASConfigManager: b.TASConfigManager,
CacheStore: b.CacheStore,
DiffManager: b.DiffManager,
ListSubModuleService: b.ListSubModuleService,
TASVersion: firstVersion,
TASFilePath: filePath,
nodeInstaller: NodeInstaller{
logger: b.Logger,
ExecutionManager: b.ExecutionManager,
},
}, nil
case secondVersion:
return &driverV2{
logger: b.Logger,
TestExecutionService: b.TestExecutionService,
TestDiscoveryService: b.TestDiscoveryService,
AzureClient: b.AzureClient,
BlockTestService: b.BlockTestService,
ExecutionManager: b.ExecutionManager,
TASConfigManager: b.TASConfigManager,
CacheStore: b.CacheStore,
DiffManager: b.DiffManager,
ListSubModuleService: b.ListSubModuleService,
TASVersion: secondVersion,
TASFilePath: filePath,
nodeInstaller: NodeInstaller{
logger: b.Logger,
ExecutionManager: b.ExecutionManager,
},
}, nil
default:
return nil, fmt.Errorf("invalid version ( %d ) mentioned in yml file", version)
}
}
func (n *NodeInstaller) InstallNodeVersion(ctx context.Context, nodeVersion string) error {
// Running the `source` command in a directory where a .nvmrc is present exits with exit code 3
// https://github.com/nvm-sh/nvm/issues/1985
// TODO [good-to-have]: Auto-read and install from .nvmrc file, if present
commands := []string{
"source /home/nucleus/.nvm/nvm.sh",
fmt.Sprintf("nvm install %s", nodeVersion),
}
n.logger.Infof("Using user-defined node version: %v", nodeVersion)
err := n.ExecutionManager.ExecuteInternalCommands(ctx, core.InstallNodeVer, commands, "", nil, nil)
if err != nil {
n.logger.Errorf("Unable to install user-defined nodeversion %v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
origPath := os.Getenv("PATH")
os.Setenv("PATH", fmt.Sprintf("/home/nucleus/.nvm/versions/node/v%s/bin:%s", nodeVersion, origPath))
return nil
}
================================================
FILE: pkg/driver/builder_test.go
================================================
package driver
import (
"fmt"
"testing"
)
func Test_driver(t *testing.T) {
b := Builder{}
invalidVersion := 4
_, err := b.GetDriver(invalidVersion, "")
wantErr := fmt.Sprintf("invalid version ( %d ) mentioned in yml file", invalidVersion)
if err == nil || err.Error() != wantErr {
t.Errorf("want %s, got %v", wantErr, err)
}
}
================================================
FILE: pkg/driver/driver_v1.go
================================================
/*
This file implements core.Driver with operation over TAS config (YAML) version 1
*/
package driver
import (
"context"
"errors"
"fmt"
"os"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/logwriter"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/utils"
"golang.org/x/sync/errgroup"
)
const languageJs = "javascript"
type (
driverV1 struct {
logger lumber.Logger
nodeInstaller NodeInstaller
TestExecutionService core.TestExecutionService
TestDiscoveryService core.TestDiscoveryService
AzureClient core.AzureClient
BlockTestService core.BlockTestService
ExecutionManager core.ExecutionManager
TASConfigManager core.TASConfigManager
CacheStore core.CacheStore
DiffManager core.DiffManager
ListSubModuleService core.ListSubModuleService
TASVersion int
TASFilePath string
}
setUpResultV1 struct {
diffExists bool
diff map[string]int
cacheKey string
}
)
func (d *driverV1) RunDiscovery(ctx context.Context, payload *core.Payload,
taskPayload *core.TaskPayload, oauth *core.Oauth, coverageDir string, secretMap map[string]string) error {
tas, err := d.TASConfigManager.LoadAndValidate(ctx, d.TASVersion, d.TASFilePath, payload.EventType, payload.LicenseTier, d.TASFilePath)
if err != nil {
d.logger.Errorf("Unable to load tas yaml file, error: %v", err)
err = &errs.StatusFailed{Remark: err.Error()}
return err
}
tasConfig := tas.(*core.TASConfig)
language := global.FrameworkLanguageMap[tasConfig.Framework]
setupResults, err := d.setUp(ctx, payload, tasConfig, oauth, language)
if err != nil {
d.logger.Errorf("Error while doing common operations, error: %v", err)
return err
}
if postErr := d.ListSubModuleService.Send(ctx, payload.BuildID, 1); postErr != nil {
return postErr
}
if tasConfig.Prerun != nil {
d.logger.Infof("Running pre-run steps for top module")
azureLogWriter := logwriter.NewAzureLogWriter(d.AzureClient, core.PurposePreRunLogs, d.logger)
err = d.ExecutionManager.ExecuteUserCommands(ctx, core.PreRun, payload, tasConfig.Prerun, secretMap, azureLogWriter, global.RepoDir)
if err != nil {
d.logger.Errorf("Unable to run pre-run steps %v", err)
err = &errs.StatusFailed{Remark: "Failed in running pre-run steps"}
return err
}
}
err = d.ExecutionManager.ExecuteInternalCommands(ctx, core.InstallRunners, global.InstallRunnerCmds, global.RepoDir, nil, nil)
if err != nil {
d.logger.Errorf("Unable to install custom runners %v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
d.logger.Debugf("Caching workspace")
if err = d.CacheStore.CacheWorkspace(ctx, ""); err != nil {
d.logger.Errorf("Error caching workspace: %+v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
args := d.buildDiscoveryArgs(payload, tasConfig, secretMap, setupResults.diffExists, setupResults.diff)
discoveryResult, err := d.TestDiscoveryService.Discover(ctx, &args)
if err != nil {
d.logger.Errorf("Unable to perform test discovery: %+v", err)
err = &errs.StatusFailed{Remark: "Failed in discovering tests"}
return err
}
populateDiscovery(discoveryResult, tasConfig)
if err = d.TestDiscoveryService.SendResult(ctx, discoveryResult); err != nil {
d.logger.Errorf("error while sending discovery API call , error %v", err)
return err
}
if language == languageJs {
if err = d.CacheStore.Upload(ctx, setupResults.cacheKey, tasConfig.Cache.Paths...); err != nil {
d.logger.Errorf("Unable to upload cache: %v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
}
taskPayload.Status = core.Passed
d.logger.Debugf("Cache uploaded successfully")
return nil
}
func (d *driverV1) RunExecution(ctx context.Context, payload *core.Payload,
taskPayload *core.TaskPayload, oauth *core.Oauth, coverageDir string, secretMap map[string]string) error {
tas, err := d.TASConfigManager.LoadAndValidate(ctx, 1, d.TASFilePath, payload.EventType, payload.LicenseTier, d.TASFilePath)
if err != nil {
d.logger.Errorf("Unable to load tas yaml file, error: %v", err)
err = &errs.StatusFailed{Remark: err.Error()}
return err
}
tasConfig := tas.(*core.TASConfig)
if cachErr := d.setCache(tasConfig); cachErr != nil {
return cachErr
}
if errG := d.BlockTestService.GetBlockTests(ctx, tasConfig.Blocklist, payload.BranchName); errG != nil {
d.logger.Errorf("Unable to fetch blocklisted tests: %v", errG)
errG = errs.New(errs.GenericErrRemark.Error())
return errG
}
buildArgs := d.buildTestExecutionArgs(payload, tasConfig, secretMap, coverageDir)
executionResults, err := d.TestExecutionService.Run(ctx, &buildArgs)
if err != nil {
d.logger.Errorf("Unable to perform test execution: %v", err)
err = &errs.StatusFailed{Remark: "Failed in executing tests."}
if executionResults == nil {
return err
}
}
resp, err := d.TestExecutionService.SendResults(ctx, executionResults)
if err != nil {
d.logger.Errorf("error while sending test reports %v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
taskPayload.Status = resp.TaskStatus
logWriter := logwriter.NewAzureLogWriter(d.AzureClient, core.PurposePostRunLogs, d.logger)
if tasConfig.Postrun != nil {
d.logger.Infof("Running post-run steps")
err = d.ExecutionManager.ExecuteUserCommands(ctx, core.PostRun, payload, tasConfig.Postrun, secretMap, logWriter, global.RepoDir)
if err != nil {
d.logger.Errorf("Unable to run post-run steps %v", err)
err = &errs.StatusFailed{Remark: "Failed in running post-run steps."}
return err
}
}
return nil
}
func (d *driverV1) setUp(ctx context.Context, payload *core.Payload,
tasConfig *core.TASConfig, oauth *core.Oauth, language string) (*setUpResultV1, error) {
d.logger.Infof("Tas yaml: %+v", tasConfig)
if err := d.setCache(tasConfig); err != nil {
return nil, err
}
cacheKey := ""
if language == languageJs {
cacheKey = tasConfig.Cache.Key
}
os.Setenv("REPO_CACHE_DIR", global.RepoCacheDir)
if tasConfig.NodeVersion != "" && language == languageJs {
nodeVersion := tasConfig.NodeVersion
if nodeErr := d.nodeInstaller.InstallNodeVersion(ctx, nodeVersion); nodeErr != nil {
return nil, nodeErr
}
}
blYml := tasConfig.Blocklist
if errG := d.BlockTestService.GetBlockTests(ctx, blYml, payload.BranchName); errG != nil {
d.logger.Errorf("Unable to fetch blocklisted tests: %v", errG)
errG = errs.New(errs.GenericErrRemark.Error())
return nil, errG
}
g, errCtx := errgroup.WithContext(ctx)
if language == languageJs {
g.Go(func() error {
if errG := d.CacheStore.Download(errCtx, cacheKey); errG != nil {
d.logger.Errorf("Unable to download cache: %v", errG)
errG = errs.New(errs.GenericErrRemark.Error())
return errG
}
return nil
})
}
d.logger.Infof("Identifying changed files ...")
diffExists := true
diff := map[string]int{}
g.Go(func() error {
diffC, errG := d.DiffManager.GetChangedFiles(errCtx, payload, oauth)
if errG != nil {
if errors.Is(errG, errs.ErrGitDiffNotFound) {
diffExists = false
} else {
d.logger.Errorf("Unable to identify changed files %s", errG)
errG = errs.New("Error occurred in fetching diff from GitHub")
return errG
}
}
diff = diffC
return nil
})
if err := g.Wait(); err != nil {
return nil, err
}
return &setUpResultV1{
diffExists: diffExists,
diff: diff,
cacheKey: cacheKey,
}, nil
}
func (d *driverV1) buildDiscoveryArgs(payload *core.Payload, tasConfig *core.TASConfig,
secretMap map[string]string,
diffExists bool,
diff map[string]int) core.DiscoveyArgs {
testPattern, envMap := d.getEnvAndPattern(payload, tasConfig)
return core.DiscoveyArgs{
TestPattern: testPattern,
Payload: payload,
EnvMap: envMap,
SecretData: secretMap,
TestConfigFile: tasConfig.ConfigFile,
FrameWork: tasConfig.Framework,
SmartRun: tasConfig.SmartRun,
Diff: diff,
DiffExists: diffExists,
FrameWorkVersion: tasConfig.FrameworkVersion,
CWD: global.RepoDir,
}
}
func (d *driverV1) buildTestExecutionArgs(payload *core.Payload, tasConfig *core.TASConfig,
secretMap map[string]string,
coverageDir string) core.TestExecutionArgs {
testPattern, envMap := d.getEnvAndPattern(payload, tasConfig)
logWriter := logwriter.NewAzureLogWriter(d.AzureClient, core.PurposeExecutionLogs, d.logger)
return core.TestExecutionArgs{
Payload: payload,
CoverageDir: coverageDir,
LogWriterStrategy: logWriter,
TestPattern: testPattern,
EnvMap: envMap,
TestConfigFile: tasConfig.ConfigFile,
FrameWork: tasConfig.Framework,
SecretData: secretMap,
FrameWorkVersion: tasConfig.FrameworkVersion,
CWD: global.RepoDir,
}
}
func (d *driverV1) getEnvAndPattern(payload *core.Payload, tasConfig *core.TASConfig) (target []string, envMap map[string]string) {
if payload.EventType == core.EventPullRequest {
return tasConfig.Premerge.Patterns, tasConfig.Premerge.EnvMap
}
return tasConfig.Postmerge.Patterns, tasConfig.Postmerge.EnvMap
}
func populateDiscovery(testDiscoveryResult *core.DiscoveryResult, tasConfig *core.TASConfig) {
testDiscoveryResult.Parallelism = tasConfig.Parallelism
testDiscoveryResult.SplitMode = tasConfig.SplitMode
}
func (d *driverV1) setCache(tasConfig *core.TASConfig) error {
language := global.FrameworkLanguageMap[tasConfig.Framework]
if tasConfig.Cache == nil && language == languageJs {
checksum, err := utils.ComputeChecksum(fmt.Sprintf("%s/%s", global.RepoDir, global.PackageJSON))
if err != nil {
d.logger.Errorf("Error while computing checksum, error %v", err)
return err
}
tasConfig.Cache = &core.Cache{
Key: checksum,
Paths: []string{},
}
}
return nil
}
================================================
FILE: pkg/driver/driver_v2.go
================================================
/*
This file implements core.Driver with operation over TAS config (YAML) version 2
*/
package driver
import (
"bytes"
"context"
"errors"
"fmt"
"os"
"path"
"strings"
"sync"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/logwriter"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/utils"
"golang.org/x/sync/errgroup"
)
const preRunLog = "Running Pre Run on Top level"
type (
driverV2 struct {
logger lumber.Logger
TestExecutionService core.TestExecutionService
AzureClient core.AzureClient
BlockTestService core.BlockTestService
ExecutionManager core.ExecutionManager
TASConfigManager core.TASConfigManager
CacheStore core.CacheStore
DiffManager core.DiffManager
ListSubModuleService core.ListSubModuleService
nodeInstaller NodeInstaller
TestDiscoveryService core.TestDiscoveryService
TASVersion int
TASFilePath string
}
setUpResultV2 struct {
diffExists bool
diff map[string]int
cacheKey string
}
)
func (d *driverV2) RunDiscovery(ctx context.Context, payload *core.Payload,
taskPayload *core.TaskPayload, oauth *core.Oauth, coverageDir string, secretMap map[string]string) error {
d.logger.Debugf("Running in %d version", d.TASVersion)
tas, err := d.TASConfigManager.LoadAndValidate(ctx, d.TASVersion, d.TASFilePath, payload.EventType, payload.LicenseTier, d.TASFilePath)
if err != nil {
d.logger.Errorf("Unable to load tas yaml file, error: %v", err)
err = &errs.StatusFailed{Remark: err.Error()}
return err
}
tasConfig := tas.(*core.TASConfigV2)
taskPayload.Status = core.Passed
setUpResult, err := d.setUpDiscovery(ctx, payload, tasConfig, oauth)
if err != nil {
return err
}
mainBuffer := new(bytes.Buffer)
azureLogWriter := logwriter.NewAzureLogWriter(d.AzureClient, core.PurposePreRunLogs, d.logger)
defer func() {
if writeErr := <-azureLogWriter.Write(ctx, mainBuffer); writeErr != nil {
// error in writing log should not fail the build
d.logger.Errorf("error in writing pre run log, error %v", writeErr)
}
}()
if payload.EventType == core.EventPush {
if discoveryErr := d.runDiscoveryHelper(ctx, tasConfig.PostMerge.PreRun,
tasConfig.PostMerge.SubModules, payload, tasConfig,
taskPayload, setUpResult.diff, setUpResult.diffExists, mainBuffer, secretMap); discoveryErr != nil {
return discoveryErr
}
} else {
if discoveryErr := d.runDiscoveryHelper(ctx, tasConfig.PreMerge.PreRun, tasConfig.PreMerge.SubModules,
payload, tasConfig, taskPayload, setUpResult.diff, setUpResult.diffExists, mainBuffer, secretMap); discoveryErr != nil {
return discoveryErr
}
}
if err = d.CacheStore.Upload(ctx, setUpResult.cacheKey, tasConfig.Cache.Paths...); err != nil {
// cache upload failure should not fail the task
d.logger.Errorf("Unable to upload cache: %v", err)
}
d.logger.Debugf("Cache uploaded successfully")
return nil
}
func (d *driverV2) RunExecution(ctx context.Context, payload *core.Payload,
taskPayload *core.TaskPayload, oauth *core.Oauth, coverageDir string, secretMap map[string]string) error {
tas, err := d.TASConfigManager.LoadAndValidate(ctx, d.TASVersion, d.TASFilePath, payload.EventType, payload.LicenseTier, d.TASFilePath)
if err != nil {
d.logger.Errorf("Unable to load tas yaml file, error: %v", err)
err = &errs.StatusFailed{Remark: err.Error()}
return err
}
subModuleName := os.Getenv(global.SubModuleName)
tasConfig := tas.(*core.TASConfigV2)
if cacheErr := d.setCache(tasConfig); cacheErr != nil {
return cacheErr
}
subModule, err := d.findSubmodule(tasConfig, payload, subModuleName)
if err != nil {
d.logger.Errorf("Error finding sub module %s in tas config file", subModuleName)
return err
}
// Get blocklist data before execution
blYML := subModule.Blocklist
if err = d.BlockTestService.GetBlockTests(ctx, blYML, payload.BranchName); err != nil {
d.logger.Errorf("Unable to fetch blocklisted tests: %v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
modulePath := path.Join(global.RepoDir, subModule.Path)
// PRE RUN steps should be run only if RunPrerunEveryTime is set to true
if subModule.Prerun != nil && subModule.RunPrerunEveryTime {
if preErr := d.runPreRunBeforeTestExecution(ctx, tasConfig, subModule, payload, secretMap, modulePath); preErr != nil {
return preErr
}
}
args := d.buildTestExecutionArgs(payload, tasConfig, subModule, secretMap, coverageDir)
testResult, err := d.TestExecutionService.Run(ctx, &args)
if err != nil {
return err
}
resp, err := d.TestExecutionService.SendResults(ctx, testResult)
if err != nil {
d.logger.Errorf("error while sending test reports %v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
taskPayload.Status = resp.TaskStatus
if subModule.Postrun != nil {
d.logger.Infof("Running post-run steps")
azureLogwriter := logwriter.NewAzureLogWriter(d.AzureClient, core.PurposePostRunLogs, d.logger)
err = d.ExecutionManager.ExecuteUserCommands(ctx, core.PostRun, payload, subModule.Postrun, secretMap, azureLogwriter, modulePath)
if err != nil {
d.logger.Errorf("Unable to run post-run steps %v", err)
err = &errs.StatusFailed{Remark: "Failed in running post-run steps."}
return err
}
}
return nil
}
func (d *driverV2) runPreRunBeforeTestExecution(ctx context.Context,
tasConfig *core.TASConfigV2,
subModule *core.SubModule,
payload *core.Payload,
secretMap map[string]string,
modulePath string) error {
if tasConfig.NodeVersion != "" {
// install node version before preRuns
if err := d.nodeInstaller.InstallNodeVersion(ctx, tasConfig.NodeVersion); err != nil {
d.logger.Debugf("error while installing node version %s, error %v", tasConfig.NodeVersion, err)
return err
}
}
d.logger.Infof("Running pre-run steps for submodule %s", subModule.Name)
azureLogwriter := logwriter.NewAzureLogWriter(d.AzureClient, core.PurposePreRunLogs, d.logger)
err := d.ExecutionManager.ExecuteUserCommands(ctx, core.PreRun, payload, subModule.Prerun, secretMap, azureLogwriter, modulePath)
if err != nil {
d.logger.Errorf("Unable to run pre-run steps %v", err)
err = &errs.StatusFailed{Remark: "Failed in running pre-run steps"}
return err
}
d.logger.Debugf("installing runners at path %s", modulePath)
if err = d.ExecutionManager.ExecuteInternalCommands(ctx, core.InstallRunners, global.InstallRunnerCmds,
modulePath, nil, nil); err != nil {
d.logger.Errorf("Unable to install custom runners %v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
return nil
}
func (d *driverV2) runDiscoveryHelper(ctx context.Context,
topPreRun *core.Run,
subModuleList []core.SubModule,
payload *core.Payload,
tasConfig *core.TASConfigV2,
taskPayload *core.TaskPayload,
diff map[string]int,
diffExists bool,
mainBuffer *bytes.Buffer,
secretMap map[string]string) error {
totalSubmoduleCount := len(subModuleList)
if apiErr := d.ListSubModuleService.Send(ctx, payload.BuildID, totalSubmoduleCount); apiErr != nil {
return apiErr
}
if tasConfig.NodeVersion != "" {
if err := d.nodeInstaller.InstallNodeVersion(ctx, tasConfig.NodeVersion); err != nil {
return err
}
}
if err := d.runPreRunCommand(ctx, topPreRun, mainBuffer, payload, secretMap, taskPayload, subModuleList); err != nil {
return err
}
d.logger.Debugf("Caching workspace")
// TODO: this will change after we move to parallel pod execution
if err := d.CacheStore.CacheWorkspace(ctx, ""); err != nil {
d.logger.Errorf("Error caching workspace: %+v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
errChannelDiscovery := make(chan error, totalSubmoduleCount)
discoveryWaitGroup := sync.WaitGroup{}
for i := 0; i < totalSubmoduleCount; i++ {
discoveryWaitGroup.Add(1)
go func(subModule *core.SubModule) {
defer discoveryWaitGroup.Done()
err := d.runDiscoveryForEachSubModule(ctx, payload, subModule, tasConfig, diff, diffExists, secretMap)
errChannelDiscovery <- err
}(&subModuleList[i])
}
discoveryWaitGroup.Wait()
for i := 0; i < totalSubmoduleCount; i++ {
e := <-errChannelDiscovery
if e != nil {
return e
}
}
return nil
}
func (d *driverV2) runPreRunCommand(ctx context.Context,
topPreRun *core.Run,
mainBuffer *bytes.Buffer, payload *core.Payload,
secretMap map[string]string, taskPayload *core.TaskPayload,
subModuleList []core.SubModule) error {
totalSubmoduleCount := len(subModuleList)
errChannelPreRun := make(chan error, totalSubmoduleCount)
preRunWaitGroup := sync.WaitGroup{}
if topPreRun != nil {
d.logger.Debugf("Running Pre Run on top level")
if _, err := mainBuffer.WriteString(preRunLog); err != nil {
return err
}
bufferWriter := logwriter.NewBufferLogWriter("TOP-LEVEL", mainBuffer, d.logger)
if err := d.ExecutionManager.ExecuteUserCommands(ctx, core.PreRun, payload,
topPreRun, secretMap, bufferWriter, global.RepoDir); err != nil {
d.logger.Errorf("Error occurred while running top-level PreRun, err %v", err)
return err
}
}
bufferList := []*bytes.Buffer{}
d.logger.Debugf("pre run on top level ended")
for i := 0; i < totalSubmoduleCount; i++ {
preRunWaitGroup.Add(1)
newBuffer := new(bytes.Buffer)
bufferList = append(bufferList, newBuffer)
go func(subModule *core.SubModule) {
defer preRunWaitGroup.Done()
bufferWriterSubmodule := logwriter.NewBufferLogWriter(subModule.Name, newBuffer, d.logger)
preRunErr := d.runPreRunForEachSubModule(ctx, payload, subModule, secretMap, bufferWriterSubmodule)
if preRunErr != nil {
taskPayload.Status = core.Error
d.logger.Errorf("error while running pre-run for sub module %s, error %v", subModule.Name, preRunErr)
}
errChannelPreRun <- preRunErr
}(&subModuleList[i])
}
preRunWaitGroup.Wait()
for i := 0; i < totalSubmoduleCount; i++ {
mainBuffer.WriteString(bufferList[i].String())
}
for i := 0; i < totalSubmoduleCount; i++ {
e := <-errChannelPreRun
if e != nil {
d.logger.Debugf("pre run failed with error %v", e)
return e
}
}
return nil
}
func (d *driverV2) runDiscoveryForEachSubModule(ctx context.Context,
payload *core.Payload,
subModule *core.SubModule,
tasConfig *core.TASConfigV2,
diff map[string]int,
diffExists bool,
secretMap map[string]string) error {
args := d.buildDiscoveryArgs(payload, tasConfig, subModule, secretMap, diffExists, diff)
discoveryResult, err := d.TestDiscoveryService.Discover(ctx, &args)
if err != nil {
d.logger.Errorf("Unable to perform test discovery: %+v", err)
err = &errs.StatusFailed{Remark: "Failed in discovering tests"}
return err
}
populateTestDiscoveryV2(discoveryResult, subModule, tasConfig)
if err := d.TestDiscoveryService.SendResult(ctx, discoveryResult); err != nil {
return err
}
return nil
}
func (d *driverV2) runPreRunForEachSubModule(ctx context.Context,
payload *core.Payload,
subModule *core.SubModule,
secretMap map[string]string,
bufferWriterSubmodule core.LogWriterStrategy) error {
d.logger.Debugf("Running pre-run for sub module %s", subModule.Name)
blYML := subModule.Blocklist
if err := d.BlockTestService.GetBlockTests(ctx, blYML, payload.BranchName); err != nil {
d.logger.Errorf("Unable to fetch blocklisted tests: %v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
modulePath := path.Join(global.RepoDir, subModule.Path)
// PRE RUN steps
if subModule.Prerun != nil {
d.logger.Infof("Running pre-run steps for submodule %s", subModule.Name)
err := d.ExecutionManager.ExecuteUserCommands(ctx, core.PreRun, payload, subModule.Prerun,
secretMap, bufferWriterSubmodule, modulePath)
if err != nil {
d.logger.Errorf("Unable to run pre-run steps %v", err)
err = &errs.StatusFailed{Remark: "Failed in running pre-run steps"}
return err
}
d.logger.Debugf("pre-run steps completed for submodule %s", subModule.Name)
}
}
err := d.ExecutionManager.ExecuteInternalCommands(ctx, core.InstallRunners, global.InstallRunnerCmds, modulePath, nil, nil)
if err != nil {
d.logger.Errorf("Unable to install custom runners %v", err)
err = errs.New(errs.GenericErrRemark.Error())
return err
}
return nil
}
func (d *driverV2) setUpDiscovery(ctx context.Context,
payload *core.Payload,
tasConfig *core.TASConfigV2,
oauth *core.Oauth) (*setUpResultV2, error) {
if err := d.setCache(tasConfig); err != nil {
return nil, err
}
cacheKey := tasConfig.Cache.Key
g, errCtx := errgroup.WithContext(ctx)
g.Go(func() error {
if errG := d.CacheStore.Download(errCtx, cacheKey); errG != nil {
d.logger.Errorf("Unable to download cache: %v", errG)
errG = errs.New(errs.GenericErrRemark.Error())
return errG
}
return nil
})
diffExists := true
diff := map[string]int{}
g.Go(func() error {
diffC, errG := d.DiffManager.GetChangedFiles(errCtx, payload, oauth)
if errG != nil {
if errors.Is(errG, errs.ErrGitDiffNotFound) {
diffExists = false
} else {
d.logger.Errorf("Unable to identify changed files %s", errG)
errG = errs.New("Error occurred in fetching diff from GitHub")
return errG
}
}
diff = diffC
return nil
})
err := g.Wait()
if err != nil {
return nil, err
}
return &setUpResultV2{
cacheKey: cacheKey,
diffExists: diffExists,
diff: diff,
}, nil
}
func (d *driverV2) buildDiscoveryArgs(payload *core.Payload, tasConfig *core.TASConfigV2,
subModule *core.SubModule,
secretMap map[string]string,
diffExists bool,
diff map[string]int) core.DiscoveyArgs {
testPattern := subModule.Patterns
envMap := getEnv(payload, tasConfig, subModule)
modulePath := path.Join(global.RepoDir, subModule.Path)
return core.DiscoveyArgs{
TestPattern: testPattern,
Payload: payload,
EnvMap: envMap,
SecretData: secretMap,
TestConfigFile: subModule.ConfigFile,
FrameWork: subModule.Framework,
SmartRun: tasConfig.SmartRun,
Diff: GetSubmoduleBasedDiff(diff, subModule.Path),
DiffExists: diffExists,
CWD: modulePath,
}
}
func getEnv(payload *core.Payload, tasConfig *core.TASConfigV2, subModule *core.SubModule) map[string]string {
var envMap map[string]string
if payload.EventType == core.EventPullRequest {
envMap = tasConfig.PreMerge.EnvMap
} else {
envMap = tasConfig.PostMerge.EnvMap
}
if envMap == nil {
envMap = map[string]string{}
}
// overwrite the existing env with more specific one
if subModule.Prerun != nil && subModule.Prerun.EnvMap != nil {
for k, v := range subModule.Prerun.EnvMap {
envMap[k] = v
}
}
if path.Join(global.RepoDir, subModule.Path) == global.RepoDir {
envMap[global.ModulePath] = ""
} else {
envMap[global.ModulePath] = subModule.Path
}
return envMap
}
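The precedence implemented by getEnv above (event-level env map taken first, submodule pre-run entries overwriting matching keys) can be sketched standalone; `mergeEnv` is an illustrative name, not part of the codebase:

```go
package main

import "fmt"

// mergeEnv replicates the override order used by getEnv: start with the
// event-level (pre-merge or post-merge) env map, then let the more specific
// submodule pre-run env overwrite any matching keys.
func mergeEnv(eventEnv, preRunEnv map[string]string) map[string]string {
	merged := map[string]string{}
	for k, v := range eventEnv {
		merged[k] = v
	}
	for k, v := range preRunEnv {
		merged[k] = v // submodule value wins over event-level value
	}
	return merged
}

func main() {
	out := mergeEnv(
		map[string]string{"NODE_ENV": "test", "CI": "true"},
		map[string]string{"NODE_ENV": "ci"},
	)
	fmt.Println(out["NODE_ENV"], out["CI"]) // ci true
}
```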
func populateTestDiscoveryV2(testDiscoveryResult *core.DiscoveryResult, subModule *core.SubModule, tasConfig *core.TASConfigV2) {
testDiscoveryResult.Parallelism = subModule.Parallelism
testDiscoveryResult.SplitMode = tasConfig.SplitMode
testDiscoveryResult.SubModule = subModule.Name
}
func (d *driverV2) findSubmodule(tasConfig *core.TASConfigV2, payload *core.Payload, subModuleName string) (*core.SubModule, error) {
if payload.EventType == core.EventPullRequest {
for i := 0; i < len(tasConfig.PreMerge.SubModules); i++ {
if tasConfig.PreMerge.SubModules[i].Name == subModuleName {
return &tasConfig.PreMerge.SubModules[i], nil
}
}
} else {
for i := 0; i < len(tasConfig.PostMerge.SubModules); i++ {
if tasConfig.PostMerge.SubModules[i].Name == subModuleName {
return &tasConfig.PostMerge.SubModules[i], nil
}
}
}
return nil, errs.ErrSubModuleNotFound
}
func (d *driverV2) buildTestExecutionArgs(payload *core.Payload,
tasConfig *core.TASConfigV2,
subModule *core.SubModule,
secretMap map[string]string,
coverageDir string) core.TestExecutionArgs {
target := subModule.Patterns
envMap := getEnv(payload, tasConfig, subModule)
modulePath := path.Join(global.RepoDir, subModule.Path)
azureLogWriter := logwriter.NewAzureLogWriter(d.AzureClient, core.PurposeExecutionLogs, d.logger)
return core.TestExecutionArgs{
Payload: payload,
CoverageDir: coverageDir,
LogWriterStrategy: azureLogWriter,
TestPattern: target,
EnvMap: envMap,
TestConfigFile: subModule.ConfigFile,
FrameWork: subModule.Framework,
SecretData: secretMap,
CWD: modulePath,
}
}
func GetSubmoduleBasedDiff(diff map[string]int, subModulePath string) map[string]int {
newDiff := map[string]int{}
subModulePath = strings.TrimPrefix(subModulePath, "./")
if !strings.HasSuffix(subModulePath, "/") {
subModulePath += "/"
}
for file, value := range diff {
filePath := strings.TrimPrefix(file, subModulePath)
newDiff[filePath] = value
}
return newDiff
}
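A standalone replica of the path normalization in GetSubmoduleBasedDiff, handy for experimenting outside the package (`submoduleDiff` is a hypothetical name):

```go
package main

import (
	"fmt"
	"strings"
)

// submoduleDiff mirrors GetSubmoduleBasedDiff: normalize the submodule path
// ("./pkg/a", "pkg/a/", ...) to the canonical "pkg/a/" form, then strip that
// prefix from each changed-file path so paths outside the submodule pass
// through unchanged while paths inside become relative to the submodule root.
func submoduleDiff(diff map[string]int, subModulePath string) map[string]int {
	subModulePath = strings.TrimPrefix(subModulePath, "./")
	if !strings.HasSuffix(subModulePath, "/") {
		subModulePath += "/"
	}
	out := map[string]int{}
	for file, v := range diff {
		out[strings.TrimPrefix(file, subModulePath)] = v
	}
	return out
}

func main() {
	d := submoduleDiff(map[string]int{"pkg/a/x.js": 1, "pkg/b/y.js": 2}, "./pkg/a")
	fmt.Println(d["x.js"], d["pkg/b/y.js"]) // 1 2
}
```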
func (d *driverV2) setCache(tasConfig *core.TASConfigV2) error {
if tasConfig.Cache == nil {
checksum, err := utils.ComputeChecksum(fmt.Sprintf("%s/%s", global.RepoDir, global.PackageJSON))
if err != nil {
d.logger.Errorf("Error while computing checksum, error %v", err)
return err
}
tasConfig.Cache = &core.Cache{
Key: checksum,
Paths: []string{},
}
}
return nil
}
================================================
FILE: pkg/driver/driver_v2_test.go
================================================
package driver
import (
"reflect"
"testing"
)
type testArgs struct {
name string
subModulePath string
diffMap map[string]int
wantDiffMap map[string]int
}
func TestGetSubmoduleBasedDiff(t *testing.T) {
tests :=
[]testArgs{
{
name: "test with subModule package included in diff 1",
subModulePath: "./package/subModule-1",
diffMap: map[string]int{
"package/subModule-1/test/testFile1.js": 1,
"package/subModule-1/test/testFile2.js": 2,
"package/subModule-2/test/testFile1.js": 3,
"package/subModule-2/test/testFile2.js": 4,
},
wantDiffMap: map[string]int{
"test/testFile1.js": 1,
"test/testFile2.js": 2,
"package/subModule-2/test/testFile1.js": 3,
"package/subModule-2/test/testFile2.js": 4,
},
},
{
name: "test with subModule package included in diff 2",
subModulePath: "package/subModule-1",
diffMap: map[string]int{
"package/subModule-1/test/testFile1.js": 1,
"package/subModule-1/test/testFile2.js": 2,
"package/subModule-2/test/testFile1.js": 3,
"package/subModule-2/test/testFile2.js": 4,
},
wantDiffMap: map[string]int{
"test/testFile1.js": 1,
"test/testFile2.js": 2,
"package/subModule-2/test/testFile1.js": 3,
"package/subModule-2/test/testFile2.js": 4,
},
},
{
name: "test with subModule package included in diff 3",
subModulePath: "./package/subModule-1/",
diffMap: map[string]int{
"package/subModule-1/test/testFile1.js": 1,
"package/subModule-1/test/testFile2.js": 2,
"package/subModule-2/test/testFile1.js": 3,
"package/subModule-2/test/testFile2.js": 4,
},
wantDiffMap: map[string]int{
"test/testFile1.js": 1,
"test/testFile2.js": 2,
"package/subModule-2/test/testFile1.js": 3,
"package/subModule-2/test/testFile2.js": 4,
},
},
{
name: "test with subModule package included in diff 4",
subModulePath: "package/subModule-1/",
diffMap: map[string]int{
"package/subModule-1/test/testFile1.js": 1,
"package/subModule-1/test/testFile2.js": 2,
"package/subModule-2/test/testFile1.js": 3,
"package/subModule-2/test/testFile2.js": 4,
},
wantDiffMap: map[string]int{
"test/testFile1.js": 1,
"test/testFile2.js": 2,
"package/subModule-2/test/testFile1.js": 3,
"package/subModule-2/test/testFile2.js": 4,
},
},
{
name: "test with subModule package not included in diff ",
subModulePath: "package/subModule-1/",
diffMap: map[string]int{
"package/subModule-2/test/testFile1.js": 1,
"package/subModule-2/test/testFile2.js": 2,
"package/subModule-2/test/testFile3.js": 3,
"package/subModule-2/test/testFile4.js": 4,
},
wantDiffMap: map[string]int{
"package/subModule-2/test/testFile1.js": 1,
"package/subModule-2/test/testFile2.js": 2,
"package/subModule-2/test/testFile3.js": 3,
"package/subModule-2/test/testFile4.js": 4,
},
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
actualMap := GetSubmoduleBasedDiff(test.diffMap, test.subModulePath)
if !reflect.DeepEqual(actualMap, test.wantDiffMap) {
t.Errorf("not equal, wanted %+v, got %+v", test.wantDiffMap, actualMap)
}
})
}
}
================================================
FILE: pkg/errs/nucleus.go
================================================
package errs
import (
"fmt"
)
// Err represents structure of a custom error
type Err struct {
Code string
Message string
URL string
}
func (e Err) Error() string {
return fmt.Sprintf("%s : %s ", e.Code, e.Message)
}
// Error represents a json-encoded API error.
type Error struct {
Message string `json:"message"`
}
func (e *Error) Error() string {
return e.Message
}
// New returns a new error message.
func New(text string) error {
return &Error{Message: text}
}
// ErrInvalidPayload returns an error when the nucleus payload is invalid.
func ErrInvalidPayload(errMsg string) error {
return New(errMsg)
}
// ErrSecretNotFound represents the error when a secret is not found in map.
func ErrSecretNotFound(secret string) error {
return New(fmt.Sprintf("secret with name %s not found", secret))
}
var (
// ErrParseVariableName represents the error when unable to parse a
// variable name within a substitution.
ErrParseVariableName = New("unable to parse variable name")
// ErrSecretRegexMatch represents the error when a regex does not match.
ErrSecretRegexMatch = New("secret regex match failed")
// ErrNotFound is returned when an azure blob is not found.
ErrNotFound = New("blob not found")
// ErrSASToken is returned when the SAS token is not found.
ErrSASToken = New("azure client requires SAS Token")
// ErrAzureCredentials is returned when the azure credentials are invalid.
ErrAzureCredentials = New("azure client requires credentials")
// ErrAPIStatus is returned when the api status is not 200.
ErrAPIStatus = New("non OK status")
// ErrInvalidLoggerInstance is returned when logger instance is not supported.
ErrInvalidLoggerInstance = New("Invalid logger instance")
// ErrUnsupportedGitProvider is returned when trying to integrate an unsupported git provider repo
ErrUnsupportedGitProvider = New("unsupported gitprovider")
// ErrGitDiffNotFound is returned when the base commit is null or the git provider returns an empty diff
ErrGitDiffNotFound = New("diff not found")
// GenericErrRemark returns a generic error message for user facing errors.
GenericErrRemark = New("Unexpected error")
// ErrMarshalJSON is returned when JSON marshal fails
ErrMarshalJSON = New("JSON marshal failed")
// ErrUnMarshalJSON is returned when JSON unmarshal fails
ErrUnMarshalJSON = New("JSON unmarshal failed")
// ErrMissingAccessToken is returned when Oauth token is missing
ErrMissingAccessToken = New("Missing OAuth access token. Please add an OAuth token")
// ErrSubModuleNotFound is returned when the submodule is not present in the yml
ErrSubModuleNotFound = New("Submodule not found in tas config file")
)
type StatusFailed struct {
Remark string
}
func (e *StatusFailed) Error() string {
return e.Remark
}
// ErrInvalidConf represents field validation failures of TAS configuration
type ErrInvalidConf struct {
Message string
Fields []string
Values []interface{}
}
func (e ErrInvalidConf) Error() string {
errMsg := e.Message
for idx, field := range e.Fields {
errMsg += fmt.Sprintf("%s: %v\n", field, e.Values[idx])
}
return errMsg
}
================================================
FILE: pkg/errs/nucleus_test.go
================================================
package errs
import (
"testing"
)
func TestError_Error(t *testing.T) {
e := New("A secret message")
got := e.Error()
want := "A secret message"
if got != want {
t.Errorf("Received: %v, Expected: %v", got, want)
}
}
func TestErr_Error(t *testing.T) {
e := &Err{
Code: "fmt.Print(error)",
Message: "This is the message",
}
got := e.Error()
want := "fmt.Print(error) : This is the message "
if got != want {
t.Errorf("Received: %v, Expected: %v", got, want)
}
}
func Test_ErrInvalidPayload(t *testing.T) {
got := ErrInvalidPayload("Error for invalid nucleus payload")
want := "Error for invalid nucleus payload"
if got.Error() != want {
t.Errorf("Received: %v, Expected: %v", got, want)
}
}
func TestErrSecretNotFound(t *testing.T) {
got := ErrSecretNotFound("SECRET_STRING")
want := "secret with name SECRET_STRING not found"
if got.Error() != want {
t.Errorf("Received: %v, Expected: %v", got, want)
}
}
================================================
FILE: pkg/errs/synapse.go
================================================
package errs
import (
"fmt"
"strings"
)
// ERR_DUMMY dummy error
var ERR_DUMMY = Err{
Code: "ERR::DUMMY",
Message: "Dummy error "}
// ERR_INVALID_ENVIRONMENT should be thrown when an invalid environment is specified
var ERR_INVALID_ENVIRONMENT = Err{
Code: "ERR::INV::ENV",
Message: "Invalid environment specified"}
// ERR_CTRL_CONN_MAX_ATTEMPT should be thrown when control websocket reconnection max attempt reached
var ERR_CTRL_CONN_MAX_ATTEMPT = Err{
Code: "ERR::CTRL::CONN::MAX::ATTEMPT",
Message: "Control websocket reconnection max attempt reached"}
// ERR_SNK_PRX_MAX_ATTEMPT should be thrown when sink proxy restart max attempt reached
var ERR_SNK_PRX_MAX_ATTEMPT = Err{
Code: "ERR::SNK::PRX::MAX::ATTEMPT",
Message: "Sink proxy restart max attempt reached"}
// ERR_INF_API_MAX_ATTEMPT should be thrown when info api server restart max attempt reached
var ERR_INF_API_MAX_ATTEMPT = Err{
Code: "ERR::INF::API::MAX::ATTEMPT",
Message: "Info api server restart max attempt reached"}
// ERR_FS_MAX_ATTEMPT should be thrown when file server restart max attempt reached
var ERR_FS_MAX_ATTEMPT = Err{
Code: "ERR::FS::MAX::ATTEMPT",
Message: "File server restart max attempt reached"}
// ERR_INV_WS_DAT_TYPE should be thrown when invalid data type reader received from websocket
var ERR_INV_WS_DAT_TYPE = Err{
Code: "ERR::INV::WS::DAT::TYPE",
Message: "Invalid data type reader received from websocket"}
// ERR_BIN_UPD function returns err with code "ERR::BIN::UPD"
func ERR_BIN_UPD(err string) Err {
return Err{
Code: "ERR::BIN::UPD",
Message: "Unable to update binary " + err}
}
// ERR_WS_CTRL_CONN function returns err with code "ERR::WS::Conn"
func ERR_WS_CTRL_CONN(err string) Err {
return Err{
Code: "ERR::WS::Conn",
Message: "Unable to establish control websocket connection " + err}
}
// ERR_WS_CONN function returns err with code "ERR::WS::Conn"
func ERR_WS_CONN(err string) Err {
return Err{
Code: "ERR::WS::Conn",
Message: "Unable to establish websocket connection " + err}
}
// ERR_WS_CTRL_CONN_DWN function returns err with code "ERR::WS::CTRL::CONN::DWN"
func ERR_WS_CTRL_CONN_DWN(err string) Err {
return Err{
Code: "ERR::WS::CTRL::CONN::DWN",
Message: "Control websocket connection closed " + err}
}
// ERR_DAT_CONN_DWN function returns err with code "ERR::DAT::CONN::DWN"
func ERR_DAT_CONN_DWN(err string) Err {
return Err{
Code: "ERR::DAT::CONN::DWN",
Message: "Data websocket connection closed " + err}
}
// ERR_INVALID_WS_URL function returns err with code "ERR::INV::WS::URL"
func ERR_INVALID_WS_URL(err string) Err {
return Err{
Code: "ERR::INV::WS::URL",
Message: "Invalid websocket url error " + err}
}
// ERR_SNK_PRX function return error with code "ERR::SNK::PRX"
func ERR_SNK_PRX(err string) Err {
return Err{
Code: "ERR::SNK::PRX",
Message: "Sink proxy failed : " + err}
}
// ERR_SNK_PRX_CONN function returns error with code "ERR::SNK::PRX::CONN"
func ERR_SNK_PRX_CONN(err string) Err {
return Err{
Code: "ERR::SNK::PRX::CONN",
Message: "Unable to establish connection to local proxy : " + err}
}
// ERR_WS_WRT function returns error with code "ERR::WS::WRT"
func ERR_WS_WRT(err string) Err {
return Err{
Code: "ERR::WS::WRT",
Message: "Unable to retrieve valid writer from ws : " + err}
}
// ERR_WS_RDR function returns error with code "ERR::WS::RDR"
func ERR_WS_RDR(err string) Err {
return Err{
Code: "ERR::WS::RDR",
Message: "Unable to retrieve valid reader from ws : " + err}
}
// ERR_ATT_PRX function returns error with code "ERR::ATT::PRX"
func ERR_ATT_PRX(reqType string, err string) Err {
return Err{
Code: "ERR::ATT::PRX",
Message: fmt.Sprintf("Unable to attach proxy to [ %s ]request : %s", reqType, err)}
}
// ERR_DNS_RLV function returns error with code "ERR::DNS::RLV"
func ERR_DNS_RLV(err string) Err {
return Err{
Code: "ERR::DNS::RLV",
Message: fmt.Sprintf("Error while resolving dns : %s", err)}
}
// ERR_VLD_CFG function returns error with code ERR::CNF::FLD::VLD
func ERR_VLD_CFG(errs []string) Err {
return Err{
Code: "ERR::CNF::FLD::VLD",
Message: fmt.Sprintf("Validation errors : \n%s", strings.Join(errs, "\n"))}
}
// ERR_DAT_WS_RD function returns error with code ERR::DAT::WS::RD
func ERR_DAT_WS_RD(err string) Err {
return Err{
Code: "ERR::DAT::WS::RD",
Message: fmt.Sprintf("Unable to read from websocket : \n%s", err)}
}
// ERR_SNK_WRT function returns error with code ERR::SNK::WRT
func ERR_SNK_WRT(err string) Err {
return Err{
Code: "ERR::SNK::WRT",
Message: fmt.Sprintf("Unable to write to sink : \n%s", err)}
}
// ERR_API_SRV_STR function returns error with code ERR::API::SRV::STR
func ERR_API_SRV_STR(err string) Err {
return Err{
Code: "ERR::API::SRV::STR",
Message: fmt.Sprintf("Unable to start api server : \n%s", err)}
}
// ERR_FIL_SRV_STR function returns error with code "ERR::FIL::SRV::STR"
func ERR_FIL_SRV_STR(err string) Err {
return Err{
Code: "ERR::FIL::SRV::STR",
Message: fmt.Sprintf("Unable to start file server : \n%s", err)}
}
// ERR_DIR_CRT function returns error with code "ERR::DIR::CRT"
func ERR_DIR_CRT(err string) Err {
return Err{
Code: "ERR::DIR::CRT",
Message: fmt.Sprintf("Unable to create directory : \n%s", err)}
}
// ErrDirDel function returns error with code "ERR::DIR::DEL"
func ErrDirDel(err string) Err {
return Err{
Code: "ERR::DIR::DEL",
Message: fmt.Sprintf("Unable to delete directory : \n%s", err)}
}
// ERR_FIL_CRT function returns error with code ERR::FIL::CRT
func ERR_FIL_CRT(err string) Err {
return Err{
Code: "ERR::FIL::CRT",
Message: fmt.Sprintf("Unable to create file : \n%s", err)}
}
// ERR_API_WEB_HOK function returns error with code ERR::API::WEB::HOK
func ERR_API_WEB_HOK(err string) Err {
return Err{
Code: "ERR::API::WEB::HOK",
Message: fmt.Sprintf("Unable to call webhook url : \n%s", err)}
}
// ERR_DOCKER_RUN function returns error with code ERR::DOCKER::RUN
func ERR_DOCKER_RUN(err string) Err {
return Err{
Code: "ERR::DOCKER::RUN",
Message: fmt.Sprintf("Docker run failed with error: \n%s", err)}
}
// ERR_DOCKER_CRT function returns error with code ERR::DOCKER::CRT
func ERR_DOCKER_CRT(err string) Err {
return Err{
Code: "ERR::DOCKER::CRT",
Message: fmt.Sprintf("Docker create failed with error: \n%s", err)}
}
// ERR_DOCKER_STRT function returns error with code "ERR::DOCKER::STRT"
func ERR_DOCKER_STRT(err string) Err {
return Err{
Code: "ERR::DOCKER::STRT",
Message: fmt.Sprintf("Docker start failed with error: \n%s", err)}
}
// ErrDockerVolCrt function returns error with code "ERR::DOCKER::VOL::CRT"
func ErrDockerVolCrt(err string) Err {
return Err{
Code: "ERR::DOCKER::VOL::CRT",
Message: fmt.Sprintf("Docker volume create failed with error: \n%s", err)}
}
// ErrDockerCP function returns error with code "ERR::DOCKER::CP"
func ErrDockerCP(err string) Err {
return Err{
Code: "ERR::DOCKER::CP",
Message: fmt.Sprintf("Error copying file to docker: \n%s", err)}
}
// ErrSecretLoad function returns error with code "ERR::SECRET::LOAD"
func ErrSecretLoad(err string) Err {
return Err{
Code: "ERR::SECRET::LOAD",
Message: fmt.Sprintf("Error in loading secrets: \n%s", err)}
}
// ERR_JSON_MAR function returns error with code "ERR::JSON::MAR"
func ERR_JSON_MAR(err string) Err {
return Err{
Code: "ERR::JSON::MAR",
Message: fmt.Sprintf("Error marshaling JSON: \n%s", err)}
}
// ERR_JSON_UNMAR function returns error with code "ERR::JSON::UNMAR"
func ERR_JSON_UNMAR(err string) Err {
return Err{
Code: "ERR::JSON::UNMAR",
Message: fmt.Sprintf("Error unmarshaling JSON: \n%s", err)}
}
// ERR_LT_CRDS function returns error with code "ERR::LT::CRDS"
func ERR_LT_CRDS() Err {
return Err{
Code: "ERR::LT::CRDS",
Message: "No lambdatest config provided"}
}
// ERR_SNK_RD_WRT_MSM should be raised when there is a read/write mismatch in the sink proxy
var ERR_SNK_RD_WRT_MSM = Err{
Code: "ERR::SNK::RD::WRT::MSM",
Message: "Read write mismatch in sink proxy "}
// CR_AUTH_NF should be raised when container registry credentials are not present for a private repo
var CR_AUTH_NF = Err{
Code: "CR::AUTH:NF",
Message: "Container registry credentials are not present for private repo"}
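The constructors above all follow the same shape: a stable machine-readable code plus a human-readable message with the underlying error string appended. A minimal standalone sketch of the intended calling convention (the two-field Err struct and the marshalOrErr helper are illustrative re-creations, not part of the package):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Err mirrors the (assumed) two-field shape used by the constructors above.
type Err struct {
	Code    string
	Message string
}

// ERR_JSON_MAR reproduces the JSON-marshaling constructor from the file above.
func ERR_JSON_MAR(err string) Err {
	return Err{Code: "ERR::JSON::MAR", Message: fmt.Sprintf("Error marshaling JSON: \n%s", err)}
}

// marshalOrErr shows how a caller would use the constructor: the raw error
// string is passed in, and the constructor prefixes it with a stable code.
func marshalOrErr(v interface{}) (string, *Err) {
	b, err := json.Marshal(v)
	if err != nil {
		e := ERR_JSON_MAR(err.Error())
		return "", &e
	}
	return string(b), nil
}

func main() {
	// Channels are not JSON-serializable, so this exercises the error path.
	if _, e := marshalOrErr(make(chan int)); e != nil {
		fmt.Println(e.Code)
	}
}
```

The code string (not the message) is what callers should branch on, since the message embeds a free-form error.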
================================================
FILE: pkg/fileutils/fileutils.go
================================================
package fileutils
import (
"fmt"
"io"
"io/fs"
"os"
"path/filepath"
)
// CopyFile copies the contents of the file named src to the file named
// by dst. The file will be created if it does not already exist. If the
destination file exists, all its contents will be replaced by the contents
// of the source file. The file mode will be copied from the source and
// the copied data is synced/flushed to stable storage.
func CopyFile(src, dst string, changeMode bool) (err error) {
in, err := os.Open(src)
if err != nil {
return
}
defer in.Close()
out, err := os.Create(dst)
if err != nil {
return
}
defer func() {
if e := out.Close(); e != nil {
err = e
}
}()
_, err = io.Copy(out, in)
if err != nil {
return
}
err = out.Sync()
if err != nil {
return
}
if !changeMode {
return
}
si, err := os.Lstat(src)
if err != nil {
return
}
err = os.Chmod(dst, si.Mode())
if err != nil {
return
}
return
}
// CopyDir recursively copies a directory tree, attempting to preserve permissions.
// Source directory must exist, destination directory must *not* exist.
// Symlinks are ignored and skipped.
func CopyDir(src, dst string, changeMode bool) (err error) {
src = filepath.Clean(src)
dst = filepath.Clean(dst)
si, err := os.Lstat(src)
if err != nil {
return err
}
if !si.IsDir() {
return fmt.Errorf("source is not a directory")
}
_, err = os.Lstat(dst)
if err != nil && !os.IsNotExist(err) {
return
}
if err == nil {
return fmt.Errorf("destination %+v already exists", dst)
}
err = os.MkdirAll(dst, si.Mode())
if err != nil {
return
}
// NOTE: ioutil.ReadDir -> os.ReadDir as the latter is better:
// """
// As of Go 1.16, os.ReadDir is a more efficient and correct choice:
// it returns a list of fs.DirEntry instead of fs.FileInfo,
// and it returns partial results in the case of an error
// midway through reading a directory.
// """
entries, err := os.ReadDir(src)
if err != nil {
return
}
var fileInfo fs.FileInfo
for _, entry := range entries {
srcPath := filepath.Join(src, entry.Name())
dstPath := filepath.Join(dst, entry.Name())
if entry.IsDir() {
err = CopyDir(srcPath, dstPath, changeMode)
if err != nil {
return
}
} else {
// Skip symlinks.
fileInfo, err = entry.Info()
if err != nil || fileInfo.Mode()&os.ModeSymlink != 0 {
continue
}
err = CopyFile(srcPath, dstPath, changeMode)
if err != nil {
return
}
}
}
return
}
// CheckIfExists checks if file or directory exists in the given path.
func CheckIfExists(path string) (bool, error) {
if _, err := os.Lstat(path); err != nil {
if os.IsNotExist(err) {
return false, nil
}
return false, err
}
return true, nil
}
// CreateIfNotExists creates a file or a directory only if it does not already exist.
func CreateIfNotExists(path string, isDir bool) error {
exists, err := CheckIfExists(path)
if err != nil {
return err
}
if !exists {
if isDir {
return os.MkdirAll(path, 0755)
}
if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil {
return err
}
f, err := os.OpenFile(path, os.O_CREATE, 0755)
if err != nil {
return err
}
f.Close()
}
return nil
}
================================================
FILE: pkg/fileutils/fileutils_test.go
================================================
package fileutils
import (
"fmt"
"os"
"testing"
)
func removeCopiedPath(path string) {
err := os.RemoveAll(path)
if err != nil {
fmt.Println("error in removing!!")
}
}
// nolint:dupl
func TestCopyFile(t *testing.T) {
type args struct {
src string
dst string
changeMode bool
}
tests := []struct {
name string
args args
wantErr bool
requireDelete bool // if new file is created, we need to delete for clean up
}{
{
"Check open error",
args{src: "../../testutils/file", dst: "./dst", changeMode: true},
true,
false,
}, // this file is not present
{
"Check create error for invalid path",
args{src: "../../testutils/testfile", dst: "../xyz/dst", changeMode: true},
true,
false,
}, // file present at given args.src
{
"Check false change mode",
args{src: "../../testutils/testfile", dst: "./dst", changeMode: true},
false,
true,
}, // new file will be created, delete it
{
"Check success",
args{src: "../../testutils/testfile", dst: "./copyfile", changeMode: true},
false,
true,
}, // new file will be created, delete it
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := CopyFile(tt.args.src, tt.args.dst, tt.args.changeMode)
if tt.requireDelete {
defer removeCopiedPath(tt.args.dst)
}
if (err != nil) != tt.wantErr {
t.Errorf("CopyFile() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
// nolint:dupl
func TestCopyDir(t *testing.T) {
type args struct {
src string
dst string
changeMode bool
}
tests := []struct {
name string
args args
wantErr bool
requireDelete bool // if new path/directory is created, we need to delete it for clean up
}{
{
"Check status error",
args{src: "../../testutils/dne/file", dst: "./dst", changeMode: true},
true,
false,
}, // this dir is not present
{
"Check for src is not a directory",
args{src: "../../testutils/testfile", dst: "../xyz/dst", changeMode: true},
true,
false,
}, // file present at given args.src
{
"Check for non-exist dst directory",
args{src: "../../testutils/testdirectory", dst: "./xyz", changeMode: true},
false,
true,
}, // new dir will be created
{
"Check existing dst",
args{src: "../../testutils/testdirectory", dst: "../../testutils/testdirectory", changeMode: true},
true,
false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := CopyDir(tt.args.src, tt.args.dst, tt.args.changeMode)
if tt.requireDelete {
defer removeCopiedPath(tt.args.dst)
}
if (err != nil) != tt.wantErr {
t.Errorf("CopyDir() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
func TestCheckIfExists(t *testing.T) {
type args struct {
path string
}
tests := []struct {
name string
args args
want bool
wantErr bool
}{
{
"Check false path error",
args{path: "../pathnotexist/dir"},
false,
false,
}, // this dir is not present
{
"Check for existing path, should not give error",
args{path: "../../testutils/"},
true,
false,
}, // this dir is present
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := CheckIfExists(tt.args.path)
if (err != nil) != tt.wantErr {
t.Errorf("CheckIfExists() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("CheckIfExists() = %v, want %v", got, tt.want)
}
})
}
}
func TestCreateIfNotExists(t *testing.T) {
type args struct {
path string
isDir bool
}
tests := []struct {
name string
args args
wantErr bool
requireDelete bool
}{
{
"Check false path directory",
args{path: "../pathnotexist", isDir: true},
false,
true,
}, // new dir will be created
{
"Check make directory error",
args{path: "pathnotexist", isDir: true},
false,
true,
}, // new dir will be created
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := CreateIfNotExists(tt.args.path, tt.args.isDir)
if tt.requireDelete {
defer removeCopiedPath(tt.args.path)
}
if (err != nil) != tt.wantErr {
t.Errorf("CreateIfNotExists() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
================================================
FILE: pkg/gitmanager/setup.go
================================================
// Package gitmanager is used for cloning repo
package gitmanager
import (
"context"
"encoding/base64"
"fmt"
"net/http"
"net/url"
"os"
"path/filepath"
"strings"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/requestutils"
"github.com/LambdaTest/test-at-scale/pkg/urlmanager"
"github.com/LambdaTest/test-at-scale/pkg/utils"
"github.com/cenkalti/backoff/v4"
"github.com/mholt/archiver/v3"
)
// GitLabSingleFileResponse is the response body for a GitLab single file content request
type GitLabSingleFileResponse struct {
Content string `json:"content"`
}
type gitManager struct {
logger lumber.Logger
httpClient http.Client
execManager core.ExecutionManager
request core.Requests
}
const (
authorization = "Authorization"
)
// NewGitManager returns a new GitManager
func NewGitManager(logger lumber.Logger, execManager core.ExecutionManager) core.GitManager {
return &gitManager{
logger: logger,
httpClient: http.Client{
Timeout: global.DefaultGitCloneTimeout,
},
execManager: execManager,
request: requestutils.New(logger, global.DefaultAPITimeout, &backoff.StopBackOff{}),
}
}
func (gm *gitManager) Clone(ctx context.Context, payload *core.Payload, oauth *core.Oauth) error {
repoLink := payload.RepoLink
repoItems := strings.Split(repoLink, "/")
repoName := repoItems[len(repoItems)-1]
commitID := payload.BuildTargetCommit
archiveURL, err := urlmanager.GetCloneURL(payload.GitProvider, repoLink, repoName, commitID, payload.ForkSlug, payload.RepoSlug)
if err != nil {
gm.logger.Errorf("failed to get clone url for provider %s, error %v", payload.GitProvider, err)
return err
}
gm.logger.Debugf("cloning from %s", archiveURL)
err = gm.downloadFile(ctx, archiveURL, commitID+".zip", oauth)
if err != nil {
gm.logger.Errorf("failed to download file %v", err)
return err
}
if err = gm.initGit(ctx, payload, oauth); err != nil {
gm.logger.Errorf("failed to initialize git, error %v", err)
return err
}
return nil
}
// downloadFile downloads the repository archive and extracts it if it is a zip file.
func (gm *gitManager) downloadFile(ctx context.Context, archiveURL, fileName string, oauth *core.Oauth) error {
header := getHeaderMap(oauth)
respBody, statusCode, err := gm.request.MakeAPIRequest(ctx, http.MethodGet, archiveURL, nil, nil, header)
if err != nil {
return err
}
if statusCode >= http.StatusMultipleChoices {
return fmt.Errorf("received non 200 status code [%d]", statusCode)
}
err = gm.copyAndExtractFile(ctx, respBody, fileName)
if err != nil {
gm.logger.Errorf("failed to copy file %v", err)
return err
}
return nil
}
// copyAndExtractFile copies the content of http response directly to the local storage
// and extracts the file if it is a zip file.
func (gm *gitManager) copyAndExtractFile(ctx context.Context, respBody []byte, path string) error {
out, err := os.Create(path)
if err != nil {
gm.logger.Errorf("failed to create file err %v", err)
return err
}
_, err = out.Write(respBody)
if err != nil {
gm.logger.Errorf("failed to write to file %v", err)
out.Close()
return err
}
out.Close()
// if zip file, then unarchive the file in same path
if filepath.Ext(path) == ".zip" {
zip := archiver.NewZip()
zip.OverwriteExisting = true
if err = zip.Unarchive(path, fmt.Sprintf("%s/clonedir", filepath.Dir(path))); err != nil {
gm.logger.Errorf("failed to unarchive file %v", err)
return err
}
}
commands := []string{
fmt.Sprintf("mkdir %s", global.RepoDir),
fmt.Sprintf("mv %s/clonedir/*/* %s", filepath.Dir(path), global.RepoDir),
}
err = gm.execManager.ExecuteInternalCommands(ctx, core.RenameCloneFile, commands, filepath.Dir(path), nil, nil)
if err != nil {
return err
}
return err
}
func (gm *gitManager) initGit(ctx context.Context, payload *core.Payload, oauth *core.Oauth) error {
branch := payload.BranchName
repoLink := payload.RepoLink
if payload.GitProvider == core.Bitbucket && payload.ForkSlug != "" {
repoLink = strings.Replace(repoLink, payload.RepoSlug, payload.ForkSlug, -1)
}
repoURL, perr := url.Parse(repoLink)
if perr != nil {
return perr
}
if oauth.Type == core.Basic {
decodedToken, err := base64.StdEncoding.DecodeString(oauth.AccessToken)
if err != nil {
gm.logger.Errorf("Failed to decode basic oauth token for RepoID %s: %s", payload.RepoID, err)
return err
}
creds := strings.Split(string(decodedToken), ":")
repoURL.User = url.UserPassword(creds[0], creds[1])
} else {
repoURL.User = url.UserPassword("x-token-auth", oauth.AccessToken)
if payload.GitProvider == core.GitLab {
repoURL.User = url.UserPassword("oauth2", oauth.AccessToken)
}
}
urlWithToken := repoURL.String()
commands := []string{
"git init",
fmt.Sprintf("git remote add origin %s.git", repoLink),
fmt.Sprintf("git config --global url.%s.InsteadOf %s", urlWithToken, repoLink),
fmt.Sprintf("git fetch --depth=1 origin +%s:refs/remotes/origin/%s", payload.BuildTargetCommit, branch),
fmt.Sprintf("git config --global --remove-section url.%s", urlWithToken),
fmt.Sprintf("git checkout --progress --force -B %s refs/remotes/origin/%s", branch, branch),
}
if err := gm.execManager.ExecuteInternalCommands(ctx, core.InitGit, commands, global.RepoDir, nil, nil); err != nil {
return err
}
return nil
}
// DownloadFileByCommit downloads a single file at the given commit and writes it to a temporary yml file.
func (gm *gitManager) DownloadFileByCommit(ctx context.Context, gitProvider, repoSlug,
commitID, filePath string, oauth *core.Oauth) (string, error) {
downloadURL, err := urlmanager.GetFileDownloadURL(gitProvider, commitID, repoSlug, filePath)
if err != nil {
return "", err
}
header := getHeaderMap(oauth)
respBody, statusCode, err := gm.request.MakeAPIRequest(ctx, http.MethodGet, downloadURL, nil, nil, header)
if err != nil {
return "", err
}
if statusCode >= http.StatusMultipleChoices {
return "", fmt.Errorf("received non 200 status code [%d]", statusCode)
}
path := utils.GenerateUUID() + ".yml"
out, err := os.Create(path)
if err != nil {
gm.logger.Errorf("failed to create file err %v", err)
return "", err
}
_, err = out.Write(respBody)
if err != nil {
gm.logger.Errorf("failed to copy file %v", err)
out.Close()
return "", err
}
out.Close()
return path, nil
}
func getHeaderMap(oauth *core.Oauth) map[string]string {
header := map[string]string{}
header[authorization] = fmt.Sprintf("%s %s", oauth.Type, oauth.AccessToken)
return header
}
================================================
FILE: pkg/gitmanager/setup_test.go
================================================
package gitmanager
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"net/http"
"net/http/httptest"
"os"
"strings"
"testing"
"github.com/LambdaTest/test-at-scale/mocks"
"github.com/LambdaTest/test-at-scale/pkg/command"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/requestutils"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/cenkalti/backoff/v4"
"github.com/stretchr/testify/mock"
)
func Test_downloadFile(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/archive/zipfile.zip" {
t.Errorf("Expected to request '/archive/zipfile.zip', got: %v", r.URL)
return
}
reqToken := r.Header.Get("Authorization")
splitToken := strings.Split(reqToken, "Bearer ")
expectedOauth := &core.Oauth{AccessToken: "dummy", Type: core.Bearer}
if splitToken[1] != expectedOauth.AccessToken {
t.Errorf("Invalid clone token, expected: %v\nreceived: %v", expectedOauth.AccessToken, splitToken[1])
w.WriteHeader(http.StatusUnauthorized)
} else {
w.WriteHeader(http.StatusOK)
}
}))
defer server.Close()
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't get logger, error: %v", err)
return
}
var httpClient http.Client
execManager := new(mocks.ExecutionManager)
execManager.On("ExecuteInternalCommands",
mock.AnythingOfType("*context.emptyCtx"),
mock.AnythingOfType("core.CommandType"),
mock.AnythingOfType("[]string"),
mock.AnythingOfType("string"),
mock.AnythingOfType("map[string]string"),
mock.AnythingOfType("map[string]string")).Return(
func(ctx context.Context, commandType core.CommandType, commands []string, cwd string, envMap, secretData map[string]string) error {
return nil
},
)
gm := &gitManager{
logger: logger,
httpClient: httpClient,
execManager: execManager,
request: requestutils.New(logger, global.DefaultAPITimeout, &backoff.StopBackOff{}),
}
archiveURL := server.URL + "/archive/zipfile.zip"
fileName := "copyAndExtracted"
oauth := &core.Oauth{AccessToken: "dummy", Type: core.Bearer}
err2 := gm.downloadFile(context.TODO(), archiveURL, fileName, oauth)
defer removeFile(fileName) // remove the file created after downloading and extracting
if err2 != nil {
t.Errorf("Error: %v", err2)
}
}
func Test_copyAndExtractFile(t *testing.T) {
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't get logger, error: %v", err)
}
var httpClient http.Client
execManager := new(mocks.ExecutionManager)
execManager.On("ExecuteInternalCommands",
mock.AnythingOfType("*context.emptyCtx"),
mock.AnythingOfType("core.CommandType"),
mock.AnythingOfType("[]string"),
mock.AnythingOfType("string"),
mock.AnythingOfType("map[string]string"),
mock.AnythingOfType("map[string]string")).Return(
func(ctx context.Context, commandType core.CommandType, commands []string, cwd string, envMap, secretData map[string]string) error {
return nil
},
)
gm := &gitManager{
logger: logger,
httpClient: httpClient,
execManager: execManager,
request: requestutils.New(logger, global.DefaultAPITimeout, &backoff.StopBackOff{}),
}
fileBody := "Hello World!"
resp := http.Response{
Body: io.NopCloser(bytes.NewBufferString(fileBody)),
}
path := "newFile"
defer removeFile(path)
respBodyBuffer := bytes.Buffer{}
_, err = io.Copy(&respBodyBuffer, resp.Body)
if err != nil {
t.Errorf("Error: %v", err)
return
}
err2 := gm.copyAndExtractFile(context.TODO(), respBodyBuffer.Bytes(), path)
if err2 != nil {
t.Errorf("Error: %v", err2)
return
}
fileContent, err := os.ReadFile("./newFile")
if err != nil {
t.Errorf("Error: %v", err)
return
}
if string(fileContent) != fileBody {
t.Errorf("Expected file content: %v\nReceived: %v", fileBody, string(fileContent))
}
}
func TestClone(t *testing.T) {
checkClone := func(t *testing.T) {
server := httptest.NewServer( // mock server
http.FileServer(http.Dir("../../testutils/testdata/archive")), // mock data stored at tests/mock/index.txt
)
defer server.Close()
global.TestEnv = true
global.TestServer = server.URL
logger, err := lumber.NewLogger(lumber.LoggingConfig{ConsoleLevel: lumber.Debug}, true, 1)
if err != nil {
fmt.Println("Logger can't be established")
}
azureClient := new(mocks.AzureClient)
secretParser := new(mocks.SecretParser)
execManager := command.NewExecutionManager(secretParser, azureClient, logger)
gm := NewGitManager(logger, execManager)
payload, err := testutils.GetPayload()
if err != nil {
t.Errorf("Unable to load payload, error %v", err)
}
payload.RepoLink = server.URL
payload.BuildTargetCommit = "testRepo"
oauth := &core.Oauth{AccessToken: "dummy", Type: core.Bearer}
commitID := payload.BuildTargetCommit
err = gm.Clone(context.TODO(), payload, oauth)
global.TestEnv = false
expErr := "opening zip archive for reading: creating reader: zip: not a valid zip file"
defer removeFile("testRepo")
defer removeFile(commitID + ".zip")
defer removeFile(global.RepoDir)
if err != nil && err.Error() != expErr {
t.Errorf("Error: %v", err)
return
}
_, err2 := os.OpenFile(commitID+".zip", 0440, 0440)
_, err3 := os.OpenFile("zipFile", 0440, 0440)
// check if downloaded file exist now
if errors.Is(err2, os.ErrNotExist) {
t.Errorf("Could not find the downloaded file, got error: %v", err2)
return
}
if err != nil && err.Error() == expErr {
return
}
// check if unzipped folder exist
if errors.Is(err3, os.ErrNotExist) {
t.Errorf("Could not find the unzipped folder, got error: %v", err3)
return
}
// global.RepoDir does not exist on local
if err != nil && !errors.Is(err, os.ErrNotExist) {
t.Errorf("Expected error: %v, Received: %v\n", os.ErrNotExist, err)
return
}
if err == nil {
if _, err4 := os.OpenFile(global.RepoDir, 0440, 0440); errors.Is(err4, os.ErrNotExist) {
t.Errorf("Failed to find file in global repodir, got error: %v", err4)
return
}
}
}
t.Run("Check the clone function", func(t *testing.T) {
checkClone(t)
})
}
func removeFile(path string) {
err := os.RemoveAll(path)
if err != nil {
fmt.Println("error in removing!!")
}
}
================================================
FILE: pkg/global/nucleusconstants.go
================================================
package global
import "time"
// TestEnv : to set test env for urlmanager package
var TestEnv bool = false
// TestServer : store server URL of test server while doing mock testing
var TestServer string
// All constants related to nucleus
const (
CoverageManifestFileName = "manifest.json"
HomeDir = "/home/nucleus"
WorkspaceCacheDir = "/workspace-cache"
RepoDir = HomeDir + "/repo"
CodeCoverageDir = RepoDir + "/coverage"
RepoCacheDir = RepoDir + "/__tas"
DefaultAPITimeout = 45 * time.Second
DefaultGitCloneTimeout = 30 * time.Minute
SamplingTime = 5 * time.Millisecond
RepoSecretPath = "/vault/secrets/reposecrets"
OauthSecretPath = "/vault/secrets/oauth"
NeuronRemoteHost = "http://neuron-service.phoenix.svc.cluster.local"
BlockTestFileLocation = "/tmp/blocktests.json"
SecretRegex = `\${{\s*secrets\.(.*?)\s*}}` // nolint: gosec
ExecutionResultChunkSize = 50
TestLocatorsDelimiter = "#TAS#"
ExpiryDelta = 15 * time.Minute
NewTASVersion = 2
ModulePath = "MODULE_PATH"
PackageJSON = "package.json"
SubModuleName = "SUBMODULE_NAME"
ArgPattern = "--pattern"
ArgConfig = "--config"
ArgDiff = "--diff"
ArgCommand = "--command"
ArgLocator = "--locator-file"
ArgFrameworVersion = "--frameworkVersion"
DefaultTASVersion = "1.0.0"
TASYmlConfigurationDocLink = "https://www.lambdatest.com/support/docs/tas-configuring-tas-yml"
)
// FrameworkRunnerMap is a map of framework to its respective runner location
var FrameworkRunnerMap = map[string]string{
"jasmine": "./node_modules/.bin/jasmine-runner",
"mocha": "./node_modules/.bin/mocha-runner",
"jest": "./node_modules/.bin/jest-runner",
"golang": "/home/nucleus/server",
"junit": "java",
}
// APIHostURLMap is a map of git provider to its API host URL
var APIHostURLMap = map[string]string{
"github": "https://api.github.com/repos",
"gitlab": "https://gitlab.com/api/v4/projects",
"bitbucket": "https://api.bitbucket.org/2.0",
}
// InstallRunnerCmds is the list of commands used to install the custom runner
var InstallRunnerCmds = []string{"tar -xzf /custom-runners/custom-runners.tgz"}
// NeuronHost is the neuron host endpoint
var NeuronHost string
// SetNeuronHost is setter for NeuronHost
func SetNeuronHost(host string) {
NeuronHost = host
}
// FrameworkLanguageMap is a map of framework to its language
var FrameworkLanguageMap = map[string]string{
"jasmine": "javascript",
"mocha": "javascript",
"jest": "javascript",
"golang": "golang",
"junit": "java",
}
// ValidYMLVersions defines all valid yml versions
var ValidYMLVersions = []int{1, 2}
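SecretRegex above captures the key name inside a ${{ secrets.NAME }} placeholder, tolerating optional whitespace around the key. A quick standalone check of the same pattern (the secretName helper and sample keys are illustrative, not package API):

```go
package main

import (
	"fmt"
	"regexp"
)

// secretRe uses the same pattern as global.SecretRegex above; the unescaped
// braces are treated as literals by Go's RE2 engine.
var secretRe = regexp.MustCompile(`\${{\s*secrets\.(.*?)\s*}}`)

// secretName returns the captured key, or "" when the line has no placeholder.
func secretName(line string) string {
	m := secretRe.FindStringSubmatch(line)
	if m == nil {
		return ""
	}
	return m[1]
}

func main() {
	fmt.Println(secretName("token: ${{ secrets.GITHUB_TOKEN }}"))
}
```

The non-greedy `(.*?)` combined with the trailing `\s*}}` keeps surrounding whitespace out of the captured key.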
================================================
FILE: pkg/global/synapseconstants.go
================================================
package global
import (
"time"
)
// All constants related to synapse
const (
GracefulTimeout = 100 * time.Second
ProxyServerPort = "8000"
DirectoryPermissions = 0755
FilePermissions = 0755
VaultSecretDir = "/vault/secrets"
GitConfigFileName = "oauth"
RepoSecretsFileName = "reposecrets"
SynapseContainerURL = "http://synapse:8000"
NetworkEnvName = "NetworkName"
AutoRemoveEnv = "AutoRemove"
SynapseHostEnv = "synapsehost"
LocalEnv = "local"
NetworkName = "test-at-scale"
AutoRemove = true
Local = true
MaxConnectionAttempts = 10
ExecutionLogsPath = "/var/log/synapse"
PingWait = 30 * time.Second
MaxMessageSize = 4096
)
// SocketURL is the LambdaTest URL for the synapse socket
var SocketURL map[string]string
// TASCloudURL is the URL to send reports to
var TASCloudURL map[string]string
func init() {
SocketURL = map[string]string{
"stage": "wss://stage-api-tas.lambdatestinternal.com/ws/",
"dev": "ws://host.docker.internal/ws/",
"prod": "wss://api.tas.lambdatest.com/ws/",
"pi": "wss://api.tas-pi.lambdatest.com/ws/",
}
TASCloudURL = map[string]string{
"stage": "https://stage-api-tas.lambdatestinternal.com",
"dev": "http://host.docker.internal",
"prod": "https://api.tas.lambdatest.com",
"pi": "https://api.tas-pi.lambdatest.com",
}
}
================================================
FILE: pkg/global/version.go
================================================
package global
import "os"
var (
// NucleusBinaryVersion Nucleus version
NucleusBinaryVersion = os.Getenv("VERSION")
// SynapseBinaryVersion Synapse version
SynapseBinaryVersion = os.Getenv("VERSION")
)
================================================
FILE: pkg/listsubmoduleservice/setup.go
================================================
package listsubmoduleservice
import (
"context"
"encoding/json"
"net/http"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/utils"
)
type subModuleListService struct {
logger lumber.Logger
requests core.Requests
subModuleListEndpoint string
}
// New returns a new ListSubModuleService
func New(request core.Requests, logger lumber.Logger) core.ListSubModuleService {
return &subModuleListService{
logger: logger,
requests: request,
subModuleListEndpoint: global.NeuronHost + "/submodule-list",
}
}
// Send sends the total submodule count for the given build to the submodule-list endpoint
func (s *subModuleListService) Send(ctx context.Context, buildID string, totalSubmodule int) error {
subModuleList := core.SubModuleList{
BuildID: buildID,
TotalSubModule: totalSubmodule,
}
reqBody, err := json.Marshal(&subModuleList)
if err != nil {
s.logger.Errorf("error while json marshal %v", err)
return err
}
query, headers := utils.GetDefaultQueryAndHeaders()
if _, statusCode, err := s.requests.MakeAPIRequest(ctx, http.MethodPost, s.subModuleListEndpoint,
reqBody, query, headers); err != nil || statusCode != 200 {
s.logger.Errorf("error while making submodule-list api call status code %d, err %v", statusCode, err)
return err
}
return nil
}
================================================
FILE: pkg/logstream/mask.go
================================================
package logstream
import (
"io"
"strings"
)
const (
maskedStr = "****************"
)
// masker wraps a stream writer and masks any occurrence of the configured secrets
type masker struct {
w io.Writer
r *strings.Replacer
}
// NewMasker returns a masker that wraps io.Writer w.
func NewMasker(w io.Writer, secretData map[string]string) io.Writer {
var oldnew []string
for _, secret := range secretData {
if secret == "" {
continue
}
for _, part := range strings.Split(secret, "\n") {
part = strings.TrimSpace(part)
// avoid masking empty or single character strings.
if len(part) < 2 {
continue
}
oldnew = append(oldnew, part, maskedStr)
}
}
if len(oldnew) == 0 {
return w
}
return &masker{
w: w,
r: strings.NewReplacer(oldnew...),
}
}
// Write writes p to the base writer. The method scans for any
// sensitive data in p and masks before writing.
func (m *masker) Write(p []byte) (n int, err error) {
_, err = m.w.Write([]byte(m.r.Replace(string(p))))
return len(p), err
}
================================================
FILE: pkg/logstream/mask_test.go
================================================
package logstream
import (
"bytes"
"testing"
)
const keyLine = `{
"token":"dXNlcm5hbWU6cGFzc3dvcmQ="
}`
func TestReplace(t *testing.T) {
secrets := map[string]string{
"cipher": "lazy dog",
"cipher2": "",
}
buf := &bytes.Buffer{}
w := NewMasker(buf, secrets)
w.Write([]byte("The quick brown fox jumps over the lazy dog")) // nolint:errcheck
if got, want := buf.String(), "The quick brown fox jumps over the ****************"; got != want {
t.Errorf("Want masked string %s, got %s", want, got)
}
}
func TestReplaceMultiline(t *testing.T) {
key := `
-----BEGIN RSA PRIVATE KEY-----
MIICXAIBAAKBgQCqGKukO1De7zhZj6+H0qtjTkVxwTCpvKe4eCZ0FPqri0cb2JZfXJ/DgYSF6vUp
wmJG8wVQZKjeGcjDOL5UlsuusFncCzWBQ7RKNUSesmQRMSGkVb1/3j+skZ6UtW+5u09lHNsj6tQ5
1s1SPrCBkedbNf0Tp0GbMJDyR4e9T04ZZwIDAQABAoGAFijko56+qGyN8M0RVyaRAXz++xTqHBLh
3tx4VgMtrQ+WEgCjhoTwo23KMBAuJGSYnRmoBZM3lMfTKevIkAidPExvYCdm5dYq3XToLkkLv5L2
pIIVOFMDG+KESnAFV7l2c+cnzRMW0+b6f8mR1CJzZuxVLL6Q02fvLi55/mbSYxECQQDeAw6fiIQX
GukBI4eMZZt4nscy2o12KyYner3VpoeE+Np2q+Z3pvAMd/aNzQ/W9WaI+NRfcxUJrmfPwIGm63il
AkEAxCL5HQb2bQr4ByorcMWm/hEP2MZzROV73yF41hPsRC9m66KrheO9HPTJuo3/9s5p+sqGxOlF
L0NDt4SkosjgGwJAFklyR1uZ/wPJjj611cdBcztlPdqoxssQGnh85BzCj/u3WqBpE2vjvyyvyI5k
X6zk7S0ljKtt2jny2+00VsBerQJBAJGC1Mg5Oydo5NwD6BiROrPxGo2bpTbu/fhrT8ebHkTz2epl
U9VQQSQzY1oZMVX8i1m5WUTLPz2yLJIBQVdXqhMCQBGoiuSoSjafUhV7i1cEGpb88h5NBYZzWXGZ
37sJ5QsW+sJyoNde3xH8vdXhzU7eT82D6X/scw9RZz+/6rCJ4p0=
-----END RSA PRIVATE KEY-----`
line := `> MIICXAIBAAKBgQCqGKukO1De7zhZj6+H0qtjTkVxwTCpvKe4eCZ0FPqri0cb2JZfXJ/DgYSF6vUp`
secrets := map[string]string{
"cipher": key,
}
buf := &bytes.Buffer{}
w := NewMasker(buf, secrets)
w.Write([]byte(line)) // nolint:errcheck
if got, want := buf.String(), "> ****************"; got != want {
t.Errorf("Want masked string %s, got %s", want, got)
}
}
func TestSkipSingleCharacterMask(t *testing.T) {
secrets := map[string]string{
"cipher": "l",
}
buf := &bytes.Buffer{}
w := NewMasker(buf, secrets)
w.Write([]byte("The quick brown fox jumps over the lazy dog")) // nolint:errcheck
if got, want := buf.String(), "The quick brown fox jumps over the lazy dog"; got != want {
t.Errorf("Want masked string %s, got %s", want, got)
}
}
func TestReplaceMultilineJson(t *testing.T) {
key := keyLine
line := keyLine
secrets := map[string]string{
"cipher": key,
}
buf := &bytes.Buffer{}
w := NewMasker(buf, secrets)
w.Write([]byte(line)) // nolint:errcheck
if got, want := buf.String(), "{\n ****************\n}"; got != want {
t.Errorf("Want masked string %s, got %s", want, got)
}
}
================================================
FILE: pkg/logwriter/setup.go
================================================
package logwriter
import (
"bytes"
"context"
"fmt"
"io"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
)
type (
BufferLogWriter struct {
subModule string
buffer *bytes.Buffer
logger lumber.Logger
}
AzureLogWriter struct {
azureClient core.AzureClient
purpose core.SASURLPurpose
logger lumber.Logger
}
)
func NewAzureLogWriter(azureClient core.AzureClient,
purpose core.SASURLPurpose,
logger lumber.Logger) core.LogWriterStrategy {
return &AzureLogWriter{
azureClient: azureClient,
purpose: purpose,
logger: logger,
}
}
func NewBufferLogWriter(subModule string,
buffer *bytes.Buffer,
logger lumber.Logger) core.LogWriterStrategy {
return &BufferLogWriter{
subModule: subModule,
buffer: buffer,
logger: logger,
}
}
func (b *BufferLogWriter) Write(ctx context.Context, reader io.Reader) <-chan error {
errChan := make(chan error, 1)
go func() {
if _, err := fmt.Fprintf(b.buffer, "\n<------ PRE RUN for %s ------> \n", b.subModule); err != nil {
b.logger.Debugf("Error writing the logs separator for submodule %s, error %v", b.subModule, err)
errChan <- err
return
}
if _, err := b.buffer.ReadFrom(reader); err != nil {
b.logger.Debugf("Error writing the logs to buffer for submodule %s, error %v", b.subModule, err)
errChan <- err
return
}
close(errChan)
b.logger.Debugf("written logs for sub module %s to buffer", b.subModule)
}()
return errChan
}
func (a *AzureLogWriter) Write(ctx context.Context, reader io.Reader) <-chan error {
errChan := make(chan error, 1)
go func() {
sasURL, err := a.azureClient.GetSASURL(ctx, a.purpose, nil)
if err != nil {
a.logger.Errorf("failed to generate SAS URL for purpose %s, error: %v", a.purpose, err)
errChan <- err
return
}
blobPath, err := a.azureClient.CreateUsingSASURL(ctx, sasURL, reader, "text/plain")
if err != nil {
a.logger.Errorf("failed to create blob at path %s, error: %v", blobPath, err)
errChan <- err
return
}
close(errChan)
a.logger.Debugf("created blob path %s", blobPath)
}()
return errChan
}
================================================
FILE: pkg/logwriter/setup_test.go
================================================
package logwriter
import (
"context"
"io"
"strings"
"testing"
"github.com/LambdaTest/test-at-scale/mocks"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/stretchr/testify/mock"
)
func Test_azure_write_logger_strategy(t *testing.T) {
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
azureClientGetSASURL := new(mocks.AzureClient)
mockUtil(azureClientGetSASURL, "getSASURL", "createUsingSASURL", "error in GetSASURL", "error in CreateUsingSASURL", true, false)
azureClientCreateSASURL := new(mocks.AzureClient)
mockUtil(azureClientCreateSASURL, "getSASURL", "createUsingSASURL", "error in GetSASURL", "error in CreateUsingSASURL", false, true)
azureClientSuccess := new(mocks.AzureClient)
mockUtil(azureClientSuccess, "getSASURL", "createUsingSASURL", "error in GetSASURL", "error in CreateUsingSASURL", false, false)
errGetSASURL := make(chan error, 1)
defer func() { close(errGetSASURL) }()
errCreateUsingSASURL := make(chan error, 1)
defer func() { close(errCreateUsingSASURL) }()
errSuccess := make(chan error, 1)
defer func() { close(errSuccess) }()
type fields struct {
azureClient core.AzureClient
}
type args struct {
ctx context.Context
blobPath string
reader io.Reader
}
tests := []struct {
name string
fields fields
args args
want <-chan error
wantErr bool
}{
{"Test StoreCommandLogs for getSASURL error",
fields{
azureClient: azureClientGetSASURL,
},
args{
ctx: context.TODO(),
blobPath: "blobpath",
reader: &strings.Reader{},
},
errGetSASURL,
true,
},
{"Test StoreCommandLogs for CreateUsingSASURL error",
fields{
azureClient: azureClientCreateSASURL,
},
args{
ctx: context.TODO(),
blobPath: "blobpath",
reader: &strings.Reader{},
},
errCreateUsingSASURL,
true,
},
{"Test StoreCommandLogs for success",
fields{
azureClient: azureClientSuccess,
},
args{
ctx: context.TODO(),
blobPath: "blobpath",
reader: &strings.Reader{},
},
errSuccess,
false,
},
}
errGetSASURL <- errs.New("error in GetSASURL")
errCreateUsingSASURL <- errs.New("error in CreateUsingSASURL")
errSuccess <- errs.New("")
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
m := &AzureLogWriter{
logger: logger,
purpose: core.PurposeCache,
azureClient: tt.fields.azureClient,
}
got := m.Write(tt.args.ctx, tt.args.reader)
if !tt.wantErr {
if len(got) != 0 {
t.Errorf("Expected channel to be empty, received: %v", <-got)
}
return
}
received := <-got
want := <-tt.want
if received.Error() != want.Error() {
t.Errorf("manager.StoreCommandLogs() = %+v, want %+v", received, want)
}
})
}
}
func mockUtil(azureClient *mocks.AzureClient, msgGet, msgCreate, errGet, errCreate string, wantErrGet, wantErrCreate bool) {
var x map[string]interface{}
azureClient.On("GetSASURL", mock.AnythingOfType("*context.emptyCtx"),
mock.AnythingOfType("core.SASURLPurpose"), x).Return(
func(ctx context.Context, purpose core.SASURLPurpose, data map[string]interface{}) string {
return msgGet
},
func(ctx context.Context, purpose core.SASURLPurpose, data map[string]interface{}) error {
if !wantErrGet {
return nil
}
return errs.New(errGet)
})
azureClient.On("CreateUsingSASURL", mock.AnythingOfType("*context.emptyCtx"),
mock.AnythingOfType("string"), mock.AnythingOfType("*strings.Reader"), "text/plain").Return(
func(ctx context.Context, sasURL string, reader io.Reader, mimeType string) string {
return msgCreate
},
func(ctx context.Context, sasURL string, reader io.Reader, mimeType string) error {
if !wantErrCreate {
return nil
}
return errs.New(errCreate)
})
}
================================================
FILE: pkg/lumber/logio.go
================================================
package lumber
import (
"bytes"
)
// Writer must be closed when finished to flush buffered data to the logger.
type Writer struct {
// Log specifies the logger to which the Writer will write messages.
// The Writer will panic if Log is unspecified.
Log Logger
buff bytes.Buffer
}
// NewWriter returns a new Writer that writes to the provided Logger.
func NewWriter(log Logger) *Writer {
return &Writer{Log: log}
}
// Write writes the provided bytes to the underlying logger at the configured
// log level and returns the length of the bytes.
//
// Write will split the input on newlines and post each line as a new log entry
// to the logger.
func (w *Writer) Write(bs []byte) (n int, err error) {
n = len(bs)
for len(bs) > 0 {
bs = w.writeLine(bs)
}
return n, nil
}
// writeLine writes a single line from the input, returning the remaining,
// unconsumed bytes.
func (w *Writer) writeLine(line []byte) (remaining []byte) {
idx := bytes.IndexByte(line, '\n')
if idx < 0 {
// If there are no newlines, buffer the entire string.
w.buff.Write(line)
return nil
}
// Split on the newline, buffer and flush the left.
line, remaining = line[:idx], line[idx+1:]
// Fast path: if we don't have a partial message from a previous write
// in the buffer, skip the buffer and log directly.
if w.buff.Len() == 0 {
w.log(line)
return
}
w.buff.Write(line)
// Log empty messages in the middle of the stream so that we don't lose
// information when the user writes "foo\n\nbar".
w.flush(true /* allowEmpty */)
return remaining
}
// Close closes the writer, flushing any buffered data in the process.
func (w *Writer) Close() error {
return w.Sync()
}
// Sync flushes buffered data to the logger as a new log entry even if it
// doesn't contain a newline.
func (w *Writer) Sync() error {
// Don't allow empty messages on explicit Sync calls or on Close
// because we don't want an extraneous empty message at the end of the
// stream -- it's common for files to end with a newline.
w.flush(false)
return nil
}
// flush flushes the buffered data to the logger, allowing empty messages only
// if the bool is set.
func (w *Writer) flush(allowEmpty bool) {
if allowEmpty || w.buff.Len() > 0 {
w.log(w.buff.Bytes())
}
w.buff.Reset()
}
func (w *Writer) log(b []byte) {
w.Log.Debugf("%s", string(b))
}
================================================
FILE: pkg/lumber/logrus.go
================================================
package lumber
import (
"io"
"os"
"github.com/sirupsen/logrus"
lumberjack "gopkg.in/natefinch/lumberjack.v2"
)
type logrusLogEntry struct {
entry *logrus.Entry
}
type logrusLogger struct {
logger *logrus.Logger
}
func getFormatter(isJSON bool) logrus.Formatter {
if isJSON {
return &logrus.JSONFormatter{}
}
return &logrus.TextFormatter{
FullTimestamp: true,
DisableLevelTruncation: true,
}
}
func newLogrusLogger(config LoggingConfig, verbose bool) (Logger, error) {
logLevel := config.ConsoleLevel
if logLevel == "" {
logLevel = config.FileLevel
}
// command line args take highest precedence
if verbose {
logLevel = "debug"
}
level, err := logrus.ParseLevel(logLevel)
if err != nil {
return nil, err
}
stdOutHandler := os.Stdout
fileHandler := &lumberjack.Logger{
Filename: config.FileLocation,
MaxSize: 100,
Compress: true,
MaxAge: 28,
}
lLogger := &logrus.Logger{
Out: stdOutHandler,
Formatter: getFormatter(config.ConsoleJSONFormat),
Hooks: make(logrus.LevelHooks),
Level: level,
}
multiWriter := make([]io.Writer, 0)
if config.EnableConsole {
multiWriter = append(multiWriter, stdOutHandler)
}
if config.EnableFile {
multiWriter = append(multiWriter, fileHandler)
lLogger.SetFormatter(getFormatter(config.FileJSONFormat))
}
lLogger.SetOutput(io.MultiWriter(multiWriter...))
return &logrusLogger{
logger: lLogger,
}, nil
}
func (l *logrusLogger) Debugf(format string, args ...interface{}) {
l.logger.Debugf(format, args...)
}
func (l *logrusLogger) Infof(format string, args ...interface{}) {
l.logger.Infof(format, args...)
}
func (l *logrusLogger) Warnf(format string, args ...interface{}) {
l.logger.Warnf(format, args...)
}
func (l *logrusLogger) Errorf(format string, args ...interface{}) {
l.logger.Errorf(format, args...)
}
func (l *logrusLogger) Fatalf(format string, args ...interface{}) {
l.logger.Fatalf(format, args...)
}
func (l *logrusLogger) Panicf(format string, args ...interface{}) {
l.logger.Panicf(format, args...)
}
func (l *logrusLogger) WithFields(fields Fields) Logger {
return &logrusLogEntry{
entry: l.logger.WithFields(convertToLogrusFields(fields)),
}
}
func (l *logrusLogEntry) Debugf(format string, args ...interface{}) {
l.entry.Debugf(format, args...)
}
func (l *logrusLogEntry) Infof(format string, args ...interface{}) {
l.entry.Infof(format, args...)
}
func (l *logrusLogEntry) Warnf(format string, args ...interface{}) {
l.entry.Warnf(format, args...)
}
func (l *logrusLogEntry) Errorf(format string, args ...interface{}) {
l.entry.Errorf(format, args...)
}
func (l *logrusLogEntry) Fatalf(format string, args ...interface{}) {
l.entry.Fatalf(format, args...)
}
func (l *logrusLogEntry) Panicf(format string, args ...interface{}) {
l.entry.Panicf(format, args...)
}
func (l *logrusLogEntry) WithFields(fields Fields) Logger {
return &logrusLogEntry{
entry: l.entry.WithFields(convertToLogrusFields(fields)),
}
}
func convertToLogrusFields(fields Fields) logrus.Fields {
logrusFields := logrus.Fields{}
for index, val := range fields {
logrusFields[index] = val
}
return logrusFields
}
================================================
FILE: pkg/lumber/setup.go
================================================
// Package lumber provides the logging abstractions and logger implementations
package lumber
import "github.com/LambdaTest/test-at-scale/pkg/errs"
// LoggingConfig stores the config for the logger.
// Some loggers support only a single level across all writers; for those, the console level is used by default.
type LoggingConfig struct {
EnableConsole bool
ConsoleJSONFormat bool
ConsoleLevel string
EnableFile bool
FileJSONFormat bool
FileLevel string
FileLocation string
}
// Fields is the type passed to WithFields for structured logging
type Fields map[string]interface{}
const (
// Debug has verbose message
Debug = "debug"
// Info is default log level
Info = "info"
// Warn is for logging messages about possible issues
Warn = "warn"
// Error is for logging errors
Error = "error"
// Fatal is for logging fatal messages. The system shuts down after logging the message.
Fatal = "fatal"
)
// List of supported loggers.
const (
InstanceZapLogger int = iota
InstanceLogrusLogger
)
// Logger is our contract for the logger
type Logger interface {
// Debugf logs a message at level Debug on the standard logger.
Debugf(format string, args ...interface{})
// Infof logs a message at level Info on the standard logger.
Infof(format string, args ...interface{})
// Warnf logs a message at level Warn on the standard logger.
Warnf(format string, args ...interface{})
// Errorf logs a message at level Error on the standard logger.
Errorf(format string, args ...interface{})
// Fatalf logs a message at level Fatal on the standard logger then the process will exit with status set to 1.
Fatalf(format string, args ...interface{})
// Panicf logs a message at level Panic on the standard logger.
Panicf(format string, args ...interface{})
// WithFields creates an entry from the standard logger and adds fields to it.
// Note that it doesn't log until you call Debugf, Infof, Warnf, Errorf,
// Fatalf, or Panicf on the Logger it returns.
WithFields(keyValues Fields) Logger
}
// NewLogger returns an instance of logger
func NewLogger(config LoggingConfig, verbose bool, loggerInstance int) (Logger, error) {
switch loggerInstance {
case InstanceZapLogger:
logger := newZapLogger(config, verbose)
return logger, nil
case InstanceLogrusLogger:
logger, err := newLogrusLogger(config, verbose)
if err != nil {
return nil, err
}
return logger, nil
default:
return nil, errs.ErrInvalidLoggerInstance
}
}
================================================
FILE: pkg/lumber/zap.go
================================================
package lumber
import (
"os"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
lumberjack "gopkg.in/natefinch/lumberjack.v2"
)
type zapLogger struct {
sugaredLogger *zap.SugaredLogger
}
const callDepth = 2
func getEncoder(isJSON bool) zapcore.Encoder {
encoderConfig := zap.NewProductionEncoderConfig()
encoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder
encoderConfig.TimeKey = "time" // This will change the key from ts to time
if isJSON {
return zapcore.NewJSONEncoder(encoderConfig)
}
return zapcore.NewConsoleEncoder(encoderConfig)
}
func getZapLevel(level string) zapcore.Level {
switch level {
case Info:
return zapcore.InfoLevel
case Warn:
return zapcore.WarnLevel
case Debug:
return zapcore.DebugLevel
case Error:
return zapcore.ErrorLevel
case Fatal:
return zapcore.FatalLevel
default:
return zapcore.InfoLevel
}
}
func newZapLogger(config LoggingConfig, verbose bool) Logger {
cores := []zapcore.Core{}
if config.EnableConsole {
level := getZapLevel(config.ConsoleLevel)
// command line args take highest precedence
if verbose {
level = getZapLevel("debug")
}
writer := zapcore.Lock(os.Stdout)
core := zapcore.NewCore(getEncoder(config.ConsoleJSONFormat), writer, level)
cores = append(cores, core)
}
if config.EnableFile {
level := getZapLevel(config.FileLevel)
writer := zapcore.AddSync(&lumberjack.Logger{
Filename: config.FileLocation,
MaxSize: 100,
Compress: true,
MaxAge: 28,
})
core := zapcore.NewCore(getEncoder(config.FileJSONFormat), writer, level)
cores = append(cores, core)
}
combinedCore := zapcore.NewTee(cores...)
// AddCallerSkip skips two caller frames so that the call site reported in the
// log is the caller of the wrapper methods below, not this file (zap.go).
logger := zap.New(combinedCore,
zap.AddCallerSkip(callDepth),
zap.AddCaller(),
).Sugar()
return &zapLogger{
sugaredLogger: logger,
}
}
func (l *zapLogger) Debugf(format string, args ...interface{}) {
l.sugaredLogger.Debugf(format, args...)
}
func (l *zapLogger) Infof(format string, args ...interface{}) {
l.sugaredLogger.Infof(format, args...)
}
func (l *zapLogger) Warnf(format string, args ...interface{}) {
l.sugaredLogger.Warnf(format, args...)
}
func (l *zapLogger) Errorf(format string, args ...interface{}) {
l.sugaredLogger.Errorf(format, args...)
}
func (l *zapLogger) Fatalf(format string, args ...interface{}) {
l.sugaredLogger.Fatalf(format, args...)
}
func (l *zapLogger) Panicf(format string, args ...interface{}) {
l.sugaredLogger.Panicf(format, args...)
}
func (l *zapLogger) WithFields(fields Fields) Logger {
var f = make([]interface{}, 0, len(fields))
for k, v := range fields {
f = append(f, k, v)
}
newLogger := l.sugaredLogger.With(f...)
return &zapLogger{newLogger}
}
================================================
FILE: pkg/payloadmanager/setup.go
================================================
// Package payloadmanager is used for fetching and validating the nucleus execution payload
package payloadmanager
import (
"context"
"encoding/json"
"net/http"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
)
// payloadManager fetches and validates the nucleus execution payload
type payloadManager struct {
logger lumber.Logger
azureClient core.AzureClient
cfg *config.NucleusConfig
requests core.Requests
}
// NewPayloadManger creates and returns a new PayloadManager instance
func NewPayloadManger(azureClient core.AzureClient,
logger lumber.Logger, cfg *config.NucleusConfig, requests core.Requests) core.PayloadManager {
return &payloadManager{
azureClient: azureClient,
logger: logger,
cfg: cfg,
requests: requests,
}
}
func (pm *payloadManager) FetchPayload(ctx context.Context, payloadAddress string) (*core.Payload, error) {
rawBytes, _, err := pm.requests.MakeAPIRequest(ctx, http.MethodGet, payloadAddress, nil, nil, nil)
if err != nil {
return nil, err
}
p := new(core.Payload)
if err := json.Unmarshal(rawBytes, p); err != nil {
return nil, err
}
return p, nil
}
func (pm *payloadManager) ValidatePayload(ctx context.Context, payload *core.Payload) error {
if payload.RepoLink == "" {
return errs.ErrInvalidPayload("Missing repo link")
}
if payload.RepoSlug == "" {
return errs.ErrInvalidPayload("Missing repo slug")
}
if payload.GitProvider == "" {
return errs.ErrInvalidPayload("Missing git provider")
}
if payload.BuildID == "" {
return errs.ErrInvalidPayload("Missing BuildID")
}
if payload.RepoID == "" {
return errs.ErrInvalidPayload("Missing RepoID")
}
if payload.BranchName == "" {
return errs.ErrInvalidPayload("Missing Branch Name")
}
if payload.OrgID == "" {
return errs.ErrInvalidPayload("Missing OrgID")
}
if payload.TasFileName == "" {
return errs.ErrInvalidPayload("Missing tas yml filename")
}
if pm.cfg.Locators != "" {
payload.Locators = pm.cfg.Locators
}
if pm.cfg.LocatorAddress != "" {
payload.LocatorAddress = pm.cfg.LocatorAddress
}
if payload.BuildTargetCommit == "" {
return errs.ErrInvalidPayload("Missing build target commit")
}
// Some checks are skipped in coverage or parsing mode.
if !pm.cfg.CoverageMode {
if pm.cfg.TaskID == "" {
return errs.ErrInvalidPayload("Missing taskID in config")
}
payload.TaskID = pm.cfg.TaskID
}
if payload.EventType != core.EventPush && payload.EventType != core.EventPullRequest {
return errs.ErrInvalidPayload("Invalid event type")
}
if payload.EventType == core.EventPush && len(payload.Commits) == 0 {
return errs.ErrInvalidPayload("Missing commits error")
}
return nil
}
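`ValidatePayload` is a fail-fast chain of required-field checks. The same shape can be written table-driven, as in this sketch against a pared-down stand-in struct; the `payload` type and its fields here are a subset chosen for illustration, not `core.Payload` itself.

```go
package main

import (
	"errors"
	"fmt"
)

// payload is a minimal stand-in with two of the required fields.
type payload struct {
	RepoLink string
	BuildID  string
}

// validate returns the error for the first missing field, mirroring the
// fail-fast ordering of the checks above.
func validate(p *payload) error {
	checks := []struct {
		value string
		msg   string
	}{
		{p.RepoLink, "Missing repo link"},
		{p.BuildID, "Missing BuildID"},
	}
	for _, c := range checks {
		if c.value == "" {
			return errors.New(c.msg)
		}
	}
	return nil
}

func main() {
	fmt.Println(validate(&payload{RepoLink: "github.com/abc"})) // prints "Missing BuildID"
}
```

The table keeps the check order explicit while avoiding a long run of near-identical `if` blocks; adding a field is one new row.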
================================================
FILE: pkg/payloadmanager/setup_test.go
================================================
// Package payloadmanager is used for fetching and validating the nucleus execution payload
package payloadmanager
import (
"context"
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"os"
"testing"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/mocks"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/requestutils"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/cenkalti/backoff/v4"
)
type validatePayloadArgs struct {
ctx context.Context
payload *core.Payload
coverageMode bool
locators string
locatorAddress string
taskID string
}
type testCaseValidatePayload struct {
name string
args validatePayloadArgs
wantErr bool
}
// nolint: unparam
func getPayloadManagerArgs() (core.AzureClient, lumber.Logger, *config.NucleusConfig, error) {
logger, err := testutils.GetLogger()
if err != nil {
return nil, nil, nil, err
}
cfg, err := testutils.GetConfig()
if err != nil {
return nil, nil, nil, err
}
var azureClient core.AzureClient
return azureClient, logger, cfg, nil
}
func Test_payloadManager_FetchPayload(t *testing.T) {
server := httptest.NewServer( // mock server
http.FileServer(http.Dir("../../testutils/testdata")), // mock data stored at testutils/testdata/index.txt
)
defer server.Close()
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't get logger, received: %s", err)
}
cfg, err := testutils.GetConfig()
if err != nil {
t.Errorf("Couldn't get config, received: %s", err)
}
azureClient := new(mocks.AzureClient)
wantResp, err := os.ReadFile("../../testutils/testdata/index.json")
if err != nil {
fmt.Printf("error in reading file: %+v\n", err)
}
requests := requestutils.New(logger, global.DefaultAPITimeout, &backoff.StopBackOff{})
pm := NewPayloadManger(azureClient, logger, cfg, requests)
type args struct {
ctx context.Context
payloadAddress string
}
tests := []struct {
name string
args args
want string
wantErr bool
}{
{
"Test Payload fetch for success",
args{ctx: context.TODO(), payloadAddress: server.URL + "/index.txt"},
string(wantResp),
false,
},
{
"Test Payload fetch for empty url",
args{ctx: context.TODO(), payloadAddress: ""},
"",
true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := pm.FetchPayload(tt.args.ctx, tt.args.payloadAddress)
if tt.wantErr {
if err == nil {
t.Errorf("payloadManager.FetchPayload() error = %v, wantErr %v", err, tt.wantErr)
}
return
}
received, _ := json.Marshal(got)
receivedPayload := fmt.Sprintf("%+v\n", string(received))
if receivedPayload != tt.want {
t.Errorf("payloadManager.FetchPayload() = \n%v, \nwant: %v\n", receivedPayload, tt.want)
}
})
}
}
func Test_payloadManager_ValidatePayload(t *testing.T) {
azureClient, logger, cfg, err := getPayloadManagerArgs()
if err != nil {
t.Errorf("Couldn't establish required arguments, error: %v", err)
return
}
tests := getValidatePayloadTestCases()
for _, tt := range tests {
cfg.CoverageMode = tt.args.coverageMode
cfg.LocatorAddress = tt.args.locatorAddress
cfg.Locators = tt.args.locators
cfg.TaskID = tt.args.taskID
requests := requestutils.New(logger, global.DefaultAPITimeout, &backoff.StopBackOff{})
pm := NewPayloadManger(azureClient, logger, cfg, requests)
t.Run(tt.name, func(t *testing.T) {
if err := pm.ValidatePayload(tt.args.ctx, tt.args.payload); (err != nil) != tt.wantErr {
t.Errorf("payloadManager.ValidatePayload() error = %v, wantErr %v", err, tt.wantErr)
return
}
if cfg.Locators != "" {
if tt.args.payload.Locators != tt.args.locators {
t.Errorf("payloadManager.ValidatePayload() payload.Locators = %v, want: %v", tt.args.payload.Locators, tt.args.locators)
return
}
}
if cfg.LocatorAddress != "" {
if tt.args.payload.LocatorAddress != tt.args.locatorAddress {
t.Errorf("got = %v, want: %v",
tt.args.payload.LocatorAddress, tt.args.locatorAddress)
return
}
}
if !(cfg.CoverageMode) {
if cfg.TaskID != "" {
if tt.args.payload.TaskID != tt.args.taskID {
t.Errorf("got payload.TaskID: %v, want: %v", tt.args.payload.TaskID, tt.args.taskID)
}
}
}
})
}
}
func getValidatePayloadTestCases() []*testCaseValidatePayload {
testCases := []*testCaseValidatePayload{
{"Test validate payload for empty repolink",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: ""}},
true,
},
{"Test validate payload for empty reposlug",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: ""}},
true,
},
{"Test validate payload for empty gitprovider",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: "/slug", GitProvider: ""}},
true,
},
{"Test validate payload for empty buildID",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/",
RepoSlug: "/slug", GitProvider: "fake", BuildID: ""}},
true,
},
{"Test validate payload for empty repoID",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: "/slug",
GitProvider: "fake", BuildID: "build", RepoID: ""}},
true,
},
{"Test validate payload for empty branchName",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: "/slug",
GitProvider: "fake", BuildID: "build", RepoID: "repo", BranchName: ""}}, true,
},
{"Test validate payload for empty orgID",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: "/slug",
GitProvider: "fake", BuildID: "build", RepoID: "repo", BranchName: "branch", OrgID: ""}},
true,
},
{"Test validate payload for empty TASFileName",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: "/slug",
GitProvider: "fake", BuildID: "build", RepoID: "repo", BranchName: "branch", OrgID: "org", TasFileName: ""}},
true,
},
{"Test validate payload for expected payload.Locator Address & payloadLocator",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: "/slug",
GitProvider: "fake", BuildID: "build", RepoID: "repo", BranchName: "branch", OrgID: "org", TasFileName: "a"},
locators: "/locator", locatorAddress: "/locatorAddr"},
true,
},
{"Test validate payload for empty build target commit",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: "/slug",
GitProvider: "fake", BuildID: "build", RepoID: "repo", BranchName: "branch", OrgID: "org",
TasFileName: "tas", BuildTargetCommit: ""}},
true,
},
{"Test validate payload for empty target commit in config",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: "/slug",
GitProvider: "fake", BuildID: "build", RepoID: "repo", BranchName: "branch", OrgID: "org",
TasFileName: "tas", BuildTargetCommit: "btg"}, coverageMode: false, locators: "../dummy"},
true,
},
{"Test validate payload for target & base commit in config",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: "/slug",
GitProvider: "fake", BuildID: "build", RepoID: "repo", BranchName: "branch", OrgID: "org",
TasFileName: "tas", BuildTargetCommit: "btg"}, coverageMode: false, locators: "../dummy"},
true,
},
{"Test validate payload for target, base commit & taskID in config",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: "/slug",
GitProvider: "fake", BuildID: "build", RepoID: "repo", BranchName: "branch", OrgID: "org",
TasFileName: "tas", BuildTargetCommit: "btg"}, coverageMode: false, locators: "../dummy", taskID: "tid"},
true,
},
{"Test validate payload for non push and pull event",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: "/slug",
GitProvider: "fake", BuildID: "build", RepoID: "repo", BranchName: "branch", OrgID: "org",
TasFileName: "tas", BuildTargetCommit: "btg", EventType: "invalid"}, coverageMode: true},
true,
},
{"Test validate payload for push event with nil commit",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: "/slug",
GitProvider: "fake", BuildID: "build", RepoID: "repo", BranchName: "branch", OrgID: "org",
TasFileName: "tas", BuildTargetCommit: "btg", EventType: "push", Commits: []core.CommitChangeList{}}, coverageMode: true},
true,
},
{"Test validate payload for success",
validatePayloadArgs{ctx: context.TODO(), payload: &core.Payload{RepoLink: "github.com/abc/", RepoSlug: "/slug",
GitProvider: "fake", BuildID: "build", RepoID: "repo", BranchName: "branch", OrgID: "org",
TasFileName: "tas", BuildTargetCommit: "btg", EventType: "push",
Commits: []core.CommitChangeList{{Sha: "sha", Message: "msg"}}}, coverageMode: true},
false,
},
}
return testCases
}
================================================
FILE: pkg/procfs/procfs.go
================================================
//go:build linux
// +build linux
package procfs
import (
"context"
"math"
"runtime"
"time"
"github.com/shirou/gopsutil/v3/mem"
"github.com/shirou/gopsutil/v3/process"
)
const hundred = 100
// Proc represents the process for which we want to find stats
type Proc struct {
totalMem uint64
process *process.Process
samplingTime time.Duration
usePss bool
}
// Stats represents the process stats
type Stats struct {
CPUPercentage float64
MemPercentage float64
MemShared uint64
MemSwapped uint64
MemConsumed uint64
RecordTime time.Time
}
// New returns new Proc struct
func New(pid int32, samplingInterval time.Duration, usePss bool) (*Proc, error) {
p, err := process.NewProcess(pid)
if err != nil {
return nil, err
}
machineMemory, err := mem.VirtualMemory()
if err != nil {
return nil, err
}
return &Proc{process: p, samplingTime: samplingInterval, usePss: usePss, totalMem: machineMemory.Total}, nil
}
// GetStats returns process stats
func (ps *Proc) GetStats() (stat *Stats, err error) {
s := Stats{}
s.RecordTime = time.Now()
cpuPerc, err := ps.process.Percent(0)
if err != nil {
return nil, err
}
// https://github.com/alibaba/sentinel-golang/pull/448.
// The underlying library returns an abnormally large number in some cases.
s.CPUPercentage = math.Min(hundred, cpuPerc/float64(runtime.NumCPU()))
memInfo, err := ps.process.MemoryInfo()
if err != nil {
return nil, err
}
if !ps.usePss {
s.MemConsumed = memInfo.RSS
s.MemSwapped = memInfo.Swap
s.MemPercentage = (hundred * float64(s.MemConsumed) / float64(ps.totalMem))
return &s, nil
}
// PSS attributes shared pages proportionally across the processes mapping them,
// making it a fairer measure than RSS.
// Ref: https://stackoverflow.com/questions/1420426/how-to-calculate-the-cpu-usage-of-a-process-by-pid-in-linux-from-c/1424556
maps, err := ps.process.MemoryMaps(true)
if err != nil {
return nil, err
}
var pss uint64
for _, m := range *maps {
pss += m.Pss
s.MemSwapped += m.Swap
}
s.MemConsumed = pss * 1024 // PSS is in kB
s.MemPercentage = (hundred * float64(s.MemConsumed) / float64(ps.totalMem))
return &s, nil
}
// GetStatsInInterval returns process stats after every interval
func (ps *Proc) GetStatsInInterval() []*Stats {
return ps.GetStatsInIntervalWithContext(context.Background())
}
// GetStatsInIntervalWithContext returns process stats after every interval
func (ps *Proc) GetStatsInIntervalWithContext(ctx context.Context) []*Stats {
var stats []*Stats
s, err := ps.GetStats()
if err != nil {
return stats
}
// append the initial sample, then collect a new sample on every tick
stats = append(stats, s)
ticker := time.NewTicker(ps.samplingTime)
defer ticker.Stop()
for {
select {
case <-ticker.C:
s, err := ps.GetStats()
if err != nil {
return stats
}
stats = append(stats, s)
case <-ctx.Done():
return stats
}
}
}
================================================
FILE: pkg/proxyserver/proxyhandler.go
================================================
package proxyserver
import (
"encoding/base64"
"fmt"
"net/http"
"net/http/httputil"
"net/url"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/spf13/viper"
)
// ProxyHandler defines struct for proxy handler
type ProxyHandler struct {
remote *url.URL
logger lumber.Logger
}
const synapseURL = "/synapse"
// NewProxyHandler returns a pointer to a new instance of ProxyHandler
func NewProxyHandler(logger lumber.Logger) (*ProxyHandler, error) {
remote, err := url.Parse(global.TASCloudURL[viper.GetString("env")])
if err != nil {
return nil, err
}
return &ProxyHandler{
remote: remote,
logger: logger,
}, nil
}
// HandlerProxy proxies incoming requests to the remote TAS cloud URL
func (ph *ProxyHandler) HandlerProxy(w http.ResponseWriter, r *http.Request) {
proxy := httputil.NewSingleHostReverseProxy(ph.remote)
proxy.Director = func(req *http.Request) {
req.Header = r.Header
encodedSecret := base64.StdEncoding.EncodeToString([]byte(viper.GetString("Lambdatest.SecretKey")))
req.Header.Add("Lambdatest-SecretKey", encodedSecret)
req.Host = ph.remote.Host
req.URL.Scheme = ph.remote.Scheme
req.URL.Host = ph.remote.Host
req.URL.Path = fmt.Sprintf("%s%s", synapseURL, r.URL.Path)
ph.logger.Debugf("proxying to url: %s", req.URL.Path)
}
proxy.ServeHTTP(w, r)
}
================================================
FILE: pkg/proxyserver/setup.go
================================================
package proxyserver
import (
"context"
"net/http"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/gin-gonic/gin"
)
// ListenAndServe starts proxy http server for synapse
func ListenAndServe(ctx context.Context, proxyHandler *ProxyHandler, config *config.SynapseConfig, logger lumber.Logger) error {
gin.SetMode(gin.ReleaseMode)
logger.Infof("Setting up HTTP handler")
errChan := make(chan error)
// HTTP server instance
srv := &http.Server{
Addr: ":" + global.ProxyServerPort,
Handler: http.HandlerFunc(proxyHandler.HandlerProxy),
}
// channel to signal server process exit
done := make(chan struct{})
go func() {
logger.Infof("Starting server on port %s", global.ProxyServerPort)
// service connections
if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
logger.Errorf("listen: %#v", err)
errChan <- err
}
}()
select {
case <-ctx.Done():
logger.Infof("Caller has requested graceful shutdown. Shutting down the server")
// ctx is already cancelled here, so pass a fresh context to let Shutdown
// wait for in-flight connections to drain instead of returning immediately
if err := srv.Shutdown(context.Background()); err != nil {
logger.Errorf("Server Shutdown error: %v", err)
}
}
return nil
case err := <-errChan:
return err
case <-done:
return nil
}
}
================================================
FILE: pkg/requestutils/request.go
================================================
package requestutils
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"reflect"
"time"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/cenkalti/backoff/v4"
)
type requests struct {
logger lumber.Logger
client http.Client
retryBackoff backoff.BackOff
}
// New returns a new Requests instance with the given timeout and retry backoff
func New(logger lumber.Logger, requestTimeout time.Duration, retryBackoff backoff.BackOff) core.Requests {
return &requests{
logger: logger,
client: http.Client{Timeout: requestTimeout},
retryBackoff: retryBackoff,
}
}
func (r *requests) MakeAPIRequest(
ctx context.Context,
httpMethod, endpoint string,
body []byte,
query map[string]interface{},
headers map[string]string,
) (respBody []byte, statusCode int, err error) {
u, err := url.Parse(endpoint)
if err != nil {
r.logger.Errorf("error while parsing endpoint %s, %v", endpoint, err)
return nil, 0, err
}
q := u.Query()
for id, val := range query {
v := reflect.ValueOf(val)
// nolint:exhaustive
switch v.Kind() {
// switch cases do not fall through in Go, so handle arrays and slices together
case reflect.Array, reflect.Slice:
for i := 0; i < v.Len(); i++ {
q.Add(id, fmt.Sprintf("%v", v.Index(i)))
}
default:
q.Set(id, fmt.Sprintf("%v", val))
}
}
u.RawQuery = q.Encode()
req, err := http.NewRequestWithContext(ctx, httpMethod, u.String(), bytes.NewBuffer(body))
if err != nil {
r.logger.Errorf("error while creating http request %v", err)
return nil, 0, err
}
for id, val := range headers {
req.Header.Add(id, val)
}
operation := func() error {
resp, errD := r.client.Do(req)
if errD != nil {
r.logger.Errorf("error while sending http request %v", errD)
return errD
}
defer resp.Body.Close()
statusCode = resp.StatusCode
if 500 <= statusCode && statusCode < 600 {
return fmt.Errorf("status code %d received", statusCode)
}
respBody, err = io.ReadAll(resp.Body)
if err != nil {
r.logger.Errorf("error while reading http response body %v", err)
return nil
}
return nil
}
if errR := backoff.Retry(operation, r.retryBackoff); errR != nil {
r.logger.Errorf("Retry limit exceeded. Error %+v", errR)
return respBody, statusCode, errors.New("retry limit exceeded")
}
if statusCode != http.StatusOK {
r.logger.Errorf("non 200 status code %d", statusCode)
return respBody, statusCode, errors.New("non 200 status code")
}
return respBody, statusCode, err
}
================================================
FILE: pkg/runner/docker/config.go
================================================
package docker
import (
"context"
"fmt"
"os"
"strconv"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/synapse"
"github.com/LambdaTest/test-at-scale/pkg/utils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/volume"
"github.com/docker/go-units"
)
const (
defaultVaultPath = "/vault/secrets"
repoSourcePath = "/tmp/synapse/%s/nucleus"
nanoCPUUnit = 1e9
volumePrefix = "tas-build"
)
func (d *docker) getVolumeName(r *core.RunnerOptions) string {
return fmt.Sprintf("%s-%s", volumePrefix, r.Label[synapse.BuildID])
}
func (d *docker) getVolumeConfiguration(r *core.RunnerOptions) *volume.VolumeCreateBody {
return &volume.VolumeCreateBody{
Driver: "local",
Name: d.getVolumeName(r),
Labels: map[string]string{synapse.BuildID: r.Label[synapse.BuildID]},
}
}
func (d *docker) getContainerConfiguration(r *core.RunnerOptions) *container.Config {
return &container.Config{
Image: r.DockerImage,
Env: r.Env,
Tty: false,
Cmd: r.ContainerArgs,
Volumes: make(map[string]struct{}),
}
}
func (d *docker) getContainerHostConfiguration(r *core.RunnerOptions) *container.HostConfig {
specs := getSpecs(r.Tier)
/*
https://pkg.go.dev/github.com/docker/docker@v20.10.12+incompatible/api/types/container#Resources
As per the documentation, 1 core = 1e9 NanoCPUs
*/
nanoCPU := int64(specs.CPU * nanoCPUUnit)
d.logger.Infof("Specs %+v", specs)
mounts := []mount.Mount{
{
Type: mount.TypeVolume,
Source: d.getVolumeName(r),
Target: defaultVaultPath,
},
}
if r.PodType == core.NucleusPod || r.PodType == core.CoveragePod {
repoBuildSourcePath := fmt.Sprintf(repoSourcePath, r.Label[synapse.BuildID])
if err := utils.CreateDirectory(repoBuildSourcePath); err != nil {
d.logger.Errorf("error creating directory: %v", err)
}
mounts = append(mounts, mount.Mount{
Type: mount.TypeVolume,
Source: d.getVolumeName(r),
Target: global.WorkspaceCacheDir,
})
}
hostConfig := container.HostConfig{
Mounts: mounts,
AutoRemove: true,
SecurityOpt: []string{"seccomp=unconfined"},
Resources: container.Resources{Memory: specs.RAM * units.MiB, NanoCPUs: nanoCPU},
}
autoRemove, err := strconv.ParseBool(os.Getenv(global.AutoRemoveEnv))
if err != nil {
d.logger.Errorf("Error reading AutoRemove env var: %v; returning default host config", err)
return &hostConfig
}
hostConfig.AutoRemove = autoRemove
return &hostConfig
}
func (d *docker) getContainerNetworkConfiguration() (*network.NetworkingConfig, error) {
var networkResource types.NetworkResource
opts := types.NetworkListOptions{
Filters: filters.NewArgs(filters.Arg("name", networkName)),
}
networkList, err := d.client.NetworkList(context.TODO(), opts)
if err != nil {
return nil, err
}
for idx := 0; idx < len(networkList); idx += 1 {
if networkList[idx].Name == networkName {
networkResource = networkList[idx]
}
}
endpointSettings := network.EndpointSettings{
NetworkID: networkResource.ID,
}
networkConfig := network.NetworkingConfig{
EndpointsConfig: map[string]*network.EndpointSettings{},
}
networkConfig.EndpointsConfig[networkName] = &endpointSettings
return &networkConfig, nil
}
func getSpecs(tier core.Tier) core.Specs {
if val, ok := core.TierOpts[tier]; ok {
return core.Specs{CPU: val.CPU, RAM: val.RAM}
}
return core.TierOpts[core.Small]
}
================================================
FILE: pkg/runner/docker/docker.go
================================================
package docker
import (
"archive/tar"
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"os"
"strconv"
"strings"
"time"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/synapse"
"github.com/LambdaTest/test-at-scale/pkg/utils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/client"
"github.com/docker/docker/pkg/stdcopy"
"github.com/docker/go-units"
)
const (
buildCacheExpiry time.Duration = 4 * time.Hour
BuildID = "build-id"
)
var gracefulyContainerStopDuration = time.Second * 10
var networkName string
type docker struct {
client *client.Client
logger lumber.Logger
cfg *config.SynapseConfig
secretsManager core.SecretsManager
cpu float32
ram int64
RunningContainers []*core.RunnerOptions
}
// newDockerClient creates a new docker client
func newDockerClient(secretsManager core.SecretsManager) (*docker, error) {
client, err := client.NewClientWithOpts(client.FromEnv)
if err != nil {
return nil, err
}
dockerInfo, err := client.Info(context.TODO())
if err != nil {
return nil, err
}
networkName = os.Getenv(global.NetworkEnvName)
return &docker{
client: client,
cpu: float32(dockerInfo.NCPU),
ram: dockerInfo.MemTotal / units.MiB,
secretsManager: secretsManager,
}, nil
}
// New initializes a new docker configuration
func New(secretsManager core.SecretsManager,
logger lumber.Logger,
cfg *config.SynapseConfig) (core.DockerRunner, error) {
dockerConfig, err := newDockerClient(secretsManager)
if err != nil {
return nil, err
}
dockerConfig.logger = logger
dockerConfig.cfg = cfg
logger.Infof("available cpu: %f", dockerConfig.cpu)
logger.Infof("available memory: %d", dockerConfig.ram)
return dockerConfig, nil
}
func (d *docker) CreateVolume(ctx context.Context, r *core.RunnerOptions) error {
volumeOptions := d.getVolumeConfiguration(r)
isVolume, err := d.FindVolumes(volumeOptions.Name)
if err != nil {
return err
}
if !isVolume {
if _, err := d.client.VolumeCreate(ctx, *volumeOptions); err != nil {
return err
}
}
return nil
}
func (d *docker) CopyFileToContainer(ctx context.Context, path, fileName, containerID string, content []byte) error {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
defer tw.Close()
if err := tw.WriteHeader(&tar.Header{
Name: fileName,
Mode: 0777,
Size: int64(len(content)),
}); err != nil {
return err
}
if _, err := tw.Write(content); err != nil {
return err
}
// close the writer before copying so the tar trailer is flushed to the buffer
if err := tw.Close(); err != nil {
return err
}
if err := d.client.CopyToContainer(
ctx,
containerID,
global.VaultSecretDir,
&buf,
types.CopyToContainerOptions{AllowOverwriteDirWithFile: true},
); err != nil {
return err
}
return nil
}
func (d *docker) Create(ctx context.Context, r *core.RunnerOptions) core.ContainerStatus {
containerStatus := core.ContainerStatus{Done: true}
containerImageConfig, err := d.secretsManager.GetDockerSecrets(r)
if err != nil {
d.logger.Errorf("Something went wrong while seeking docker secrets %+v", err)
containerStatus.Done = false
containerStatus.Error = errs.ERR_DOCKER_CRT(err.Error())
return containerStatus
}
if err = d.CreateVolume(ctx, r); err != nil {
d.logger.Errorf("Error in creating docker volume: %+v", err)
containerStatus.Done = false
containerStatus.Error = errs.ErrDockerVolCrt(err.Error())
return containerStatus
}
if errP := d.PullImage(&containerImageConfig, r); errP != nil {
d.logger.Errorf("Something went wrong while pulling container image %+v", errP)
containerStatus.Done = false
containerStatus.Error = errs.ERR_DOCKER_CRT(errP.Error())
return containerStatus
}
containerConfig := d.getContainerConfiguration(r)
hostConfig := d.getContainerHostConfiguration(r)
networkConfig, err := d.getContainerNetworkConfiguration()
if err != nil {
d.logger.Errorf("error retrieving network: %v", err)
containerStatus.Done = false
containerStatus.Error = errs.ERR_DOCKER_CRT(err.Error())
return containerStatus
}
containerName := fmt.Sprintf("%s-%s", r.ContainerName, r.PodType)
resp, err := d.client.ContainerCreate(ctx, containerConfig, hostConfig, networkConfig, nil, containerName)
if err != nil {
d.logger.Errorf("error creating container: %v", err)
containerStatus.Done = false
containerStatus.Error = errs.ERR_DOCKER_CRT(err.Error())
return containerStatus
}
// record the container ID only after a successful create
r.ContainerID = resp.ID
d.logger.Debugf("container created with name: %s, updating status %+v",
fmt.Sprintf("%s-%s", r.ContainerName, r.PodType), containerStatus)
gitSecretBytes, err := d.secretsManager.GetGitSecretBytes()
if err != nil {
d.logger.Errorf("Error in loading git secrets: %s", err.Error())
containerStatus.Done = false
containerStatus.Error = errs.ErrSecretLoad(err.Error())
return containerStatus
}
if err = d.CopyFileToContainer(
ctx,
global.VaultSecretDir,
global.GitConfigFileName,
r.ContainerID,
gitSecretBytes,
); err != nil {
containerStatus.Done = false
containerStatus.Error = errs.ErrDockerCP(err.Error())
return containerStatus
}
// copies repo secrets to container
repoSecretBytes, err := d.secretsManager.GetRepoSecretBytes(r.Label["repo"])
if err != nil {
d.logger.Debugf("Error in loading repo secrets: %s", err.Error())
} else {
if err := d.CopyFileToContainer(
ctx,
global.VaultSecretDir,
global.RepoSecretsFileName,
r.ContainerID,
repoSecretBytes,
); err != nil {
containerStatus.Done = false
containerStatus.Error = errs.ErrDockerCP(err.Error())
return containerStatus
}
}
return containerStatus
}
func (d *docker) Destroy(ctx context.Context, r *core.RunnerOptions) error {
if err := d.client.ContainerStop(ctx, r.ContainerID, &gracefulyContainerStopDuration); err != nil {
d.logger.Errorf("error stopping container %v", err)
return err
}
autoRemove, err := strconv.ParseBool(os.Getenv(global.AutoRemoveEnv))
if err != nil {
d.logger.Errorf("Error reading AutoRemove os env error: %v", err)
return errors.New("error reading AutoRemove os env error")
}
if autoRemove {
// if autoRemove is set, the docker container is removed automatically once it stops or exits
return nil
}
err = d.client.ContainerRemove(ctx, r.ContainerID, types.ContainerRemoveOptions{
RemoveVolumes: true,
Force: true,
})
if err != nil {
d.logger.Errorf("error removing container %v", err)
return err
}
return nil
}
func (d *docker) Run(ctx context.Context, r *core.RunnerOptions) core.ContainerStatus {
containerStatus := core.ContainerStatus{Done: true}
d.logger.Debugf("running container %s", r.ContainerID)
if err := d.client.ContainerStart(ctx, r.ContainerID, types.ContainerStartOptions{}); err != nil {
d.logger.Errorf("error starting the container: %s", err)
containerStatus.Done = false
containerStatus.Error = errs.ERR_DOCKER_STRT(err.Error())
return containerStatus
}
d.RunningContainers = append(d.RunningContainers, r)
if err := d.writeLogs(ctx, r); err != nil {
d.logger.Errorf("error writing logs to stdout: %+v", err)
}
return containerStatus
}
// removeContainerID removes the runner with the matching container ID from the slice
func removeContainerID(slice []*core.RunnerOptions, r *core.RunnerOptions) []*core.RunnerOptions {
index := -1
for i, val := range slice {
if val.ContainerID == r.ContainerID {
index = i
break
}
}
if index == -1 {
return slice
}
newSlice := make([]*core.RunnerOptions, 0)
newSlice = append(newSlice, slice[:index]...)
if index != len(slice)-1 {
newSlice = append(newSlice, slice[index+1:]...)
}
return newSlice
}
func (d *docker) WaitForCompletion(ctx context.Context, r *core.RunnerOptions) error {
d.logger.Infof("waiting for container %s completion", r.ContainerID)
statusCh, errCh := d.client.ContainerWait(ctx, r.ContainerID, container.WaitConditionRemoved)
select {
case err := <-errCh:
if err != nil {
d.logger.Debugf("container %s terminated with error: %v", r.ContainerID, err)
return err
}
case status := <-statusCh:
d.logger.Debugf("status code: %d", status.StatusCode)
if status.StatusCode != 0 {
msg := fmt.Sprintf("Received non zero status code %v", status.StatusCode)
return errs.ERR_DOCKER_RUN(msg)
}
return nil
}
return nil
}
func (d *docker) GetInfo(ctx context.Context) (cpu float32, ram int64) {
return d.cpu, d.ram
}
func (d *docker) Initiate(ctx context.Context, r *core.RunnerOptions, statusChan chan core.ContainerStatus) {
// creating the docker container
r.ContainerArgs = append(r.ContainerArgs, "--local", os.Getenv(global.LocalEnv), "--synapsehost", os.Getenv(global.SynapseHostEnv))
if status := d.Create(ctx, r); !status.Done {
d.logger.Errorf("error creating container: %v", status.Error)
d.logger.Infof("Update error status after creation")
statusChan <- status
return
}
if status := d.Run(ctx, r); !status.Done {
d.logger.Errorf("error running container: %v", status.Error)
d.logger.Infof("Update error status after running")
statusChan <- status
return
}
containerStatus := core.ContainerStatus{Done: true}
if err := d.WaitForCompletion(ctx, r); err != nil {
d.logger.Errorf("error while waiting for the completion of container: %v", err)
containerStatus.Done = false
containerStatus.Error = errs.ERR_DOCKER_RUN(err.Error())
d.RunningContainers = removeContainerID(d.RunningContainers, r)
statusChan <- containerStatus
return
}
d.RunningContainers = removeContainerID(d.RunningContainers, r)
d.logger.Infof("container %s execution successful", r.ContainerID)
statusChan <- containerStatus
}
func (d *docker) KillRunningDocker(ctx context.Context) {
for _, r := range d.RunningContainers {
d.logger.Infof("Destroying container %s", r.ContainerID)
if err := d.Destroy(ctx, r); err != nil {
d.logger.Errorf("Error occurred while destroying container %s, err %+v", r.ContainerID, err)
}
}
}
func (d *docker) KillContainerForBuildID(buildID string) error {
for _, r := range d.RunningContainers {
if r.Label[BuildID] == buildID {
if err := d.Destroy(context.Background(), r); err != nil {
d.logger.Errorf("error while destroying container: %v", err)
return err
}
return nil
}
}
return nil
}
func (d *docker) PullImage(containerImageConfig *core.ContainerImageConfig, r *core.RunnerOptions) error {
if containerImageConfig.PullPolicy == config.PullNever && r.PodType == core.NucleusPod {
d.logger.Infof("pull policy %s pod type %s, not pulling any image",
containerImageConfig.PullPolicy, r.PodType)
return nil
}
dockerImage := containerImageConfig.Image
d.logger.Infof("Pulling image : %s", dockerImage)
imagePullOptions := types.ImagePullOptions{RegistryAuth: containerImageConfig.AuthRegistry}
reader, err := d.client.ImagePull(context.TODO(), dockerImage, imagePullOptions)
defer func() {
if reader == nil {
d.logger.Errorf("Reader returned by docker pull is nil")
return
}
if errC := reader.Close(); errC != nil {
d.logger.Errorf(errC.Error())
}
}()
if err != nil {
return err
}
if _, err := io.Copy(os.Stdout, reader); err != nil {
return err
}
return nil
}
// writeLogs writes container logs to a file
func (d *docker) writeLogs(ctx context.Context, r *core.RunnerOptions) error {
reader, err := d.client.ContainerLogs(ctx,
r.ContainerID,
types.ContainerLogsOptions{
ShowStdout: true,
ShowStderr: true,
Follow: true,
})
if err != nil {
return err
}
defer reader.Close()
buildLogsPath := fmt.Sprintf("%s/%s", global.ExecutionLogsPath, r.Label[synapse.BuildID])
if errDir := utils.CreateDirectory(buildLogsPath); errDir != nil {
return errDir
}
f, err := os.Create(fmt.Sprintf("%s/%s-%s.log", buildLogsPath, r.ContainerName, r.PodType))
if err != nil {
return err
}
defer f.Close()
if _, errCopy := stdcopy.StdCopy(f, f, reader); errCopy != nil {
return errCopy
}
return nil
}
func (d *docker) FindVolumes(volumeName string) (bool, error) {
volumeFilter := filters.KeyValuePair{Key: "name", Value: volumeName}
volumes, err := d.client.VolumeList(context.Background(), filters.NewArgs(volumeFilter))
if err != nil {
return false, err
}
for _, v := range volumes.Volumes {
if v.Name == volumeName {
return true, nil
}
}
return false, nil
}
func (d *docker) RemoveVolume(ctx context.Context, volumeName string) error {
if err := d.client.VolumeRemove(ctx, volumeName, true); err != nil {
return err
}
return nil
}
func (d *docker) RemoveOldVolumes(ctx context.Context) {
volumes, err := d.client.VolumeList(ctx, filters.NewArgs())
if err != nil {
d.logger.Errorf("error fetching volume list: %v", err)
return
}
for _, v := range volumes.Volumes {
if strings.HasPrefix(v.Name, volumePrefix) {
_, data, err := d.client.VolumeInspectWithRaw(ctx, v.Name)
if err == nil {
var volumeDetails core.VolumeDetails
err = json.Unmarshal(data, &volumeDetails)
if err != nil {
d.logger.Errorf("error in unmarshaling volume details: %v", err.Error())
continue
}
now := time.Now()
diff := now.Sub(volumeDetails.CreatedAt)
if diff > buildCacheExpiry {
d.logger.Debugf("Deleting volume: %s", v.Name)
if err = d.RemoveVolume(ctx, v.Name); err != nil {
d.logger.Errorf("Error deleting volume: %v", err.Error())
}
}
} else {
d.logger.Errorf("error in fetching volume details: %v", err.Error())
}
}
}
}
================================================
FILE: pkg/runner/docker/docker_test.go
================================================
package docker
import (
"context"
"fmt"
"os"
"strconv"
"testing"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/synapse"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
)
func getRunnerOptions() *core.RunnerOptions {
os.Setenv(global.AutoRemoveEnv, strconv.FormatBool(true))
containerName := fmt.Sprintf("test-container-%s", uuid.NewString())
r := core.RunnerOptions{
ContainerName: containerName,
ContainerArgs: []string{"sleep", "10"},
DockerImage: "alpine:latest",
HostVolumePath: "/tmp",
PodType: core.NucleusPod,
Label: map[string]string{synapse.BuildID: containerName},
}
return &r
}
func TestDockerCreate(t *testing.T) {
ctx := context.Background()
runnerOpts := getRunnerOptions()
// test create container
statusCreate := runner.Create(ctx, runnerOpts)
if !statusCreate.Done {
t.Errorf("error creating container: %v", statusCreate.Error)
}
}
func TestDockerRun(t *testing.T) {
ctx := context.Background()
runnerOpts := getRunnerOptions()
// test create container
statusCreate := runner.Create(ctx, runnerOpts)
if !statusCreate.Done {
t.Errorf("error creating container: %v", statusCreate.Error)
}
if status := runner.Run(ctx, runnerOpts); !status.Done {
t.Errorf("error in running container : %v", status.Error)
return
}
}
func TestDockerWaitCompletion(t *testing.T) {
ctx := context.Background()
runnerOpts := getRunnerOptions()
// test create container
statusCreate := runner.Create(ctx, runnerOpts)
if !statusCreate.Done {
t.Errorf("error creating container: %v", statusCreate.Error)
}
if status := runner.Run(ctx, runnerOpts); !status.Done {
t.Errorf("error in running container : %v", status.Error)
return
}
if err := runner.WaitForCompletion(ctx, runnerOpts); err != nil {
t.Errorf("Error while waiting for completion of container")
}
}
func TestDockerDestroyWithoutRunning(t *testing.T) {
ctx := context.Background()
os.Setenv(global.AutoRemoveEnv, strconv.FormatBool(false))
runnerOpts := getRunnerOptions()
// test create container
statusCreate := runner.Create(ctx, runnerOpts)
if !statusCreate.Done {
t.Errorf("error creating container: %v", statusCreate.Error)
}
if err := runner.Destroy(ctx, runnerOpts); err != nil {
t.Errorf("error destroying container: %v", err)
}
}
func TestDockerDestroyWithRunningWoAutoRemove(t *testing.T) {
ctx := context.Background()
runnerOpts := getRunnerOptions()
// test create container
os.Setenv(global.AutoRemoveEnv, strconv.FormatBool(false))
statusCreate := runner.Create(ctx, runnerOpts)
if !statusCreate.Done {
t.Errorf("error creating container: %v", statusCreate.Error)
}
if status := runner.Run(ctx, runnerOpts); !status.Done {
t.Errorf("error in running container : %v", status.Error)
return
}
if err := runner.Destroy(ctx, runnerOpts); err != nil {
t.Errorf("error destroying container: %v", err)
}
}
func TestDockerDestroyWithRunningWithAutoRemove(t *testing.T) {
ctx := context.Background()
runnerOpts := getRunnerOptions()
// test create container
os.Setenv(global.AutoRemoveEnv, strconv.FormatBool(true))
statusCreate := runner.Create(ctx, runnerOpts)
if !statusCreate.Done {
t.Errorf("error creating container: %v", statusCreate.Error)
}
if status := runner.Run(ctx, runnerOpts); !status.Done {
t.Errorf("error in running container : %v", status.Error)
return
}
if err := runner.Destroy(ctx, runnerOpts); err != nil {
t.Errorf("error destroying container: %v", err)
}
}
func TestDockerPullAlways(t *testing.T) {
runnerOpts := getRunnerOptions()
// test create container
runnerOpts.PodType = core.NucleusPod
if err := runner.PullImage(&core.ContainerImageConfig{
Mode: config.PublicMode,
PullPolicy: config.PullAlways,
Image: runnerOpts.DockerImage,
}, runnerOpts); err != nil {
t.Errorf("Error while pulling image %v", err)
}
}
func TestDockerPullNever(t *testing.T) {
runnerOpts := getRunnerOptions()
// test create container
runnerOpts.PodType = core.NucleusPod
if err := runner.PullImage(&core.ContainerImageConfig{
Mode: config.PublicMode,
PullPolicy: config.PullNever,
Image: "dummy-image",
}, runnerOpts); err != nil {
t.Errorf("Error while pulling image %v", err)
}
}
func TestDockerVolumes(t *testing.T) {
ctx := context.Background()
runnerOpts := getRunnerOptions()
os.Setenv(global.AutoRemoveEnv, strconv.FormatBool(true))
statusCreate := runner.Create(ctx, runnerOpts)
if !statusCreate.Done {
t.Errorf("error creating container: %v", statusCreate.Error)
}
correctVolumeName := fmt.Sprintf("%s-%s", volumePrefix, runnerOpts.Label[synapse.BuildID])
incorrectVolumeName := fmt.Sprintf("incorrect-%s-%s", volumePrefix, runnerOpts.Label[synapse.BuildID])
exists, err := runner.FindVolumes(incorrectVolumeName)
if err != nil {
t.Errorf("error finding docker volume: %v", err)
}
assert.Equal(t, false, exists)
exists, err = runner.FindVolumes(correctVolumeName)
if err != nil {
t.Errorf("error finding docker volume: %v", err)
}
assert.Equal(t, true, exists)
if status := runner.Run(ctx, runnerOpts); !status.Done {
t.Errorf("error in running container : %v", status.Error)
return
}
expectedFileContent := `{"access_token":"dummytoken","expiry":"0001-01-01T00:00:00Z","refresh_token":"","token_type":"Bearer"}`
secretBytes, err := secretsManager.GetGitSecretBytes()
if err != nil {
t.Errorf("error retrieving secrets: %v", err)
}
assert.Equal(t, expectedFileContent, string(secretBytes))
if err = runner.Destroy(ctx, runnerOpts); err != nil {
t.Errorf("error destroying container: %v", err)
}
}
================================================
FILE: pkg/runner/docker/setup_test.go
================================================
package docker
import (
"context"
"os"
"testing"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/secrets"
"github.com/LambdaTest/test-at-scale/pkg/tests"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/client"
)
var cfg *config.SynapseConfig
var secretsManager core.SecretsManager
var runner core.DockerRunner
func createNetworkIfNotExists(dockerClient *client.Client, networkName string) error {
opts := types.NetworkListOptions{
Filters: filters.NewArgs(filters.Arg("name", networkName)),
}
networkList, err := dockerClient.NetworkList(context.TODO(), opts)
if err != nil {
return err
}
for idx := 0; idx < len(networkList); idx++ {
if networkList[idx].Name == networkName {
return nil
}
}
if _, err := dockerClient.NetworkCreate(context.TODO(), networkName, types.NetworkCreate{
Internal: true,
}); err != nil {
return err
}
return nil
}
func deletNetworkIfExists(dockerClient *client.Client, networkName string) error {
ctx := context.TODO()
opts := types.NetworkListOptions{
Filters: filters.NewArgs(filters.Arg("name", networkName)),
}
networkList, err := dockerClient.NetworkList(ctx, opts)
if err != nil {
return err
}
for idx := 0; idx < len(networkList); idx++ {
if networkList[idx].Name == networkName {
return dockerClient.NetworkRemove(ctx, networkName)
}
}
return nil
}
func TestMain(m *testing.M) {
networkName := "dummy-network"
os.Setenv(global.NetworkEnvName, networkName)
cfg = tests.MockConfig()
logger, err := lumber.NewLogger(cfg.LogConfig, cfg.Verbose, lumber.InstanceZapLogger)
// TODO: check proper way to collect error
if err != nil {
os.Exit(1)
}
cl, err := client.NewClientWithOpts(client.FromEnv)
if err != nil {
os.Exit(1)
}
if errC := createNetworkIfNotExists(cl, networkName); errC != nil {
logger.Errorf("Error in creating network %s", networkName)
os.Exit(1)
}
secretsManager = secrets.New(cfg, logger)
runner, err = New(secretsManager, logger, cfg)
if err != nil {
logger.Errorf("error in configuring docker client")
os.Exit(1)
}
exitCode := m.Run()
if err := deletNetworkIfExists(cl, networkName); err != nil {
logger.Errorf("Error in deleting network %s", networkName)
os.Exit(1)
}
os.Exit(exitCode)
}
================================================
FILE: pkg/secret/secret.go
================================================
package secret
import (
"encoding/json"
"io/ioutil"
"os"
"regexp"
"strings"
"time"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
)
type secretParser struct {
logger lumber.Logger
secretRegex *regexp.Regexp
}
// New returns a new secret parser
func New(logger lumber.Logger) core.SecretParser {
return &secretParser{
logger: logger,
secretRegex: regexp.MustCompile(global.SecretRegex),
}
}
// GetRepoSecret read repo secrets from given path
func (s *secretParser) GetRepoSecret(path string) (map[string]string, error) {
var secretData map[string]string
if _, err := os.Lstat(path); os.IsNotExist(err) {
s.logger.Debugf("failed to find user env secrets at path %s, as the path does not exist", path)
return nil, nil
}
body, err := ioutil.ReadFile(path)
if err != nil {
return nil, err
}
if err = json.Unmarshal(body, &secretData); err != nil {
s.logger.Errorf("failed to unmarshal user env secrets, error %v", err)
return nil, errs.ErrUnMarshalJSON
}
// the file contains a flat map of secret name to secret value
return secretData, nil
}
// GetOauthSecret parses the oauth secret
func (s *secretParser) GetOauthSecret(path string) (*core.Oauth, error) {
o := &core.Oauth{
Type: core.Bearer,
}
if _, err := os.Lstat(path); os.IsNotExist(err) {
s.logger.Errorf("failed to find oauth secret in path %s", path)
return nil, err
}
body, err := ioutil.ReadFile(path)
if err != nil {
return nil, err
}
if err = json.Unmarshal(body, o); err != nil {
s.logger.Errorf("failed to unmarshal oauth secret, error %v", err)
return nil, errs.ErrUnMarshalJSON
}
if o.AccessToken == "" {
return nil, errs.ErrMissingAccessToken
}
// If the token type is not basic, set it to bearer
if o.Type != core.Basic {
o.Type = core.Bearer
}
return o, nil
}
// SubstituteSecret replaces secret placeholders with their respective values
func (s *secretParser) SubstituteSecret(command string, secretData map[string]string) (string, error) {
matches := s.secretRegex.FindAllStringSubmatch(command, -1)
if matches == nil {
return command, nil
}
result := command
for _, match := range matches {
if len(match) < 2 {
return "", errs.ErrSecretRegexMatch
}
// validate that the secret key exists in the map
if _, ok := secretData[match[1]]; !ok {
s.logger.Warnf("secret with name %s not found in map", match[1])
continue
}
result = strings.ReplaceAll(result, match[0], secretData[match[1]])
}
return result, nil
}
// Expired reports whether the token has expired or will expire within the expiry buffer
func (s *secretParser) Expired(token *core.Oauth) bool {
if token.RefreshToken == "" {
return false
}
if token.Expiry.IsZero() && token.AccessToken != "" {
return false
}
return token.Expiry.Add(-global.ExpiryDelta).
Before(time.Now())
}
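As an aside, the substitution flow in `SubstituteSecret` can be sketched in isolation. This is an illustrative, self-contained sketch: the real pattern lives in `global.SecretRegex` and is not shown here, so the regex below is an assumption, and `substitute` is a hypothetical stand-in for the method above.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// secretRegex is an assumed pattern; the actual global.SecretRegex may differ.
var secretRegex = regexp.MustCompile(`\$\{\{\s*secrets\.(\w+)\s*\}\}`)

// substitute mirrors the SubstituteSecret flow: find all placeholder
// matches, look up each capture group in the secret map, and replace
// known placeholders while leaving unknown ones intact.
func substitute(command string, secretData map[string]string) string {
	matches := secretRegex.FindAllStringSubmatch(command, -1)
	result := command
	for _, match := range matches {
		value, ok := secretData[match[1]]
		if !ok {
			continue // unknown secret: leave the placeholder as-is
		}
		result = strings.ReplaceAll(result, match[0], value)
	}
	return result
}

func main() {
	secrets := map[string]string{"NPM_TOKEN": "s3cret"}
	fmt.Println(substitute("npm publish --token=${{ secrets.NPM_TOKEN }} ${{ secrets.MISSING }}", secrets))
	// prints: npm publish --token=s3cret ${{ secrets.MISSING }}
}
```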
================================================
FILE: pkg/secret/secret_test.go
================================================
package secret
import (
"errors"
"log"
"os"
"reflect"
"regexp"
"testing"
"time"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
)
func TestGetRepoSecret(t *testing.T) {
logger, err := lumber.NewLogger(lumber.LoggingConfig{EnableConsole: true}, true, lumber.InstanceZapLogger)
if err != nil {
log.Fatalf("could not instantiate logger %s", err.Error())
}
secretParser := New(logger)
tests := []struct {
name string
path string
want map[string]string
errorType error
}{
{"Test for correct file", "../../testutils/testdata/secretTestData/secretfile.json", map[string]string{"abc": "val", "xyz": "val2"}, nil},
{"Test for invalid file", "../../testutils/testdata/secretTestData/invalidsecretfile.json", map[string]string{}, errs.ErrUnMarshalJSON},
{"Test for incorrect path", "", nil, os.ErrNotExist},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := secretParser.GetRepoSecret(tt.path)
if err != nil {
if !errors.Is(err, tt.errorType) {
t.Error(err)
}
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("expected: %v, got: %v", tt.want, got)
return
}
})
}
}
func TestGetOauthSecret(t *testing.T) {
logger, err := lumber.NewLogger(lumber.LoggingConfig{EnableConsole: true}, true, lumber.InstanceZapLogger)
if err != nil {
log.Fatalf("could not instantiate logger %s", err.Error())
}
secretParser := New(logger)
oauthToken := core.Oauth{AccessToken: "token", Expiry: time.Unix(1645527121, 0), RefreshToken: "refresh", Type: core.Bearer}
tests := []struct {
name string
path string
want *core.Oauth
errorType error
}{
{"Test for correct file", "../../testutils/testdata/secretTestData/secretOauthFile.json", &oauthToken, nil},
{"Test for invalid file", "../../testutils/testdata/secretTestData/invalidsecretfile.json", nil, errs.ErrMissingAccessToken},
{"Test for incorrect path", "", nil, os.ErrNotExist},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := secretParser.GetOauthSecret(tt.path)
if err != nil {
if !errors.Is(err, tt.errorType) {
t.Error(err)
}
return
}
if got, want := got.AccessToken, tt.want.AccessToken; got != want {
t.Errorf("Want access_token %s, got %s", want, got)
}
if got, want := got.Type, tt.want.Type; got != want {
t.Errorf("Want type %s, got %s", want, got)
}
if got, want := got.Expiry.Unix(), tt.want.Expiry.Unix(); got != want {
t.Errorf("Want expiry %d, got %d", want, got)
}
})
}
}
func TestSubstituteSecret(t *testing.T) {
logger, err := lumber.NewLogger(lumber.LoggingConfig{EnableConsole: true}, true, lumber.InstanceZapLogger)
if err != nil {
log.Fatalf("Could not instantiate logger %s", err.Error())
}
secretParser := New(logger)
var expressions = []struct {
params map[string]string
input string
output string
errorType error
}{
// basic
{
params: map[string]string{"token": "secret"},
input: "${{ secrets.token }}",
output: "secret",
errorType: nil,
},
// multiple
{
params: map[string]string{"NPM_TOKEN": "secret", "TAG": "nucleus"},
input: "docker build --build-arg NPM_TOKEN=${{ secrets.NPM_TOKEN }} --tag=${{ secrets.TAG }}",
output: "docker build --build-arg NPM_TOKEN=secret --tag=nucleus",
errorType: nil,
},
// no match
{
params: map[string]string{"clone_token": "secret"},
input: "${{ secrets.token }}",
output: "${{ secrets.token }}",
errorType: nil,
},
}
for _, expr := range expressions {
t.Run(expr.input, func(t *testing.T) {
t.Log(expr.input)
output, err := secretParser.SubstituteSecret(expr.input, expr.params)
if err != nil {
if expr.errorType != nil {
if err.Error() != expr.errorType.Error() {
t.Errorf("Want error %q expanded but got error %q", expr.errorType, err)
return
}
return
}
t.Errorf("Want %q expanded but got error %q", expr.input, err)
return
}
if output != expr.output {
t.Errorf("Want %q expanded to %q, got %q",
expr.input,
expr.output,
output)
}
})
}
}
//nolint:funlen
func TestExpired(t *testing.T) {
logger, err := lumber.NewLogger(lumber.LoggingConfig{EnableConsole: true}, true, lumber.InstanceZapLogger)
if err != nil {
log.Fatalf("Could not instantiate logger %s", err.Error())
}
type fields struct {
logger lumber.Logger
secretRegex *regexp.Regexp
}
type args struct {
token *core.Oauth
}
tests := []struct {
name string
fields fields
args args
want bool
}{
{
name: "Missing Refresh Token",
fields: fields{
logger: logger,
secretRegex: regexp.MustCompile(global.SecretRegex),
},
args: args{
token: &core.Oauth{
AccessToken: "54321",
RefreshToken: "",
Expiry: time.Now().Add(-time.Hour)},
},
want: false,
},
{
name: "Missing Access Token",
fields: fields{
logger: logger,
secretRegex: regexp.MustCompile(global.SecretRegex),
},
args: args{
token: &core.Oauth{
AccessToken: "",
RefreshToken: "54321"},
},
want: true,
},
{
name: "Missing Time",
fields: fields{
logger: logger,
secretRegex: regexp.MustCompile(global.SecretRegex),
},
args: args{
token: &core.Oauth{
AccessToken: "12345",
RefreshToken: "54321"},
},
want: false,
},
{
name: "Token Valid",
fields: fields{
logger: logger,
secretRegex: regexp.MustCompile(global.SecretRegex),
},
args: args{
token: &core.Oauth{
AccessToken: "12345",
RefreshToken: "54321",
Expiry: time.Now().Add(time.Hour)},
},
want: false,
},
{
name: "Token Expire",
fields: fields{
logger: logger,
secretRegex: regexp.MustCompile(global.SecretRegex),
},
args: args{
token: &core.Oauth{
AccessToken: "12345",
RefreshToken: "54321",
Expiry: time.Now().Add(-time.Second)},
},
want: true,
},
{
name: "Token not expired but in expiry buffer",
fields: fields{
logger: logger,
secretRegex: regexp.MustCompile(global.SecretRegex),
},
args: args{
token: &core.Oauth{
AccessToken: "12345",
RefreshToken: "54321",
Expiry: time.Now().Add(time.Second * 600)},
},
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := &secretParser{
logger: tt.fields.logger,
secretRegex: tt.fields.secretRegex,
}
if got := s.Expired(tt.args.token); got != tt.want {
t.Errorf("secretParser.Expired() = %v, want %v", got, tt.want)
}
})
}
}
================================================
FILE: pkg/secrets/secrets.go
================================================
package secrets
import (
"encoding/base64"
"encoding/json"
"errors"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
errs "github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
)
type secretManager struct {
logger lumber.Logger
cfg *config.SynapseConfig
}
// New returns a new secretManager
func New(cfg *config.SynapseConfig, logger lumber.Logger) core.SecretsManager {
return &secretManager{
logger: logger,
cfg: cfg,
}
}
// GetLambdatestSecrets returns the Lambdatest config secrets
func (s *secretManager) GetLambdatestSecrets() *config.LambdatestConfig {
return &s.cfg.Lambdatest
}
// GetSynapseName returns the name of synapse if mentioned in config
func (s *secretManager) GetSynapseName() string {
return s.cfg.Name
}
// GetGitSecretBytes returns the git secrets as JSON bytes
func (s *secretManager) GetGitSecretBytes() ([]byte, error) {
gitSecrets := core.Secret{
"access_token": s.cfg.Git.Token,
"expiry": "0001-01-01T00:00:00Z",
"refresh_token": "",
"token_type": s.cfg.Git.TokenType,
}
gitSecretsJSON, err := json.Marshal(gitSecrets)
if err != nil {
return []byte{}, errs.ERR_JSON_MAR(err.Error())
}
return gitSecretsJSON, nil
}
// GetRepoSecretBytes returns the configured secrets for the given repo as JSON bytes
func (s *secretManager) GetRepoSecretBytes(repo string) ([]byte, error) {
val, ok := s.cfg.RepoSecrets[repo]
if !ok {
return []byte{}, errors.New("no secrets found in configuration file")
}
repoSecretsJSON, err := json.Marshal(val)
if err != nil {
return []byte{}, errs.ERR_JSON_MAR(err.Error())
}
return repoSecretsJSON, nil
}
// GetDockerSecrets builds the container image configuration, including registry auth when required
func (s *secretManager) GetDockerSecrets(r *core.RunnerOptions) (core.ContainerImageConfig, error) {
containerImageConfig := core.ContainerImageConfig{}
containerImageConfig.Mode = s.cfg.ContainerRegistry.Mode
containerImageConfig.Image = r.DockerImage
containerImageConfig.PullPolicy = s.cfg.ContainerRegistry.PullPolicy
// in parsing mode, use the default public container
if r.PodType != core.NucleusPod {
return containerImageConfig, nil
}
/*
1. if mode is public, there is no need to build AuthRegistry
2. if PullPolicy is set to never, we assume the docker image is pulled manually by the user
*/
if s.cfg.ContainerRegistry.Mode == config.PublicMode || s.cfg.ContainerRegistry.PullPolicy == config.PullNever {
return containerImageConfig, nil
}
// for a private registry, check whether the credentials are empty
if s.cfg.ContainerRegistry.Username == "" || s.cfg.ContainerRegistry.Password == "" {
return containerImageConfig, errs.CR_AUTH_NF
}
jsonBytes, err := json.Marshal(map[string]string{
"username": s.cfg.ContainerRegistry.Username,
"password": s.cfg.ContainerRegistry.Password,
})
if err != nil {
return containerImageConfig, errs.ERR_JSON_MAR(err.Error())
}
containerImageConfig.AuthRegistry = base64.StdEncoding.EncodeToString(jsonBytes)
return containerImageConfig, nil
}
// GetOauthToken returns the git oauth token from config
func (s *secretManager) GetOauthToken() *core.Oauth {
return &core.Oauth{
AccessToken: s.cfg.Git.Token,
Type: core.TokenType(s.cfg.Git.TokenType),
}
}
================================================
FILE: pkg/secrets/secrets_test.go
================================================
package secrets
import (
"fmt"
"os"
"testing"
"github.com/stretchr/testify/assert"
)
func removeCreatedPath(path string) {
err := os.RemoveAll(path)
if err != nil {
fmt.Printf("error removing path %s: %v\n", path, err)
}
}
func TestGetLambdatestSecrets(t *testing.T) {
lambdatestSecrets := secretsManager.GetLambdatestSecrets()
assert.Equal(t, "dummysecretkey", lambdatestSecrets.SecretKey)
}
================================================
FILE: pkg/secrets/setup_test.go
================================================
package secrets
import (
"os"
"testing"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/tests"
)
var cfg *config.SynapseConfig
var secretsManager core.SecretsManager
const testDataDir = "./testdata"
func TestMain(m *testing.M) {
cfg = tests.MockConfig()
logger, err := lumber.NewLogger(cfg.LogConfig, cfg.Verbose, lumber.InstanceZapLogger)
// TODO: check proper way to collect error
if err != nil {
os.Exit(1)
}
secretsManager = New(cfg, logger)
os.Exit(m.Run())
}
================================================
FILE: pkg/server/setup.go
================================================
package server
import (
"context"
"net/http"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/api"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/gin-gonic/gin"
)
// ListenAndServe initializes a server to respond to HTTP network requests.
func ListenAndServe(ctx context.Context, router api.Router, config *config.NucleusConfig, logger lumber.Logger) error {
// set gin to release mode
gin.SetMode(gin.ReleaseMode)
logger.Infof("Setting up http handler")
errChan := make(chan error)
// HTTP server instance
srv := &http.Server{
Addr: ":" + config.Port,
Handler: router.Handler(),
}
// channel to signal server process exit
done := make(chan struct{})
go func() {
logger.Infof("Starting server on port %s", config.Port)
// service connections
if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
logger.Errorf("listen: %#v", err)
errChan <- err
}
}()
select {
case <-ctx.Done():
logger.Infof("Caller has requested graceful shutdown, shutting down the server")
if err := srv.Shutdown(ctx); err != nil {
logger.Errorf("Server Shutdown: %v", err)
}
return nil
case err := <-errChan:
return err
case <-done:
return nil
}
}
================================================
FILE: pkg/service/coverage/coverage.go
================================================
package coverage
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"io/fs"
"io/ioutil"
"net/http"
"net/url"
"os"
"path"
"path/filepath"
"strings"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"golang.org/x/sync/errgroup"
"github.com/LambdaTest/test-at-scale/pkg/fileutils"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
)
const (
coverageJSONFileName = "coverage-final.json"
mergedcoverageJSON = "coverage-merged.json"
compressedFileName = "coverage-files.tzst"
manifestJSONFileName = "manifest.json"
coverageFilePath = "/scripts/mapCoverage.js"
)
type codeCoverageService struct {
logger lumber.Logger
execManager core.ExecutionManager
codeCoveragParentDir string
azureClient core.AzureClient
zstd core.ZstdCompressor
httpClient http.Client
endpoint string
}
// New returns a new instance of CoverageService
func New(execManager core.ExecutionManager,
azureClient core.AzureClient,
zstd core.ZstdCompressor,
cfg *config.NucleusConfig,
logger lumber.Logger) (core.CoverageService, error) {
// if coverage mode is not enabled, do not initialize the service
if !cfg.CoverageMode {
return nil, nil
}
if _, err := os.Lstat(global.CodeCoverageDir); os.IsNotExist(err) {
return nil, errors.New("coverage directory not mounted")
}
return &codeCoverageService{
logger: logger,
execManager: execManager,
azureClient: azureClient,
zstd: zstd,
codeCoveragParentDir: global.CodeCoverageDir,
endpoint: global.NeuronHost + "/coverage",
httpClient: http.Client{
Timeout: global.DefaultAPITimeout,
}}, nil
}
// mergeCodeCoverageFiles merges all the coverage JSON files into a single entity
func (c *codeCoverageService) mergeCodeCoverageFiles(ctx context.Context, commitDir, coverageManifestPath string, threshold bool) error {
if _, err := os.Lstat(commitDir); os.IsNotExist(err) {
c.logger.Errorf("coverage files not found, skipping merge")
return nil
}
coverageFiles := make([]string, 0)
if err := filepath.WalkDir(commitDir, func(path string, d fs.DirEntry, err error) error {
if err != nil {
return err
}
// add all individual coverage json files
if d.Name() == coverageJSONFileName {
coverageFiles = append(coverageFiles, path)
}
return nil
}); err != nil {
return err
}
if len(coverageFiles) < 1 {
return errors.New("no coverage dirs found")
}
command := fmt.Sprintf("/scripts/node_modules/.bin/babel-node %s --commitDir %s --coverageFiles '%s'",
coverageFilePath, commitDir, strings.Join(coverageFiles, " "))
if threshold {
command = fmt.Sprintf("%s --coverageManifest %s", command, coverageManifestPath)
}
commands := []string{command}
return c.execManager.ExecuteInternalCommands(ctx, core.CoverageMerge, commands, "", nil, nil)
}
// MergeAndUpload compresses the coverage files and uploads them to the azure blob
func (c *codeCoverageService) MergeAndUpload(ctx context.Context, payload *core.Payload) error {
var parentCommitDir string
var g errgroup.Group
repoDir := filepath.Join(c.codeCoveragParentDir, payload.OrgID, payload.RepoID)
repoBlobPath := path.Join(payload.GitProvider, payload.OrgID, payload.RepoID)
// skip downloading if the parent commit does not exist for the repository
if payload.ParentCommitCoverageExists {
coverage, err := c.getParentCommitCoverageDir(payload.RepoID, payload.BuildBaseCommit)
if err != nil {
return err
}
if err = c.downloadAndDecompressParentCommitDir(ctx, coverage, repoDir); err != nil {
return err
}
parentCommitDir = filepath.Join(repoDir, coverage.ParentCommit)
}
coveragePayload := make([]coverageData, 0, len(payload.Commits))
for _, commit := range payload.Commits {
commitDir := filepath.Join(repoDir, commit.Sha)
c.logger.Debugf("commit directory %s", commitDir)
if _, err := os.Lstat(commitDir); os.IsNotExist(err) {
c.logger.Errorf("code coverage directory not found for commit id %s", commit.Sha)
return err
}
coverageManifestPath := filepath.Join(commitDir, manifestJSONFileName)
manifestPayload, err := c.parseManifestFile(coverageManifestPath)
if err != nil {
c.logger.Errorf("failed to parse manifest file: %s, error :%v", commitDir, err)
return err
}
// skip copying from the parent directory if all test files were executed
if !manifestPayload.AllFilesExecuted {
if err := c.copyFromParentCommitDir(parentCommitDir, commitDir, manifestPayload.Removedfiles...); err != nil {
c.logger.Errorf("failed to copy coverage files from %s to %s, error :%v", parentCommitDir, commitDir, err)
return err
}
}
thresholdEnabled := manifestPayload.CoverageThreshold != nil
if err := c.mergeCodeCoverageFiles(ctx, commitDir, coverageManifestPath, thresholdEnabled); err != nil {
c.logger.Errorf("failed to merge coverage files %v", err)
return err
}
c.logger.Debugf("compressed file name %v", compressedFileName)
g.Go(func() error {
if err := c.zstd.Compress(ctx, compressedFileName, false, repoDir, commit.Sha); err != nil {
c.logger.Errorf("failed to compress coverage files %v", err)
return err
}
_, err := c.uploadFile(ctx, repoBlobPath, compressedFileName, commit.Sha)
return err
})
var blobURL string
g.Go(func() error {
// assign through a goroutine-local error to avoid a data race on the shared err
var uploadErr error
blobURL, uploadErr = c.uploadFile(ctx, repoBlobPath, filepath.Join(commitDir, mergedcoverageJSON), commit.Sha)
return uploadErr
})
var totalCoverage json.RawMessage
g.Go(func() error {
var covErr error
totalCoverage, covErr = c.getTotalCoverage(filepath.Join(commitDir, mergedcoverageJSON))
return covErr
})
if err = g.Wait(); err != nil {
c.logger.Errorf("failed to upload files to azure blob %v", err)
return err
}
blobURL = strings.TrimSuffix(blobURL, fmt.Sprintf("/%s", mergedcoverageJSON))
coveragePayload = append(coveragePayload, coverageData{BuildID: payload.BuildID, RepoID: payload.RepoID, CommitID: commit.Sha, BlobLink: blobURL, TotalCoverage: totalCoverage})
// the current commit dir becomes the parent for the next commit
parentCommitDir = commitDir
}
return c.sendCoverageData(coveragePayload)
}
func (c *codeCoverageService) uploadFile(ctx context.Context, blobPath, filename, commitID string) (blobURL string, err error) {
file, err := os.Open(filename)
if err != nil {
return
}
defer file.Close()
mimeType := "application/json"
if filepath.Ext(filename) == ".tzst" {
mimeType = "application/zstd"
}
blobURL, err = c.azureClient.Create(ctx, fmt.Sprintf("%s/%s/%s", blobPath, commitID, filepath.Base(filename)), file, mimeType)
return
}
func (c *codeCoverageService) parseManifestFile(filepath string) (core.CoverageManifest, error) {
manifestPayload := core.CoverageManifest{}
if _, err := os.Lstat(filepath); os.IsNotExist(err) {
c.logger.Errorf("manifest file not found in path %s", filepath)
return manifestPayload, err
}
body, err := ioutil.ReadFile(filepath)
if err != nil {
return manifestPayload, err
}
err = json.Unmarshal(body, &manifestPayload)
return manifestPayload, err
}
func (c *codeCoverageService) downloadAndDecompressParentCommitDir(ctx context.Context, coverage parentCommitCoverage, repoDir string) error {
u, err := url.Parse(coverage.Bloblink)
if err != nil {
c.logger.Errorf("failed to parse blob link %s, error :%v", coverage.Bloblink, err)
return err
}
u.Path = path.Join(u.Path, compressedFileName)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), nil)
if err != nil {
return err
}
resp, err := c.httpClient.Do(req)
if err != nil {
c.logger.Errorf("error while making http request %v", err)
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("non 200 status while cloning from endpoint %s, status %d ", u.String(), resp.StatusCode)
}
parentCommitFilePath := filepath.Join(repoDir, coverage.ParentCommit+".tzst")
c.logger.Debugf("parent commit file path %s", parentCommitFilePath)
out, err := os.Create(parentCommitFilePath)
if err != nil {
return err
}
defer out.Close()
if _, err := io.Copy(out, resp.Body); err != nil {
return err
}
// decompress the file in temp directory as we cannot decompress inside azure file volume
if err := c.zstd.Decompress(ctx, parentCommitFilePath, false, os.TempDir()); err != nil {
c.logger.Errorf("failed to decompress parent commit directory %v", err)
return err
}
srcPath := filepath.Join(os.TempDir(), coverage.ParentCommit)
destPath := filepath.Join(repoDir, coverage.ParentCommit)
// copy the coverage directories to shared volume,
// chmod is not allowed inside azure file volume so that is skipped Ref: https://stackoverflow.com/questions/58301985/permissions-on-azure-file
if err := fileutils.CopyDir(srcPath, destPath, false); err != nil {
c.logger.Errorf("failed to copy directory from src %s to dest %s, error %v", srcPath, destPath, err)
return err
}
return nil
}
func (c *codeCoverageService) copyFromParentCommitDir(parentCommitDir, commitDir string, removedFiles ...string) error {
if _, err := os.Lstat(parentCommitDir); os.IsNotExist(err) {
c.logger.Errorf("Parent Commit Directory %s not found", parentCommitDir)
return err
}
if err := filepath.WalkDir(parentCommitDir, func(path string, info fs.DirEntry, err error) error {
if err != nil {
return err
}
if info.IsDir() && info.Name() != filepath.Base(parentCommitDir) {
if len(removedFiles) > 0 {
for index, removedfile := range removedFiles {
// if the test file was removed, don't copy it to the current commit directory
if info.Name() == removedfile {
// remove the file from the slice
removedFiles = append(removedFiles[:index], removedFiles[index+1:]...)
return filepath.SkipDir
}
}
}
testfileDir := filepath.Join(commitDir, info.Name())
// TODO: check if copied dir size is not 0
// if the file already exists, don't copy from the parent directory
if _, err := os.Lstat(testfileDir); os.IsNotExist(err) {
if err := fileutils.CopyDir(path, testfileDir, false); err != nil {
c.logger.Errorf("failed to copy directory from src %s to dest %s, error %v", path, testfileDir, err)
return err
}
}
// all files copied; move to the next subdirectory
return filepath.SkipDir
}
return nil
}); err != nil {
return err
}
return nil
}
func (c *codeCoverageService) getParentCommitCoverageDir(repoID, commitID string) (coverage parentCommitCoverage, err error) {
u, err := url.Parse(c.endpoint)
if err != nil {
c.logger.Errorf("error while parsing endpoint %s, %v", c.endpoint, err)
return coverage, err
}
q := u.Query()
q.Set("repoID", repoID)
q.Set("commitID", commitID)
u.RawQuery = q.Encode()
req, err := http.NewRequest(http.MethodGet, u.String(), nil)
if err != nil {
c.logger.Errorf("failed to create new request %v", err)
return coverage, err
}
resp, err := c.httpClient.Do(req)
if err != nil {
c.logger.Errorf("error while getting coverage details for parent commitID %s, %v", commitID, err)
return coverage, err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
c.logger.Errorf("error while getting coverage data, status_code %d", resp.StatusCode)
return coverage, errors.New("non 200 status")
}
payload := parentCommitCoverage{}
decode := json.NewDecoder(resp.Body)
if err := decode.Decode(&payload); err != nil {
c.logger.Errorf("failed to decode response body %v", err)
return coverage, err
}
c.logger.Infof("Got parent directory bloblink %s, commitID:%s", payload.Bloblink, payload.ParentCommit)
return payload, nil
}
func (c *codeCoverageService) sendCoverageData(payload []coverageData) error {
reqBody, err := json.Marshal(payload)
if err != nil {
c.logger.Errorf("failed to marshal request body %v", err)
return err
}
req, err := http.NewRequest(http.MethodPost, c.endpoint, bytes.NewBuffer(reqBody))
if err != nil {
c.logger.Errorf("failed to create new request %v", err)
return err
}
resp, err := c.httpClient.Do(req)
if err != nil {
c.logger.Errorf("error while sending coverage data %v", err)
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
c.logger.Errorf("error while sending coverage data, status code %d", resp.StatusCode)
return errors.New("non 200 status")
}
return nil
}
func (c *codeCoverageService) getTotalCoverage(filepath string) (json.RawMessage, error) {
if _, err := os.Lstat(filepath); os.IsNotExist(err) {
c.logger.Errorf("coverage summary file not found in path %s", filepath)
return nil, err
}
body, err := ioutil.ReadFile(filepath)
if err != nil {
c.logger.Errorf("failed to read coverage summary json, error: %v", err)
return nil, err
}
var payload map[string]json.RawMessage
if err = json.Unmarshal(body, &payload); err != nil {
c.logger.Errorf("failed to unmarshal coverage summary json, error: %v", err)
return nil, err
}
totalCoverage, ok := payload["total"]
if !ok {
c.logger.Errorf("total coverage summary not found in map")
return nil, errors.New("total coverage summary not found in map")
}
return totalCoverage, nil
}
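The "total" extraction in `getTotalCoverage` can be sketched standalone: unmarshal the summary into a map of raw JSON values and pick the `total` entry. `totalCoverage` below is an illustrative helper, not part of the original file, and the summary shape is assumed from the code above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// totalCoverage mirrors the getTotalCoverage flow: decode the merged
// coverage summary into a map of raw JSON values and return the "total"
// entry, erroring when it is absent.
func totalCoverage(body []byte) (json.RawMessage, error) {
	var payload map[string]json.RawMessage
	if err := json.Unmarshal(body, &payload); err != nil {
		return nil, err
	}
	total, ok := payload["total"]
	if !ok {
		return nil, fmt.Errorf("total coverage summary not found in map")
	}
	return total, nil
}

func main() {
	summary := []byte(`{"total": {"lines": {"pct": 87.5}}, "src/a.js": {}}`)
	total, err := totalCoverage(summary)
	if err != nil {
		panic(err)
	}
	// json.RawMessage preserves the value's original bytes
	fmt.Println(string(total))
	// prints: {"lines": {"pct": 87.5}}
}
```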
================================================
FILE: pkg/service/coverage/coverage_test.go
================================================
package coverage
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"reflect"
"testing"
"github.com/LambdaTest/test-at-scale/mocks"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/stretchr/testify/mock"
)
func Test_codeCoverageService_mergeCodeCoverageFiles(t *testing.T) {
logger, execManager, azureClient, zstdCompressor := initialiseArgs()
var commandType core.CommandType
var commands []string
execManager.On("ExecuteInternalCommands",
mock.AnythingOfType("*context.emptyCtx"),
mock.AnythingOfType("core.CommandType"),
mock.AnythingOfType("[]string"),
mock.AnythingOfType("string"),
mock.AnythingOfType("map[string]string"),
mock.AnythingOfType("map[string]string")).Return(
func(ctx context.Context, commType core.CommandType, comm []string,
cwd string, envMap, secretData map[string]string) error {
commandType = commType
commands = comm
return nil
},
)
coverageFiles := "../../../testutils/testdata/coverage/coverage-final.json ../../../testutils/testdata/coverage/sample/coverage-final.json"
commitDir := "../../../testutils/testdata"
coverageManifestPath := "../../../testutils/testdata/coverage"
type args struct {
ctx context.Context
commitDir string
coverageManifestPath string
threshold bool
}
type expected struct {
commandType core.CommandType
commands []string
cwd string
envMap map[string]string
secretData map[string]string
}
tests := []struct {
name string
args args
wantErr bool
expected expected
}{
{"Test",
args{
ctx: context.TODO(),
commitDir: commitDir,
coverageManifestPath: coverageManifestPath,
threshold: true,
},
false,
expected{
commandType: core.CoverageMerge,
commands: []string{
fmt.Sprintf("/scripts/node_modules/.bin/babel-node %s --commitDir %s --coverageFiles '%s' --coverageManifest %s",
coverageFilePath, commitDir, coverageFiles, coverageManifestPath),
},
cwd: "",
envMap: nil,
secretData: nil,
},
},
}
c := newCodeCoverageService(logger, execManager, coverageManifestPath, azureClient, zstdCompressor, "endpoint")
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := c.mergeCodeCoverageFiles(tt.args.ctx, tt.args.commitDir, tt.args.coverageManifestPath, tt.args.threshold)
if err != nil != tt.wantErr {
t.Errorf("codeCoverageService.mergeCodeCoverageFiles() error = %v, wantErr %v", err, tt.wantErr)
return
}
if commandType != tt.expected.commandType || !reflect.DeepEqual(commands, tt.expected.commands) {
t.Errorf("Received commandType: %v, commands: %v\nexpected commandType: %v, commands: %v",
commandType, commands, tt.expected.commandType, tt.expected.commands)
}
})
}
}
func Test_codeCoverageService_uploadFile(t *testing.T) {
logger, execManager, azureClient, zstdCompressor := initialiseArgs()
var calledArgs string
azureClient.On("Create",
mock.AnythingOfType("*context.emptyCtx"),
mock.AnythingOfType("string"),
mock.AnythingOfType("*os.File"),
mock.AnythingOfType("string")).Return(
func(ctx context.Context, path string, reader io.Reader, mimeType string) string {
st, _ := io.ReadAll(reader)
calledArgs = fmt.Sprintf("%v %v %v", path, string(st), mimeType)
return "blobURL"
},
func(ctx context.Context, path string, reader io.Reader, mimeType string) error {
return nil
},
)
type args struct {
ctx context.Context
blobPath string
filename string
commitID string
}
tests := []struct {
name string
args args
wantBlobURL string
wantArgs string
wantErr bool
}{
{"Test uploadFile",
args{
ctx: context.TODO(),
blobPath: "blobpath",
filename: "../../../testutils/testdata/coverage/coverage-final.json",
commitID: "cID",
},
"blobURL",
`blobpath/cID/coverage-final.json {
"cover1" : "f1"
} application/json`,
false,
},
}
c := newCodeCoverageService(logger, execManager, "", azureClient, zstdCompressor, "")
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotBlobURL, err := c.uploadFile(tt.args.ctx, tt.args.blobPath, tt.args.filename, tt.args.commitID)
if (err != nil) != tt.wantErr {
t.Errorf("codeCoverageService.uploadFile() error = %v, wantErr %v", err, tt.wantErr)
return
}
if gotBlobURL != tt.wantBlobURL {
t.Errorf("codeCoverageService.uploadFile() = %v, want %v", gotBlobURL, tt.wantBlobURL)
return
}
if tt.wantArgs != calledArgs {
t.Errorf("Expected: \n%v\nreceived: \n%v", tt.wantArgs, calledArgs)
}
})
}
}
func Test_codeCoverageService_parseManifestFile(t *testing.T) {
logger, execManager, azureClient, zstdCompressor := initialiseArgs()
type args struct {
filepath string
}
tests := []struct {
name string
args args
want core.CoverageManifest
wantErr bool
}{
{"Test parseManifestFile for success",
args{filepath: "../../../testutils/testdata/coverage/coverage-final.json"},
core.CoverageManifest{},
false,
},
{"Test parseManifestFile",
args{filepath: "../../../testutils/testdata/coverage/dne.json"},
core.CoverageManifest{},
true,
},
}
c := newCodeCoverageService(logger, execManager, "", azureClient, zstdCompressor, "")
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := c.parseManifestFile(tt.args.filepath)
if (err != nil) != tt.wantErr {
t.Errorf("codeCoverageService.parseManifestFile() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("codeCoverageService.parseManifestFile() = %v, want %v", got, tt.want)
}
})
}
}
func Test_codeCoverageService_downloadAndDecompressParentCommitDir(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/coverage-files.tzst" {
t.Errorf("Expected to request '/coverage-files.tzst', got: %v", r.URL)
return
}
w.WriteHeader(200)
}))
defer server.Close()
logger, execManager, azureClient, zstdCompressor := initialiseArgs()
zstdCompressor.On("Decompress",
mock.AnythingOfType("*context.emptyCtx"),
mock.AnythingOfType("string"),
false,
mock.AnythingOfType("string")).Return(
func(ctx context.Context, filePath string, preservePath bool, workingDirectory string) error {
return nil
},
)
type args struct {
ctx context.Context
coverage parentCommitCoverage
repoDir string
}
tests := []struct {
name string
args args
wantErr bool
}{
// TODO: Add success case; currently a temp dir can't be created in the local environment
{"Test downloadAndDecompressParentCommitDir",
args{
ctx: context.TODO(),
coverage: parentCommitCoverage{Bloblink: server.URL, ParentCommit: "parentCommit"},
repoDir: "../../../testutils/testdata"},
true,
},
}
c := newCodeCoverageService(logger, execManager, "", azureClient, zstdCompressor, "")
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := c.downloadAndDecompressParentCommitDir(tt.args.ctx, tt.args.coverage, tt.args.repoDir)
defer removeCreatedFile(filepath.Join(tt.args.repoDir, tt.args.coverage.ParentCommit+".tzst"))
if (err != nil) != tt.wantErr {
t.Errorf("codeCoverageService.downloadAndDecompressParentCommitDir() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
func Test_codeCoverageService_getParentCommitCoverageDir(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/" {
w.WriteHeader(http.StatusBadRequest)
return
}
if r.URL.RawQuery == "commitID=non200&repoID=non200" {
w.WriteHeader(300)
return
}
if r.URL.RawQuery == "commitID=payloadError&repoID=payloadDecodeError" {
_, writeErr := fmt.Fprintln(w, `{"undefined_field"}`)
if writeErr != nil {
w.WriteHeader(http.StatusBadRequest)
}
return
}
w.Header().Set("Content-Type", "application/json")
_, writeErr := fmt.Fprintln(w, `{"blob_link": "http://fakeblob.link", "parent_commit" : "fake_parent_commit"}`)
if writeErr != nil {
w.WriteHeader(http.StatusBadRequest)
return
}
}))
defer ts.Close()
logger, execManager, azureClient, zstdCompressor := initialiseArgs()
type args struct {
repoID string
commitID string
}
tests := []struct {
name string
args args
wantCoverage parentCommitCoverage
wantErr bool
}{
{
"Test getParentCommitCoverageDir",
args{repoID: "dummyRepoID", commitID: "dummyCommitID"},
parentCommitCoverage{Bloblink: "http://fakeblob.link", ParentCommit: "fake_parent_commit"},
false,
},
{
"Test getParentCommitCoverageDir for non 200 status error",
args{repoID: "non200", commitID: "non200"},
parentCommitCoverage{},
true,
},
{
"Test getParentCommitCoverageDir for payloadDecodeError",
args{repoID: "payloadDecodeError", commitID: "payloadError"},
parentCommitCoverage{},
true,
},
}
c := newCodeCoverageService(logger, execManager, "", azureClient, zstdCompressor, ts.URL)
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotCoverage, err := c.getParentCommitCoverageDir(tt.args.repoID, tt.args.commitID)
if (err != nil) != tt.wantErr {
t.Errorf("codeCoverageService.getParentCommitCoverageDir() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(gotCoverage, tt.wantCoverage) {
t.Errorf("codeCoverageService.getParentCommitCoverageDir() = %v, want %v", gotCoverage, tt.wantCoverage)
}
})
}
}
func Test_codeCoverageService_sendCoverageData(t *testing.T) {
payload := []coverageData{
{
BuildID: "buildID1",
RepoID: "repoID1",
CommitID: "commitID1",
BlobLink: "blobLink1",
TotalCoverage: json.RawMessage([]byte(`{"bar":"baz"}`)),
},
}
mux := http.NewServeMux()
mux.HandleFunc("/endpoint", func(res http.ResponseWriter, req *http.Request) {
body, _ := io.ReadAll(req.Body)
expResp := `[{"build_id":"buildID1","repo_id":"repoID1","commit_id":"commitID1","blob_link":"blobLink1","total_coverage":{"bar":"baz"}}]`
if string(body) != expResp {
t.Errorf("Expected response body: %v, got: %v\n", expResp, string(body))
}
res.WriteHeader(200)
})
mux.HandleFunc("/endpoint-err", func(res http.ResponseWriter, req *http.Request) {
res.WriteHeader(404)
})
ts := httptest.NewServer(mux)
defer ts.Close()
logger, execManager, azureClient, zstdCompressor := initialiseArgs()
type args struct {
payload []coverageData
}
tests := []struct {
name string
args args
endpoint string
wantErr bool
}{
{
"Test sendCoverageData for success",
args{payload: payload},
"/endpoint",
false,
},
{
"Test sendCoverageData for non 200 status",
args{payload: payload},
"/endpoint-err",
true,
},
}
for _, tt := range tests {
c := newCodeCoverageService(logger, execManager, "", azureClient, zstdCompressor, ts.URL+tt.endpoint)
t.Run(tt.name, func(t *testing.T) {
if err := c.sendCoverageData(tt.args.payload); (err != nil) != tt.wantErr {
t.Errorf("codeCoverageService.sendCoverageData() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
func Test_codeCoverageService_getTotalCoverage(t *testing.T) {
logger, execManager, azureClient, zstdCompressor := initialiseArgs()
c := newCodeCoverageService(logger, execManager, "", azureClient, zstdCompressor, "")
type args struct {
filepath string
}
tests := []struct {
name string
args args
want json.RawMessage
wantErr bool
}{
{
"Test getTotalCoverage",
args{"../../../testutils/testdata/coverage/sample/coverage-final.json"},
json.RawMessage([]byte(`"80%"`)),
false,
},
{
"Test getTotalCoverage for no field of total coverage",
args{"../../../testutils/testdata/coverage/coverage-final.json"},
json.RawMessage{},
true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := c.getTotalCoverage(tt.args.filepath)
if (err != nil) != tt.wantErr {
t.Errorf("codeCoverageService.getTotalCoverage() error = %v, wantErr %v", err, tt.wantErr)
return
}
if len(tt.want) > 0 && !reflect.DeepEqual(got, tt.want) {
t.Errorf("codeCoverageService.getTotalCoverage() = %v, want %v", string(got), string(tt.want))
}
})
}
}
func newCodeCoverageService(logger lumber.Logger,
execManager *mocks.ExecutionManager,
codeCoveragParentDir string,
azureClient *mocks.AzureClient,
zstd *mocks.ZstdCompressor,
endpoint string) *codeCoverageService {
return &codeCoverageService{
logger: logger,
execManager: execManager,
codeCoveragParentDir: codeCoveragParentDir,
azureClient: azureClient,
zstd: zstd,
httpClient: http.Client{
Timeout: global.DefaultAPITimeout,
},
endpoint: endpoint,
}
}
func initialiseArgs() (logger lumber.Logger,
execManager *mocks.ExecutionManager,
azureClient *mocks.AzureClient,
zstd *mocks.ZstdCompressor) {
azureClient = new(mocks.AzureClient)
execManager = new(mocks.ExecutionManager)
zstd = new(mocks.ZstdCompressor)
logger, err := testutils.GetLogger()
if err != nil {
fmt.Printf("Couldn't initialize logger, error: %v\n", err)
}
return logger, execManager, azureClient, zstd
}
func removeCreatedFile(path string) {
if err := os.RemoveAll(path); err != nil {
fmt.Printf("error removing %s: %v\n", path, err)
}
}
================================================
FILE: pkg/service/coverage/models.go
================================================
package coverage
import "encoding/json"
type parentCommitCoverage struct {
Bloblink string `json:"blob_link"`
ParentCommit string `json:"parent_commit"`
}
type coverageData struct {
BuildID string `json:"build_id"`
RepoID string `json:"repo_id"`
CommitID string `json:"commit_id"`
BlobLink string `json:"blob_link"`
TotalCoverage json.RawMessage `json:"total_coverage"`
}
================================================
FILE: pkg/service/teststats/teststats.go
================================================
package teststats
import (
"sort"
"sync"
"time"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/procfs"
)
// ProcStats represents the process stats for a particular pid
type ProcStats struct {
logger lumber.Logger
ExecutionResultInputChannel chan core.ExecutionResults
wg sync.WaitGroup
ExecutionResultOutputChannel chan *core.ExecutionResults
}
// New returns a new instance of ProcStats
func New(cfg *config.NucleusConfig, logger lumber.Logger) (*ProcStats, error) {
return &ProcStats{
logger: logger,
ExecutionResultInputChannel: make(chan core.ExecutionResults),
ExecutionResultOutputChannel: make(chan *core.ExecutionResults),
}, nil
}
// CaptureTestStats combines the ps stats for each test
func (s *ProcStats) CaptureTestStats(pid int32, collectStats bool) error {
ps, err := procfs.New(pid, global.SamplingTime, false)
if err != nil {
s.logger.Errorf("failed to find process stats with pid %d, error: %v", pid, err)
return err
}
s.wg.Add(1)
go func() {
defer s.wg.Done()
processStats := ps.GetStatsInInterval()
if len(processStats) == 0 {
s.logger.Errorf("no process stats found with pid %d", pid)
}
select {
case executionResults := <-s.ExecutionResultInputChannel:
if collectStats {
for ind := range executionResults.Results {
// TODO: Refactor the implementation of the two functions below using generics once Go 1.18 is available
// https://www.freecodecamp.org/news/generics-in-golang/
s.appendStatsToTests(executionResults.Results[ind].TestPayload, processStats)
s.appendStatsToTestSuites(executionResults.Results[ind].TestSuitePayload, processStats)
}
}
s.ExecutionResultOutputChannel <- &executionResults
default:
// Can reach here in 2 cases (i.e. the `/results` API wasn't called):
// 1. runner process exited with a zero exitCode but no testFiles were run (changes in Readme.md etc.)
// 2. runner process exited with a non-zero exitCode
s.logger.Warnf("No test results found, pid %d", pid)
s.ExecutionResultOutputChannel <- nil
}
}()
return nil
}
// getProcsForInterval returns the stats recorded in [start, end); processStats must be sorted by RecordTime
func (s *ProcStats) getProcsForInterval(start, end time.Time, processStats []*procfs.Stats) []*procfs.Stats {
n := len(processStats)
left := sort.Search(n, func(i int) bool { return !processStats[i].RecordTime.Before(start) })
right := sort.Search(n, func(i int) bool { return !processStats[i].RecordTime.Before(end) })
if left <= right && 0 <= left && right <= n {
return processStats[left:right]
}
// return empty slice
return processStats[0:0]
}
func (s *ProcStats) appendStatsToTests(testResults []core.TestPayload, processStats []*procfs.Stats) {
for r := 0; r < len(testResults); r++ {
result := &testResults[r]
// check if start time of test t(start) is not 0
if !result.StartTime.IsZero() {
// calculate end time of test t(end)
result.EndTime = result.StartTime.Add(time.Duration(result.Duration) * time.Millisecond)
for _, proc := range s.getProcsForInterval(result.StartTime, result.EndTime, processStats) {
result.Stats = append(result.Stats, core.TestProcessStats{CPU: proc.CPUPercentage, Memory: proc.MemConsumed, RecordTime: proc.RecordTime})
}
}
}
}
func (s *ProcStats) appendStatsToTestSuites(testSuiteResults []core.TestSuitePayload, processStats []*procfs.Stats) {
for r := 0; r < len(testSuiteResults); r++ {
result := &testSuiteResults[r]
// check if start time of test suite ts(start) is not 0
if !result.StartTime.IsZero() {
// calculate end time of test suite ts(end)
result.EndTime = result.StartTime.Add(time.Duration(result.Duration) * time.Millisecond)
for _, proc := range s.getProcsForInterval(result.StartTime, result.EndTime, processStats) {
result.Stats = append(result.Stats, core.TestProcessStats{CPU: proc.CPUPercentage, Memory: proc.MemConsumed, RecordTime: proc.RecordTime})
}
}
}
}
================================================
FILE: pkg/service/teststats/teststats_test.go
================================================
package teststats
import (
"fmt"
"reflect"
"testing"
"time"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/procfs"
"github.com/LambdaTest/test-at-scale/testutils"
)
func getDummyTimeMap() map[string]time.Time {
tpresent, err := time.Parse("Mon, 02 Jan 2006 15:04:05 MST", "Tue, 28 Feb 2022 16:22:01 UTC")
if err != nil {
fmt.Printf("Error parsing time: %v", err)
}
t2025, _ := time.Parse("Mon, 02 Jan 2006 15:04:05 MST", "Tue, 22 Feb 2025 16:22:01 UTC")
tpast1, _ := time.Parse("Mon, 02 Jan 2006 15:04:05 MST", "Tue, 22 Feb 2021 16:23:01 UTC")
tpast2, _ := time.Parse("Mon, 02 Jan 2006 15:04:05 MST", "Tue, 22 Feb 2021 16:22:05 UTC")
tfuture1, _ := time.Parse("Mon, 02 Jan 2006 15:04:05 MST", "Tue, 22 Feb 2023 16:14:01 UTC")
tfuture2, _ := time.Parse("Mon, 02 Jan 2006 15:04:05 MST", "Tue, 22 Feb 2023 16:25:01 UTC")
return map[string]time.Time{"tpresent": tpresent, "t2025": t2025, "tpast1": tpast1, "tpast2": tpast2, "tfuture1": tfuture1, "tfuture2": tfuture2}
}
// NOTE: Tests in this package are meant to be run in a Linux environment
func TestNew(t *testing.T) {
cfg, _ := testutils.GetConfig()
logger, _ := testutils.GetLogger()
type args struct {
cfg *config.NucleusConfig
logger lumber.Logger
}
tests := []struct {
name string
args args
want *ProcStats
wantErr bool
}{
{"Test New",
args{cfg, logger},
&ProcStats{
logger: logger,
ExecutionResultInputChannel: make(chan core.ExecutionResults),
ExecutionResultOutputChannel: make(chan *core.ExecutionResults),
}, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := New(tt.args.cfg, tt.args.logger)
if (err != nil) != tt.wantErr {
t.Errorf("New() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got.logger, tt.want.logger) {
t.Errorf("New() = %v, want %v", got, tt.want)
}
})
}
}
func TestProcStats_getProcsForInterval(t *testing.T) {
cfg, _ := testutils.GetConfig()
logger, _ := testutils.GetLogger()
timeMap := getDummyTimeMap()
type args struct {
start time.Time
end time.Time
processStats []*procfs.Stats
}
tests := []struct {
name string
args args
want []*procfs.Stats
}{
{"Test getProcsForInterval", args{timeMap["tpresent"], timeMap["tpresent"], []*procfs.Stats{}}, []*procfs.Stats{}},
{"Test getProcsForInterval", args{timeMap["tpresent"], timeMap["t2025"], []*procfs.Stats{
{
CPUPercentage: 1.2,
MemPercentage: 14.1,
MemShared: 105.0,
MemSwapped: 25,
MemConsumed: 131,
RecordTime: timeMap["tpast1"],
},
{
CPUPercentage: 1.25,
MemPercentage: 14.15,
MemShared: 107.0,
MemSwapped: 25,
MemConsumed: 131,
RecordTime: timeMap["tfuture1"],
},
{
CPUPercentage: 1.25,
MemPercentage: 14.15,
MemShared: 107.0,
MemSwapped: 25,
MemConsumed: 131,
RecordTime: timeMap["tfuture2"],
},
}}, []*procfs.Stats{
{
CPUPercentage: 1.25,
MemPercentage: 14.15,
MemShared: 107.0,
MemSwapped: 25,
MemConsumed: 131,
RecordTime: timeMap["tfuture1"],
},
{
CPUPercentage: 1.25,
MemPercentage: 14.15,
MemShared: 107.0,
MemSwapped: 25,
MemConsumed: 131,
RecordTime: timeMap["tfuture2"],
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s, err := New(cfg, logger)
if err != nil {
t.Errorf("New() error = %v", err)
}
got := s.getProcsForInterval(tt.args.start, tt.args.end, tt.args.processStats)
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("ProcStats.getProcsForInterval() = %v, want %v", got, tt.want)
}
})
}
}
func TestProcStats_appendStatsToTests(t *testing.T) {
cfg, _ := testutils.GetConfig()
logger, _ := testutils.GetLogger()
timeMap := getDummyTimeMap()
type args struct {
testResults []core.TestPayload
processStats []*procfs.Stats
}
tests := []struct {
name string
args args
want string
}{
{"Test appendStatsToTests",
args{[]core.TestPayload{
{Name: "test 1", StartTime: timeMap["tpast1"], EndTime: timeMap["tfuture1"]},
},
[]*procfs.Stats{}},
// nolint:lll
"[{TestID: Detail: SuiteID: Suites:[] Title: FullTitle: Name:test 1 Duration:0 FilePath: Line: Col: CurrentRetry:0 Status: DAG:[] Filelocator: BlocklistSource: Blocklisted:false StartTime:2021-02-22 16:23:01 +0000 UTC EndTime:2021-02-22 16:23:01 +0000 UTC Stats:[] FailureMessage:}]",
},
{"Test appendStatsToTests",
args{[]core.TestPayload{
{
Name: "test 1",
StartTime: timeMap["tpast1"],
Duration: 100,
EndTime: timeMap["tfuture1"],
Stats: []core.TestProcessStats{},
},
{
Name: "test 2",
StartTime: timeMap["tpast2"],
Duration: 200,
EndTime: timeMap["tfuture2"],
Stats: []core.TestProcessStats{{Memory: 100, CPU: 25.4, Storage: 250, RecordTime: timeMap["tpast2"]}},
},
},
[]*procfs.Stats{
{
CPUPercentage: 1.2,
MemPercentage: 14.1,
MemShared: 105.0,
MemSwapped: 25,
MemConsumed: 131,
RecordTime: timeMap["tpast1"],
},
{
CPUPercentage: 1.25,
MemPercentage: 14.15,
MemShared: 107.0,
MemSwapped: 25,
MemConsumed: 131,
RecordTime: timeMap["tfuture1"],
},
{
CPUPercentage: 1.25,
MemPercentage: 14.15,
MemShared: 107.0,
MemSwapped: 25,
MemConsumed: 131,
RecordTime: timeMap["tfuture2"],
},
},
},
// nolint:lll
"[{TestID: Detail: SuiteID: Suites:[] Title: FullTitle: Name:test 1 Duration:100 FilePath: Line: Col: CurrentRetry:0 Status: DAG:[] Filelocator: BlocklistSource: Blocklisted:false StartTime:2021-02-22 16:23:01 +0000 UTC EndTime:2021-02-22 16:23:01.1 +0000 UTC Stats:[{Memory:131 CPU:1.2 Storage:0 RecordTime:2021-02-22 16:23:01 +0000 UTC}] FailureMessage:} {TestID: Detail: SuiteID: Suites:[] Title: FullTitle: Name:test 2 Duration:200 FilePath: Line: Col: CurrentRetry:0 Status: DAG:[] Filelocator: BlocklistSource: Blocklisted:false StartTime:2021-02-22 16:22:05 +0000 UTC EndTime:2021-02-22 16:22:05.2 +0000 UTC Stats:[{Memory:100 CPU:25.4 Storage:250 RecordTime:2021-02-22 16:22:05 +0000 UTC}] FailureMessage:}]",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s, err := New(cfg, logger)
if err != nil {
t.Errorf("New() error = %v", err)
}
s.appendStatsToTests(tt.args.testResults, tt.args.processStats)
got := fmt.Sprintf("%+v", tt.args.testResults)
if got != tt.want {
t.Errorf("ProcStats.appendStatsToTests() = \n%v\nwant: \n%v", got, tt.want)
}
})
}
}
func TestProcStats_appendStatsToTestSuites(t *testing.T) {
cfg, _ := testutils.GetConfig()
logger, _ := testutils.GetLogger()
timeMap := getDummyTimeMap()
type args struct {
testSuiteResults []core.TestSuitePayload
processStats []*procfs.Stats
}
tests := []struct {
name string
args args
want string
}{
{"Test appendStatsToTestSuites",
args{[]core.TestSuitePayload{
{SuiteID: "testSuite1", StartTime: timeMap["tpast1"], EndTime: timeMap["tfuture1"], TotalTests: 2},
},
[]*procfs.Stats{}},
"[{SuiteID:testSuite1 SuiteName: ParentSuiteID: BlocklistSource: Blocklisted:false StartTime:2021-02-22 16:23:01 +0000 UTC EndTime:2021-02-22 16:23:01 +0000 UTC Duration:0 Status: Stats:[] TotalTests:2}]", // nolint
},
{"Test appendStatsToTestSuites",
args{[]core.TestSuitePayload{
{
SuiteID: "testSuite2",
StartTime: timeMap["tpast1"],
Duration: 100,
EndTime: timeMap["tfuture1"],
Stats: []core.TestProcessStats{},
TotalTests: 3,
},
{
SuiteID: "testSuite3",
StartTime: timeMap["tpast2"],
Duration: 200,
EndTime: timeMap["tfuture2"],
Stats: []core.TestProcessStats{{Memory: 100, CPU: 25.4, Storage: 250, RecordTime: timeMap["tpast2"]}},
TotalTests: 5,
},
},
[]*procfs.Stats{
{
CPUPercentage: 1.2,
MemPercentage: 14.1,
MemShared: 105.0,
MemSwapped: 25,
MemConsumed: 131,
RecordTime: timeMap["tpast1"],
},
{
CPUPercentage: 1.25,
MemPercentage: 14.15,
MemShared: 107.0,
MemSwapped: 25,
MemConsumed: 131,
RecordTime: timeMap["tfuture1"],
},
{
CPUPercentage: 1.25,
MemPercentage: 14.15,
MemShared: 107.0,
MemSwapped: 25,
MemConsumed: 131,
RecordTime: timeMap["tfuture2"],
},
},
},
"[{SuiteID:testSuite2 SuiteName: ParentSuiteID: BlocklistSource: Blocklisted:false StartTime:2021-02-22 16:23:01 +0000 UTC EndTime:2021-02-22 16:23:01.1 +0000 UTC Duration:100 Status: Stats:[{Memory:131 CPU:1.2 Storage:0 RecordTime:2021-02-22 16:23:01 +0000 UTC}] TotalTests:3} {SuiteID:testSuite3 SuiteName: ParentSuiteID: BlocklistSource: Blocklisted:false StartTime:2021-02-22 16:22:05 +0000 UTC EndTime:2021-02-22 16:22:05.2 +0000 UTC Duration:200 Status: Stats:[{Memory:100 CPU:25.4 Storage:250 RecordTime:2021-02-22 16:22:05 +0000 UTC}] TotalTests:5}]", //nolint
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s, err := New(cfg, logger)
if err != nil {
t.Errorf("New() error = %v", err)
}
s.appendStatsToTestSuites(tt.args.testSuiteResults, tt.args.processStats)
got := fmt.Sprintf("%+v", tt.args.testSuiteResults)
if got != tt.want {
t.Errorf("ProcStats.appendStatsToTestSuites() = \n%v\nwant: \n%v", got, tt.want)
}
})
}
}
================================================
FILE: pkg/synapse/synapse.go
================================================
package synapse
import (
"context"
"encoding/json"
"fmt"
"sync"
"time"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/tasconfigdownloader"
"github.com/cenkalti/backoff/v4"
"github.com/denisbrodbeck/machineid"
"github.com/gorilla/websocket"
"github.com/spf13/viper"
)
// All constants related to synapse
const (
Repo = "repo"
BuildID = "build-id"
JobID = "job-id"
Mode = "mode"
ID = "id"
DuplicateConnectionErr = "Synapse already has an open connection"
AuthenticationFailed = "Synapse authentication failed"
duplicateConnectionSleepDuration = 15 * time.Second
)
var buildAbortMap = make(map[string]bool)
type synapse struct {
conn *websocket.Conn
runner core.DockerRunner
secretsManager core.SecretsManager
logger lumber.Logger
MsgErrChan chan struct{}
MsgChan chan []byte
ConnectionAborted chan struct{}
InvalidConnectionRequest chan struct{}
LogoutRequired bool
tasConfigDownloader *tasconfigdownloader.TASConfigDownloader
}
// New returns a new instance of synapse
func New(
runner core.DockerRunner,
logger lumber.Logger,
secretsManager core.SecretsManager,
tasConfigDownloader *tasconfigdownloader.TASConfigDownloader,
) core.SynapseManager {
return &synapse{
runner: runner,
logger: logger,
secretsManager: secretsManager,
MsgErrChan: make(chan struct{}),
InvalidConnectionRequest: make(chan struct{}),
MsgChan: make(chan []byte, 1024),
ConnectionAborted: make(chan struct{}, 10),
LogoutRequired: true,
tasConfigDownloader: tasConfigDownloader,
}
}
func (s *synapse) InitiateConnection(
ctx context.Context,
wg *sync.WaitGroup,
connectionFailed chan struct{}) {
defer wg.Done()
go s.openAndMaintainConnection(ctx, connectionFailed)
<-ctx.Done()
if s.LogoutRequired {
s.logout()
}
s.runner.KillRunningDocker(context.TODO())
s.logger.Debugf("exiting synapse")
}
/*
openAndMaintainConnection tries to create and maintain the connection,
retrying with an exponential backoff
*/
func (s *synapse) openAndMaintainConnection(ctx context.Context, connectionFailed chan struct{}) {
// setup exponential backoff for retrying control websocket connection
exponentialBackoff := backoff.NewExponentialBackOff()
exponentialBackoff.InitialInterval = 500 * time.Millisecond
exponentialBackoff.RandomizationFactor = 0.05
exponentialBackoff.MaxElapsedTime = 10 * time.Minute
s.logger.Debugf("starting socket connection at URL %s", global.SocketURL[viper.GetString("env")])
operation := func() error {
s.logger.Debugf("trying to connect to TAS server")
select {
case <-ctx.Done():
return nil
default:
conn, _, err := websocket.DefaultDialer.Dial(global.SocketURL[viper.GetString("env")], nil)
if err != nil {
s.logger.Errorf("error connecting synapse to TAS %+v", err)
return err
}
s.conn = conn
s.logger.Debugf("synapse connected to TAS server")
s.login()
if !s.connectionHandler(ctx, conn, connectionFailed) {
return nil
}
s.MsgErrChan = make(chan struct{})
// re-listen for any connection breaks
go s.openAndMaintainConnection(ctx, connectionFailed)
return nil
}
}
if err := backoff.Retry(operation, exponentialBackoff); err != nil {
s.logger.Errorf("Unable to establish connection with lambdatest server. exiting...")
connectionFailed <- struct{}{}
s.LogoutRequired = false
}
}
/*
connectionHandler handles the connection by listening for connection closure;
it returns a boolean indicating whether a reconnection can be retried
*/
func (s *synapse) connectionHandler(ctx context.Context, conn *websocket.Conn, connectionFailed chan struct{}) bool {
normalCloser := make(chan struct{})
ctxDone := false
defer func() {
// if gracefully terminated, wait for logout message to be sent
if !ctxDone {
conn.Close()
}
s.ConnectionAborted <- struct{}{}
}()
go s.messageReader(normalCloser, conn)
go s.messageWriter(conn)
select {
case <-ctx.Done():
ctxDone = true
return false
case <-normalCloser:
return false
case <-s.InvalidConnectionRequest:
connectionFailed <- struct{}{}
s.LogoutRequired = false
return false
case <-s.MsgErrChan:
s.logger.Errorf("Connection between synapse and lambdatest broke")
return true
}
}
/*
messageReader reads websocket messages and acts upon it
*/
func (s *synapse) messageReader(normalCloser chan struct{}, conn *websocket.Conn) {
conn.SetReadLimit(global.MaxMessageSize)
if err := conn.SetReadDeadline(time.Now().Add(global.PingWait)); err != nil {
s.logger.Errorf("Error in setting read deadline , error: %v", err)
s.MsgErrChan <- struct{}{}
close(s.MsgErrChan)
return
}
conn.SetPingHandler(func(string) error {
if err := conn.WriteMessage(websocket.PongMessage, nil); err != nil {
s.logger.Errorf("Error in writing pong msg , error: %v", err)
return err
}
if err := conn.SetReadDeadline(time.Now().Add(global.PingWait)); err != nil {
s.logger.Errorf("Error in setting read deadline , error: %v", err)
return err
}
return nil
})
duplicateConnectionChan := make(chan struct{})
for {
select {
case <-duplicateConnectionChan:
s.logger.Errorf("Duplicate connection detected, will retry after %v", duplicateConnectionSleepDuration)
time.Sleep(duplicateConnectionSleepDuration)
s.MsgErrChan <- struct{}{}
close(s.MsgErrChan)
close(duplicateConnectionChan)
return
default:
_, msg, err := conn.ReadMessage()
if err != nil {
if websocket.IsCloseError(err, websocket.CloseNormalClosure) {
s.logger.Debugf("Normal closure occurred...........")
normalCloser <- struct{}{}
return
}
s.logger.Errorf("disconnecting from lambdatest server. error in reading message %v", err)
s.MsgErrChan <- struct{}{}
close(s.MsgErrChan)
return
}
s.processMessage(msg, duplicateConnectionChan)
}
}
}
// processMessage processes messages received via websocket
func (s *synapse) processMessage(msg []byte, duplicateConnectionChan chan struct{}) {
var message core.Message
err := json.Unmarshal(msg, &message)
if err != nil {
s.logger.Errorf("error unmarshaling message")
}
switch message.Type {
case core.MsgError:
s.logger.Debugf("error message received from server")
go s.processErrorMessage(message, duplicateConnectionChan)
case core.MsgInfo:
s.logger.Debugf("info message received from server")
case core.MsgTask:
s.logger.Debugf("task message received from server")
go s.processTask(message)
case core.MsgYMLParsingRequest:
s.logger.Debugf("yml parsing request received from server")
go s.processYMLParsingRequest(message)
case core.MsgBuildAbort:
s.logger.Debugf("abort-build message received from server")
go s.processAbortBuild(message)
default:
s.logger.Errorf("message type not found")
}
}
// processErrorMessage handles error messages
func (s *synapse) processErrorMessage(message core.Message, duplicateConnectionChan chan struct{}) {
errMsg := string(message.Content)
s.logger.Errorf("error message received from server, error %s ", errMsg)
if errMsg == AuthenticationFailed {
s.InvalidConnectionRequest <- struct{}{}
}
if errMsg == DuplicateConnectionErr {
duplicateConnectionChan <- struct{}{}
}
}
// processAbortBuild handles aborting a running build
func (s *synapse) processAbortBuild(message core.Message) {
buildID := string(message.Content)
buildAbortMap[buildID] = true
s.logger.Debugf("message received to abort build %s", buildID)
if err := s.runner.KillContainerForBuildID(buildID); err != nil {
s.logger.Errorf("error while terminating container for buildID: %s, error: %v", buildID, err)
return
}
}
// processTask handles task type message
func (s *synapse) processTask(message core.Message) {
var runnerOpts core.RunnerOptions
err := json.Unmarshal(message.Content, &runnerOpts)
if err != nil {
s.logger.Errorf("error unmarshaling task message into core.RunnerOptions, error %v", err)
}
// sending job started updates
if runnerOpts.PodType == core.NucleusPod {
jobInfo := CreateJobInfo(core.JobStarted, &runnerOpts, "")
s.logger.Infof("Sending update to neuron %+v", jobInfo)
resourceStatsMessage := CreateJobUpdateMessage(jobInfo)
s.writeMessageToBuffer(&resourceStatsMessage)
}
// mounting secrets to container
runnerOpts.HostVolumePath = fmt.Sprintf("/tmp/synapse/data/%s", runnerOpts.ContainerName)
s.runAndUpdateJobStatus(&runnerOpts)
}
// runAndUpdateJobStatus initiates the job and sends its status updates
func (s *synapse) runAndUpdateJobStatus(runnerOpts *core.RunnerOptions) {
// starting container
statusChan := make(chan core.ContainerStatus)
defer close(statusChan)
s.logger.Debugf("starting container %s for build %s...", runnerOpts.ContainerName, runnerOpts.Label[BuildID])
go s.runner.Initiate(context.TODO(), runnerOpts, statusChan)
status := <-statusChan
// post job completion steps
s.logger.Debugf("jobID %s, buildID %s status %+v", runnerOpts.Label[JobID], runnerOpts.Label[BuildID], status)
s.sendResourceUpdates(core.ResourceRelease, runnerOpts, runnerOpts.Label[JobID], runnerOpts.Label[BuildID])
jobStatus := core.JobFailed
if status.Done {
jobStatus = core.JobCompleted
}
if buildAbortMap[runnerOpts.Label[BuildID]] {
jobStatus = core.JobAborted
}
jobInfo := CreateJobInfo(jobStatus, runnerOpts, status.Error.Message)
s.logger.Infof("Sending update to neuron %+v", jobInfo)
resourceStatsMessage := CreateJobUpdateMessage(jobInfo)
s.writeMessageToBuffer(&resourceStatsMessage)
}
// login writes a login message to the lambdatest server
func (s *synapse) login() {
cpu, ram := s.runner.GetInfo(context.TODO())
id, err := machineid.ProtectedID("synapaseMeta")
if err != nil {
s.logger.Fatalf("Error while generating unique machine id, error: %v", err)
}
lambdatestConfig := s.secretsManager.GetLambdatestSecrets()
loginDetails := core.LoginDetails{
Name: s.secretsManager.GetSynapseName(),
SecretKey: lambdatestConfig.SecretKey,
CPU: cpu,
RAM: ram,
SynapseID: id,
SynapseVersion: global.SynapseBinaryVersion,
}
s.logger.Infof("Login synapse with id %s", loginDetails.SynapseID)
loginMessage := CreateLoginMessage(loginDetails)
s.writeMessageToBuffer(&loginMessage)
}
// logout writes a logout message to the lambdatest server
func (s *synapse) logout() {
s.logger.Infof("Logging out from lambdatest server")
logoutMessage := CreateLogoutMessage()
messageJSON, err := json.Marshal(logoutMessage)
if err != nil {
s.logger.Errorf("error marshaling message")
return
}
if err := s.conn.WriteMessage(websocket.TextMessage, messageJSON); err != nil {
s.logger.Errorf("error sending message to the server, error %v", err)
}
}
// sendResourceUpdates sends resource status of synapse
func (s *synapse) sendResourceUpdates(
status core.StatType,
runnerOpts *core.RunnerOptions,
jobID, buildID string,
) {
specs := GetResources(runnerOpts.Tier)
resourceStats := core.ResourceStats{
Status: status,
CPU: specs.CPU,
RAM: specs.RAM,
}
s.logger.Debugf("sending resource update for jobID %s buildID %s to lambdatest %+v", jobID, buildID, resourceStats)
resourceStatsMessage := CreateResourceStatsMessage(resourceStats)
s.writeMessageToBuffer(&resourceStatsMessage)
}
// writeMessageToBuffer writes a message to the buffer channel
func (s *synapse) writeMessageToBuffer(message *core.Message) {
messageJSON, err := json.Marshal(message)
if err != nil {
s.logger.Errorf("error marshaling message")
return
}
s.MsgChan <- messageJSON
}
// messageWriter writes messages to the open websocket
func (s *synapse) messageWriter(conn *websocket.Conn) {
for {
select {
case <-s.ConnectionAborted:
return
case messageJSON := <-s.MsgChan:
if err := conn.WriteMessage(websocket.TextMessage, messageJSON); err != nil {
s.logger.Errorf("error sending message to the server, error %v", err)
s.MsgChan <- messageJSON
s.MsgErrChan <- struct{}{}
close(s.MsgErrChan)
return
}
}
}
}
func (s *synapse) processYMLParsingRequest(message core.Message) {
var parsingReqMsg core.YMLParsingRequestMessage
var writeMsg core.Message
defer s.writeMessageToBuffer(&writeMsg)
if err := json.Unmarshal(message.Content, &parsingReqMsg); err != nil {
s.logger.Errorf("error in unmarshaling message for yml parsing request, error %v", err)
writeMsg = createYMlParsingResultMessage(core.YMLParsingResultMessage{
OrgID: parsingReqMsg.OrgID,
BuildID: parsingReqMsg.BuildID,
ErrorMsg: err.Error(),
})
return
}
oauth := s.secretsManager.GetOauthToken()
tasOutput, err := s.tasConfigDownloader.GetTASConfig(context.TODO(), parsingReqMsg.GitProvider,
parsingReqMsg.CommitID,
parsingReqMsg.RepoSlug, parsingReqMsg.TasFileName, oauth,
parsingReqMsg.Event, parsingReqMsg.LicenseTier)
if err != nil {
s.logger.Errorf("error occurred while fetching tas config file for buildID %s orgID %s, error %v",
parsingReqMsg.BuildID, parsingReqMsg.OrgID, err)
writeMsg = createYMlParsingResultMessage(core.YMLParsingResultMessage{
OrgID: parsingReqMsg.OrgID,
BuildID: parsingReqMsg.BuildID,
ErrorMsg: err.Error(),
})
return
}
writeMsg = createYMlParsingResultMessage(core.YMLParsingResultMessage{
OrgID: parsingReqMsg.OrgID,
BuildID: parsingReqMsg.BuildID,
YMLOutput: *tasOutput,
})
}
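The `writeMessageToBuffer`/`messageWriter` pair above implements a single-writer pattern: producers buffer marshaled messages on a channel, and one goroutine drains the channel onto the websocket. A minimal standalone sketch of that pattern, with a string channel standing in for the websocket connection (an assumption for illustration only):

```go
package main

import "fmt"

// messageWriter drains the message channel onto the output sink until
// the abort channel is closed, mirroring the select loop above.
func messageWriter(abort <-chan struct{}, msgs <-chan string, sink chan<- string) {
	for {
		select {
		case <-abort:
			return
		case m := <-msgs:
			sink <- m
		}
	}
}

func main() {
	abort := make(chan struct{})
	msgs := make(chan string, 4) // buffered, like MsgChan
	sink := make(chan string, 4) // stand-in for the websocket
	msgs <- "resource-stats"
	go messageWriter(abort, msgs, sink)
	fmt.Println(<-sink) // resource-stats
	close(abort)
}
```

Because only the writer goroutine touches the connection, producers never race on the websocket; they only contend on the channel.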
================================================
FILE: pkg/synapse/utils.go
================================================
package synapse
import (
"encoding/json"
"github.com/LambdaTest/test-at-scale/pkg/core"
)
// CreateLoginMessage creates message of type login
func CreateLoginMessage(loginDetails core.LoginDetails) core.Message {
loginDetailsJSON, err := json.Marshal(loginDetails)
if err != nil {
return core.Message{}
}
return core.Message{
Type: core.MsgLogin,
Content: loginDetailsJSON,
Success: true,
}
}
// CreateLogoutMessage creates message of type logout
func CreateLogoutMessage() core.Message {
return core.Message{
Type: core.MsgLogout,
Content: []byte(""),
Success: true,
}
}
// CreateJobInfo creates jobInfo based on status and runner
func CreateJobInfo(status core.StatusType, runnerOpts *core.RunnerOptions, message string) core.JobInfo {
jobInfo := core.JobInfo{
Status: status,
JobID: runnerOpts.Label[JobID],
BuildID: runnerOpts.Label[BuildID],
ID: runnerOpts.Label[ID],
Mode: runnerOpts.Label[Mode],
Message: message,
}
return jobInfo
}
// CreateJobUpdateMessage creates message of type job updates
func CreateJobUpdateMessage(jobInfo core.JobInfo) core.Message {
jobInfoJSON, err := json.Marshal(jobInfo)
if err != nil {
return core.Message{}
}
return core.Message{
Type: core.MsgJobInfo,
Content: jobInfoJSON,
Success: true,
}
}
// CreateResourceStatsMessage creates message of type resource stats
func CreateResourceStatsMessage(resourceStats core.ResourceStats) core.Message {
resourceStatsJSON, err := json.Marshal(resourceStats)
if err != nil {
return core.Message{}
}
return core.Message{
Type: core.MsgResourceStats,
Content: resourceStatsJSON,
Success: true,
}
}
// GetResources returns the resource specs for the given tier; unknown tiers get zeroed specs
func GetResources(tierOpts core.Tier) core.Specs {
if val, ok := core.TierOpts[tierOpts]; ok {
return val
}
return core.Specs{CPU: 0, RAM: 0}
}
// createYMlParsingResultMessage creates message for YML parsing result
func createYMlParsingResultMessage(ymlParsingOutput core.YMLParsingResultMessage) core.Message {
ymlParsingOutputJSON, err := json.Marshal(ymlParsingOutput)
if err != nil {
return core.Message{}
}
return core.Message{
Type: core.MsgYMLParsingResult,
Content: ymlParsingOutputJSON,
Success: true,
}
}
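All the `Create*Message` helpers above follow one envelope pattern: marshal a payload and wrap the bytes in a typed `core.Message`, returning an empty message when marshaling fails. A self-contained sketch of that pattern with local stand-in types (the `Message` struct here is an assumption for illustration, not the real `core.Message`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Message is a local stand-in for core.Message.
type Message struct {
	Type    string          `json:"type"`
	Content json.RawMessage `json:"content"`
	Success bool            `json:"success"`
}

// newMessage wraps any payload into a typed Message envelope,
// returning a zero Message if marshaling fails (mirroring the
// helpers above, which swallow the marshal error).
func newMessage(msgType string, payload interface{}) Message {
	content, err := json.Marshal(payload)
	if err != nil {
		return Message{}
	}
	return Message{Type: msgType, Content: content, Success: true}
}

func main() {
	m := newMessage("login", map[string]int{"CPU": 4, "RAM": 4096})
	fmt.Println(m.Type, m.Success, string(m.Content))
}
```

Keeping `Content` as raw JSON lets the receiver dispatch on `Type` first and unmarshal the payload into the matching struct afterwards.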
================================================
FILE: pkg/synapse/utils_test.go
================================================
package synapse
import (
"encoding/json"
"testing"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/stretchr/testify/assert"
)
func TestCreateLoginMessage(t *testing.T) {
loginDetails := core.LoginDetails{
SecretKey: "dummysecretkey",
CPU: 4,
RAM: 4096,
}
loginMessage := CreateLoginMessage(loginDetails)
loginDetailsJSON, err := json.Marshal(loginDetails)
if err != nil {
t.Errorf("error in marshaling login details: %v", err)
}
assert.Equal(t, loginDetailsJSON, loginMessage.Content)
assert.Equal(t, core.MsgLogin, loginMessage.Type)
}
func TestCreateLogoutMessage(t *testing.T) {
logoutMessage := CreateLogoutMessage()
assert.Empty(t, logoutMessage.Content)
assert.Equal(t, core.MsgLogout, logoutMessage.Type)
}
func TestCreateJobUpdateMessage(t *testing.T) {
jobInfo := core.JobInfo{
Status: core.JobCompleted,
JobID: "dummyjobid",
ID: "dummyid",
Mode: "nucleus",
BuildID: "dummybuildid",
}
jobInfoMessage := CreateJobUpdateMessage(jobInfo)
jobInfoJSON, err := json.Marshal(jobInfo)
if err != nil {
t.Errorf("error in marshaling job info: %v", err)
}
assert.Equal(t, jobInfoJSON, jobInfoMessage.Content)
assert.Equal(t, core.MsgJobInfo, jobInfoMessage.Type)
}
func TestCreateResourceStatsMessage(t *testing.T) {
resourceStats := core.ResourceStats{
Status: core.ResourceRelease,
CPU: 2,
RAM: 2000,
}
resourceStatsMessage := CreateResourceStatsMessage(resourceStats)
resourceStatsJSON, err := json.Marshal(resourceStats)
if err != nil {
t.Errorf("error in marshaling job info: %v", err)
}
assert.Equal(t, resourceStatsJSON, resourceStatsMessage.Content)
assert.Equal(t, core.MsgResourceStats, resourceStatsMessage.Type)
}
================================================
FILE: pkg/tasconfigdownloader/setup.go
================================================
package tasconfigdownloader
import (
"context"
"errors"
"fmt"
"os"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/gitmanager"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/tasconfigmanager"
)
const ymlVersionMismtachRemarks = "the yml structure is invalid, please check the TAS yml documentation: %s"
type TASConfigDownloader struct {
logger lumber.Logger
gitmanager core.GitManager
tasconfigmanager core.TASConfigManager
}
func New(logger lumber.Logger) *TASConfigDownloader {
return &TASConfigDownloader{
logger: logger,
gitmanager: gitmanager.NewGitManager(logger, nil),
tasconfigmanager: tasconfigmanager.NewTASConfigManager(logger),
}
}
func (t *TASConfigDownloader) GetTASConfig(ctx context.Context, gitProvider, commitID, repoSlug,
filePath string, oauth *core.Oauth, eventType core.EventType, licenseTier core.Tier) (*core.TASConfigDownloaderOutput, error) {
ymlPath, err := t.gitmanager.DownloadFileByCommit(ctx, gitProvider, repoSlug, commitID, filePath, oauth)
if err != nil {
t.logger.Errorf("error occurred while downloading file %s from %s for commitID %s, error %v", filePath, repoSlug, commitID, err)
return nil, err
}
version, err := t.tasconfigmanager.GetVersion(ymlPath)
if err != nil {
t.logger.Errorf("error reading version for tas config file %s, error %v", ymlPath, err)
return nil, err
}
tasConfig, err := t.tasconfigmanager.LoadAndValidate(ctx, version, ymlPath, eventType, licenseTier, filePath)
if err != nil {
if supportedVersion := t.checkYmlValidityForOtherVersion(ctx, version, ymlPath, eventType,
licenseTier, filePath); supportedVersion != -1 {
errMsg := fmt.Sprintf(ymlVersionMismtachRemarks, global.TASYmlConfigurationDocLink)
t.logger.Errorf("error while parsing yml for commitID %s, error: %s", commitID, errMsg)
return nil, errors.New(errMsg)
}
t.logger.Errorf("error while parsing yml for commitID %s error %v", commitID, err)
return nil, err
}
if err := os.Remove(ymlPath); err != nil {
t.logger.Errorf("failed to delete file %s, error %v", ymlPath, err)
return nil, err
}
return &core.TASConfigDownloaderOutput{Version: version, TASConfig: tasConfig}, nil
}
func (t *TASConfigDownloader) checkYmlValidityForOtherVersion(ctx context.Context,
version int,
ymlPath string,
eventType core.EventType,
licenseTier core.Tier, filePath string) int {
for _, supportedVersion := range global.ValidYMLVersions {
if version == supportedVersion {
continue
}
if _, err := t.tasconfigmanager.LoadAndValidate(ctx, supportedVersion, ymlPath, eventType, licenseTier, filePath); err == nil {
return supportedVersion
}
}
return -1
}
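`checkYmlValidityForOtherVersion` above powers a more helpful error: when the declared YML version fails validation, the downloader probes the remaining supported versions, and a hit suggests the user declared the wrong version rather than wrote an invalid file. A standalone sketch of that probe (the `validate` stub is a hypothetical stand-in for `tasconfigmanager.LoadAndValidate`):

```go
package main

import "fmt"

// validYMLVersions is a local stand-in for global.ValidYMLVersions.
var validYMLVersions = []int{1, 2}

// validate is a stub: pretend the file only parses as version 2.
func validate(version int) error {
	if version == 2 {
		return nil
	}
	return fmt.Errorf("invalid structure for version %d", version)
}

// otherValidVersion mirrors checkYmlValidityForOtherVersion: skip the
// declared version and probe the rest; -1 means no other version fits.
func otherValidVersion(declared int) int {
	for _, v := range validYMLVersions {
		if v == declared {
			continue
		}
		if validate(v) == nil {
			return v
		}
	}
	return -1
}

func main() {
	fmt.Println(otherValidVersion(1)) // file validates as v2, so 2
	fmt.Println(otherValidVersion(2)) // no other version matches, so -1
}
```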
================================================
FILE: pkg/tasconfigmanager/setup.go
================================================
// Package tasconfigmanager is used for fetching and validating the tas config file
package tasconfigmanager
import (
"context"
"errors"
"fmt"
"os"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/utils"
)
const packageJSON = "package.json"
var tierEnumMapping = map[core.Tier]int{
core.XSmall: 1,
core.Small: 2,
core.Medium: 3,
core.Large: 4,
core.XLarge: 5,
}
// tasConfigManager represents an instance of TASConfigManager
type tasConfigManager struct {
logger lumber.Logger
}
// NewTASConfigManager creates and returns a new TASConfigManager instance
func NewTASConfigManager(logger lumber.Logger) core.TASConfigManager {
return &tasConfigManager{logger: logger}
}
func (tc *tasConfigManager) LoadAndValidate(ctx context.Context,
version int,
path string,
eventType core.EventType,
licenseTier core.Tier, tasFilePathInRepo string) (interface{}, error) {
yamlFile, err := os.ReadFile(path)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
return nil, errs.New(fmt.Sprintf("Configuration file not found at path: %s", tasFilePathInRepo))
}
tc.logger.Errorf("Error while reading file, error %v", err)
return nil, errs.New(fmt.Sprintf("Error while reading configuration file at path: %s", tasFilePathInRepo))
}
if version < global.NewTASVersion {
return tc.validateYMLV1(ctx, yamlFile, eventType, licenseTier, tasFilePathInRepo)
}
return tc.validateYMLV2(ctx, yamlFile, eventType, licenseTier, tasFilePathInRepo)
}
func (tc *tasConfigManager) validateYMLV1(ctx context.Context,
yamlFile []byte,
eventType core.EventType,
licenseTier core.Tier,
filePath string) (*core.TASConfig, error) {
tasConfig, err := utils.ValidateStructTASYmlV1(ctx, yamlFile, filePath)
if err != nil {
return nil, err
}
if tasConfig.CoverageThreshold == nil {
tasConfig.CoverageThreshold = new(core.CoverageThreshold)
}
switch eventType {
case core.EventPullRequest:
if tasConfig.Premerge == nil {
return nil, errs.New(fmt.Sprintf("`preMerge` test cases are not configured in `%s` configuration file.", filePath))
}
case core.EventPush:
if tasConfig.Postmerge == nil {
return nil, errs.New(fmt.Sprintf("`postMerge` test cases are not configured in `%s` configuration file.", filePath))
}
}
if err := isValidLicenseTier(tasConfig.Tier, licenseTier); err != nil {
tc.logger.Errorf("LicenseTier validation failed. error: %v", err)
return nil, err
}
return tasConfig, nil
}
func isValidLicenseTier(yamlTier, licenseTier core.Tier) error {
if tierEnumMapping[yamlTier] > tierEnumMapping[licenseTier] {
return errs.New(
fmt.Sprintf(
"Sorry, the requested tier `%s` is not supported under the current plan. Please upgrade your plan.",
yamlTier))
}
return nil
}
func (tc *tasConfigManager) validateYMLV2(ctx context.Context,
yamlFile []byte,
eventType core.EventType,
licenseTier core.Tier,
yamlFilePath string) (*core.TASConfigV2, error) {
tasConfig, err := utils.ValidateStructTASYmlV2(ctx, yamlFile, yamlFilePath)
if err != nil {
return nil, err
}
if tasConfig.CoverageThreshold == nil {
tasConfig.CoverageThreshold = new(core.CoverageThreshold)
}
switch eventType {
case core.EventPullRequest:
if tasConfig.PreMerge == nil {
return nil, fmt.Errorf("`preMerge` is missing in tas configuration file %s", yamlFilePath)
}
subModuleMap := map[string]bool{}
for i := 0; i < len(tasConfig.PreMerge.SubModules); i++ {
if err := utils.ValidateSubModule(&tasConfig.PreMerge.SubModules[i]); err != nil {
return nil, err
}
if _, ok := subModuleMap[tasConfig.PreMerge.SubModules[i].Name]; ok {
return nil, fmt.Errorf("duplicate subModule name found in `preMerge` in tas configuration file %s", yamlFilePath)
}
subModuleMap[tasConfig.PreMerge.SubModules[i].Name] = true
}
case core.EventPush:
if tasConfig.PostMerge == nil {
return nil, fmt.Errorf("`postMerge` is missing in tas configuration file %s", yamlFilePath)
}
subModuleMap := map[string]bool{}
for i := 0; i < len(tasConfig.PostMerge.SubModules); i++ {
if err := utils.ValidateSubModule(&tasConfig.PostMerge.SubModules[i]); err != nil {
return nil, err
}
if _, ok := subModuleMap[tasConfig.PostMerge.SubModules[i].Name]; ok {
return nil, fmt.Errorf("duplicate subModule name found in `postMerge` in tas configuration file %s", yamlFilePath)
}
subModuleMap[tasConfig.PostMerge.SubModules[i].Name] = true
}
}
if err := isValidLicenseTier(tasConfig.Tier, licenseTier); err != nil {
tc.logger.Errorf("LicenseTier validation failed. error: %v", err)
return nil, err
}
return tasConfig, nil
}
func (tc *tasConfigManager) GetVersion(path string) (int, error) {
yamlFile, err := os.ReadFile(path)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
return 0, errs.New(fmt.Sprintf("Configuration file not found at path: %s", path))
}
tc.logger.Errorf("Error while reading file, error %v", err)
return 0, errs.New(fmt.Sprintf("Error while reading configuration file at path: %s", path))
}
versionYml, err := utils.GetVersion(yamlFile)
if err != nil {
tc.logger.Errorf("error while reading tas yml version : %v", err)
return 0, err
}
return versionYml, nil
}
func (tc *tasConfigManager) GetTasConfigFilePath(payload *core.Payload) (string, error) {
// load tas yaml file
filePath, err := utils.GetTASFilePath(payload.TasFileName)
if err != nil {
tc.logger.Errorf("Unable to load tas yaml file, error: %v", err)
return "", err
}
return filePath, nil
}
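`isValidLicenseTier` above compares tiers through `tierEnumMapping`, so the check reduces to a single integer comparison: a config may not request a tier ranked above the license's tier. A standalone sketch of the same idea (string keys here stand in for `core.Tier`):

```go
package main

import "fmt"

// tierRank is a local stand-in for tierEnumMapping: tiers form a
// total order, larger rank means more resources.
var tierRank = map[string]int{
	"xsmall": 1, "small": 2, "medium": 3, "large": 4, "xlarge": 5,
}

// isValidLicenseTier rejects a config tier ranked above the license tier.
func isValidLicenseTier(yamlTier, licenseTier string) error {
	if tierRank[yamlTier] > tierRank[licenseTier] {
		return fmt.Errorf("tier %q is not supported under the current plan", yamlTier)
	}
	return nil
}

func main() {
	fmt.Println(isValidLicenseTier("small", "medium")) // <nil>
	fmt.Println(isValidLicenseTier("large", "small") != nil)
}
```

Note that an unknown tier maps to rank 0, so it always passes; validating tier names is assumed to happen before this check.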
================================================
FILE: pkg/tasconfigmanager/setup_test.go
================================================
package tasconfigmanager
import (
"context"
"fmt"
"path"
"reflect"
"testing"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/stretchr/testify/assert"
)
func assertTasConfigV1(got, want *core.TASConfig) error {
if got.SmartRun != want.SmartRun {
return fmt.Errorf("Mismatch in smart run got %t, want %t", got.SmartRun, want.SmartRun)
}
if got.Framework != want.Framework {
return fmt.Errorf("Mismatch in framework , got %s , want %s", got.Framework, want.Framework)
}
if got.ConfigFile != want.ConfigFile {
return fmt.Errorf("Mismatch in configFile , got %s , want %s", got.ConfigFile, want.ConfigFile)
}
if got.NodeVersion != want.NodeVersion {
return fmt.Errorf("Mismatch in nodeVersion , got %s, want %s", got.NodeVersion, want.NodeVersion)
}
if got.Tier != want.Tier {
return fmt.Errorf("Mismatch in tier , got %s, want %s", got.Tier, want.Tier)
}
if got.SplitMode != want.SplitMode {
return fmt.Errorf("Mismatch in split mode , got %s, want %s", got.SplitMode, want.SplitMode)
}
if got.Version != want.Version {
return fmt.Errorf("Mismatch in version , got %s, want %s", got.Version, want.Version)
}
if !reflect.DeepEqual(*got.Premerge, *want.Premerge) {
return fmt.Errorf("Mismatch in pre merge pattern , got %+v, want %+v", *got.Premerge, *want.Premerge)
}
if !reflect.DeepEqual(*got.Postmerge, *want.Postmerge) {
return fmt.Errorf("Mismatch in post merge pattern , got %+v, want %+v", *got.Postmerge, *want.Postmerge)
}
if !reflect.DeepEqual(*got.Prerun, *want.Prerun) {
return fmt.Errorf("Mismatch in preRun , got %+v, want %+v", *got.Prerun, *want.Prerun)
}
if !reflect.DeepEqual(*got.Postrun, *want.Postrun) {
return fmt.Errorf("Mismatch in postRun , got %+v, want %+v", *got.Postrun, *want.Postrun)
}
return nil
}
func assertTasConfigV2(got, want *core.TASConfigV2) error {
if got.SmartRun != want.SmartRun {
return fmt.Errorf("Mismatch in smart run got %t, want %t", got.SmartRun, want.SmartRun)
}
if got.Tier != want.Tier {
return fmt.Errorf("Mismatch in tier , got %s, want %s", got.Tier, want.Tier)
}
if got.SplitMode != want.SplitMode {
return fmt.Errorf("Mismatch in split mode , got %s, want %s", got.SplitMode, want.SplitMode)
}
if got.Version != want.Version {
return fmt.Errorf("Mismatch in version , got %s, want %s", got.Version, want.Version)
}
if err := assertMergeV2(got.PreMerge, want.PreMerge, "preMerge"); err != nil {
return err
}
return assertMergeV2(got.PostMerge, want.PostMerge, "postMerge")
}
func assertMergeV2(got, want *core.MergeV2, mode string) error {
if !assert.ObjectsAreEqualValues(got.PreRun, want.PreRun) {
return fmt.Errorf("Mismatch in %s preRun , got %+v, want %+v", mode, got.PreRun, want.PreRun)
}
if !reflect.DeepEqual(got.EnvMap, want.EnvMap) {
return fmt.Errorf("Mismatch in %s env , got %+v, want %+v", mode, got.EnvMap, want.EnvMap)
}
return nil
}
func TestLoadAndValidateV1(t *testing.T) {
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
tasConfigManager := NewTASConfigManager(logger)
ctx := context.TODO()
tests := []struct {
Name string
FilePath string
EventType core.EventType
Tier core.Tier
want *core.TASConfig
wantErr error
}{
{
"Invalid yml file for tas version 1",
path.Join("../../", "testutils/testdata/tasyml/junk.yml"),
core.EventPush,
core.Small,
nil,
fmt.Errorf("`%s` configuration file contains invalid format. Please correct the `%s` file",
"../../testutils/testdata/tasyml/junk.yml",
"../../testutils/testdata/tasyml/junk.yml"),
},
{
"Valid Config",
path.Join("../../", "testutils/testdata/tasyml/validwithCacheKey.yml"),
core.EventPush,
core.Small,
&core.TASConfig{
SmartRun: true,
Framework: "jest",
Postmerge: &core.Merge{
EnvMap: map[string]string{"NODE_ENV": "development"},
Patterns: []string{"{packages,scripts}/**/__tests__/*{.js,.coffee,[!d].ts}"},
},
Premerge: &core.Merge{
EnvMap: map[string]string{"NODE_ENV": "development"},
Patterns: []string{"{packages,scripts}/**/__tests__/*{.js,.coffee,[!d].ts}"},
},
Prerun: &core.Run{EnvMap: map[string]string{"NODE_ENV": "development"}, Commands: []string{"yarn"}},
Postrun: &core.Run{Commands: []string{"node --version"}},
ConfigFile: "scripts/jest/config.source-www.js",
NodeVersion: "14.17.6",
Tier: "small",
SplitMode: core.TestSplit,
Version: "1.0",
Cache: &core.Cache{
Key: "xyz",
Paths: []string{"abcd"},
},
},
nil,
},
{
"PreMerge is empty in tas yml for PR",
path.Join("../../", "testutils/testdata/tasyml/pre_merge_emptyv1.yml"),
core.EventPullRequest,
core.Small,
nil,
fmt.Errorf("`preMerge` test cases are not configured in `%s` configuration file.",
"../../testutils/testdata/tasyml/pre_merge_emptyv1.yml"),
},
{
"post merge is empty in tas yml for push event ",
path.Join("../../", "testutils/testdata/tasyml/postmerge_emptyv1.yml"),
core.EventPush,
core.Small,
nil,
fmt.Errorf("`postMerge` test cases are not configured in `%s` configuration file.",
"../../testutils/testdata/tasyml/postmerge_emptyv1.yml"),
},
}
for _, tt := range tests {
tas, err := tasConfigManager.LoadAndValidate(ctx, 1, tt.FilePath, tt.EventType, core.Small, tt.FilePath)
if err != nil {
assert.Equal(t, tt.wantErr.Error(), err.Error(), "error mismatch")
} else {
tasConfig := tas.(*core.TASConfig)
err = assertTasConfigV1(tasConfig, tt.want)
if err != nil {
t.Error(err)
return
}
}
}
}
// nolint
func TestLoadAndValidateV2(t *testing.T) {
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
tasConfigManager := NewTASConfigManager(logger)
ctx := context.TODO()
tests := []struct {
Name string
FilePath string
EventType core.EventType
Tier core.Tier
want *core.TASConfigV2
wantErr error
}{
{
"Invalid yml file for tas version 2",
path.Join("../../", "testutils/testdata/tasyml/junk.yml"),
core.EventPush,
core.Small,
nil,
fmt.Errorf("`%s` configuration file contains invalid format. Please correct the `%s` file",
"../../testutils/testdata/tasyml/junk.yml",
"../../testutils/testdata/tasyml/junk.yml"),
},
{
"PreMerge is missing in tas yml file for pull_request event",
path.Join("../../", "testutils/testdata/tasyml/premerge_emptyv2.yaml"),
core.EventPullRequest,
core.Small,
nil,
fmt.Errorf("`preMerge` is missing in tas configuration file %s",
"../../testutils/testdata/tasyml/premerge_emptyv2.yaml"),
},
{
"PostMerge is missing in tas yml file for push event",
path.Join("../../", "testutils/testdata/tasyml/postmerge_emptyv2.yaml"),
core.EventPush,
core.Small,
nil,
fmt.Errorf("`postMerge` is missing in tas configuration file %s",
"../../testutils/testdata/tasyml/postmerge_emptyv2.yaml"),
},
{
"Duplicate submodule name in preMerge",
path.Join("../../", "testutils/testdata/tasyml/duplicate_submodule_premerge.yaml"),
core.EventPullRequest,
core.Small,
nil,
fmt.Errorf("duplicate subModule name found in `preMerge` in tas configuration file %s",
"../../testutils/testdata/tasyml/duplicate_submodule_premerge.yaml"),
},
{
"Duplicate submodule name in postMerge",
path.Join("../../", "testutils/testdata/tasyml/duplicate_submodule_postmerge.yaml"),
core.EventPush,
core.Small,
nil,
fmt.Errorf("duplicate subModule name found in `postMerge` in tas configuration file %s",
"../../testutils/testdata/tasyml/duplicate_submodule_postmerge.yaml"),
},
{
"Valid Config",
"../../testutils/testdata/tasyml/valid_with_cachekeyV2.yml",
core.EventPush,
core.Small,
&core.TASConfigV2{
SmartRun: true,
Tier: "small",
SplitMode: core.TestSplit,
PostMerge: &core.MergeV2{
SubModules: []core.SubModule{
{
Name: "some-module-1",
Path: "./somepath",
Patterns: []string{
"./x/y/z",
},
Framework: "mocha",
ConfigFile: "x/y/z",
},
},
},
PreMerge: &core.MergeV2{
SubModules: []core.SubModule{
{
Name: "some-module-1",
Path: "./somepath",
Patterns: []string{
"./x/y/z",
},
Framework: "jasmine",
ConfigFile: "/x/y/z",
},
},
},
Parallelism: 1,
Version: "2.0.1",
Cache: &core.Cache{
Key: "xyz",
Paths: []string{"abcd"},
},
},
nil,
},
}
for _, tt := range tests {
tas, err := tasConfigManager.LoadAndValidate(ctx, 2, tt.FilePath, tt.EventType, core.Small, tt.FilePath)
if err != nil {
assert.Equal(t, tt.wantErr.Error(), err.Error(), "error mismatch")
} else {
tasConfig := tas.(*core.TASConfigV2)
err = assertTasConfigV2(tasConfig, tt.want)
if err != nil {
t.Error(err)
return
}
}
}
}
================================================
FILE: pkg/task/task.go
================================================
package task
import (
"context"
"encoding/json"
"net/http"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/utils"
)
// task represents each instance of nucleus spawned by neuron
type task struct {
requests core.Requests
endpoint string
logger lumber.Logger
}
// New returns new task
func New(requests core.Requests, logger lumber.Logger) (core.Task, error) {
return &task{
requests: requests,
logger: logger,
endpoint: global.NeuronHost + "/task",
}, nil
}
func (t *task) UpdateStatus(ctx context.Context, payload *core.TaskPayload) error {
t.logger.Debugf("sending status update of task: %s to %s for repository: %s", payload.TaskID, payload.Status, payload.RepoLink)
reqBody, err := json.Marshal(payload)
if err != nil {
t.logger.Errorf("error while json marshal %v", err)
return err
}
query, headers := utils.GetDefaultQueryAndHeaders()
if _, _, err := t.requests.MakeAPIRequest(ctx, http.MethodPut, t.endpoint, reqBody, query, headers); err != nil {
return err
}
return nil
}
================================================
FILE: pkg/task/task_test.go
================================================
package task
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"testing"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/requestutils"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/cenkalti/backoff/v4"
)
var noContext = context.Background()
const (
taskE = "/task"
non200 = "non 200 status code"
)
func TestTask_UpdateStatus(t *testing.T) {
check := func(t *testing.T, st int) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != taskE {
t.Errorf("Expected to request '/task', got: %v", r.URL)
return
}
w.WriteHeader(st)
_, err := w.Write([]byte(`{"value":"fixed"}`))
if err != nil {
fmt.Printf("Could not write data in httptest server, error: %v", err)
}
}))
defer server.Close()
logger, err := lumber.NewLogger(lumber.LoggingConfig{ConsoleLevel: lumber.Debug}, true, 1)
requests := requestutils.New(logger, global.DefaultAPITimeout, &backoff.StopBackOff{})
if err != nil {
fmt.Println("Couldn't initialize logger")
}
taskPayload, err := testutils.GetTaskPayload()
if err != nil {
t.Errorf("Couldn't get task payload, received: %v", err)
}
_, err2 := New(requests, logger)
if err2 != nil {
t.Errorf("New task couldn't be initialized, received: %v", err2)
}
tk := &task{
requests: requests,
logger: logger,
endpoint: server.URL + taskE,
}
updateStatusErr := tk.UpdateStatus(noContext, taskPayload)
if st != 200 {
expectedErr := non200
if updateStatusErr == nil {
t.Errorf("Expected: %s, Received: %v", expectedErr, updateStatusErr)
}
return
}
if updateStatusErr != nil {
t.Errorf("Received: %v", updateStatusErr)
}
}
t.Run("TestUpdateStatus check for statusOK", func(t *testing.T) {
check(t, 200)
})
t.Run("TestUpdateStatus check for non statusOK", func(t *testing.T) {
check(t, 404)
})
}
func TestTask_UpdateStatusForError(t *testing.T) {
checkErr := func(t *testing.T, st int) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != taskE {
t.Errorf("Expected to request '/task', got: %v", r.URL)
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(st)
}))
defer server.Close()
logger, err := lumber.NewLogger(lumber.LoggingConfig{ConsoleLevel: lumber.Debug}, true, 1)
requests := requestutils.New(logger, global.DefaultAPITimeout, &backoff.StopBackOff{})
if err != nil {
fmt.Println("Couldn't initialize logger")
}
taskPayload, err := testutils.GetTaskPayload()
if err != nil {
t.Errorf("Couldn't get task payload, received: %v", err)
}
tk := &task{
requests: requests,
logger: logger,
endpoint: server.URL + taskE,
}
updateStatusErr := tk.UpdateStatus(noContext, taskPayload)
if st != 200 {
expectedErr := non200
if expectedErr != updateStatusErr.Error() {
t.Errorf("Expected: %s, Received: %s", expectedErr, updateStatusErr)
}
return
}
if updateStatusErr != nil {
t.Errorf("Received: %v", updateStatusErr)
}
}
t.Run("TestUpdateStatus check for error", func(t *testing.T) {
checkErr(t, 404) // statusNotFound
})
}
================================================
FILE: pkg/testdiscoveryservice/testdiscovery.go
================================================
// Package testdiscoveryservice is used for discovering tests
package testdiscoveryservice
import (
"context"
"encoding/json"
"net/http"
"os/exec"
"strings"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/logstream"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/utils"
)
type testDiscoveryService struct {
logger lumber.Logger
execManager core.ExecutionManager
tdResChan chan core.DiscoveryResult
requests core.Requests
discoveryEndpoint string
}
// NewTestDiscoveryService creates and returns a new testDiscoveryService instance
func NewTestDiscoveryService(ctx context.Context,
tdResChan chan core.DiscoveryResult,
execManager core.ExecutionManager,
requests core.Requests,
logger lumber.Logger) core.TestDiscoveryService {
return &testDiscoveryService{
logger: logger,
execManager: execManager,
tdResChan: tdResChan,
requests: requests,
discoveryEndpoint: global.NeuronHost + "/test-list",
}
}
func (tds *testDiscoveryService) Discover(ctx context.Context, discoveryArgs *core.DiscoveyArgs) (*core.DiscoveryResult, error) {
configFilePath, err := utils.GetConfigFileName(discoveryArgs.Payload.TasFileName)
if err != nil {
return nil, err
}
impactAll := tds.shouldImpactAll(discoveryArgs.SmartRun, configFilePath, discoveryArgs.Diff)
args := utils.GetArgs("discover", discoveryArgs.FrameWork, discoveryArgs.FrameWorkVersion,
discoveryArgs.TestConfigFile, discoveryArgs.TestPattern)
if !impactAll {
if len(discoveryArgs.Diff) == 0 && discoveryArgs.DiffExists {
// empty diff: e.g. a PR where a commit was added and then reverted, leaving an overall empty diff
args = append(args, global.ArgDiff)
} else {
for k, v := range discoveryArgs.Diff {
// skip removed files; only added or modified files are passed to discovery
if v != core.FileRemoved {
args = append(args, global.ArgDiff, k)
}
}
}
}
tds.logger.Debugf("Discovering tests at paths %+v", discoveryArgs.TestPattern)
cmd := exec.CommandContext(ctx, global.FrameworkRunnerMap[discoveryArgs.FrameWork], args...) //nolint:gosec
cmd.Dir = discoveryArgs.CWD
envVars, err := tds.execManager.GetEnvVariables(discoveryArgs.EnvMap, discoveryArgs.SecretData)
if err != nil {
tds.logger.Errorf("failed to parse env variables, error: %v", err)
return nil, err
}
cmd.Env = envVars
logWriter := lumber.NewWriter(tds.logger)
defer logWriter.Close()
maskWriter := logstream.NewMasker(logWriter, discoveryArgs.SecretData)
cmd.Stdout = maskWriter
cmd.Stderr = maskWriter
tds.logger.Debugf("Executing test discovery command: %s", cmd.String())
if err := cmd.Run(); err != nil {
tds.logger.Errorf("command %s of type %s failed with error: %v", cmd.String(), core.Discovery, err)
return nil, err
}
testDiscoveryResult := <-tds.tdResChan
return &testDiscoveryResult, nil
}
func (tds *testDiscoveryService) shouldImpactAll(smartRun bool, configFilePath string, diff map[string]int) bool {
impactAll := !smartRun
if _, ok := diff[configFilePath]; ok {
impactAll = true
}
for diffFile := range diff {
if strings.HasSuffix(diffFile, global.PackageJSON) {
impactAll = true
break
}
}
return impactAll
}
func (tds *testDiscoveryService) SendResult(ctx context.Context, testDiscoveryResult *core.DiscoveryResult) error {
reqBody, err := json.Marshal(testDiscoveryResult)
if err != nil {
tds.logger.Errorf("error while json marshal %v", err)
return err
}
query, headers := utils.GetDefaultQueryAndHeaders()
if _, _, err := tds.requests.MakeAPIRequest(ctx, http.MethodPost, tds.discoveryEndpoint, reqBody, query, headers); err != nil {
return err
}
return nil
}
================================================
FILE: pkg/testdiscoveryservice/testdiscovery_test.go
================================================
// Package testdiscoveryservice is used for discovering tests
package testdiscoveryservice
import (
"context"
"reflect"
"testing"
"github.com/LambdaTest/test-at-scale/mocks"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/requestutils"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/cenkalti/backoff/v4"
"github.com/stretchr/testify/mock"
)
type argsV1 struct {
ctx context.Context
discoveryArgs core.DiscoveyArgs
}
type testV1 struct {
name string
args argsV1
wantErr bool
wantEnvMap map[string]string
wantSecretData map[string]string
}
func Test_testDiscoveryService_Discover(t *testing.T) {
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
requests := requestutils.New(logger, global.DefaultAPITimeout, &backoff.StopBackOff{})
tdResChan := make(chan core.DiscoveryResult)
global.TestEnv = true
defer func() { global.TestEnv = false }()
var PassedEnvMap map[string]string // envMap which should be passed to call execManager.GetEnvVariables
var PassedSecretDataMap map[string]string // secretData map which should be passed to call execManager.GetEnvVariables
execManager := new(mocks.ExecutionManager)
execManager.On("GetEnvVariables", mock.AnythingOfType("map[string]string"), mock.AnythingOfType("map[string]string")).Return(
func(envMap, secretData map[string]string) []string {
PassedEnvMap = envMap
PassedSecretDataMap = secretData
return []string{"success", "ss"}
},
func(envMap, secretData map[string]string) error {
PassedEnvMap = envMap
PassedSecretDataMap = secretData
return nil
},
)
tds := NewTestDiscoveryService(context.TODO(), tdResChan, execManager, requests, logger)
tests := getTestCases()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := tds.Discover(tt.args.ctx, &tt.args.discoveryArgs)
if !reflect.DeepEqual(PassedEnvMap, tt.wantEnvMap) || !reflect.DeepEqual(PassedSecretDataMap, tt.wantSecretData) {
t.Errorf("expected Envmap: %+v, received: %+v\nexpected SecretDataMap: %+v, received: %+v\n",
tt.wantEnvMap, PassedEnvMap, tt.wantSecretData, PassedSecretDataMap)
}
if (err != nil) != tt.wantErr {
t.Errorf("got error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
func getTestCases() []*testV1 {
testCases := []*testV1{
{"Test Discover with Premerge pattern",
argsV1{
ctx: context.TODO(),
discoveryArgs: core.DiscoveyArgs{
TestPattern: []string{"./test/**/*.spec.ts"},
Payload: &core.Payload{
EventType: core.EventPullRequest,
					TasFileName: "../../testutils/testdata/tas.yaml",
},
EnvMap: map[string]string{"env": "repo"},
SecretData: map[string]string{"secret": "data"},
TestConfigFile: "",
FrameWork: "jest",
SmartRun: false,
Diff: map[string]int{},
DiffExists: true,
},
},
true,
map[string]string{"env": "repo"},
map[string]string{"secret": "data"},
},
{"Test Discover with Postmerge pattern",
argsV1{
ctx: context.TODO(),
discoveryArgs: core.DiscoveyArgs{
TestPattern: []string{"./test/**/*.spec.ts"},
EnvMap: map[string]string{"env": "RepoName"},
Payload: &core.Payload{
EventType: "push",
					TasFileName: "../../testutils/testdata/tas.yaml",
},
SecretData: map[string]string{"this is": "a secret"},
FrameWork: "mocha",
TestConfigFile: "",
SmartRun: false,
Diff: map[string]int{},
DiffExists: false,
},
},
true,
map[string]string{"env": "RepoName"},
map[string]string{"this is": "a secret"},
},
{"Test Discover not to execute discoverAll",
argsV1{
ctx: context.TODO(),
discoveryArgs: core.DiscoveyArgs{
TestPattern: []string{"./test/**/*.spec.ts"},
EnvMap: map[string]string{"env": "RepoName"},
Payload: &core.Payload{
EventType: "push",
					TasFileName: "../../testutils/testdata/tas.yaml",
ParentCommitCoverageExists: true,
},
SecretData: map[string]string{"secret": "data"},
FrameWork: "jasmine",
},
},
true,
map[string]string{"env": "RepoName"},
map[string]string{"secret": "data"},
},
}
return testCases
}
================================================
FILE: pkg/testexecutionservice/testexecution.go
================================================
// Package testexecutionservice is used for executing tests
package testexecutionservice
import (
"context"
"encoding/json"
"errors"
"io"
"net/http"
"os"
"os/exec"
"path/filepath"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/logstream"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/service/teststats"
"github.com/LambdaTest/test-at-scale/pkg/utils"
)
const locatorFile = "locators"
type testExecutionService struct {
logger lumber.Logger
azureClient core.AzureClient
cfg *config.NucleusConfig
ts *teststats.ProcStats
execManager core.ExecutionManager
requests core.Requests
serverEndpoint string
}
// NewTestExecutionService creates and returns a new TestExecutionService instance
func NewTestExecutionService(cfg *config.NucleusConfig,
requests core.Requests,
execManager core.ExecutionManager,
azureClient core.AzureClient,
ts *teststats.ProcStats,
logger lumber.Logger) core.TestExecutionService {
return &testExecutionService{cfg: cfg,
requests: requests,
serverEndpoint: global.NeuronHost + "/report",
execManager: execManager,
azureClient: azureClient,
ts: ts,
logger: logger}
}
// Run executes the test files
func (tes *testExecutionService) Run(ctx context.Context,
testExecutionArgs *core.TestExecutionArgs) (*core.ExecutionResults, error) {
azureReader, azureWriter := io.Pipe()
defer azureWriter.Close()
errChan := testExecutionArgs.LogWriterStrategy.Write(ctx, azureReader)
defer tes.closeAndWriteLog(azureWriter, errChan)
logWriter := lumber.NewWriter(tes.logger)
defer logWriter.Close()
multiWriter := io.MultiWriter(logWriter, azureWriter)
maskWriter := logstream.NewMasker(multiWriter, testExecutionArgs.SecretData)
args, err := tes.buildCmdArgs(ctx, testExecutionArgs.TestConfigFile,
testExecutionArgs.FrameWork, testExecutionArgs.FrameWorkVersion, testExecutionArgs.Payload, testExecutionArgs.TestPattern)
if err != nil {
return nil, err
}
payload := testExecutionArgs.Payload
collectCoverage := payload.CollectCoverage
commandArgs := args
envVars, err := tes.execManager.GetEnvVariables(testExecutionArgs.EnvMap, testExecutionArgs.SecretData)
if err != nil {
tes.logger.Errorf("failed to parse env variables, error: %v", err)
return nil, err
}
executionResults := &core.ExecutionResults{
TaskID: payload.TaskID,
BuildID: payload.BuildID,
RepoID: payload.RepoID,
OrgID: payload.OrgID,
CommitID: payload.BuildTargetCommit,
TaskType: payload.TaskType,
}
for i := 1; i <= tes.cfg.ConsecutiveRuns; i++ {
var cmd *exec.Cmd
if testExecutionArgs.FrameWork == "jasmine" || testExecutionArgs.FrameWork == "mocha" {
if collectCoverage {
cmd = exec.CommandContext(ctx, "nyc", commandArgs...)
} else {
cmd = exec.CommandContext(ctx, commandArgs[0], commandArgs[1:]...) //nolint:gosec
}
} else {
cmd = exec.CommandContext(ctx, commandArgs[0], commandArgs[1:]...) //nolint:gosec
if collectCoverage {
envVars = append(envVars, "TAS_COLLECT_COVERAGE=true")
}
}
cmd.Dir = testExecutionArgs.CWD
cmd.Env = envVars
cmd.Stdout = maskWriter
cmd.Stderr = maskWriter
tes.logger.Debugf("Executing test execution command: %s", cmd.String())
if err := cmd.Start(); err != nil {
tes.logger.Errorf("failed to execute test %s %v", cmd.String(), err)
return nil, err
}
pid := int32(cmd.Process.Pid)
tes.logger.Debugf("execution command started with pid %d", pid)
if err := tes.ts.CaptureTestStats(pid, tes.cfg.CollectStats); err != nil {
tes.logger.Errorf("failed to find process for command %s with pid %d %v", cmd.String(), pid, err)
return nil, err
}
err := cmd.Wait()
result := <-tes.ts.ExecutionResultOutputChannel
if err != nil {
tes.logger.Errorf("error in test execution: %+v", err)
			// return the error when result is nil so execution failures like heap out of memory are surfaced
if result == nil {
return nil, err
}
}
if result != nil {
executionResults.Results = append(executionResults.Results, result.Results...)
}
}
return executionResults, nil
}
func getPatternAndEnvV1(payload *core.Payload, tasConfig *core.TASConfig) (target []string, envMap map[string]string) {
if payload.EventType == core.EventPullRequest {
target = tasConfig.Premerge.Patterns
envMap = tasConfig.Premerge.EnvMap
} else {
target = tasConfig.Postmerge.Patterns
envMap = tasConfig.Postmerge.EnvMap
}
return target, envMap
}
func (tes *testExecutionService) SendResults(ctx context.Context,
payload *core.ExecutionResults) (resp *core.TestReportResponsePayload, err error) {
reqBody, err := json.Marshal(payload)
if err != nil {
tes.logger.Errorf("failed to marshal request body %v", err)
return nil, err
}
query, headers := utils.GetDefaultQueryAndHeaders()
respBody, _, err := tes.requests.MakeAPIRequest(ctx, http.MethodPost, tes.serverEndpoint, reqBody, query, headers)
if err != nil {
tes.logger.Errorf("error while sending reports %v", err)
return nil, err
}
err = json.Unmarshal(respBody, &resp)
if err != nil {
tes.logger.Errorf("failed to unmarshal response body %v", err)
return nil, err
}
if resp.TaskStatus == "" {
return nil, errors.New("empty task status")
}
return resp, nil
}
func (tes *testExecutionService) getLocatorsFile(ctx context.Context, locatorAddress string) (string, error) {
resp, err := tes.azureClient.FindUsingSASUrl(ctx, locatorAddress)
if err != nil {
tes.logger.Errorf("Error while downloading locatorFile, error %v", err)
return "", err
}
defer resp.Close()
locatorFilePath := filepath.Join(os.TempDir(), locatorFile)
out, err := os.Create(locatorFilePath)
if err != nil {
return "", err
}
defer out.Close()
if _, err := io.Copy(out, resp); err != nil {
return "", err
}
	return locatorFilePath, nil
}
func (tes *testExecutionService) closeAndWriteLog(azureWriter *io.PipeWriter, errChan <-chan error) {
azureWriter.Close()
if err := <-errChan; err != nil {
tes.logger.Errorf("failed to upload logs for test execution, error: %v", err)
}
}
func (tes *testExecutionService) buildCmdArgs(ctx context.Context,
testConfigFile string,
frameWork string,
frameworkVersion int,
payload *core.Payload,
target []string) ([]string, error) {
args := []string{global.FrameworkRunnerMap[frameWork]}
args = append(args, utils.GetArgs("execute", frameWork, frameworkVersion, testConfigFile, target)...)
if payload.LocatorAddress != "" {
		locatorFile, err := tes.getLocatorsFile(ctx, payload.LocatorAddress)
		if err != nil {
			tes.logger.Errorf("failed to get locator file, error: %v", err)
			return nil, err
		}
		tes.logger.Debugf("locators: %v", locatorFile)
args = append(args, global.ArgLocator, locatorFile)
}
return args, nil
}
================================================
FILE: pkg/testexecutionservice/testexecution_test.go
================================================
// Package testexecutionservice is used for executing tests
package testexecutionservice
import (
"context"
"io"
"io/ioutil"
"reflect"
"strings"
"testing"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/mocks"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/pkg/requestutils"
"github.com/LambdaTest/test-at-scale/pkg/service/teststats"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/cenkalti/backoff/v4"
"github.com/stretchr/testify/mock"
)
// These tests are meant to be run on a Linux machine
func TestNewTestExecutionService(t *testing.T) {
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialise logger, error: %v", err)
}
cfg := new(config.NucleusConfig)
cfg.ConsecutiveRuns = 1
cfg.CollectStats = true
var ts *teststats.ProcStats
azureClient := new(mocks.AzureClient)
execManager := new(mocks.ExecutionManager)
requests := requestutils.New(logger, global.DefaultAPITimeout, &backoff.StopBackOff{})
type args struct {
execManager core.ExecutionManager
azureClient core.AzureClient
ts *teststats.ProcStats
logger lumber.Logger
}
tests := []struct {
name string
args args
want *testExecutionService
}{
{"TestNewTestExecutionService",
args{execManager, azureClient, ts, logger},
&testExecutionService{logger, azureClient, cfg, ts, execManager, requests, global.NeuronHost + "/report"}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := NewTestExecutionService(cfg, requests, tt.args.execManager,
tt.args.azureClient, tt.args.ts, tt.args.logger); !reflect.DeepEqual(got, tt.want) {
t.Errorf("NewTestExecutionService() = %v, want %v", got, tt.want)
}
})
}
}
func Test_testExecutionService_GetLocatorsFile(t *testing.T) {
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialise logger, error: %v", err)
}
var ts *teststats.ProcStats
azureClient := new(mocks.AzureClient)
execManager := new(mocks.ExecutionManager)
azureClient.On("GetSASURL",
mock.AnythingOfType("*context.emptyCtx"),
mock.AnythingOfType("string"),
mock.AnythingOfType("core.ContainerType"),
).Return("sasURL", nil)
azureClient.On("FindUsingSASUrl",
mock.AnythingOfType("*context.emptyCtx"),
mock.AnythingOfType("string"),
).Return(io.NopCloser(strings.NewReader("Hello, world!")), nil)
type fields struct {
logger lumber.Logger
azureClient core.AzureClient
ts *teststats.ProcStats
execManager core.ExecutionManager
}
type args struct {
ctx context.Context
locatorAddress string
}
tests := []struct {
name string
fields fields
args args
want string
wantErr bool
}{
{"Test GetLocatorsFile",
fields{
logger: logger,
azureClient: azureClient,
ts: ts,
execManager: execManager,
},
args{
ctx: context.TODO(),
locatorAddress: "locAddr",
},
"/tmp/locators",
false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tes := &testExecutionService{
logger: tt.fields.logger,
azureClient: tt.fields.azureClient,
ts: tt.fields.ts,
execManager: tt.fields.execManager,
}
got, err := tes.getLocatorsFile(tt.args.ctx, tt.args.locatorAddress)
if (err != nil) != tt.wantErr {
t.Errorf("testExecutionService.GetLocatorsFile() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("testExecutionService.GetLocatorsFile() = %v, want %v", got, tt.want)
}
file, err := ioutil.ReadFile(got)
if err != nil {
t.Errorf("testExecutionService.GetLocatorsFile() error in opening file = %v", err)
return
}
if string(file) != "Hello, world!" {
t.Errorf("testExecutionService.GetLocatorsFile() = %v, want %v", string(file), "Hello, world!")
}
})
}
}
================================================
FILE: pkg/tests/testutils.go
================================================
package tests
import (
"github.com/LambdaTest/test-at-scale/config"
)
// MockConfig creates a new dummy config
func MockConfig() *config.SynapseConfig {
cfg := config.SynapseConfig{
LogFile: "./synapsetest.go",
Verbose: true,
Lambdatest: config.LambdatestConfig{
SecretKey: "dummysecretkey",
},
Git: config.GitConfig{
Token: "dummytoken",
TokenType: "Bearer",
},
ContainerRegistry: config.ContainerRegistryConfig{
Mode: config.PublicMode,
PullPolicy: config.PullAlways,
},
}
return &cfg
}
================================================
FILE: pkg/urlmanager/urlmanager.go
================================================
package urlmanager
import (
"fmt"
"net/url"
"strings"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/global"
)
// GetCloneURL returns the repo clone URL for the given git provider
func GetCloneURL(gitprovider, repoLink, repo, commitID, forkSlug, repoSlug string) (string, error) {
if global.TestEnv {
return global.TestServer, nil
}
switch gitprovider {
case core.GitHub:
return fmt.Sprintf("%s/%s/zipball/%s", global.APIHostURLMap[gitprovider], repoSlug, commitID), nil
case core.GitLab:
return fmt.Sprintf("%s/-/archive/%s/%s-%s.zip", repoLink, commitID, repo, commitID), nil
case core.Bitbucket:
if forkSlug != "" {
forkLink := strings.Replace(repoLink, repoSlug, forkSlug, -1)
return fmt.Sprintf("%s/get/%s.zip", forkLink, commitID), nil
}
return fmt.Sprintf("%s/get/%s.zip", repoLink, commitID), nil
default:
return "", errs.ErrUnsupportedGitProvider
}
}
// GetCommitDiffURL returns the commit diff URL for the given git provider
func GetCommitDiffURL(gitprovider, path, baseCommit, targetCommit, forkSlug string) (string, error) {
if global.TestEnv {
return global.TestServer, nil
}
switch gitprovider {
case core.GitHub:
return fmt.Sprintf("%s%s/compare/%s...%s", global.APIHostURLMap[gitprovider], path, baseCommit, targetCommit), nil
case core.GitLab:
encodedPath := url.QueryEscape(path[1:])
return fmt.Sprintf("%s/%s/repository/compare?from=%s&to=%s",
global.APIHostURLMap[gitprovider], encodedPath, baseCommit, targetCommit), nil
case core.Bitbucket:
if forkSlug != "" {
return fmt.Sprintf("%s/repositories%s/diff/%s..%s",
global.APIHostURLMap[gitprovider], path, fmt.Sprintf("%s:%s", forkSlug, targetCommit), baseCommit), nil
}
return fmt.Sprintf("%s/repositories%s/diff/%s..%s", global.APIHostURLMap[gitprovider], path, targetCommit, baseCommit), nil
default:
return "", errs.ErrUnsupportedGitProvider
}
}
// GetPullRequestDiffURL returns the PR diff URL for the given git provider
func GetPullRequestDiffURL(gitprovider, path string, prNumber int) (string, error) {
if global.TestEnv {
return global.TestServer, nil
}
switch gitprovider {
case core.GitHub:
return fmt.Sprintf("%s%s/pulls/%d", global.APIHostURLMap[gitprovider], path, prNumber), nil
case core.GitLab:
encodedPath := url.QueryEscape(path[1:])
return fmt.Sprintf("%s/%s/merge_requests/%d/changes", global.APIHostURLMap[gitprovider], encodedPath, prNumber), nil
case core.Bitbucket:
return fmt.Sprintf("%s/repositories%s/pullrequests/%d/diff", global.APIHostURLMap[gitprovider], path, prNumber), nil
default:
return "", errs.ErrUnsupportedGitProvider
}
}
// GetFileDownloadURL returns the download URL for a file in the repo
func GetFileDownloadURL(gitprovider, commitID, repoSlug, filePath string) (string, error) {
if global.TestEnv {
return global.TestServer, nil
}
switch gitprovider {
case core.GitHub:
return fmt.Sprintf("https://raw.githubusercontent.com/%s/%s/%s", repoSlug, commitID, filePath), nil
case core.GitLab:
repoSlug = url.PathEscape(repoSlug)
filePath = url.PathEscape(filePath)
return fmt.Sprintf("%s/%s/repository/files/%s/raw?ref=%s", global.APIHostURLMap[gitprovider], repoSlug, filePath, commitID), nil
case core.Bitbucket:
// TODO: check for fork PR
return fmt.Sprintf("%s/repositories/%s/src/%s/%s", global.APIHostURLMap[gitprovider], repoSlug, commitID, filePath), nil
default:
		return "", errs.ErrUnsupportedGitProvider
}
}
================================================
FILE: pkg/urlmanager/urlmanager_test.go
================================================
package urlmanager
import (
"net/url"
"testing"
"github.com/LambdaTest/test-at-scale/pkg/global"
)
func TestGetCloneURL(t *testing.T) {
type args struct {
gitprovider string
repoLink string
repo string
commitID string
repoSlug string
forkSlug string
}
tests := []struct {
name string
args args
want string
wantErr bool
}{
{
"For github as git provider",
args{"github", "https://github.com/nexe", "nexe", "abc", "nexe", "nexe/nexe"},
"https://api.github.com/repos/nexe/nexe/zipball/abc",
false,
},
{
"For non-github and gitlab as git provider",
args{"gittest", "https://github.com/nexe", "nexe", "abc", "nexe", ""},
"",
true,
},
{
"For gitlab as git provider",
args{"gitlab", "https://gitlab.com/nexe", "nexe", "abc", "nexe", ""},
"https://gitlab.com/nexe/-/archive/abc/nexe-abc.zip",
false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := GetCloneURL(tt.args.gitprovider, tt.args.repoLink, tt.args.repo, tt.args.commitID, tt.args.repoSlug, tt.args.forkSlug)
if (err != nil) != tt.wantErr {
t.Errorf("GetCloneURL() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("GetCloneURL() = %v, want %v", got, tt.want)
}
})
}
}
func TestGetCommitDiffURL(t *testing.T) {
type args struct {
gitprovider string
path string
baseCommit string
targetCommit string
forkSlug string
}
tests := []struct {
name string
args args
want string
wantErr bool
}{
{
"For github as git provider",
args{"github", "/tests/nexe", "abc", "xyz", ""},
"https://api.github.com/repos/tests/nexe/compare/abc...xyz",
false,
},
{
"For non-github and gitlab as git provider",
args{"gittest", "tests/nexe", "abc", "xyz", ""},
"",
true,
},
{
"For gitlab as git provider",
args{"gitlab", "/tests/nexe", "abc", "xyz", ""},
global.APIHostURLMap["gitlab"] + "/" + url.QueryEscape("tests/nexe") + "/repository/compare?from=abc&to=xyz",
false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := GetCommitDiffURL(tt.args.gitprovider, tt.args.path, tt.args.baseCommit, tt.args.targetCommit, tt.args.forkSlug)
if (err != nil) != tt.wantErr {
t.Errorf("GetCommitDiffURL() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("GetCommitDiffURL() = %v, want %v", got, tt.want)
}
})
}
}
func TestGetPullRequestDiffURL(t *testing.T) {
type args struct {
gitprovider string
path string
prNumber int
}
tests := []struct {
name string
args args
want string
wantErr bool
}{
{
"For github as git provider",
args{"github", "/tests/nexe", 2},
"https://api.github.com/repos/tests/nexe/pulls/2",
false,
},
{
"For non-github and gitlab as git provider",
args{"gittest", "tests/nexe", 2},
"",
true},
{
"For gitlab as git provider",
args{"gitlab", "/tests/nexe", 2},
global.APIHostURLMap["gitlab"] + "/" + url.QueryEscape("tests/nexe") + "/merge_requests/2/changes",
false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := GetPullRequestDiffURL(tt.args.gitprovider, tt.args.path, tt.args.prNumber)
if (err != nil) != tt.wantErr {
t.Errorf("GetPullRequestDiffURL() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("GetPullRequestDiffURL() = %v, want %v", got, tt.want)
}
})
}
}
================================================
FILE: pkg/utils/utils.go
================================================
package utils
import (
"context"
"crypto/md5"
"fmt"
"io"
"os"
"path/filepath"
"reflect"
"strconv"
"strings"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/global"
"github.com/bmatcuk/doublestar/v4"
"github.com/go-playground/locales/en"
ut "github.com/go-playground/universal-translator"
"github.com/go-playground/validator/v10"
en_translations "github.com/go-playground/validator/v10/translations/en"
"github.com/google/uuid"
"gopkg.in/yaml.v3"
)
const (
namespaceSeparator = "."
emptyTagName = "-"
yamlTagName = "yaml"
requiredTagName = "required"
v1 = 1
v2 = 2
)
// Min returns the smaller of x or y.
func Min(x, y int) int {
if x > y {
return y
}
return x
}
// ComputeChecksum computes the MD5 hash of the given file
func ComputeChecksum(filename string) (string, error) {
checksum := ""
file, err := os.Open(filename)
if err != nil {
return checksum, err
}
defer file.Close()
hash := md5.New()
if _, err := io.Copy(hash, file); err != nil {
return checksum, err
}
checksum = fmt.Sprintf("%x", hash.Sum(nil))
return checksum, nil
}
// InterfaceToMap converts interface{} to map[string]string
func InterfaceToMap(in interface{}) map[string]string {
result := make(map[string]string)
for key, value := range in.(map[string]interface{}) {
result[key] = value.(string)
}
return result
}
// CreateDirectory creates the directory recursively if it does not exist
func CreateDirectory(path string) error {
if _, err := os.Lstat(path); os.IsNotExist(err) {
if err := os.MkdirAll(path, global.DirectoryPermissions); err != nil {
return errs.ERR_DIR_CRT(err.Error())
}
}
return nil
}
// DeleteDirectory deletes directory and all its children
func DeleteDirectory(path string) error {
if err := os.RemoveAll(path); err != nil {
return errs.ErrDirDel(err.Error())
}
return nil
}
// WriteFileToDirectory writes `data` to the file `filename` inside `path`
func WriteFileToDirectory(path, filename string, data []byte) error {
location := fmt.Sprintf("%s/%s", path, filename)
if err := os.WriteFile(location, data, global.FilePermissions); err != nil {
return errs.ERR_FIL_CRT(err.Error())
}
return nil
}
// GetOutboundIP returns preferred outbound ip of this container
func GetOutboundIP() string {
return global.SynapseContainerURL
}
// GetConfigFileName returns the name of the configuration file
func GetConfigFileName(path string) (string, error) {
if global.TestEnv {
return path, nil
}
ext := filepath.Ext(path)
// Add support for both yaml extensions
if ext == ".yaml" || ext == ".yml" {
matches, _ := doublestar.Glob(os.DirFS(global.RepoDir), strings.TrimSuffix(path, ext)+".{yml,yaml}")
if len(matches) == 0 {
return "", errs.New(
fmt.Sprintf(
"`%s` configuration file not found at the root of your project. Please make sure you have placed it correctly.",
path))
}
		// If there are files with both extensions, pick the first match
path = matches[0]
}
return path, nil
}
// ValidateStructTASYmlV1 validates the v1 tas configuration file
func ValidateStructTASYmlV1(ctx context.Context, ymlContent []byte, ymlFilename string) (*core.TASConfig, error) {
validate, err := getValidator()
if err != nil {
return nil, err
}
tasConfig := &core.TASConfig{SmartRun: true, Tier: core.Small, SplitMode: core.TestSplit, Version: global.DefaultTASVersion}
if err := yaml.Unmarshal(ymlContent, tasConfig); err != nil {
return nil, fmt.Errorf("`%s` configuration file contains invalid format. Please correct the `%s` file", ymlFilename, ymlFilename)
}
if err := validateStruct(validate, tasConfig, ymlFilename); err != nil {
return nil, err
}
return tasConfig, nil
}
// configureValidator configures the struct validator
func configureValidator(validate *validator.Validate, trans ut.Translator) {
validate.RegisterTagNameFunc(func(fld reflect.StructField) string {
// nolint: gomnd
name := strings.SplitN(fld.Tag.Get(yamlTagName), ",", 2)[0]
if name == emptyTagName {
return fld.Name
}
return name
})
// nolint: errcheck
validate.RegisterTranslation(requiredTagName, trans, func(ut ut.Translator) error {
return ut.Add(requiredTagName, "{0} field is required!", true)
}, func(ut ut.Translator, fe validator.FieldError) string {
i := strings.Index(fe.Namespace(), namespaceSeparator)
t, _ := ut.T(requiredTagName, fe.Namespace()[i+1:])
return t
})
}
// GetVersion returns the major version of the tas yml file
func GetVersion(ymlContent []byte) (int, error) {
tasVersion := &core.TasVersion{Version: global.DefaultTASVersion}
if err := yaml.Unmarshal(ymlContent, tasVersion); err != nil {
		return 0, fmt.Errorf("error in unmarshalling tas yml file")
}
majorVersion := strings.Split(tasVersion.Version, ".")[0]
version, err := strconv.Atoi(majorVersion)
if err != nil {
return version, errs.New("error while parsing version for tas yml")
}
return version, err
}
// ValidateStructTASYmlV2 validates tas configuration file
func ValidateStructTASYmlV2(ctx context.Context, ymlContent []byte, ymlFileName string) (*core.TASConfigV2, error) {
tasConfig := &core.TASConfigV2{SmartRun: true, Tier: core.Small, SplitMode: core.TestSplit}
if err := yaml.Unmarshal(ymlContent, tasConfig); err != nil {
return nil, fmt.Errorf("`%s` configuration file contains invalid format. Please correct the `%s` file", ymlFileName, ymlFileName)
}
validate, err := getValidator()
if err != nil {
return nil, err
}
if err := validateStruct(validate, tasConfig, ymlFileName); err != nil {
return nil, err
}
return tasConfig, nil
}
func getValidator() (*validator.Validate, error) {
enObj := en.New()
uni := ut.New(enObj, enObj)
trans, _ := uni.GetTranslator("en")
validate := validator.New()
if err := en_translations.RegisterDefaultTranslations(validate, trans); err != nil {
return nil, err
}
configureValidator(validate, trans)
return validate, nil
}
func validateStruct(validate *validator.Validate, config interface{}, ymlFilename string) error {
validateErr := validate.Struct(config)
if validateErr != nil {
		// translate all errors at once
validationErrs := validateErr.(validator.ValidationErrors)
err := new(errs.ErrInvalidConf)
err.Message = errs.New(
fmt.Sprintf(
"Invalid values provided for the following fields in the `%s` configuration file: \n",
ymlFilename),
).Error()
for _, e := range validationErrs {
// can translate each error one at a time.
err.Fields = append(err.Fields, e.Field())
err.Values = append(err.Values, e.Value())
}
return err
}
return nil
}
// ValidateSubModule validates submodule
func ValidateSubModule(module *core.SubModule) error {
if module.Name == "" {
return errs.New("module name is not defined")
}
if module.Path == "" {
return errs.New(fmt.Sprintf("module path is not defined for module %s ", module.Name))
}
if len(module.Patterns) == 0 {
return errs.New(fmt.Sprintf("module %s pattern length is 0", module.Name))
}
return nil
}
// GetDefaultQueryAndHeaders returns the query and headers that should be supplied with each request made to TAS Server
func GetDefaultQueryAndHeaders() (query map[string]interface{}, headers map[string]string) {
query = map[string]interface{}{
"repoID": os.Getenv("REPO_ID"),
"buildID": os.Getenv("BUILD_ID"),
"orgID": os.Getenv("ORG_ID"),
"taskID": os.Getenv("TASK_ID"),
}
headers = map[string]string{
"Authorization": fmt.Sprintf("Bearer %s", os.Getenv("TOKEN")),
}
return query, headers
}
// GetArgs builds the runner command-line arguments for the given command, framework, config file, and test patterns
func GetArgs(command string, frameWork string, frameworkVersion int,
configFile string,
target []string) []string {
language := global.FrameworkLanguageMap[frameWork]
args := []string{}
if language == "java" {
args = append(args, "-jar", "/test-at-scale-java.jar",
global.ArgCommand, command, global.ArgFrameworVersion,
strconv.Itoa(frameworkVersion))
} else {
args = append(args, global.ArgCommand, command)
}
if configFile != "" {
args = append(args, global.ArgConfig, configFile)
}
for _, pattern := range target {
args = append(args, global.ArgPattern, pattern)
}
return args
}
// GetTASFilePath returns tas file path
func GetTASFilePath(path string) (string, error) {
path, err := GetConfigFileName(path)
if err != nil {
return "", err
}
filePath := fmt.Sprintf("%s/%s", global.RepoDir, path)
return filePath, nil
}
// GenerateUUID generates a UUID v4 with the dashes removed
func GenerateUUID() string {
uuidV4 := uuid.New() // panics on error
return strings.Map(func(r rune) rune {
if r == '-' {
return -1
}
return r
}, uuidV4.String())
}
// ValidateStructTASYml validates the TAS config for all supported version
func ValidateStructTASYml(ctx context.Context, ymlContent []byte, ymlFilename string) (interface{}, error) {
version, err := GetVersion(ymlContent)
if err != nil {
return nil, err
}
switch version {
case v1:
return ValidateStructTASYmlV1(ctx, ymlContent, ymlFilename)
case v2:
return ValidateStructTASYmlV2(ctx, ymlContent, ymlFilename)
default:
		return nil, fmt.Errorf("version %d is not supported", version)
}
}
================================================
FILE: pkg/utils/utils_test.go
================================================
package utils
import (
"context"
"fmt"
"os"
"path/filepath"
"testing"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/stretchr/testify/assert"
)
const (
directory = "../../testutils/testdirectory"
)
func TestMin(t *testing.T) {
type args struct {
x int
y int
}
tests := []struct {
name string
args args
want int
}{
{"x: 5, y: -1", args{5, -1}, -1},
{"x: 0, y: 0", args{0, 0}, 0},
{"x: -293836, y: 0", args{-293836, 0}, -293836},
{"x: 2545, y: 374", args{2545, 374}, 374},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := Min(tt.args.x, tt.args.y); got != tt.want {
t.Errorf("Min() = %v, want %v", got, tt.want)
}
})
}
}
func TestComputeChecksum(t *testing.T) {
_, err := os.Create("dummy_file")
if err != nil {
fmt.Printf("Error in creating file, error: %v", err)
}
type args struct {
filename string
}
tests := []struct {
name string
args args
want string
wantErr bool
}{
{"dummy_file_name", args{"dummy_file_name"}, "", true},
{"dummy_file", args{"dummy_file"}, "d41d8cd98f00b204e9800998ecf8427e", false},
}
defer removeCreatedFile("dummy_file")
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ComputeChecksum(tt.args.filename)
if (err != nil) != tt.wantErr {
t.Errorf("ComputeChecksum() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("ComputeChecksum() = %v, want %v", got, tt.want)
}
})
}
}
func TestCreateDirectory(t *testing.T) {
newDir := "../../testutils/nonExistingDir"
existDir := directory
type args struct {
path string
}
tests := []struct {
name string
args args
wantErr bool
}{
		{"Existing directory: ../../testutils/testdirectory", args{existDir}, false},
{"Non-existing directory: ../../testutils/nonExistingDir", args{newDir}, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := CreateDirectory(tt.args.path); (err != nil) != tt.wantErr {
t.Errorf("CreateDirectory() error = %v, wantErr %v", err, tt.wantErr)
return
}
if tt.args.path == newDir {
if _, err := os.Lstat(newDir); err != nil {
t.Errorf("Directory did not exist, error: %v", err)
return
}
defer removeCreatedFile(newDir)
}
})
}
}
func TestWriteFileToDirectory(t *testing.T) {
path := directory
filename := "writeFileToDirectory"
data := []byte("Hello world!")
err := WriteFileToDirectory(path, filename, data)
if err != nil {
t.Errorf("Error: %v", err)
return
}
defer removeCreatedFile(filepath.Join(path, filename))
checkData, err := os.ReadFile(filepath.Join(path, filename))
if err != nil {
t.Errorf("Error: %v", err)
return
}
if string(checkData) != "Hello world!" {
t.Errorf("expected file contents: Hello world!, got: %s", string(checkData))
}
}
func TestGetOutboundIP(t *testing.T) {
tests := []struct {
name string
want string
}{
{"Test1", "http://synapse:8000"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := GetOutboundIP(); got != tt.want {
t.Errorf("GetOutboundIP() = %v, want %v", got, tt.want)
}
})
}
}
func TestValidateStructv1(t *testing.T) {
ctx := context.TODO()
tests := []struct {
name string
filename string
wantErr error
want *core.TASConfig
}{
{
"Junk characters File",
"testutils/testdata/tasyml/junk.yml",
// nolint:lll
fmt.Errorf("`testutils/testdata/tasyml/junk.yml` configuration file contains invalid format. Please correct the `testutils/testdata/tasyml/junk.yml` file"),
nil,
},
{
"Invalid Types",
"testutils/testdata/tasyml/invalid_types.yml",
// nolint:lll
fmt.Errorf("`testutils/testdata/tasyml/invalid_types.yml` configuration file contains invalid format. Please correct the `testutils/testdata/tasyml/invalid_types.yml` file"),
nil,
},
{
"Invalid Field Values",
"testutils/testdata/tasyml/invalid_fields.yml",
errs.ErrInvalidConf{
// nolint:lll
Message: "Invalid values provided for the following fields in the `testutils/testdata/tasyml/invalid_fields.yml` configuration file: \n",
Fields: []string{"framework", "nodeVersion"},
Values: []interface{}{"hello", "test"}},
nil,
},
{
"Valid Config",
"testutils/testdata/tasyml/valid.yml",
nil,
&core.TASConfig{
SmartRun: true,
Framework: "jest",
Postmerge: &core.Merge{
EnvMap: map[string]string{"NODE_ENV": "development"},
Patterns: []string{"{packages,scripts}/**/__tests__/*{.js,.coffee,[!d].ts}"},
},
Premerge: &core.Merge{
EnvMap: map[string]string{"NODE_ENV": "development"},
Patterns: []string{"{packages,scripts}/**/__tests__/*{.js,.coffee,[!d].ts}"},
},
Prerun: &core.Run{EnvMap: map[string]string{"NODE_ENV": "development"}, Commands: []string{"yarn"}},
Postrun: &core.Run{Commands: []string{"node --version"}},
ConfigFile: "scripts/jest/config.source-www.js",
NodeVersion: "14.17.6",
Tier: "small",
SplitMode: core.TestSplit,
Version: "1.0",
},
},
{
"Valid Config - Only Framework",
"testutils/testdata/tasyml/framework_only_required.yml",
nil,
&core.TASConfig{
SmartRun: true,
Framework: "mocha",
Tier: "small",
SplitMode: core.TestSplit,
Version: "1.2",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ymlContent, err := testutils.LoadFile(tt.filename)
if err != nil {
t.Errorf("Error loading testfile %s", tt.filename)
return
}
tasConfig, errV := ValidateStructTASYmlV1(ctx, ymlContent, tt.filename)
if errV != nil {
assert.Equal(t, errV.Error(), tt.wantErr.Error(), "Error mismatch")
return
}
assert.Equal(t, tt.want, tasConfig, "Struct mismatch")
})
}
}
func removeCreatedFile(path string) {
err := os.RemoveAll(path)
if err != nil {
fmt.Printf("error removing %s: %v\n", path, err)
}
}
func TestValidateStructv2(t *testing.T) {
ctx := context.TODO()
tests := []struct {
name string
filename string
wantErr error
want *core.TASConfigV2
}{
{
"Junk characters File",
"testutils/testdata/tasyml/junk.yml",
// nolint:lll
fmt.Errorf("`testutils/testdata/tasyml/junk.yml` configuration file contains invalid format. Please correct the `testutils/testdata/tasyml/junk.yml` file"),
nil,
},
{
"Invalid Types",
"testutils/testdata/tasyml/invalid_typesv2.yml",
// nolint:lll
fmt.Errorf("`testutils/testdata/tasyml/invalid_typesv2.yml` configuration file contains invalid format. Please correct the `testutils/testdata/tasyml/invalid_typesv2.yml` file"),
nil,
},
{
"Valid Config",
"testutils/testdata/tasyml/validV2.yml",
nil,
&core.TASConfigV2{
SmartRun: true,
Tier: "small",
SplitMode: core.TestSplit,
PostMerge: &core.MergeV2{
SubModules: []core.SubModule{
{
Name: "some-module-1",
Path: "./somepath",
Patterns: []string{
"./x/y/z",
},
Framework: "mocha",
ConfigFile: "x/y/z",
},
},
},
PreMerge: &core.MergeV2{
SubModules: []core.SubModule{
{
Name: "some-module-1",
Path: "./somepath",
Patterns: []string{
"./x/y/z",
},
Framework: "jasmine",
ConfigFile: "/x/y/z",
},
},
},
Parallelism: 1,
Version: "2.0.1",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ymlContent, err := testutils.LoadFile(tt.filename)
if err != nil {
t.Errorf("Error loading testfile %s", tt.filename)
return
}
tasConfig, errV := ValidateStructTASYmlV2(ctx, ymlContent, tt.filename)
if errV != nil {
assert.Equal(t, errV.Error(), tt.wantErr.Error(), "Error mismatch")
return
}
assert.Equal(t, tt.want, tasConfig, "Struct mismatch")
})
}
}
func TestGetVersion(t *testing.T) {
tests := []struct {
name string
filename string
wantErr error
want int
}{
{
"Test with invalid version type",
"testutils/testdata/tasyml/invalidVersion.yml",
fmt.Errorf("error while parsing version for tas yml"),
0,
},
{
"Test valid yml type for tas version 1",
"testutils/testdata/tasyml/valid.yml",
nil,
1,
},
{
"Test valid yml type for tas version 2",
"testutils/testdata/tasyml/validV2.yml",
nil,
2,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ymlContent, err := testutils.LoadFile(tt.filename)
if err != nil {
t.Errorf("Error loading testfile %s", tt.filename)
return
}
version, errV := GetVersion(ymlContent)
if errV != nil {
assert.Equal(t, errV.Error(), tt.wantErr.Error(), "Error mismatch")
return
}
assert.Equal(t, tt.want, version, "value mismatch")
})
}
}
func TestValidateSubModule(t *testing.T) {
tests := []struct {
name string
subModule core.SubModule
wantErr error
}{
{
"Test submodule if name is empty",
core.SubModule{
Path: "/x/y",
Patterns: []string{"/a/c"},
},
errs.New("module name is not defined"),
},
{
"Test submodule if path is empty",
core.SubModule{
Name: "some name",
Patterns: []string{"/a/c"},
},
errs.New("module path is not defined for module some name "),
},
{
"Test submodule if pattern length is empty",
core.SubModule{
Name: "some-name",
Path: "/x/y",
},
errs.New("module some-name pattern length is 0"),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotErr := ValidateSubModule(&tt.subModule)
assert.Equal(t, tt.wantErr, gotErr, "Error mismatch")
})
}
}
================================================
FILE: pkg/zstd/zstd.go
================================================
package zstd
import (
"context"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strings"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
)
type zstdCompressor struct {
logger lumber.Logger
execManager core.ExecutionManager
execPath string
}
const (
manifestFileName = "manifest.txt"
executableName = "tar"
)
// New returns a Zstandard compression manager
func New(execManager core.ExecutionManager, logger lumber.Logger) (core.ZstdCompressor, error) {
path, err := exec.LookPath(executableName)
if err != nil {
logger.Errorf("failed to find path for tar, error:%v", err)
return nil, err
}
return &zstdCompressor{logger: logger, execManager: execManager, execPath: path}, nil
}
func (z *zstdCompressor) createManifestFile(workingDir string, fileNames ...string) error {
return ioutil.WriteFile(filepath.Join(os.TempDir(), manifestFileName), []byte(strings.Join(fileNames, "\n")), 0660)
}
// Compress compresses the given list of files into compressedFileName using tar with zstd
func (z *zstdCompressor) Compress(ctx context.Context, compressedFileName string, preservePath bool, workingDirectory string, filesToCompress ...string) error {
if err := z.createManifestFile(workingDirectory, filesToCompress...); err != nil {
z.logger.Errorf("failed to create manifest file %v", err)
return err
}
command := fmt.Sprintf("%s --posix -I 'zstd -5 -T0' -cf %s -C %s -T %s", z.execPath, compressedFileName, workingDirectory, filepath.Join(os.TempDir(), manifestFileName))
if preservePath {
command = fmt.Sprintf("%s -P", command)
}
commands := []string{command}
if err := z.execManager.ExecuteInternalCommands(ctx, core.Zstd, commands, workingDirectory, nil, nil); err != nil {
z.logger.Errorf("error while zstd compression %v", err)
return err
}
return nil
}
// Decompress decompresses the given file into workingDirectory
func (z *zstdCompressor) Decompress(ctx context.Context, filePath string, preservePath bool, workingDirectory string) error {
command := fmt.Sprintf("%s --posix -I 'zstd -d' -xf %s -C %s", z.execPath, filePath, workingDirectory)
if preservePath {
command = fmt.Sprintf("%s -P", command)
}
commands := []string{command}
if err := z.execManager.ExecuteInternalCommands(ctx, core.Zstd, commands, workingDirectory, nil, nil); err != nil {
z.logger.Errorf("error while zstd decompression %v", err)
return err
}
return nil
}
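// Illustrative example (file names hypothetical): with execPath "/usr/bin/tar",
// a call Compress(ctx, "cache.tzst", false, "/workspace", "a.txt", "b.txt")
// first writes "a.txt\nb.txt" to <tmpdir>/manifest.txt and then runs roughly:
//   /usr/bin/tar --posix -I 'zstd -5 -T0' -cf cache.tzst -C /workspace -T <tmpdir>/manifest.txt
// With preservePath=true a trailing -P (preserve absolute paths) is appended;
// Decompress reverses the operation with -I 'zstd -d' -xf.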
================================================
FILE: pkg/zstd/zstd_test.go
================================================
package zstd
import (
"context"
"fmt"
"os"
"path/filepath"
"reflect"
"testing"
"github.com/LambdaTest/test-at-scale/mocks"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
"github.com/LambdaTest/test-at-scale/testutils"
"github.com/stretchr/testify/mock"
)
const tarPath = "tar"
func TestNew(t *testing.T) {
execManager := new(mocks.ExecutionManager)
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
_, err2 := New(execManager, logger)
if err2 != nil {
t.Errorf("Couldn't initialize a new zstdCompressor, error: %v", err2)
}
}
func Test_zstdCompressor_createManifestFile(t *testing.T) {
execManager := new(mocks.ExecutionManager)
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
path := tarPath
type fields struct {
logger lumber.Logger
execManager core.ExecutionManager
execPath string
}
type args struct {
workingDir string
fileNames []string
}
tests := []struct {
name string
fields fields
args args
wantErr bool
}{
{
"Test createManifestFile",
fields{logger: logger, execManager: execManager, execPath: path},
args{"./", []string{"file1", "file2"}},
false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
z := &zstdCompressor{
logger: tt.fields.logger,
execManager: tt.fields.execManager,
execPath: tt.fields.execPath,
}
if err := z.createManifestFile(tt.args.workingDir, tt.args.fileNames...); (err != nil) != tt.wantErr {
t.Errorf("zstdCompressor.createManifestFile() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
func Test_zstdCompressor_Compress(t *testing.T) {
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
path := tarPath
// ReceivedArgs captures the commands passed to ExecuteInternalCommands
var ReceivedArgs []string
execManager := new(mocks.ExecutionManager)
execManager.On("ExecuteInternalCommands",
mock.AnythingOfType("*context.emptyCtx"),
mock.AnythingOfType("core.CommandType"),
mock.AnythingOfType("[]string"),
mock.AnythingOfType("string"),
mock.AnythingOfType("map[string]string"),
mock.AnythingOfType("map[string]string"),
).Return(
func(ctx context.Context, commandType core.CommandType, commands []string, cwd string, envMap, secretData map[string]string) error {
ReceivedArgs = commands
return nil
},
)
execManagerErr := new(mocks.ExecutionManager)
execManagerErr.On("ExecuteInternalCommands",
mock.AnythingOfType("*context.emptyCtx"),
mock.AnythingOfType("core.CommandType"),
mock.AnythingOfType("[]string"),
mock.AnythingOfType("string"),
mock.AnythingOfType("map[string]string"),
mock.AnythingOfType("map[string]string"),
).Return(
func(ctx context.Context, commandType core.CommandType, commands []string,
cwd string, envMap, secretData map[string]string) error {
ReceivedArgs = commands
return errs.New("error from mocked interface")
},
)
type fields struct {
logger lumber.Logger
execManager core.ExecutionManager
execPath string
}
type args struct {
ctx context.Context
compressedFileName string
preservePath bool
workingDirectory string
filesToCompress []string
}
tests := []struct {
name string
fields fields
args args
wantErr bool
}{
{
"Test Compress for success, with preservePath=true",
fields{logger: logger, execManager: execManager, execPath: path},
args{context.TODO(), "compressedFileName", true, "./", []string{"f1", "f2"}},
false,
},
{
"Test Compress for success, with preservePath=false",
fields{logger: logger, execManager: execManager, execPath: path},
args{context.TODO(), "compressedFileName", false, "./", []string{"f1", "f2"}},
false,
},
{
"Test Compress for error",
fields{logger: logger, execManager: execManagerErr, execPath: path},
args{context.TODO(), "compressedFileName", true, "./", []string{"f1", "f2"}},
true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
z := &zstdCompressor{
logger: tt.fields.logger,
execManager: tt.fields.execManager,
execPath: tt.fields.execPath,
}
err := z.Compress(tt.args.ctx, tt.args.compressedFileName, tt.args.preservePath, tt.args.workingDirectory, tt.args.filesToCompress...)
if (err != nil) != tt.wantErr {
t.Errorf("zstdCompressor.Compress() error = %v, wantErr %v", err, tt.wantErr)
return
}
command := fmt.Sprintf("%s --posix -I 'zstd -5 -T0' -cf compressedFileName -C ./ -T %s",
z.execPath, filepath.Join(os.TempDir(), manifestFileName))
if tt.args.preservePath {
command = fmt.Sprintf("%s -P", command)
}
commands := []string{command}
if !reflect.DeepEqual(ReceivedArgs, commands) {
t.Errorf("Expected commands: %v, got: %v", commands, ReceivedArgs)
}
})
}
}
func Test_zstdCompressor_Decompress(t *testing.T) {
logger, err := testutils.GetLogger()
if err != nil {
t.Errorf("Couldn't initialize logger, error: %v", err)
}
path := tarPath
// ReceivedArgs captures the commands passed to ExecuteInternalCommands
var ReceivedArgs []string
execManager := new(mocks.ExecutionManager)
execManager.On("ExecuteInternalCommands",
mock.AnythingOfType("*context.emptyCtx"),
mock.AnythingOfType("core.CommandType"),
mock.AnythingOfType("[]string"),
mock.AnythingOfType("string"),
mock.AnythingOfType("map[string]string"),
mock.AnythingOfType("map[string]string")).Return(
func(ctx context.Context, commandType core.CommandType, commands []string,
cwd string, envMap, secretData map[string]string) error {
ReceivedArgs = commands
return nil
})
execManagerErr := new(mocks.ExecutionManager)
execManagerErr.On("ExecuteInternalCommands",
mock.AnythingOfType("*context.emptyCtx"),
mock.AnythingOfType("core.CommandType"),
mock.AnythingOfType("[]string"),
mock.AnythingOfType("string"),
mock.AnythingOfType("map[string]string"),
mock.AnythingOfType("map[string]string"),
).Return(
func(ctx context.Context, commandType core.CommandType, commands []string,
cwd string, envMap, secretData map[string]string) error {
ReceivedArgs = commands
return errs.New("error from mocked interface")
})
type fields struct {
logger lumber.Logger
execManager core.ExecutionManager
execPath string
}
type args struct {
ctx context.Context
filePath string
preservePath bool
workingDirectory string
}
tests := []struct {
name string
fields fields
args args
wantErr bool
}{
{
"Tests Decompress for success with preservePath=true",
fields{logger: logger, execManager: execManager, execPath: path},
args{ctx: context.TODO(), filePath: "./", preservePath: true, workingDirectory: "./"},
false,
},
{
"Tests Decompress for success with preservePath=false",
fields{logger: logger, execManager: execManager, execPath: path},
args{ctx: context.TODO(), filePath: "./", preservePath: false, workingDirectory: "./"},
false,
},
{
"Tests Decompress for error",
fields{logger: logger, execManager: execManagerErr, execPath: path},
args{ctx: context.TODO(), filePath: "./", preservePath: true, workingDirectory: "./"},
true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
z := &zstdCompressor{
logger: tt.fields.logger,
execManager: tt.fields.execManager,
execPath: tt.fields.execPath,
}
if err := z.Decompress(tt.args.ctx, tt.args.filePath, tt.args.preservePath, tt.args.workingDirectory); (err != nil) != tt.wantErr {
t.Errorf("zstdCompressor.Decompress() error = %v, wantErr %v", err, tt.wantErr)
return
}
command := fmt.Sprintf("%s --posix -I 'zstd -d' -xf ./ -C ./", z.execPath)
if tt.args.preservePath {
command = fmt.Sprintf("%s -P", command)
}
commands := []string{command}
if !reflect.DeepEqual(ReceivedArgs, commands) {
t.Errorf("Expected args: %v, got: %v", commands, ReceivedArgs)
}
})
}
}
================================================
FILE: runner.conf
================================================
root: .
tmp_path: ./builds
build_name: lt
valid_ext: .go
no_rebuild_ext: .tpl, .tmpl, .html
ignored: assets, tmp, vendor
build_delay: 600
colors: 1
log_color_main: cyan
log_color_build: yellow
log_color_runner: green
log_color_watcher: magenta
log_color_app:
================================================
FILE: sample-tas.yaml
================================================
# supported frameworks: mocha|jest|jasmine
framework: mocha
# supported tiers: xsmall|small|medium|large|xlarge
tier: xsmall
blocklist:
# format: "<file-path>##<test-suite>##<test-case>" (suite and case parts are optional)
- "src/test/api.js"
- "src/test/api1.js##this is a test-suite"
- "src/test/api2.js##this is a test-suite##this is a test-case"
postMerge:
# env vars provided at the time of discovering and executing the post-merge tests
env:
REPONAME: nexe
AWS_KEY: ${{ secrets.AWS_KEY }}
# glob-pattern for identifying the test files
pattern:
- "./test/**/*.spec.ts"
# strategy for triggering builds for post-merge
strategy:
threshold: 1
name: after_n_commits
preMerge:
pattern:
- "./test/**/*.spec.ts"
preRun:
# set of commands to run before running the tests like `yarn install`, `yarn build`
command:
- npm ci
- docker build --build-arg NPM_TOKEN=${{ secrets.NPM_TOKEN }} --tag=nucleus
postRun:
# set of commands to run after running the tests
command:
- node --version
# path to your custom configuration file required by framework
configFile: mocharc.yml
# provide the version of nodejs required for your project
nodeVersion: 14.17.2
version: 2.0
================================================
FILE: scripts/.eslintrc.json
================================================
{
"env": {
"commonjs": true,
"es2021": true,
"node": true
},
"extends": [
"google"
],
"parserOptions": {
"ecmaVersion": 12
},
"rules": {
"require-jsdoc":"off",
"max-len":"off"
}
}
================================================
FILE: scripts/custom-reporter.js
================================================
"use strict";
const { ReportBase } = require("istanbul-lib-report");
function nodeMissing(metrics, fileCoverage) {
const isEmpty = metrics.isEmpty();
const lines = isEmpty ? 0 : metrics.lines.pct;
let coveredLines;
if (lines === 100) {
const branches = fileCoverage.getBranchCoverageByLine();
coveredLines = Object.entries(branches).map(([key, { coverage }]) => [
key,
coverage === 100,
]);
} else {
coveredLines = Object.entries(fileCoverage.getLineCoverage());
}
let newRange = true;
const ranges = coveredLines
.reduce((acum, [line, hit]) => {
if (hit) newRange = true;
else {
line = parseInt(line, 10);
if (newRange) {
acum.push([line]);
newRange = false;
} else acum[acum.length - 1][1] = line;
}
return acum;
}, [])
.map((range) => {
const { length } = range;
if (length === 1) return range[0];
return `${range[0]}-${range[1]}`;
});
return [].concat(...ranges).join(",");
}
class JsonSummaryReport extends ReportBase {
constructor(opts) {
super();
const { maxCols } = opts;
this.maxCols = maxCols != null ? maxCols : process.stdout.columns || 80;
this.file = opts.file || "coverage-merged.json";
this.contentWriter = null;
this.first = true;
}
onStart(root, context) {
this.contentWriter = context.writer.writeFile(this.file);
this.contentWriter.write("{");
}
writeSummary(filePath, sc, uncovered) {
const cw = this.contentWriter;
if (this.first) {
this.first = false;
} else {
cw.write(",");
}
if (uncovered) {
sc.data.uncovered_lines = uncovered;
}
cw.write(JSON.stringify(filePath));
cw.write(": ");
cw.write(JSON.stringify(sc));
cw.println("");
}
onSummary(node) {
if (!node.isRoot()) {
return;
}
this.writeSummary("total", node.getCoverageSummary());
}
onDetail(node) {
const metrics = node.getCoverageSummary();
const fileCoverage = node.getFileCoverage();
let missingLines;
if (!node.isSummary()) {
missingLines = nodeMissing(metrics, fileCoverage);
}
this.writeSummary(fileCoverage.path, metrics, missingLines);
}
onEnd() {
const cw = this.contentWriter;
cw.println("}");
cw.close();
}
}
module.exports = JsonSummaryReport;
================================================
FILE: scripts/mapCoverage.js
================================================
const istanbulCoverage = require('istanbul-lib-coverage');
const istanbulReport = require('istanbul-lib-report');
const istanbulReports = require('istanbul-reports');
const libSourceMaps = require('istanbul-lib-source-maps');
const map = istanbulCoverage.createCoverageMap();
const parser = require('yargs-parser');
const argv = parser(process.argv.slice(2));
if (!argv.commitDir || !argv.coverageFiles) {
console.error('error while merging coverage files: --commitDir and --coverageFiles are required');
process.exit(-1);
}
const mapFileCoverage = (fileCoverage) => {
fileCoverage.path = fileCoverage.path.replace(
/(.*packages\/.*\/)(build)(\/.*)/,
'$1src$3',
);
return fileCoverage;
};
for (const coverageFile of argv.coverageFiles.split(' ')) {
console.log(coverageFile);
try {
const coverageJSON = require(coverageFile);
Object.keys(coverageJSON).forEach((filename) =>
map.addFileCoverage(mapFileCoverage(coverageJSON[filename])),
);
} catch (err) {
console.error('error while loading ' + coverageFile + ': ' + err);
process.exit(-1);
}
}
const checkCoverage = (summary, thresholds, file) => {
console.log(thresholds);
console.log(summary);
Object.keys(thresholds).forEach((key) => {
if (summary[key]) {
const coverage = summary[key].pct;
if (coverage < thresholds[key]) {
if (file) {
console.error('ERROR: Coverage for ' + key + ' (' + coverage + '%) does not meet threshold (' + thresholds[key] + '%) for ' + file);
} else {
console.error('ERROR: Coverage for ' + key + ' (' + coverage + '%) does not meet global threshold (' + thresholds[key] + '%)');
}
}
}
});
};
(async () => {
const sourceMapStore = libSourceMaps.createSourceMapStore();
const transformedMap = await sourceMapStore.transformCoverage(map);
const context = istanbulReport.createContext({coverageMap: transformedMap, dir: argv.commitDir});
[{name: '/scripts/custom-reporter.js', file: 'coverage-merged.json'}, {name: 'text'}].forEach((reporter) =>
istanbulReports.create(reporter.name, {file: reporter.file}).execute(context),
);
if (argv.coverageManifest) {
const manifestFile = require(argv.coverageManifest);
const thresholds = manifestFile.coverage_threshold;
if (thresholds) {
if (thresholds.perfile) {
transformedMap.files().forEach((file) => {
checkCoverage(transformedMap.fileCoverageFor(file).toSummary(), thresholds, file);
});
} else {
checkCoverage(transformedMap.getCoverageSummary(), thresholds);
}
}
}
})();
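// Illustrative invocation (flag names taken from the argv parsing above; paths
// are hypothetical). Note that --coverageFiles is a single space-separated string:
//   node mapCoverage.js --commitDir ./merged-coverage \
//     --coverageFiles "/tmp/a/coverage-final.json /tmp/b/coverage-final.json" \
//     --coverageManifest /tmp/manifest.json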
================================================
FILE: scripts/package.json
================================================
{
"name": "scripts",
"version": "1.0.0",
"description": "JS scripts for nucleus",
"dependencies": {
"@babel/core": "^7.14.3",
"@babel/node": "^7.14.2",
"istanbul-lib-coverage": "^3.2.0",
"istanbul-lib-report": "^3.0.0",
"istanbul-lib-source-maps": "^4.0.1",
"istanbul-reports": "^3.0.5",
"yargs-parser": "^20.2.7"
},
"license": "ISC"
}
================================================
FILE: testutils/constants.go
================================================
package testutils
// Various constants defined to obtain dummy data for tests
const (
ApplicationConfigPath = "/testutils/testdata/sample_config.json" // ApplicationConfigPath points to a dummy JSON config file for NucleusConfig
TaskPayloadPath = "/testutils/testdata/taskPayload.json" // TaskPayloadPath points to a JSON file containing a dummy TaskPayload
PayloadPath = "/testutils/testdata/payload.json" // PayloadPath points to a JSON file containing a dummy Payload
GitlabCommitDiff = "/testutils/testdata/gitlabCommitDiff.json" // GitlabCommitDiff points to a JSON file containing a dummy GitLab commit diff
)
================================================
FILE: testutils/testdata/compare/abc...xyz
================================================
}
const step = compiler.log.step('Bundling Resources...')
let count = 0
const testCommitChangeM = "Added 1 line in steps.ts"
// workaround for https://github.com/sindresorhus/globby/issues/127
// and https://github.com/mrmlnc/fast-glob#pattern-syntax
const resourcesWithForwardSlashes = resources.map((r) => r.replace(/\\/g, '/'))
================================================
FILE: testutils/testdata/coverage/coverage-final.json
================================================
{
"cover1" : "f1"
}
================================================
FILE: testutils/testdata/coverage/sample/coverage-final.json
================================================
{
"build_id" : "dummyBuildID",
"repo_id" : "dummyRepoID",
"commit_id" : "dummyCommitID",
"blob_link" : "dummy://BlobLink.com",
"total" : "80%"
}
================================================
FILE: testutils/testdata/gitlabCommitDiff.json
================================================
{"commit":{"id":"2295d352f6073101497f9bf4e4981c7ae72706a3","short_id":"2295d352","created_at":"2021-11-08T21:10:05.000+00:00","parent_ids":["f18d1ffec0ecaae592a0ccd708ce77146f5f37e3"],"title":"Add latest changes from gitlab-org/gitlab@master","message":"Add latest changes from gitlab-org/gitlab@master\n","author_name":"GitLab Bot","author_email":"gitlab-bot@gitlab.com","authored_date":"2021-11-08T21:10:05.000+00:00","committer_name":"GitLab Bot","committer_email":"gitlab-bot@gitlab.com","committed_date":"2021-11-08T21:10:05.000+00:00","trailers":{},"web_url":"https://gitlab.com/gitlab-org/gitlab-foss/-/commit/2295d352f6073101497f9bf4e4981c7ae72706a3"},"commits":[{"id":"6a380347147d1a55afbc6a1c16e04b567ab90d86","short_id":"6a380347","created_at":"2021-11-08T12:12:07.000+00:00","parent_ids":["4901ff1764398bb017487d4a5104b74bc284f33a"],"title":"Add latest changes from gitlab-org/gitlab@master","message":"Add latest changes from gitlab-org/gitlab@master\n","author_name":"GitLab Bot","author_email":"gitlab-bot@gitlab.com","authored_date":"2021-11-08T12:12:07.000+00:00","committer_name":"GitLab Bot","committer_email":"gitlab-bot@gitlab.com","committed_date":"2021-11-08T12:12:07.000+00:00","trailers":{},"web_url":"https://gitlab.com/gitlab-org/gitlab-foss/-/commit/6a380347147d1a55afbc6a1c16e04b567ab90d86"},{"id":"05db4ead6d5c73cf62ad95d80ccac415bc3bf3cd","short_id":"05db4ead","created_at":"2021-11-08T15:13:35.000+00:00","parent_ids":["6a380347147d1a55afbc6a1c16e04b567ab90d86"],"title":"Add latest changes from gitlab-org/gitlab@master","message":"Add latest changes from gitlab-org/gitlab@master\n","author_name":"GitLab Bot","author_email":"gitlab-bot@gitlab.com","authored_date":"2021-11-08T15:13:35.000+00:00","committer_name":"GitLab 
Bot","committer_email":"gitlab-bot@gitlab.com","committed_date":"2021-11-08T15:13:35.000+00:00","trailers":{},"web_url":"https://gitlab.com/gitlab-org/gitlab-foss/-/commit/05db4ead6d5c73cf62ad95d80ccac415bc3bf3cd"},{"id":"f18d1ffec0ecaae592a0ccd708ce77146f5f37e3","short_id":"f18d1ffe","created_at":"2021-11-08T18:09:52.000+00:00","parent_ids":["05db4ead6d5c73cf62ad95d80ccac415bc3bf3cd"],"title":"Add latest changes from gitlab-org/gitlab@master","message":"Add latest changes from gitlab-org/gitlab@master\n","author_name":"GitLab Bot","author_email":"gitlab-bot@gitlab.com","authored_date":"2021-11-08T18:09:52.000+00:00","committer_name":"GitLab Bot","committer_email":"gitlab-bot@gitlab.com","committed_date":"2021-11-08T18:09:52.000+00:00","trailers":{},"web_url":"https://gitlab.com/gitlab-org/gitlab-foss/-/commit/f18d1ffec0ecaae592a0ccd708ce77146f5f37e3"},{"id":"2295d352f6073101497f9bf4e4981c7ae72706a3","short_id":"2295d352","created_at":"2021-11-08T21:10:05.000+00:00","parent_ids":["f18d1ffec0ecaae592a0ccd708ce77146f5f37e3"],"title":"Add latest changes from gitlab-org/gitlab@master","message":"Add latest changes from gitlab-org/gitlab@master\n","author_name":"GitLab Bot","author_email":"gitlab-bot@gitlab.com","authored_date":"2021-11-08T21:10:05.000+00:00","committer_name":"GitLab Bot","committer_email":"gitlab-bot@gitlab.com","committed_date":"2021-11-08T21:10:05.000+00:00","trailers":{},"web_url":"https://gitlab.com/gitlab-org/gitlab-foss/-/commit/2295d352f6073101497f9bf4e4981c7ae72706a3"}],"diffs":[{"old_path":".gitlab/ci/qa.gitlab-ci.yml","new_path":".gitlab/ci/qa.gitlab-ci.yml","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -58,10 +58,13 @@ update-qa-cache:\n - tooling/bin/find_change_diffs ${CHANGES_DIFFS_DIR}\n script:\n - |\n- if tooling/bin/qa/check_if_only_quarantined_specs ${CHANGES_DIFFS_DIR}; then\n- exit 0\n- else\n+ tooling/bin/qa/package_and_qa_check ${CHANGES_DIFFS_DIR} \u0026\u0026 
exit_code=$?\n+ if [ $exit_code -eq 0 ]; then\n ./scripts/trigger-build omnibus\n+ elif [ $exit_code -eq 1 ]; then\n+ exit 1\n+ else\n+ echo \"Downstream jobs will not be triggered because package_and_qa_check exited with code: $exit_code\"\n fi\n # These jobs often time out, so temporarily use private runners and a long timeout: https://gitlab.com/gitlab-org/gitlab/-/issues/238563\n tags:\n"},{"old_path":".gitlab/ci/rails.gitlab-ci.yml","new_path":".gitlab/ci/rails.gitlab-ci.yml","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -534,6 +534,50 @@ rspec:feature-flags:\n run_timed_command \"bundle exec scripts/used-feature-flags\";\n fi\n \n+rspec:skipped-flaky-tests-report:\n+ extends:\n+ - .default-retry\n+ - .rails:rules:skipped-flaky-tests-report\n+ image: ruby:2.7-alpine\n+ stage: post-test\n+ # We cannot use needs since it would mean needing 84 jobs (since most are parallelized)\n+ # so we use `dependencies` here.\n+ dependencies:\n+ # FOSS/EE jobs\n+ - rspec migration pg12\n+ - rspec unit pg12\n+ - rspec integration pg12\n+ - rspec system pg12\n+ # FOSS/EE minimal jobs\n+ - rspec migration pg12 minimal\n+ - rspec unit pg12 minimal\n+ - rspec integration pg12 minimal\n+ - rspec system pg12 minimal\n+ # EE jobs\n+ - rspec-ee migration pg12\n+ - rspec-ee unit pg12\n+ - rspec-ee integration pg12\n+ - rspec-ee system pg12\n+ # EE minimal jobs\n+ - rspec-ee migration pg12 minimal\n+ - rspec-ee unit pg12 minimal\n+ - rspec-ee integration pg12 minimal\n+ - rspec-ee system pg12 minimal\n+ # Geo jobs\n+ - rspec-ee unit pg12 geo\n+ - rspec-ee integration pg12 geo\n+ - rspec-ee system pg12 geo\n+ # Geo minimal jobs\n+ - rspec-ee unit pg12 geo minimal\n+ - rspec-ee integration pg12 geo minimal\n+ - rspec-ee system pg12 geo minimal\n+ script:\n+ - cat rspec_flaky/skipped_flaky_tests_*_report.txt \u003e\u003e skipped_flaky_tests_report.txt\n+ artifacts:\n+ expire_in: 31d\n+ paths:\n+ - 
skipped_flaky_tests_report.txt\n+\n # EE/FOSS: default refs (MRs, default branch, schedules) jobs #\n #######################################################\n \n"},{"old_path":".gitlab/ci/rules.gitlab-ci.yml","new_path":".gitlab/ci/rules.gitlab-ci.yml","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1352,6 +1352,13 @@\n when: never\n - changes: *code-backstage-patterns\n \n+.rails:rules:skipped-flaky-tests-report:\n+ rules:\n+ - \u003c\u003c: *if-not-ee\n+ when: never\n+ - if: '$SKIP_FLAKY_TESTS_AUTOMATICALLY == \"true\"'\n+ changes: *code-backstage-patterns\n+\n #########################\n # Static analysis rules #\n #########################\n"},{"old_path":"CHANGELOG.md","new_path":"CHANGELOG.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -2,6 +2,22 @@\n documentation](doc/development/changelog.md) for instructions on adding your own\n entry.\n \n+## 14.4.2 (2021-11-08)\n+\n+### Fixed (3 changes)\n+\n+- [Skip retrying for reads on connection errors if primary only](gitlab-org/gitlab@8e1976ed75bd6c606d49c83863cf46bf3c4d5070) ([merge request](gitlab-org/gitlab!73919))\n+- [Fix error 500 loading branch with UTF-8 characters with performance bar](gitlab-org/gitlab@67ddc428472d57bb3d8a4a84eb0750487a175f75) ([merge request](gitlab-org/gitlab!73919))\n+- [Skip st_diff callback setting on LegacyDiffNote when importing](gitlab-org/gitlab@84f5c66321473cd702b3b671584054fcf3d141ae) ([merge request](gitlab-org/gitlab!73919))\n+\n+### Changed (1 change)\n+\n+- [Remove skip_legacy_diff_note_callback_on_import from legacy diff note](gitlab-org/gitlab@547a2ec29ea9e9299eab727899c3d90886ffc21c) ([merge request](gitlab-org/gitlab!73919))\n+\n+### Performance (1 change)\n+\n+- [Prevent Sidekiq size limiter middleware from running multiple times on the same job](gitlab-org/gitlab@294c01be38d400607536fb20a2038e098c0f0e28) ([merge 
request](gitlab-org/gitlab!73919))\n+\n ## 14.4.1 (2021-10-28)\n \n ### Security (13 changes)\n"},{"old_path":"GITALY_SERVER_VERSION","new_path":"GITALY_SERVER_VERSION","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1 +1 @@\n-7b9cd199b0851fd1b6615e0798f2aafddafd63cb\n+460a880c6993ab5f76cac951fccc02efd5cbd444\n"},{"old_path":"Gemfile","new_path":"Gemfile","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -342,7 +342,7 @@ group :development do\n gem 'lefthook', '~\u003e 0.7.0', require: false\n gem 'solargraph', '~\u003e 0.43', require: false\n \n- gem 'letter_opener_web', '~\u003e 1.4.1'\n+ gem 'letter_opener_web', '~\u003e 2.0.0'\n \n # Better errors handler\n gem 'better_errors', '~\u003e 2.9.0'\n"},{"old_path":"Gemfile.lock","new_path":"Gemfile.lock","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -700,10 +700,11 @@ GEM\n lefthook (0.7.5)\n letter_opener (1.7.0)\n launchy (~\u003e 2.2)\n- letter_opener_web (1.4.1)\n- actionmailer (\u003e= 3.2)\n- letter_opener (~\u003e 1.0)\n- railties (\u003e= 3.2)\n+ letter_opener_web (2.0.0)\n+ actionmailer (\u003e= 5.2)\n+ letter_opener (~\u003e 1.7)\n+ railties (\u003e= 5.2)\n+ rexml\n libyajl2 (1.2.0)\n license_finder (6.0.0)\n bundler\n@@ -1516,7 +1517,7 @@ DEPENDENCIES\n kramdown (~\u003e 2.3.1)\n kubeclient (~\u003e 4.9.2)\n lefthook (~\u003e 0.7.0)\n- letter_opener_web (~\u003e 1.4.1)\n+ letter_opener_web (~\u003e 2.0.0)\n license_finder (~\u003e 6.0)\n licensee (~\u003e 9.14.1)\n lockbox (~\u003e 
0.6.2)\n"},{"old_path":"app/assets/javascripts/analytics/devops_report/components/devops_score.vue","new_path":"app/assets/javascripts/analytics/devops_reports/components/devops_score.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":""},{"old_path":"app/assets/javascripts/analytics/devops_report/components/devops_score_callout.vue","new_path":"app/assets/javascripts/analytics/devops_reports/components/devops_score_callout.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":""},{"old_path":"app/assets/javascripts/analytics/devops_report/components/service_ping_disabled.vue","new_path":"app/assets/javascripts/analytics/devops_reports/components/service_ping_disabled.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":""},{"old_path":"app/assets/javascripts/analytics/devops_report/constants.js","new_path":"app/assets/javascripts/analytics/devops_reports/constants.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":""},{"old_path":"app/assets/javascripts/analytics/devops_report/devops_score.js","new_path":"app/assets/javascripts/analytics/devops_reports/devops_score.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":""},{"old_path":"app/assets/javascripts/analytics/devops_report/devops_score_disabled_service_ping.js","new_path":"app/assets/javascripts/analytics/devops_reports/devops_score_disabled_service_ping.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":""},{"old_path":"app/assets/javascripts/boards/graphql/group_board_iterations.query.graphql","new_path":"app/assets/javascripts/boards/graphql/group_board_iterations.query.graphql","a_mode":"100644","b_mode":"0","new_file":false,"renamed_file":false,"deleted_file":true,"diff":"@@ -1,10 +0,0 
@@\n-query GroupBoardIterations($fullPath: ID!, $title: String) {\n- group(fullPath: $fullPath) {\n- iterations(includeAncestors: true, title: $title) {\n- nodes {\n- id\n- title\n- }\n- }\n- }\n-}\n"},{"old_path":"app/assets/javascripts/boards/graphql/project_board_iterations.query.graphql","new_path":"app/assets/javascripts/boards/graphql/project_board_iterations.query.graphql","a_mode":"100644","b_mode":"0","new_file":false,"renamed_file":false,"deleted_file":true,"diff":"@@ -1,10 +0,0 @@\n-query ProjectBoardIterations($fullPath: ID!, $title: String) {\n- project(fullPath: $fullPath) {\n- iterations(includeAncestors: true, title: $title) {\n- nodes {\n- id\n- title\n- }\n- }\n- }\n-}\n"},{"old_path":"app/assets/javascripts/boards/stores/actions.js","new_path":"app/assets/javascripts/boards/stores/actions.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -36,13 +36,11 @@ import {\n } from '../boards_util';\n import { gqlClient } from '../graphql';\n import boardLabelsQuery from '../graphql/board_labels.query.graphql';\n-import groupBoardIterationsQuery from '../graphql/group_board_iterations.query.graphql';\n import groupBoardMilestonesQuery from '../graphql/group_board_milestones.query.graphql';\n import groupProjectsQuery from '../graphql/group_projects.query.graphql';\n import issueCreateMutation from '../graphql/issue_create.mutation.graphql';\n import issueSetLabelsMutation from '../graphql/issue_set_labels.mutation.graphql';\n import listsIssuesQuery from '../graphql/lists_issues.query.graphql';\n-import projectBoardIterationsQuery from '../graphql/project_board_iterations.query.graphql';\n import projectBoardMilestonesQuery from '../graphql/project_board_milestones.query.graphql';\n \n import * as types from './mutation_types';\n@@ -203,52 +201,6 @@ export default {\n });\n },\n \n- fetchIterations({ state, commit }, title) {\n- commit(types.RECEIVE_ITERATIONS_REQUEST);\n-\n- const { fullPath, 
boardType } = state;\n-\n- const variables = {\n- fullPath,\n- title,\n- };\n-\n- let query;\n- if (boardType === BoardType.project) {\n- query = projectBoardIterationsQuery;\n- }\n- if (boardType === BoardType.group) {\n- query = groupBoardIterationsQuery;\n- }\n-\n- if (!query) {\n- // eslint-disable-next-line @gitlab/require-i18n-strings\n- throw new Error('Unknown board type');\n- }\n-\n- return gqlClient\n- .query({\n- query,\n- variables,\n- })\n- .then(({ data }) =\u003e {\n- const errors = data[boardType]?.errors;\n- const iterations = data[boardType]?.iterations.nodes;\n-\n- if (errors?.[0]) {\n- throw new Error(errors[0]);\n- }\n-\n- commit(types.RECEIVE_ITERATIONS_SUCCESS, iterations);\n-\n- return iterations;\n- })\n- .catch((e) =\u003e {\n- commit(types.RECEIVE_ITERATIONS_FAILURE);\n- throw e;\n- });\n- },\n-\n fetchMilestones({ state, commit }, searchTerm) {\n commit(types.RECEIVE_MILESTONES_REQUEST);\n \n"},{"old_path":"app/assets/javascripts/boards/stores/mutation_types.js","new_path":"app/assets/javascripts/boards/stores/mutation_types.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -41,7 +41,3 @@ export const ADD_LIST_TO_HIGHLIGHTED_LISTS = 'ADD_LIST_TO_HIGHLIGHTED_LISTS';\n export const REMOVE_LIST_FROM_HIGHLIGHTED_LISTS = 'REMOVE_LIST_FROM_HIGHLIGHTED_LISTS';\n export const RESET_BOARD_ITEM_SELECTION = 'RESET_BOARD_ITEM_SELECTION';\n export const SET_ERROR = 'SET_ERROR';\n-\n-export const RECEIVE_ITERATIONS_REQUEST = 'RECEIVE_ITERATIONS_REQUEST';\n-export const RECEIVE_ITERATIONS_SUCCESS = 'RECEIVE_ITERATIONS_SUCCESS';\n-export const RECEIVE_ITERATIONS_FAILURE = 'RECEIVE_ITERATIONS_FAILURE';\n"},{"old_path":"app/assets/javascripts/boards/stores/mutations.js","new_path":"app/assets/javascripts/boards/stores/mutations.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -64,20 +64,6 @@ export default {\n );\n },\n \n- 
[mutationTypes.RECEIVE_ITERATIONS_REQUEST](state) {\n- state.iterationsLoading = true;\n- },\n-\n- [mutationTypes.RECEIVE_ITERATIONS_SUCCESS](state, iterations) {\n- state.iterations = iterations;\n- state.iterationsLoading = false;\n- },\n-\n- [mutationTypes.RECEIVE_ITERATIONS_FAILURE](state) {\n- state.iterationsLoading = false;\n- state.error = __('Failed to load iterations.');\n- },\n-\n [mutationTypes.SET_ACTIVE_ID](state, { id, sidebarType }) {\n state.activeId = id;\n state.sidebarType = sidebarType;\n"},{"old_path":"app/assets/javascripts/diffs/components/app.vue","new_path":"app/assets/javascripts/diffs/components/app.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -44,6 +44,7 @@ import {\n TRACKING_MULTIPLE_FILES_MODE,\n } from '../constants';\n \n+import { discussionIntersectionObserverHandlerFactory } from '../utils/discussions';\n import diffsEventHub from '../event_hub';\n import { reviewStatuses } from '../utils/file_reviews';\n import { diffsApp } from '../utils/performance';\n@@ -86,6 +87,9 @@ export default {\n ALERT_MERGE_CONFLICT,\n ALERT_COLLAPSED_FILES,\n },\n+ provide: {\n+ discussionObserverHandler: discussionIntersectionObserverHandlerFactory(),\n+ },\n props: {\n endpoint: {\n type: String,\n"},{"old_path":"app/assets/javascripts/diffs/utils/discussions.js","new_path":"app/assets/javascripts/diffs/utils/discussions.js","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,76 @@\n+function normalize(processable) {\n+ const { entry } = processable;\n+ const offset = entry.rootBounds.bottom - entry.boundingClientRect.top;\n+ const direction =\n+ offset \u003c 0 ? 'Up' : 'Down'; /* eslint-disable-line @gitlab/require-i18n-strings */\n+\n+ return {\n+ ...processable,\n+ entry: {\n+ time: entry.time,\n+ type: entry.isIntersecting ? 
'intersection' : `scroll${direction}`,\n+ },\n+ };\n+}\n+\n+function sort({ entry: alpha }, { entry: beta }) {\n+ const diff = alpha.time - beta.time;\n+ let order = 0;\n+\n+ if (diff \u003c 0) {\n+ order = -1;\n+ } else if (diff \u003e 0) {\n+ order = 1;\n+ } else if (alpha.type === 'intersection' \u0026\u0026 beta.type === 'scrollUp') {\n+ order = 2;\n+ } else if (alpha.type === 'scrollUp' \u0026\u0026 beta.type === 'intersection') {\n+ order = -2;\n+ }\n+\n+ return order;\n+}\n+\n+function filter(entry) {\n+ return entry.type !== 'scrollDown';\n+}\n+\n+export function discussionIntersectionObserverHandlerFactory() {\n+ let unprocessed = [];\n+ let timer = null;\n+\n+ return (processable) =\u003e {\n+ unprocessed.push(processable);\n+\n+ if (timer) {\n+ clearTimeout(timer);\n+ }\n+\n+ timer = setTimeout(() =\u003e {\n+ unprocessed\n+ .map(normalize)\n+ .filter(filter)\n+ .sort(sort)\n+ .forEach((discussionObservationContainer) =\u003e {\n+ const {\n+ entry: { type },\n+ currentDiscussion,\n+ isFirstUnresolved,\n+ isDiffsPage,\n+ functions: { setCurrentDiscussionId, getPreviousUnresolvedDiscussionId },\n+ } = discussionObservationContainer;\n+\n+ if (type === 'intersection') {\n+ setCurrentDiscussionId(currentDiscussion.id);\n+ } else if (type === 'scrollUp') {\n+ setCurrentDiscussionId(\n+ isFirstUnresolved\n+ ? 
null\n+ : getPreviousUnresolvedDiscussionId(currentDiscussion.id, isDiffsPage),\n+ );\n+ }\n+ });\n+\n+ unprocessed = [];\n+ }, 0);\n+ };\n+}\n"},{"old_path":"app/assets/javascripts/notes/components/discussion_notes.vue","new_path":"app/assets/javascripts/notes/components/discussion_notes.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,5 +1,6 @@\n \u003cscript\u003e\n import { mapGetters, mapActions } from 'vuex';\n+import { GlIntersectionObserver } from '@gitlab/ui';\n import { __ } from '~/locale';\n import PlaceholderNote from '~/vue_shared/components/notes/placeholder_note.vue';\n import PlaceholderSystemNote from '~/vue_shared/components/notes/placeholder_system_note.vue';\n@@ -16,7 +17,9 @@ export default {\n ToggleRepliesWidget,\n NoteEditedText,\n DiscussionNotesRepliesWrapper,\n+ GlIntersectionObserver,\n },\n+ inject: ['discussionObserverHandler'],\n props: {\n discussion: {\n type: Object,\n@@ -54,7 +57,11 @@ export default {\n },\n },\n computed: {\n- ...mapGetters(['userCanReply']),\n+ ...mapGetters([\n+ 'userCanReply',\n+ 'previousUnresolvedDiscussionId',\n+ 'firstUnresolvedDiscussionId',\n+ ]),\n hasReplies() {\n return Boolean(this.replies.length);\n },\n@@ -77,9 +84,20 @@ export default {\n url: this.discussion.discussion_path,\n };\n },\n+ isFirstUnresolved() {\n+ return this.firstUnresolvedDiscussionId === this.discussion.id;\n+ },\n+ },\n+ observerOptions: {\n+ threshold: 0,\n+ rootMargin: '0px 0px -50% 0px',\n },\n methods: {\n- ...mapActions(['toggleDiscussion', 'setSelectedCommentPositionHover']),\n+ ...mapActions([\n+ 'toggleDiscussion',\n+ 'setSelectedCommentPositionHover',\n+ 'setCurrentDiscussionId',\n+ ]),\n componentName(note) {\n if (note.isPlaceholderNote) {\n if (note.placeholderType === SYSTEM_NOTE) {\n@@ -110,6 +128,18 @@ export default {\n this.setSelectedCommentPositionHover();\n }\n },\n+ observerTriggered(entry) {\n+ this.discussionObserverHandler({\n+ entry,\n+ 
isFirstUnresolved: this.isFirstUnresolved,\n+ currentDiscussion: { ...this.discussion },\n+ isDiffsPage: !this.isOverviewTab,\n+ functions: {\n+ setCurrentDiscussionId: this.setCurrentDiscussionId,\n+ getPreviousUnresolvedDiscussionId: this.previousUnresolvedDiscussionId,\n+ },\n+ });\n+ },\n },\n };\n \u003c/script\u003e\n@@ -122,33 +152,35 @@ export default {\n @mouseleave=\"handleMouseLeave(discussion)\"\n \u003e\n \u003ctemplate v-if=\"shouldGroupReplies\"\u003e\n- \u003ccomponent\n- :is=\"componentName(firstNote)\"\n- :note=\"componentData(firstNote)\"\n- :line=\"line || diffLine\"\n- :discussion-file=\"discussion.diff_file\"\n- :commit=\"commit\"\n- :help-page-path=\"helpPagePath\"\n- :show-reply-button=\"userCanReply\"\n- :discussion-root=\"true\"\n- :discussion-resolve-path=\"discussion.resolve_path\"\n- :is-overview-tab=\"isOverviewTab\"\n- @handleDeleteNote=\"$emit('deleteNote')\"\n- @startReplying=\"$emit('startReplying')\"\n- \u003e\n- \u003ctemplate #discussion-resolved-text\u003e\n- \u003cnote-edited-text\n- v-if=\"discussion.resolved\"\n- :edited-at=\"discussion.resolved_at\"\n- :edited-by=\"discussion.resolved_by\"\n- :action-text=\"resolvedText\"\n- class-name=\"discussion-headline-light js-discussion-headline discussion-resolved-text\"\n- /\u003e\n- \u003c/template\u003e\n- \u003ctemplate #avatar-badge\u003e\n- \u003cslot name=\"avatar-badge\"\u003e\u003c/slot\u003e\n- \u003c/template\u003e\n- \u003c/component\u003e\n+ \u003cgl-intersection-observer :options=\"$options.observerOptions\" @update=\"observerTriggered\"\u003e\n+ \u003ccomponent\n+ :is=\"componentName(firstNote)\"\n+ :note=\"componentData(firstNote)\"\n+ :line=\"line || diffLine\"\n+ :discussion-file=\"discussion.diff_file\"\n+ :commit=\"commit\"\n+ :help-page-path=\"helpPagePath\"\n+ :show-reply-button=\"userCanReply\"\n+ :discussion-root=\"true\"\n+ :discussion-resolve-path=\"discussion.resolve_path\"\n+ :is-overview-tab=\"isOverviewTab\"\n+ 
@handleDeleteNote=\"$emit('deleteNote')\"\n+ @startReplying=\"$emit('startReplying')\"\n+ \u003e\n+ \u003ctemplate #discussion-resolved-text\u003e\n+ \u003cnote-edited-text\n+ v-if=\"discussion.resolved\"\n+ :edited-at=\"discussion.resolved_at\"\n+ :edited-by=\"discussion.resolved_by\"\n+ :action-text=\"resolvedText\"\n+ class-name=\"discussion-headline-light js-discussion-headline discussion-resolved-text\"\n+ /\u003e\n+ \u003c/template\u003e\n+ \u003ctemplate #avatar-badge\u003e\n+ \u003cslot name=\"avatar-badge\"\u003e\u003c/slot\u003e\n+ \u003c/template\u003e\n+ \u003c/component\u003e\n+ \u003c/gl-intersection-observer\u003e\n \u003cdiscussion-notes-replies-wrapper :is-diff-discussion=\"discussion.diff_discussion\"\u003e\n \u003ctoggle-replies-widget\n v-if=\"hasReplies\"\n"},{"old_path":"app/assets/javascripts/notes/components/notes_app.vue","new_path":"app/assets/javascripts/notes/components/notes_app.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -8,6 +8,7 @@ import TimelineEntryItem from '~/vue_shared/components/notes/timeline_entry_item\n import OrderedLayout from '~/vue_shared/components/ordered_layout.vue';\n import glFeatureFlagsMixin from '~/vue_shared/mixins/gl_feature_flags_mixin';\n import draftNote from '../../batch_comments/components/draft_note.vue';\n+import { discussionIntersectionObserverHandlerFactory } from '../../diffs/utils/discussions';\n import { getLocationHash, doesHashExistInUrl } from '../../lib/utils/url_utility';\n import placeholderNote from '../../vue_shared/components/notes/placeholder_note.vue';\n import placeholderSystemNote from '../../vue_shared/components/notes/placeholder_system_note.vue';\n@@ -38,6 +39,9 @@ export default {\n TimelineEntryItem,\n },\n mixins: [glFeatureFlagsMixin()],\n+ provide: {\n+ discussionObserverHandler: discussionIntersectionObserverHandlerFactory(),\n+ },\n props: {\n noteableData: {\n type: 
Object,\n"},{"old_path":"app/assets/javascripts/pages/admin/dev_ops_report/index.js","new_path":"app/assets/javascripts/pages/admin/dev_ops_report/index.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,5 +1,5 @@\n-import initDevOpsScore from '~/analytics/devops_report/devops_score';\n-import initDevOpsScoreDisabledServicePing from '~/analytics/devops_report/devops_score_disabled_service_ping';\n+import initDevOpsScore from '~/analytics/devops_reports/devops_score';\n+import initDevOpsScoreDisabledServicePing from '~/analytics/devops_reports/devops_score_disabled_service_ping';\n \n initDevOpsScoreDisabledServicePing();\n initDevOpsScore();\n"},{"old_path":"app/assets/javascripts/runner/components/cells/runner_actions_cell.vue","new_path":"app/assets/javascripts/runner/components/cells/runner_actions_cell.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -3,7 +3,7 @@ import { GlButton, GlButtonGroup, GlTooltipDirective } from '@gitlab/ui';\n import createFlash from '~/flash';\n import { __, s__ } from '~/locale';\n import runnerDeleteMutation from '~/runner/graphql/runner_delete.mutation.graphql';\n-import runnerUpdateMutation from '~/runner/graphql/runner_update.mutation.graphql';\n+import runnerActionsUpdateMutation from '~/runner/graphql/runner_actions_update.mutation.graphql';\n import { captureException } from '~/runner/sentry_utils';\n \n const i18n = {\n@@ -71,7 +71,7 @@ export default {\n runnerUpdate: { errors },\n },\n } = await this.$apollo.mutate({\n- mutation: runnerUpdateMutation,\n+ mutation: runnerActionsUpdateMutation,\n variables: {\n input: {\n id: 
this.runner.id,\n"},{"old_path":"app/assets/javascripts/runner/components/cells/runner_type_cell.vue","new_path":"app/assets/javascripts/runner/components/cells/runner_status_cell.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":"@@ -1,15 +1,15 @@\n \u003cscript\u003e\n import { GlTooltipDirective } from '@gitlab/ui';\n-import RunnerTypeBadge from '../runner_type_badge.vue';\n-import RunnerStateLockedBadge from '../runner_state_locked_badge.vue';\n-import RunnerStatePausedBadge from '../runner_state_paused_badge.vue';\n+\n+import RunnerContactedStateBadge from '../runner_contacted_state_badge.vue';\n+import RunnerPausedBadge from '../runner_paused_badge.vue';\n+\n import { I18N_LOCKED_RUNNER_DESCRIPTION, I18N_PAUSED_RUNNER_DESCRIPTION } from '../../constants';\n \n export default {\n components: {\n- RunnerTypeBadge,\n- RunnerStateLockedBadge,\n- RunnerStatePausedBadge,\n+ RunnerContactedStateBadge,\n+ RunnerPausedBadge,\n },\n directives: {\n GlTooltip: GlTooltipDirective,\n@@ -21,12 +21,6 @@ export default {\n },\n },\n computed: {\n- runnerType() {\n- return this.runner.runnerType;\n- },\n- locked() {\n- return this.runner.locked;\n- },\n paused() {\n return !this.runner.active;\n },\n@@ -40,8 +34,7 @@ export default {\n \n \u003ctemplate\u003e\n \u003cdiv\u003e\n- \u003crunner-type-badge :type=\"runnerType\" size=\"sm\" /\u003e\n- \u003crunner-state-locked-badge v-if=\"locked\" size=\"sm\" /\u003e\n- \u003crunner-state-paused-badge v-if=\"paused\" size=\"sm\" /\u003e\n+ \u003crunner-contacted-state-badge :runner=\"runner\" size=\"sm\" /\u003e\n+ \u003crunner-paused-badge v-if=\"paused\" size=\"sm\" /\u003e\n \u003c/div\u003e\n 
\u003c/template\u003e\n"},{"old_path":"app/assets/javascripts/runner/components/cells/runner_summary_cell.vue","new_path":"app/assets/javascripts/runner/components/cells/runner_summary_cell.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,11 +1,21 @@\n \u003cscript\u003e\n+import { GlIcon, GlTooltipDirective } from '@gitlab/ui';\n+\n import TooltipOnTruncate from '~/vue_shared/components/tooltip_on_truncate.vue';\n import RunnerName from '../runner_name.vue';\n+import RunnerTypeBadge from '../runner_type_badge.vue';\n+\n+import { I18N_LOCKED_RUNNER_DESCRIPTION } from '../../constants';\n \n export default {\n components: {\n+ GlIcon,\n TooltipOnTruncate,\n RunnerName,\n+ RunnerTypeBadge,\n+ },\n+ directives: {\n+ GlTooltip: GlTooltipDirective,\n },\n props: {\n runner: {\n@@ -14,10 +24,19 @@ export default {\n },\n },\n computed: {\n+ runnerType() {\n+ return this.runner.runnerType;\n+ },\n+ locked() {\n+ return this.runner.locked;\n+ },\n description() {\n return this.runner.description;\n },\n },\n+ i18n: {\n+ I18N_LOCKED_RUNNER_DESCRIPTION,\n+ },\n };\n \u003c/script\u003e\n \n@@ -26,6 +45,14 @@ export default {\n \u003cslot :runner=\"runner\" name=\"runner-name\"\u003e\n \u003crunner-name :runner=\"runner\" /\u003e\n \u003c/slot\u003e\n+\n+ \u003crunner-type-badge :type=\"runnerType\" size=\"sm\" /\u003e\n+ \u003cgl-icon\n+ v-if=\"locked\"\n+ v-gl-tooltip\n+ :title=\"$options.i18n.I18N_LOCKED_RUNNER_DESCRIPTION\"\n+ name=\"lock\"\n+ /\u003e\n \u003ctooltip-on-truncate class=\"gl-display-block\" :title=\"description\" truncate-target=\"child\"\u003e\n \u003cdiv class=\"gl-text-truncate\"\u003e\n {{ description }}\n"},{"old_path":"app/assets/javascripts/runner/components/runner_contacted_state_badge.vue","new_path":"app/assets/javascripts/runner/components/runner_contacted_state_badge.vue","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,69 
@@\n+\u003cscript\u003e\n+import { GlBadge, GlTooltipDirective } from '@gitlab/ui';\n+import { s__, sprintf } from '~/locale';\n+import { getTimeago } from '~/lib/utils/datetime_utility';\n+import {\n+ I18N_ONLINE_RUNNER_DESCRIPTION,\n+ I18N_OFFLINE_RUNNER_DESCRIPTION,\n+ I18N_NOT_CONNECTED_RUNNER_DESCRIPTION,\n+ STATUS_ONLINE,\n+ STATUS_OFFLINE,\n+ STATUS_NOT_CONNECTED,\n+} from '../constants';\n+\n+export default {\n+ components: {\n+ GlBadge,\n+ },\n+ directives: {\n+ GlTooltip: GlTooltipDirective,\n+ },\n+ props: {\n+ runner: {\n+ required: true,\n+ type: Object,\n+ },\n+ },\n+ computed: {\n+ contactedAtTimeAgo() {\n+ if (this.runner.contactedAt) {\n+ return getTimeago().format(this.runner.contactedAt);\n+ }\n+ return null;\n+ },\n+ badge() {\n+ switch (this.runner.status) {\n+ case STATUS_ONLINE:\n+ return {\n+ variant: 'success',\n+ label: s__('Runners|online'),\n+ tooltip: sprintf(I18N_ONLINE_RUNNER_DESCRIPTION, {\n+ timeAgo: this.contactedAtTimeAgo,\n+ }),\n+ };\n+ case STATUS_OFFLINE:\n+ return {\n+ variant: 'muted',\n+ label: s__('Runners|offline'),\n+ tooltip: sprintf(I18N_OFFLINE_RUNNER_DESCRIPTION, {\n+ timeAgo: this.contactedAtTimeAgo,\n+ }),\n+ };\n+ case STATUS_NOT_CONNECTED:\n+ return {\n+ variant: 'muted',\n+ label: s__('Runners|not connected'),\n+ tooltip: I18N_NOT_CONNECTED_RUNNER_DESCRIPTION,\n+ };\n+ default:\n+ return null;\n+ }\n+ },\n+ },\n+};\n+\u003c/script\u003e\n+\u003ctemplate\u003e\n+ \u003cgl-badge v-if=\"badge\" v-gl-tooltip=\"badge.tooltip\" :variant=\"badge.variant\" v-bind=\"$attrs\"\u003e\n+ {{ badge.label }}\n+ \u003c/gl-badge\u003e\n+\u003c/template\u003e\n"},{"old_path":"app/assets/javascripts/runner/components/runner_list.vue","new_path":"app/assets/javascripts/runner/components/runner_list.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -5,7 +5,7 @@ import { __, s__ } from '~/locale';\n import TimeAgo from '~/vue_shared/components/time_ago_tooltip.vue';\n import 
RunnerActionsCell from './cells/runner_actions_cell.vue';\n import RunnerSummaryCell from './cells/runner_summary_cell.vue';\n-import RunnerTypeCell from './cells/runner_type_cell.vue';\n+import RunnerStatusCell from './cells/runner_status_cell.vue';\n import RunnerTags from './runner_tags.vue';\n \n const tableField = ({ key, label = '', width = 10 }) =\u003e {\n@@ -36,7 +36,7 @@ export default {\n RunnerActionsCell,\n RunnerSummaryCell,\n RunnerTags,\n- RunnerTypeCell,\n+ RunnerStatusCell,\n },\n directives: {\n GlTooltip: GlTooltipDirective,\n@@ -63,8 +63,8 @@ export default {\n },\n },\n fields: [\n- tableField({ key: 'type', label: __('Type/State') }),\n- tableField({ key: 'summary', label: s__('Runners|Runner'), width: 30 }),\n+ tableField({ key: 'status', label: s__('Runners|Status') }),\n+ tableField({ key: 'summary', label: s__('Runners|Runner ID'), width: 30 }),\n tableField({ key: 'version', label: __('Version') }),\n tableField({ key: 'ipAddress', label: __('IP Address') }),\n tableField({ key: 'tagList', label: __('Tags'), width: 20 }),\n@@ -88,8 +88,8 @@ export default {\n \u003cgl-skeleton-loader v-for=\"i in 4\" :key=\"i\" /\u003e\n \u003c/template\u003e\n \n- \u003ctemplate #cell(type)=\"{ item }\"\u003e\n- \u003crunner-type-cell :runner=\"item\" /\u003e\n+ \u003ctemplate #cell(status)=\"{ item }\"\u003e\n+ \u003crunner-status-cell :runner=\"item\" /\u003e\n \u003c/template\u003e\n \n \u003ctemplate #cell(summary)=\"{ item, index 
}\"\u003e\n"},{"old_path":"app/assets/javascripts/runner/components/runner_state_paused_badge.vue","new_path":"app/assets/javascripts/runner/components/runner_paused_badge.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":""},{"old_path":"app/assets/javascripts/runner/components/runner_state_locked_badge.vue","new_path":"app/assets/javascripts/runner/components/runner_state_locked_badge.vue","a_mode":"100644","b_mode":"0","new_file":false,"renamed_file":false,"deleted_file":true,"diff":"@@ -1,25 +0,0 @@\n-\u003cscript\u003e\n-import { GlBadge, GlTooltipDirective } from '@gitlab/ui';\n-import { I18N_LOCKED_RUNNER_DESCRIPTION } from '../constants';\n-\n-export default {\n- components: {\n- GlBadge,\n- },\n- directives: {\n- GlTooltip: GlTooltipDirective,\n- },\n- i18n: {\n- I18N_LOCKED_RUNNER_DESCRIPTION,\n- },\n-};\n-\u003c/script\u003e\n-\u003ctemplate\u003e\n- \u003cgl-badge\n- v-gl-tooltip=\"$options.i18n.I18N_LOCKED_RUNNER_DESCRIPTION\"\n- variant=\"warning\"\n- v-bind=\"$attrs\"\n- \u003e\n- {{ s__('Runners|locked') }}\n- \u003c/gl-badge\u003e\n-\u003c/template\u003e\n"},{"old_path":"app/assets/javascripts/runner/components/runner_type_alert.vue","new_path":"app/assets/javascripts/runner/components/runner_type_alert.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -9,17 +9,14 @@ const ALERT_DATA = {\n message: s__(\n 'Runners|This runner is available to all groups and projects in your GitLab instance.',\n ),\n- variant: 'success',\n anchor: 'shared-runners',\n },\n [GROUP_TYPE]: {\n message: s__('Runners|This runner is available to all projects and subgroups in a group.'),\n- variant: 'success',\n anchor: 'group-runners',\n },\n [PROJECT_TYPE]: {\n message: s__('Runners|This runner is associated with one or more projects.'),\n- variant: 'info',\n anchor: 'specific-runners',\n },\n };\n@@ -50,7 +47,7 @@ export default {\n };\n 
\u003c/script\u003e\n \u003ctemplate\u003e\n- \u003cgl-alert v-if=\"alert\" :variant=\"alert.variant\" :dismissible=\"false\"\u003e\n+ \u003cgl-alert v-if=\"alert\" variant=\"info\" :dismissible=\"false\"\u003e\n {{ alert.message }}\n \u003cgl-link :href=\"helpHref\"\u003e{{ __('Learn more.') }}\u003c/gl-link\u003e\n \u003c/gl-alert\u003e\n"},{"old_path":"app/assets/javascripts/runner/components/runner_type_badge.vue","new_path":"app/assets/javascripts/runner/components/runner_type_badge.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -12,17 +12,14 @@ import {\n \n const BADGE_DATA = {\n [INSTANCE_TYPE]: {\n- variant: 'success',\n text: s__('Runners|shared'),\n tooltip: I18N_INSTANCE_RUNNER_DESCRIPTION,\n },\n [GROUP_TYPE]: {\n- variant: 'success',\n text: s__('Runners|group'),\n tooltip: I18N_GROUP_RUNNER_DESCRIPTION,\n },\n [PROJECT_TYPE]: {\n- variant: 'info',\n text: s__('Runners|specific'),\n tooltip: I18N_PROJECT_RUNNER_DESCRIPTION,\n },\n@@ -53,7 +50,7 @@ export default {\n };\n \u003c/script\u003e\n \u003ctemplate\u003e\n- \u003cgl-badge v-if=\"badge\" v-gl-tooltip=\"badge.tooltip\" :variant=\"badge.variant\" v-bind=\"$attrs\"\u003e\n+ \u003cgl-badge v-if=\"badge\" v-gl-tooltip=\"badge.tooltip\" variant=\"info\" v-bind=\"$attrs\"\u003e\n {{ badge.text }}\n \u003c/gl-badge\u003e\n \u003c/template\u003e\n"},{"old_path":"app/assets/javascripts/runner/constants.js","new_path":"app/assets/javascripts/runner/constants.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -6,11 +6,24 @@ export const GROUP_RUNNER_COUNT_LIMIT = 1000;\n export const I18N_FETCH_ERROR = s__('Runners|Something went wrong while fetching runner data.');\n export const I18N_DETAILS_TITLE = s__('Runners|Runner #%{runner_id}');\n \n+// Type\n export const I18N_INSTANCE_RUNNER_DESCRIPTION = s__('Runners|Available to all projects');\n export const I18N_GROUP_RUNNER_DESCRIPTION 
= s__(\n 'Runners|Available to all projects and subgroups in the group',\n );\n export const I18N_PROJECT_RUNNER_DESCRIPTION = s__('Runners|Associated with one or more projects');\n+\n+// Status\n+export const I18N_ONLINE_RUNNER_DESCRIPTION = s__(\n+ 'Runners|Runner is online; last contact was %{timeAgo}',\n+);\n+export const I18N_OFFLINE_RUNNER_DESCRIPTION = s__(\n+ 'Runners|No recent contact from this runner; last contact was %{timeAgo}',\n+);\n+export const I18N_NOT_CONNECTED_RUNNER_DESCRIPTION = s__(\n+ 'Runners|This runner has never connected to this instance',\n+);\n+\n export const I18N_LOCKED_RUNNER_DESCRIPTION = s__('Runners|You cannot assign to other projects');\n export const I18N_PAUSED_RUNNER_DESCRIPTION = s__('Runners|Not available to run jobs');\n \n"},{"old_path":"app/assets/javascripts/runner/graphql/runner_actions_update.mutation.graphql","new_path":"app/assets/javascripts/runner/graphql/runner_actions_update.mutation.graphql","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,14 @@\n+#import \"~/runner/graphql/runner_node.fragment.graphql\"\n+\n+# Mutation for updates within the runners list via action\n+# buttons (play, pause, ...), loads attributes shown in the\n+# runner list.\n+\n+mutation runnerActionsUpdate($input: RunnerUpdateInput!) 
{\n+ runnerUpdate(input: $input) {\n+ runner {\n+ ...RunnerNode\n+ }\n+ errors\n+ }\n+}\n"},{"old_path":"app/assets/javascripts/runner/graphql/runner_node.fragment.graphql","new_path":"app/assets/javascripts/runner/graphql/runner_node.fragment.graphql","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -10,4 +10,5 @@ fragment RunnerNode on CiRunner {\n locked\n tagList\n contactedAt\n+ status\n }\n"},{"old_path":"app/assets/javascripts/runner/graphql/runner_update.mutation.graphql","new_path":"app/assets/javascripts/runner/graphql/runner_update.mutation.graphql","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,5 +1,8 @@\n #import \"ee_else_ce/runner/graphql/runner_details.fragment.graphql\"\n \n+# Mutation for updates from the runner form, loads\n+# attributes shown in the runner details.\n+\n mutation runnerUpdate($input: RunnerUpdateInput!) {\n runnerUpdate(input: $input) {\n runner {\n"},{"old_path":"app/assets/javascripts/vue_merge_request_widget/components/added_commit_message.vue","new_path":"app/assets/javascripts/vue_merge_request_widget/components/added_commit_message.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -40,9 +40,9 @@ export default {\n },\n message() {\n return this.isFastForwardEnabled\n- ? s__('mrWidgetCommitsAdded|%{commitCount} will be added to %{targetBranch}.')\n+ ? 
s__('mrWidgetCommitsAdded|Adds %{commitCount} to %{targetBranch}.')\n : s__(\n- 'mrWidgetCommitsAdded|%{commitCount} and %{mergeCommitCount} will be added to %{targetBranch}%{squashedCommits}.',\n+ 'mrWidgetCommitsAdded|Adds %{commitCount} and %{mergeCommitCount} to %{targetBranch}%{squashedCommits}.',\n );\n },\n textDecorativeComponent() {\n@@ -69,7 +69,7 @@ export default {\n \u003c/template\u003e\n \u003ctemplate #squashedCommits\u003e\n \u003ctemplate v-if=\"glFeatures.restructuredMrWidget \u0026\u0026 isSquashEnabled\"\u003e\n- {{ __('(commits will be squashed)') }}\u003c/template\n+ {{ n__('(squashes %d commit)', '(squashes %d commits)', commitsCount) }}\u003c/template\n \u003e\u003c/template\n \u003e\n \u003c/gl-sprintf\u003e\n"},{"old_path":"app/assets/javascripts/vue_merge_request_widget/components/source_branch_removal_status.vue","new_path":"app/assets/javascripts/vue_merge_request_widget/components/source_branch_removal_status.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -4,7 +4,7 @@ import { __ } from '../../locale';\n \n export default {\n i18n: {\n- removesBranchText: __('The source branch will be deleted'),\n+ removesBranchText: __('Deletes the source branch'),\n tooltipTitle: __('A user with write access to the source branch selected this option'),\n },\n components: {\n"},{"old_path":"app/assets/javascripts/vue_merge_request_widget/components/states/mr_widget_auto_merge_enabled.vue","new_path":"app/assets/javascripts/vue_merge_request_widget/components/states/mr_widget_auto_merge_enabled.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -177,10 +177,10 @@ export default {\n \u003c/h4\u003e\n \u003csection class=\"mr-info-list\"\u003e\n \u003cp v-if=\"shouldRemoveSourceBranch\"\u003e\n- {{ s__('mrWidget|The source branch will be deleted') }}\n+ {{ s__('mrWidget|Deletes the source branch') }}\n \u003c/p\u003e\n \u003cp 
v-else class=\"gl-display-flex\"\u003e\n- \u003cspan class=\"gl-mr-3\"\u003e{{ s__('mrWidget|The source branch will not be deleted') }}\u003c/span\u003e\n+ \u003cspan class=\"gl-mr-3\"\u003e{{ s__('mrWidget|Does not delete the source branch') }}\u003c/span\u003e\n \u003cgl-button\n v-if=\"canRemoveSourceBranch\"\n :loading=\"isRemovingSourceBranch\"\n"},{"old_path":"app/assets/javascripts/vue_merge_request_widget/components/states/mr_widget_merging.vue","new_path":"app/assets/javascripts/vue_merge_request_widget/components/states/mr_widget_merging.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -32,7 +32,7 @@ export default {\n \u003c/h4\u003e\n \u003csection class=\"mr-info-list\"\u003e\n \u003cp\u003e\n- {{ s__('mrWidget|The changes will be merged into') }}\n+ {{ s__('mrWidget|Merges changes into') }}\n \u003cspan class=\"label-branch\"\u003e\n \u003ca :href=\"mr.targetBranchPath\"\u003e{{ mr.targetBranch }}\u003c/a\u003e\n \u003c/span\u003e\n"},{"old_path":"app/assets/javascripts/vue_merge_request_widget/components/states/ready_to_merge.vue","new_path":"app/assets/javascripts/vue_merge_request_widget/components/states/ready_to_merge.vue","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -710,10 +710,10 @@ export default {\n \u003c/li\u003e\n \u003cli class=\"gl-line-height-normal\"\u003e\n \u003ctemplate v-if=\"removeSourceBranch\"\u003e\n- {{ __('Source branch will be deleted.') }}\n+ {{ __('Deletes the source branch.') }}\n \u003c/template\u003e\n \u003ctemplate v-else\u003e\n- {{ __('Source branch will not be deleted.') }}\n+ {{ __('Does not delete the source branch.') }}\n \u003c/template\u003e\n \u003c/li\u003e\n \u003cli v-if=\"mr.relatedLinks\" 
class=\"gl-line-height-normal\"\u003e\n"},{"old_path":"app/assets/stylesheets/framework/files.scss","new_path":"app/assets/stylesheets/framework/files.scss","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -227,7 +227,7 @@\n // IMPORTANT PERFORMANCE OPTIMIZATION\n //\n // When viewinng a blame with many commits a lot of content is rendered on the page.\n- // Two selectors below ensure that we only render what is visible to the user, thus reducing TBT in the browser.\n+ // content-visibility rules below ensure that we only render what is visible to the user, thus reducing TBT in the browser.\n .commit {\n content-visibility: auto;\n contain-intrinsic-size: 1px 3em;\n@@ -237,6 +237,10 @@\n content-visibility: auto;\n contain-intrinsic-size: 1px 1.1875rem;\n }\n+\n+ .line-numbers {\n+ content-visibility: auto;\n+ }\n }\n \n \u0026.logs {\n"},{"old_path":"app/graphql/mutations/issues/set_crm_contacts.rb","new_path":"app/graphql/mutations/issues/set_crm_contacts.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,48 @@\n+# frozen_string_literal: true\n+\n+module Mutations\n+ module Issues\n+ class SetCrmContacts \u003c Base\n+ graphql_name 'IssueSetCrmContacts'\n+\n+ argument :crm_contact_ids,\n+ [::Types::GlobalIDType[::CustomerRelations::Contact]],\n+ required: true,\n+ description: 'Customer relations contact IDs to set. Replaces existing contacts by default.'\n+\n+ argument :operation_mode,\n+ Types::MutationOperationModeEnum,\n+ required: false,\n+ description: 'Changes the operation mode. 
Defaults to REPLACE.'\n+\n+ def resolve(project_path:, iid:, crm_contact_ids:, operation_mode: Types::MutationOperationModeEnum.enum[:replace])\n+ issue = authorized_find!(project_path: project_path, iid: iid)\n+ project = issue.project\n+ raise Gitlab::Graphql::Errors::ResourceNotAvailable, 'Feature disabled' unless Feature.enabled?(:customer_relations, project.group, default_enabled: :yaml)\n+\n+ crm_contact_ids = crm_contact_ids.compact.map do |crm_contact_id|\n+ raise Gitlab::Graphql::Errors::ArgumentError, \"Contact #{crm_contact_id} is invalid.\" unless crm_contact_id.respond_to?(:model_id)\n+\n+ crm_contact_id.model_id.to_i\n+ end\n+\n+ attribute_name = case operation_mode\n+ when Types::MutationOperationModeEnum.enum[:append]\n+ :add_crm_contact_ids\n+ when Types::MutationOperationModeEnum.enum[:remove]\n+ :remove_crm_contact_ids\n+ else\n+ :crm_contact_ids\n+ end\n+\n+ response = ::Issues::SetCrmContactsService.new(project: project, current_user: current_user, params: { attribute_name =\u003e crm_contact_ids })\n+ .execute(issue)\n+\n+ {\n+ issue: issue,\n+ errors: response.errors\n+ }\n+ end\n+ end\n+ end\n+end\n"},{"old_path":"app/graphql/types/mutation_type.rb","new_path":"app/graphql/types/mutation_type.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -49,6 +49,7 @@ class MutationType \u003c BaseObject\n mount_mutation Mutations::Environments::CanaryIngress::Update\n mount_mutation Mutations::Issues::Create\n mount_mutation Mutations::Issues::SetAssignees\n+ mount_mutation Mutations::Issues::SetCrmContacts\n mount_mutation Mutations::Issues::SetConfidential\n mount_mutation Mutations::Issues::SetLocked\n mount_mutation Mutations::Issues::SetDueDate\n"},{"old_path":"app/helpers/application_settings_helper.rb","new_path":"app/helpers/application_settings_helper.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -404,6 +404,10 @@ def 
visible_attributes\n :keep_latest_artifact,\n :whats_new_variant,\n :user_deactivation_emails_enabled,\n+ :sentry_enabled,\n+ :sentry_dsn,\n+ :sentry_clientside_dsn,\n+ :sentry_environment,\n :sidekiq_job_limiter_mode,\n :sidekiq_job_limiter_compression_threshold_bytes,\n :sidekiq_job_limiter_limit_bytes,\n"},{"old_path":"app/helpers/issues_helper.rb","new_path":"app/helpers/issues_helper.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,6 +1,8 @@\n # frozen_string_literal: true\n \n module IssuesHelper\n+ include Issues::IssueTypeHelpers\n+\n def issue_css_classes(issue)\n classes = [\"issue\"]\n classes \u003c\u003c \"closed\" if issue.closed?\n"},{"old_path":"app/models/application_setting.rb","new_path":"app/models/application_setting.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -536,6 +536,18 @@ def self.kroki_formats_attributes\n validates :sidekiq_job_limiter_limit_bytes,\n numericality: { only_integer: true, greater_than_or_equal_to: 0 }\n \n+ validates :sentry_enabled,\n+ inclusion: { in: [true, false], message: _('must be a boolean value') }\n+ validates :sentry_dsn,\n+ addressable_url: true, presence: true, length: { maximum: 255 },\n+ if: :sentry_enabled?\n+ validates :sentry_clientside_dsn,\n+ addressable_url: true, allow_blank: true, length: { maximum: 255 },\n+ if: :sentry_enabled?\n+ validates :sentry_environment,\n+ presence: true, length: { maximum: 255 },\n+ if: :sentry_enabled?\n+\n attr_encrypted :asset_proxy_secret_key,\n mode: :per_attribute_iv,\n key: Settings.attr_encrypted_db_key_base_truncated,\n"},{"old_path":"app/models/concerns/alert_event_lifecycle.rb","new_path":"app/models/concerns/alert_event_lifecycle.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -41,8 +41,6 @@ module AlertEventLifecycle\n scope :firing, -\u003e { where(status: 
status_value_for(:firing)) }\n scope :resolved, -\u003e { where(status: status_value_for(:resolved)) }\n \n- scope :count_by_project_id, -\u003e { group(:project_id).count }\n-\n def self.status_value_for(name)\n state_machines[:status].states[name].value\n end\n"},{"old_path":"app/models/concerns/issuable.rb","new_path":"app/models/concerns/issuable.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -92,7 +92,6 @@ def award_emojis_loaded?\n scope :recent, -\u003e { reorder(id: :desc) }\n scope :of_projects, -\u003e(ids) { where(project_id: ids) }\n scope :opened, -\u003e { with_state(:opened) }\n- scope :only_opened, -\u003e { with_state(:opened) }\n scope :closed, -\u003e { with_state(:closed) }\n \n # rubocop:disable GitlabSecurity/SqlInjection\n"},{"old_path":"app/models/concerns/milestoneable.rb","new_path":"app/models/concerns/milestoneable.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -14,7 +14,6 @@ module Milestoneable\n \n validate :milestone_is_valid\n \n- scope :of_milestones, -\u003e(ids) { where(milestone_id: ids) }\n scope :any_milestone, -\u003e { where.not(milestone_id: nil) }\n scope :with_milestone, -\u003e(title) { left_joins_milestones.where(milestones: { title: title }) }\n scope :without_particular_milestone, -\u003e(title) { left_outer_joins(:milestone).where(\"milestones.title != ? 
OR milestone_id IS NULL\", title) }\n"},{"old_path":"app/models/customer_relations/issue_contact.rb","new_path":"app/models/customer_relations/issue_contact.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -15,6 +15,6 @@ def contact_belongs_to_issue_group\n return unless issue\u0026.project\u0026.namespace_id\n return if contact.group_id == issue.project.namespace_id\n \n- errors.add(:base, _('The contact does not belong to the same group as the issue.'))\n+ errors.add(:base, _('The contact does not belong to the same group as the issue'))\n end\n end\n"},{"old_path":"app/models/group.rb","new_path":"app/models/group.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -194,13 +194,8 @@ def preset_root_ancestor_for(groups)\n def ids_with_disabled_email(groups)\n inner_groups = Group.where('id = namespaces_with_emails_disabled.id')\n \n- inner_ancestors = if Feature.enabled?(:linear_group_ancestor_scopes, default_enabled: :yaml)\n- inner_groups.self_and_ancestors\n- else\n- Gitlab::ObjectHierarchy.new(inner_groups).base_and_ancestors\n- end\n-\n- inner_query = inner_ancestors\n+ inner_query = inner_groups\n+ .self_and_ancestors\n .where(emails_disabled: true)\n .select('1')\n .limit(1)\n"},{"old_path":"app/policies/issue_policy.rb","new_path":"app/policies/issue_policy.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -12,6 +12,9 @@ class IssuePolicy \u003c IssuablePolicy\n @user \u0026\u0026 IssueCollection.new([@subject]).visible_to(@user).any?\n end\n \n+ desc \"User can read contacts belonging to the issue group\"\n+ condition(:can_read_crm_contacts, scope: :subject) { @user.can?(:read_crm_contact, @subject.project.group) }\n+\n desc \"Issue is confidential\"\n condition(:confidential, scope: :subject) { @subject.confidential? 
}\n \n@@ -77,6 +80,10 @@ class IssuePolicy \u003c IssuablePolicy\n rule { ~persisted \u0026 can?(:create_issue) }.policy do\n enable :set_confidentiality\n end\n+\n+ rule { can?(:set_issue_metadata) \u0026 can_read_crm_contacts }.policy do\n+ enable :set_issue_crm_contacts\n+ end\n end\n \n IssuePolicy.prepend_mod_with('IssuePolicy')\n"},{"old_path":"app/services/authorized_project_update/project_access_changed_service.rb","new_path":"app/services/authorized_project_update/project_access_changed_service.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,19 @@\n+# frozen_string_literal: true\n+\n+module AuthorizedProjectUpdate\n+ class ProjectAccessChangedService\n+ def initialize(project_ids)\n+ @project_ids = Array.wrap(project_ids)\n+ end\n+\n+ def execute(blocking: true)\n+ bulk_args = @project_ids.map { |id| [id] }\n+\n+ if blocking\n+ AuthorizedProjectUpdate::ProjectRecalculateWorker.bulk_perform_and_wait(bulk_args)\n+ else\n+ AuthorizedProjectUpdate::ProjectRecalculateWorker.bulk_perform_async(bulk_args) # rubocop:disable Scalability/BulkPerformWithContext\n+ end\n+ end\n+ end\n+end\n"},{"old_path":"app/services/concerns/issues/issue_type_helpers.rb","new_path":"app/services/concerns/issues/issue_type_helpers.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,12 @@\n+# frozen_string_literal: true\n+\n+module Issues\n+ module IssueTypeHelpers\n+ # @param object [Issue, Project]\n+ # @param issue_type [String, Symbol]\n+ def create_issue_type_allowed?(object, issue_type)\n+ WorkItem::Type.base_types.key?(issue_type.to_s) \u0026\u0026\n+ can?(current_user, :\"create_#{issue_type}\", object)\n+ end\n+ end\n+end\n"},{"old_path":"app/services/groups/transfer_service.rb","new_path":"app/services/groups/transfer_service.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -175,21 +175,18 @@ 
def ensure_ownership\n end\n \n def refresh_project_authorizations\n- ProjectAuthorization.where(project_id: @group.all_projects.select(:id)).delete_all # rubocop: disable CodeReuse/ActiveRecord\n+ projects_to_update = Set.new\n \n- # refresh authorized projects for current_user immediately\n- current_user.refresh_authorized_projects\n-\n- # schedule refreshing projects for all the members of the group\n- @group.refresh_members_authorized_projects\n+ # All projects in this hierarchy need to have their project authorizations recalculated\n+ @group.all_projects.each_batch { |prjs| projects_to_update.merge(prjs.ids) } # rubocop: disable CodeReuse/ActiveRecord\n \n # When a group is transferred, it also affects who gets access to the projects shared to\n # the subgroups within its hierarchy, so we also schedule jobs that refresh authorizations for all such shared projects.\n- project_group_shares_within_the_hierarchy = ProjectGroupLink.in_group(group.self_and_descendants.select(:id))\n-\n- project_group_shares_within_the_hierarchy.find_each do |project_group_link|\n- AuthorizedProjectUpdate::ProjectRecalculateWorker.perform_async(project_group_link.project_id)\n+ ProjectGroupLink.in_group(@group.self_and_descendants.select(:id)).each_batch do |project_group_links|\n+ projects_to_update.merge(project_group_links.pluck(:project_id)) # rubocop: disable CodeReuse/ActiveRecord\n end\n+\n+ AuthorizedProjectUpdate::ProjectAccessChangedService.new(projects_to_update.to_a).execute unless projects_to_update.empty?\n end\n \n def raise_transfer_error(message)\n"},{"old_path":"app/services/issues/base_service.rb","new_path":"app/services/issues/base_service.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -3,6 +3,7 @@\n module Issues\n class BaseService \u003c ::IssuableBaseService\n include IncidentManagement::UsageData\n+ include IssueTypeHelpers\n \n def hook_data(issue, action, old_associations: {})\n hook_data = 
issue.to_hook_data(current_user, old_associations: old_associations)\n@@ -44,7 +45,7 @@ def find_work_item_type_id(issue_type)\n def filter_params(issue)\n super\n \n- params.delete(:issue_type) unless issue_type_allowed?(issue)\n+ params.delete(:issue_type) unless create_issue_type_allowed?(issue, params[:issue_type])\n filter_incident_label(issue) if params[:issue_type]\n \n moved_issue = params.delete(:moved_issue)\n@@ -89,12 +90,6 @@ def delete_milestone_total_issue_counter_cache(milestone)\n Milestones::IssuesCountService.new(milestone).delete_cache\n end\n \n- # @param object [Issue, Project]\n- def issue_type_allowed?(object)\n- WorkItem::Type.base_types.key?(params[:issue_type]) \u0026\u0026\n- can?(current_user, :\"create_#{params[:issue_type]}\", object)\n- end\n-\n # @param issue [Issue]\n def filter_incident_label(issue)\n return unless add_incident_label?(issue) || remove_incident_label?(issue)\n"},{"old_path":"app/services/issues/build_service.rb","new_path":"app/services/issues/build_service.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -80,7 +80,7 @@ def allowed_issue_params\n ]\n \n allowed_params \u003c\u003c :milestone_id if can?(current_user, :admin_issue, project)\n- allowed_params \u003c\u003c :issue_type if issue_type_allowed?(project)\n+ allowed_params \u003c\u003c :issue_type if create_issue_type_allowed?(project, params[:issue_type])\n \n params.slice(*allowed_params)\n end\n"},{"old_path":"app/services/issues/set_crm_contacts_service.rb","new_path":"app/services/issues/set_crm_contacts_service.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,90 @@\n+# frozen_string_literal: true\n+\n+module Issues\n+ class SetCrmContactsService \u003c ::BaseProjectService\n+ attr_accessor :issue, :errors\n+\n+ MAX_ADDITIONAL_CONTACTS = 6\n+\n+ def execute(issue)\n+ @issue = issue\n+ @errors = []\n+\n+ return error_no_permissions 
unless allowed?\n+ return error_invalid_params unless valid_params?\n+\n+ determine_changes if params[:crm_contact_ids]\n+\n+ return error_too_many if too_many?\n+\n+ add_contacts if params[:add_crm_contact_ids]\n+ remove_contacts if params[:remove_crm_contact_ids]\n+\n+ if issue.valid?\n+ ServiceResponse.success(payload: issue)\n+ else\n+ # The default error isn't very helpful: \"Issue customer relations contacts is invalid\"\n+ issue.errors.delete(:issue_customer_relations_contacts)\n+ issue.errors.add(:issue_customer_relations_contacts, errors.to_sentence)\n+ ServiceResponse.error(payload: issue, message: issue.errors.full_messages)\n+ end\n+ end\n+\n+ private\n+\n+ def determine_changes\n+ existing_contact_ids = issue.issue_customer_relations_contacts.map(\u0026:contact_id)\n+ params[:add_crm_contact_ids] = params[:crm_contact_ids] - existing_contact_ids\n+ params[:remove_crm_contact_ids] = existing_contact_ids - params[:crm_contact_ids]\n+ end\n+\n+ def add_contacts\n+ params[:add_crm_contact_ids].uniq.each do |contact_id|\n+ issue_contact = issue.issue_customer_relations_contacts.create(contact_id: contact_id)\n+\n+ unless issue_contact.persisted?\n+ # The validation ensures that the id exists and the user has permission\n+ errors \u003c\u003c \"#{contact_id}: The resource that you are attempting to access does not exist or you don't have permission to perform this action\"\n+ end\n+ end\n+ end\n+\n+ def remove_contacts\n+ issue.issue_customer_relations_contacts\n+ .where(contact_id: params[:remove_crm_contact_ids]) # rubocop: disable CodeReuse/ActiveRecord\n+ .delete_all\n+ end\n+\n+ def allowed?\n+ current_user\u0026.can?(:set_issue_crm_contacts, issue)\n+ end\n+\n+ def valid_params?\n+ set_present? ^ add_or_remove_present?\n+ end\n+\n+ def set_present?\n+ params[:crm_contact_ids].present?\n+ end\n+\n+ def add_or_remove_present?\n+ params[:add_crm_contact_ids].present? 
|| params[:remove_crm_contact_ids].present?\n+ end\n+\n+ def too_many?\n+ params[:add_crm_contact_ids] \u0026\u0026 params[:add_crm_contact_ids].length \u003e MAX_ADDITIONAL_CONTACTS\n+ end\n+\n+ def error_no_permissions\n+ ServiceResponse.error(message: ['You have insufficient permissions to set customer relations contacts for this issue'])\n+ end\n+\n+ def error_invalid_params\n+ ServiceResponse.error(message: ['You cannot combine crm_contact_ids with add_crm_contact_ids or remove_crm_contact_ids'])\n+ end\n+\n+ def error_too_many\n+ ServiceResponse.error(payload: issue, message: [\"You can only add up to #{MAX_ADDITIONAL_CONTACTS} contacts at one time\"])\n+ end\n+ end\n+end\n"},{"old_path":"app/services/projects/container_repository/cleanup_tags_service.rb","new_path":"app/services/projects/container_repository/cleanup_tags_service.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -146,8 +146,7 @@ def cache\n \n def caching_enabled?\n container_expiration_policy \u0026\u0026\n- older_than.present? 
\u0026\u0026\n- Feature.enabled?(:container_registry_expiration_policies_caching, @project)\n+ older_than.present?\n end\n \n def throttling_enabled?\n"},{"old_path":"app/views/admin/application_settings/_sentry.html.haml","new_path":"app/views/admin/application_settings/_sentry.html.haml","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,22 @@\n+= form_for @application_setting, url: metrics_and_profiling_admin_application_settings_path(anchor: 'js-sentry-settings'), html: { class: 'fieldset-form', id: 'sentry-settings' } do |f|\n+ = form_errors(@application_setting)\n+\n+ %span.text-muted\n+ = _('Changing any setting here requires an application restart')\n+\n+ %fieldset\n+ .form-group\n+ .form-check\n+ = f.check_box :sentry_enabled, class: 'form-check-input'\n+ = f.label :sentry_enabled, _('Enable Sentry error tracking'), class: 'form-check-label'\n+ .form-group\n+ = f.label :sentry_dsn, _('DSN'), class: 'label-light'\n+ = f.text_field :sentry_dsn, class: 'form-control gl-form-input', placeholder: 'https://public@sentry.example.com/1'\n+ .form-group\n+ = f.label :sentry_clientside_dsn, _('Clientside DSN'), class: 'label-light'\n+ = f.text_field :sentry_clientside_dsn, class: 'form-control gl-form-input', placeholder: 'https://public@sentry.example.com/2'\n+ .form-group\n+ = f.label :sentry_environment, _('Environment'), class: 'label-light'\n+ = f.text_field :sentry_environment, class: 'form-control gl-form-input', placeholder: Rails.env\n+\n+ = f.submit _('Save changes'), class: 'gl-button btn btn-confirm'\n"},{"old_path":"app/views/admin/application_settings/metrics_and_profiling.html.haml","new_path":"app/views/admin/application_settings/metrics_and_profiling.html.haml","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -54,3 +54,15 @@\n = render 'usage'\n \n = render_if_exists 'admin/application_settings/pseudonymizer_settings', expanded: 
expanded_by_default?\n+\n+- if Feature.enabled?(:configure_sentry_in_application_settings, default_enabled: :yaml)\n+ %section.settings.as-sentry.no-animate#js-sentry-settings{ class: ('expanded' if expanded_by_default?), data: { qa_selector: 'sentry_settings_content' } }\n+ .settings-header\n+ %h4\n+ = _('Sentry')\n+ %button.btn.gl-button.btn-default.js-settings-toggle{ type: 'button' }\n+ = expanded_by_default? ? _('Collapse') : _('Expand')\n+ %p\n+ = _('Configure Sentry integration for error tracking')\n+ .settings-content\n+ = render 'sentry'\n"},{"old_path":"app/views/admin/dev_ops_report/_report.html.haml","new_path":"app/views/admin/dev_ops_report/_score.html.haml","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":""},{"old_path":"app/views/admin/dev_ops_report/show.html.haml","new_path":"app/views/admin/dev_ops_report/show.html.haml","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -6,5 +6,5 @@\n - if show_adoption?\n = render_if_exists 'admin/dev_ops_report/devops_tabs'\n - else\n- = render 'report'\n+ = render 'score'\n \n"},{"old_path":"app/views/projects/product_analytics/_links.html.haml","new_path":"app/views/projects/product_analytics/_links.html.haml","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,10 +1,5 @@\n-.mb-3\n- %ul.nav-links\n- = nav_link(path: 'product_analytics#index') do\n- = link_to _('Events'), project_product_analytics_path(@project)\n- = nav_link(path: 'product_analytics#graphs') do\n- = link_to 'Graphs', graphs_project_product_analytics_path(@project)\n- = nav_link(path: 'product_analytics#test') do\n- = link_to _('Test'), test_project_product_analytics_path(@project)\n- = nav_link(path: 'product_analytics#setup') do\n- = link_to _('Setup'), setup_project_product_analytics_path(@project)\n+= gl_tabs_nav({ class: 'mb-3'}) do\n+ = gl_tab_link_to _('Events'), 
project_product_analytics_path(@project)\n+ = gl_tab_link_to _('Graphs'), graphs_project_product_analytics_path(@project)\n+ = gl_tab_link_to _('Test'), test_project_product_analytics_path(@project)\n+ = gl_tab_link_to _('Setup'), setup_project_product_analytics_path(@project)\n"},{"old_path":"app/views/shared/issuable/form/_type_selector.html.haml","new_path":"app/views/shared/issuable/form/_type_selector.html.haml","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -18,17 +18,19 @@\n = sprite_icon('close', size: 16, css_class: 'dropdown-menu-close-icon')\n .dropdown-content{ data: { testid: 'issue-type-select-dropdown' } }\n %ul\n- %li.js-filter-issuable-type\n- = link_to new_project_issue_path(@project), class: (\"is-active\" if issuable.issue?) do\n- #{sprite_icon(work_item_type_icon(:issue), css_class: 'gl-icon')} #{_(\"Issue\")}\n- %li.js-filter-issuable-type{ data: { track: { action: \"select_issue_type_incident\", label: \"select_issue_type_incident_dropdown_option\" } } }\n- = link_to new_project_issue_path(@project, { issuable_template: 'incident', issue: { issue_type: 'incident' } }), class: (\"is-active\" if issuable.incident?) do\n- #{sprite_icon(work_item_type_icon(:incident), css_class: 'gl-icon')} #{_(\"Incident\")}\n+ - if create_issue_type_allowed?(@project, :issue)\n+ %li.js-filter-issuable-type\n+ = link_to new_project_issue_path(@project), class: (\"is-active\" if issuable.issue?) do\n+ #{sprite_icon(work_item_type_icon(:issue), css_class: 'gl-icon')} #{_('Issue')}\n+ - if create_issue_type_allowed?(@project, :incident)\n+ %li.js-filter-issuable-type{ data: { track: { action: \"select_issue_type_incident\", label: \"select_issue_type_incident_dropdown_option\" } } }\n+ = link_to new_project_issue_path(@project, { issuable_template: 'incident', issue: { issue_type: 'incident' } }), class: (\"is-active\" if issuable.incident?) 
do\n+ #{sprite_icon(work_item_type_icon(:incident), css_class: 'gl-icon')} #{_('Incident')}\n \n #js-type-popover\n \n - if issuable.incident?\n %p.form-text.text-muted\n - incident_docs_url = help_page_path('operations/incident_management/incidents.md')\n- - incident_docs_start = '\u003ca href=\"%{url}\" target=\"_blank\" rel=\"noopener noreferrer\"\u003e'.html_safe % { url: incident_docs_url }\n- = _('A %{incident_docs_start}modified issue%{incident_docs_end} to guide the resolution of incidents.').html_safe % { incident_docs_start: incident_docs_start, incident_docs_end: '\u003c/a\u003e'.html_safe }\n+ - incident_docs_start = format('\u003ca href=\"%{url}\" target=\"_blank\" rel=\"noopener noreferrer\"\u003e', url: incident_docs_url)\n+ = format(_('A %{incident_docs_start}modified issue%{incident_docs_end} to guide the resolution of incidents.'), incident_docs_start: incident_docs_start, incident_docs_end: '\u003c/a\u003e').html_safe\n"},{"old_path":"app/workers/authorized_project_update/project_recalculate_worker.rb","new_path":"app/workers/authorized_project_update/project_recalculate_worker.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -7,6 +7,8 @@ class ProjectRecalculateWorker\n data_consistency :always\n include Gitlab::ExclusiveLeaseHelpers\n \n+ prepend WaitableWorker\n+\n feature_category :authentication_and_authorization\n urgency :high\n queue_namespace :authorized_project_update\n"},{"old_path":"app/workers/container_expiration_policies/cleanup_container_repository_worker.rb","new_path":"app/workers/container_expiration_policies/cleanup_container_repository_worker.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -159,7 +159,10 @@ def log_cache_ratio(result)\n \n return unless tags_count \u0026\u0026 cached_tags_count \u0026\u0026 tags_count != 0\n \n- log_extra_metadata_on_done(:cleanup_tags_service_cache_hit_ratio, 
cached_tags_count / tags_count.to_f)\n+ ratio = cached_tags_count / tags_count.to_f\n+ ratio_as_percentage = (ratio * 100).round(2)\n+\n+ log_extra_metadata_on_done(:cleanup_tags_service_cache_hit_ratio, ratio_as_percentage)\n end\n \n def log_truncate(result)\n"},{"old_path":"bin/sidekiq-cluster","new_path":"bin/sidekiq-cluster","a_mode":"100755","b_mode":"100755","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,13 +1,7 @@\n #!/usr/bin/env ruby\n # frozen_string_literal: true\n \n-require 'optparse'\n-require_relative '../lib/gitlab'\n-require_relative '../lib/gitlab/utils'\n-require_relative '../lib/gitlab/sidekiq_config/cli_methods'\n-require_relative '../lib/gitlab/sidekiq_config/worker_matcher'\n-require_relative '../lib/gitlab/sidekiq_cluster'\n-require_relative '../lib/gitlab/sidekiq_cluster/cli'\n+require_relative '../sidekiq_cluster/cli'\n \n Thread.abort_on_exception = true\n \n"},{"old_path":"config/application.rb","new_path":"config/application.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -16,6 +16,8 @@\n \n module Gitlab\n class Application \u003c Rails::Application\n+ config.load_defaults 6.1\n+\n require_dependency Rails.root.join('lib/gitlab')\n require_dependency Rails.root.join('lib/gitlab/utils')\n require_dependency Rails.root.join('lib/gitlab/action_cable/config')\n@@ -37,8 +39,6 @@ class Application \u003c Rails::Application\n require_dependency Rails.root.join('lib/gitlab/runtime')\n require_dependency Rails.root.join('lib/gitlab/patch/legacy_database_config')\n \n- config.autoloader = :zeitwerk\n-\n # To be removed in 15.0\n # This preload is needed to convert legacy `database.yml`\n # from `production: adapter: postgresql`\n@@ -190,11 +190,12 @@ class Application \u003c Rails::Application\n # regardless if schema_search_path is set, or not.\n config.active_record.dump_schemas = :all\n \n- # Use new connection handling so that we can use Rails 6.1+ 
multiple\n- # database support.\n- config.active_record.legacy_connection_handling = false\n-\n- config.action_mailer.delivery_job = \"ActionMailer::MailDeliveryJob\"\n+ # Override default Active Record settings\n+ # We cannot do this in an initializer because some models are already loaded by then\n+ config.active_record.cache_versioning = false\n+ config.active_record.collection_cache_versioning = false\n+ config.active_record.has_many_inversing = false\n+ config.active_record.belongs_to_required_by_default = false\n \n # Enable the asset pipeline\n config.assets.enabled = true\n@@ -380,6 +381,7 @@ class Application \u003c Rails::Application\n config.cache_store = :redis_cache_store, Gitlab::Redis::Cache.active_support_config\n \n config.active_job.queue_adapter = :sidekiq\n+ config.action_mailer.deliver_later_queue_name = :mailers\n \n # This is needed for gitlab-shell\n ENV['GITLAB_PATH_OUTSIDE_HOOK'] = ENV['PATH']\n"},{"old_path":"config/feature_flags/development/linear_group_ancestor_scopes.yml","new_path":"config/feature_flags/development/api_v3_commits_skip_diff_files.yml","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":"@@ -1,8 +1,8 @@\n ---\n-name: linear_group_ancestor_scopes\n-introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/70495\n-rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/341115\n-milestone: '14.4'\n+name: api_v3_commits_skip_diff_files\n+introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/67647\n+rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/344617\n+milestone: '14.5'\n type: development\n-group: group::access\n+group: group::integrations\n default_enabled: 
false\n"},{"old_path":"config/feature_flags/development/ci_new_artifact_file_reader.yml","new_path":"config/feature_flags/development/configure_sentry_in_application_settings.yml","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":"@@ -1,8 +1,8 @@\n ---\n-name: ci_new_artifact_file_reader\n-introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/46552\n-rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/273755\n-milestone: '13.6'\n+name: configure_sentry_in_application_settings\n+introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/73381\n+rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/344832\n+milestone: '14.5'\n type: development\n-group: group::pipeline authoring\n-default_enabled: true\n+group: group::pipeline execution\n+default_enabled: false\n"},{"old_path":"config/feature_flags/development/container_registry_expiration_policies_caching.yml","new_path":"config/feature_flags/development/container_registry_expiration_policies_caching.yml","a_mode":"100644","b_mode":"0","new_file":false,"renamed_file":false,"deleted_file":true,"diff":"@@ -1,8 +0,0 @@\n----\n-name: container_registry_expiration_policies_caching\n-introduced_by_url:\n-rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/340606\n-milestone: '14.3'\n-type: development\n-group: group::package\n-default_enabled: false\n"},{"old_path":"config/initializers/0_acts_as_taggable.rb","new_path":"config/initializers/1_acts_as_taggable.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":"@@ -9,3 +9,9 @@\n # validate that counter cache is disabled\n raise \"Counter cache is not disabled\" if\n ActsAsTaggableOn::Tagging.reflections[\"tag\"].options[:counter_cache]\n+\n+# Redirects retrieve_connection to use Ci::ApplicationRecord's connection\n+[::ActsAsTaggableOn::Tag, ::ActsAsTaggableOn::Tagging].each do |model|\n+ 
model.connection_specification_name = Ci::ApplicationRecord.connection_specification_name\n+ model.singleton_class.delegate :connection, :sticking, to: '::Ci::ApplicationRecord'\n+end\n"},{"old_path":"config/initializers/action_view.rb","new_path":"config/initializers/action_view.rb","a_mode":"100644","b_mode":"0","new_file":false,"renamed_file":false,"deleted_file":true,"diff":"@@ -1,7 +0,0 @@\n-# frozen_string_literal: true\n-\n-# This file was introduced during upgrading Rails from 5.2 to 6.0.\n-# This file can be removed when `config.load_defaults 6.0` is introduced.\n-\n-# Don't force requests from old versions of IE to be UTF-8 encoded.\n-Rails.application.config.action_view.default_enforce_utf8 = false\n"},{"old_path":"config/initializers/cookies_serializer.rb","new_path":"config/initializers/cookies_serializer.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -2,6 +2,5 @@\n \n # Be sure to restart your server when you modify this file.\n \n-Rails.application.config.action_dispatch.use_cookies_with_metadata = true\n Rails.application.config.action_dispatch.cookies_serializer =\n Gitlab::Utils.to_boolean(ENV['USE_UNSAFE_HYBRID_COOKIES']) ? :hybrid : :json\n"},{"old_path":"config/initializers/database_query_analyzers.rb","new_path":"config/initializers/database_query_analyzers.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,4 @@\n+# frozen_string_literal: true\n+\n+# Currently we register validator only for `dev` or `test` environment\n+Gitlab::Database::QueryAnalyzer.new.hook! 
if Gitlab.dev_or_test_env?\n"},{"old_path":"config/initializers/new_framework_defaults.rb","new_path":"config/initializers/new_framework_defaults.rb","a_mode":"100644","b_mode":"0","new_file":false,"renamed_file":false,"deleted_file":true,"diff":"@@ -1,24 +0,0 @@\n-# frozen_string_literal: true\n-\n-# Remove this `if` condition when upgraded to rails 5.0.\n-# The body must be kept.\n-# Be sure to restart your server when you modify this file.\n-#\n-# This file contains migration options to ease your Rails 5.0 upgrade.\n-#\n-# Once upgraded flip defaults one by one to migrate to the new default.\n-#\n-# Read the Guide for Upgrading Ruby on Rails for more info on each option.\n-\n-# Enable per-form CSRF tokens. Previous versions had false.\n-Rails.application.config.action_controller.per_form_csrf_tokens = false\n-\n-# Enable origin-checking CSRF mitigation. Previous versions had false.\n-Rails.application.config.action_controller.forgery_protection_origin_check = false\n-\n-# Make Ruby 2.4 preserve the timezone of the receiver when calling `to_time`.\n-# Previous versions had false.\n-ActiveSupport.to_time_preserves_timezone = false\n-\n-# Require `belongs_to` associations by default. Previous versions had false.\n-Rails.application.config.active_record.belongs_to_required_by_default = false\n"},{"old_path":"config/initializers_before_autoloader/000_override_framework_defaults.rb","new_path":"config/initializers_before_autoloader/000_override_framework_defaults.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,35 @@\n+# frozen_string_literal: true\n+\n+# This contains configuration from Rails upgrades to override the new defaults so that we\n+# keep existing behavior.\n+#\n+# For boolean values, the new default is the opposite of the value being set in this file.\n+# For other types, the new default is noted in the comments. 
These are also documented in\n+# https://guides.rubyonrails.org/configuring.html#results-of-config-load-defaults\n+#\n+# To switch a setting to the new default value, we just need to delete the specific line here.\n+\n+Rails.application.configure do\n+ # Rails 6.1\n+ config.action_dispatch.cookies_same_site_protection = nil # New default is :lax\n+ config.action_dispatch.ssl_default_redirect_status = nil # New default is 308\n+ ActiveSupport.utc_to_local_returns_utc_offset_times = false\n+ config.action_controller.urlsafe_csrf_tokens = false\n+ config.action_view.preload_links_header = false\n+\n+ # Rails 5.2\n+ config.action_dispatch.use_authenticated_cookie_encryption = false\n+ config.active_support.use_authenticated_message_encryption = false\n+ config.active_support.hash_digest_class = ::Digest::MD5 # New default is ::Digest::SHA1\n+ config.action_controller.default_protect_from_forgery = false\n+ config.action_view.form_with_generates_ids = false\n+\n+ # Rails 5.1\n+ config.assets.unknown_asset_fallback = true\n+\n+ # Rails 5.0\n+ config.action_controller.per_form_csrf_tokens = false\n+ config.action_controller.forgery_protection_origin_check = false\n+ ActiveSupport.to_time_preserves_timezone = false\n+ config.ssl_options = {} # New default is { hsts: { subdomains: true } }\n+end\n"},{"old_path":"config/plugins/graphql_known_operations_plugin.js","new_path":"config/plugins/graphql_known_operations_plugin.js","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,112 @@\n+/* eslint-disable no-underscore-dangle */\n+const yaml = require('js-yaml');\n+\n+const PLUGIN_NAME = 'GraphqlKnownOperationsPlugin';\n+const GRAPHQL_PATH_REGEX = /(query|mutation)\\.graphql$/;\n+const OPERATION_NAME_SOURCE_REGEX = /^\\s*module\\.exports.*oneQuery.*\"(\\w+)\"/gm;\n+\n+/**\n+ * Returns whether a given webpack module is a \"graphql\" module\n+ */\n+const isGraphqlModule = (module) =\u003e {\n+ return 
GRAPHQL_PATH_REGEX.test(module.resource);\n+};\n+\n+/**\n+ * Returns graphql operation names we can parse from the given module\n+ *\n+ * Since webpack gives us the source **after** the graphql-tag/loader runs,\n+ * we can look for specific lines we're guaranteed to have from the\n+ * graphql-tag/loader.\n+ */\n+const getOperationNames = (module) =\u003e {\n+ const originalSource = module.originalSource();\n+\n+ if (!originalSource) {\n+ return [];\n+ }\n+\n+ const matches = originalSource.source().toString().matchAll(OPERATION_NAME_SOURCE_REGEX);\n+\n+ return Array.from(matches).map((match) =\u003e match[1]);\n+};\n+\n+const createFileContents = (knownOperations) =\u003e {\n+ const sourceData = Array.from(knownOperations.values()).sort((a, b) =\u003e a.localeCompare(b));\n+\n+ return yaml.dump(sourceData);\n+};\n+\n+/**\n+ * Creates a webpack4 compatible \"RawSource\"\n+ *\n+ * Inspired from https://sourcegraph.com/github.com/FormidableLabs/webpack-stats-plugin@e050ff8c362d5ddd45c66ade724d4a397ace3e5c/-/blob/lib/stats-writer-plugin.js?L144\n+ */\n+const createWebpackRawSource = (source) =\u003e {\n+ const buff = Buffer.from(source, 'utf-8');\n+\n+ return {\n+ source() {\n+ return buff;\n+ },\n+ size() {\n+ return buff.length;\n+ },\n+ };\n+};\n+\n+const onSucceedModule = ({ module, knownOperations }) =\u003e {\n+ if (!isGraphqlModule(module)) {\n+ return;\n+ }\n+\n+ getOperationNames(module).forEach((x) =\u003e knownOperations.add(x));\n+};\n+\n+const onCompilerEmit = ({ compilation, knownOperations, filename }) =\u003e {\n+ const contents = createFileContents(knownOperations);\n+ const source = createWebpackRawSource(contents);\n+\n+ const asset = compilation.getAsset(filename);\n+ if (asset) {\n+ compilation.updateAsset(filename, source);\n+ } else {\n+ compilation.emitAsset(filename, source);\n+ }\n+};\n+\n+/**\n+ * Webpack plugin that outputs a file containing known graphql operations.\n+ *\n+ * A lot of the mechanics was inspired from [this example][1].\n+ 
*\n+ * [1]: https://sourcegraph.com/github.com/FormidableLabs/webpack-stats-plugin@e050ff8c362d5ddd45c66ade724d4a397ace3e5c/-/blob/lib/stats-writer-plugin.js?L136\n+ */\n+class GraphqlKnownOperationsPlugin {\n+ constructor({ filename }) {\n+ this._filename = filename;\n+ }\n+\n+ apply(compiler) {\n+ const knownOperations = new Set();\n+\n+ compiler.hooks.emit.tap(PLUGIN_NAME, (compilation) =\u003e {\n+ onCompilerEmit({\n+ compilation,\n+ knownOperations,\n+ filename: this._filename,\n+ });\n+ });\n+\n+ compiler.hooks.compilation.tap(PLUGIN_NAME, (compilation) =\u003e {\n+ compilation.hooks.succeedModule.tap(PLUGIN_NAME, (module) =\u003e {\n+ onSucceedModule({\n+ module,\n+ knownOperations,\n+ });\n+ });\n+ });\n+ }\n+}\n+\n+module.exports = GraphqlKnownOperationsPlugin;\n"},{"old_path":"config/webpack.config.js","new_path":"config/webpack.config.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -24,6 +24,7 @@ const IS_JH = require('./helpers/is_jh_env');\n const vendorDllHash = require('./helpers/vendor_dll_hash');\n \n const MonacoWebpackPlugin = require('./plugins/monaco_webpack');\n+const GraphqlKnownOperationsPlugin = require('./plugins/graphql_known_operations_plugin');\n \n const ROOT_PATH = path.resolve(__dirname, '..');\n const SUPPORTED_BROWSERS = fs.readFileSync(path.join(ROOT_PATH, '.browserslistrc'), 'utf-8');\n@@ -456,6 +457,8 @@ module.exports = {\n globalAPI: true,\n }),\n \n+ new GraphqlKnownOperationsPlugin({ filename: 'graphql_known_operations.yml' }),\n+\n // fix legacy jQuery plugins which depend on globals\n new webpack.ProvidePlugin({\n $: 'jquery',\n"},{"old_path":"db/migrate/20211021125908_add_sentry_settings_to_application_settings.rb","new_path":"db/migrate/20211021125908_add_sentry_settings_to_application_settings.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,12 @@\n+# frozen_string_literal: true\n+\n+class 
AddSentrySettingsToApplicationSettings \u003c Gitlab::Database::Migration[1.0]\n+ # rubocop:disable Migration/AddLimitToTextColumns\n+ def change\n+ add_column :application_settings, :sentry_enabled, :boolean, default: false, null: false\n+ add_column :application_settings, :sentry_dsn, :text\n+ add_column :application_settings, :sentry_clientside_dsn, :text\n+ add_column :application_settings, :sentry_environment, :text\n+ end\n+ # rubocop:enable Migration/AddLimitToTextColumns\n+end\n"},{"old_path":"db/migrate/20211021134458_add_limits_to_sentry_settings_on_application_settings.rb","new_path":"db/migrate/20211021134458_add_limits_to_sentry_settings_on_application_settings.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,17 @@\n+# frozen_string_literal: true\n+\n+class AddLimitsToSentrySettingsOnApplicationSettings \u003c Gitlab::Database::Migration[1.0]\n+ disable_ddl_transaction!\n+\n+ def up\n+ add_text_limit :application_settings, :sentry_dsn, 255\n+ add_text_limit :application_settings, :sentry_clientside_dsn, 255\n+ add_text_limit :application_settings, :sentry_environment, 255\n+ end\n+\n+ def down\n+ remove_text_limit :application_settings, :sentry_dsn\n+ remove_text_limit :application_settings, :sentry_clientside_dsn\n+ remove_text_limit :application_settings, :sentry_environment\n+ end\n+end\n"},{"old_path":"db/post_migrate/20211005194425_schedule_requirements_migration.rb","new_path":"db/post_migrate/20211005194425_schedule_requirements_migration.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,35 @@\n+# frozen_string_literal: true\n+\n+class ScheduleRequirementsMigration \u003c Gitlab::Database::Migration[1.0]\n+ DOWNTIME = false\n+\n+ # 2021-10-05 requirements count: ~12500\n+ #\n+ # Using 30 as batch size and 120 seconds default interval will produce:\n+ # ~420 jobs - taking ~14 hours to perform\n+ BATCH_SIZE = 30\n+\n+ 
MIGRATION = 'MigrateRequirementsToWorkItems'\n+\n+ disable_ddl_transaction!\n+\n+ class Requirement \u003c ActiveRecord::Base\n+ include EachBatch\n+\n+ self.table_name = 'requirements'\n+ end\n+\n+ def up\n+ queue_background_migration_jobs_by_range_at_intervals(\n+ Requirement.where(issue_id: nil),\n+ MIGRATION,\n+ 2.minutes,\n+ batch_size: BATCH_SIZE,\n+ track_jobs: true\n+ )\n+ end\n+\n+ def down\n+ # NO OP\n+ end\n+end\n"},{"old_path":"db/schema_migrations/20211005194425","new_path":"db/schema_migrations/20211005194425","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1 @@\n+6647e94d315c76629f9726e26bafd124fb2fed361568d65315e7c7557f8d9ecf\n\\ No newline at end of file\n"},{"old_path":"db/schema_migrations/20211021125908","new_path":"db/schema_migrations/20211021125908","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1 @@\n+d6fbe3efc3e45b750d82e277e30b7b0048b960d9f9f5b4f7c6a7a1ed869e76b5\n\\ No newline at end of file\n"},{"old_path":"db/schema_migrations/20211021134458","new_path":"db/schema_migrations/20211021134458","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1 @@\n+1baa8db0d42a8d99e48b61930f5c42d1af5f86555488419b6551e1dbf417d3ad\n\\ No newline at end of file\n"},{"old_path":"db/structure.sql","new_path":"db/structure.sql","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -10457,6 +10457,10 @@ CREATE TABLE application_settings (\n encrypted_content_validation_api_key bytea,\n encrypted_content_validation_api_key_iv bytea,\n content_validation_endpoint_enabled boolean DEFAULT false NOT NULL,\n+ sentry_enabled boolean DEFAULT false NOT NULL,\n+ sentry_dsn text,\n+ sentry_clientside_dsn text,\n+ sentry_environment text,\n CONSTRAINT app_settings_container_reg_cleanup_tags_max_list_size_positive CHECK 
((container_registry_cleanup_tags_service_max_list_size \u003e= 0)),\n CONSTRAINT app_settings_dep_proxy_ttl_policies_worker_capacity_positive CHECK ((dependency_proxy_ttl_group_policy_worker_capacity \u003e= 0)),\n CONSTRAINT app_settings_ext_pipeline_validation_service_url_text_limit CHECK ((char_length(external_pipeline_validation_service_url) \u003c= 255)),\n@@ -10465,9 +10469,12 @@ CREATE TABLE application_settings (\n CONSTRAINT app_settings_yaml_max_size_positive CHECK ((max_yaml_size_bytes \u003e 0)),\n CONSTRAINT check_17d9558205 CHECK ((char_length((kroki_url)::text) \u003c= 1024)),\n CONSTRAINT check_2dba05b802 CHECK ((char_length(gitpod_url) \u003c= 255)),\n+ CONSTRAINT check_3def0f1829 CHECK ((char_length(sentry_clientside_dsn) \u003c= 255)),\n+ CONSTRAINT check_4f8b811780 CHECK ((char_length(sentry_dsn) \u003c= 255)),\n CONSTRAINT check_51700b31b5 CHECK ((char_length(default_branch_name) \u003c= 255)),\n CONSTRAINT check_57123c9593 CHECK ((char_length(help_page_documentation_base_url) \u003c= 255)),\n CONSTRAINT check_5a84c3ffdc CHECK ((char_length(content_validation_endpoint_url) \u003c= 255)),\n+ CONSTRAINT check_5bcba483c4 CHECK ((char_length(sentry_environment) \u003c= 255)),\n CONSTRAINT check_718b4458ae CHECK ((char_length(personal_access_token_prefix) \u003c= 20)),\n CONSTRAINT check_7227fad848 CHECK ((char_length(rate_limiting_response_text) \u003c= 255)),\n CONSTRAINT check_85a39b68ff CHECK ((char_length(encrypted_ci_jwt_signing_key_iv) \u003c= 255)),\n"},{"old_path":"doc/api/graphql/reference/index.md","new_path":"doc/api/graphql/reference/index.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -2820,6 +2820,28 @@ Input type: `IssueSetConfidentialInput`\n | \u003ca id=\"mutationissuesetconfidentialerrors\"\u003e\u003c/a\u003e`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. 
|\n | \u003ca id=\"mutationissuesetconfidentialissue\"\u003e\u003c/a\u003e`issue` | [`Issue`](#issue) | Issue after mutation. |\n \n+### `Mutation.issueSetCrmContacts`\n+\n+Input type: `IssueSetCrmContactsInput`\n+\n+#### Arguments\n+\n+| Name | Type | Description |\n+| ---- | ---- | ----------- |\n+| \u003ca id=\"mutationissuesetcrmcontactsclientmutationid\"\u003e\u003c/a\u003e`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |\n+| \u003ca id=\"mutationissuesetcrmcontactscrmcontactids\"\u003e\u003c/a\u003e`crmContactIds` | [`[CustomerRelationsContactID!]!`](#customerrelationscontactid) | Customer relations contact IDs to set. Replaces existing contacts by default. |\n+| \u003ca id=\"mutationissuesetcrmcontactsiid\"\u003e\u003c/a\u003e`iid` | [`String!`](#string) | IID of the issue to mutate. |\n+| \u003ca id=\"mutationissuesetcrmcontactsoperationmode\"\u003e\u003c/a\u003e`operationMode` | [`MutationOperationMode`](#mutationoperationmode) | Changes the operation mode. Defaults to REPLACE. |\n+| \u003ca id=\"mutationissuesetcrmcontactsprojectpath\"\u003e\u003c/a\u003e`projectPath` | [`ID!`](#id) | Project the issue to mutate is in. |\n+\n+#### Fields\n+\n+| Name | Type | Description |\n+| ---- | ---- | ----------- |\n+| \u003ca id=\"mutationissuesetcrmcontactsclientmutationid\"\u003e\u003c/a\u003e`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |\n+| \u003ca id=\"mutationissuesetcrmcontactserrors\"\u003e\u003c/a\u003e`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |\n+| \u003ca id=\"mutationissuesetcrmcontactsissue\"\u003e\u003c/a\u003e`issue` | [`Issue`](#issue) | Issue after mutation. 
|\n+\n ### `Mutation.issueSetDueDate`\n \n Input type: `IssueSetDueDateInput`\n"},{"old_path":"doc/api/packages/maven.md","new_path":"doc/api/packages/maven.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -36,13 +36,13 @@ GET packages/maven/*path/:file_name\n | `file_name` | string | yes | The name of the Maven package file. |\n \n ```shell\n-curl --header \"Private-Token: \u003cpersonal_access_token\u003e\" \"https://gitlab.example.com/api/v4/packages/maven/foo/bar/baz/mypkg-1.0-SNAPSHOT.jar\"\n+curl --header \"Private-Token: \u003cpersonal_access_token\u003e\" \"https://gitlab.example.com/api/v4/packages/maven/foo/bar/mypkg/1.0-SNAPSHOT/mypkg-1.0-SNAPSHOT.jar\"\n ```\n \n To write the output to file:\n \n ```shell\n-curl --header \"Private-Token: \u003cpersonal_access_token\u003e\" \"https://gitlab.example.com/api/v4/packages/maven/foo/bar/baz/mypkg-1.0-SNAPSHOT.jar\" \u003e\u003e mypkg-1.0-SNAPSHOT.jar\n+curl --header \"Private-Token: \u003cpersonal_access_token\u003e\" \"https://gitlab.example.com/api/v4/packages/maven/foo/bar/mypkg/1.0-SNAPSHOT/mypkg-1.0-SNAPSHOT.jar\" \u003e\u003e mypkg-1.0-SNAPSHOT.jar\n ```\n \n This writes the downloaded file to `mypkg-1.0-SNAPSHOT.jar` in the current directory.\n@@ -63,13 +63,13 @@ GET groups/:id/-/packages/maven/*path/:file_name\n | `file_name` | string | yes | The name of the Maven package file. 
|\n \n ```shell\n-curl --header \"Private-Token: \u003cpersonal_access_token\u003e\" \"https://gitlab.example.com/api/v4/groups/1/-/packages/maven/foo/bar/baz/mypkg-1.0-SNAPSHOT.jar\"\n+curl --header \"Private-Token: \u003cpersonal_access_token\u003e\" \"https://gitlab.example.com/api/v4/groups/1/-/packages/maven/foo/bar/mypkg/1.0-SNAPSHOT/mypkg-1.0-SNAPSHOT.jar\"\n ```\n \n To write the output to file:\n \n ```shell\n-curl --header \"Private-Token: \u003cpersonal_access_token\u003e\" \"https://gitlab.example.com/api/v4/groups/1/-/packages/maven/foo/bar/baz/mypkg-1.0-SNAPSHOT.jar\" \u003e\u003e mypkg-1.0-SNAPSHOT.jar\n+curl --header \"Private-Token: \u003cpersonal_access_token\u003e\" \"https://gitlab.example.com/api/v4/groups/1/-/packages/maven/foo/bar/mypkg/1.0-SNAPSHOT/mypkg-1.0-SNAPSHOT.jar\" \u003e\u003e mypkg-1.0-SNAPSHOT.jar\n ```\n \n This writes the downloaded file to `mypkg-1.0-SNAPSHOT.jar` in the current directory.\n@@ -90,13 +90,13 @@ GET projects/:id/packages/maven/*path/:file_name\n | `file_name` | string | yes | The name of the Maven package file. 
|\n \n ```shell\n-curl --header \"Private-Token: \u003cpersonal_access_token\u003e\" \"https://gitlab.example.com/api/v4/projects/1/packages/maven/foo/bar/baz/mypkg-1.0-SNAPSHOT.jar\"\n+curl --header \"Private-Token: \u003cpersonal_access_token\u003e\" \"https://gitlab.example.com/api/v4/projects/1/packages/maven/foo/bar/mypkg/1.0-SNAPSHOT/mypkg-1.0-SNAPSHOT.jar\"\n ```\n \n To write the output to file:\n \n ```shell\n-curl --header \"Private-Token: \u003cpersonal_access_token\u003e\" \"https://gitlab.example.com/api/v4/projects/1/packages/maven/foo/bar/baz/mypkg-1.0-SNAPSHOT.jar\" \u003e\u003e mypkg-1.0-SNAPSHOT.jar\n+curl --header \"Private-Token: \u003cpersonal_access_token\u003e\" \"https://gitlab.example.com/api/v4/projects/1/packages/maven/foo/bar/mypkg/1.0-SNAPSHOT/mypkg-1.0-SNAPSHOT.jar\" \u003e\u003e mypkg-1.0-SNAPSHOT.jar\n ```\n \n This writes the downloaded file to `mypkg-1.0-SNAPSHOT.jar` in the current directory.\n@@ -120,5 +120,5 @@ PUT projects/:id/packages/maven/*path/:file_name\n curl --request PUT \\\n --upload-file path/to/mypkg-1.0-SNAPSHOT.pom \\\n --header \"Private-Token: \u003cpersonal_access_token\u003e\" \\\n- \"https://gitlab.example.com/api/v4/projects/1/packages/maven/foo/bar/baz/mypkg-1.0-SNAPSHOT.pom\"\n+ \"https://gitlab.example.com/api/v4/projects/1/packages/maven/foo/bar/mypkg/1.0-SNAPSHOT/mypkg-1.0-SNAPSHOT.pom\"\n ```\n"},{"old_path":"doc/ci/jobs/index.md","new_path":"doc/ci/jobs/index.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -82,6 +82,20 @@ For example:\n \n \n \n+## Unavailable names for jobs\n+\n+You can't use these keywords as job names:\n+\n+- `image`\n+- `services`\n+- `stages`\n+- `types`\n+- `before_script`\n+- `after_script`\n+- `variables`\n+- `cache`\n+- `include`\n+\n ## Group jobs in a pipeline\n \n If you have many similar jobs, your [pipeline graph](../pipelines/index.md#visualize-pipelines) becomes long and 
hard\n"},{"old_path":"doc/ci/quick_start/index.md","new_path":"doc/ci/quick_start/index.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -141,7 +141,7 @@ The pipeline starts when the commit is committed.\n - You can also use [CI/CD configuration visualization](../pipeline_editor/index.md#visualize-ci-configuration) to\n view a graphical representation of your `.gitlab-ci.yml` file.\n - Each job contains scripts and stages:\n- - The [`default`](../yaml/index.md#custom-default-keyword-values) keyword is for\n+ - The [`default`](../yaml/index.md#default) keyword is for\n custom defaults, for example with [`before_script`](../yaml/index.md#before_script)\n and [`after_script`](../yaml/index.md#after_script).\n - [`stage`](../yaml/index.md#stage) describes the sequential execution of jobs.\n"},{"old_path":"doc/ci/runners/index.md","new_path":"doc/ci/runners/index.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -10,7 +10,7 @@ type: reference\n If you are using self-managed GitLab or you want to use your own runners on GitLab.com, you can\n [install and configure your own runners](https://docs.gitlab.com/runner/install/).\n \n-If you are using GitLab SaaS (GitLab.com), your CI jobs automatically run on runners in the GitLab Build Cloud.\n+If you are using GitLab SaaS (GitLab.com), your CI jobs automatically run on runners in the GitLab Runner Cloud.\n No configuration is required. 
Your jobs can run on:\n \n - [Linux runners](build_cloud/linux_build_cloud.md).\n"},{"old_path":"doc/ci/runners/runner_cloud/linux_runner_cloud.md","new_path":"doc/ci/runners/runner_cloud/linux_runner_cloud.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -35,7 +35,7 @@ These runners share a [distributed cache](https://docs.gitlab.com/runner/configu\n \n ## Pre-clone script\n \n-Build Cloud runners for Linux provide a way to run commands in a CI\n+Cloud runners for Linux provide a way to run commands in a CI\n job before the runner attempts to run `git init` and `git fetch` to\n download a GitLab repository. The\n [`pre_clone_script`](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section)\n"},{"old_path":"doc/ci/runners/runner_cloud/macos/environment.md","new_path":"doc/ci/runners/runner_cloud/macos/environment.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -4,9 +4,9 @@ group: Runner\n info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments\n ---\n \n-# VM instances and images for Build Cloud for macOS **(FREE)**\n+# VM instances and images for Runner Cloud for macOS **(FREE)**\n \n-When you use the Build Cloud for macOS:\n+When you use the Runner Cloud for macOS:\n \n - Each of your jobs runs in a newly provisioned VM, which is dedicated to the specific job.\n - The VM is active only for the duration of the job and immediately deleted.\n"},{"old_path":"doc/ci/runners/runner_cloud/macos_runner_cloud.md","new_path":"doc/ci/runners/runner_cloud/macos_runner_cloud.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -11,12 +11,12 @@ Use these runners to build, test, and deploy apps for the Apple ecosystem (macOS\n of all the 
capabilities of the GitLab single DevOps platform and not have to manage or operate a\n build environment.\n \n-Build Cloud runners for macOS are in [Beta](https://about.gitlab.com/handbook/product/gitlab-the-product/#beta)\n+Cloud runners for macOS are in [Beta](https://about.gitlab.com/handbook/product/gitlab-the-product/#beta)\n and shouldn't be relied upon for mission-critical production jobs.\n \n ## Quickstart\n \n-To start using Build Cloud for macOS Beta, you must submit an access request [issue](https://gitlab.com/gitlab-com/macos-buildcloud-runners-beta/-/issues/new?issuable_template=beta_access_request). After your\n+To start using Runner Cloud for macOS Beta, you must submit an access request [issue](https://gitlab.com/gitlab-com/macos-buildcloud-runners-beta/-/issues/new?issuable_template=beta_access_request). After your\n access has been granted and your build environment configured, you must configure your\n `.gitlab-ci.yml` pipeline file:\n \n"},{"old_path":"doc/ci/yaml/includes.md","new_path":"doc/ci/yaml/includes.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -69,7 +69,7 @@ You can include an array of configuration files:\n \n ## Use `default` configuration from an included configuration file\n \n-You can define a [`default`](index.md#custom-default-keyword-values) section in a\n+You can define a [`default`](index.md#default) section in a\n configuration file. 
When you use a `default` section with the `include` keyword, the defaults apply to\n all jobs in the pipeline.\n \n"},{"old_path":"doc/ci/yaml/index.md","new_path":"doc/ci/yaml/index.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":""},{"old_path":"doc/ci/yaml/script.md","new_path":"doc/ci/yaml/script.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -62,7 +62,7 @@ job:\n ## Set a default `before_script` or `after_script` for all jobs\n \n You can use [`before_script`](index.md#before_script) and [`after_script`](index.md#after_script)\n-with [`default`](index.md#custom-default-keyword-values):\n+with [`default`](index.md#default):\n \n - Use `before_script` with `default` to define a default array of commands that\n should run before the `script` commands in all jobs.\n"},{"old_path":"doc/development/avoiding_downtime_in_migrations.md","new_path":"doc/development/avoiding_downtime_in_migrations.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -377,7 +377,181 @@ ensures that no downtime is needed.\n \n This operation does not require downtime.\n \n-## Data Migrations\n+## Migrating `integer` primary keys to `bigint`\n+\n+To [prevent the overflow risk](https://gitlab.com/groups/gitlab-org/-/epics/4785) for some tables\n+with `integer` primary key (PK), we have to migrate their PK to `bigint`. The process to do this\n+without downtime and causing too much load on the database is described below.\n+\n+### Initialize the conversion and start migrating existing data (release N)\n+\n+To start the process, add a regular migration to create the new `bigint` columns. Use the provided\n+`initialize_conversion_of_integer_to_bigint` helper. 
The helper also creates a database trigger\n+to keep both columns in sync for any new records ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/migrate/20210608072312_initialize_conversion_of_ci_stages_to_bigint.rb)):\n+\n+```ruby\n+class InitializeConversionOfCiStagesToBigint \u003c ActiveRecord::Migration[6.1]\n+ include Gitlab::Database::MigrationHelpers\n+\n+ TABLE = :ci_stages\n+ COLUMNS = %i(id)\n+\n+ def up\n+ initialize_conversion_of_integer_to_bigint(TABLE, COLUMNS)\n+ end\n+\n+ def down\n+ revert_initialize_conversion_of_integer_to_bigint(TABLE, COLUMNS)\n+ end\n+end\n+```\n+\n+Ignore the new `bigint` columns:\n+\n+```ruby\n+module Ci\n+ class Stage \u003c Ci::ApplicationRecord\n+ include IgnorableColumns\n+ ignore_column :id_convert_to_bigint, remove_with: '14.2', remove_after: '2021-08-22'\n+ end\n+end\n+```\n+\n+To migrate existing data, we introduced a new type of migration: _batched background migrations_.\n+Unlike the classic background migrations, built on top of Sidekiq, batched background migrations\n+don't have to enqueue and schedule all the background jobs at the beginning.\n+They also have other advantages, like automatic tuning of the batch size, better progress visibility,\n+and collecting metrics. 
To start the process, use the provided `backfill_conversion_of_integer_to_bigint`\n+helper ([example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/migrate/20210608072346_backfill_ci_stages_for_bigint_conversion.rb)):\n+\n+```ruby\n+class BackfillCiStagesForBigintConversion \u003c ActiveRecord::Migration[6.1]\n+ include Gitlab::Database::MigrationHelpers\n+\n+ TABLE = :ci_stages\n+ COLUMNS = %i(id)\n+\n+ def up\n+ backfill_conversion_of_integer_to_bigint(TABLE, COLUMNS)\n+ end\n+\n+ def down\n+ revert_backfill_conversion_of_integer_to_bigint(TABLE, COLUMNS)\n+ end\n+end\n+```\n+\n+### Monitor the background migration\n+\n+Check how the migration is performing while it's running. Multiple ways to do this are described below.\n+\n+#### High-level status of batched background migrations\n+\n+See how to [check the status of batched background migrations](../update/index.md#checking-for-background-migrations-before-upgrading).\n+\n+#### Query the database\n+\n+We can query the related database tables directly. 
Requires access to a read-only replica.\n+Example queries:\n+\n+```sql\n+-- Get details for batched background migration for given table\n+SELECT * FROM batched_background_migrations WHERE table_name = 'namespaces'\\gx\n+\n+-- Get count of batched background migration jobs by status for given table\n+SELECT\n+ batched_background_migrations.id, batched_background_migration_jobs.status, COUNT(*)\n+FROM\n+ batched_background_migrations\n+ JOIN batched_background_migration_jobs ON batched_background_migrations.id = batched_background_migration_jobs.batched_background_migration_id\n+WHERE\n+ table_name = 'namespaces'\n+GROUP BY\n+ batched_background_migrations.id, batched_background_migration_jobs.status;\n+\n+-- Batched background migration progress for given table (based on estimated total number of tuples)\n+SELECT\n+ m.table_name,\n+ LEAST(100 * sum(j.batch_size) / pg_class.reltuples, 100) AS percentage_complete\n+FROM\n+ batched_background_migrations m\n+ JOIN batched_background_migration_jobs j ON j.batched_background_migration_id = m.id\n+ JOIN pg_class ON pg_class.relname = m.table_name\n+WHERE\n+ j.status = 3 AND m.table_name = 'namespaces'\n+GROUP BY m.id, pg_class.reltuples;\n+```\n+\n+#### Sidekiq logs\n+\n+We can also use the Sidekiq logs to monitor the worker that executes the batched background\n+migrations:\n+\n+1. Sign in to [Kibana](https://log.gprd.gitlab.net) with a `@gitlab.com` email address.\n+1. Change the index pattern to `pubsub-sidekiq-inf-gprd*`.\n+1. Add filter for `json.queue: cronjob:database_batched_background_migration`.\n+\n+#### PostgreSQL slow queries log\n+\n+The slow queries log keeps track of slow queries that took more than 1 second to execute. To see them\n+for batched background migrations:\n+\n+1. Sign in to [Kibana](https://log.gprd.gitlab.net) with a `@gitlab.com` email address.\n+1. Change the index pattern to `pubsub-postgres-inf-gprd*`.\n+1. Add filter for `json.endpoint_id.keyword: Database::BatchedBackgroundMigrationWorker`.\n+1. 
Optional. To see only updates, add a filter for `json.command_tag.keyword: UPDATE`.\n+1. Optional. To see only failed statements, add a filter for `json.error_severity.keyword: ERROR`.\n+1. Optional. Add a filter by table name.\n+\n+#### Grafana dashboards\n+\n+To monitor the health of the database, use these additional metrics:\n+\n+- [PostgreSQL Tuple Statistics](https://dashboards.gitlab.net/d/000000167/postgresql-tuple-statistics?orgId=1\u0026refresh=1m): if you see a high rate of updates for the tables being actively converted, or an increasing percentage of dead tuples for this table, it might mean that autovacuum cannot keep up.\n+- [PostgreSQL Overview](https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1): if you see high system usage or transactions per second (TPS) on the primary database server, it might mean that the migration is causing problems.\n+\n+### Prometheus metrics\n+\n+A number of [metrics](https://gitlab.com/gitlab-org/gitlab/-/blob/294a92484ce4611f660439aa48eee4dfec2230b5/lib/gitlab/database/background_migration/batched_migration_wrapper.rb#L90-128)\n+for each batched background migration are published to Prometheus. These metrics can be searched for and\n+visualized in Thanos ([see an example](https://thanos-query.ops.gitlab.net/graph?g0.expr=sum%20(rate(batched_migration_job_updated_tuples_total%7Benv%3D%22gprd%22%7D%5B5m%5D))%20by%20(migration_id)%20\u0026g0.tab=0\u0026g0.stacked=0\u0026g0.range_input=3d\u0026g0.max_source_resolution=0s\u0026g0.deduplicate=1\u0026g0.partial_response=0\u0026g0.store_matches=%5B%5D\u0026g0.end_input=2021-06-13%2012%3A18%3A24\u0026g0.moment_input=2021-06-13%2012%3A18%3A24)).\n+\n+### Swap the columns (release N + 1)\n+\n+After the background migration is completed and the new `bigint` columns are populated for all records, we can\n+swap the columns. Swapping is done with a post-deployment migration. 
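At the SQL level, the swap itself amounts to renaming the old and new columns inside one transaction while holding a lock on the table. The following is an illustrative sketch only, not the actual helper code: the real post-deployment migration uses GitLab's migration helpers and performs additional steps, and `id_convert_to_bigint` is assumed here as the temporary column name created by the conversion helpers.

```sql
-- Illustrative only: the actual migration uses GitLab's migration helpers.
BEGIN;
LOCK TABLE ci_stages IN ACCESS EXCLUSIVE MODE;
-- Renaming a column is a catalog-only change in PostgreSQL, so the swap is fast.
ALTER TABLE ci_stages RENAME COLUMN id TO id_tmp;
ALTER TABLE ci_stages RENAME COLUMN id_convert_to_bigint TO id;
ALTER TABLE ci_stages RENAME COLUMN id_tmp TO id_convert_to_bigint;
COMMIT;
```

Because the renames only touch the system catalogs, the lock is held very briefly; the expensive work (backfilling and index builds) happens before this transaction.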
The exact process depends on the\n+table being converted, but in general it's done in the following steps:\n+\n+1. Using the provided `ensure_batched_background_migration_is_finished` helper, make sure the batched\n+migration has finished ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L13-18)).\n+If the migration has not completed, the subsequent steps fail anyway. By checking in advance we\n+aim to have a more helpful error message.\n+1. Create indexes using the `bigint` columns that match the existing indexes using the `integer`\n+column ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L28-34)).\n+1. Create foreign keys (FK) using the `bigint` columns that match the existing FKs using the\n+`integer` column. Do this both for FKs referencing other tables, and FKs that reference the table\n+that is being migrated ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L36-43)).\n+1. Inside a transaction, swap the columns:\n+ 1. Lock the tables involved. To reduce the chance of hitting a deadlock, we recommend doing this in parent-to-child order ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L47)).\n+ 1. Rename the columns to swap names ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L49-54)).\n+ 1. 
Reset the trigger function ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L56-57)).\n+ 1. Swap the defaults ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L59-62)).\n+ 1. Swap the PK constraint (if any) ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L64-68)).\n+ 1. Remove old indexes and rename new ones ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L70-72)).\n+ 1. Remove old FKs (if still present) and rename new ones ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L74)).\n+\n+See the example [merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/66088) and [migration](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb).\n+\n+### Remove the trigger and old `integer` columns (release N + 2)\n+\n+Using a post-deployment migration and the provided `cleanup_conversion_of_integer_to_bigint` helper,\n+drop the database trigger and the old `integer` columns ([see an example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/69714)).\n+\n+### Remove ignore rules (release N + 3)\n+\n+In the next release after the columns are dropped, remove the ignore rules, as they are no longer\n+needed ([see an example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/71161)).\n+\n+## Data migrations\n \n Data migrations can be tricky. 
The usual approach to migrate data is to take a 3\n step approach:\n"},{"old_path":"doc/development/cicd/templates.md","new_path":"doc/development/cicd/templates.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -60,7 +60,7 @@ don't have any other `.gitlab-ci.yml` files.\n When authoring pipeline templates:\n \n - Place any [global keywords](../../ci/yaml/index.md#global-keywords) like `image`\n- or `before_script` in a [`default`](../../ci/yaml/index.md#custom-default-keyword-values)\n+ or `before_script` in a [`default`](../../ci/yaml/index.md#default)\n section at the top of the template.\n - Note clearly in the [code comments](#explain-the-template-with-comments) if the\n template is designed to be used with the `includes` keyword in an existing\n@@ -77,7 +77,7 @@ other pipeline configuration.\n \n When authoring job templates:\n \n-- Do not use [global](../../ci/yaml/index.md#global-keywords) or [`default`](../../ci/yaml/index.md#custom-default-keyword-values)\n+- Do not use [global](../../ci/yaml/index.md#global-keywords) or [`default`](../../ci/yaml/index.md#default)\n keywords. When a root `.gitlab-ci.yml` includes a template, global or default keywords\n might be overridden and cause unexpected behavior. 
If a job template requires a\n specific stage, explain in the code comments that users must manually add the stage\n"},{"old_path":"doc/development/migration_style_guide.md","new_path":"doc/development/migration_style_guide.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -135,6 +135,7 @@ various database operations, such as:\n - [dropping and renaming columns](avoiding_downtime_in_migrations.md#dropping-columns)\n - [changing column constraints and types](avoiding_downtime_in_migrations.md#changing-column-constraints)\n - [adding and dropping indexes, tables, and foreign keys](avoiding_downtime_in_migrations.md#adding-indexes)\n+- [migrating `integer` primary keys to `bigint`](avoiding_downtime_in_migrations.md#adding-indexes)\n \n and explains how to perform them without requiring downtime.\n \n"},{"old_path":"doc/development/pipelines.md","new_path":"doc/development/pipelines.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -166,6 +166,13 @@ Our current RSpec tests parallelization setup is as follows:\n \n After that, the next pipeline uses the up-to-date `knapsack/report-master.json` file.\n \n+### Flaky tests\n+\n+Tests that are [known to be flaky](testing_guide/flaky_tests.md#automatic-retries-and-flaky-tests-detection) are:\n+\n+- skipped if the `$SKIP_FLAKY_TESTS_AUTOMATICALLY` variable is set to `true` (`false` by default)\n+- run if `$SKIP_FLAKY_TESTS_AUTOMATICALLY` variable is not set to `true` or if the `~\"pipeline:run-flaky-tests\"` label is set on the MR\n+\n ### Monitoring\n \n The GitLab test suite is [monitored](performance.md#rspec-profiling) for the `main` branch, and any branch\n"},{"old_path":"doc/install/requirements.md","new_path":"doc/install/requirements.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -302,7 +302,7 @@ The GitLab Runner server requirements 
depend on:\n \n Since the nature of the jobs varies for each use case, you need to experiment by adjusting the job concurrency to get the optimum setting.\n \n-For reference, the GitLab.com Build Cloud [auto-scaling runner for Linux](../ci/runners/build_cloud/linux_build_cloud.md) is configured so that a **single job** runs in a **single instance** with:\n+For reference, the GitLab.com Runner Cloud [auto-scaling runner for Linux](../ci/runners/build_cloud/linux_build_cloud.md) is configured so that a **single job** runs in a **single instance** with:\n \n - 1 vCPU.\n - 3.75 GB of RAM.\n"},{"old_path":"doc/integration/jira/index.md","new_path":"doc/integration/jira/index.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -10,7 +10,7 @@ If your organization uses [Jira](https://www.atlassian.com/software/jira) issues\n you can [migrate your issues from Jira](../../user/project/import/jira.md) and work\n exclusively in GitLab. However, if you'd like to continue to use Jira, you can\n integrate it with GitLab. GitLab offers two types of Jira integrations, and you\n-can use one or both depending on the capabilities you need. It is recommended that you enable both.\n+can use one or both depending on the capabilities you need. We recommend you enable both.\n \n ## Compare integrations\n \n@@ -41,7 +41,7 @@ or the Jira DVCS (distributed version control system) connector,\n \n ### Direct feature comparison\n \n-| Capability | Jira integration | Jira Development panel integration |\n+| Capability | Jira integration | Jira development panel integration |\n |-|-|-|\n | Mention a Jira issue ID in a GitLab commit or merge request, and a link to the Jira issue is created. | Yes. | No. |\n | Mention a Jira issue ID in GitLab and the Jira issue shows the GitLab issue or merge request. | Yes. A Jira comment with the GitLab issue or MR title links to GitLab. 
The first mention is also added to the Jira issue under **Web links**. | Yes, in the issue's [development panel](https://support.atlassian.com/jira-software-cloud/docs/view-development-information-for-an-issue/). |\n@@ -55,11 +55,11 @@ or the Jira DVCS (distributed version control system) connector,\n \n ## Authentication in Jira\n \n-The process for configuring Jira depends on whether you host Jira on your own server or on\n+The authentication method in Jira depends on whether you host Jira on your own server or on\n [Atlassian cloud](https://www.atlassian.com/cloud):\n \n - **Jira Server** supports basic authentication. When connecting, a **username and password** are\n- required. Connecting to Jira Server via CAS is not possible. For more information, read\n+ required. Connecting to Jira Server using the Central Authentication Service (CAS) is not possible. For more information, read\n how to [set up a user in Jira Server](jira_server_configuration.md).\n - **Jira on Atlassian cloud** supports authentication through an API token. When connecting to Jira on\n Atlassian cloud, an email and API token are required. For more information, read\n@@ -72,11 +72,16 @@ actions in GitLab issues and merge requests linked to a Jira issue leak informat\n about the private project to non-administrator Jira users. 
If your installation uses Jira Cloud,\n you can use the [GitLab.com for Jira Cloud app](connect-app.md) to avoid this risk.\n \n+## Third-party Jira integrations\n+\n+Developers have built several third-party Jira integrations for GitLab that are\n+listed on the [Atlassian Marketplace](https://marketplace.atlassian.com/search?product=jira\u0026query=gitlab).\n+\n ## Troubleshooting\n \n If these features do not work as expected, it is likely due to a problem with the way the integration settings were configured.\n \n-### GitLab is unable to comment on a Jira issue\n+### GitLab cannot comment on a Jira issue\n \n If GitLab cannot comment on Jira issues, make sure the Jira user you\n set up for the integration has permission to:\n@@ -86,14 +91,16 @@ set up for the integration has permission to:\n \n Jira issue references and update comments do not work if the GitLab issue tracker is disabled.\n \n-### GitLab is unable to close a Jira issue\n+### GitLab cannot close a Jira issue\n+\n+If GitLab cannot close a Jira issue:\n \n-Make sure the `Transition ID` you set in the Jira settings matches the one\n-your project needs to close an issue.\n+- Make sure the `Transition ID` you set in the Jira settings matches the one\n+ your project needs to close an issue.\n \n-Make sure that the Jira issue is not already marked as resolved. 
That is,\n-the Jira issue resolution field is not set, and the issue is not struck through in\n-Jira lists.\n+- Make sure the Jira issue is not already marked as resolved:\n+ - Check the Jira issue resolution field is not set.\n+ - Check the issue is not struck through in Jira lists.\n \n ### CAPTCHA\n \n@@ -104,8 +111,3 @@ authenticate with the Jira site.\n \n To fix this error, sign in to your Jira instance\n and complete the CAPTCHA.\n-\n-## Third-party Jira integrations\n-\n-Developers have built several third-party Jira integrations for GitLab that are\n-listed on the [Atlassian Marketplace](https://marketplace.atlassian.com/search?product=jira\u0026query=gitlab).\n"},{"old_path":"doc/user/admin_area/index.md","new_path":"doc/user/admin_area/index.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -257,6 +257,10 @@ To edit a topic, select **Edit** in that topic's row.\n To search for topics by name, enter your criteria in the search box. The topic search is case\n insensitive, and applies partial matching.\n \n+NOTE:\n+Topics are public and visible to everyone, but assignments to projects are not.\n+Do not include sensitive information in the name or description of a topic.\n+\n ### Administering Jobs\n \n You can administer all jobs in the GitLab instance from the Admin Area's Jobs page.\n"},{"old_path":"doc/user/clusters/agent/install/index.md","new_path":"doc/user/clusters/agent/install/index.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -85,6 +85,7 @@ the Agent in subsequent steps.\n \n In GitLab:\n \n+1. Ensure that [GitLab CI/CD is enabled in your project](../../../../ci/enable_or_disable_ci.md#enable-cicd-in-a-project).\n 1. From your project's sidebar, select **Infrastructure \u003e Kubernetes clusters**.\n 1. Select the **GitLab Agent managed clusters** tab.\n 1. 
Select **Integrate with the GitLab Agent**.\n"},{"old_path":"doc/user/clusters/agent/repository.md","new_path":"doc/user/clusters/agent/repository.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -173,7 +173,7 @@ To grant projects access to the Agent through the [CI/CD Tunnel](ci_cd_tunnel.md\n 1. Go to your Agent's configuration project.\n 1. Edit the Agent's configuration file (`config.yaml`).\n 1. Add the `projects` attribute into `ci_access`.\n-1. Identify the new project through its path:\n+1. Identify the project through its path:\n \n ```yaml\n ci_access:\n"},{"old_path":"doc/user/clusters/management_project.md","new_path":"doc/user/clusters/management_project.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -12,6 +12,9 @@ info: To determine the technical writer assigned to the Stage/Group associated w\n WARNING:\n This feature was [deprecated](https://gitlab.com/groups/gitlab-org/configure/-/epics/8) in GitLab 14.5.\n \n+To manage cluster applications, use the [GitLab Kubernetes Agent](agent/index.md)\n+with the [Cluster Management Project Template](management_project_template.md).\n+\n A project can be designated as the management project for a cluster.\n A management project can be used to run deployment jobs with\n Kubernetes\n@@ -41,8 +44,7 @@ Management projects are restricted to the following:\n To use a cluster management project to manage your cluster:\n \n 1. Create a new project to serve as the cluster management project\n-for your cluster. We recommend that you\n-[create this project based on the Cluster Management project template](management_project_template.md#create-a-new-project-based-on-the-cluster-management-template).\n+for your cluster.\n 1. [Associate the cluster with the management project](#associate-the-cluster-management-project-with-the-cluster).\n 1. 
[Configure your cluster's pipelines](#configuring-your-pipeline).\n 1. [Set the environment scope](#setting-the-environment-scope).\n"},{"old_path":"doc/user/clusters/management_project_template.md","new_path":"doc/user/clusters/management_project_template.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -4,15 +4,17 @@ group: Configure\n info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments\n ---\n \n-# Cluster Management project template **(FREE)**\n+# Manage cluster applications **(FREE)**\n \n \u003e - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25318) in GitLab 12.10 with Helmfile support via Helm v2.\n \u003e - Helm v2 support was [dropped](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/63577) in GitLab 14.0. Use Helm v3 instead.\n+\u003e - [Migrated](https://gitlab.com/gitlab-org/project-templates/cluster-management/-/merge_requests/24) to the GitLab Kubernetes Agent in GitLab 14.5.\n \n-With a [cluster management project](management_project.md) you can manage\n-your cluster's deployment and applications through a repository in GitLab.\n+Use a repository to install, manage, and deploy cluster applications through code.\n \n-The Cluster Management project template provides you a baseline to get\n+## Cluster Management Project Template\n+\n+The Cluster Management Project Template provides you a baseline to get\n started and flexibility to customize your project to your cluster's needs.\n For instance, you can:\n \n@@ -21,49 +23,78 @@ For instance, you can:\n - Remove the built-in cluster applications you don't need.\n - Add other cluster applications using the same structure as the ones already available.\n \n-The template contains the following 
[components](#configure-the-available-components):\n \n-- A pre-configured GitLab CI/CD file so that you can configure deployment pipelines.\n+- A pre-configured GitLab CI/CD file so that you can configure CI/CD pipelines using the [CI/CD Tunnel](agent/ci_cd_tunnel.md).\n - A pre-configured [Helmfile](https://github.com/roboll/helmfile) so that\n you can manage cluster applications with [Helm v3](https://helm.sh/).\n - An `applications` directory with a `helmfile.yaml` configured for each\n application available in the template.\n \n-WARNING:\n-If you used [GitLab Managed Apps](applications.md) to manage your\n-cluster from GitLab, see how to [migrate from GitLab Managed Apps](migrating_from_gma_to_project_template.md) to the Cluster Management\n-project.\n+## Use the Kubernetes Agent with the Cluster Management Project Template\n+\n+To use a new project created from the Cluster Management Project Template\n+with a cluster connected to GitLab through the [GitLab Kubernetes Agent](agent/index.md),\n+you have two options:\n+\n+- [Use one single project](#single-project) to configure the Agent and manage cluster applications.\n+- [Use separate projects](#separate-projects) - one to configure the Agent and another to manage cluster applications.\n+\n+### Single project\n+\n+This setup is particularly useful when you haven't connected your cluster\n+to GitLab through the Agent yet and you want to use the Cluster Management\n+Project Template to manage cluster applications.\n+\n+To use one single project to configure the Agent and to manage cluster applications:\n+\n+1. [Create a new project from the Cluster Management Project Template](#create-a-new-project-based-on-the-cluster-management-template).\n+1. Configure the new project as the [Agent's configuration repository](agent/repository.md)\n+(where the Agent is registered and its `config.yaml` is stored).\n+1. 
From your project's settings, add a [new environment variable](../../ci/variables/index.md#add-a-cicd-variable-to-a-project) `$KUBE_CONTEXT` and set it to `path/to/agent-configuration-project:your-agent-name`.\n+1. [Configure the components](#configure-the-available-components) inherited from the template.\n+\n+### Separate projects\n+\n+This setup is particularly useful **when you already have a cluster** connected\n+to GitLab through the Agent and want to use the Cluster Management\n+Project Template to manage cluster applications.\n \n-## Set up the management project from the Cluster Management project template\n+To use one project to configure the Agent (\"project A\") and another project to\n+manage cluster applications (\"project B\"), follow the steps below.\n \n-To set up your cluster's management project off of the Cluster Management project template:\n+We assume that you already have a cluster connected through the Agent and\n+[configured through the Agent's configuration repository](agent/repository.md)\n+(\"project A\").\n \n-1. [Create a new project based on the Cluster Management template](#create-a-new-project-based-on-the-cluster-management-template).\n-1. [Associate the cluster management project with your cluster](management_project.md#associate-the-cluster-management-project-with-the-cluster).\n-1. Use the [available components](#available-components) to manage your cluster.\n+1. [Create a new project from the Cluster Management Project Template](#create-a-new-project-based-on-the-cluster-management-template).\n+This new project is \"project B\".\n+1. In your \"project A\", [grant the Agent access to the new project (B) through the CI/CD Tunnel](agent/repository.md#authorize-projects-to-use-an-agent).\n+1. From the \"project's B\" settings, add a [new environment variable](../../ci/variables/index.md#add-a-cicd-variable-to-a-project) `$KUBE_CONTEXT` and set it to `path/to/agent-configuration-project:your-agent-name`.\n+1. 
In \"project B\", [configure the components](#configure-the-available-components) inherited from the template.\n \n-### Create a new project based on the Cluster Management template\n+## Create a new project based on the Cluster Management Template\n \n To get started, create a new project based on the Cluster Management\n project template to use as a cluster management project.\n \n-You can either create the [new project](../project/working_with_projects.md#create-a-project)\n-from the template or import the project from the URL. Importing\n-the project is useful if you are using a GitLab self-managed\n-instance that may not have the latest version of the template.\n+You can either create the new project from the template or import the\n+project from the URL. Importing the project is useful if you are using\n+a GitLab self-managed instance that may not have the latest version of\n+the template.\n \n-To create the new project:\n+To [create the new project](../project/working_with_projects.md#create-a-project):\n \n - From the template: select the **GitLab Cluster Management** project template.\n - Importing from the URL: use `https://gitlab.com/gitlab-org/project-templates/cluster-management.git`.\n \n-## Available components\n+## Configure the available components\n \n-Use the available components to configure your cluster:\n+Use the available components to configure your cluster applications:\n \n-- [A `.gitlab-ci.yml` file](#the-gitlab-ciyml-file).\n-- [A main `helmfile.yml` file](#the-main-helmfileyml-file).\n-- [A directory with built-in applications](#built-in-applications).\n+- [The `.gitlab-ci.yml` file](#the-gitlab-ciyml-file).\n+- [The main `helmfile.yml` file](#the-main-helmfileyml-file).\n+- [The directory with built-in applications](#built-in-applications).\n \n ### The `.gitlab-ci.yml` file\n \n@@ -107,7 +138,7 @@ The [built-in supported applications](https://gitlab.com/gitlab-org/project-temp\n - 
[Sentry](../infrastructure/clusters/manage/management_project_applications/sentry.md)\n - [Vault](../infrastructure/clusters/manage/management_project_applications/vault.md)\n \n-#### How to customize your applications\n+#### Customize your applications\n \n Each app has an `applications/{app}/values.yaml` file (`applications/{app}/values.yaml.gotmpl` in case of GitLab Runner). This is the\n place where you can define default values for your app's Helm chart. Some apps already have defaults\n"},{"old_path":"doc/user/gitlab_com/index.md","new_path":"doc/user/gitlab_com/index.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -198,11 +198,11 @@ The following limits apply for [Webhooks](../project/integrations/webhooks.md):\n | [Number of webhooks](../../administration/instance_limits.md#number-of-webhooks) | `100` per project, `50` per group | `100` per project, `50` per group |\n | Maximum payload size | 25 MB | 25 MB |\n \n-## Shared Build Cloud runners\n+## Shared Runner Cloud runners\n \n GitLab has shared runners on GitLab.com that you can use to run your CI jobs.\n \n-For more information, see [GitLab Build Cloud runners](../../ci/runners/index.md).\n+For more information, see [GitLab Runner Cloud runners](../../ci/runners/index.md).\n \n ## Sidekiq\n \n"},{"old_path":"doc/user/group/saml_sso/scim_setup.md","new_path":"doc/user/group/saml_sso/scim_setup.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -35,9 +35,10 @@ The following identity providers are supported:\n \n Once [Group Single Sign-On](index.md) has been configured, we can:\n \n-1. Navigate to the group and click **Administration \u003e SAML SSO**.\n-1. Click on the **Generate a SCIM token** button.\n-1. Save the token and URL so they can be used in the next step.\n+1. On the top bar, select **Menu \u003e Groups** and find your group.\n+1. 
On the left sidebar, select **Settings \u003e SAML SSO**.\n+1. Select **Generate a SCIM token**.\n+1. Save the token and URL for use in the next step.\n \n \n \n@@ -50,14 +51,14 @@ Once [Group Single Sign-On](index.md) has been configured, we can:\n \n The SAML application that was created during [Single sign-on](index.md) setup for [Azure](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/view-applications-portal) now needs to be set up for SCIM.\n \n-1. Set up automatic provisioning and administrative credentials by following the\n+1. Enable automatic provisioning and administrative credentials by following the\n [Azure's SCIM setup documentation](https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups#provisioning-users-and-groups-to-applications-that-support-scim).\n \n During this configuration, note the following:\n \n-- The `Tenant URL` and `secret token` are the ones retrieved in the\n+- The `Tenant URL` and `secret token` are the items retrieved in the\n [previous step](#gitlab-configuration).\n-- It is recommended to set a notification email and check the **Send an email notification when a failure occurs** checkbox.\n+- We recommend setting a notification email and selecting the **Send an email notification when a failure occurs** checkbox.\n - For mappings, we only leave `Synchronize Azure Active Directory Users to AppName` enabled.\n `Synchronize Azure Active Directory Groups to AppName` is usually disabled. However, this\n does not mean Azure AD users cannot be provisioned in groups. Leaving it enabled does not break\n@@ -113,29 +114,27 @@ Make sure that the Okta setup matches our documentation exactly, especially the\n configuration. Otherwise, the Okta SCIM app may not work properly.\n \n 1. Sign in to Okta.\n-1. If you see an **Admin** button in the top right, click the button. This will\n- ensure you are in the Admin area.\n+1. 
Ensure you are in the Admin section by selecting the **Admin** button located in the top right. The admin button is not visible from the admin page.\n \n NOTE:\n- If you're using the Developer Console, click **Developer Console** in the top\n- bar and select **Classic UI**. Otherwise, you may not see the buttons described\n- in the following steps:\n+ If you're using the Developer Console, select **Developer Console** in the top\n+ bar and then select **Classic UI**. Otherwise, you may not see the buttons described in the following steps:\n \n-1. In the **Application** tab, click **Add Application**.\n-1. Search for **GitLab**, find and click on the 'GitLab' application.\n-1. On the GitLab application overview page, click **Add**.\n+1. In the **Application** tab, select **Add Application**.\n+1. Search for **GitLab**, find and select the 'GitLab' application.\n+1. On the GitLab application overview page, select **Add**.\n 1. Under **Application Visibility** select both checkboxes. Currently the GitLab application does not support SAML authentication so the icon should not be shown to users.\n-1. Click **Done** to finish adding the application.\n-1. In the **Provisioning** tab, click **Configure API integration**.\n+1. Select **Done** to finish adding the application.\n+1. In the **Provisioning** tab, select **Configure API integration**.\n 1. Select **Enable API integration**.\n - For **Base URL** enter the URL obtained from the GitLab SCIM configuration page\n - For **API Token** enter the SCIM token obtained from the GitLab SCIM configuration page\n-1. Click 'Test API Credentials' to verify configuration.\n-1. Click **Save** to apply the settings.\n-1. After saving the API integration details, new settings tabs appear on the left. Choose **To App**.\n-1. Click **Edit**.\n-1. Check the box to **Enable** for both **Create Users** and **Deactivate Users**.\n-1. Click **Save**.\n+1. Select 'Test API Credentials' to verify configuration.\n+1. 
Select **Save** to apply the settings.\n+1. After saving the API integration details, new settings tabs appear on the left. Select **To App**.\n+1. Select **Edit**.\n+1. Select the **Enable** checkbox for both **Create Users** and **Deactivate Users**.\n+1. Select **Save**.\n 1. Assign users in the **Assignments** tab. Assigned users are created and\n managed in your GitLab group.\n \n@@ -147,8 +146,8 @@ application described above.\n \n ### OneLogin\n \n-OneLogin provides a \"GitLab (SaaS)\" app in their catalog, which includes a SCIM integration.\n-As the app is developed by OneLogin, please reach out to OneLogin if you encounter issues.\n+As the developers of this app, OneLogin provides a \"GitLab (SaaS)\" app in their catalog, which includes a SCIM integration.\n+Please reach out to OneLogin if you encounter issues.\n \n ## User access and linking setup\n \n@@ -177,8 +176,8 @@ As long as [Group SAML](index.md) has been configured, existing GitLab.com users\n - By following these steps:\n \n 1. Sign in to GitLab.com if needed.\n- 1. Click on the GitLab app in the identity provider's dashboard or visit the **GitLab single sign-on URL**.\n- 1. Click on the **Authorize** button.\n+ 1. In the identity provider's dashboard select the GitLab app or visit the **GitLab single sign-on URL**.\n+ 1. 
Select **Authorize**.\n \n We recommend users do this prior to turning on sync, because while synchronization is active, there may be provisioning errors for existing users.\n \n"},{"old_path":"doc/user/infrastructure/clusters/index.md","new_path":"doc/user/infrastructure/clusters/index.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -8,7 +8,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w\n \n To connect clusters to GitLab, use the [GitLab Kubernetes Agent](../../clusters/agent/index.md).\n \n-## Certificate-based Kubernetes integration (DEPRECATED) **(FREE)**\n+## Certificate-based Kubernetes integration (DEPRECATED)\n \n WARNING:\n In GitLab 14.5, the certificate-based method to connect Kubernetes clusters\n"},{"old_path":"doc/user/infrastructure/clusters/manage/clusters_health.md","new_path":"doc/user/infrastructure/clusters/manage/clusters_health.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -6,8 +6,8 @@ info: To determine the technical writer assigned to the Stage/Group associated w\n \n # Clusters health **(FREE)**\n \n-\u003e - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/4701) in [GitLab Ultimate](https://about.gitlab.com/pricing/) 10.6.\n-\u003e - [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/208224) to GitLab Free in 13.2.\n+\u003e - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/4701) in GitLab 10.6.\n+\u003e - [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/208224) from GitLab Ultimate to GitLab Free in 13.2.\n \n When [the Prometheus cluster integration is enabled](../../../clusters/integrations.md#prometheus-cluster-integration), GitLab monitors the cluster's health. At the top of the cluster settings page, CPU and Memory utilization is displayed, along with the total amount available. 
Keeping an eye on cluster resources can be important, if the cluster runs out of memory pods may be shutdown or fail to start.\n \n"},{"old_path":"doc/user/infrastructure/clusters/manage/management_project_applications/certmanager.md","new_path":"doc/user/infrastructure/clusters/manage/management_project_applications/certmanager.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -4,7 +4,7 @@ group: Configure\n info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments\n ---\n \n-# Install cert-manager with a cluster management project\n+# Install cert-manager with a cluster management project **(FREE)**\n \n \u003e - [Introduced](https://gitlab.com/gitlab-org/project-templates/cluster-management/-/merge_requests/5) in GitLab 14.0.\n \u003e - Support for cert-manager v1.4 was [introduced](https://gitlab.com/gitlab-org/project-templates/cluster-management/-/merge_requests/69405) in GitLab 14.3.\n"},{"old_path":"doc/user/infrastructure/clusters/manage/management_project_applications/ingress.md","new_path":"doc/user/infrastructure/clusters/manage/management_project_applications/ingress.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -4,7 +4,7 @@ group: Configure\n info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments\n ---\n \n-# Install Ingress with a cluster management project\n+# Install Ingress with a cluster management project **(FREE)**\n \n \u003e [Introduced](https://gitlab.com/gitlab-org/project-templates/cluster-management/-/merge_requests/5) in GitLab 14.0.\n 
\n"},{"old_path":"doc/user/infrastructure/index.md","new_path":"doc/user/infrastructure/index.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -29,13 +29,12 @@ Learn more about how GitLab can help you run [Infrastructure as Code](iac/index.\n \n ## Integrated Kubernetes management\n \n-GitLab has special integrations with Kubernetes to help you deploy, manage and troubleshoot\n-third-party or custom applications in Kubernetes clusters. Auto DevOps provides a full\n-DevSecOps pipeline by default targeted at Kubernetes based deployments. To support\n-all the GitLab features, GitLab offers a cluster management project for easy onboarding.\n-The deploy boards provide quick insights into your cluster, including pod logs tailing.\n+The GitLab integration with Kubernetes helps you to install, configure, manage, deploy, and troubleshoot\n+cluster applications. With the GitLab Kubernetes Agent, you can connect clusters behind a firewall,\n+have real-time access to API endpoints, perform pull-based or push-based deployments for production\n+and non-production environments, and much more.\n \n-Learn more about the [GitLab integration with Kubernetes](clusters/index.md).\n+Learn more about the [GitLab Kubernetes Agent](../clusters/agent/index.md).\n \n ## Runbooks in GitLab\n \n"},{"old_path":"doc/user/packages/composer_repository/index.md","new_path":"doc/user/packages/composer_repository/index.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -149,7 +149,7 @@ Do not save unless you want to overwrite the existing CI/CD file.\n When you publish:\n \n - The same package with different data, it overwrites the existing package.\n-- The same package with the same data, a `404 Bad request` error occurs.\n+- The same package with the same data, a `400 Bad request` error occurs.\n \n ## Install a Composer package\n 
\n"},{"old_path":"doc/user/project/settings/img/import_export_download_export.png","new_path":"doc/user/project/settings/img/import_export_download_export.png","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"Binary files a/doc/user/project/settings/img/import_export_download_export.png and b/doc/user/project/settings/img/import_export_download_export.png differ\n"},{"old_path":"doc/user/project/settings/img/import_export_export_button.png","new_path":"doc/user/project/settings/img/import_export_export_button.png","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"Binary files a/doc/user/project/settings/img/import_export_export_button.png and b/doc/user/project/settings/img/import_export_export_button.png differ\n"},{"old_path":"doc/user/project/settings/import_export.md","new_path":"doc/user/project/settings/import_export.md","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -160,6 +160,8 @@ To export a project and its data, follow these steps:\n \n 1. Select **Settings** in the sidebar.\n \n+1. Scroll down and expand the **Advanced** section.\n+\n 1. 
Scroll down to find the **Export project** button:\n \n \n"},{"old_path":"lib/api/github/entities.rb","new_path":"lib/api/github/entities.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -59,8 +59,8 @@ class RepoCommit \u003c Grape::Entity\n expose :parents do |commit|\n commit.parent_ids.map { |id| { sha: id } }\n end\n- expose :files do |commit|\n- commit.diffs.diff_files.flat_map do |diff|\n+ expose :files do |_commit, options|\n+ options[:diff_files].flat_map do |diff|\n additions = diff.added_lines\n deletions = diff.removed_lines\n \n"},{"old_path":"lib/api/v3/github.rb","new_path":"lib/api/v3/github.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -20,6 +20,9 @@ class Github \u003c ::API::Base\n # Jira Server user agent format: Jira DVCS Connector/version\n JIRA_DVCS_CLOUD_USER_AGENT = 'Jira DVCS Connector Vertigo'\n \n+ GITALY_TIMEOUT_CACHE_KEY = 'api:v3:Gitaly-timeout-cache-key'\n+ GITALY_TIMEOUT_CACHE_EXPIRY = 1.day\n+\n include PaginationParams\n \n feature_category :integrations\n@@ -93,6 +96,32 @@ def find_notes(noteable)\n notes.select { |n| n.readable_by?(current_user) }\n end\n # rubocop: enable CodeReuse/ActiveRecord\n+\n+ # Returns an empty Array instead of the Commit diff files for a period\n+ # of time after a Gitaly timeout, to mitigate frequent Gitaly timeouts\n+ # for some Commit diffs.\n+ def diff_files(commit)\n+ return commit.diffs.diff_files unless Feature.enabled?(:api_v3_commits_skip_diff_files, commit.project)\n+\n+ cache_key = [\n+ GITALY_TIMEOUT_CACHE_KEY,\n+ commit.project.id,\n+ commit.cache_key\n+ ].join(':')\n+\n+ return [] if Rails.cache.read(cache_key).present?\n+\n+ begin\n+ commit.diffs.diff_files\n+ rescue GRPC::DeadlineExceeded =\u003e error\n+ # Gitaly fails to load diffs consistently for some commits. The other information\n+ # is still valuable for Jira. 
So we skip the loading and respond with a 200 excluding diffs\n+ # Remove this when https://gitlab.com/gitlab-org/gitaly/-/issues/3741 is fixed.\n+ Rails.cache.write(cache_key, 1, expires_in: GITALY_TIMEOUT_CACHE_EXPIRY)\n+ Gitlab::ErrorTracking.track_exception(error)\n+ []\n+ end\n+ end\n end\n \n resource :orgs do\n@@ -228,10 +257,9 @@ def find_notes(noteable)\n user_project = find_project_with_access(params)\n \n commit = user_project.commit(params[:sha])\n-\n not_found! 'Commit' unless commit\n \n- present commit, with: ::API::Github::Entities::RepoCommit\n+ present commit, with: ::API::Github::Entities::RepoCommit, diff_files: diff_files(commit)\n end\n end\n end\n"},{"old_path":"lib/gitlab/background_migration/migrate_requirements_to_work_items.rb","new_path":"lib/gitlab/background_migration/migrate_requirements_to_work_items.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,13 @@\n+# frozen_string_literal: true\n+\n+module Gitlab\n+ module BackgroundMigration\n+ # No op on CE\n+ class MigrateRequirementsToWorkItems\n+ def perform(start_id, end_id)\n+ end\n+ end\n+ end\n+end\n+\n+Gitlab::BackgroundMigration::MigrateRequirementsToWorkItems.prepend_mod_with('Gitlab::BackgroundMigration::MigrateRequirementsToWorkItems')\n"},{"old_path":"lib/gitlab/ci/artifact_file_reader.rb","new_path":"lib/gitlab/ci/artifact_file_reader.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -45,14 +45,6 @@ def validate!\n end\n \n def read_zip_file!(file_path)\n- if ::Feature.enabled?(:ci_new_artifact_file_reader, job.project, default_enabled: :yaml)\n- read_with_new_artifact_file_reader(file_path)\n- else\n- read_with_legacy_artifact_file_reader(file_path)\n- end\n- end\n-\n- def read_with_new_artifact_file_reader(file_path)\n job.artifacts_file.use_open_file do |file|\n zip_file = Zip::File.new(file, false, true)\n entry = zip_file.find_entry(file_path)\n@@ 
-69,25 +61,6 @@ def read_with_new_artifact_file_reader(file_path)\n end\n end\n \n- def read_with_legacy_artifact_file_reader(file_path)\n- job.artifacts_file.use_file do |archive_path|\n- Zip::File.open(archive_path) do |zip_file|\n- entry = zip_file.find_entry(file_path)\n- unless entry\n- raise Error, \"Path `#{file_path}` does not exist inside the `#{job.name}` artifacts archive!\"\n- end\n-\n- if entry.name_is_directory?\n- raise Error, \"Path `#{file_path}` was expected to be a file but it was a directory!\"\n- end\n-\n- zip_file.get_input_stream(entry) do |is|\n- is.read\n- end\n- end\n- end\n- end\n-\n def max_archive_size_in_mb\n ActiveSupport::NumberHelper.number_to_human_size(MAX_ARCHIVE_SIZE)\n end\n"},{"old_path":"lib/gitlab/database/gitlab_schema.rb","new_path":"lib/gitlab/database/gitlab_schema.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -8,17 +8,84 @@\n # - gitlab_shared - defines a set of tables that are found on all databases (data accessed is dependent on connection)\n # - gitlab_main / gitlab_ci - defines a set of tables that can only exist on a given database\n #\n+# Tables for the purpose of tests should be prefixed with `_test_my_table_name`\n \n module Gitlab\n module Database\n module GitlabSchema\n+ # These tables are deleted/renamed, but still referenced by migrations.\n+ # This is needed for now, but should be removed in the future\n+ DELETED_TABLES = {\n+ # main tables\n+ 'alerts_service_data' =\u003e :gitlab_main,\n+ 'analytics_devops_adoption_segment_selections' =\u003e :gitlab_main,\n+ 'analytics_repository_file_commits' =\u003e :gitlab_main,\n+ 'analytics_repository_file_edits' =\u003e :gitlab_main,\n+ 'analytics_repository_files' =\u003e :gitlab_main,\n+ 'audit_events_archived' =\u003e :gitlab_main,\n+ 'backup_labels' =\u003e :gitlab_main,\n+ 'clusters_applications_fluentd' =\u003e :gitlab_main,\n+ 'forked_project_links' =\u003e :gitlab_main,\n+ 'issue_milestones' 
=\u003e :gitlab_main,\n+ 'merge_request_milestones' =\u003e :gitlab_main,\n+ 'namespace_onboarding_actions' =\u003e :gitlab_main,\n+ 'services' =\u003e :gitlab_main,\n+ 'terraform_state_registry' =\u003e :gitlab_main,\n+ 'tmp_fingerprint_sha256_migration' =\u003e :gitlab_main, # used by lib/gitlab/background_migration/migrate_fingerprint_sha256_within_keys.rb\n+ 'web_hook_logs_archived' =\u003e :gitlab_main,\n+ 'vulnerability_export_registry' =\u003e :gitlab_main,\n+ 'vulnerability_finding_fingerprints' =\u003e :gitlab_main,\n+ 'vulnerability_export_verification_status' =\u003e :gitlab_main,\n+\n+ # CI tables\n+ 'ci_build_trace_sections' =\u003e :gitlab_ci,\n+ 'ci_build_trace_section_names' =\u003e :gitlab_ci,\n+ 'ci_daily_report_results' =\u003e :gitlab_ci,\n+ 'ci_test_cases' =\u003e :gitlab_ci,\n+ 'ci_test_case_failures' =\u003e :gitlab_ci,\n+\n+ # leftovers from early implementation of partitioning\n+ 'audit_events_part_5fc467ac26' =\u003e :gitlab_main,\n+ 'web_hook_logs_part_0c5294f417' =\u003e :gitlab_main\n+ }.freeze\n+\n def self.table_schemas(tables)\n tables.map { |table| table_schema(table) }.to_set\n end\n \n def self.table_schema(name)\n+ schema_name, table_name = name.split('.', 2) # Strip schema name like: `public.`\n+\n+ # Most names do not have a schema prefix; ensure that this is a table name\n+ unless table_name\n+ table_name = schema_name\n+ schema_name = nil\n+ end\n+\n+ # Strip the partition number from names of the form `loose_foreign_keys_deleted_records_1`\n+ table_name.gsub!(/_[0-9]+$/, '')\n+\n+ # Tables that are properly mapped\n+ if gitlab_schema = tables_to_schema[table_name]\n+ return gitlab_schema\n+ end\n+\n+ # Tables that are deleted, but we still need to reference them\n+ if gitlab_schema = DELETED_TABLES[table_name]\n+ return gitlab_schema\n+ end\n+\n+ # All tables from `information_schema.` are `:gitlab_shared`\n+ return :gitlab_shared if schema_name == 'information_schema'\n+\n+ # All tables that start with `_test_` are shared and ignored\n+ return 
:gitlab_shared if table_name.start_with?('_test_')\n+\n+ # All `pg_` tables are marked as `shared`\n+ return :gitlab_shared if table_name.start_with?('pg_')\n+\n # When undefined it's best to return a unique name so that we don't incorrectly assume that 2 undefined schemas belong on the same database\n- tables_to_schema[name] || :\"undefined_#{name}\"\n+ :\"undefined_#{table_name}\"\n end\n \n def self.tables_to_schema\n"},{"old_path":"lib/gitlab/database/load_balancing/configuration.rb","new_path":"lib/gitlab/database/load_balancing/configuration.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -77,6 +77,10 @@ def primary_connection_specification_name\n (@primary_model || @model).connection_specification_name\n end\n \n+ def primary_db_config\n+ (@primary_model || @model).connection_db_config\n+ end\n+\n def replica_db_config\n @model.connection_db_config\n end\n"},{"old_path":"lib/gitlab/database/query_analyzer.rb","new_path":"lib/gitlab/database/query_analyzer.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,60 @@\n+# frozen_string_literal: true\n+\n+module Gitlab\n+ module Database\n+ # The purpose of this class is to implement various query analyzers based on `pg_query`\n+ # and process them all via `Gitlab::Database::QueryAnalyzers::*`\n+ class QueryAnalyzer\n+ ANALYZERS = [].freeze\n+\n+ Parsed = Struct.new(\n+ :sql, :connection, :pg\n+ )\n+\n+ def hook!\n+ @subscriber = ActiveSupport::Notifications.subscribe('sql.active_record') do |event|\n+ process_sql(event.payload[:sql], event.payload[:connection])\n+ end\n+ end\n+\n+ private\n+\n+ def process_sql(sql, connection)\n+ analyzers = enabled_analyzers(connection)\n+ return unless analyzers.any?\n+\n+ parsed = parse(sql, connection)\n+ return unless parsed\n+\n+ analyzers.each do |analyzer|\n+ analyzer.analyze(parsed)\n+ rescue =\u003e e # rubocop:disable Style/RescueStandardError\n+ # 
We catch all standard errors to prevent validation errors from introducing fatal errors in production\n+ Gitlab::ErrorTracking.track_and_raise_for_dev_exception(e)\n+ end\n+ end\n+\n+ def enabled_analyzers(connection)\n+ ANALYZERS.select do |analyzer|\n+ analyzer.enabled?(connection)\n+ rescue StandardError =\u003e e # rubocop:disable Style/RescueStandardError\n+ # We catch all standard errors to prevent validation errors from introducing fatal errors in production\n+ Gitlab::ErrorTracking.track_and_raise_for_dev_exception(e)\n+ end\n+ end\n+\n+ def parse(sql, connection)\n+ parsed = PgQuery.parse(sql)\n+ return unless parsed\n+\n+ normalized = PgQuery.normalize(sql)\n+ Parsed.new(normalized, connection, parsed)\n+ rescue PgQuery::ParseError =\u003e e\n+ # Ignore PgQuery parse errors (due to depth limit or other reasons)\n+ Gitlab::ErrorTracking.track_exception(e)\n+\n+ nil\n+ end\n+ end\n+ end\n+end\n"},{"old_path":"lib/gitlab/database/query_analyzers/base.rb","new_path":"lib/gitlab/database/query_analyzers/base.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,17 @@\n+# frozen_string_literal: true\n+\n+module Gitlab\n+ module Database\n+ module QueryAnalyzers\n+ class Base\n+ def self.enabled?(connection)\n+ raise NotImplementedError\n+ end\n+\n+ def self.analyze(parsed)\n+ raise NotImplementedError\n+ end\n+ end\n+ end\n+ end\n+end\n"},{"old_path":"lib/gitlab/graphql/known_operations.rb","new_path":"lib/gitlab/graphql/known_operations.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,40 @@\n+# frozen_string_literal: true\n+\n+module Gitlab\n+ module Graphql\n+ class KnownOperations\n+ Operation = Struct.new(:name) do\n+ def to_caller_id\n+ \"graphql:#{name}\"\n+ end\n+ end\n+\n+ ANONYMOUS = Operation.new(\"anonymous\").freeze\n+ UNKNOWN = Operation.new(\"unknown\").freeze\n+\n+ def self.default\n+ @default ||= 
self.new(Gitlab::Webpack::GraphqlKnownOperations.load)\n+ end\n+\n+ def initialize(operation_names)\n+ @operation_hash = operation_names\n+ .map { |name| Operation.new(name).freeze }\n+ .concat([ANONYMOUS, UNKNOWN])\n+ .index_by(\u0026:name)\n+ end\n+\n+ # Returns the known operation from the given ::GraphQL::Query object\n+ def from_query(query)\n+ operation_name = query.selected_operation_name\n+\n+ return ANONYMOUS unless operation_name\n+\n+ @operation_hash[operation_name] || UNKNOWN\n+ end\n+\n+ def operations\n+ @operation_hash.values\n+ end\n+ end\n+ end\n+end\n"},{"old_path":"lib/gitlab/sidekiq_config.rb","new_path":"lib/gitlab/sidekiq_config.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -6,11 +6,13 @@ module Gitlab\n module SidekiqConfig\n FOSS_QUEUE_CONFIG_PATH = 'app/workers/all_queues.yml'\n EE_QUEUE_CONFIG_PATH = 'ee/app/workers/all_queues.yml'\n+ JH_QUEUE_CONFIG_PATH = 'jh/app/workers/all_queues.yml'\n SIDEKIQ_QUEUES_PATH = 'config/sidekiq_queues.yml'\n \n QUEUE_CONFIG_PATHS = [\n FOSS_QUEUE_CONFIG_PATH,\n- (EE_QUEUE_CONFIG_PATH if Gitlab.ee?)\n+ (EE_QUEUE_CONFIG_PATH if Gitlab.ee?),\n+ (JH_QUEUE_CONFIG_PATH if Gitlab.jh?)\n ].compact.freeze\n \n # This maps workers not in our application code to queues. 
We need\n@@ -33,7 +35,7 @@ module SidekiqConfig\n weight: 2,\n tags: []\n )\n- }.transform_values { |worker| Gitlab::SidekiqConfig::Worker.new(worker, ee: false) }.freeze\n+ }.transform_values { |worker| Gitlab::SidekiqConfig::Worker.new(worker, ee: false, jh: false) }.freeze\n \n class \u003c\u003c self\n include Gitlab::SidekiqConfig::CliMethods\n@@ -58,10 +60,14 @@ def workers\n @workers ||= begin\n result = []\n result.concat(DEFAULT_WORKERS.values)\n- result.concat(find_workers(Rails.root.join('app', 'workers'), ee: false))\n+ result.concat(find_workers(Rails.root.join('app', 'workers'), ee: false, jh: false))\n \n if Gitlab.ee?\n- result.concat(find_workers(Rails.root.join('ee', 'app', 'workers'), ee: true))\n+ result.concat(find_workers(Rails.root.join('ee', 'app', 'workers'), ee: true, jh: false))\n+ end\n+\n+ if Gitlab.jh?\n+ result.concat(find_workers(Rails.root.join('jh', 'app', 'workers'), ee: false, jh: true))\n end\n \n result\n@@ -69,16 +75,26 @@ def workers\n end\n \n def workers_for_all_queues_yml\n- workers.partition(\u0026:ee?).reverse.map(\u0026:sort)\n+ workers.each_with_object([[], [], []]) do |worker, array|\n+ if worker.jh?\n+ array[2].push(worker)\n+ elsif worker.ee?\n+ array[1].push(worker)\n+ else\n+ array[0].push(worker)\n+ end\n+ end.map(\u0026:sort)\n end\n \n # YAML.load_file is OK here as we control the file contents\n def all_queues_yml_outdated?\n- foss_workers, ee_workers = workers_for_all_queues_yml\n+ foss_workers, ee_workers, jh_workers = workers_for_all_queues_yml\n \n return true if foss_workers != YAML.load_file(FOSS_QUEUE_CONFIG_PATH)\n \n- Gitlab.ee? \u0026\u0026 ee_workers != YAML.load_file(EE_QUEUE_CONFIG_PATH)\n+ return true if Gitlab.ee? \u0026\u0026 ee_workers != YAML.load_file(EE_QUEUE_CONFIG_PATH)\n+\n+ Gitlab.jh? 
\u0026\u0026 File.exist?(JH_QUEUE_CONFIG_PATH) \u0026\u0026 jh_workers != YAML.load_file(JH_QUEUE_CONFIG_PATH)\n end\n \n def queues_for_sidekiq_queues_yml\n@@ -120,14 +136,14 @@ def current_worker_queue_mappings\n \n private\n \n- def find_workers(root, ee:)\n+ def find_workers(root, ee:, jh:)\n concerns = root.join('concerns').to_s\n \n Dir[root.join('**', '*.rb')]\n .reject { |path| path.start_with?(concerns) }\n .map { |path| worker_from_path(path, root) }\n .select { |worker| worker \u003c Sidekiq::Worker }\n- .map { |worker| Gitlab::SidekiqConfig::Worker.new(worker, ee: ee) }\n+ .map { |worker| Gitlab::SidekiqConfig::Worker.new(worker, ee: ee, jh: jh) }\n end\n \n def worker_from_path(path, root)\n"},{"old_path":"lib/gitlab/sidekiq_config/cli_methods.rb","new_path":"lib/gitlab/sidekiq_config/cli_methods.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -18,6 +18,7 @@ module CliMethods\n QUEUE_CONFIG_PATHS = begin\n result = %w[app/workers/all_queues.yml]\n result \u003c\u003c 'ee/app/workers/all_queues.yml' if Gitlab.ee?\n+ result \u003c\u003c 'jh/app/workers/all_queues.yml' if Gitlab.jh?\n result\n end.freeze\n \n"},{"old_path":"lib/gitlab/sidekiq_config/worker.rb","new_path":"lib/gitlab/sidekiq_config/worker.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -13,15 +13,20 @@ class Worker\n :worker_has_external_dependencies?,\n to: :klass\n \n- def initialize(klass, ee:)\n+ def initialize(klass, ee:, jh: false)\n @klass = klass\n @ee = ee\n+ @jh = jh\n end\n \n def ee?\n @ee\n end\n \n+ def jh?\n+ @jh\n+ end\n+\n def ==(other)\n to_yaml == case other\n when self.class\n"},{"old_path":"lib/gitlab/webpack/file_loader.rb","new_path":"lib/gitlab/webpack/file_loader.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,65 @@\n+# frozen_string_literal: true\n+\n+require 'net/http'\n+require 
'uri'\n+\n+module Gitlab\n+ module Webpack\n+ class FileLoader\n+ class BaseError \u003c StandardError\n+ attr_reader :original_error, :uri\n+\n+ def initialize(uri, orig)\n+ super orig.message\n+ @uri = uri.to_s\n+ @original_error = orig\n+ end\n+ end\n+\n+ StaticLoadError = Class.new(BaseError)\n+ DevServerLoadError = Class.new(BaseError)\n+ DevServerSSLError = Class.new(BaseError)\n+\n+ def self.load(path)\n+ if Gitlab.config.webpack.dev_server.enabled\n+ self.load_from_dev_server(path)\n+ else\n+ self.load_from_static(path)\n+ end\n+ end\n+\n+ def self.load_from_dev_server(path)\n+ host = Gitlab.config.webpack.dev_server.host\n+ port = Gitlab.config.webpack.dev_server.port\n+ scheme = Gitlab.config.webpack.dev_server.https ? 'https' : 'http'\n+ uri = Addressable::URI.new(scheme: scheme, host: host, port: port, path: self.dev_server_path(path))\n+\n+ # localhost could be blocked via Gitlab::HTTP\n+ response = HTTParty.get(uri.to_s, verify: false) # rubocop:disable Gitlab/HTTParty\n+\n+ return response.body if response.code == 200\n+\n+ raise \"HTTP error #{response.code}\"\n+ rescue OpenSSL::SSL::SSLError, EOFError =\u003e e\n+ raise DevServerSSLError.new(uri, e)\n+ rescue StandardError =\u003e e\n+ raise DevServerLoadError.new(uri, e)\n+ end\n+\n+ def self.load_from_static(path)\n+ file_uri = ::Rails.root.join(\n+ Gitlab.config.webpack.output_dir,\n+ path\n+ )\n+\n+ File.read(file_uri)\n+ rescue StandardError =\u003e e\n+ raise StaticLoadError.new(file_uri, e)\n+ end\n+\n+ def self.dev_server_path(path)\n+ \"/#{Gitlab.config.webpack.public_path}/#{path}\"\n+ end\n+ end\n+ end\n+end\n"},{"old_path":"lib/gitlab/webpack/graphql_known_operations.rb","new_path":"lib/gitlab/webpack/graphql_known_operations.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,25 @@\n+# frozen_string_literal: true\n+\n+module Gitlab\n+ module Webpack\n+ class GraphqlKnownOperations\n+ class \u003c\u003c self\n+ include 
Gitlab::Utils::StrongMemoize\n+\n+ def clear_memoization!\n+ clear_memoization(:graphql_known_operations)\n+ end\n+\n+ def load\n+ strong_memoize(:graphql_known_operations) do\n+ data = ::Gitlab::Webpack::FileLoader.load(\"graphql_known_operations.yml\")\n+\n+ YAML.safe_load(data)\n+ rescue StandardError\n+ []\n+ end\n+ end\n+ end\n+ end\n+ end\n+end\n"},{"old_path":"lib/gitlab/webpack/manifest.rb","new_path":"lib/gitlab/webpack/manifest.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,8 +1,5 @@\n # frozen_string_literal: true\n \n-require 'net/http'\n-require 'uri'\n-\n module Gitlab\n module Webpack\n class Manifest\n@@ -78,49 +75,16 @@ def manifest\n end\n \n def load_manifest\n- data = if Gitlab.config.webpack.dev_server.enabled\n- load_dev_server_manifest\n- else\n- load_static_manifest\n- end\n+ data = Gitlab::Webpack::FileLoader.load(Gitlab.config.webpack.manifest_filename)\n \n Gitlab::Json.parse(data)\n- end\n-\n- def load_dev_server_manifest\n- host = Gitlab.config.webpack.dev_server.host\n- port = Gitlab.config.webpack.dev_server.port\n- scheme = Gitlab.config.webpack.dev_server.https ? 'https' : 'http'\n- uri = Addressable::URI.new(scheme: scheme, host: host, port: port, path: dev_server_path)\n-\n- # localhost could be blocked via Gitlab::HTTP\n- response = HTTParty.get(uri.to_s, verify: false) # rubocop:disable Gitlab/HTTParty\n-\n- return response.body if response.code == 200\n-\n- raise \"HTTP error #{response.code}\"\n- rescue OpenSSL::SSL::SSLError, EOFError =\u003e e\n+ rescue Gitlab::Webpack::FileLoader::StaticLoadError =\u003e e\n+ raise ManifestLoadError.new(\"Could not load compiled manifest from #{e.uri}.\\n\\nHave you run `rake gitlab:assets:compile`?\", e.original_error)\n+ rescue Gitlab::Webpack::FileLoader::DevServerSSLError =\u003e e\n ssl_status = Gitlab.config.webpack.dev_server.https ? 
' over SSL' : ''\n- raise ManifestLoadError.new(\"Could not connect to webpack-dev-server at #{uri}#{ssl_status}.\\n\\nIs SSL enabled? Check that settings in `gitlab.yml` and webpack-dev-server match.\", e)\n- rescue StandardError =\u003e e\n- raise ManifestLoadError.new(\"Could not load manifest from webpack-dev-server at #{uri}.\\n\\nIs webpack-dev-server running? Try running `gdk status webpack` or `gdk tail webpack`.\", e)\n- end\n-\n- def load_static_manifest\n- File.read(static_manifest_path)\n- rescue StandardError =\u003e e\n- raise ManifestLoadError.new(\"Could not load compiled manifest from #{static_manifest_path}.\\n\\nHave you run `rake gitlab:assets:compile`?\", e)\n- end\n-\n- def static_manifest_path\n- ::Rails.root.join(\n- Gitlab.config.webpack.output_dir,\n- Gitlab.config.webpack.manifest_filename\n- )\n- end\n-\n- def dev_server_path\n- \"/#{Gitlab.config.webpack.public_path}/#{Gitlab.config.webpack.manifest_filename}\"\n+ raise ManifestLoadError.new(\"Could not connect to webpack-dev-server at #{e.uri}#{ssl_status}.\\n\\nIs SSL enabled? Check that settings in `gitlab.yml` and webpack-dev-server match.\", e.original_error)\n+ rescue Gitlab::Webpack::FileLoader::DevServerLoadError =\u003e e\n+ raise ManifestLoadError.new(\"Could not load manifest from webpack-dev-server at #{e.uri}.\\n\\nIs webpack-dev-server running? 
Try running `gdk status webpack` or `gdk tail webpack`.\", e.original_error)\n end\n end\n end\n"},{"old_path":"lib/sidebars/projects/menus/infrastructure_menu.rb","new_path":"lib/sidebars/projects/menus/infrastructure_menu.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -100,7 +100,7 @@ def google_cloud_menu_item\n ::Sidebars::MenuItem.new(\n title: _('Google Cloud'),\n link: project_google_cloud_index_path(context.project),\n- active_routes: {},\n+ active_routes: { controller: :google_cloud },\n item_id: :google_cloud\n )\n end\n"},{"old_path":"lib/tasks/gitlab/sidekiq.rake","new_path":"lib/tasks/gitlab/sidekiq.rake","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -36,13 +36,17 @@ namespace :gitlab do\n # Do not edit it manually!\n BANNER\n \n- foss_workers, ee_workers = Gitlab::SidekiqConfig.workers_for_all_queues_yml\n+ foss_workers, ee_workers, jh_workers = Gitlab::SidekiqConfig.workers_for_all_queues_yml\n \n write_yaml(Gitlab::SidekiqConfig::FOSS_QUEUE_CONFIG_PATH, banner, foss_workers)\n \n if Gitlab.ee?\n write_yaml(Gitlab::SidekiqConfig::EE_QUEUE_CONFIG_PATH, banner, ee_workers)\n end\n+\n+ if Gitlab.jh?\n+ write_yaml(Gitlab::SidekiqConfig::JH_QUEUE_CONFIG_PATH, banner, jh_workers)\n+ end\n end\n \n desc 'GitLab | Sidekiq | Validate that all_queues.yml matches worker definitions'\n@@ -57,6 +61,7 @@ namespace :gitlab do\n \n - #{Gitlab::SidekiqConfig::FOSS_QUEUE_CONFIG_PATH}\n - #{Gitlab::SidekiqConfig::EE_QUEUE_CONFIG_PATH}\n+ #{\"- \" + Gitlab::SidekiqConfig::JH_QUEUE_CONFIG_PATH if Gitlab.jh?}\n \n MSG\n end\n"},{"old_path":"locale/gitlab.pot","new_path":"locale/gitlab.pot","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1107,9 +1107,6 @@ msgstr \"\"\n msgid \"(check progress)\"\n msgstr \"\"\n \n-msgid \"(commits will be squashed)\"\n-msgstr \"\"\n-\n msgid \"(deleted)\"\n 
msgstr \"\"\n \n@@ -1128,6 +1125,11 @@ msgstr \"\"\n msgid \"(revoked)\"\n msgstr \"\"\n \n+msgid \"(squashes %d commit)\"\n+msgid_plural \"(squashes %d commits)\"\n+msgstr[0] \"\"\n+msgstr[1] \"\"\n+\n msgid \"(this user)\"\n msgstr \"\"\n \n@@ -4037,6 +4039,9 @@ msgid_plural \"ApplicationSettings|Approve %d users\"\n msgstr[0] \"\"\n msgstr[1] \"\"\n \n+msgid \"ApplicationSettings|Approve users\"\n+msgstr \"\"\n+\n msgid \"ApplicationSettings|Approve users in the pending approval status?\"\n msgstr \"\"\n \n@@ -4045,6 +4050,9 @@ msgid_plural \"ApplicationSettings|By making this change, you will automatically\n msgstr[0] \"\"\n msgstr[1] \"\"\n \n+msgid \"ApplicationSettings|By making this change, you will automatically approve all users in pending approval status.\"\n+msgstr \"\"\n+\n msgid \"ApplicationSettings|Denied domains for sign-ups\"\n msgstr \"\"\n \n@@ -6547,6 +6555,9 @@ msgstr \"\"\n msgid \"Changes to the title have not been saved\"\n msgstr \"\"\n \n+msgid \"Changing any setting here requires an application restart\"\n+msgstr \"\"\n+\n msgid \"Changing group URL can have unintended side effects.\"\n msgstr \"\"\n \n@@ -7190,6 +7201,9 @@ msgstr \"\"\n msgid \"Clients\"\n msgstr \"\"\n \n+msgid \"Clientside DSN\"\n+msgstr \"\"\n+\n msgid \"Clone\"\n msgstr \"\"\n \n@@ -8683,6 +8697,9 @@ msgstr \"\"\n msgid \"Configure Secret Detection in `.gitlab-ci.yml`, creating this file if it does not already exist\"\n msgstr \"\"\n \n+msgid \"Configure Sentry integration for error tracking\"\n+msgstr \"\"\n+\n msgid \"Configure Tracing\"\n msgstr \"\"\n \n@@ -10391,6 +10408,9 @@ msgstr \"\"\n msgid \"DORA4Metrics|The chart displays the median time between a merge request being merged and deployed to production environment(s) that are based on the %{linkStart}deployment_tier%{linkEnd} value.\"\n msgstr \"\"\n \n+msgid \"DSN\"\n+msgstr \"\"\n+\n msgid \"Dashboard\"\n msgstr \"\"\n \n@@ -11158,6 +11178,12 @@ msgstr \"\"\n msgid \"Deleted projects cannot be 
restored!\"\n msgstr \"\"\n \n+msgid \"Deletes the source branch\"\n+msgstr \"\"\n+\n+msgid \"Deletes the source branch.\"\n+msgstr \"\"\n+\n msgid \"Deleting\"\n msgstr \"\"\n \n@@ -12208,6 +12234,9 @@ msgstr \"\"\n msgid \"Does not apply to projects in personal namespaces, which are deleted immediately on request.\"\n msgstr \"\"\n \n+msgid \"Does not delete the source branch.\"\n+msgstr \"\"\n+\n msgid \"Domain\"\n msgstr \"\"\n \n@@ -12703,6 +12732,9 @@ msgstr \"\"\n msgid \"Enable SSL verification\"\n msgstr \"\"\n \n+msgid \"Enable Sentry error tracking\"\n+msgstr \"\"\n+\n msgid \"Enable Service Ping\"\n msgstr \"\"\n \n@@ -16109,6 +16141,9 @@ msgstr \"\"\n msgid \"GraphViewType|Stage\"\n msgstr \"\"\n \n+msgid \"Graphs\"\n+msgstr \"\"\n+\n msgid \"Gravatar\"\n msgstr \"\"\n \n@@ -29691,6 +29726,9 @@ msgstr \"\"\n msgid \"Runners|New runner, has not connected yet\"\n msgstr \"\"\n \n+msgid \"Runners|No recent contact from this runner; last contact was %{timeAgo}\"\n+msgstr \"\"\n+\n msgid \"Runners|Not available to run jobs\"\n msgstr \"\"\n \n@@ -29742,6 +29780,9 @@ msgstr \"\"\n msgid \"Runners|Runner #%{runner_id}\"\n msgstr \"\"\n \n+msgid \"Runners|Runner ID\"\n+msgstr \"\"\n+\n msgid \"Runners|Runner assigned to project.\"\n msgstr \"\"\n \n@@ -29751,6 +29792,9 @@ msgstr \"\"\n msgid \"Runners|Runner is online, last contact was %{runner_contact} ago\"\n msgstr \"\"\n \n+msgid \"Runners|Runner is online; last contact was %{timeAgo}\"\n+msgstr \"\"\n+\n msgid \"Runners|Runner is paused, last contact was %{runner_contact} ago\"\n msgstr \"\"\n \n@@ -29781,12 +29825,18 @@ msgstr \"\"\n msgid \"Runners|Something went wrong while fetching the tags suggestions\"\n msgstr \"\"\n \n+msgid \"Runners|Status\"\n+msgstr \"\"\n+\n msgid \"Runners|Stop the runner from accepting new jobs.\"\n msgstr \"\"\n \n msgid \"Runners|Tags\"\n msgstr \"\"\n \n+msgid \"Runners|This runner has never connected to this instance\"\n+msgstr \"\"\n+\n msgid \"Runners|This runner is 
associated with one or more projects.\"\n msgstr \"\"\n \n@@ -29853,6 +29903,15 @@ msgstr \"\"\n msgid \"Runners|locked\"\n msgstr \"\"\n \n+msgid \"Runners|not connected\"\n+msgstr \"\"\n+\n+msgid \"Runners|offline\"\n+msgstr \"\"\n+\n+msgid \"Runners|online\"\n+msgstr \"\"\n+\n msgid \"Runners|paused\"\n msgstr \"\"\n \n@@ -32473,12 +32532,6 @@ msgstr \"\"\n msgid \"Source branch\"\n msgstr \"\"\n \n-msgid \"Source branch will be deleted.\"\n-msgstr \"\"\n-\n-msgid \"Source branch will not be deleted.\"\n-msgstr \"\"\n-\n msgid \"Source branch: %{source_branch_open}%{source_branch}%{source_branch_close}\"\n msgstr \"\"\n \n@@ -34136,7 +34189,7 @@ msgstr \"\"\n msgid \"The connection will time out after %{timeout}. For repositories that take longer, use a clone/push combination.\"\n msgstr \"\"\n \n-msgid \"The contact does not belong to the same group as the issue.\"\n+msgid \"The contact does not belong to the same group as the issue\"\n msgstr \"\"\n \n msgid \"The content of this page is not encoded in UTF-8. 
Edits can only be made via the Git repository.\"\n@@ -34474,9 +34527,6 @@ msgstr \"\"\n msgid \"The snippet is visible to any logged in user except external users.\"\n msgstr \"\"\n \n-msgid \"The source branch will be deleted\"\n-msgstr \"\"\n-\n msgid \"The specified tab is invalid, please select another\"\n msgstr \"\"\n \n@@ -40986,13 +41036,13 @@ msgstr \"\"\n msgid \"most recent deployment\"\n msgstr \"\"\n \n-msgid \"mrWidgetCommitsAdded|%{commitCount} and %{mergeCommitCount} will be added to %{targetBranch}%{squashedCommits}.\"\n+msgid \"mrWidgetCommitsAdded|1 merge commit\"\n msgstr \"\"\n \n-msgid \"mrWidgetCommitsAdded|%{commitCount} will be added to %{targetBranch}.\"\n+msgid \"mrWidgetCommitsAdded|Adds %{commitCount} and %{mergeCommitCount} to %{targetBranch}%{squashedCommits}.\"\n msgstr \"\"\n \n-msgid \"mrWidgetCommitsAdded|1 merge commit\"\n+msgid \"mrWidgetCommitsAdded|Adds %{commitCount} to %{targetBranch}.\"\n msgstr \"\"\n \n msgid \"mrWidgetNothingToMerge|This merge request contains no changes.\"\n@@ -41102,6 +41152,9 @@ msgstr \"\"\n msgid \"mrWidget|Delete source branch\"\n msgstr \"\"\n \n+msgid \"mrWidget|Deletes the source branch\"\n+msgstr \"\"\n+\n msgid \"mrWidget|Deployment statistics are not available currently\"\n msgstr \"\"\n \n@@ -41111,6 +41164,9 @@ msgstr \"\"\n msgid \"mrWidget|Dismiss\"\n msgstr \"\"\n \n+msgid \"mrWidget|Does not delete the source branch\"\n+msgstr \"\"\n+\n msgid \"mrWidget|Email patches\"\n msgstr \"\"\n \n@@ -41167,6 +41223,9 @@ msgstr \"\"\n msgid \"mrWidget|Merged by\"\n msgstr \"\"\n \n+msgid \"mrWidget|Merges changes into\"\n+msgstr \"\"\n+\n msgid \"mrWidget|Merging! Changes are being shipped…\"\n msgstr \"\"\n \n@@ -41251,9 +41310,6 @@ msgstr \"\"\n msgid \"mrWidget|The changes were not merged into\"\n msgstr \"\"\n \n-msgid \"mrWidget|The changes will be merged into\"\n-msgstr \"\"\n-\n msgid \"mrWidget|The pipeline for this merge request did not complete. 
Push a new commit to fix the failure, or check the %{linkStart}troubleshooting documentation%{linkEnd} to see other possible actions.\"\n msgstr \"\"\n \n@@ -41269,12 +41325,6 @@ msgstr \"\"\n msgid \"mrWidget|The source branch is being deleted\"\n msgstr \"\"\n \n-msgid \"mrWidget|The source branch will be deleted\"\n-msgstr \"\"\n-\n-msgid \"mrWidget|The source branch will not be deleted\"\n-msgstr \"\"\n-\n msgid \"mrWidget|There are merge conflicts\"\n msgstr \"\"\n \n"},{"old_path":"qa/Gemfile.lock","new_path":"qa/Gemfile.lock","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -227,10 +227,10 @@ GEM\n watir (6.19.1)\n regexp_parser (\u003e= 1.2, \u003c 3)\n selenium-webdriver (\u003e= 3.142.7)\n- webdrivers (4.7.0)\n+ webdrivers (5.0.0)\n nokogiri (~\u003e 1.6)\n rubyzip (\u003e= 1.3.0)\n- selenium-webdriver (\u003e 3.141, \u003c 5.0)\n+ selenium-webdriver (~\u003e 4.0)\n xpath (3.2.0)\n nokogiri (~\u003e 1.8)\n zeitwerk (2.4.2)\n@@ -263,10 +263,10 @@ DEPENDENCIES\n rspec-retry (~\u003e 0.6.1)\n rspec_junit_formatter (~\u003e 0.4.1)\n ruby-debug-ide (~\u003e 0.7.0)\n- selenium-webdriver (~\u003e 4.0.0.rc1)\n+ selenium-webdriver (~\u003e 4.0)\n timecop (~\u003e 0.9.1)\n- webdrivers (~\u003e 4.6)\n+ webdrivers (~\u003e 5.0)\n zeitwerk (~\u003e 2.4)\n \n BUNDLED WITH\n- 2.2.29\n+ 2.2.30\n"},{"old_path":"qa/qa/page/group/settings/package_registries.rb","new_path":"qa/qa/page/group/settings/package_registries.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,5 +1,4 @@\n # frozen_string_literal: true\n-\n module QA\n module Page\n module Group\n@@ -20,22 +19,33 @@ class PackageRegistries \u003c QA::Page::Base\n \n def set_allow_duplicates_disabled\n expand_content :package_registry_settings_content do\n- click_element(:allow_duplicates_toggle) if duplicates_enabled?\n+ click_on_allow_duplicates_button if duplicates_enabled?\n end\n end\n \n def 
set_allow_duplicates_enabled\n expand_content :package_registry_settings_content do\n- click_element(:allow_duplicates_toggle) if duplicates_disabled?\n+ click_on_allow_duplicates_button unless duplicates_enabled?\n+ end\n+ end\n+\n+ def click_on_allow_duplicates_button\n+ with_allow_duplicates_button do |button|\n+ button.click\n end\n end\n \n def duplicates_enabled?\n- has_element?(:allow_duplicates_label, text: 'Allow duplicates')\n+ with_allow_duplicates_button do |button|\n+ button[:class].include?('is-checked')\n+ end\n end\n \n- def duplicates_disabled?\n- has_element?(:allow_duplicates_label, text: 'Do not allow duplicates')\n+ def with_allow_duplicates_button\n+ within_element :allow_duplicates_toggle do\n+ toggle = find('button.gl-toggle')\n+ yield(toggle)\n+ end\n end\n \n def has_dependency_proxy_enabled?\n"},{"old_path":"qa/qa/specs/features/browser_ui/1_manage/login/maintain_log_in_mixed_env_spec.rb","new_path":"qa/qa/specs/features/browser_ui/1_manage/login/maintain_log_in_mixed_env_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,7 +1,7 @@\n # frozen_string_literal: true\n \n module QA\n- RSpec.describe 'Manage', :mixed_env, :smoke, only: { subdomain: :staging } do\n+ RSpec.describe 'Manage', only: { subdomain: :staging }, quarantine: { issue: 'https://gitlab.com/gitlab-org/gitlab/-/issues/344213', type: :stale } do\n describe 'basic user' do\n it 'remains logged in when redirected from canary to non-canary node', testcase: 'https://gitlab.com/gitlab-org/quality/testcases/-/quality/test_cases/2251' do\n Runtime::Browser.visit(:gitlab, Page::Main::Login)\n"},{"old_path":"scripts/rspec_helpers.sh","new_path":"scripts/rspec_helpers.sh","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -166,6 +166,7 @@ function rspec_paralellized_job() {\n export SUITE_FLAKY_RSPEC_REPORT_PATH=\"${FLAKY_RSPEC_SUITE_REPORT_PATH}\"\n export 
FLAKY_RSPEC_REPORT_PATH=\"rspec_flaky/all_${report_name}_report.json\"\n export NEW_FLAKY_RSPEC_REPORT_PATH=\"rspec_flaky/new_${report_name}_report.json\"\n+ export SKIPPED_FLAKY_TESTS_REPORT_PATH=\"rspec_flaky/skipped_flaky_tests_${report_name}_report.txt\"\n \n if [[ ! -f $FLAKY_RSPEC_REPORT_PATH ]]; then\n echo \"{}\" \u003e \"${FLAKY_RSPEC_REPORT_PATH}\"\n"},{"old_path":"lib/gitlab/sidekiq_cluster/cli.rb","new_path":"sidekiq_cluster/cli.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":"@@ -4,27 +4,21 @@\n require 'logger'\n require 'time'\n \n+# In environments where code is preloaded and cached such as `spring`,\n+# we may run into \"already initialized\" warnings, hence the check.\n+require_relative '../lib/gitlab' unless Object.const_defined?('Gitlab')\n+require_relative '../lib/gitlab/utils'\n+require_relative '../lib/gitlab/sidekiq_config/cli_methods'\n+require_relative '../lib/gitlab/sidekiq_config/worker_matcher'\n+require_relative '../lib/gitlab/sidekiq_logging/json_formatter'\n+require_relative 'sidekiq_cluster'\n+\n module Gitlab\n module SidekiqCluster\n class CLI\n- CHECK_TERMINATE_INTERVAL_SECONDS = 1\n-\n- # How long to wait when asking for a clean termination.\n- # It maps the Sidekiq default timeout:\n- # https://github.com/mperham/sidekiq/wiki/Signals#term\n- #\n- # This value is passed to Sidekiq's `-t` if none\n- # is given through arguments.\n- DEFAULT_SOFT_TIMEOUT_SECONDS = 25\n-\n- # After surpassing the soft timeout.\n- DEFAULT_HARD_TIMEOUT_SECONDS = 5\n-\n CommandError = Class.new(StandardError)\n \n def initialize(log_output = $stderr)\n- require_relative '../../../lib/gitlab/sidekiq_logging/json_formatter'\n-\n # As recommended by https://github.com/mperham/sidekiq/wiki/Advanced-Options#concurrency\n @max_concurrency = 50\n @min_concurrency = 
0\n"},{"old_path":"sidekiq_cluster/dependencies.rb","new_path":"sidekiq_cluster/dependencies.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,6 @@\n+# rubocop:disable Naming/FileName\n+# frozen_string_literal: true\n+\n+require 'shellwords'\n+\n+# rubocop:enable Naming/FileName\n"},{"old_path":"lib/gitlab/sidekiq_cluster.rb","new_path":"sidekiq_cluster/sidekiq_cluster.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":"@@ -1,9 +1,22 @@\n # frozen_string_literal: true\n \n-require 'shellwords'\n+require_relative 'dependencies'\n \n module Gitlab\n module SidekiqCluster\n+ CHECK_TERMINATE_INTERVAL_SECONDS = 1\n+\n+ # How long to wait when asking for a clean termination.\n+ # It maps the Sidekiq default timeout:\n+ # https://github.com/mperham/sidekiq/wiki/Signals#term\n+ #\n+ # This value is passed to Sidekiq's `-t` if none\n+ # is given through arguments.\n+ DEFAULT_SOFT_TIMEOUT_SECONDS = 25\n+\n+ # After surpassing the soft timeout.\n+ DEFAULT_HARD_TIMEOUT_SECONDS = 5\n+\n # The signals that should terminate both the master and workers.\n TERMINATE_SIGNALS = %i(INT TERM).freeze\n \n@@ -62,7 +75,7 @@ def self.signal_processes(pids, signal)\n # directory - The directory of the Rails application.\n #\n # Returns an Array containing the PIDs of the started processes.\n- def self.start(queues, env: :development, directory: Dir.pwd, max_concurrency: 50, min_concurrency: 0, timeout: CLI::DEFAULT_SOFT_TIMEOUT_SECONDS, dryrun: false)\n+ def self.start(queues, env: :development, directory: Dir.pwd, max_concurrency: 50, min_concurrency: 0, timeout: DEFAULT_SOFT_TIMEOUT_SECONDS, dryrun: false)\n queues.map.with_index do |pair, index|\n start_sidekiq(pair, env: env,\n directory: 
directory,\n"},{"old_path":"spec/lib/gitlab/sidekiq_cluster/cli_spec.rb","new_path":"spec/commands/sidekiq_cluster/cli_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":"@@ -3,9 +3,11 @@\n require 'fast_spec_helper'\n require 'rspec-parameterized'\n \n-RSpec.describe Gitlab::SidekiqCluster::CLI do\n+require_relative '../../../sidekiq_cluster/cli'\n+\n+RSpec.describe Gitlab::SidekiqCluster::CLI do # rubocop:disable RSpec/FilePath\n let(:cli) { described_class.new('/dev/null') }\n- let(:timeout) { described_class::DEFAULT_SOFT_TIMEOUT_SECONDS }\n+ let(:timeout) { Gitlab::SidekiqCluster::DEFAULT_SOFT_TIMEOUT_SECONDS }\n let(:default_options) do\n { env: 'test', directory: Dir.pwd, max_concurrency: 50, min_concurrency: 0, dryrun: false, timeout: timeout }\n end\n@@ -103,7 +105,7 @@\n \n it 'when not given', 'starts Sidekiq workers with default timeout' do\n expect(Gitlab::SidekiqCluster).to receive(:start)\n- .with([['foo']], default_options.merge(timeout: described_class::DEFAULT_SOFT_TIMEOUT_SECONDS))\n+ .with([['foo']], default_options.merge(timeout: Gitlab::SidekiqCluster::DEFAULT_SOFT_TIMEOUT_SECONDS))\n \n cli.run(%w(foo))\n end\n@@ -271,7 +273,7 @@\n expect(Gitlab::SidekiqCluster).to receive(:signal_processes)\n .with([], \"-KILL\")\n \n- stub_const(\"Gitlab::SidekiqCluster::CLI::CHECK_TERMINATE_INTERVAL_SECONDS\", 0.1)\n+ stub_const(\"Gitlab::SidekiqCluster::CHECK_TERMINATE_INTERVAL_SECONDS\", 0.1)\n allow(cli).to receive(:terminate_timeout_seconds) { 1 }\n \n cli.wait_for_termination\n@@ -301,7 +303,7 @@\n \n cli.run(%w(foo))\n \n- stub_const(\"Gitlab::SidekiqCluster::CLI::CHECK_TERMINATE_INTERVAL_SECONDS\", 0.1)\n+ stub_const(\"Gitlab::SidekiqCluster::CHECK_TERMINATE_INTERVAL_SECONDS\", 0.1)\n allow(cli).to receive(:terminate_timeout_seconds) { 1 }\n \n 
cli.wait_for_termination\n"},{"old_path":"spec/controllers/concerns/renders_commits_spec.rb","new_path":"spec/controllers/concerns/renders_commits_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -64,6 +64,12 @@ def go\n subject.prepare_commits_for_rendering(merge_request.commits.take(1))\n end\n \n+ # Populate Banzai::Filter::References::ReferenceCache\n+ subject.prepare_commits_for_rendering(merge_request.commits)\n+\n+ # Reset lazy_latest_pipeline cache to simulate a new request\n+ BatchLoader::Executor.clear_current\n+\n expect do\n subject.prepare_commits_for_rendering(merge_request.commits)\n merge_request.commits.each(\u0026:latest_pipeline)\n"},{"old_path":"spec/features/graphql_known_operations_spec.rb","new_path":"spec/features/graphql_known_operations_spec.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,29 @@\n+# frozen_string_literal: true\n+\n+require 'spec_helper'\n+\n+# We need to distinguish between known and unknown GraphQL operations. 
This spec\n+# tests that we set up Gitlab::Graphql::KnownOperations.default which requires\n+# integration of FE queries, webpack plugin, and BE.\n+RSpec.describe 'Graphql known operations', :js do\n+ around do |example|\n+ # Let's make sure we aren't receiving or leaving behind any side-effects\n+ # https://gitlab.com/gitlab-org/gitlab/-/jobs/1743294100\n+ ::Gitlab::Graphql::KnownOperations.instance_variable_set(:@default, nil)\n+ ::Gitlab::Webpack::GraphqlKnownOperations.clear_memoization!\n+\n+ example.run\n+\n+ ::Gitlab::Graphql::KnownOperations.instance_variable_set(:@default, nil)\n+ ::Gitlab::Webpack::GraphqlKnownOperations.clear_memoization!\n+ end\n+\n+ it 'collects known Graphql operations from the code', :aggregate_failures do\n+ # Check that we include some arbitrary operation name we expect\n+ known_operations = Gitlab::Graphql::KnownOperations.default.operations.map(\u0026:name)\n+\n+ expect(known_operations).to include(\"searchProjects\")\n+ expect(known_operations.length).to be \u003e 20\n+ expect(known_operations).to all( match(%r{^[a-z]+}i) )\n+ end\n+end\n"},{"old_path":"spec/features/issues/form_spec.rb","new_path":"spec/features/issues/form_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -4,25 +4,29 @@\n \n RSpec.describe 'New/edit issue', :js do\n include ActionView::Helpers::JavaScriptHelper\n- include FormHelper\n \n let_it_be(:project) { create(:project) }\n- let_it_be(:user) { create(:user)}\n- let_it_be(:user2) { create(:user)}\n+ let_it_be(:user) { create(:user) }\n+ let_it_be(:user2) { create(:user) }\n let_it_be(:milestone) { create(:milestone, project: project) }\n let_it_be(:label) { create(:label, project: project) }\n let_it_be(:label2) { create(:label, project: project) }\n let_it_be(:issue) { create(:issue, project: project, assignees: [user], milestone: milestone) }\n \n- before do\n- stub_licensed_features(multiple_issue_assignees: false, issue_weights: 
false)\n+ let(:current_user) { user }\n \n+ before_all do\n project.add_maintainer(user)\n project.add_maintainer(user2)\n- sign_in(user)\n end\n \n- context 'new issue' do\n+ before do\n+ stub_licensed_features(multiple_issue_assignees: false, issue_weights: false)\n+\n+ sign_in(current_user)\n+ end\n+\n+ describe 'new issue' do\n before do\n visit new_project_issue_path(project)\n end\n@@ -235,29 +239,42 @@\n end\n \n describe 'displays issue type options in the dropdown' do\n+ shared_examples 'type option is visible' do |label:, identifier:|\n+ it \"shows #{identifier} option\", :aggregate_failures do\n+ page.within('[data-testid=\"issue-type-select-dropdown\"]') do\n+ expect(page).to have_selector(%([data-testid=\"issue-type-#{identifier}-icon\"]))\n+ expect(page).to have_content(label)\n+ end\n+ end\n+ end\n+\n before do\n page.within('.issue-form') do\n click_button 'Issue'\n end\n end\n \n- it 'correctly displays the Issue type option with an icon', :aggregate_failures do\n- page.within('[data-testid=\"issue-type-select-dropdown\"]') do\n- expect(page).to have_selector('[data-testid=\"issue-type-issue-icon\"]')\n- expect(page).to have_content('Issue')\n- end\n- end\n+ it_behaves_like 'type option is visible', label: 'Issue', identifier: :issue\n+ it_behaves_like 'type option is visible', label: 'Incident', identifier: :incident\n \n- it 'correctly displays the Incident type option with an icon', :aggregate_failures do\n- page.within('[data-testid=\"issue-type-select-dropdown\"]') do\n- expect(page).to have_selector('[data-testid=\"issue-type-incident-icon\"]')\n- expect(page).to have_content('Incident')\n+ context 'when user is guest' do\n+ let_it_be(:guest) { create(:user) }\n+\n+ let(:current_user) { guest }\n+\n+ before_all do\n+ project.add_guest(guest)\n end\n+\n+ it_behaves_like 'type option is visible', label: 'Issue', identifier: :issue\n+ it_behaves_like 'type option is visible', label: 'Incident', identifier: :incident\n end\n end\n \n describe 
'milestone' do\n- let!(:milestone) { create(:milestone, title: '\"\u003e\u0026lt;img src=x onerror=alert(document.domain)\u0026gt;', project: project) }\n+ let!(:milestone) do\n+ create(:milestone, title: '\"\u003e\u0026lt;img src=x onerror=alert(document.domain)\u0026gt;', project: project)\n+ end\n \n it 'escapes milestone' do\n click_button 'Milestone'\n@@ -274,7 +291,7 @@\n end\n end\n \n- context 'edit issue' do\n+ describe 'edit issue' do\n before do\n visit edit_project_issue_path(project, issue)\n end\n@@ -329,7 +346,7 @@\n end\n end\n \n- context 'inline edit' do\n+ describe 'inline edit' do\n before do\n visit project_issue_path(project, issue)\n end\n"},{"old_path":"spec/features/merge_request/user_merges_when_pipeline_succeeds_spec.rb","new_path":"spec/features/merge_request/user_merges_when_pipeline_succeeds_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -36,7 +36,7 @@\n click_button \"Merge when pipeline succeeds\"\n \n expect(page).to have_content \"Set by #{user.name} to be merged automatically when the pipeline succeeds\"\n- expect(page).to have_content \"The source branch will not be deleted\"\n+ expect(page).to have_content \"Does not delete the source branch\"\n expect(page).to have_selector \".js-cancel-auto-merge\"\n visit project_merge_request_path(project, merge_request) # Needed to refresh the page\n expect(page).to have_content /enabled an automatic merge when the pipeline for \\h{8} succeeds/i\n@@ -126,7 +126,7 @@\n it 'allows to delete source branch' do\n click_button \"Delete source branch\"\n \n- expect(page).to have_content \"The source branch will be deleted\"\n+ expect(page).to have_content \"Deletes the source branch\"\n end\n \n context 'when pipeline succeeds' 
do\n"},{"old_path":"spec/features/merge_request/user_sees_merge_widget_spec.rb","new_path":"spec/features/merge_request/user_sees_merge_widget_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -426,7 +426,7 @@\n \n it 'user cannot remove source branch', :sidekiq_might_not_need_inline do\n expect(page).not_to have_field('remove-source-branch-input')\n- expect(page).to have_content('The source branch will be deleted')\n+ expect(page).to have_content('Deletes the source branch')\n end\n end\n \n"},{"old_path":"spec/frontend/admin/analytics/devops_score/components/devops_score_callout_spec.js","new_path":"spec/frontend/admin/analytics/devops_score/components/devops_score_callout_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,7 +1,7 @@\n import { GlBanner } from '@gitlab/ui';\n import { shallowMount } from '@vue/test-utils';\n-import DevopsScoreCallout from '~/analytics/devops_report/components/devops_score_callout.vue';\n-import { INTRO_COOKIE_KEY } from '~/analytics/devops_report/constants';\n+import DevopsScoreCallout from '~/analytics/devops_reports/components/devops_score_callout.vue';\n+import { INTRO_COOKIE_KEY } from '~/analytics/devops_reports/constants';\n import * as utils from '~/lib/utils/common_utils';\n import { devopsReportDocsPath, devopsScoreIntroImagePath } from '../mock_data';\n \n"},{"old_path":"spec/frontend/admin/analytics/devops_score/components/devops_score_spec.js","new_path":"spec/frontend/admin/analytics/devops_score/components/devops_score_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -2,8 +2,8 @@ import { GlTable, GlBadge, GlEmptyState } from '@gitlab/ui';\n import { GlSingleStat } from '@gitlab/ui/dist/charts';\n import { mount } from '@vue/test-utils';\n import { extendedWrapper } from 'helpers/vue_test_utils_helper';\n-import 
DevopsScore from '~/analytics/devops_report/components/devops_score.vue';\n-import DevopsScoreCallout from '~/analytics/devops_report/components/devops_score_callout.vue';\n+import DevopsScore from '~/analytics/devops_reports/components/devops_score.vue';\n+import DevopsScoreCallout from '~/analytics/devops_reports/components/devops_score_callout.vue';\n import { devopsScoreMetricsData, noDataImagePath, devopsScoreTableHeaders } from '../mock_data';\n \n describe('DevopsScore', () =\u003e {\n"},{"old_path":"spec/frontend/analytics/devops_report/components/service_ping_disabled_spec.js","new_path":"spec/frontend/analytics/devops_reports/components/service_ping_disabled_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":"@@ -1,9 +1,9 @@\n import { GlEmptyState, GlSprintf } from '@gitlab/ui';\n import { TEST_HOST } from 'helpers/test_constants';\n import { mountExtended } from 'helpers/vue_test_utils_helper';\n-import ServicePingDisabled from '~/analytics/devops_report/components/service_ping_disabled.vue';\n+import ServicePingDisabled from '~/analytics/devops_reports/components/service_ping_disabled.vue';\n \n-describe('~/analytics/devops_report/components/service_ping_disabled.vue', () =\u003e {\n+describe('~/analytics/devops_reports/components/service_ping_disabled.vue', () =\u003e {\n let wrapper;\n \n afterEach(() =\u003e {\n"},{"old_path":"spec/frontend/diffs/components/diff_discussions_spec.js","new_path":"spec/frontend/diffs/components/diff_discussions_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,6 +1,7 @@\n import { GlIcon } from '@gitlab/ui';\n import { mount, createLocalVue } from '@vue/test-utils';\n import DiffDiscussions from '~/diffs/components/diff_discussions.vue';\n+import { discussionIntersectionObserverHandlerFactory } from '~/diffs/utils/discussions';\n import { createStore } from '~/mr_notes/stores';\n import 
DiscussionNotes from '~/notes/components/discussion_notes.vue';\n import NoteableDiscussion from '~/notes/components/noteable_discussion.vue';\n@@ -19,6 +20,9 @@ describe('DiffDiscussions', () =\u003e {\n store = createStore();\n wrapper = mount(localVue.extend(DiffDiscussions), {\n store,\n+ provide: {\n+ discussionObserverHandler: discussionIntersectionObserverHandlerFactory(),\n+ },\n propsData: {\n discussions: getDiscussionsMockData(),\n ...props,\n"},{"old_path":"spec/frontend/diffs/utils/discussions_spec.js","new_path":"spec/frontend/diffs/utils/discussions_spec.js","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,133 @@\n+import { discussionIntersectionObserverHandlerFactory } from '~/diffs/utils/discussions';\n+\n+describe('Diff Discussions Utils', () =\u003e {\n+ describe('discussionIntersectionObserverHandlerFactory', () =\u003e {\n+ it('creates a handler function', () =\u003e {\n+ expect(discussionIntersectionObserverHandlerFactory()).toBeInstanceOf(Function);\n+ });\n+\n+ describe('intersection observer handler', () =\u003e {\n+ const functions = {\n+ setCurrentDiscussionId: jest.fn(),\n+ getPreviousUnresolvedDiscussionId: jest.fn().mockImplementation((id) =\u003e {\n+ return Number(id) - 1;\n+ }),\n+ };\n+ const defaultProcessableWrapper = {\n+ entry: {\n+ time: 0,\n+ isIntersecting: true,\n+ rootBounds: {\n+ bottom: 0,\n+ },\n+ boundingClientRect: {\n+ top: 0,\n+ },\n+ },\n+ currentDiscussion: {\n+ id: 1,\n+ },\n+ isFirstUnresolved: false,\n+ isDiffsPage: true,\n+ };\n+ let handler;\n+ let getMock;\n+ let setMock;\n+\n+ beforeEach(() =\u003e {\n+ functions.setCurrentDiscussionId.mockClear();\n+ functions.getPreviousUnresolvedDiscussionId.mockClear();\n+\n+ defaultProcessableWrapper.functions = functions;\n+\n+ setMock = functions.setCurrentDiscussionId.mock;\n+ getMock = functions.getPreviousUnresolvedDiscussionId.mock;\n+ handler = discussionIntersectionObserverHandlerFactory();\n+ });\n+\n+ 
it('debounces multiple simultaneous requests into one queue', () =\u003e {\n+ handler(defaultProcessableWrapper);\n+ handler(defaultProcessableWrapper);\n+ handler(defaultProcessableWrapper);\n+ handler(defaultProcessableWrapper);\n+\n+ expect(setTimeout).toHaveBeenCalledTimes(4);\n+ expect(clearTimeout).toHaveBeenCalledTimes(3);\n+\n+ // By only advancing to one timer, we ensure it's all being batched into one queue\n+ jest.advanceTimersToNextTimer();\n+\n+ expect(functions.setCurrentDiscussionId).toHaveBeenCalledTimes(4);\n+ });\n+\n+ it('properly processes, sorts and executes the correct actions for a set of observed intersections', () =\u003e {\n+ handler(defaultProcessableWrapper);\n+ handler({\n+ // This observation is here to be filtered out because it's a scrollDown\n+ ...defaultProcessableWrapper,\n+ entry: {\n+ ...defaultProcessableWrapper.entry,\n+ isIntersecting: false,\n+ boundingClientRect: { top: 10 },\n+ rootBounds: { bottom: 100 },\n+ },\n+ });\n+ handler({\n+ ...defaultProcessableWrapper,\n+ entry: {\n+ ...defaultProcessableWrapper.entry,\n+ time: 101,\n+ isIntersecting: false,\n+ rootBounds: { bottom: -100 },\n+ },\n+ currentDiscussion: { id: 20 },\n+ });\n+ handler({\n+ ...defaultProcessableWrapper,\n+ entry: {\n+ ...defaultProcessableWrapper.entry,\n+ time: 100,\n+ isIntersecting: false,\n+ boundingClientRect: { top: 100 },\n+ },\n+ currentDiscussion: { id: 30 },\n+ isDiffsPage: false,\n+ });\n+ handler({\n+ ...defaultProcessableWrapper,\n+ isFirstUnresolved: true,\n+ entry: {\n+ ...defaultProcessableWrapper.entry,\n+ time: 100,\n+ isIntersecting: false,\n+ boundingClientRect: { top: 200 },\n+ },\n+ });\n+\n+ jest.advanceTimersToNextTimer();\n+\n+ expect(setMock.calls.length).toBe(4);\n+ expect(setMock.calls[0]).toEqual([1]);\n+ expect(setMock.calls[1]).toEqual([29]);\n+ expect(setMock.calls[2]).toEqual([null]);\n+ expect(setMock.calls[3]).toEqual([19]);\n+\n+ expect(getMock.calls.length).toBe(2);\n+ expect(getMock.calls[0]).toEqual([30, 
false]);\n+ expect(getMock.calls[1]).toEqual([20, true]);\n+\n+ [\n+ setMock.invocationCallOrder[0],\n+ getMock.invocationCallOrder[0],\n+ setMock.invocationCallOrder[1],\n+ setMock.invocationCallOrder[2],\n+ getMock.invocationCallOrder[1],\n+ setMock.invocationCallOrder[3],\n+ ].forEach((order, idx, list) =\u003e {\n+ // Compare each invocation sequence to the one before it (except the first one)\n+ expect(list[idx - 1] || -1).toBeLessThan(order);\n+ });\n+ });\n+ });\n+ });\n+});\n"},{"old_path":"spec/frontend/notes/components/discussion_notes_spec.js","new_path":"spec/frontend/notes/components/discussion_notes_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,6 +1,7 @@\n import { getByRole } from '@testing-library/dom';\n import { shallowMount, mount } from '@vue/test-utils';\n import '~/behaviors/markdown/render_gfm';\n+import { discussionIntersectionObserverHandlerFactory } from '~/diffs/utils/discussions';\n import DiscussionNotes from '~/notes/components/discussion_notes.vue';\n import NoteableNote from '~/notes/components/noteable_note.vue';\n import { SYSTEM_NOTE } from '~/notes/constants';\n@@ -26,6 +27,9 @@ describe('DiscussionNotes', () =\u003e {\n const createComponent = (props, mountingMethod = shallowMount) =\u003e {\n wrapper = mountingMethod(DiscussionNotes, {\n store,\n+ provide: {\n+ discussionObserverHandler: discussionIntersectionObserverHandlerFactory(),\n+ },\n propsData: {\n discussion: discussionMock,\n isExpanded: false,\n"},{"old_path":"spec/frontend/notes/components/noteable_discussion_spec.js","new_path":"spec/frontend/notes/components/noteable_discussion_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -3,6 +3,7 @@ import { nextTick } from 'vue';\n import discussionWithTwoUnresolvedNotes from 'test_fixtures/merge_requests/resolved_diff_discussion.json';\n import { trimText } from 'helpers/text_helper';\n 
import mockDiffFile from 'jest/diffs/mock_data/diff_file';\n+import { discussionIntersectionObserverHandlerFactory } from '~/diffs/utils/discussions';\n import DiscussionNotes from '~/notes/components/discussion_notes.vue';\n import ReplyPlaceholder from '~/notes/components/discussion_reply_placeholder.vue';\n import ResolveWithIssueButton from '~/notes/components/discussion_resolve_with_issue_button.vue';\n@@ -31,6 +32,9 @@ describe('noteable_discussion component', () =\u003e {\n \n wrapper = mount(NoteableDiscussion, {\n store,\n+ provide: {\n+ discussionObserverHandler: discussionIntersectionObserverHandlerFactory(),\n+ },\n propsData: { discussion: discussionMock },\n });\n });\n@@ -167,6 +171,9 @@ describe('noteable_discussion component', () =\u003e {\n \n wrapper = mount(NoteableDiscussion, {\n store,\n+ provide: {\n+ discussionObserverHandler: discussionIntersectionObserverHandlerFactory(),\n+ },\n propsData: { discussion: discussionMock },\n });\n });\n@@ -185,6 +192,9 @@ describe('noteable_discussion component', () =\u003e {\n \n wrapper = mount(NoteableDiscussion, {\n store,\n+ provide: {\n+ discussionObserverHandler: discussionIntersectionObserverHandlerFactory(),\n+ },\n propsData: { discussion: discussionMock },\n });\n });\n"},{"old_path":"spec/frontend/notes/components/notes_app_spec.js","new_path":"spec/frontend/notes/components/notes_app_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -9,6 +9,7 @@ import DraftNote from '~/batch_comments/components/draft_note.vue';\n import batchComments from '~/batch_comments/stores/modules/batch_comments';\n import axios from '~/lib/utils/axios_utils';\n import * as urlUtility from '~/lib/utils/url_utility';\n+import { discussionIntersectionObserverHandlerFactory } from '~/diffs/utils/discussions';\n import CommentForm from '~/notes/components/comment_form.vue';\n import NotesApp from '~/notes/components/notes_app.vue';\n import * as constants from 
'~/notes/constants';\n@@ -78,6 +79,9 @@ describe('note_app', () =\u003e {\n \u003c/div\u003e`,\n },\n {\n+ provide: {\n+ discussionObserverHandler: discussionIntersectionObserverHandlerFactory(),\n+ },\n propsData,\n store,\n },\n"},{"old_path":"spec/frontend/runner/components/cells/runner_actions_cell_spec.js","new_path":"spec/frontend/runner/components/cells/runner_actions_cell_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -8,12 +8,11 @@ import RunnerActionCell from '~/runner/components/cells/runner_actions_cell.vue'\n import getGroupRunnersQuery from '~/runner/graphql/get_group_runners.query.graphql';\n import getRunnersQuery from '~/runner/graphql/get_runners.query.graphql';\n import runnerDeleteMutation from '~/runner/graphql/runner_delete.mutation.graphql';\n-import runnerUpdateMutation from '~/runner/graphql/runner_update.mutation.graphql';\n+import runnerActionsUpdateMutation from '~/runner/graphql/runner_actions_update.mutation.graphql';\n import { captureException } from '~/runner/sentry_utils';\n-import { runnersData, runnerData } from '../../mock_data';\n+import { runnersData } from '../../mock_data';\n \n const mockRunner = runnersData.data.runners.nodes[0];\n-const mockRunnerDetails = runnerData.data.runner;\n \n const getRunnersQueryName = getRunnersQuery.definitions[0].name.value;\n const getGroupRunnersQueryName = getGroupRunnersQuery.definitions[0].name.value;\n@@ -27,7 +26,7 @@ jest.mock('~/runner/sentry_utils');\n describe('RunnerTypeCell', () =\u003e {\n let wrapper;\n const runnerDeleteMutationHandler = jest.fn();\n- const runnerUpdateMutationHandler = jest.fn();\n+ const runnerActionsUpdateMutationHandler = jest.fn();\n \n const findEditBtn = () =\u003e wrapper.findByTestId('edit-runner');\n const findToggleActiveBtn = () =\u003e wrapper.findByTestId('toggle-active-runner');\n@@ -46,7 +45,7 @@ describe('RunnerTypeCell', () =\u003e {\n localVue,\n apolloProvider: 
createMockApollo([\n [runnerDeleteMutation, runnerDeleteMutationHandler],\n- [runnerUpdateMutation, runnerUpdateMutationHandler],\n+ [runnerActionsUpdateMutation, runnerActionsUpdateMutationHandler],\n ]),\n ...options,\n }),\n@@ -62,10 +61,10 @@ describe('RunnerTypeCell', () =\u003e {\n },\n });\n \n- runnerUpdateMutationHandler.mockResolvedValue({\n+ runnerActionsUpdateMutationHandler.mockResolvedValue({\n data: {\n runnerUpdate: {\n- runner: mockRunnerDetails,\n+ runner: mockRunner,\n errors: [],\n },\n },\n@@ -74,7 +73,7 @@ describe('RunnerTypeCell', () =\u003e {\n \n afterEach(() =\u003e {\n runnerDeleteMutationHandler.mockReset();\n- runnerUpdateMutationHandler.mockReset();\n+ runnerActionsUpdateMutationHandler.mockReset();\n \n wrapper.destroy();\n });\n@@ -116,12 +115,12 @@ describe('RunnerTypeCell', () =\u003e {\n \n describe(`When clicking on the ${icon} button`, () =\u003e {\n it(`The apollo mutation to set active to ${newActiveValue} is called`, async () =\u003e {\n- expect(runnerUpdateMutationHandler).toHaveBeenCalledTimes(0);\n+ expect(runnerActionsUpdateMutationHandler).toHaveBeenCalledTimes(0);\n \n await findToggleActiveBtn().vm.$emit('click');\n \n- expect(runnerUpdateMutationHandler).toHaveBeenCalledTimes(1);\n- expect(runnerUpdateMutationHandler).toHaveBeenCalledWith({\n+ expect(runnerActionsUpdateMutationHandler).toHaveBeenCalledTimes(1);\n+ expect(runnerActionsUpdateMutationHandler).toHaveBeenCalledWith({\n input: {\n id: mockRunner.id,\n active: newActiveValue,\n@@ -145,7 +144,7 @@ describe('RunnerTypeCell', () =\u003e {\n const mockErrorMsg = 'Update error!';\n \n beforeEach(async () =\u003e {\n- runnerUpdateMutationHandler.mockRejectedValueOnce(new Error(mockErrorMsg));\n+ runnerActionsUpdateMutationHandler.mockRejectedValueOnce(new Error(mockErrorMsg));\n \n await findToggleActiveBtn().vm.$emit('click');\n });\n@@ -167,10 +166,10 @@ describe('RunnerTypeCell', () =\u003e {\n const mockErrorMsg2 = 'User not allowed!';\n \n beforeEach(async 
() =\u003e {\n- runnerUpdateMutationHandler.mockResolvedValue({\n+ runnerActionsUpdateMutationHandler.mockResolvedValue({\n data: {\n runnerUpdate: {\n- runner: runnerData.data.runner,\n+ runner: mockRunner,\n errors: [mockErrorMsg, mockErrorMsg2],\n },\n },\n"},{"old_path":"spec/frontend/runner/components/cells/runner_type_cell_spec.js","new_path":"spec/frontend/runner/components/cells/runner_status_cell_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":"@@ -1,20 +1,20 @@\n import { GlBadge } from '@gitlab/ui';\n import { mount } from '@vue/test-utils';\n-import RunnerTypeCell from '~/runner/components/cells/runner_type_cell.vue';\n-import { INSTANCE_TYPE } from '~/runner/constants';\n+import RunnerStatusCell from '~/runner/components/cells/runner_status_cell.vue';\n+import { INSTANCE_TYPE, STATUS_ONLINE, STATUS_OFFLINE } from '~/runner/constants';\n \n describe('RunnerTypeCell', () =\u003e {\n let wrapper;\n \n- const findBadges = () =\u003e wrapper.findAllComponents(GlBadge);\n+ const findBadgeAt = (i) =\u003e wrapper.findAllComponents(GlBadge).at(i);\n \n const createComponent = ({ runner = {} } = {}) =\u003e {\n- wrapper = mount(RunnerTypeCell, {\n+ wrapper = mount(RunnerStatusCell, {\n propsData: {\n runner: {\n runnerType: INSTANCE_TYPE,\n active: true,\n- locked: false,\n+ status: STATUS_ONLINE,\n ...runner,\n },\n },\n@@ -25,24 +25,45 @@ describe('RunnerTypeCell', () =\u003e {\n wrapper.destroy();\n });\n \n- it('Displays the runner type', () =\u003e {\n+ it('Displays online status', () =\u003e {\n createComponent();\n \n- expect(findBadges()).toHaveLength(1);\n- expect(findBadges().at(0).text()).toBe('shared');\n+ expect(wrapper.text()).toMatchInterpolatedText('online');\n+ expect(findBadgeAt(0).text()).toBe('online');\n });\n \n- it('Displays locked and paused states', () =\u003e {\n+ it('Displays offline status', () =\u003e {\n+ createComponent({\n+ runner: {\n+ status: STATUS_OFFLINE,\n+ },\n+ 
});\n+\n+ expect(wrapper.text()).toMatchInterpolatedText('offline');\n+ expect(findBadgeAt(0).text()).toBe('offline');\n+ });\n+\n+ it('Displays paused status', () =\u003e {\n createComponent({\n runner: {\n active: false,\n- locked: true,\n+ status: STATUS_ONLINE,\n+ },\n+ });\n+\n+ expect(wrapper.text()).toMatchInterpolatedText('online paused');\n+\n+ expect(findBadgeAt(0).text()).toBe('online');\n+ expect(findBadgeAt(1).text()).toBe('paused');\n+ });\n+\n+ it('Is empty when data is missing', () =\u003e {\n+ createComponent({\n+ runner: {\n+ status: null,\n },\n });\n \n- expect(findBadges()).toHaveLength(3);\n- expect(findBadges().at(0).text()).toBe('shared');\n- expect(findBadges().at(1).text()).toBe('locked');\n- expect(findBadges().at(2).text()).toBe('paused');\n+ expect(wrapper.text()).toBe('');\n });\n });\n"},{"old_path":"spec/frontend/runner/components/cells/runner_summary_cell_spec.js","new_path":"spec/frontend/runner/components/cells/runner_summary_cell_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -1,5 +1,6 @@\n-import { mount } from '@vue/test-utils';\n+import { mountExtended } from 'helpers/vue_test_utils_helper';\n import RunnerSummaryCell from '~/runner/components/cells/runner_summary_cell.vue';\n+import { INSTANCE_TYPE, PROJECT_TYPE } from '~/runner/constants';\n \n const mockId = '1';\n const mockShortSha = '2P6oDVDm';\n@@ -8,13 +9,17 @@ const mockDescription = 'runner-1';\n describe('RunnerTypeCell', () =\u003e {\n let wrapper;\n \n- const createComponent = (options) =\u003e {\n- wrapper = mount(RunnerSummaryCell, {\n+ const findLockIcon = () =\u003e wrapper.findByTestId('lock-icon');\n+\n+ const createComponent = (runner, options) =\u003e {\n+ wrapper = mountExtended(RunnerSummaryCell, {\n propsData: {\n runner: {\n id: `gid://gitlab/Ci::Runner/${mockId}`,\n shortSha: mockShortSha,\n description: mockDescription,\n+ runnerType: INSTANCE_TYPE,\n+ ...runner,\n },\n },\n 
...options,\n@@ -33,6 +38,23 @@ describe('RunnerTypeCell', () =\u003e {\n expect(wrapper.text()).toContain(`#${mockId} (${mockShortSha})`);\n });\n \n+ it('Displays the runner type', () =\u003e {\n+ expect(wrapper.text()).toContain('shared');\n+ });\n+\n+ it('Does not display the locked icon', () =\u003e {\n+ expect(findLockIcon().exists()).toBe(false);\n+ });\n+\n+ it('Displays the locked icon for locked runners', () =\u003e {\n+ createComponent({\n+ runnerType: PROJECT_TYPE,\n+ locked: true,\n+ });\n+\n+ expect(findLockIcon().exists()).toBe(true);\n+ });\n+\n it('Displays the runner description', () =\u003e {\n expect(wrapper.text()).toContain(mockDescription);\n });\n@@ -40,11 +62,14 @@ describe('RunnerTypeCell', () =\u003e {\n it('Displays a custom slot', () =\u003e {\n const slotContent = 'My custom runner summary';\n \n- createComponent({\n- slots: {\n- 'runner-name': slotContent,\n+ createComponent(\n+ {},\n+ {\n+ slots: {\n+ 'runner-name': slotContent,\n+ },\n },\n- });\n+ );\n \n expect(wrapper.text()).toContain(slotContent);\n });\n"},{"old_path":"spec/frontend/runner/components/runner_contacted_state_badge_spec.js","new_path":"spec/frontend/runner/components/runner_contacted_state_badge_spec.js","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,86 @@\n+import { GlBadge } from '@gitlab/ui';\n+import { shallowMount } from '@vue/test-utils';\n+import RunnerContactedStateBadge from '~/runner/components/runner_contacted_state_badge.vue';\n+import { createMockDirective, getBinding } from 'helpers/vue_mock_directive';\n+import { STATUS_ONLINE, STATUS_OFFLINE, STATUS_NOT_CONNECTED } from '~/runner/constants';\n+\n+describe('RunnerTypeBadge', () =\u003e {\n+ let wrapper;\n+\n+ const findBadge = () =\u003e wrapper.findComponent(GlBadge);\n+ const getTooltip = () =\u003e getBinding(findBadge().element, 'gl-tooltip');\n+\n+ const createComponent = ({ runner = {} } = {}) =\u003e {\n+ wrapper = 
shallowMount(RunnerContactedStateBadge, {\n+ propsData: {\n+ runner: {\n+ contactedAt: '2021-01-01T00:00:00Z',\n+ status: STATUS_ONLINE,\n+ ...runner,\n+ },\n+ },\n+ directives: {\n+ GlTooltip: createMockDirective(),\n+ },\n+ });\n+ };\n+\n+ beforeEach(() =\u003e {\n+ jest.useFakeTimers('modern');\n+ });\n+\n+ afterEach(() =\u003e {\n+ jest.useFakeTimers('legacy');\n+\n+ wrapper.destroy();\n+ });\n+\n+ it('renders online state', () =\u003e {\n+ jest.setSystemTime(new Date('2021-01-01T00:01:00Z'));\n+\n+ createComponent();\n+\n+ expect(wrapper.text()).toBe('online');\n+ expect(findBadge().props('variant')).toBe('success');\n+ expect(getTooltip().value).toBe('Runner is online; last contact was 1 minute ago');\n+ });\n+\n+ it('renders offline state', () =\u003e {\n+ jest.setSystemTime(new Date('2021-01-02T00:00:00Z'));\n+\n+ createComponent({\n+ runner: {\n+ status: STATUS_OFFLINE,\n+ },\n+ });\n+\n+ expect(wrapper.text()).toBe('offline');\n+ expect(findBadge().props('variant')).toBe('muted');\n+ expect(getTooltip().value).toBe(\n+ 'No recent contact from this runner; last contact was 1 day ago',\n+ );\n+ });\n+\n+ it('renders not connected state', () =\u003e {\n+ createComponent({\n+ runner: {\n+ contactedAt: null,\n+ status: STATUS_NOT_CONNECTED,\n+ },\n+ });\n+\n+ expect(wrapper.text()).toBe('not connected');\n+ expect(findBadge().props('variant')).toBe('muted');\n+ expect(getTooltip().value).toMatch('This runner has never connected');\n+ });\n+\n+ it('does not fail when data is missing', () =\u003e {\n+ createComponent({\n+ runner: {\n+ status: null,\n+ },\n+ });\n+\n+ expect(wrapper.text()).toBe('');\n+ });\n+});\n"},{"old_path":"spec/frontend/runner/components/runner_list_spec.js","new_path":"spec/frontend/runner/components/runner_list_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -42,8 +42,8 @@ describe('RunnerList', () =\u003e {\n const headerLabels = findHeaders().wrappers.map((w) =\u003e 
w.text());\n \n expect(headerLabels).toEqual([\n- 'Type/State',\n- 'Runner',\n+ 'Status',\n+ 'Runner ID',\n 'Version',\n 'IP Address',\n 'Tags',\n@@ -62,7 +62,7 @@ describe('RunnerList', () =\u003e {\n const { id, description, version, ipAddress, shortSha } = mockRunners[0];\n \n // Badges\n- expect(findCell({ fieldKey: 'type' }).text()).toMatchInterpolatedText('specific paused');\n+ expect(findCell({ fieldKey: 'status' }).text()).toMatchInterpolatedText('not connected paused');\n \n // Runner summary\n expect(findCell({ fieldKey: 'summary' }).text()).toContain(\n"},{"old_path":"spec/frontend/runner/components/runner_state_paused_badge_spec.js","new_path":"spec/frontend/runner/components/runner_paused_badge_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":"@@ -1,6 +1,6 @@\n import { GlBadge } from '@gitlab/ui';\n import { shallowMount } from '@vue/test-utils';\n-import RunnerStatePausedBadge from '~/runner/components/runner_state_paused_badge.vue';\n+import RunnerStatePausedBadge from '~/runner/components/runner_paused_badge.vue';\n import { createMockDirective, getBinding } from 'helpers/vue_mock_directive';\n \n describe('RunnerTypeBadge', () =\u003e {\n"},{"old_path":"spec/frontend/runner/components/runner_state_locked_badge_spec.js","new_path":"spec/frontend/runner/components/runner_state_locked_badge_spec.js","a_mode":"100644","b_mode":"0","new_file":false,"renamed_file":false,"deleted_file":true,"diff":"@@ -1,45 +0,0 @@\n-import { GlBadge } from '@gitlab/ui';\n-import { shallowMount } from '@vue/test-utils';\n-import RunnerStateLockedBadge from '~/runner/components/runner_state_locked_badge.vue';\n-import { createMockDirective, getBinding } from 'helpers/vue_mock_directive';\n-\n-describe('RunnerTypeBadge', () =\u003e {\n- let wrapper;\n-\n- const findBadge = () =\u003e wrapper.findComponent(GlBadge);\n- const getTooltip = () =\u003e getBinding(findBadge().element, 'gl-tooltip');\n-\n- const 
createComponent = ({ props = {} } = {}) =\u003e {\n- wrapper = shallowMount(RunnerStateLockedBadge, {\n- propsData: {\n- ...props,\n- },\n- directives: {\n- GlTooltip: createMockDirective(),\n- },\n- });\n- };\n-\n- beforeEach(() =\u003e {\n- createComponent();\n- });\n-\n- afterEach(() =\u003e {\n- wrapper.destroy();\n- });\n-\n- it('renders locked state', () =\u003e {\n- expect(wrapper.text()).toBe('locked');\n- expect(findBadge().props('variant')).toBe('warning');\n- });\n-\n- it('renders tooltip', () =\u003e {\n- expect(getTooltip().value).toBeDefined();\n- });\n-\n- it('passes arbitrary attributes to the badge', () =\u003e {\n- createComponent({ props: { size: 'sm' } });\n-\n- expect(findBadge().props('size')).toBe('sm');\n- });\n-});\n"},{"old_path":"spec/frontend/runner/components/runner_type_alert_spec.js","new_path":"spec/frontend/runner/components/runner_type_alert_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -23,11 +23,11 @@ describe('RunnerTypeAlert', () =\u003e {\n });\n \n describe.each`\n- type | exampleText | anchor | variant\n- ${INSTANCE_TYPE} | ${'This runner is available to all groups and projects'} | ${'#shared-runners'} | ${'success'}\n- ${GROUP_TYPE} | ${'This runner is available to all projects and subgroups in a group'} | ${'#group-runners'} | ${'success'}\n- ${PROJECT_TYPE} | ${'This runner is associated with one or more projects'} | ${'#specific-runners'} | ${'info'}\n- `('When it is an $type level runner', ({ type, exampleText, anchor, variant }) =\u003e {\n+ type | exampleText | anchor\n+ ${INSTANCE_TYPE} | ${'This runner is available to all groups and projects'} | ${'#shared-runners'}\n+ ${GROUP_TYPE} | ${'This runner is available to all projects and subgroups in a group'} | ${'#group-runners'}\n+ ${PROJECT_TYPE} | ${'This runner is associated with one or more projects'} | ${'#specific-runners'}\n+ `('When it is an $type level runner', ({ type, exampleText, anchor }) 
=\u003e {\n beforeEach(() =\u003e {\n createComponent({ props: { type } });\n });\n@@ -36,8 +36,8 @@ describe('RunnerTypeAlert', () =\u003e {\n expect(wrapper.text()).toMatch(exampleText);\n });\n \n- it(`Shows a ${variant} variant`, () =\u003e {\n- expect(findAlert().props('variant')).toBe(variant);\n+ it(`Shows an \"info\" variant`, () =\u003e {\n+ expect(findAlert().props('variant')).toBe('info');\n });\n \n it(`Links to anchor \"${anchor}\"`, () =\u003e {\n"},{"old_path":"spec/frontend/runner/components/runner_type_badge_spec.js","new_path":"spec/frontend/runner/components/runner_type_badge_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -26,18 +26,18 @@ describe('RunnerTypeBadge', () =\u003e {\n });\n \n describe.each`\n- type | text | variant\n- ${INSTANCE_TYPE} | ${'shared'} | ${'success'}\n- ${GROUP_TYPE} | ${'group'} | ${'success'}\n- ${PROJECT_TYPE} | ${'specific'} | ${'info'}\n- `('displays $type runner', ({ type, text, variant }) =\u003e {\n+ type | text\n+ ${INSTANCE_TYPE} | ${'shared'}\n+ ${GROUP_TYPE} | ${'group'}\n+ ${PROJECT_TYPE} | ${'specific'}\n+ `('displays $type runner', ({ type, text }) =\u003e {\n beforeEach(() =\u003e {\n createComponent({ props: { type } });\n });\n \n- it(`as \"${text}\" with a ${variant} variant`, () =\u003e {\n+ it(`as \"${text}\" with an \"info\" variant`, () =\u003e {\n expect(findBadge().text()).toBe(text);\n- expect(findBadge().props('variant')).toBe(variant);\n+ expect(findBadge().props('variant')).toBe('info');\n });\n \n it('with a tooltip', () =\u003e {\n"},{"old_path":"spec/frontend/vue_mr_widget/components/states/__snapshots__/mr_widget_auto_merge_enabled_spec.js.snap","new_path":"spec/frontend/vue_mr_widget/components/states/__snapshots__/mr_widget_auto_merge_enabled_spec.js.snap","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -50,7 +50,7 @@ exports[`MRWidgetAutoMergeEnabled when 
graphql is disabled template should have\n \u003cspan\n class=\"gl-mr-3\"\n \u003e\n- The source branch will not be deleted\n+ Does not delete the source branch\n \u003c/span\u003e\n \n \u003cgl-button-stub\n@@ -122,7 +122,7 @@ exports[`MRWidgetAutoMergeEnabled when graphql is enabled template should have c\n \u003cspan\n class=\"gl-mr-3\"\n \u003e\n- The source branch will not be deleted\n+ Does not delete the source branch\n \u003c/span\u003e\n \n \u003cgl-button-stub\n"},{"old_path":"spec/frontend/vue_mr_widget/components/states/mr_widget_auto_merge_enabled_spec.js","new_path":"spec/frontend/vue_mr_widget/components/states/mr_widget_auto_merge_enabled_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -270,8 +270,8 @@ describe('MRWidgetAutoMergeEnabled', () =\u003e {\n \n const normalizedText = wrapper.text().replace(/\\s+/g, ' ');\n \n- expect(normalizedText).toContain('The source branch will be deleted');\n- expect(normalizedText).not.toContain('The source branch will not be deleted');\n+ expect(normalizedText).toContain('Deletes the source branch');\n+ expect(normalizedText).not.toContain('Does not delete the source branch');\n });\n \n it('should not show delete source branch button when user not able to delete source branch', () =\u003e {\n"},{"old_path":"spec/frontend/vue_mr_widget/components/states/mr_widget_merging_spec.js","new_path":"spec/frontend/vue_mr_widget/components/states/mr_widget_merging_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -42,7 +42,7 @@ describe('MRWidgetMerging', () =\u003e {\n .trim()\n .replace(/\\s\\s+/g, ' ')\n .replace(/[\\r\\n]+/g, ' '),\n- ).toEqual('The changes will be merged into branch');\n+ ).toEqual('Merges changes into branch');\n \n expect(wrapper.find('a').attributes('href')).toBe('/branch-path');\n 
});\n"},{"old_path":"spec/frontend/vue_mr_widget/mr_widget_options_spec.js","new_path":"spec/frontend/vue_mr_widget/mr_widget_options_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -543,7 +543,7 @@ describe('MrWidgetOptions', () =\u003e {\n nextTick(() =\u003e {\n const tooltip = wrapper.find('[data-testid=\"question-o-icon\"]');\n \n- expect(wrapper.text()).toContain('The source branch will be deleted');\n+ expect(wrapper.text()).toContain('Deletes the source branch');\n expect(tooltip.attributes('title')).toBe(\n 'A user with write access to the source branch selected this option',\n );\n@@ -559,7 +559,7 @@ describe('MrWidgetOptions', () =\u003e {\n \n nextTick(() =\u003e {\n expect(wrapper.text()).toContain('The source branch has been deleted');\n- expect(wrapper.text()).not.toContain('The source branch will be deleted');\n+ expect(wrapper.text()).not.toContain('Deletes the source branch');\n \n done();\n });\n"},{"old_path":"spec/lib/gitlab/ci/artifact_file_reader_spec.rb","new_path":"spec/lib/gitlab/ci/artifact_file_reader_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -18,17 +18,6 @@\n expect(YAML.safe_load(subject).keys).to contain_exactly('rspec', 'time', 'custom')\n end\n \n- context 'when FF ci_new_artifact_file_reader is disabled' do\n- before do\n- stub_feature_flags(ci_new_artifact_file_reader: false)\n- end\n-\n- it 'returns the content at the path' do\n- is_expected.to be_present\n- expect(YAML.safe_load(subject).keys).to contain_exactly('rspec', 'time', 'custom')\n- end\n- end\n-\n context 'when path does not exist' do\n let(:path) { 'file/does/not/exist.txt' }\n let(:expected_error) do\n"},{"old_path":"spec/lib/gitlab/database/gitlab_schema_spec.rb","new_path":"spec/lib/gitlab/database/gitlab_schema_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ 
-35,4 +35,24 @@\n end\n end\n end\n+\n+ describe '.table_schema' do\n+ using RSpec::Parameterized::TableSyntax\n+\n+ where(:name, :classification) do\n+ 'ci_builds' | :gitlab_ci\n+ 'my_schema.ci_builds' | :gitlab_ci\n+ 'information_schema.columns' | :gitlab_shared\n+ 'audit_events_part_5fc467ac26' | :gitlab_main\n+ '_test_my_table' | :gitlab_shared\n+ 'pg_attribute' | :gitlab_shared\n+ 'my_other_table' | :undefined_my_other_table\n+ end\n+\n+ with_them do\n+ subject { described_class.table_schema(name) }\n+\n+ it { is_expected.to eq(classification) }\n+ end\n+ end\n end\n"},{"old_path":"spec/lib/gitlab/database/query_analyzer_spec.rb","new_path":"spec/lib/gitlab/database/query_analyzer_spec.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,72 @@\n+# frozen_string_literal: true\n+\n+require 'spec_helper'\n+\n+RSpec.describe Gitlab::Database::QueryAnalyzer do\n+ let(:analyzer) { double(:query_analyzer) }\n+\n+ before do\n+ stub_const('Gitlab::Database::QueryAnalyzer::ANALYZERS', [analyzer])\n+ end\n+\n+ context 'the hook is enabled by default in specs' do\n+ it 'does process queries and gets normalized SQL' do\n+ expect(analyzer).to receive(:enabled?).and_return(true)\n+ expect(analyzer).to receive(:analyze) do |parsed|\n+ expect(parsed.sql).to include(\"SELECT $1 FROM projects\")\n+ expect(parsed.pg.tables).to eq(%w[projects])\n+ end\n+\n+ Project.connection.execute(\"SELECT 1 FROM projects\")\n+ end\n+ end\n+\n+ describe '#process_sql' do\n+ it 'does not analyze query if not enabled' do\n+ expect(analyzer).to receive(:enabled?).and_return(false)\n+ expect(analyzer).not_to receive(:analyze)\n+\n+ process_sql(\"SELECT 1 FROM projects\")\n+ end\n+\n+ it 'does analyze query if enabled' do\n+ expect(analyzer).to receive(:enabled?).and_return(true)\n+ expect(analyzer).to receive(:analyze) do |parsed|\n+ expect(parsed.sql).to eq(\"SELECT $1 FROM projects\")\n+ expect(parsed.pg.tables).to eq(%w[projects])\n+ 
end\n+\n+ process_sql(\"SELECT 1 FROM projects\")\n+ end\n+\n+ it 'does track exception if query cannot be parsed' do\n+ expect(analyzer).to receive(:enabled?).and_return(true)\n+ expect(analyzer).not_to receive(:analyze)\n+ expect(Gitlab::ErrorTracking).to receive(:track_exception)\n+\n+ expect { process_sql(\"invalid query\") }.not_to raise_error\n+ end\n+\n+ it 'does track exception if analyzer raises exception on enabled?' do\n+ expect(analyzer).to receive(:enabled?).and_raise('exception')\n+ expect(analyzer).not_to receive(:analyze)\n+ expect(Gitlab::ErrorTracking).to receive(:track_and_raise_for_dev_exception)\n+\n+ expect { process_sql(\"SELECT 1 FROM projects\") }.not_to raise_error\n+ end\n+\n+ it 'does track exception if analyzer raises exception on analyze' do\n+ expect(analyzer).to receive(:enabled?).and_return(true)\n+ expect(analyzer).to receive(:analyze).and_raise('exception')\n+ expect(Gitlab::ErrorTracking).to receive(:track_and_raise_for_dev_exception)\n+\n+ expect { process_sql(\"SELECT 1 FROM projects\") }.not_to raise_error\n+ end\n+\n+ def process_sql(sql)\n+ ApplicationRecord.connection.load_balancer.read_write do |connection|\n+ described_class.new.send(:process_sql, sql, connection)\n+ end\n+ end\n+ end\n+end\n"},{"old_path":"spec/lib/gitlab/graphql/known_operations_spec.rb","new_path":"spec/lib/gitlab/graphql/known_operations_spec.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,72 @@\n+# frozen_string_literal: true\n+\n+require 'fast_spec_helper'\n+require 'rspec-parameterized'\n+require \"support/graphql/fake_query_type\"\n+\n+RSpec.describe Gitlab::Graphql::KnownOperations do\n+ using RSpec::Parameterized::TableSyntax\n+\n+ # Include duplicated operation names to test that we are unique-ifying them\n+ let(:fake_operations) { %w(foo foo bar bar) }\n+ let(:fake_schema) do\n+ Class.new(GraphQL::Schema) do\n+ query Graphql::FakeQueryType\n+ end\n+ end\n+\n+ subject { 
described_class.new(fake_operations) }\n+\n+ describe \"#from_query\" do\n+ where(:query_string, :expected) do\n+ \"query { helloWorld }\" | described_class::ANONYMOUS\n+ \"query fuzzyyy { helloWorld }\" | described_class::UNKNOWN\n+ \"query foo { helloWorld }\" | described_class::Operation.new(\"foo\")\n+ end\n+\n+ with_them do\n+ it \"returns known operation name from GraphQL Query\" do\n+ query = ::GraphQL::Query.new(fake_schema, query_string)\n+\n+ expect(subject.from_query(query)).to eq(expected)\n+ end\n+ end\n+ end\n+\n+ describe \"#operations\" do\n+ it \"returns array of known operations\" do\n+ expect(subject.operations.map(\u0026:name)).to match_array(%w(anonymous unknown foo bar))\n+ end\n+ end\n+\n+ describe \"Operation#to_caller_id\" do\n+ where(:query_string, :expected) do\n+ \"query { helloWorld }\" | \"graphql:#{described_class::ANONYMOUS.name}\"\n+ \"query foo { helloWorld }\" | \"graphql:foo\"\n+ end\n+\n+ with_them do\n+ it \"formats operation name for caller_id metric property\" do\n+ query = ::GraphQL::Query.new(fake_schema, query_string)\n+\n+ expect(subject.from_query(query).to_caller_id).to eq(expected)\n+ end\n+ end\n+ end\n+\n+ describe \".default\" do\n+ it \"returns a memoization of values from webpack\", :aggregate_failures do\n+ # .default could have been referenced in another spec, so we need to clean it up here\n+ described_class.instance_variable_set(:@default, nil)\n+\n+ expect(Gitlab::Webpack::GraphqlKnownOperations).to receive(:load).once.and_return(fake_operations)\n+\n+ 2.times { described_class.default }\n+\n+ # Uses reference equality to verify memoization\n+ expect(described_class.default).to equal(described_class.default)\n+ expect(described_class.default).to be_a(described_class)\n+ expect(described_class.default.operations.map(\u0026:name)).to include(*fake_operations)\n+ end\n+ 
end\n+end\n"},{"old_path":"spec/lib/gitlab/sidekiq_config/cli_methods_spec.rb","new_path":"spec/lib/gitlab/sidekiq_config/cli_methods_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -11,12 +11,12 @@ def expand_path(path)\n end\n \n def stub_exists(exists: true)\n- ['app/workers/all_queues.yml', 'ee/app/workers/all_queues.yml'].each do |path|\n+ ['app/workers/all_queues.yml', 'ee/app/workers/all_queues.yml', 'jh/app/workers/all_queues.yml'].each do |path|\n allow(File).to receive(:exist?).with(expand_path(path)).and_return(exists)\n end\n end\n \n- def stub_contents(foss_queues, ee_queues)\n+ def stub_contents(foss_queues, ee_queues, jh_queues)\n allow(YAML).to receive(:load_file)\n .with(expand_path('app/workers/all_queues.yml'))\n .and_return(foss_queues)\n@@ -24,6 +24,10 @@ def stub_contents(foss_queues, ee_queues)\n allow(YAML).to receive(:load_file)\n .with(expand_path('ee/app/workers/all_queues.yml'))\n .and_return(ee_queues)\n+\n+ allow(YAML).to receive(:load_file)\n+ .with(expand_path('jh/app/workers/all_queues.yml'))\n+ .and_return(jh_queues)\n end\n \n before do\n@@ -45,8 +49,9 @@ def stub_contents(foss_queues, ee_queues)\n end\n \n it 'flattens and joins the contents' do\n- expected_queues = %w[queue_a queue_b]\n- expected_queues = expected_queues.first(1) unless Gitlab.ee?\n+ expected_queues = %w[queue_a]\n+ expected_queues \u003c\u003c 'queue_b' if Gitlab.ee?\n+ expected_queues \u003c\u003c 'queue_c' if Gitlab.jh?\n \n expect(described_class.worker_queues(dummy_root))\n .to match_array(expected_queues)\n@@ -55,7 +60,7 @@ def stub_contents(foss_queues, ee_queues)\n \n context 'when the file contains an array of hashes' do\n before do\n- stub_contents([{ name: 'queue_a' }], [{ name: 'queue_b' }])\n+ stub_contents([{ name: 'queue_a' }], [{ name: 'queue_b' }], [{ name: 'queue_c' }])\n end\n \n include_examples 'valid file 
contents'\n"},{"old_path":"spec/lib/gitlab/sidekiq_config/worker_spec.rb","new_path":"spec/lib/gitlab/sidekiq_config/worker_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -18,19 +18,26 @@ def create_worker(queue:, **attributes)\n get_tags: attributes[:tags]\n )\n \n- described_class.new(inner_worker, ee: false)\n+ described_class.new(inner_worker, ee: false, jh: false)\n end\n \n describe '#ee?' do\n it 'returns the EE status set on creation' do\n- expect(described_class.new(double, ee: true)).to be_ee\n- expect(described_class.new(double, ee: false)).not_to be_ee\n+ expect(described_class.new(double, ee: true, jh: false)).to be_ee\n+ expect(described_class.new(double, ee: false, jh: false)).not_to be_ee\n+ end\n+ end\n+\n+ describe '#jh?' do\n+ it 'returns the JH status set on creation' do\n+ expect(described_class.new(double, ee: false, jh: true)).to be_jh\n+ expect(described_class.new(double, ee: false, jh: false)).not_to be_jh\n end\n end\n \n describe '#==' do\n def worker_with_yaml(yaml)\n- described_class.new(double, ee: false).tap do |worker|\n+ described_class.new(double, ee: false, jh: false).tap do |worker|\n allow(worker).to receive(:to_yaml).and_return(yaml)\n end\n end\n@@ -57,7 +64,7 @@ def worker_with_yaml(yaml)\n \n expect(worker).to receive(meth)\n \n- described_class.new(worker, ee: false).send(meth)\n+ described_class.new(worker, ee: false, jh: false).send(meth)\n end\n end\n end\n"},{"old_path":"spec/lib/gitlab/webpack/file_loader_spec.rb","new_path":"spec/lib/gitlab/webpack/file_loader_spec.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,79 @@\n+# frozen_string_literal: true\n+\n+require 'fast_spec_helper'\n+require 'support/helpers/file_read_helpers'\n+require 'support/webmock'\n+\n+RSpec.describe Gitlab::Webpack::FileLoader do\n+ include FileReadHelpers\n+ include WebMock::API\n+\n+ let(:error_file_path) { 
\"error.yml\" }\n+ let(:file_path) { \"my_test_file.yml\" }\n+ let(:file_contents) do\n+ \u003c\u003c-EOF\n+ - hello\n+ - world\n+ - test\n+ EOF\n+ end\n+\n+ before do\n+ allow(Gitlab.config.webpack.dev_server).to receive_messages(host: 'hostname', port: 2000, https: false)\n+ allow(Gitlab.config.webpack).to receive(:public_path).and_return('public_path')\n+ allow(Gitlab.config.webpack).to receive(:output_dir).and_return('webpack_output')\n+ end\n+\n+ context \"with dev server enabled\" do\n+ before do\n+ allow(Gitlab.config.webpack.dev_server).to receive(:enabled).and_return(true)\n+\n+ stub_request(:get, \"http://hostname:2000/public_path/not_found\").to_return(status: 404)\n+ stub_request(:get, \"http://hostname:2000/public_path/#{file_path}\").to_return(body: file_contents, status: 200)\n+ stub_request(:get, \"http://hostname:2000/public_path/#{error_file_path}\").to_raise(StandardError)\n+ end\n+\n+ it \"returns content when respondes succesfully\" do\n+ expect(Gitlab::Webpack::FileLoader.load(file_path)).to be(file_contents)\n+ end\n+\n+ it \"raises error when 404\" do\n+ expect { Gitlab::Webpack::FileLoader.load(\"not_found\") }.to raise_error(\"HTTP error 404\")\n+ end\n+\n+ it \"raises error when errors out\" do\n+ expect { Gitlab::Webpack::FileLoader.load(error_file_path) }.to raise_error(Gitlab::Webpack::FileLoader::DevServerLoadError)\n+ end\n+ end\n+\n+ context \"with dev server enabled and https\" do\n+ before do\n+ allow(Gitlab.config.webpack.dev_server).to receive(:enabled).and_return(true)\n+ allow(Gitlab.config.webpack.dev_server).to receive(:https).and_return(true)\n+\n+ stub_request(:get, \"https://hostname:2000/public_path/#{error_file_path}\").to_raise(EOFError)\n+ end\n+\n+ it \"raises error if catches SSLError\" do\n+ expect { Gitlab::Webpack::FileLoader.load(error_file_path) }.to raise_error(Gitlab::Webpack::FileLoader::DevServerSSLError)\n+ end\n+ end\n+\n+ context \"with dev server disabled\" do\n+ before do\n+ 
allow(Gitlab.config.webpack.dev_server).to receive(:enabled).and_return(false)\n+ stub_file_read(::Rails.root.join(\"webpack_output/#{file_path}\"), content: file_contents)\n+ stub_file_read(::Rails.root.join(\"webpack_output/#{error_file_path}\"), error: Errno::ENOENT)\n+ end\n+\n+ describe \".load\" do\n+ it \"returns file content from file path\" do\n+ expect(Gitlab::Webpack::FileLoader.load(file_path)).to be(file_contents)\n+ end\n+\n+ it \"throws error if file cannot be read\" do\n+ expect { Gitlab::Webpack::FileLoader.load(error_file_path) }.to raise_error(Gitlab::Webpack::FileLoader::StaticLoadError)\n+ end\n+ end\n+ end\n+end\n"},{"old_path":"spec/lib/gitlab/webpack/graphql_known_operations_spec.rb","new_path":"spec/lib/gitlab/webpack/graphql_known_operations_spec.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,47 @@\n+# frozen_string_literal: true\n+\n+require 'fast_spec_helper'\n+\n+RSpec.describe Gitlab::Webpack::GraphqlKnownOperations do\n+ let(:content) do\n+ \u003c\u003c-EOF\n+ - hello\n+ - world\n+ - test\n+ EOF\n+ end\n+\n+ around do |example|\n+ described_class.clear_memoization!\n+\n+ example.run\n+\n+ described_class.clear_memoization!\n+ end\n+\n+ describe \".load\" do\n+ context \"when file loader returns\" do\n+ before do\n+ allow(::Gitlab::Webpack::FileLoader).to receive(:load).with(\"graphql_known_operations.yml\").and_return(content)\n+ end\n+\n+ it \"returns memoized value\" do\n+ expect(::Gitlab::Webpack::FileLoader).to receive(:load).once\n+\n+ 2.times { ::Gitlab::Webpack::GraphqlKnownOperations.load }\n+\n+ expect(::Gitlab::Webpack::GraphqlKnownOperations.load).to eq(%w(hello world test))\n+ end\n+ end\n+\n+ context \"when file loader errors\" do\n+ before do\n+ allow(::Gitlab::Webpack::FileLoader).to receive(:load).and_raise(StandardError.new(\"test\"))\n+ end\n+\n+ it \"returns empty array\" do\n+ expect(::Gitlab::Webpack::GraphqlKnownOperations.load).to eq([])\n+ end\n+ 
end\n+ end\n+end\n"},{"old_path":"spec/lib/sidebars/projects/menus/infrastructure_menu_spec.rb","new_path":"spec/lib/sidebars/projects/menus/infrastructure_menu_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -51,6 +51,16 @@\n it 'menu link points to Terraform page' do\n expect(subject.link).to eq find_menu_item(:terraform).link\n end\n+\n+ context 'when Terraform menu is not visible' do\n+ before do\n+ subject.renderable_items.delete(find_menu_item(:terraform))\n+ end\n+\n+ it 'menu link points to Google Cloud page' do\n+ expect(subject.link).to eq find_menu_item(:google_cloud).link\n+ end\n+ end\n end\n end\n \n@@ -89,5 +99,11 @@ def find_menu_item(menu_item)\n \n it_behaves_like 'access rights checks'\n end\n+\n+ describe 'Google Cloud' do\n+ let(:item_id) { :google_cloud }\n+\n+ it_behaves_like 'access rights checks'\n+ end\n end\n end\n"},{"old_path":"spec/models/acts_as_taggable_on/tag_spec.rb","new_path":"spec/models/acts_as_taggable_on/tag_spec.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,16 @@\n+# frozen_string_literal: true\n+\n+require 'spec_helper'\n+\n+RSpec.describe ActsAsTaggableOn::Tag do\n+ it 'has the same connection as Ci::ApplicationRecord' do\n+ query = 'select current_database()'\n+\n+ expect(described_class.connection.execute(query).first).to eq(Ci::ApplicationRecord.connection.execute(query).first)\n+ expect(described_class.retrieve_connection.execute(query).first).to eq(Ci::ApplicationRecord.retrieve_connection.execute(query).first)\n+ end\n+\n+ it 'has the same sticking as Ci::ApplicationRecord' do\n+ expect(described_class.sticking).to eq(Ci::ApplicationRecord.sticking)\n+ end\n+end\n"},{"old_path":"spec/models/acts_as_taggable_on/tagging_spec.rb","new_path":"spec/models/acts_as_taggable_on/tagging_spec.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ 
-0,0 +1,16 @@\n+# frozen_string_literal: true\n+\n+require 'spec_helper'\n+\n+RSpec.describe ActsAsTaggableOn::Tagging do\n+ it 'has the same connection as Ci::ApplicationRecord' do\n+ query = 'select current_database()'\n+\n+ expect(described_class.connection.execute(query).first).to eq(Ci::ApplicationRecord.connection.execute(query).first)\n+ expect(described_class.retrieve_connection.execute(query).first).to eq(Ci::ApplicationRecord.retrieve_connection.execute(query).first)\n+ end\n+\n+ it 'has the same sticking as Ci::ApplicationRecord' do\n+ expect(described_class.sticking).to eq(Ci::ApplicationRecord.sticking)\n+ end\n+end\n"},{"old_path":"spec/models/group_spec.rb","new_path":"spec/models/group_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -2648,14 +2648,6 @@ def setup_group_members(group)\n end\n \n it_behaves_like 'returns namespaces with disabled email'\n-\n- context 'when feature flag :linear_group_ancestor_scopes is disabled' do\n- before do\n- stub_feature_flags(linear_group_ancestor_scopes: false)\n- end\n-\n- it_behaves_like 'returns namespaces with disabled email'\n- end\n end\n \n describe '.timelogs' do\n"},{"old_path":"spec/models/namespace_spec.rb","new_path":"spec/models/namespace_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -284,7 +284,7 @@\n end\n end\n \n- context 'creating a default Namespace' do\n+ context 'creating a Namespace with nil type' do\n let(:namespace_type) { nil }\n \n it 'is the correct type of namespace' do\n@@ -295,7 +295,7 @@\n end\n \n context 'creating an unknown Namespace type' do\n- let(:namespace_type) { 'One' }\n+ let(:namespace_type) { 'nonsense' }\n \n it 'creates a default Namespace' do\n expect(Namespace.find(namespace.id)).to 
be_a(Namespace)\n"},{"old_path":"spec/requests/api/graphql/mutations/issues/set_crm_contacts_spec.rb","new_path":"spec/requests/api/graphql/mutations/issues/set_crm_contacts_spec.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,161 @@\n+# frozen_string_literal: true\n+\n+require 'spec_helper'\n+\n+RSpec.describe 'Setting issues crm contacts' do\n+ include GraphqlHelpers\n+\n+ let_it_be(:user) { create(:user) }\n+ let_it_be(:group) { create(:group) }\n+ let_it_be(:project) { create(:project, group: group) }\n+ let_it_be(:contacts) { create_list(:contact, 4, group: group) }\n+\n+ let(:issue) { create(:issue, project: project) }\n+ let(:operation_mode) { Types::MutationOperationModeEnum.default_mode }\n+ let(:crm_contact_ids) { [global_id_of(contacts[1]), global_id_of(contacts[2])] }\n+ let(:does_not_exist_or_no_permission) { \"The resource that you are attempting to access does not exist or you don't have permission to perform this action\" }\n+\n+ let(:mutation) do\n+ variables = {\n+ project_path: issue.project.full_path,\n+ iid: issue.iid.to_s,\n+ operation_mode: operation_mode,\n+ crm_contact_ids: crm_contact_ids\n+ }\n+\n+ graphql_mutation(:issue_set_crm_contacts, variables,\n+ \u003c\u003c-QL.strip_heredoc\n+ clientMutationId\n+ errors\n+ issue {\n+ customerRelationsContacts {\n+ nodes {\n+ id\n+ }\n+ }\n+ }\n+ QL\n+ )\n+ end\n+\n+ def mutation_response\n+ graphql_mutation_response(:issue_set_crm_contacts)\n+ end\n+\n+ before do\n+ create(:issue_customer_relations_contact, issue: issue, contact: contacts[0])\n+ create(:issue_customer_relations_contact, issue: issue, contact: contacts[1])\n+ end\n+\n+ context 'when the user has no permission' do\n+ it 'returns expected error' do\n+ error = Gitlab::Graphql::Authorize::AuthorizeResource::RESOURCE_ACCESS_ERROR\n+ post_graphql_mutation(mutation, current_user: user)\n+\n+ expect(graphql_errors).to include(a_hash_including('message' =\u003e error))\n+ 
end\n+ end\n+\n+ context 'when the user has permission' do\n+ before do\n+ group.add_reporter(user)\n+ end\n+\n+ context 'when the feature is disabled' do\n+ before do\n+ stub_feature_flags(customer_relations: false)\n+ end\n+\n+ it 'raises expected error' do\n+ post_graphql_mutation(mutation, current_user: user)\n+\n+ expect(graphql_errors).to include(a_hash_including('message' =\u003e 'Feature disabled'))\n+ end\n+ end\n+\n+ context 'replace' do\n+ it 'updates the issue with correct contacts' do\n+ post_graphql_mutation(mutation, current_user: user)\n+\n+ expect(graphql_data_at(:issue_set_crm_contacts, :issue, :customer_relations_contacts, :nodes, :id))\n+ .to match_array([global_id_of(contacts[1]), global_id_of(contacts[2])])\n+ end\n+ end\n+\n+ context 'append' do\n+ let(:crm_contact_ids) { [global_id_of(contacts[3])] }\n+ let(:operation_mode) { Types::MutationOperationModeEnum.enum[:append] }\n+\n+ it 'updates the issue with correct contacts' do\n+ post_graphql_mutation(mutation, current_user: user)\n+\n+ expect(graphql_data_at(:issue_set_crm_contacts, :issue, :customer_relations_contacts, :nodes, :id))\n+ .to match_array([global_id_of(contacts[0]), global_id_of(contacts[1]), global_id_of(contacts[3])])\n+ end\n+ end\n+\n+ context 'remove' do\n+ let(:crm_contact_ids) { [global_id_of(contacts[0])] }\n+ let(:operation_mode) { Types::MutationOperationModeEnum.enum[:remove] }\n+\n+ it 'updates the issue with correct contacts' do\n+ post_graphql_mutation(mutation, current_user: user)\n+\n+ expect(graphql_data_at(:issue_set_crm_contacts, :issue, :customer_relations_contacts, :nodes, :id))\n+ .to match_array([global_id_of(contacts[1])])\n+ end\n+ end\n+\n+ context 'when the contact does not exist' do\n+ let(:crm_contact_ids) { [\"gid://gitlab/CustomerRelations::Contact/#{non_existing_record_id}\"] }\n+\n+ it 'returns expected error' do\n+ post_graphql_mutation(mutation, current_user: user)\n+\n+ expect(graphql_data_at(:issue_set_crm_contacts, :errors))\n+ .to 
match_array([\"Issue customer relations contacts #{non_existing_record_id}: #{does_not_exist_or_no_permission}\"])\n+ end\n+ end\n+\n+ context 'when the contact belongs to a different group' do\n+ let(:group2) { create(:group) }\n+ let(:contact) { create(:contact, group: group2) }\n+ let(:crm_contact_ids) { [global_id_of(contact)] }\n+\n+ before do\n+ group2.add_reporter(user)\n+ end\n+\n+ it 'returns expected error' do\n+ post_graphql_mutation(mutation, current_user: user)\n+\n+ expect(graphql_data_at(:issue_set_crm_contacts, :errors))\n+ .to match_array([\"Issue customer relations contacts #{contact.id}: #{does_not_exist_or_no_permission}\"])\n+ end\n+ end\n+\n+ context 'when attempting to add more than 6' do\n+ let(:operation_mode) { Types::MutationOperationModeEnum.enum[:append] }\n+ let(:gid) { global_id_of(contacts[0]) }\n+ let(:crm_contact_ids) { [gid, gid, gid, gid, gid, gid, gid] }\n+\n+ it 'returns expected error' do\n+ post_graphql_mutation(mutation, current_user: user)\n+\n+ expect(graphql_data_at(:issue_set_crm_contacts, :errors))\n+ .to match_array([\"You can only add up to 6 contacts at one time\"])\n+ end\n+ end\n+\n+ context 'when trying to remove non-existent contact' do\n+ let(:operation_mode) { Types::MutationOperationModeEnum.enum[:remove] }\n+ let(:crm_contact_ids) { [\"gid://gitlab/CustomerRelations::Contact/#{non_existing_record_id}\"] }\n+\n+ it 'raises expected error' do\n+ post_graphql_mutation(mutation, current_user: user)\n+\n+ expect(graphql_data_at(:issue_set_crm_contacts, :errors)).to be_empty\n+ end\n+ end\n+ end\n+end\n"},{"old_path":"spec/requests/api/settings_spec.rb","new_path":"spec/requests/api/settings_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -612,5 +612,46 @@\n expect(json_response.slice(*settings.keys)).to eq(settings)\n end\n end\n+\n+ context 'Sentry settings' do\n+ let(:settings) do\n+ {\n+ sentry_enabled: true,\n+ sentry_dsn: 
'http://sentry.example.com',\n+ sentry_clientside_dsn: 'http://sentry.example.com',\n+ sentry_environment: 'production'\n+ }\n+ end\n+\n+ let(:attribute_names) { settings.keys.map(\u0026:to_s) }\n+\n+ it 'includes the attributes in the API' do\n+ get api('/application/settings', admin)\n+\n+ expect(response).to have_gitlab_http_status(:ok)\n+ attribute_names.each do |attribute|\n+ expect(json_response.keys).to include(attribute)\n+ end\n+ end\n+\n+ it 'allows updating the settings' do\n+ put api('/application/settings', admin), params: settings\n+\n+ expect(response).to have_gitlab_http_status(:ok)\n+ settings.each do |attribute, value|\n+ expect(ApplicationSetting.current.public_send(attribute)).to eq(value)\n+ end\n+ end\n+\n+ context 'missing sentry_dsn value when sentry_enabled is true' do\n+ it 'returns a blank parameter error message' do\n+ put api('/application/settings', admin), params: { sentry_enabled: true }\n+\n+ expect(response).to have_gitlab_http_status(:bad_request)\n+ message = json_response['message']\n+ expect(message[\"sentry_dsn\"]).to include(a_string_matching(\"can't be blank\"))\n+ end\n+ end\n+ end\n end\n end\n"},{"old_path":"spec/requests/api/v3/github_spec.rb","new_path":"spec/requests/api/v3/github_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -6,7 +6,7 @@\n let_it_be(:user) { create(:user) }\n let_it_be(:unauthorized_user) { create(:user) }\n let_it_be(:admin) { create(:user, :admin) }\n- let_it_be(:project) { create(:project, :repository, creator: user) }\n+ let_it_be_with_reload(:project) { create(:project, :repository, creator: user) }\n \n before do\n project.add_maintainer(user)\n@@ -506,11 +506,18 @@ def expect_project_under_namespace(projects, namespace, user)\n \n describe 'GET /repos/:namespace/:project/commits/:sha' do\n let(:commit) { project.repository.commit }\n- let(:commit_id) { commit.id }\n+\n+ def call_api(commit_id: commit.id)\n+ jira_get 
v3_api(\"/repos/#{project.namespace.path}/#{project.path}/commits/#{commit_id}\", user)\n+ end\n+\n+ def response_diff_files(response)\n+ Gitlab::Json.parse(response.body)['files']\n+ end\n \n context 'authenticated' do\n- it 'returns commit with github format' do\n- jira_get v3_api(\"/repos/#{project.namespace.path}/#{project.path}/commits/#{commit_id}\", user)\n+ it 'returns commit with github format', :aggregate_failures do\n+ call_api\n \n expect(response).to have_gitlab_http_status(:ok)\n expect(response).to match_response_schema('entities/github/commit')\n@@ -519,36 +526,130 @@ def expect_project_under_namespace(projects, namespace, user)\n it 'returns 200 when project path include a dot' do\n project.update!(path: 'foo.bar')\n \n- jira_get v3_api(\"/repos/#{project.namespace.path}/#{project.path}/commits/#{commit_id}\", user)\n+ call_api\n \n expect(response).to have_gitlab_http_status(:ok)\n end\n \n- it 'returns 200 when namespace path include a dot' do\n- group = create(:group, path: 'foo.bar')\n- project = create(:project, :repository, group: group)\n- project.add_reporter(user)\n+ context 'when namespace path includes a dot' do\n+ let(:group) { create(:group, path: 'foo.bar') }\n+ let(:project) { create(:project, :repository, group: group) }\n \n- jira_get v3_api(\"/repos/#{group.path}/#{project.path}/commits/#{commit_id}\", user)\n+ it 'returns 200 when namespace path include a dot' do\n+ project.add_reporter(user)\n \n- expect(response).to have_gitlab_http_status(:ok)\n+ call_api\n+\n+ expect(response).to have_gitlab_http_status(:ok)\n+ end\n+ end\n+\n+ context 'when the Gitaly `CommitDiff` RPC times out', :use_clean_rails_memory_store_caching do\n+ let(:commit_diff_args) { [project.repository_storage, :diff_service, :commit_diff, any_args] }\n+\n+ before do\n+ allow(Gitlab::GitalyClient).to receive(:call)\n+ .and_call_original\n+ end\n+\n+ it 'handles the error, logs it, and returns empty diff files', :aggregate_failures do\n+ 
allow(Gitlab::GitalyClient).to receive(:call)\n+ .with(*commit_diff_args)\n+ .and_raise(GRPC::DeadlineExceeded)\n+\n+ expect(Gitlab::ErrorTracking)\n+ .to receive(:track_exception)\n+ .with an_instance_of(GRPC::DeadlineExceeded)\n+\n+ call_api\n+\n+ expect(response).to have_gitlab_http_status(:ok)\n+ expect(response_diff_files(response)).to be_blank\n+ end\n+\n+ it 'does not handle the error when feature flag is disabled', :aggregate_failures do\n+ stub_feature_flags(api_v3_commits_skip_diff_files: false)\n+\n+ allow(Gitlab::GitalyClient).to receive(:call)\n+ .with(*commit_diff_args)\n+ .and_raise(GRPC::DeadlineExceeded)\n+\n+ call_api\n+\n+ expect(response).to have_gitlab_http_status(:error)\n+ end\n+\n+ it 'only calls Gitaly once for all attempts within a period of time', :aggregate_failures do\n+ expect(Gitlab::GitalyClient).to receive(:call)\n+ .with(*commit_diff_args)\n+ .once # \u003c- once\n+ .and_raise(GRPC::DeadlineExceeded)\n+\n+ 3.times do\n+ call_api\n+\n+ expect(response).to have_gitlab_http_status(:ok)\n+ expect(response_diff_files(response)).to be_blank\n+ end\n+ end\n+\n+ it 'calls Gitaly again after a period of time', :aggregate_failures do\n+ expect(Gitlab::GitalyClient).to receive(:call)\n+ .with(*commit_diff_args)\n+ .twice # \u003c- twice\n+ .and_raise(GRPC::DeadlineExceeded)\n+\n+ call_api\n+\n+ expect(response).to have_gitlab_http_status(:ok)\n+ expect(response_diff_files(response)).to be_blank\n+\n+ travel_to((described_class::GITALY_TIMEOUT_CACHE_EXPIRY + 1.second).from_now) do\n+ call_api\n+\n+ expect(response).to have_gitlab_http_status(:ok)\n+ expect(response_diff_files(response)).to be_blank\n+ end\n+ end\n+\n+ it 'uses a unique cache key, allowing other calls to succeed' do\n+ cache_key = [described_class::GITALY_TIMEOUT_CACHE_KEY, project.id, commit.cache_key].join(':')\n+ Rails.cache.write(cache_key, 1)\n+\n+ expect(Gitlab::GitalyClient).to receive(:call)\n+ .with(*commit_diff_args)\n+ .once # \u003c- once\n+\n+ call_api\n+\n+ 
expect(response).to have_gitlab_http_status(:ok)\n+ expect(response_diff_files(response)).to be_blank\n+\n+ call_api(commit_id: commit.parent.id)\n+\n+ expect(response).to have_gitlab_http_status(:ok)\n+ expect(response_diff_files(response).length).to eq(1)\n+ end\n end\n end\n \n context 'unauthenticated' do\n+ let(:user) { nil }\n+\n it 'returns 401' do\n- jira_get v3_api(\"/repos/#{project.namespace.path}/#{project.path}/commits/#{commit_id}\", nil)\n+ call_api\n \n expect(response).to have_gitlab_http_status(:unauthorized)\n end\n end\n \n context 'unauthorized' do\n+ let(:user) { unauthorized_user }\n+\n it 'returns 404 when lower access level' do\n- project.add_guest(unauthorized_user)\n+ project.add_guest(user)\n \n- jira_get v3_api(\"/repos/#{project.namespace.path}/#{project.path}/commits/#{commit_id}\",\n- unauthorized_user)\n+ call_api\n \n expect(response).to have_gitlab_http_status(:not_found)\n end\n"},{"old_path":"spec/services/authorized_project_update/project_access_changed_service_spec.rb","new_path":"spec/services/authorized_project_update/project_access_changed_service_spec.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,21 @@\n+# frozen_string_literal: true\n+\n+require 'spec_helper'\n+\n+RSpec.describe AuthorizedProjectUpdate::ProjectAccessChangedService do\n+ describe '#execute' do\n+ it 'schedules the project IDs' do\n+ expect(AuthorizedProjectUpdate::ProjectRecalculateWorker).to receive(:bulk_perform_and_wait)\n+ .with([[1], [2]])\n+\n+ described_class.new([1, 2]).execute\n+ end\n+\n+ it 'permits non-blocking operation' do\n+ expect(AuthorizedProjectUpdate::ProjectRecalculateWorker).to receive(:bulk_perform_async)\n+ .with([[1], [2]])\n+\n+ described_class.new([1, 2]).execute(blocking: false)\n+ end\n+ 
end\n+end\n"},{"old_path":"spec/services/groups/transfer_service_spec.rb","new_path":"spec/services/groups/transfer_service_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -593,11 +593,16 @@\n let_it_be_with_reload(:group) { create(:group, :private, parent: old_parent_group) }\n let_it_be(:new_group_member) { create(:user) }\n let_it_be(:old_group_member) { create(:user) }\n+ let_it_be(:unique_subgroup_member) { create(:user) }\n+ let_it_be(:direct_project_member) { create(:user) }\n \n before do\n new_parent_group.add_maintainer(new_group_member)\n old_parent_group.add_maintainer(old_group_member)\n+ subgroup1.add_developer(unique_subgroup_member)\n+ nested_project.add_developer(direct_project_member)\n group.refresh_members_authorized_projects\n+ subgroup1.refresh_members_authorized_projects\n end\n \n it 'removes old project authorizations' do\n@@ -613,7 +618,7 @@\n end\n \n it 'performs authorizations job immediately' do\n- expect(AuthorizedProjectsWorker).to receive(:bulk_perform_inline)\n+ expect(AuthorizedProjectUpdate::ProjectRecalculateWorker).to receive(:bulk_perform_inline)\n \n transfer_service.execute(new_parent_group)\n end\n@@ -630,14 +635,24 @@\n ProjectAuthorization.where(project_id: nested_project.id, user_id: new_group_member.id).size\n }.from(0).to(1)\n end\n+\n+ it 'preserves existing project authorizations for direct project members' do\n+ expect { transfer_service.execute(new_parent_group) }.not_to change {\n+ ProjectAuthorization.where(project_id: nested_project.id, user_id: direct_project_member.id).count\n+ }\n+ end\n end\n \n- context 'for groups with many members' do\n- before do\n- 11.times do\n- new_parent_group.add_maintainer(create(:user))\n- end\n+ context 'for nested groups with unique members' do\n+ it 'preserves existing project authorizations' do\n+ expect { transfer_service.execute(new_parent_group) }.not_to change {\n+ ProjectAuthorization.where(project_id: 
nested_project.id, user_id: unique_subgroup_member.id).count\n+ }\n end\n+ end\n+\n+ context 'for groups with many projects' do\n+ let_it_be(:project_list) { create_list(:project, 11, :repository, :private, namespace: group) }\n \n it 'adds new project authorizations for the user which makes a transfer' do\n transfer_service.execute(new_parent_group)\n@@ -646,9 +661,21 @@\n expect(ProjectAuthorization.where(project_id: nested_project.id, user_id: user.id).size).to eq(1)\n end\n \n+ it 'adds project authorizations for users in the new hierarchy' do\n+ expect { transfer_service.execute(new_parent_group) }.to change {\n+ ProjectAuthorization.where(project_id: project_list.map { |project| project.id }, user_id: new_group_member.id).size\n+ }.from(0).to(project_list.count)\n+ end\n+\n+ it 'removes project authorizations for users in the old hierarchy' do\n+ expect { transfer_service.execute(new_parent_group) }.to change {\n+ ProjectAuthorization.where(project_id: project_list.map { |project| project.id }, user_id: old_group_member.id).size\n+ }.from(project_list.count).to(0)\n+ end\n+\n it 'schedules authorizations job' do\n- expect(AuthorizedProjectsWorker).to receive(:bulk_perform_async)\n- .with(array_including(new_parent_group.members_with_parents.pluck(:user_id).map {|id| [id, anything] }))\n+ expect(AuthorizedProjectUpdate::ProjectRecalculateWorker).to receive(:bulk_perform_async)\n+ .with(array_including(group.all_projects.ids.map { |id| [id, anything] }))\n \n transfer_service.execute(new_parent_group)\n end\n"},{"old_path":"spec/services/issues/set_crm_contacts_service_spec.rb","new_path":"spec/services/issues/set_crm_contacts_service_spec.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,162 @@\n+# frozen_string_literal: true\n+\n+require 'spec_helper'\n+\n+RSpec.describe Issues::SetCrmContactsService do\n+ let_it_be(:user) { create(:user) }\n+ let_it_be(:group) { create(:group) }\n+ 
let_it_be(:project) { create(:project, group: group) }\n+ let_it_be(:contacts) { create_list(:contact, 4, group: group) }\n+\n+ let(:issue) { create(:issue, project: project) }\n+ let(:does_not_exist_or_no_permission) { \"The resource that you are attempting to access does not exist or you don't have permission to perform this action\" }\n+\n+ before do\n+ create(:issue_customer_relations_contact, issue: issue, contact: contacts[0])\n+ create(:issue_customer_relations_contact, issue: issue, contact: contacts[1])\n+ end\n+\n+ subject(:set_crm_contacts) do\n+ described_class.new(project: project, current_user: user, params: params).execute(issue)\n+ end\n+\n+ describe '#execute' do\n+ context 'when the user has no permission' do\n+ let(:params) { { crm_contact_ids: [contacts[1].id, contacts[2].id] } }\n+\n+ it 'returns expected error response' do\n+ response = set_crm_contacts\n+\n+ expect(response).to be_error\n+ expect(response.message).to match_array(['You have insufficient permissions to set customer relations contacts for this issue'])\n+ end\n+ end\n+\n+ context 'when user has permission' do\n+ before do\n+ group.add_reporter(user)\n+ end\n+\n+ context 'when the contact does not exist' do\n+ let(:params) { { crm_contact_ids: [non_existing_record_id] } }\n+\n+ it 'returns expected error response' do\n+ response = set_crm_contacts\n+\n+ expect(response).to be_error\n+ expect(response.message).to match_array([\"Issue customer relations contacts #{non_existing_record_id}: #{does_not_exist_or_no_permission}\"])\n+ end\n+ end\n+\n+ context 'when the contact belongs to a different group' do\n+ let(:group2) { create(:group) }\n+ let(:contact) { create(:contact, group: group2) }\n+ let(:params) { { crm_contact_ids: [contact.id] } }\n+\n+ before do\n+ group2.add_reporter(user)\n+ end\n+\n+ it 'returns expected error response' do\n+ response = set_crm_contacts\n+\n+ expect(response).to be_error\n+ expect(response.message).to match_array([\"Issue customer relations 
contacts #{contact.id}: #{does_not_exist_or_no_permission}\"])\n+ end\n+ end\n+\n+ context 'replace' do\n+ let(:params) { { crm_contact_ids: [contacts[1].id, contacts[2].id] } }\n+\n+ it 'updates the issue with correct contacts' do\n+ response = set_crm_contacts\n+\n+ expect(response).to be_success\n+ expect(issue.customer_relations_contacts).to match_array([contacts[1], contacts[2]])\n+ end\n+ end\n+\n+ context 'add' do\n+ let(:params) { { add_crm_contact_ids: [contacts[3].id] } }\n+\n+ it 'updates the issue with correct contacts' do\n+ response = set_crm_contacts\n+\n+ expect(response).to be_success\n+ expect(issue.customer_relations_contacts).to match_array([contacts[0], contacts[1], contacts[3]])\n+ end\n+ end\n+\n+ context 'remove' do\n+ let(:params) { { remove_crm_contact_ids: [contacts[0].id] } }\n+\n+ it 'updates the issue with correct contacts' do\n+ response = set_crm_contacts\n+\n+ expect(response).to be_success\n+ expect(issue.customer_relations_contacts).to match_array([contacts[1]])\n+ end\n+ end\n+\n+ context 'when attempting to add more than 6' do\n+ let(:id) { contacts[0].id }\n+ let(:params) { { add_crm_contact_ids: [id, id, id, id, id, id, id] } }\n+\n+ it 'returns expected error message' do\n+ response = set_crm_contacts\n+\n+ expect(response).to be_error\n+ expect(response.message).to match_array(['You can only add up to 6 contacts at one time'])\n+ end\n+ end\n+\n+ context 'when trying to remove non-existent contact' do\n+ let(:params) { { remove_crm_contact_ids: [non_existing_record_id] } }\n+\n+ it 'returns expected error message' do\n+ response = set_crm_contacts\n+\n+ expect(response).to be_success\n+ expect(response.message).to be_nil\n+ end\n+ end\n+\n+ context 'when combining params' do\n+ let(:error_invalid_params) { 'You cannot combine crm_contact_ids with add_crm_contact_ids or remove_crm_contact_ids' }\n+\n+ context 'add and remove' do\n+ let(:params) { { remove_crm_contact_ids: [contacts[1].id], add_crm_contact_ids: 
[contacts[3].id] } }\n+\n+ it 'updates the issue with correct contacts' do\n+ response = set_crm_contacts\n+\n+ expect(response).to be_success\n+ expect(issue.customer_relations_contacts).to match_array([contacts[0], contacts[3]])\n+ end\n+ end\n+\n+ context 'replace and remove' do\n+ let(:params) { { crm_contact_ids: [contacts[3].id], remove_crm_contact_ids: [contacts[0].id] } }\n+\n+ it 'returns expected error response' do\n+ response = set_crm_contacts\n+\n+ expect(response).to be_error\n+ expect(response.message).to match_array([error_invalid_params])\n+ end\n+ end\n+\n+ context 'replace and add' do\n+ let(:params) { { crm_contact_ids: [contacts[3].id], add_crm_contact_ids: [contacts[1].id] } }\n+\n+ it 'returns expected error response' do\n+ response = set_crm_contacts\n+\n+ expect(response).to be_error\n+ expect(response.message).to match_array([error_invalid_params])\n+ end\n+ end\n+ end\n+ end\n+ end\n+end\n"},{"old_path":"spec/services/projects/container_repository/cleanup_tags_service_spec.rb","new_path":"spec/services/projects/container_repository/cleanup_tags_service_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":""},{"old_path":"spec/lib/gitlab/sidekiq_cluster_spec.rb","new_path":"spec/sidekiq_cluster/sidekiq_cluster_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":true,"deleted_file":false,"diff":"@@ -1,9 +1,10 @@\n # frozen_string_literal: true\n \n-require 'fast_spec_helper'\n require 'rspec-parameterized'\n \n-RSpec.describe Gitlab::SidekiqCluster do\n+require_relative '../../sidekiq_cluster/sidekiq_cluster'\n+\n+RSpec.describe Gitlab::SidekiqCluster do # rubocop:disable RSpec/FilePath\n describe '.trap_signals' do\n it 'traps the given signals' do\n expect(described_class).to 
receive(:trap).ordered.with(:INT)\n"},{"old_path":"spec/spec_helper.rb","new_path":"spec/spec_helper.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -107,9 +107,7 @@\n warn `curl -s -o log/goroutines.log http://localhost:9236/debug/pprof/goroutine?debug=2`\n end\n end\n- end\n-\n- unless ENV['CI']\n+ else\n # Allow running `:focus` examples locally,\n # falling back to all tests when there is no `:focus` example.\n config.filter_run focus: true\n"},{"old_path":"spec/support/flaky_tests.rb","new_path":"spec/support/flaky_tests.rb","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,36 @@\n+# frozen_string_literal: true\n+\n+return unless ENV['CI']\n+return unless ENV['SKIP_FLAKY_TESTS_AUTOMATICALLY'] == \"true\"\n+return if ENV['CI_MERGE_REQUEST_LABELS'].to_s.include?('pipeline:run-flaky-tests')\n+\n+require_relative '../tooling/rspec_flaky/report'\n+\n+RSpec.configure do |config|\n+ $flaky_test_example_ids = begin # rubocop:disable Style/GlobalVars\n+ raise \"$SUITE_FLAKY_RSPEC_REPORT_PATH is empty.\" if ENV['SUITE_FLAKY_RSPEC_REPORT_PATH'].to_s.empty?\n+ raise \"#{ENV['SUITE_FLAKY_RSPEC_REPORT_PATH']} doesn't exist\" unless File.exist?(ENV['SUITE_FLAKY_RSPEC_REPORT_PATH'])\n+\n+ RspecFlaky::Report.load(ENV['SUITE_FLAKY_RSPEC_REPORT_PATH']).map { |_, flaky_test_data| flaky_test_data[\"example_id\"] }\n+ rescue =\u003e e # rubocop:disable Style/RescueStandardError\n+ puts e\n+ []\n+ end\n+ $skipped_flaky_tests_report = [] # rubocop:disable Style/GlobalVars\n+\n+ config.around do |example|\n+ # Skip flaky tests automatically\n+ if $flaky_test_example_ids.include?(example.id) # rubocop:disable Style/GlobalVars\n+ puts \"Skipping #{example.id} '#{example.full_description}' because it's flaky.\"\n+ $skipped_flaky_tests_report \u003c\u003c example.id # rubocop:disable Style/GlobalVars\n+ else\n+ example.run\n+ end\n+ end\n+\n+ config.after(:suite) do\n+ 
next unless ENV['SKIPPED_FLAKY_TESTS_REPORT_PATH']\n+\n+ File.write(ENV['SKIPPED_FLAKY_TESTS_REPORT_PATH'], \"#{$skipped_flaky_tests_report.join(\"\\n\")}\\n\") # rubocop:disable Style/GlobalVars\n+ end\n+end\n"},{"old_path":"spec/tooling/quality/test_level_spec.rb","new_path":"spec/tooling/quality/test_level_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -49,7 +49,7 @@\n context 'when level is integration' do\n it 'returns a pattern' do\n expect(subject.pattern(:integration))\n- .to eq(\"spec/{controllers,mailers,requests}{,/**/}*_spec.rb\")\n+ .to eq(\"spec/{commands,controllers,mailers,requests}{,/**/}*_spec.rb\")\n end\n end\n \n@@ -131,7 +131,7 @@\n context 'when level is integration' do\n it 'returns a regexp' do\n expect(subject.regexp(:integration))\n- .to eq(%r{spec/(controllers|mailers|requests)})\n+ .to eq(%r{spec/(commands|controllers|mailers|requests)})\n end\n end\n \n@@ -204,6 +204,10 @@\n expect(subject.level_for('spec/mailers/abuse_report_mailer_spec.rb')).to eq(:integration)\n end\n \n+ it 'returns the correct level for an integration test in a subfolder' do\n+ expect(subject.level_for('spec/commands/sidekiq_cluster/cli.rb')).to eq(:integration)\n+ end\n+\n it 'returns the correct level for a system test' do\n expect(subject.level_for('spec/features/abuse_report_spec.rb')).to eq(:system)\n end\n"},{"old_path":"spec/workers/container_expiration_policies/cleanup_container_repository_worker_spec.rb","new_path":"spec/workers/container_expiration_policies/cleanup_container_repository_worker_spec.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -82,8 +82,9 @@\n nil | 10 | nil\n 0 | 5 | nil\n 10 | 0 | 0\n- 10 | 5 | 0.5\n- 3 | 10 | (10 / 3.to_f)\n+ 10 | 5 | 50.0\n+ 17 | 3 | 17.65\n+ 3 | 10 | 333.33\n end\n \n with_them 
do\n"},{"old_path":"tooling/bin/find_change_diffs","new_path":"tooling/bin/find_change_diffs","a_mode":"100755","b_mode":"100755","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -5,11 +5,22 @@ require 'gitlab'\n require 'pathname'\n \n # This script saves the diffs of changes in an MR to the directory specified as the first argument\n+#\n+# It exits with a success code if diffs are found and saved, or if there are no changes, including if the script runs in\n+# a pipeline that is not for a merge request.\n \n gitlab_token = ENV.fetch('PROJECT_TOKEN_FOR_CI_SCRIPTS_API_USAGE')\n gitlab_endpoint = ENV.fetch('CI_API_V4_URL')\n-mr_project_path = ENV.fetch('CI_MERGE_REQUEST_PROJECT_PATH')\n-mr_iid = ENV.fetch('CI_MERGE_REQUEST_IID')\n+mr_project_path = ENV['CI_MERGE_REQUEST_PROJECT_PATH']\n+mr_iid = ENV['CI_MERGE_REQUEST_IID']\n+\n+puts \"CI_MERGE_REQUEST_PROJECT_PATH is missing.\" if mr_project_path.to_s.empty?\n+puts \"CI_MERGE_REQUEST_IID is missing.\" if mr_iid.to_s.empty?\n+\n+unless mr_project_path \u0026\u0026 mr_iid\n+ puts \"Exiting as this does not appear to be a merge request pipeline.\"\n+ exit\n+end\n \n abort(\"ERROR: Please specify a directory to write MR diffs into.\") if ARGV.empty?\n output_diffs_dir = Pathname.new(ARGV.shift).expand_path\n"},{"old_path":"tooling/bin/qa/check_if_only_quarantined_specs","new_path":"tooling/bin/qa/check_if_only_quarantined_specs","a_mode":"100755","b_mode":"0","new_file":false,"renamed_file":false,"deleted_file":true,"diff":"@@ -1,18 +0,0 @@\n-#!/usr/bin/env ruby\n-# frozen_string_literal: true\n-\n-require 'pathname'\n-\n-# This script assumes the first argument is a directory of files containing diffs of changes from an MR. It exits with a\n-# success code if all diffs add a line that quarantines a test. 
If any diffs are not specs, or they are specs that don't\n-# quarantine a test, it exits with code 1 to indicate failure (i.e., there was _not_ only quarantined specs).\n-\n-abort(\"ERROR: Please specify the directory containing MR diffs.\") if ARGV.empty?\n-diffs_dir = Pathname.new(ARGV.shift).expand_path\n-\n-diffs_dir.glob('**/*').each do |path|\n- next if path.directory?\n-\n- exit 1 unless path.to_s.end_with?('_spec.rb.diff')\n- exit 1 unless path.read.match?(/^\\+.*, quarantine:/)\n-end\n"},{"old_path":"tooling/bin/qa/package_and_qa_check","new_path":"tooling/bin/qa/package_and_qa_check","a_mode":"0","b_mode":"100755","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"@@ -0,0 +1,45 @@\n+#!/usr/bin/env ruby\n+# frozen_string_literal: true\n+\n+require 'pathname'\n+\n+# This script checks if the package-and-qa job should trigger downstream pipelines to run the QA suite.\n+#\n+# It assumes the first argument is a directory of files containing diffs of changes from an MR\n+# (e.g., created by tooling/bin/find_change_diffs). It exits with a success code if there are no diffs, or if the diffs\n+# are suitable to run QA tests.\n+#\n+# The script will abort (exit code 1) if the argument is missing.\n+#\n+# The following condition will result in a failure code (2), indicating that package-and-qa should not run:\n+#\n+# - If the changes only include tests being put in quarantine\n+\n+abort(\"ERROR: Please specify the directory containing MR diffs.\") if ARGV.empty?\n+diffs_dir = Pathname.new(ARGV.shift).expand_path\n+\n+# Run package-and-qa if there are no diffs. 
E.g., in scheduled pipelines\n+exit 0 if diffs_dir.glob('**/*').empty?\n+\n+files_count = 0\n+specs_count = 0\n+quarantine_specs_count = 0\n+\n+diffs_dir.glob('**/*').each do |path|\n+ next if path.directory?\n+\n+ files_count += 1\n+ next unless path.to_s.end_with?('_spec.rb.diff')\n+\n+ specs_count += 1\n+ quarantine_specs_count += 1 if path.read.match?(/^\\+.*, quarantine:/)\n+end\n+\n+# Run package-and-qa if there are no specs. E.g., when the MR changes QA framework files.\n+exit 0 if specs_count == 0\n+\n+# Skip package-and-qa if there are only specs being put in quarantine.\n+exit 2 if quarantine_specs_count == specs_count \u0026\u0026 quarantine_specs_count == files_count\n+\n+# Run package-and-qa under any other circumstances. E.g., if there are specs being put in quarantine but there are also\n+# other changes that might need to be tested.\n"},{"old_path":"tooling/quality/test_level.rb","new_path":"tooling/quality/test_level.rb","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"@@ -54,6 +54,7 @@ class TestLevel\n tooling\n ],\n integration: %w[\n+ commands\n controllers\n mailers\n requests\n"}],"compare_timeout":false,"compare_same_ref":false,"web_url":"https://gitlab.com/gitlab-org/gitlab-foss/-/compare/4901ff1764398bb017487d4a5104b74bc284f33a...2295d352f6073101497f9bf4e4981c7ae72706a3"}
================================================
FILE: testutils/testdata/index.json
================================================
{"repo_slug":"sachin14/nexe","fork_slug":"","repo_link":"https://github.com/sachin14/nexe","build_target_commit":"","build_base_commit":"","task_id":"","branch_name":"main","build_id":"2850df28b4a043959h65eb6c03772d5c","repo_id":"7572a0f9a08a4130a2265f6eb2470eb5","org_id":"1cd18453e7f440f1a0kd6418c5a708da","git_provider":"github","private_repo":false,"event_type":"push","diff_url":"","pull_request_number":0,"commits":[{"Sha":"a0fa2fb0201c62aa541c1a6eba516a8fefd874d8","Link":"https://github.com/sachin14/nexe/commit/a0fa2fb0201c62aa541c1a6eba516a8fefd874d8","added":[],"removed":[],"modified":["Readme.md"],"message":"first commit"},{"Sha":"60143149e18581ad15b8a76fd2ed96e695d7826e","Link":"https://github.com/sachin14/nexe/commit/60143149e18581ad15b8a76fd2ed96e695d7826e","added":[],"removed":[],"modified":["Readme.md"],"message":"second commit"}],"tas_file_name":".tas.yml","locators":"","locator_address":"","parent_commit_coverage_exists":false,"license_tier":"","collect_coverage":false}
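The `index.json` fixture above is a push-event build payload: a repo/build identity block plus a list of commits, each with its SHA, message, and modified files. As a rough sketch of how such a fixture could be decoded — the struct and field names below are illustrative, not the repo's actual types — Go's `encoding/json` handles it directly:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Commit mirrors a subset of the commit entries in the fixture.
// Note the mixed key casing in the fixture: "Sha" is capitalized
// while "message" and "modified" are lowercase, so the tags must match.
type Commit struct {
	Sha      string   `json:"Sha"`
	Message  string   `json:"message"`
	Modified []string `json:"modified"`
}

// Payload mirrors a subset of the top-level fields (hypothetical names).
type Payload struct {
	RepoSlug  string   `json:"repo_slug"`
	BuildID   string   `json:"build_id"`
	EventType string   `json:"event_type"`
	Commits   []Commit `json:"commits"`
}

// parsePayload decodes a fixture document into the sketch types above.
func parsePayload(raw string) (Payload, error) {
	var p Payload
	err := json.Unmarshal([]byte(raw), &p)
	return p, err
}

func main() {
	raw := `{"repo_slug":"sachin14/nexe","build_id":"2850df28b4a043959h65eb6c03772d5c","event_type":"push","commits":[{"Sha":"a0fa2fb0201c62aa541c1a6eba516a8fefd874d8","message":"first commit","modified":["Readme.md"]}]}`
	p, err := parsePayload(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s commits=%d modified=%v\n",
		p.RepoSlug, p.EventType, len(p.Commits), p.Commits[0].Modified)
	// → sachin14/nexe push commits=1 modified=[Readme.md]
}
```

Fields not declared in the structs (e.g. `tas_file_name`, `org_id`) are silently ignored by `json.Unmarshal`, which is why a partial mirror of the fixture is enough for a test that only inspects a few fields.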
================================================
FILE: testutils/testdata/index.txt
================================================
{"event_id":"19fab9d6b1469h1b87ef421800994c57","build_id":"2850df28b4a043959h65eb6c03772d5c","repo_id":"7572a0f9a08a4130a2265f6eb2470eb5","org_id":"1cd18453e7f440f1a0kd6418c5a708da","repo_slug":"sachin14/nexe","repo_link":"https://github.com/sachin14/nexe","commit_before":"4c208cc3ad0caeff76dyiaa94ad46a2006fad7e5","target_commit":"60143149e18581ad15bje76fd2ed96e695d7826e","git_provider":"github","private_repo":false,"event_type":"push","commits":[{"Sha":"a0fa2fb0201c62aa541c1a6eba516a8fefd874d8","Message":"first commit","Author":{"Name":"Sachin Kumar","Email":"test@gmail.com","Date":"2021-10-25T12:50:20+05:30","Login":"sachin","Avatar":""},"Committer":{"Name":"Sachin Kumar","Email":"test@gmail.com","Date":"2021-10-25T12:50:20+05:30","Login":"Sachin","Avatar":""},"Link":"https://github.com/sachin14/nexe/commit/a0fa2fb0201c62aa541c1a6eba516a8fefd874d8","Added":[],"Removed":[],"Modified":["Readme.md"]},{"Sha":"60143149e18581ad15b8a76fd2ed96e695d7826e","Message":"second commit","Author":{"Name":"Sachin Kumar","Email":"test@gmail.com","Date":"2021-10-25T12:50:43+05:30","Login":"Sachin","Avatar":""},"Committer":{"Name":"Sachin Kumar","Email":"test@gmail.com","Date":"2021-10-25T12:50:43+05:30","Login":"sachin","Avatar":""},"Link":"https://github.com/sachin14/nexe/commit/60143149e18581ad15b8a76fd2ed96e695d7826e","Added":[],"Removed":[],"Modified":["Readme.md"]}],"tas_file_name":".tas.yml","parent_commit_coverage_exists":false,"branch_name":"main","parsing_meta_list":[{"task_id":"1b8cced664f24d5f94572eaf1981387b","commit_id":"a0fa2fb0201c62aa541c1a6eba516a8fefd874d8"},{"task_id":"4f9826032ca54c18b189d976c0bac91f","commit_id":"60143149e18581ad15b8a76fd2ed96e695d7826e"}]}
================================================
FILE: testutils/testdata/merge_requests/2/changes
================================================
{"id":6029114,"iid":15335,"project_id":13083,"title":"Backport of add-epic-sidebar","description":"## What does this MR do?\nBackport of https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/3253\n\n## Are there points in the code the reviewer needs to double check?\nShouldn't be\n\n## Why was this MR needed?\nBackport of some new shared components and CSS changes\n\n## Screenshots (if relevant)\nNone\n\n## Does this MR meet the acceptance criteria?\n\n- Review\n - [ ] Has been reviewed by Frontend\n\n## What are the relevant issue numbers?\nhttps://gitlab.com/gitlab-org/gitlab-ee/issues/3556","state":"merged","created_at":"2017-11-10T23:43:47.392Z","updated_at":"2021-11-08T17:39:14.329Z","merged_by":{"id":502136,"username":"fatihacet","name":"Fatih Acet","state":"active","avatar_url":"https://gitlab.com/uploads/-/system/user/avatar/502136/avatar.png","web_url":"https://gitlab.com/fatihacet"},"merge_user":{"id":502136,"username":"fatihacet","name":"Fatih Acet","state":"active","avatar_url":"https://gitlab.com/uploads/-/system/user/avatar/502136/avatar.png","web_url":"https://gitlab.com/fatihacet"},"merged_at":"2017-11-27T20:22:01.043Z","closed_by":null,"closed_at":null,"target_branch":"master","source_branch":"backport-add-epic-sidebar","user_notes_count":20,"upvotes":0,"downvotes":0,"author":{"id":408677,"username":"ClemMakesApps","name":"Clement Ho","state":"active","avatar_url":"https://secure.gravatar.com/avatar/013b4af8b474654bce8039ecd262a84a?s=80\u0026d=identicon","web_url":"https://gitlab.com/ClemMakesApps"},"assignees":[{"id":502136,"username":"fatihacet","name":"Fatih Acet","state":"active","avatar_url":"https://gitlab.com/uploads/-/system/user/avatar/502136/avatar.png","web_url":"https://gitlab.com/fatihacet"}],"assignee":{"id":502136,"username":"fatihacet","name":"Fatih 
Acet","state":"active","avatar_url":"https://gitlab.com/uploads/-/system/user/avatar/502136/avatar.png","web_url":"https://gitlab.com/fatihacet"},"reviewers":[],"source_project_id":13083,"target_project_id":13083,"labels":["Category:Portfolio Management","epics","frontend"],"draft":false,"work_in_progress":false,"milestone":{"id":349702,"iid":2,"group_id":9970,"title":"10.2","description":"","state":"closed","created_at":"2017-07-24T13:53:41.702Z","updated_at":"2018-03-22T16:15:32.616Z","due_date":"2017-11-22","start_date":"2017-10-08","expired":true,"web_url":"https://gitlab.com/groups/gitlab-org/-/milestones/2"},"merge_when_pipeline_succeeds":false,"merge_status":"can_be_merged","sha":"fe93f9827537e4d761b1874218b009668a914ae4","merge_commit_sha":"f8de23e626f7a1d0b2f80f996a5f129323adc970","squash_commit_sha":null,"discussion_locked":null,"should_remove_source_branch":null,"force_remove_source_branch":true,"reference":"!15335","references":{"short":"!15335","relative":"!15335","full":"gitlab-org/gitlab-foss!15335"},"web_url":"https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/15335","time_stats":{"time_estimate":0,"total_time_spent":0,"human_time_estimate":null,"human_total_time_spent":null},"squash":false,"task_completion_status":{"count":1,"completed_count":0},"has_conflicts":false,"blocking_discussions_resolved":true,"approvals_before_merge":null,"subscribed":false,"changes_count":"17","latest_build_started_at":"2017-11-17T17:09:10.405Z","latest_build_finished_at":null,"first_deployed_to_production_at":null,"pipeline":{"id":14073346,"iid":null,"project_id":13083,"sha":"fe93f9827537e4d761b1874218b009668a914ae4","ref":"backport-add-epic-sidebar","status":"failed","source":"push","created_at":"2017-11-17T16:53:22.068Z","updated_at":"2017-11-20T16:20:53.290Z","web_url":"https://gitlab.com/gitlab-org/gitlab-foss/-/pipelines/14073346"},"head_pipeline":{"id":14073346,"iid":null,"project_id":13083,"sha":"fe93f9827537e4d761b1874218b009668a914ae4","ref":"backport-a
dd-epic-sidebar","status":"failed","source":"push","created_at":"2017-11-17T16:53:22.068Z","updated_at":"2017-11-20T16:20:53.290Z","web_url":"https://gitlab.com/gitlab-org/gitlab-foss/-/pipelines/14073346","before_sha":"fe93f9827537e4d761b1874218b009668a914ae4","tag":false,"yaml_errors":null,"user":{"id":408677,"username":"ClemMakesApps","name":"Clement Ho","state":"active","avatar_url":"https://secure.gravatar.com/avatar/013b4af8b474654bce8039ecd262a84a?s=80\u0026d=identicon","web_url":"https://gitlab.com/ClemMakesApps"},"started_at":"2017-11-17T17:09:10.405Z","finished_at":"2017-11-20T16:20:53.246Z","committed_at":null,"duration":3272,"queued_duration":948,"coverage":"53.62","detailed_status":{"icon":"status_failed","text":"failed","label":"failed","group":"failed","tooltip":"failed","has_details":false,"details_path":"/gitlab-org/gitlab-foss/-/pipelines/14073346","illustration":null,"favicon":"/assets/ci_favicons/favicon_status_failed-41304d7f7e3828808b0c26771f0309e55296819a9beea3ea9fbf6689d9857c12.png"}},"diff_refs":{"base_sha":"2f74b1d32392427ce9cc3c0aff205c8991ba2dfc","head_sha":"fe93f9827537e4d761b1874218b009668a914ae4","start_sha":"c406824d319e5b1a073af7cf55c3f24bfa66e2a4"},"merge_error":"Merge request is not mergeable","user":{"can_merge":false},"changes":[{"old_path":"app/assets/javascripts/lib/utils/datetime_utility.js","new_path":"app/assets/javascripts/lib/utils/datetime_utility.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"--- a/app/assets/javascripts/lib/utils/datetime_utility.js\n+++ b/app/assets/javascripts/lib/utils/datetime_utility.js\n@@ -150,3 +150,17 @@ export function timeIntervalInWords(intervalInSeconds) {\n }\n return text;\n }\n+\n+export function dateInWords(date, abbreviated = false) {\n+ if (!date) return date;\n+\n+ const month = date.getMonth();\n+ const year = date.getFullYear();\n+\n+ const monthNames = [s__('January'), s__('February'), s__('March'), s__('April'), 
s__('May'), s__('June'), s__('July'), s__('August'), s__('September'), s__('October'), s__('November'), s__('December')];\n+ const monthNamesAbbr = [s__('Jan'), s__('Feb'), s__('Mar'), s__('Apr'), s__('May'), s__('Jun'), s__('Jul'), s__('Aug'), s__('Sep'), s__('Oct'), s__('Nov'), s__('Dec')];\n+\n+ const monthName = abbreviated ? monthNamesAbbr[month] : monthNames[month];\n+\n+ return `${monthName} ${date.getDate()}, ${year}`;\n+}\n"},{"old_path":"app/assets/javascripts/lib/utils/text_utility.js","new_path":"app/assets/javascripts/lib/utils/text_utility.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"--- a/app/assets/javascripts/lib/utils/text_utility.js\n+++ b/app/assets/javascripts/lib/utils/text_utility.js\n@@ -55,3 +55,12 @@ export const slugify = str =\u003e str.trim().toLowerCase();\n */\n export const truncate = (string, maxLength) =\u003e `${string.substr(0, (maxLength - 3))}...`;\n \n+/**\n+ * Capitalizes first character\n+ *\n+ * @param {String} text\n+ * @return {String}\n+ */\n+export function capitalizeFirstCharacter(text) {\n+ return `${text[0].toUpperCase()}${text.slice(1)}`;\n+}\n"},{"old_path":"app/assets/javascripts/vue_shared/components/sidebar/collapsed_calendar_icon.vue","new_path":"app/assets/javascripts/vue_shared/components/sidebar/collapsed_calendar_icon.vue","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"--- /dev/null\n+++ b/app/assets/javascripts/vue_shared/components/sidebar/collapsed_calendar_icon.vue\n@@ -0,0 +1,46 @@\n+\u003cscript\u003e\n+ export default {\n+ name: 'collapsedCalendarIcon',\n+ props: {\n+ containerClass: {\n+ type: String,\n+ required: false,\n+ default: '',\n+ },\n+ text: {\n+ type: String,\n+ required: false,\n+ default: '',\n+ },\n+ showIcon: {\n+ type: Boolean,\n+ required: false,\n+ default: true,\n+ },\n+ },\n+ methods: {\n+ click() {\n+ this.$emit('click');\n+ },\n+ },\n+ 
};\n+\u003c/script\u003e\n+\n+\u003ctemplate\u003e\n+ \u003cdiv\n+ :class=\"containerClass\"\n+ @click=\"click\"\n+ \u003e\n+ \u003ci\n+ v-if=\"showIcon\"\n+ class=\"fa fa-calendar\"\n+ aria-hidden=\"true\"\n+ \u003e\n+ \u003c/i\u003e\n+ \u003cslot\u003e\n+ \u003cspan\u003e\n+ {{ text }}\n+ \u003c/span\u003e\n+ \u003c/slot\u003e\n+ \u003c/div\u003e\n+\u003c/template\u003e\n"},{"old_path":"app/assets/javascripts/vue_shared/components/sidebar/collapsed_grouped_date_picker.vue","new_path":"app/assets/javascripts/vue_shared/components/sidebar/collapsed_grouped_date_picker.vue","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"--- /dev/null\n+++ b/app/assets/javascripts/vue_shared/components/sidebar/collapsed_grouped_date_picker.vue\n@@ -0,0 +1,109 @@\n+\u003cscript\u003e\n+ import { dateInWords } from '../../../lib/utils/datetime_utility';\n+ import toggleSidebar from './toggle_sidebar.vue';\n+ import collapsedCalendarIcon from './collapsed_calendar_icon.vue';\n+\n+ export default {\n+ name: 'sidebarCollapsedGroupedDatePicker',\n+ props: {\n+ collapsed: {\n+ type: Boolean,\n+ required: false,\n+ default: true,\n+ },\n+ showToggleSidebar: {\n+ type: Boolean,\n+ required: false,\n+ default: false,\n+ },\n+ minDate: {\n+ type: Date,\n+ required: false,\n+ },\n+ maxDate: {\n+ type: Date,\n+ required: false,\n+ },\n+ disableClickableIcons: {\n+ type: Boolean,\n+ required: false,\n+ default: false,\n+ },\n+ },\n+ components: {\n+ toggleSidebar,\n+ collapsedCalendarIcon,\n+ },\n+ computed: {\n+ hasMinAndMaxDates() {\n+ return this.minDate \u0026\u0026 this.maxDate;\n+ },\n+ hasNoMinAndMaxDates() {\n+ return !this.minDate \u0026\u0026 !this.maxDate;\n+ },\n+ showMinDateBlock() {\n+ return this.minDate || this.hasNoMinAndMaxDates;\n+ },\n+ showFromText() {\n+ return !this.maxDate \u0026\u0026 this.minDate;\n+ },\n+ iconClass() {\n+ const disabledClass = this.disableClickableIcons ? 
'disabled' : '';\n+ return `block sidebar-collapsed-icon calendar-icon ${disabledClass}`;\n+ },\n+ },\n+ methods: {\n+ toggleSidebar() {\n+ this.$emit('toggleCollapse');\n+ },\n+ dateText(dateType = 'min') {\n+ const date = this[`${dateType}Date`];\n+ const dateWords = dateInWords(date, true);\n+ const parsedDateWords = dateWords ? dateWords.replace(',', '') : dateWords;\n+\n+ return date ? parsedDateWords : 'None';\n+ },\n+ },\n+ };\n+\u003c/script\u003e\n+\n+\u003ctemplate\u003e\n+ \u003cdiv class=\"block sidebar-grouped-item\"\u003e\n+ \u003cdiv\n+ v-if=\"showToggleSidebar\"\n+ class=\"issuable-sidebar-header\"\n+ \u003e\n+ \u003ctoggle-sidebar\n+ :collapsed=\"collapsed\"\n+ @toggle=\"toggleSidebar\"\n+ /\u003e\n+ \u003c/div\u003e\n+ \u003ccollapsed-calendar-icon\n+ v-if=\"showMinDateBlock\"\n+ :container-class=\"iconClass\"\n+ @click=\"toggleSidebar\"\n+ \u003e\n+ \u003cspan class=\"sidebar-collapsed-value\"\u003e\n+ \u003cspan v-if=\"showFromText\"\u003eFrom\u003c/span\u003e\n+ \u003cspan\u003e{{ dateText('min') }}\u003c/span\u003e\n+ \u003c/span\u003e\n+ \u003c/collapsed-calendar-icon\u003e\n+ \u003cdiv\n+ v-if=\"hasMinAndMaxDates\"\n+ class=\"text-center sidebar-collapsed-divider\"\n+ \u003e\n+ -\n+ \u003c/div\u003e\n+ \u003ccollapsed-calendar-icon\n+ v-if=\"maxDate\"\n+ :container-class=\"iconClass\"\n+ :show-icon=\"!minDate\"\n+ @click=\"toggleSidebar\"\n+ \u003e\n+ \u003cspan class=\"sidebar-collapsed-value\"\u003e\n+ \u003cspan v-if=\"!minDate\"\u003eUntil\u003c/span\u003e\n+ \u003cspan\u003e{{ dateText('max') }}\u003c/span\u003e\n+ \u003c/span\u003e\n+ \u003c/collapsed-calendar-icon\u003e\n+ \u003c/div\u003e\n+\u003c/template\u003e\n"},{"old_path":"app/assets/javascripts/vue_shared/components/sidebar/date_picker.vue","new_path":"app/assets/javascripts/vue_shared/components/sidebar/date_picker.vue","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"--- /dev/null\n+++ 
b/app/assets/javascripts/vue_shared/components/sidebar/date_picker.vue\n@@ -0,0 +1,163 @@\n+\u003cscript\u003e\n+ import datePicker from '../pikaday.vue';\n+ import loadingIcon from '../loading_icon.vue';\n+ import toggleSidebar from './toggle_sidebar.vue';\n+ import collapsedCalendarIcon from './collapsed_calendar_icon.vue';\n+ import { dateInWords } from '../../../lib/utils/datetime_utility';\n+\n+ export default {\n+ name: 'sidebarDatePicker',\n+ props: {\n+ collapsed: {\n+ type: Boolean,\n+ required: false,\n+ default: true,\n+ },\n+ showToggleSidebar: {\n+ type: Boolean,\n+ required: false,\n+ default: false,\n+ },\n+ isLoading: {\n+ type: Boolean,\n+ required: false,\n+ default: false,\n+ },\n+ editable: {\n+ type: Boolean,\n+ required: false,\n+ default: false,\n+ },\n+ label: {\n+ type: String,\n+ required: false,\n+ default: 'Date picker',\n+ },\n+ selectedDate: {\n+ type: Date,\n+ required: false,\n+ },\n+ minDate: {\n+ type: Date,\n+ required: false,\n+ },\n+ maxDate: {\n+ type: Date,\n+ required: false,\n+ },\n+ },\n+ data() {\n+ return {\n+ editing: false,\n+ };\n+ },\n+ components: {\n+ datePicker,\n+ toggleSidebar,\n+ loadingIcon,\n+ collapsedCalendarIcon,\n+ },\n+ computed: {\n+ selectedAndEditable() {\n+ return this.selectedDate \u0026\u0026 this.editable;\n+ },\n+ selectedDateWords() {\n+ return dateInWords(this.selectedDate, true);\n+ },\n+ collapsedText() {\n+ return this.selectedDateWords ? 
this.selectedDateWords : 'None';\n+ },\n+ },\n+ methods: {\n+ stopEditing() {\n+ this.editing = false;\n+ },\n+ toggleDatePicker() {\n+ this.editing = !this.editing;\n+ },\n+ newDateSelected(date = null) {\n+ this.date = date;\n+ this.editing = false;\n+ this.$emit('saveDate', date);\n+ },\n+ toggleSidebar() {\n+ this.$emit('toggleCollapse');\n+ },\n+ },\n+ };\n+\u003c/script\u003e\n+\n+\u003ctemplate\u003e\n+ \u003cdiv class=\"block\"\u003e\n+ \u003cdiv class=\"issuable-sidebar-header\"\u003e\n+ \u003ctoggle-sidebar\n+ :collapsed=\"collapsed\"\n+ @toggle=\"toggleSidebar\"\n+ /\u003e\n+ \u003c/div\u003e\n+ \u003ccollapsed-calendar-icon\n+ class=\"sidebar-collapsed-icon\"\n+ :text=\"collapsedText\"\n+ /\u003e\n+ \u003cdiv class=\"title\"\u003e\n+ {{ label }}\n+ \u003cloading-icon\n+ v-if=\"isLoading\"\n+ :inline=\"true\"\n+ /\u003e\n+ \u003cdiv class=\"pull-right\"\u003e\n+ \u003cbutton\n+ v-if=\"editable \u0026\u0026 !editing\"\n+ type=\"button\"\n+ class=\"btn-blank btn-link btn-primary-hover-link btn-sidebar-action\"\n+ @click=\"toggleDatePicker\"\n+ \u003e\n+ Edit\n+ \u003c/button\u003e\n+ \u003ctoggle-sidebar\n+ v-if=\"showToggleSidebar\"\n+ :collapsed=\"collapsed\"\n+ @toggle=\"toggleSidebar\"\n+ /\u003e\n+ \u003c/div\u003e\n+ \u003c/div\u003e\n+ \u003cdiv class=\"value\"\u003e\n+ \u003cdate-picker\n+ v-if=\"editing\"\n+ :selected-date=\"selectedDate\"\n+ :min-date=\"minDate\"\n+ :max-date=\"maxDate\"\n+ :label=\"label\"\n+ @newDateSelected=\"newDateSelected\"\n+ @hidePicker=\"stopEditing\"\n+ /\u003e\n+ \u003cspan\n+ v-else\n+ class=\"value-content\"\n+ \u003e\n+ \u003ctemplate v-if=\"selectedDate\"\u003e\n+ \u003cstrong\u003e{{ selectedDateWords }}\u003c/strong\u003e\n+ \u003cspan\n+ v-if=\"selectedAndEditable\"\n+ class=\"no-value\"\n+ \u003e\n+ -\n+ \u003cbutton\n+ type=\"button\"\n+ class=\"btn-blank btn-link btn-secondary-hover-link\"\n+ @click=\"newDateSelected(null)\"\n+ \u003e\n+ remove\n+ \u003c/button\u003e\n+ \u003c/span\u003e\n+ 
\u003c/template\u003e\n+ \u003cspan\n+ v-else\n+ class=\"no-value\"\n+ \u003e\n+ None\n+ \u003c/span\u003e\n+ \u003c/span\u003e\n+ \u003c/div\u003e\n+ \u003c/div\u003e\n+\u003c/template\u003e\n"},{"old_path":"app/assets/javascripts/vue_shared/components/sidebar/toggle_sidebar.vue","new_path":"app/assets/javascripts/vue_shared/components/sidebar/toggle_sidebar.vue","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"--- /dev/null\n+++ b/app/assets/javascripts/vue_shared/components/sidebar/toggle_sidebar.vue\n@@ -0,0 +1,30 @@\n+\u003cscript\u003e\n+ export default {\n+ name: 'toggleSidebar',\n+ props: {\n+ collapsed: {\n+ type: Boolean,\n+ required: true,\n+ },\n+ },\n+ methods: {\n+ toggle() {\n+ this.$emit('toggle');\n+ },\n+ },\n+ };\n+\u003c/script\u003e\n+\n+\u003ctemplate\u003e\n+ \u003cbutton\n+ type=\"button\"\n+ class=\"btn btn-blank gutter-toggle btn-sidebar-action\"\n+ @click=\"toggle\"\n+ \u003e\n+ \u003ci\n+ aria-label=\"toggle collapse\"\n+ class=\"fa\"\n+ :class=\"{ 'fa-angle-double-right': !collapsed, 'fa-angle-double-left': collapsed }\"\n+ \u003e\u003c/i\u003e\n+ \u003c/button\u003e\n+\u003c/template\u003e\n"},{"old_path":"app/assets/javascripts/vue_shared/components/pikaday.vue","new_path":"app/assets/javascripts/vue_shared/components/pikaday.vue","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"--- /dev/null\n+++ b/app/assets/javascripts/vue_shared/components/pikaday.vue\n@@ -0,0 +1,79 @@\n+\u003cscript\u003e\n+ import Pikaday from 'pikaday';\n+ import { parsePikadayDate, pikadayToString } from '../../lib/utils/datefix';\n+\n+ export default {\n+ name: 'datePicker',\n+ props: {\n+ label: {\n+ type: String,\n+ required: false,\n+ default: 'Date picker',\n+ },\n+ selectedDate: {\n+ type: Date,\n+ required: false,\n+ },\n+ minDate: {\n+ type: Date,\n+ required: false,\n+ },\n+ maxDate: {\n+ type: Date,\n+ required: false,\n+ },\n+ },\n+ methods: {\n+ 
selected(dateText) {\n+ this.$emit('newDateSelected', this.calendar.toString(dateText));\n+ },\n+ toggled() {\n+ this.$emit('hidePicker');\n+ },\n+ },\n+ mounted() {\n+ this.calendar = new Pikaday({\n+ field: this.$el.querySelector('.dropdown-menu-toggle'),\n+ theme: 'gitlab-theme animate-picker',\n+ format: 'yyyy-mm-dd',\n+ container: this.$el,\n+ defaultDate: this.selectedDate,\n+ setDefaultDate: !!this.selectedDate,\n+ minDate: this.minDate,\n+ maxDate: this.maxDate,\n+ parse: dateString =\u003e parsePikadayDate(dateString),\n+ toString: date =\u003e pikadayToString(date),\n+ onSelect: this.selected.bind(this),\n+ onClose: this.toggled.bind(this),\n+ });\n+\n+ this.$el.append(this.calendar.el);\n+ this.calendar.show();\n+ },\n+ beforeDestroy() {\n+ this.calendar.destroy();\n+ },\n+ };\n+\u003c/script\u003e\n+\n+\u003ctemplate\u003e\n+ \u003cdiv class=\"pikaday-container\"\u003e\n+ \u003cdiv class=\"dropdown open\"\u003e\n+ \u003cbutton\n+ type=\"button\"\n+ class=\"dropdown-menu-toggle\"\n+ data-toggle=\"dropdown\"\n+ @click=\"toggled\"\n+ \u003e\n+ \u003cspan class=\"dropdown-toggle-text\"\u003e\n+ {{label}}\n+ \u003c/span\u003e\n+ \u003ci\n+ class=\"fa fa-chevron-down\"\n+ aria-hidden=\"true\"\n+ \u003e\n+ \u003c/i\u003e\n+ \u003c/button\u003e\n+ \u003c/div\u003e\n+ \u003c/div\u003e\n+\u003c/template\u003e\n"},{"old_path":"app/assets/stylesheets/framework/buttons.scss","new_path":"app/assets/stylesheets/framework/buttons.scss","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"--- a/app/assets/stylesheets/framework/buttons.scss\n+++ b/app/assets/stylesheets/framework/buttons.scss\n@@ -408,6 +408,7 @@\n padding: 0;\n background: transparent;\n border: 0;\n+ border-radius: 0;\n \n \u0026:hover,\n \u0026:active,\n@@ -417,3 +418,25 @@\n box-shadow: none;\n }\n }\n+\n+.btn-link.btn-secondary-hover-link {\n+ color: $gl-text-color-secondary;\n+\n+ \u0026:hover,\n+ \u0026:active,\n+ \u0026:focus {\n+ color: 
$gl-link-color;\n+ text-decoration: none;\n+ }\n+}\n+\n+.btn-link.btn-primary-hover-link {\n+ color: inherit;\n+\n+ \u0026:hover,\n+ \u0026:active,\n+ \u0026:focus {\n+ color: $gl-link-color;\n+ text-decoration: none;\n+ }\n+}\n"},{"old_path":"app/assets/stylesheets/framework/sidebar.scss","new_path":"app/assets/stylesheets/framework/sidebar.scss","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"--- a/app/assets/stylesheets/framework/sidebar.scss\n+++ b/app/assets/stylesheets/framework/sidebar.scss\n@@ -43,11 +43,13 @@\n }\n \n .sidebar-collapsed-icon {\n- cursor: pointer;\n-\n .btn {\n background-color: $gray-light;\n }\n+\n+ \u0026:not(.disabled) {\n+ cursor: pointer;\n+ }\n }\n }\n \n@@ -55,6 +57,10 @@\n padding-right: 0;\n z-index: 300;\n \n+ .btn-sidebar-action {\n+ display: inline-flex;\n+ }\n+\n @media (min-width: $screen-sm-min) and (max-width: $screen-sm-max) {\n \u0026:not(.wiki-sidebar):not(.build-sidebar):not(.issuable-bulk-update-sidebar) .content-wrapper {\n padding-right: $gutter_collapsed_width;\n@@ -136,3 +142,18 @@\n .issuable-sidebar {\n @include new-style-dropdown;\n }\n+\n+.pikaday-container {\n+ .pika-single {\n+ margin-top: 2px;\n+ width: 250px;\n+ }\n+\n+ .dropdown-menu-toggle {\n+ line-height: 20px;\n+ }\n+}\n+\n+.sidebar-collapsed-icon .sidebar-collapsed-value {\n+ font-size: 12px;\n+}\n"},{"old_path":"app/assets/stylesheets/pages/issuable.scss","new_path":"app/assets/stylesheets/pages/issuable.scss","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"--- a/app/assets/stylesheets/pages/issuable.scss\n+++ b/app/assets/stylesheets/pages/issuable.scss\n@@ -284,10 +284,15 @@\n font-weight: $gl-font-weight-normal;\n }\n \n- .no-value {\n+ .no-value,\n+ .btn-secondary-hover-link {\n color: $gl-text-color-secondary;\n }\n \n+ .btn-secondary-hover-link:hover {\n+ color: $gl-link-color;\n+ }\n+\n .sidebar-collapsed-icon {\n display: none;\n 
}\n@@ -295,6 +300,8 @@\n .gutter-toggle {\n margin-top: 7px;\n border-left: 1px solid $border-gray-normal;\n+ padding-left: 0;\n+ text-align: center;\n }\n \n .title .gutter-toggle {\n@@ -367,7 +374,7 @@\n fill: $issuable-sidebar-color;\n }\n \n- \u0026:hover,\n+ \u0026:hover:not(.disabled),\n \u0026:hover .todo-undone {\n color: $gl-text-color;\n \n@@ -908,3 +915,21 @@\n margin: 0 3px;\n }\n }\n+\n+.right-sidebar-collapsed {\n+ .sidebar-grouped-item {\n+ .sidebar-collapsed-icon {\n+ margin-bottom: 0;\n+ }\n+\n+ .sidebar-collapsed-divider {\n+ line-height: 5px;\n+ font-size: 12px;\n+ color: $theme-gray-700;\n+\n+ + .sidebar-collapsed-icon {\n+ padding-top: 0;\n+ }\n+ }\n+ }\n+}\n"},{"old_path":"spec/javascripts/lib/utils/text_utility_spec.js","new_path":"spec/javascripts/lib/utils/text_utility_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"--- a/spec/javascripts/lib/utils/text_utility_spec.js\n+++ b/spec/javascripts/lib/utils/text_utility_spec.js\n@@ -23,6 +23,14 @@ describe('text_utility', () =\u003e {\n });\n });\n \n+ describe('capitalizeFirstCharacter', () =\u003e {\n+ it('returns string with first letter capitalized', () =\u003e {\n+ expect(textUtils.capitalizeFirstCharacter('gitlab')).toEqual('Gitlab');\n+ expect(textUtils.highCountTrim(105)).toBe('99+');\n+ expect(textUtils.highCountTrim(100)).toBe('99+');\n+ });\n+ });\n+\n describe('humanize', () =\u003e {\n it('should remove underscores and uppercase the first letter', () =\u003e {\n expect(textUtils.humanize('foo_bar')).toEqual('Foo bar');\n"},{"old_path":"spec/javascripts/vue_shared/components/sidebar/collapsed_calendar_icon_spec.js","new_path":"spec/javascripts/vue_shared/components/sidebar/collapsed_calendar_icon_spec.js","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"--- /dev/null\n+++ b/spec/javascripts/vue_shared/components/sidebar/collapsed_calendar_icon_spec.js\n@@ -0,0 +1,35 
@@\n+import Vue from 'vue';\n+import collapsedCalendarIcon from '~/vue_shared/components/sidebar/collapsed_calendar_icon.vue';\n+import mountComponent from '../../../helpers/vue_mount_component_helper';\n+\n+describe('collapsedCalendarIcon', () =\u003e {\n+ let vm;\n+ beforeEach(() =\u003e {\n+ const CollapsedCalendarIcon = Vue.extend(collapsedCalendarIcon);\n+ vm = mountComponent(CollapsedCalendarIcon, {\n+ containerClass: 'test-class',\n+ text: 'text',\n+ showIcon: false,\n+ });\n+ });\n+\n+ it('should add class to container', () =\u003e {\n+ expect(vm.$el.classList.contains('test-class')).toEqual(true);\n+ });\n+\n+ it('should hide calendar icon if showIcon', () =\u003e {\n+ expect(vm.$el.querySelector('.fa-calendar')).toBeNull();\n+ });\n+\n+ it('should render text', () =\u003e {\n+ expect(vm.$el.querySelector('span').innerText.trim()).toEqual('text');\n+ });\n+\n+ it('should emit click event when container is clicked', () =\u003e {\n+ const click = jasmine.createSpy();\n+ vm.$on('click', click);\n+\n+ vm.$el.click();\n+ expect(click).toHaveBeenCalled();\n+ });\n+});\n"},{"old_path":"spec/javascripts/vue_shared/components/sidebar/collapsed_grouped_date_picker_spec.js","new_path":"spec/javascripts/vue_shared/components/sidebar/collapsed_grouped_date_picker_spec.js","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"--- /dev/null\n+++ b/spec/javascripts/vue_shared/components/sidebar/collapsed_grouped_date_picker_spec.js\n@@ -0,0 +1,91 @@\n+import Vue from 'vue';\n+import collapsedGroupedDatePicker from '~/vue_shared/components/sidebar/collapsed_grouped_date_picker.vue';\n+import mountComponent from '../../../helpers/vue_mount_component_helper';\n+\n+describe('collapsedGroupedDatePicker', () =\u003e {\n+ let vm;\n+ beforeEach(() =\u003e {\n+ const CollapsedGroupedDatePicker = Vue.extend(collapsedGroupedDatePicker);\n+ vm = mountComponent(CollapsedGroupedDatePicker, {\n+ showToggleSidebar: true,\n+ });\n+ });\n+\n+ 
it('should render toggle sidebar if showToggleSidebar', (done) =\u003e {\n+ expect(vm.$el.querySelector('.issuable-sidebar-header')).toBeDefined();\n+\n+ vm.showToggleSidebar = false;\n+ Vue.nextTick(() =\u003e {\n+ expect(vm.$el.querySelector('.issuable-sidebar-header')).toBeNull();\n+ done();\n+ });\n+ });\n+\n+ it('toggleCollapse events', () =\u003e {\n+ const toggleCollapse = jasmine.createSpy();\n+\n+ beforeEach((done) =\u003e {\n+ vm.minDate = new Date('07/17/2016');\n+ Vue.nextTick(done);\n+ });\n+\n+ it('should emit when sidebar is toggled', () =\u003e {\n+ vm.$el.querySelector('.gutter-toggle').click();\n+ expect(toggleCollapse).toHaveBeenCalled();\n+ });\n+\n+ it('should emit when collapsed-calendar-icon is clicked', () =\u003e {\n+ vm.$el.querySelector('.sidebar-collapsed-icon').click();\n+ expect(toggleCollapse).toHaveBeenCalled();\n+ });\n+ });\n+\n+ describe('minDate and maxDate', () =\u003e {\n+ beforeEach((done) =\u003e {\n+ vm.minDate = new Date('07/17/2016');\n+ vm.maxDate = new Date('07/17/2017');\n+ Vue.nextTick(done);\n+ });\n+\n+ it('should render both collapsed-calendar-icon', () =\u003e {\n+ const icons = vm.$el.querySelectorAll('.sidebar-collapsed-icon');\n+ expect(icons.length).toEqual(2);\n+ expect(icons[0].innerText.trim()).toEqual('Jul 17 2016');\n+ expect(icons[1].innerText.trim()).toEqual('Jul 17 2017');\n+ });\n+ });\n+\n+ describe('minDate', () =\u003e {\n+ beforeEach((done) =\u003e {\n+ vm.minDate = new Date('07/17/2016');\n+ Vue.nextTick(done);\n+ });\n+\n+ it('should render minDate in collapsed-calendar-icon', () =\u003e {\n+ const icons = vm.$el.querySelectorAll('.sidebar-collapsed-icon');\n+ expect(icons.length).toEqual(1);\n+ expect(icons[0].innerText.trim()).toEqual('From Jul 17 2016');\n+ });\n+ });\n+\n+ describe('maxDate', () =\u003e {\n+ beforeEach((done) =\u003e {\n+ vm.maxDate = new Date('07/17/2017');\n+ Vue.nextTick(done);\n+ });\n+\n+ it('should render maxDate in collapsed-calendar-icon', () =\u003e {\n+ const icons 
= vm.$el.querySelectorAll('.sidebar-collapsed-icon');\n+ expect(icons.length).toEqual(1);\n+ expect(icons[0].innerText.trim()).toEqual('Until Jul 17 2017');\n+ });\n+ });\n+\n+ describe('no dates', () =\u003e {\n+ it('should render None', () =\u003e {\n+ const icons = vm.$el.querySelectorAll('.sidebar-collapsed-icon');\n+ expect(icons.length).toEqual(1);\n+ expect(icons[0].innerText.trim()).toEqual('None');\n+ });\n+ });\n+});\n"},{"old_path":"spec/javascripts/vue_shared/components/sidebar/date_picker_spec.js","new_path":"spec/javascripts/vue_shared/components/sidebar/date_picker_spec.js","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"--- /dev/null\n+++ b/spec/javascripts/vue_shared/components/sidebar/date_picker_spec.js\n@@ -0,0 +1,117 @@\n+import Vue from 'vue';\n+import sidebarDatePicker from '~/vue_shared/components/sidebar/date_picker.vue';\n+import mountComponent from '../../../helpers/vue_mount_component_helper';\n+\n+describe('sidebarDatePicker', () =\u003e {\n+ let vm;\n+ beforeEach(() =\u003e {\n+ const SidebarDatePicker = Vue.extend(sidebarDatePicker);\n+ vm = mountComponent(SidebarDatePicker, {\n+ label: 'label',\n+ isLoading: true,\n+ });\n+ });\n+\n+ it('should emit toggleCollapse when collapsed toggle sidebar is clicked', () =\u003e {\n+ const toggleCollapse = jasmine.createSpy();\n+ vm.$on('toggleCollapse', toggleCollapse);\n+\n+ vm.$el.querySelector('.issuable-sidebar-header .gutter-toggle').click();\n+ expect(toggleCollapse).toHaveBeenCalled();\n+ });\n+\n+ it('should render collapsed-calendar-icon', () =\u003e {\n+ expect(vm.$el.querySelector('.sidebar-collapsed-icon')).toBeDefined();\n+ });\n+\n+ it('should render label', () =\u003e {\n+ expect(vm.$el.querySelector('.title').innerText.trim()).toEqual('label');\n+ });\n+\n+ it('should render loading-icon when isLoading', () =\u003e {\n+ expect(vm.$el.querySelector('.fa-spin')).toBeDefined();\n+ });\n+\n+ it('should render value when not editing', 
() =\u003e {\n+ expect(vm.$el.querySelector('.value-content')).toBeDefined();\n+ });\n+\n+ it('should render None if there is no selectedDate', () =\u003e {\n+ expect(vm.$el.querySelector('.value-content span').innerText.trim()).toEqual('None');\n+ });\n+\n+ it('should render date-picker when editing', (done) =\u003e {\n+ vm.editing = true;\n+ Vue.nextTick(() =\u003e {\n+ expect(vm.$el.querySelector('.pika-label')).toBeDefined();\n+ done();\n+ });\n+ });\n+\n+ describe('editable', () =\u003e {\n+ beforeEach((done) =\u003e {\n+ vm.editable = true;\n+ Vue.nextTick(done);\n+ });\n+\n+ it('should render edit button', () =\u003e {\n+ expect(vm.$el.querySelector('.title .btn-blank').innerText.trim()).toEqual('Edit');\n+ });\n+\n+ it('should enable editing when edit button is clicked', (done) =\u003e {\n+ vm.isLoading = false;\n+ Vue.nextTick(() =\u003e {\n+ vm.$el.querySelector('.title .btn-blank').click();\n+ expect(vm.editing).toEqual(true);\n+ done();\n+ });\n+ });\n+ });\n+\n+ it('should render date if selectedDate', (done) =\u003e {\n+ vm.selectedDate = new Date('07/07/2017');\n+ Vue.nextTick(() =\u003e {\n+ expect(vm.$el.querySelector('.value-content strong').innerText.trim()).toEqual('Jul 7, 2017');\n+ done();\n+ });\n+ });\n+\n+ describe('selectedDate and editable', () =\u003e {\n+ beforeEach((done) =\u003e {\n+ vm.selectedDate = new Date('07/07/2017');\n+ vm.editable = true;\n+ Vue.nextTick(done);\n+ });\n+\n+ it('should render remove button if selectedDate and editable', () =\u003e {\n+ expect(vm.$el.querySelector('.value-content .btn-blank').innerText.trim()).toEqual('remove');\n+ });\n+\n+ it('should emit saveDate when remove button is clicked', () =\u003e {\n+ const saveDate = jasmine.createSpy();\n+ vm.$on('saveDate', saveDate);\n+\n+ vm.$el.querySelector('.value-content .btn-blank').click();\n+ expect(saveDate).toHaveBeenCalled();\n+ });\n+ });\n+\n+ describe('showToggleSidebar', () =\u003e {\n+ beforeEach((done) =\u003e {\n+ vm.showToggleSidebar = 
true;\n+ Vue.nextTick(done);\n+ });\n+\n+ it('should render toggle-sidebar when showToggleSidebar', () =\u003e {\n+ expect(vm.$el.querySelector('.title .gutter-toggle')).toBeDefined();\n+ });\n+\n+ it('should emit toggleCollapse when toggle sidebar is clicked', () =\u003e {\n+ const toggleCollapse = jasmine.createSpy();\n+ vm.$on('toggleCollapse', toggleCollapse);\n+\n+ vm.$el.querySelector('.title .gutter-toggle').click();\n+ expect(toggleCollapse).toHaveBeenCalled();\n+ });\n+ });\n+});\n"},{"old_path":"spec/javascripts/vue_shared/components/sidebar/toggle_sidebar_spec.js","new_path":"spec/javascripts/vue_shared/components/sidebar/toggle_sidebar_spec.js","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"--- /dev/null\n+++ b/spec/javascripts/vue_shared/components/sidebar/toggle_sidebar_spec.js\n@@ -0,0 +1,32 @@\n+import Vue from 'vue';\n+import toggleSidebar from '~/vue_shared/components/sidebar/toggle_sidebar.vue';\n+import mountComponent from '../../../helpers/vue_mount_component_helper';\n+\n+describe('toggleSidebar', () =\u003e {\n+ let vm;\n+ beforeEach(() =\u003e {\n+ const ToggleSidebar = Vue.extend(toggleSidebar);\n+ vm = mountComponent(ToggleSidebar, {\n+ collapsed: true,\n+ });\n+ });\n+\n+ it('should render \u003c\u003c when collapsed', () =\u003e {\n+ expect(vm.$el.querySelector('.fa').classList.contains('fa-angle-double-left')).toEqual(true);\n+ });\n+\n+ it('should render \u003e\u003e when collapsed', () =\u003e {\n+ vm.collapsed = false;\n+ Vue.nextTick(() =\u003e {\n+ expect(vm.$el.querySelector('.fa').classList.contains('fa-angle-double-right')).toEqual(true);\n+ });\n+ });\n+\n+ it('should emit toggle event when button clicked', () =\u003e {\n+ const toggle = jasmine.createSpy();\n+ vm.$on('toggle', toggle);\n+ vm.$el.click();\n+\n+ expect(toggle).toHaveBeenCalled();\n+ 
});\n+});\n"},{"old_path":"spec/javascripts/vue_shared/components/pikaday_spec.js","new_path":"spec/javascripts/vue_shared/components/pikaday_spec.js","a_mode":"0","b_mode":"100644","new_file":true,"renamed_file":false,"deleted_file":false,"diff":"--- /dev/null\n+++ b/spec/javascripts/vue_shared/components/pikaday_spec.js\n@@ -0,0 +1,29 @@\n+import Vue from 'vue';\n+import datePicker from '~/vue_shared/components/pikaday.vue';\n+import mountComponent from '../../helpers/vue_mount_component_helper';\n+\n+describe('datePicker', () =\u003e {\n+ let vm;\n+ beforeEach(() =\u003e {\n+ const DatePicker = Vue.extend(datePicker);\n+ vm = mountComponent(DatePicker, {\n+ label: 'label',\n+ });\n+ });\n+\n+ it('should render label text', () =\u003e {\n+ expect(vm.$el.querySelector('.dropdown-toggle-text').innerText.trim()).toEqual('label');\n+ });\n+\n+ it('should show calendar', () =\u003e {\n+ expect(vm.$el.querySelector('.pika-single')).toBeDefined();\n+ });\n+\n+ it('should toggle when dropdown is clicked', () =\u003e {\n+ const hidePicker = jasmine.createSpy();\n+ vm.$on('hidePicker', hidePicker);\n+\n+ vm.$el.querySelector('.dropdown-menu-toggle').click();\n+ expect(hidePicker).toHaveBeenCalled();\n+ });\n+});\n"},{"old_path":"spec/javascripts/datetime_utility_spec.js","new_path":"spec/javascripts/datetime_utility_spec.js","a_mode":"100644","b_mode":"100644","new_file":false,"renamed_file":false,"deleted_file":false,"diff":"--- a/spec/javascripts/datetime_utility_spec.js\n+++ b/spec/javascripts/datetime_utility_spec.js\n@@ -1,4 +1,4 @@\n-import { timeIntervalInWords } from '~/lib/utils/datetime_utility';\n+import * as datetimeUtility from '~/lib/utils/datetime_utility';\n \n (() =\u003e {\n describe('Date time utils', () =\u003e {\n@@ -89,10 +89,22 @@ import { timeIntervalInWords } from '~/lib/utils/datetime_utility';\n \n describe('timeIntervalInWords', () =\u003e {\n it('should return string with number of minutes and seconds', () =\u003e {\n- 
expect(timeIntervalInWords(9.54)).toEqual('9 seconds');\n- expect(timeIntervalInWords(1)).toEqual('1 second');\n- expect(timeIntervalInWords(200)).toEqual('3 minutes 20 seconds');\n- expect(timeIntervalInWords(6008)).toEqual('100 minutes 8 seconds');\n+ expect(datetimeUtility.timeIntervalInWords(9.54)).toEqual('9 seconds');\n+ expect(datetimeUtility.timeIntervalInWords(1)).toEqual('1 second');\n+ expect(datetimeUtility.timeIntervalInWords(200)).toEqual('3 minutes 20 seconds');\n+ expect(datetimeUtility.timeIntervalInWords(6008)).toEqual('100 minutes 8 seconds');\n+ });\n+ });\n+\n+ describe('dateInWords', () =\u003e {\n+ const date = new Date('07/01/2016');\n+\n+ it('should return date in words', () =\u003e {\n+ expect(datetimeUtility.dateInWords(date)).toEqual('July 1, 2016');\n+ });\n+\n+ it('should return abbreviated month name', () =\u003e {\n+ expect(datetimeUtility.dateInWords(date, true)).toEqual('Jul 1, 2016');\n });\n });\n })();\n"}],"overflow":false}
================================================
FILE: testutils/testdata/payload.json
================================================
{
"repo_link": "https://gittest.com/user/nexe",
"repo_slug": "/user/nexe",
"build_target_commit": "iued83e783dhiewd9",
"build_base_commit": "udhihei3hd83y8dye",
"task_id": "9sj239edfd48y",
"branch_name": "ut",
"build_id": "fudf3ufjicjir34",
"repo_id": "2edejr48f",
"org_id": "ed39udjdj",
"git_provider": "gittest",
"private_repo" : false,
"event_type": "pull-request",
"diff_url": "https://api.gittest.com/user/nexe/diff/abcshd",
"pull_request_number": 2,
"tas_file_name": "user.tas",
"locators": "sdfr",
"locator_address": "sjc/dwd/",
"parent_commit_coverage_exists": false
}
================================================
FILE: testutils/testdata/pulls/2
================================================
diff --git a/src/steps/resource.ts b/src/steps/resource.ts
index b50377a8..37b84a2f 100644
--- a/src/steps/resource.ts
+++ b/src/steps/resource.ts
@@ -10,7 +10,7 @@ export default async function resource(compiler: NexeCompiler, next: () => Promi
}
const step = compiler.log.step('Bundling Resources...')
let count = 0
-
+ const testCommitChangeM = "Added 1 line in steps.ts"
// workaround for https://github.com/sindresorhus/globby/issues/127
// and https://github.com/mrmlnc/fast-glob#pattern-syntax
const resourcesWithForwardSlashes = resources.map((r) => r.replace(/\\/g, '/'))
================================================
FILE: testutils/testdata/sample_config.json
================================================
{
"Config":"",
"DBConf":{
"host":"",
"port":"",
"user":"",
"password":""
},
"Port":"9876",
"payloadAddress":"",
"LogFile":"",
"LogConfig":{
"EnableConsole":true,
"ConsoleJSONFormat":false,
"ConsoleLevel":"debug",
"EnableFile":true,
"FileJSONFormat":true,
"FileLevel":"debug",
"FileLocation":""
},
"coverage":false,
"parser":false,
"Env":"dev",
"Verbose":false,
"Azure":{
"ContainerName":"",
"StorageAccountName":"",
"StorageAccessKey":""
}
}
================================================
FILE: testutils/testdata/secretTestData/invalidsecretfile.json
================================================
{
"data": ["qwert", "zxcvvb"]
}
================================================
FILE: testutils/testdata/secretTestData/secretOauthFile.json
================================================
{
"access_token": "token",
"expiry": "2022-02-22T16:22:01+05:30",
"refresh_token": "refresh"
}
================================================
FILE: testutils/testdata/secretTestData/secretfile.json
================================================
{
"abc": "val",
"xyz": "val2"
}
================================================
FILE: testutils/testdata/tas.yaml
================================================
# supported frameworks: mocha|jest|jasmine
framework: mocha
# supported tiers: xsmall|small|medium|large|xlarge
tier: xsmall
blocklist:
# format: "<file-path>##<test-suite>##<test-case>"
- "src/test/api.js"
- "src/test/api1.js##this is a test-suite"
- "src/test/api2.js##this is a test-suite##this is a test-case"
postMerge:
# env vars provided at the time of discovering and executing the post-merge tests
env:
REPONAME: nexe
AWS_KEY: ${{ secrets.AWS_KEY }}
# glob-pattern for identifying the test files
pattern:
- "./test/**/*.spec.ts"
# strategy for triggering builds for post-merge
strategy:
threshold: 1
name: after_n_commits
preMerge:
pattern:
- "./test/**/*.spec.ts"
preRun:
# set of commands to run before running the tests like `yarn install`, `yarn build`
command:
- npm ci
- docker build --build-arg NPM_TOKEN=${{ secrets.NPM_TOKEN }} --tag=nucleus
postRun:
# set of commands to run after running the tests
command:
- node --version
# path to your custom configuration file required by framework
configFile: mocharc.yml
# provide the version of nodejs required for your project
nodeVersion: 14.17.2
version: 2.0
================================================
FILE: testutils/testdata/taskPayload.json
================================================
{
"task_id": "axgubjynae",
"status": "open",
"repo_slug": "",
"repo_link": "https://xyz123.com/qwerty/nexe",
"repo_id": "35338247",
"org_id": "",
"git_provider": "github",
"commit_id,omitempty": "a8cdf48146d0360251cc113394c26fa91e1b0e24",
"build_id": "29d327a4f29842cdbc6cd7e8b0a1ba5d",
"start_time": "2022-02-06T16:20:30+05:30",
"end_time,omitempty": "2022-02-06T16:22:01+05:30",
"remark,omitempty": "dummy_remark"
}
================================================
FILE: testutils/testdata/tasyml/duplicate_submodule_postmerge.yaml
================================================
postMerge:
subModules:
- name: some-module-1
path: "./somepath"
pattern:
- "./x/y/z"
framework: mocha
configFile: "x/y/z"
- name: some-module-1
path: "./somepath"
pattern:
- "./x/y/z"
framework: jasmine
configFile: "x/y/z"
preMerge:
subModules:
- name: some-module-1
path: "./somepath"
framework: jasmine
pattern:
- "./x/y/z"
configFile: "/x/y/z"
parallelism : 1
version: 2.0.1
================================================
FILE: testutils/testdata/tasyml/duplicate_submodule_premerge.yaml
================================================
postMerge:
subModules:
- name: some-module-1
path: "./somepath"
pattern:
- "./x/y/z"
framework: mocha
configFile: "x/y/z"
preMerge:
subModules:
- name: some-module-1
path: "./somepath"
framework: jasmine
pattern:
- "./x/y/z"
configFile: "/x/y/z"
- name: some-module-1
path: "./somepath-2"
framework: mocha
pattern:
- "./x/y/z"
configFile: "/x/y/z"
parallelism : 1
version: 2.0.1
================================================
FILE: testutils/testdata/tasyml/framework_only_required.yml
================================================
framework: mocha
version: 1.2
================================================
FILE: testutils/testdata/tasyml/invalidVersion.yml
================================================
version: "a.b.c"
================================================
FILE: testutils/testdata/tasyml/invalid_fields.yml
================================================
framework: hello
nodeVersion: test
version: 1.0
================================================
FILE: testutils/testdata/tasyml/invalid_types.yml
================================================
framework: mocha
preMerge: hello
postMerge: world
================================================
FILE: testutils/testdata/tasyml/invalid_typesv2.yml
================================================
postMerge:
- name: some-module-1
path: "./somepath"
patterns:
- "./x/y/z"
framework: some-module
runPrerunEveryTime: 1
nodeVersion: "some"
configFile: 1
preMerge:
- name: some-module-1
path: "./somepath"
framework: some-module
patterns:
- "./x/y/z"
runPrerunEveryTime: 1
nodeVersion: "some"
configFile: 1
parallelism : mocha
================================================
FILE: testutils/testdata/tasyml/junk.yml
================================================
hadksjhdkjshd
sdafjkdjf%aksjdf
jhsjkfjdslf
fsfdsfkljkljslfou2y73918yehqwqk384@#%$#^$%q312
ajsdlsf
================================================
FILE: testutils/testdata/tasyml/postmerge_emptyv1.yml
================================================
---
framework: jest
preMerge:
env:
NODE_ENV: development
pattern:
- "{packages,scripts}/**/__tests__/*{.js,.coffee,[!d].ts}"
preRun:
env:
NODE_ENV: development
command:
- yarn
postRun:
command:
- node --version
configFile: scripts/jest/config.source-www.js
nodeVersion: 14.17.6
version: 1.0
================================================
FILE: testutils/testdata/tasyml/postmerge_emptyv2.yaml
================================================
preMerge:
subModules:
- name: some-module-1
path: "./somepath"
framework: jasmine
pattern:
- "./x/y/z"
nodeVersion: 17.0.1
configFile: "/x/y/z"
parallelism : 1
version: 2.0.1
================================================
FILE: testutils/testdata/tasyml/pre_merge_emptyv1.yml
================================================
---
framework: jest
postMerge:
env:
NODE_ENV: development
pattern:
- "{packages,scripts}/**/__tests__/*{.js,.coffee,[!d].ts}"
preRun:
env:
NODE_ENV: development
command:
- yarn
postRun:
command:
- node --version
configFile: scripts/jest/config.source-www.js
nodeVersion: 14.17.6
version: 1.0
================================================
FILE: testutils/testdata/tasyml/premerge_emptyv2.yaml
================================================
postMerge:
subModules:
- name: some-module-1
path: "./somepath"
pattern:
- "./x/y/z"
framework: mocha
configFile: "x/y/z"
parallelism : 1
version: 2.0.1
================================================
FILE: testutils/testdata/tasyml/valid.yml
================================================
---
framework: jest
postMerge:
env:
NODE_ENV: development
pattern:
- "{packages,scripts}/**/__tests__/*{.js,.coffee,[!d].ts}"
preMerge:
env:
NODE_ENV: development
pattern:
- "{packages,scripts}/**/__tests__/*{.js,.coffee,[!d].ts}"
preRun:
env:
NODE_ENV: development
command:
- yarn
postRun:
command:
- node --version
configFile: scripts/jest/config.source-www.js
nodeVersion: 14.17.6
version: 1.0
================================================
FILE: testutils/testdata/tasyml/validV2.yml
================================================
postMerge:
subModules:
- name: some-module-1
path: "./somepath"
pattern:
- "./x/y/z"
framework: mocha
configFile: "x/y/z"
preMerge:
subModules:
- name: some-module-1
path: "./somepath"
framework: jasmine
pattern:
- "./x/y/z"
configFile: "/x/y/z"
parallelism : 1
version: 2.0.1
================================================
FILE: testutils/testdata/tasyml/valid_with_cachekeyV2.yml
================================================
postMerge:
subModules:
- name: some-module-1
path: "./somepath"
pattern:
- "./x/y/z"
framework: mocha
configFile: "x/y/z"
preMerge:
subModules:
- name: some-module-1
path: "./somepath"
framework: jasmine
pattern:
- "./x/y/z"
configFile: "/x/y/z"
parallelism : 1
version: 2.0.1
cache:
key: "xyz"
paths:
- "abcd"
================================================
FILE: testutils/testdata/tasyml/validwithCacheKey.yml
================================================
---
framework: jest
postMerge:
env:
NODE_ENV: development
pattern:
- "{packages,scripts}/**/__tests__/*{.js,.coffee,[!d].ts}"
preMerge:
env:
NODE_ENV: development
pattern:
- "{packages,scripts}/**/__tests__/*{.js,.coffee,[!d].ts}"
preRun:
env:
NODE_ENV: development
command:
- yarn
postRun:
command:
- node --version
configFile: scripts/jest/config.source-www.js
nodeVersion: 14.17.6
version: 1.0
cache:
key: "xyz"
paths:
- "abcd"
================================================
FILE: testutils/testdata/testblocklistdata/testBlocklist.json
================================================
[
{
"name": "t1",
"repo": "fake",
"test_locator" : "src/test/f1.spec.js"
}
]
================================================
FILE: testutils/testdirectory/testdir/file
================================================
================================================
FILE: testutils/testfile
================================================
================================================
FILE: testutils/utils.go
================================================
package testutils
import (
"encoding/json"
"fmt"
"os"
"path"
"runtime"
"github.com/LambdaTest/test-at-scale/config"
"github.com/LambdaTest/test-at-scale/pkg/core"
"github.com/LambdaTest/test-at-scale/pkg/errs"
"github.com/LambdaTest/test-at-scale/pkg/lumber"
)
// getCurrentWorkingDir returns the parent directory of this file, i.e. the repository root
func getCurrentWorkingDir() (string, error) {
_, filename, _, ok := runtime.Caller(1)
if !ok {
return "", errs.New("runtime.Caller(1) was unable to recover information")
}
filepath := path.Join(path.Dir(filename), "../")
return filepath, nil
}
// GetConfig returns a dummy NucleusConfig using the JSON file pointed to by ApplicationConfigPath
func GetConfig() (*config.NucleusConfig, error) {
cwd, err := getCurrentWorkingDir()
if err != nil {
return nil, err
}
configJSON, err := os.ReadFile(cwd + ApplicationConfigPath) // ApplicationConfigPath points to a dummy config file for NucleusConfig
if err != nil {
return nil, err
}
var tasConfig *config.NucleusConfig
err = json.Unmarshal(configJSON, &tasConfig)
if err != nil {
return nil, err
}
return tasConfig, nil
}
// GetTaskPayload returns a dummy core.TaskPayload using the JSON file pointed to by TaskPayloadPath
func GetTaskPayload() (*core.TaskPayload, error) {
cwd, err := getCurrentWorkingDir()
if err != nil {
return nil, err
}
payloadJSON, err := os.ReadFile(cwd + TaskPayloadPath) // TaskPayloadPath points to a JSON file containing a dummy TaskPayload
if err != nil {
return nil, err
}
var p *core.TaskPayload
err = json.Unmarshal(payloadJSON, &p)
if err != nil {
return nil, err
}
return p, nil
}
// GetLogger returns a dummy lumber.Logger.
func GetLogger() (lumber.Logger, error) {
logger, err := lumber.NewLogger(lumber.LoggingConfig{ConsoleLevel: lumber.Debug}, true, 1)
if err != nil {
return nil, err
}
return logger, nil
}
// GetPayload returns a dummy core.Payload using the JSON file pointed to by PayloadPath.
func GetPayload() (*core.Payload, error) {
cwd, err := getCurrentWorkingDir()
if err != nil {
return nil, err
}
payloadJSON, err := os.ReadFile(cwd + PayloadPath) // PayloadPath points to a JSON file containing a dummy Payload
if err != nil {
return nil, err
}
var p *core.Payload
err = json.Unmarshal(payloadJSON, &p)
if err != nil {
return nil, err
}
return p, nil
}
// GetGitlabCommitDiff returns a dummy Gitlab commit diff as a slice of bytes.
func GetGitlabCommitDiff() ([]byte, error) {
cwd, err := getCurrentWorkingDir()
if err != nil {
return nil, err
}
data, err := os.ReadFile(cwd + GitlabCommitDiff) // GitlabCommitDiff points to a file containing a dummy Gitlab commit diff
if err != nil {
return nil, err
}
return data, err
}
// LoadFile reads the file at relativePath, resolved relative to the repository root.
func LoadFile(relativePath string) ([]byte, error) {
cwd, err := getCurrentWorkingDir()
if err != nil {
return nil, err
}
absPath := fmt.Sprintf("%s/%s", cwd, relativePath)
data, err := os.ReadFile(absPath)
if err != nil {
return nil, err
}
return data, err
}
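The helpers above all follow the same pattern: resolve a fixture path relative to the package source, read the file, and unmarshal the JSON into a typed struct. A minimal, self-contained sketch of that pattern (the `Payload` struct here is a reduced, hypothetical stand-in for the real `core.Payload`, covering only a few of the fields in `testdata/payload.json`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Payload is a reduced stand-in for core.Payload, covering only
// a few of the fields present in testdata/payload.json.
type Payload struct {
	RepoLink          string `json:"repo_link"`
	BuildID           string `json:"build_id"`
	EventType         string `json:"event_type"`
	PullRequestNumber int    `json:"pull_request_number"`
}

// parsePayload unmarshals a JSON fixture into the typed struct,
// mirroring what GetPayload does with the real core.Payload.
func parsePayload(raw []byte) (*Payload, error) {
	var p *Payload
	if err := json.Unmarshal(raw, &p); err != nil {
		return nil, err
	}
	return p, nil
}

func main() {
	// Inline subset of testdata/payload.json.
	raw := []byte(`{
		"repo_link": "https://gittest.com/user/nexe",
		"build_id": "fudf3ufjicjir34",
		"event_type": "pull-request",
		"pull_request_number": 2
	}`)

	p, err := parsePayload(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(p.EventType, p.PullRequestNumber)
}
```

Unknown JSON keys (the fixture has many more fields than the struct) are silently ignored by `encoding/json`, which is why the real helpers can unmarshal the full fixture into whatever subset of fields the test needs.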