Repository: GoogleCloudPlatform/spinnaker-for-gcp
Branch: master
Commit: 26455a6d348a
Files: 84
Total size: 259.1 KB
Directory structure:
gitextract_n7i9yuwk/
├── .gitignore
├── README.md
├── apptest/
│ └── tester/
│ ├── Dockerfile
│ ├── build-and-run-tests.sh
│ ├── spinnaker-test-job.yaml
│ ├── tester.sh
│ └── tests/
│ └── basic-suite.yaml
├── ci/
│ ├── CLOUD_BUILD.md
│ ├── Dockerfile
│ ├── JENKINS.md
│ ├── README.md
│ ├── cloudbuild.yaml
│ └── install.bash
├── samples/
│ └── helloworldwebapp/
│ ├── cleanup_app_and_pipelines.sh
│ ├── create_app_and_pipelines.sh
│ ├── install.md
│ └── templates/
│ ├── pipelines/
│ │ ├── deployprod_json.template
│ │ └── deploystaging_json.template
│ └── repo/
│ ├── Dockerfile
│ ├── cloudbuild_yaml.template
│ ├── config/
│ │ ├── prod/
│ │ │ ├── namespace.yaml
│ │ │ ├── replicaset_yaml.template
│ │ │ └── service.yaml
│ │ └── staging/
│ │ ├── namespace.yaml
│ │ ├── replicaset_yaml.template
│ │ └── service.yaml
│ └── src/
│ └── main.go
├── scripts/
│ ├── cli/
│ │ ├── install_hal.sh
│ │ ├── install_spin.sh
│ │ └── update_hal.sh
│ ├── experimental/
│ │ └── configure_for_workload_identity.sh
│ ├── expose/
│ │ ├── backend-config.yml
│ │ ├── configure_endpoint.sh
│ │ ├── configure_hal_security.sh
│ │ ├── configure_iap.md
│ │ ├── configure_iap.sh
│ │ ├── deck-ingress.yml
│ │ ├── iap_policy.json
│ │ ├── launch_configure_iap.sh
│ │ ├── openapi.yml
│ │ └── set_iap_properties.sh
│ ├── install/
│ │ ├── instructions.txt
│ │ ├── provision-spinnaker.md
│ │ ├── quick-install.yml
│ │ ├── setup.sh
│ │ ├── setup_properties.sh
│ │ └── spinnakerAuditLog/
│ │ ├── config_json.template
│ │ ├── index_js.template
│ │ └── package.json
│ └── manage/
│ ├── add_gae_account.sh
│ ├── add_gce_account.sh
│ ├── add_gke_account.sh
│ ├── add_missing_properties.sh
│ ├── apply_config.sh
│ ├── check_cluster_config.sh
│ ├── check_duplicate_dirs.sh
│ ├── check_git_config.sh
│ ├── check_project_mismatch.sh
│ ├── cluster_utils.sh
│ ├── connect_to_redis.sh
│ ├── connect_unsecured.sh
│ ├── deploy_application_manifest.sh
│ ├── generate_deletion_script.sh
│ ├── grant_iap_access.sh
│ ├── instructions.txt
│ ├── landing_page_base.md
│ ├── landing_page_secured.md
│ ├── landing_page_unsecured.md
│ ├── list_samples.sh
│ ├── pull_config.sh
│ ├── push_and_apply.sh
│ ├── push_config.sh
│ ├── restore_backup_to_cloud_shell.sh
│ ├── restore_config_utils.sh
│ ├── service_utils.sh
│ ├── update_console.sh
│ ├── update_halyard_daemon.sh
│ ├── update_landing_page.sh
│ ├── update_management_environment.sh
│ └── update_spinnaker_version.sh
└── templates/
├── spinnaker_application_manifest_bottom.yaml
├── spinnaker_application_manifest_middle_secured.yaml
├── spinnaker_application_manifest_middle_unsecured.yaml
└── spinnaker_application_manifest_top.yaml
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
InstallHalyard.sh
samples/helloworldwebapp/templates/pipelines/deployprod.json
samples/helloworldwebapp/templates/pipelines/deploystaging.json
scripts/expose/*_expanded.*
scripts/install/properties
scripts/install/properties.*
scripts/install/InstallHalyard.sh
scripts/install/spinnakerAuditLog/config.json
scripts/install/spinnakerAuditLog/index.js
scripts/manage/*_expanded.*
scripts/manage/delete-all_*.*
.idea
================================================
FILE: README.md
================================================
# Install and manage Spinnaker on Google Cloud Platform
Spinnaker for Google Cloud Platform is a tool for easily installing a production-ready instance of Spinnaker, and for managing that instance over time.
## Do I want to use this solution?
This solution is for…
* Anyone who wants an easy path to install open-source Spinnaker, in a
production-ready configuration, on Google Cloud Platform
* Anyone who wants to "kick the tires" of Spinnaker, to decide if it's the right
CD solution for their needs
* Administrators who will manage one or more long-running instances of
Spinnaker, including adding additional administrators, adding accounts,
upgrading, and so on
This solution gives you...
* Google recommendations and best practices for installing and running Spinnaker
on GCP
* Pre-integration with many other services that Spinnaker is commonly used with
* Sample applications and other helpers for a smoother experience
## What is this solution?
Spinnaker for Google Cloud Platform is a solution for installing and managing
Spinnaker on Google Cloud Platform. It consists of an installation and
management console, Spinnaker and its microservices, and sample applications.
### What is Spinnaker?
Spinnaker is an open source, multi-cloud continuous delivery platform for
releasing software changes with high velocity and confidence.
If you would like to learn more about Spinnaker, please visit the
[Spinnaker website](https://www.spinnaker.io).
### What is Deck?
Deck is the Spinnaker UI. You access Deck in one of the following ways:
* Via port forwarding
The management console provides a command for forwarding port 8080, and a
button to click to access Deck via that port.
* Over the internet, on a publicly available domain
This domain is secured with [Identity-Aware Proxy](https://cloud.google.com/iap).
### The management console
The management console makes it easy for you to do the following:
* Install Spinnaker
Spinnaker for Google Cloud Platform makes it easy to get a working version of
open-source Spinnaker running on Google Kubernetes Engine. After it's
installed, you can make it available to your users. The installation flow
begins in the management console after you start the solution.
* Manage Spinnaker
Use this same management console to manage/operate your Spinnaker
installation, including adding administrators, and creating accounts for
deploying to additional GKE clusters or other providers.
The management flow begins after you finish installing Spinnaker. You can also
open it directly via a link from the GKE Applications page in the Google Cloud
Console.
The management console uses Cloud Shell, with instructions shown in a guide on
the right-hand side of the window. The guide shows the commands that will be
run, and you can click those commands to copy them into Cloud Shell and run them
there.
### What is Cloud Shell?
[Cloud Shell](https://cloud.google.com/shell) is a tool in Google Cloud Platform
that provides command-line access to GCP.
### How do I find and restore the instructions?
* If the instructions in the right-hand pane disappear, just enter the following
command in Cloud Shell:
```bash
cloudshell launch-tutorial ~/spinnaker-for-gcp/scripts/install/provision-spinnaker.md
```
* If you need to find your way back to the management console, you can relaunch
it by following the instructions under Install Spinnaker on Google Cloud
Platform.
* Refer back to this document if you get lost.
## Am I billed for this?
You are billed for Google Cloud Platform resources that are installed as part of
Spinnaker for Google Cloud Platform.
* [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/)
* [Cloud Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/)
* [Google Cloud Load Balancing](https://cloud.google.com/load-balancing/)
...and possibly other resources, depending on the options you select when you
install and configure Spinnaker. You can use the [Google Cloud Platform Pricing
Calculator](https://cloud.google.com/products/calculator/) to estimate the cost
of this solution.
[Learn more about Google Cloud pricing](https://cloud.google.com/pricing/) &
[free trial](https://cloud.google.com/developers/startups/).
## Install and use Spinnaker on Google Cloud Platform
You access this solution by clicking the **Go to Spinnaker for Google Cloud
Platform** button on the [Spinnaker for GCP
page](https://console.cloud.google.com/marketplace/details/google-cloud-platform/spinnaker)
in Marketplace.
After you've installed Spinnaker for Google Cloud Platform, you can access
Spinnaker and the management console from [Google Cloud
Console](https://console.cloud.google.com/kubernetes/application).
> Note: Spinnaker for Google Cloud Platform doesn't support regional clusters.
> If you intend to [install Spinnaker on an existing
> cluster](#install_spinnaker_on_existing_cluster), it must be zonal.
> Note: Google recommends that you deploy your resources using an account other
> than `spinnaker-install-account`. That account is used to install your
> spinnaker instance, and resources deployed using that account are installed
> into the spinnaker namespace by default. This namespace is not indexed, so
> your deployments will time out before they are deemed stable.
### Install Spinnaker on Google Cloud Platform
1. Start the solution from the [Spinnaker for GCP Marketplace
page](https://console.cloud.google.com/marketplace/details/google-cloud-platform/spinnaker)
by clicking the **Go to Spinnaker for Google Cloud Platform** button.
1. When prompted to Open in Cloud Shell, click **Proceed**.
Cloud Shell opens, along with a file tree showing the files in the Spinnaker
repository, and instructions.

> **Important:** If you've launched the management console at least once before,
> you might be prompted, in the shell, to resume with the clone you created
> before, update that clone, or clone a new copy of the repository. The first
> option is best (`cd` into the existing directory). Don't clone a new copy.
The spinnaker-for-gcp repository is cloned into your Cloud Shell.
1. Follow the instructions shown on the screen.
The flow in the management console guides you through the installation process,
presenting you with commands, which you can copy to the Cloud Shell prompt and
then execute by pressing **Enter**. The commands run scripts that automate the
process of installing Spinnaker on GKE.
If the instruction pane disappears at any time, you can restore it using the
following command, from Cloud Shell:
```bash
cloudshell launch-tutorial ~/spinnaker-for-gcp/scripts/install/provision-spinnaker.md
```
### Access Spinnaker
After you've installed Spinnaker, you can execute a command to forward ports,
which allows you to access the Deck UI and start using Spinnaker. You can share
the port-forwarding command with your users, and if they have access to the GKE
cluster, they can reach Deck (the Spinnaker UI) on port 8080.
Alternatively, you can expose Spinnaker over the public internet, secured using
[Identity-Aware Proxy](https://cloud.google.com/docs/ci-cd/spinnaker/spinnaker-for-gcp#expose_iap).
Both alternatives are described below.
#### Access Spinnaker by forwarding ports
You can run a command in Cloud Shell in the management console, to forward ports
so you can access Spinnaker from localhost:8080.
1. Click to copy the `connect_unsecured.sh` command in the management console,
and press **Enter**.
This forwards the local port 8080 to port 9000 (the port Deck uses) on the pod
running Deck.
1. Click the "Connect to Spinnaker…" link. This highlights the **Preview** button.
1. Click the highlighted **Preview** button, and select **Preview on port 8080**.

Deck, the Spinnaker user interface, opens in your browser. The Spinnaker
documentation site has instructions for using Spinnaker.
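If you prefer to run the equivalent port-forward by hand instead of through the console, a minimal sketch follows. It assumes `kubectl` is already pointed at the Spinnaker cluster and that Deck runs in the default `spinnaker` namespace as the `spin-deck` deployment (the names a standard Halyard-style install uses); the helper only prints the command so you can review it before running:

```bash
#!/usr/bin/env bash
# Build the port-forward command instead of running it directly, so the
# target names can be checked first. The namespace and deployment name
# are assumptions based on a default Halyard-style install ("spinnaker"
# namespace, "spin-deck" deployment listening on port 9000).
deck_forward_cmd() {
  local ns="${1:-spinnaker}" local_port="${2:-8080}"
  echo "kubectl -n ${ns} port-forward deployment/spin-deck ${local_port}:9000"
}

deck_forward_cmd   # review the output, then run it with: eval "$(deck_forward_cmd)"
```

After the forward is running, Deck is reachable at localhost:8080, matching the behavior of `connect_unsecured.sh` described above.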
Back in the management console, there are a few other things you can do:
* Make Spinnaker securely available to your teams without having to forward
ports
* View the Spinnaker audit log
* View logs from Spinnaker microservices
* Click **Next** to move on to the Spinnaker management portion of the solution.
* Share the port-forwarding command with your users. If they have access to the
GKE cluster, they can reach Deck (the Spinnaker UI) on port 8080.
### Give your users access to Spinnaker over the internet
The console includes a command that helps you create a secure endpoint from
which to expose Spinnaker to your users over the internet.
> **Note:** If you need to keep Spinnaker private, you can set up port
> forwarding for your users.
1. Navigate to step 2 of the installation flow in the Management console
("Connect to Spinnaker").
1. Under "Expose Spinnaker publicly," click the button to copy the command to
the command line, and press **Enter**.
The script creates a new endpoint from which to serve your Spinnaker instance.
After the script finishes, the guidance in the console changes to show
instructions for setting up OAuth so that your users can access this endpoint.
1. Follow those on-screen instructions.
Make sure when you create your OAuth credentials that you copy the generated
client ID and secret. You'll need to provide them when prompted by the script.
> **Note:** This process can take up to an hour, even if it appears that the
> script has finished.
You now have a Spinnaker endpoint that you can share with your users, who
authenticate into it using OAuth2. A link to Spinnaker is displayed in the
management console. There is also a link on the GKE applications page for this
Spinnaker instance.
### Manage Spinnaker
Use the management console to manage your spinnaker instance, including the
following actions:
* Add administrators (operators)
* Add cloud provider accounts
A provider is the cloud environment (for example, Google Compute Engine) where
you deploy your applications.
* Upgrade Spinnaker
* Invoke Halyard commands to configure Spinnaker
* Invoke spin commands to manage Spinnaker resources, like applications and
pipelines
1. Access the management portion of this console.
Use one of the following options:
**If the console is already open:**
1. At the end of the installation flow, click **Next**.
1. Copy the command on the Next steps page and press **Enter**.
The instructions pane changes to start the management process.

**If the console is not already open:**
1. Go to the Google Kubernetes Engine applications page.
1. Open the Spinnaker application.
The application description includes a link: Open Management Environment
in Cloud Shell.
1. Click that link to open the management console, which now starts with the
management/admin functionality.

1. Select your GCP project, and click **Start**.
#### Add administrators for your Spinnaker instance
You can give access to more operators, who can then use the management console.
1. On the [IAM permissions page](https://console.developers.google.com/iam-admin/iam),
grant the person the 'Owner' role on the GCP project where you've installed Spinnaker.
1. If you are serving Spinnaker on an IAP-secured endpoint, and if the person to
whom you're giving operator rights doesn't already have *user* access, use the
following command (which is also on step 5 of the management part of the
console):
```bash
~/spinnaker-for-gcp/scripts/manage/grant_iap_access.sh
```
...and follow the instructions on the Cloud Shell command line.
#### Add cloud provider accounts
You can use the management console to add accounts for as many cloud providers
[as Spinnaker supports](https://www.spinnaker.io/setup/install/providers/).
You'll need one for each cloud on which your users intend to deploy
applications. For example, if they will deploy applications to Google Compute
Engine and AWS, you'll add a provider account for each.
The management console includes the following command, for adding a GKE account:
`~/spinnaker-for-gcp/scripts/manage/add_gke_account.sh`
And for Google Compute Engine:
`~/spinnaker-for-gcp/scripts/manage/add_gce_account.sh`
And for Google App Engine:
`~/spinnaker-for-gcp/scripts/manage/add_gae_account.sh`
You can run these commands from the management console or enter them in Cloud
Shell against an existing Spinnaker instance.
#### Run Halyard commands
You can invoke [any hal command](https://www.spinnaker.io/reference/halyard/commands)
to configure and administer your Spinnaker installation.
To do so, just invoke the command from the Cloud Shell in the management
console, *after* you've installed Spinnaker.
#### Upgrade Spinnaker
1. Find out the version you want to upgrade to.
The [Versions page](https://www.spinnaker.io/community/releases/versions)
lists the stable versions available.
1. In the console, navigate to the management flow:
`~/spinnaker-for-gcp/scripts/manage/update_console.sh`
1. Click **Next** until you see the screen titled "Scripts for Common Commands."
1. Under "Upgrade Spinnaker," copy the first command to the shell, and press
**Enter**.
That command is...
```bash
cloudshell edit \
~/spinnaker-for-gcp/scripts/install/properties
```
1. Edit the Spinnaker version in the `properties` file that is displayed.
```bash
export SPINNAKER_VERSION=1.19.3
```
The [Spinnaker Versions page](https://www.spinnaker.io/community/releases/versions)
shows the latest versions available.
1. Use the following command to invoke Halyard to apply the changes:
```bash
~/spinnaker-for-gcp/scripts/manage/update_spinnaker_version.sh
```
## Restart the management console
If you need to restart the console for any reason (for example, you closed the
tab or window), you can restart it in the same way that you
[started it](https://console.cloud.google.com/marketplace/details/google-cloud-platform/spinnaker).
You can also launch it from the [GKE Applications
page](https://console.cloud.google.com/kubernetes/application) in the Google
Cloud Console, if you've previously installed Spinnaker for Google Cloud
Platform.
When you restart the console, it prompts you to resume from where you left off, if you want.
## Upgrade the management console
1. In the management console, navigate to step 3, "Scripts for Common Commands,"
and scroll to the bottom of the page.
1. Run the command shown under "Upgrade Management Environment."
The management console is upgraded to include the latest changes.
## Remove Spinnaker for Google Cloud Platform
> **Warning:** If you installed Spinnaker on pre-existing infrastructure (GKE
> cluster, Redis, service accounts), this script deletes those items. If you
> want to keep them, edit the generated cleanup script
> (`~/spinnaker-for-gcp/scripts/manage/generate_deletion_script.sh`) to comment
> out the specific deletion commands for items you want to keep.
If you want to remove Spinnaker for any reason:
1. Open the management console and click **Next** until you get to the "Delete
Spinnaker" page.
1. Copy the command to the Cloud Shell terminal, and press **Enter**.
All resources that were created for this Spinnaker instance, and any existing
resources on which you might have deployed, are deleted.
## Sample Applications
The Spinnaker for Google Cloud Platform solution comes with sample applications
to help you get started with Spinnaker.
To install them:
1. In the management console, click **Next** until you get to the step titled
"Use Spinnaker."
1. Under **Install sample applications and pipelines**, click the button to
paste the command, and press **Enter**.
Cloud Shell returns a list of available sample apps, numbered.
1. Press the number corresponding to the application you want, or the number
corresponding to "Quit" to exit without installing any.
1. Press **Enter**.
The tutorial pane now displays guidance for the sample application.
1. To exit the sample app and return to the management portion of the console,
click **Start** and then **Next**, then scroll to the bottom of the "Start a new
build" page, and run the command under "Return to Spinnaker console."
## Other considerations
### Spinnaker for GCP architecture
Spinnaker and its microservices are installed on GKE using the following
architecture:

### Install Spinnaker on an existing cluster
You can install your Spinnaker instance or instances on pre-existing
infrastructure, instead of having this solution create it new.
The cluster must have the following:
* IP aliases enabled, because this solution uses a hosted Redis instance
* Full Cloud Platform scope for its nodes if you're using the project default
service account
Before you run the installation script, do the following:
1. Copy and run the following command (which is also available in step 1 of the
installation flow):
```bash
cloudshell edit \
~/spinnaker-for-gcp/scripts/install/properties
```
The properties file is opened in the file editor.
1. Edit this section of the `properties` file to identify the Kubernetes cluster
on which to install Spinnaker:
```bash
# If cluster does not exist, it will be created.
export GKE_CLUSTER=$DEPLOYMENT_NAME
export ZONE=us-west1-b
export REGION=us-west1
```
1. Similarly, edit other properties to identify other existing infrastructure
and accounts that you want to use, if applicable.
For example, an existing Cloud Memorystore Redis instance, a bucket, or a
service account. In each case, if the infrastructure doesn't exist, the
installation script creates it for you.
### Manage multiple Spinnaker installations
If you run multiple Spinnaker instances, they must be on separate clusters, and therefore in different Kubernetes contexts.
> **Important:** If you're trying to install multiple Spinnaker instances, don't
> clone multiple copies of the spinnaker-for-gcp repo.
To manage one of those installations:
1. Get your credentials.
```bash
gcloud container clusters get-credentials <CLUSTER_NAME> --zone <ZONE>
```
1. Switch to the appropriate Kubernetes context.
```bash
kubectl config use-context <CONTEXT_NAME>
```
1. Pull the configuration stored in that cluster.
```bash
~/spinnaker-for-gcp/scripts/manage/pull_config.sh
```
The config now in `~/spinnaker-for-gcp/scripts/install/properties` is the one
for that Spinnaker instance. You can then perform the usual management tasks,
including running `hal` commands; those commands apply to the Spinnaker instance
in the chosen context.
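The steps above can be wrapped in one small helper. The cluster and zone values below are placeholders to replace with your own, the script path is the default clone location, and the helper only prints the commands so nothing runs unreviewed:

```bash
#!/usr/bin/env bash
# Sketch: print the commands needed to point kubectl (and the management
# scripts) at one specific Spinnaker installation. The cluster and zone
# arguments are placeholders.
switch_instance_cmds() {
  local cluster="$1" zone="$2"
  echo "gcloud container clusters get-credentials ${cluster} --zone ${zone}"
  echo "~/spinnaker-for-gcp/scripts/manage/pull_config.sh"
}

switch_instance_cmds my-spinnaker-cluster us-west1-b
```

Note that `gcloud container clusters get-credentials` also switches the current kubectl context, so a separate `kubectl config use-context` is only needed when moving between credentials you have already fetched.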
================================================
FILE: apptest/tester/Dockerfile
================================================
FROM gcr.io/cloud-marketplace-tools/testrunner:0.1.2
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
gettext \
jq \
uuid-runtime \
wget \
curl \
&& rm -rf /var/lib/apt/lists/*
RUN wget -q -O /bin/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \
&& chmod 755 /bin/kubectl
COPY tests/basic-suite.yaml /tests/basic-suite.yaml
COPY tester.sh /tester.sh
WORKDIR /
ENTRYPOINT ["/tester.sh"]
================================================
FILE: apptest/tester/build-and-run-tests.sh
================================================
#!/bin/bash
#
# Would expect this to be deleted once these tests are properly integrated with GCP Marketplace Verification Pipeline.
bold() {
  echo ". $(tput bold)" "$*" "$(tput sgr0)";
}

if [ -z "$PROJECT_ID" ]; then
  PROJECT_ID=$(gcloud info --format='value(config.project)')
fi

if [ -z "$PROJECT_ID" ]; then
  bold "Please set PROJECT_ID env var."
  exit 1
fi
export PROJECT_ID
docker build -t gcr.io/$PROJECT_ID/spinnaker-c2d-tests .
docker push gcr.io/$PROJECT_ID/spinnaker-c2d-tests:latest
kubectl delete job spinnaker-test-job
envsubst < spinnaker-test-job.yaml | kubectl apply -f -
kubectl wait --for condition=complete job spinnaker-test-job
kubectl logs -l job-name=spinnaker-test-job
================================================
FILE: apptest/tester/spinnaker-test-job.yaml
================================================
# Would expect this to be deleted once these tests are properly integrated with GCP Marketplace Verification Pipeline.
apiVersion: batch/v1
kind: Job
metadata:
  name: spinnaker-test-job
spec:
  template:
    spec:
      containers:
      - name: spinnaker-tester-container
        image: gcr.io/$PROJECT_ID/spinnaker-c2d-tests
      restartPolicy: Never
================================================
FILE: apptest/tester/tester.sh
================================================
#!/bin/bash
#
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -xeo pipefail
shopt -s nullglob
export CLOUDDRIVER_ADDR="spin-clouddriver.spinnaker"
export GATE_ADDR="spin-gate.spinnaker"
for test in /tests/*; do
  testrunner -logtostderr "--test_spec=${test}"
done
================================================
FILE: apptest/tester/tests/basic-suite.yaml
================================================
actions:
- name: Clouddriver is up and healthy
  bashTest:
    script: curl -k "http://{{ .Env.CLOUDDRIVER_ADDR }}:7002/health" | jq -r .status
    expect:
      stdout:
        equals: 'UP'
      exitCode:
        equals: 0
- name: Gate returns credentials and they include default account
  bashTest:
    script: curl -k "http://{{ .Env.GATE_ADDR }}:8084/credentials" | jq [.[].name]
    expect:
      stdout:
        contains: '"spinnaker-install-account"'
      exitCode:
        equals: 0
================================================
FILE: ci/CLOUD_BUILD.md
================================================
# Using Cloud Build to install Spinnaker for GCP
## A note about Shared VPC support
You can't use Cloud Build to install Spinnaker for GCP with a shared VPC. For Shared VPC support, conduct the [setup in Cloud Shell](https://cloud.google.com/docs/ci-cd/spinnaker/spinnaker-for-gcp).
## Service account
To install Spinnaker for GCP, you need to grant the [Cloud Build service account](https://console.cloud.google.com/cloud-build/settings) the following roles:
- Cloud Functions Developer - roles/cloudfunctions.developer
- Compute Network Viewer - roles/compute.networkViewer
- Kubernetes Engine Admin - roles/container.admin
- Create Service Accounts - roles/iam.serviceAccountCreator
- Pub/Sub Editor - roles/pubsub.editor
- Cloud Memorystore Redis Admin - roles/redis.admin
- Service Usage Admin - roles/serviceusage.serviceUsageAdmin
- Source Repository Administrator - roles/source.admin
- Storage Admin - roles/storage.admin
- Project IAM Admin - roles/resourcemanager.projectIamAdmin
- Service Account User - roles/iam.serviceAccountUser
You can grant these roles using the IAM UI or with [gcloud](https://cloud.google.com/sdk/gcloud/reference/projects/add-iam-policy-binding).
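If you prefer `gcloud`, the role list above can be applied in a loop. The project id and service-account address below are placeholders (the Cloud Build account has the form `PROJECT_NUMBER@cloudbuild.gserviceaccount.com`); this sketch only prints the binding commands so you can review them before running:

```bash
#!/usr/bin/env bash
# Print one add-iam-policy-binding command per role listed above.
# The project id and service-account address passed in are placeholders.
print_binding_cmds() {
  local project="$1" sa="$2" role
  for role in \
      roles/cloudfunctions.developer roles/compute.networkViewer \
      roles/container.admin roles/iam.serviceAccountCreator \
      roles/pubsub.editor roles/redis.admin \
      roles/serviceusage.serviceUsageAdmin roles/source.admin \
      roles/storage.admin roles/resourcemanager.projectIamAdmin \
      roles/iam.serviceAccountUser; do
    echo "gcloud projects add-iam-policy-binding ${project}" \
         "--member serviceAccount:${sa} --role ${role}"
  done
}

print_binding_cmds my-project 123456789012@cloudbuild.gserviceaccount.com
```

Pipe the output through `sh` (or paste the lines into a terminal) once you have verified the project and service account are correct.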
## Enable Cloud Resource Manager API
For Cloud Build to successfully retrieve IAM policies, you must enable the Cloud Resource Manager API. Visit this URL, substituting your project id.
https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/overview?project=[PROJECT_ID]
## Properties file
To get Cloud Build to install Spinnaker for GCP, you need to generate a properties file:
1. Run [setup_properties.sh](../scripts/install/setup_properties.sh).
1. Copy that file to the directory containing the Cloud Build YAML, so the installation script can access it while executing the Cloud Build job.
## Submitting a Build
Cloud Builds can be triggered using [gcloud](https://cloud.google.com/cloud-build/docs/running-builds/start-build-manually), [build triggers](https://cloud.google.com/cloud-build/docs/running-builds/automate-builds), or [GitHub app triggers](https://cloud.google.com/cloud-build/docs/create-github-app-triggers). The solution in this repository installs Spinnaker for GCP using a gcloud-triggered build. Follow these steps to start a build:
1. Create a new directory. The contents of this directory will be submitted to Cloud Build.
2. Place the generated properties file into that directory.
3. Copy the [cloudbuild.yaml](cloudbuild.yaml) file into the directory and edit the `user.name` and `user.email` used in the Git configuration steps.
```yaml
- name: gcr.io/cloud-builders/git
args: ['config', '--global', 'user.name', '<example-user>']
- name: gcr.io/cloud-builders/git
args: ['config', '--global', 'user.email', '<example-user@example.com>']
```
4. Copy the [Dockerfile](Dockerfile) and [install.bash](install.bash) file into the directory.
5. Submit the build to Cloud Build: `gcloud builds submit --timeout "25m" --config cloudbuild.yaml --project PROJECT_ID .`
Cloud Build will execute the job, installing Spinnaker for GCP. If you make any changes to the properties file, re-run the job. Additional instructions for how to access or manage the deployed Spinnaker application are available [here](https://cloud.google.com/docs/ci-cd/spinnaker/spinnaker-for-gcp#access_spinnaker).
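The five steps above can be scripted. The sketch below copies the needed files from an existing clone (the source path default is the standard Cloud Shell clone location assumed by this guide) into a fresh build directory; the final submit is left commented out so you can edit `user.name`/`user.email` and substitute your project id first:

```bash
#!/usr/bin/env bash
set -e
# Assemble a Cloud Build submission directory from an existing
# spinnaker-for-gcp clone. The default source path is an assumption.
prepare_build_dir() {
  local src="${1:-$HOME/spinnaker-for-gcp}" dest="$2"
  mkdir -p "$dest"
  cp "$src/scripts/install/properties" "$dest/"
  cp "$src/ci/cloudbuild.yaml" "$src/ci/Dockerfile" "$src/ci/install.bash" "$dest/"
}

# prepare_build_dir ~/spinnaker-for-gcp ./spinnaker-build
# cd ./spinnaker-build
# (edit user.name / user.email in cloudbuild.yaml, then:)
# gcloud builds submit --timeout "25m" --config cloudbuild.yaml --project PROJECT_ID .
```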
================================================
FILE: ci/Dockerfile
================================================
FROM gcr.io/cloud-builders/gcloud
RUN apt-get -q update && apt-get install -qqy \
jq \
gettext-base
ENTRYPOINT []
================================================
FILE: ci/JENKINS.md
================================================
# Using Jenkins to install Spinnaker for GCP
You can use Jenkins to install Spinnaker for GCP. The Jenkins agent executing the job must be installed on a Unix-like operating system.
The following section assumes you have an existing Jenkins server. If not, consider one of the [Jenkins solutions on Google Cloud](https://cloud.google.com/jenkins/).
## Jenkins on GCP
If your Jenkins server is running on GCP, follow [best practices](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#best_practices) for managing its service account. Your Jenkins server must have full access to all Google Cloud APIs to successfully install Spinnaker for GCP. See the [Compute Engine documentation](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes) for guidance on how to modify an instance's Google Cloud API access scopes.
You can't use Jenkins to install Spinnaker for GCP with a shared VPC. For Shared VPC support, conduct the [setup in Cloud Shell](https://cloud.google.com/docs/ci-cd/spinnaker/spinnaker-for-gcp).
## Dependencies
There are several dependencies that must be available to the Jenkins server before it can be used to install Spinnaker for GCP.
### Google Cloud SDK
The Google Cloud SDK is required to provision GCP resources. Install a [versioned archive](https://cloud.google.com/sdk/docs/downloads-versioned-archives) to the Jenkins server.
### Git
Git is required for backing up and restoring the Spinnaker for GCP configuration. Install Git on the Jenkins server by running `sudo apt-get install git-all`.
### `kubectl`
`kubectl` is required to manage the cluster that Spinnaker for GCP will be installed on. Install `kubectl` on the Jenkins server by running `sudo apt-get install kubectl`.
### jq
jq is required for processing JSON. Install jq on the Jenkins server by running `sudo apt-get install jq`.
### AnsiColor Plugin
The [AnsiColor Jenkins Plugin](https://plugins.jenkins.io/ansicolor) is required for properly rendering stdout while installing Spinnaker for GCP. Once the plugin has been installed, enable `Color ANSI Console Output` in the build configuration and set the `ANSI color map` to `xterm`.
## Service account
The Jenkins server must be configured with a GCP service account with the following roles:
- Cloud Functions Developer - roles/cloudfunctions.developer
- Compute Network Viewer - roles/compute.networkViewer
- Kubernetes Engine Admin - roles/container.admin
- Create Service Accounts - roles/iam.serviceAccountCreator
- Pub/Sub Editor - roles/pubsub.editor
- Cloud Memorystore Redis Admin - roles/redis.admin
- Service Usage Admin - roles/serviceusage.serviceUsageAdmin
- Source Repository Administrator - roles/source.admin
- Storage Admin - roles/storage.admin
- Project IAM Admin - roles/resourcemanager.projectIamAdmin
- Service Account User - roles/iam.serviceAccountUser
These roles can be granted through the IAM UI or with [gcloud](https://cloud.google.com/sdk/gcloud/reference/projects/add-iam-policy-binding).
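Granting the bindings one by one is tedious, so they can be scripted. The sketch below only *prints* the `gcloud projects add-iam-policy-binding` commands so you can review them first; `PROJECT_ID` and `SA_EMAIL` are placeholders for your project and service account:

```shell
#!/usr/bin/env bash
# Print one add-iam-policy-binding command per required role, for review.
# PROJECT_ID and SA_EMAIL are placeholders -- substitute your own values.
PROJECT_ID="${PROJECT_ID:-my-spinnaker-project}"
SA_EMAIL="${SA_EMAIL:-jenkins@my-spinnaker-project.iam.gserviceaccount.com}"

ROLES=(
  roles/cloudfunctions.developer
  roles/compute.networkViewer
  roles/container.admin
  roles/iam.serviceAccountCreator
  roles/pubsub.editor
  roles/redis.admin
  roles/serviceusage.serviceUsageAdmin
  roles/source.admin
  roles/storage.admin
  roles/resourcemanager.projectIamAdmin
  roles/iam.serviceAccountUser
)

for role in "${ROLES[@]}"; do
  echo gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:$SA_EMAIL" --role "$role"
done
```

Remove the `echo` once you have verified the commands to apply the bindings for real.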
## Enable Cloud Resource Manager API
The Cloud Resource Manager API must be enabled for Jenkins to successfully retrieve IAM policies. Enable it for your project either by running `gcloud services enable cloudresourcemanager.googleapis.com`, or by visiting the following URL, substituting your project ID:
https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/overview?project=[PROJECT_ID]
## Properties file
Before Jenkins can install Spinnaker for GCP, you must generate a properties file and make it available to Jenkins:
1. Run [setup_properties.sh](../scripts/install/setup_properties.sh).
1. Make the resulting properties file available to the installation script when the Jenkins job executes. In this example, the properties file has been uploaded to the Jenkins server using the [Credentials plug-in](https://wiki.jenkins.io/display/JENKINS/Credentials+Plugin) and is made accessible to the job by binding the file to the `PROPERTIES` variable. Alternatively, you can store it in a secrets-management solution (such as [HashiCorp Vault](https://www.vaultproject.io/)) and configure Jenkins to read it from there.
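For orientation, the properties file is simply a set of shell variable assignments that `setup.sh` sources. An illustrative excerpt with made-up names and values follows; always generate the real file with `setup_properties.sh` rather than writing it by hand:

```shell
# Illustrative excerpt only -- the exact variable names and values are
# produced by setup_properties.sh and will differ for your deployment.
export PROJECT_ID=my-spinnaker-project
export DEPLOYMENT_NAME=spinnaker-1
export ZONE=us-east1-b
```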
### Configure the Jenkins job
Once the dependencies are fulfilled, follow these steps to configure a job to install Spinnaker for GCP.
1. Create a `New Item` from the Jenkins menu and select a `Freestyle project`.
1. From the configuration screen, enable `Color ANSI Console Output` and set the `ANSI color map` to `xterm`.
1. Under `Build Environment`, enable `Delete workspace before build starts`.
1. Add an `Execute Shell` build step.
1. Configure the build step to retrieve and execute the `setup.sh` script.
```shell
#!/usr/bin/env bash
set -e
git clone https://github.com/GoogleCloudPlatform/spinnaker-for-gcp.git
git config --global user.name "jenkins-user"
git config --global user.email "jenkins-user@example.com"
PARENT_DIR=$WORKSPACE PROPERTIES_FILE=$PROPERTIES CI=true $WORKSPACE/spinnaker-for-gcp/scripts/install/setup.sh
```
In the example above, Git's `user.name` and `user.email` are configured before `setup.sh` runs, because the installation uses Git to back up the Spinnaker for GCP configuration. Git operations can also be managed using the [Jenkins Git plugin](https://plugins.jenkins.io/git).
`setup.sh` requires several variables to be passed in:
- `PARENT_DIR`: The absolute path for the Jenkins workspace. Jenkins makes this available via `$WORKSPACE`.
- `PROPERTIES_FILE`: This is the absolute path to your generated Spinnaker for GCP properties file.
- `CI`: This must be set to `true` when running `setup.sh` outside of Cloud Shell.
1. Execute the job to install Spinnaker for GCP.
If you change the properties file, apply the change by re-running the job.
Additional instructions for how to access or manage the deployed Spinnaker application are available [here](https://cloud.google.com/docs/ci-cd/spinnaker/spinnaker-for-gcp#access_spinnaker).
================================================
FILE: ci/README.md
================================================
# Installing Spinnaker for GCP on a Continuous Integration Server
You can install Spinnaker for GCP using a continuous integration (CI) server. A CI server can conduct the initial installation and apply updates when the Spinnaker for GCP properties file changes. Solutions for [Google Cloud Build](CLOUD_BUILD.md) and [Jenkins](JENKINS.md) are available.
================================================
FILE: ci/cloudbuild.yaml
================================================
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-f', 'Dockerfile', '-t', 'installer', '.']
- name: gcr.io/cloud-builders/git
args: ['clone', 'https://github.com/GoogleCloudPlatform/spinnaker-for-gcp.git']
- name: gcr.io/cloud-builders/git
args: ['config', '--global', 'user.name', '<example-user>']
- name: gcr.io/cloud-builders/git
args: ['config', '--global', 'user.email', '<example-user@example.com>']
- name: 'installer'
args: ['bash', './install.bash']
env:
- 'TERM=xterm'
================================================
FILE: ci/install.bash
================================================
#!/bin/bash
set -e
PARENT_DIR=/workspace PROPERTIES_FILE=/workspace/properties CI=true /workspace/spinnaker-for-gcp/scripts/install/setup.sh
================================================
FILE: samples/helloworldwebapp/cleanup_app_and_pipelines.sh
================================================
#!/usr/bin/env bash
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
cd ~/cloudshell_open/spinnaker-for-gcp/
source scripts/install/properties
scripts/manage/check_project_mismatch.sh
read -p ". $(tput bold)You are about to delete all resources from the helloworldwebapp application and pipelines. This step is not reversible. Do you wish to continue (Y/n)? $(tput sgr0)" yn
case $yn in
[Yy]* ) ;;
"" ) ;;
* ) exit;;
esac
bold "Deleting Cloud Source Repository..."
gcloud source repos delete spinnaker-for-gcp-helloworldwebapp
rm -rf ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp
bold "Deleting helloworldwebapp-prod and helloworldwebapp-staging Kubernetes resources..."
kubectl delete -f samples/helloworldwebapp/templates/repo/config/staging/namespace.yaml
kubectl delete -f samples/helloworldwebapp/templates/repo/config/prod/namespace.yaml
bold "Deleting Cloud Build trigger..."
for trigger in $(gcloud alpha builds triggers list --filter triggerTemplate.repoName=spinnaker-for-gcp-helloworldwebapp --format 'get(id)'); do
gcloud alpha builds triggers delete -q $trigger
done
bold "Deleting Kubernetes manifests..."
gsutil -m rm -r gs://$BUCKET_NAME/helloworldwebapp-manifests
bold "Deleting GCR images..."
for digest in $(gcloud container images list-tags gcr.io/${PROJECT_ID}/spinnaker-for-gcp-helloworldwebapp --format='get(digest)'); do
gcloud container images delete -q --force-delete-tags "gcr.io/${PROJECT_ID}/spinnaker-for-gcp-helloworldwebapp@${digest}"
done
bold "Deleting Spinnaker helloworldwebapp application and pipelines..."
set -x
~/spin pipeline delete -a helloworldwebapp -n "Deploy to Staging"
~/spin pipeline delete -a helloworldwebapp -n "Deploy to Production"
~/spin application delete helloworldwebapp
{ set +x ;} 2> /dev/null
bold "Finished cleaning up helloworldwebapp resources."
================================================
FILE: samples/helloworldwebapp/create_app_and_pipelines.sh
================================================
#!/usr/bin/env bash
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
source ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/check_project_mismatch.sh
pushd ~/cloudshell_open/spinnaker-for-gcp/samples/helloworldwebapp
if ! ~/spin app list &> /dev/null ; then
bold "Spinnaker instance is not reachable via the Spin CLI. Please make sure the Spinnaker \
instance is reachable with port-forwarding or is exposed publicly.
To port-forward the Spinnaker UI, run this command:
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/connect_unsecured.sh
If you would instead like to expose the service with a domain behind Identity-Aware Proxy, \
run this command:
~/cloudshell_open/spinnaker-for-gcp/scripts/expose/configure_endpoint.sh
"
exit 1
fi
if [ ! -d ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp ]; then
bold 'Creating GCR repo "spinnaker-for-gcp-helloworldwebapp" in Spinnaker project...'
gcloud source repos create spinnaker-for-gcp-helloworldwebapp
mkdir -p ~/$PROJECT_ID
gcloud source repos clone spinnaker-for-gcp-helloworldwebapp ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp
fi
bold 'Adding/Updating Kubernetes config files, sample Go application, and cloud build files in sample repo...'
cp -r templates/repo/config ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp/
cp -r templates/repo/src ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp/
cp templates/repo/Dockerfile ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp/
cat templates/repo/cloudbuild_yaml.template | envsubst '$BUCKET_NAME' > ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp/cloudbuild.yaml
cat ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp/config/staging/replicaset_yaml.template | envsubst > ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp/config/staging/replicaset.yaml
rm ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp/config/staging/replicaset_yaml.template
cat ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp/config/prod/replicaset_yaml.template | envsubst > ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp/config/prod/replicaset.yaml
rm ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp/config/prod/replicaset_yaml.template
pushd ~/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp
git add *
git commit -m "Add source, build, and manifest files."
git push
popd
if [ -z $(gcloud alpha builds triggers list --filter triggerTemplate.repoName=spinnaker-for-gcp-helloworldwebapp --format 'get(id)') ]; then
bold "Creating Cloud Build build trigger for helloworld app..."
gcloud alpha builds triggers create cloud-source-repositories \
--repo spinnaker-for-gcp-helloworldwebapp \
--branch-pattern master \
--build-config cloudbuild.yaml \
--included-files "src/**,config/**"
fi
bold "Creating helloworldwebapp Spinnaker application..."
~/spin app save --application-name helloworldwebapp --cloud-providers kubernetes --owner-email $IAP_USER
bold 'Creating "Deploy to Staging" Spinnaker pipeline...'
cat templates/pipelines/deploystaging_json.template | envsubst > templates/pipelines/deploystaging.json
~/spin pi save -f templates/pipelines/deploystaging.json
export DEPLOY_STAGING_PIPELINE_ID=$(~/spin pi get -a helloworldwebapp -n 'Deploy to Staging' | jq -r '.id')
bold 'Creating "Deploy to Prod" Spinnaker pipeline...'
cat templates/pipelines/deployprod_json.template | envsubst > templates/pipelines/deployprod.json
~/spin pi save -f templates/pipelines/deployprod.json
popd
================================================
FILE: samples/helloworldwebapp/install.md
================================================
# Install and run sample application and pipelines
## Introduction
Try out Spinnaker using the sample application provided with your Spinnaker instance. It comes with...
* A sample "hello world" Go application
* A Cloud Build trigger to build an image from source
* Sample Spinnaker pipelines to deploy the image and validate the application in a progression from staging environment to production
To proceed, make sure the Spinnaker instance is reachable with port-forwarding or is exposed publicly.
Select the project containing your Spinnaker instance, then click **Start**, below.
<walkthrough-project-billing-setup/>
## Create application and pipelines
Run this command to create the required resources:
```bash
~/cloudshell_open/spinnaker-for-gcp/samples/helloworldwebapp/create_app_and_pipelines.sh
```
### Resources created:
The source code is hosted in a repository in [Cloud Source Repositories](https://source.cloud.google.com/{{project-id}}/spinnaker-for-gcp-helloworldwebapp)
in the same project as your Spinnaker cluster.
This repository contains a few other items:
* Kubernetes configs for the application
These are used to deploy the application and validate the service.
* A [Cloud Build config](https://source.cloud.google.com/{{project-id}}/spinnaker-for-gcp-helloworldwebapp/+/master:cloudbuild.yaml)
This builds the image and copies the Kubernetes configs to the Spinnaker GCS bucket.
* A [Cloud Build trigger](https://console.developers.google.com/cloud-build/triggers?project={{project-id}})
This executes the Cloud Build config when any source code or manifest files are changed under
src/** or config/** in the repository.
Cloud Build creates an [image](https://gcr.io/{{project-id}}/spinnaker-for-gcp-helloworldwebapp)
from source and tags that image with the short commit hash.
The script also creates two Kubernetes namespaces...
* **helloworldwebapp-staging**
* **helloworldwebapp-prod**
...and the **helloworldwebapp-service** service in each of those namespaces, in the [Spinnaker Kubernetes cluster](https://console.developers.google.com/kubernetes/discovery?project={{project-id}}).
These services expose the Go application for staging and prod environments.
This process creates two Spinnaker pipelines under the **helloworldwebapp** Spinnaker application:
* **Deploy to Staging**
This triggers on a newly completed Cloud Build build and deploys the image to the
**helloworldwebapp-staging** namespace. It then runs a validation job to check the health status of the service.
* **Deploy to Production**
This starts on a successful **Deploy to Staging** run and does a Blue/Green deployment of
the tested image to the **helloworldwebapp-prod** namespace. It then runs the health validation job.
On success, the old replicaset is scaled down after a 5-minute wait period.
On failure, the old replicaset is re-enabled and the new replicaset is disabled. A Pub/Sub
notification of the failure is sent via the preconfigured Pub/Sub publisher.
You can navigate to your Spinnaker UI to see these pipelines.
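The pipelines can also be inspected from Cloud Shell with the same `spin` CLI the setup script uses (a sketch; it assumes the port-forward or public endpoint from the prerequisites is active, and uses the same `-a`/`-n` flags as the sample scripts):

```shell
# List the pipelines created under the helloworldwebapp application.
~/spin pipeline list -a helloworldwebapp

# Show a single pipeline's definition, as the setup script does.
~/spin pipeline get -a helloworldwebapp -n "Deploy to Staging"
```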
## Start a new build
To build and deploy an image, just change some [source code](https://source.cloud.google.com/{{project-id}}/spinnaker-for-gcp-helloworldwebapp/+/master:src/main.go)
or [manifest files](https://source.cloud.google.com/{{project-id}}/spinnaker-for-gcp-helloworldwebapp/+/master:config/) and push the change to the master branch.
The repository is already cloned to your home directory. Make some changes to the source code...
```bash
cloudshell edit ~/{{project-id}}/spinnaker-for-gcp-helloworldwebapp/src/main.go
```
...and commit the changes:
```bash
cd ~/{{project-id}}/spinnaker-for-gcp-helloworldwebapp
git commit -am "Cool new features"
git push
```
The new commit triggers the chain of events...
1. Cloud Build builds the image.
2. The **Deploy to Staging** pipeline deploys the image to staging and validates it.
3. The **Deploy to Production** pipeline promotes the image to production and validates it.
Visit the Spinnaker UI to verify that the pipelines complete successfully.
After the pipelines finish, the [**helloworldwebapp-service** services](https://console.developers.google.com/kubernetes/discovery?project={{project-id}})
hosting the Go application will be up and healthy. Click the **endpoints**
for each service to see a "Hello World" page!
### Clean-up
Run this command to delete all the resources created above:
```bash
~/cloudshell_open/spinnaker-for-gcp/samples/helloworldwebapp/cleanup_app_and_pipelines.sh && cd ~/cloudshell_open/spinnaker-for-gcp
```
### Return to Spinnaker console
Run this command to return to the management environment:
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/update_console.sh
```
================================================
FILE: samples/helloworldwebapp/templates/pipelines/deployprod_json.template
================================================
{
"application": "helloworldwebapp",
"description": "When staging deployment and validation completes, Blue/Green deploy new image to production environment and validate.",
"expectedArtifacts": [
{
"displayName": "Prod Replicaset",
"id": "d9013e3f-e9cd-4f18-ace6-ef14369b7fec",
"matchArtifact": {
"id": "b4557686-0d7e-4163-8cb5-f7e7f1310fa8",
"name": "gs://$BUCKET_NAME/helloworldwebapp-manifests/.*/prod-replicaset.yaml",
"type": "gcs/object"
},
"useDefaultArtifact": false,
"usePriorArtifact": true
},
{
"displayName": "Hello World WebApp Image",
"id": "4f4d38de-80c3-4bc1-a807-c565bc4024ee",
"matchArtifact": {
"id": "9aa4d777-1d4e-44f9-8f62-baa33a6a8040",
"name": "gcr.io/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp",
"type": "docker/image"
},
"useDefaultArtifact": false,
"usePriorArtifact": true
},
{
"displayName": "Prod Namespace",
"id": "730af75f-50ad-49ab-b754-ec8ae75938f8",
"matchArtifact": {
"id": "1d82dc39-76e1-4a96-8f9c-7f68b3edfa67",
"name": "gs://$BUCKET_NAME/helloworldwebapp-manifests/.*/prod-namespace.yaml",
"type": "gcs/object"
},
"useDefaultArtifact": false,
"usePriorArtifact": true
},
{
"displayName": "Prod Service",
"id": "76641671-4d6b-4aee-94b4-dba73fbabfcd",
"matchArtifact": {
"id": "63b7838b-045b-4681-9a54-852cdb22efd0",
"name": "gs://$BUCKET_NAME/helloworldwebapp-manifests/.*/prod-service.yaml",
"type": "gcs/object"
},
"useDefaultArtifact": false,
"usePriorArtifact": true
}
],
"keepWaitingPipelines": false,
"limitConcurrent": true,
"name": "Deploy to Production",
"notifications": [
{
"level": "pipeline",
"publisherName": "$PUBSUB_NOTIFICATION_PUBLISHER",
"type": "pubsub",
"when": [
"pipeline.failed"
]
}
],
"parameterConfig": [],
"stages": [
{
"account": "spinnaker-install-account",
"cloudProvider": "kubernetes",
"expectedArtifacts": [],
"manifestArtifactAccount": "gcs-install-account",
"manifestArtifactId": "d9013e3f-e9cd-4f18-ace6-ef14369b7fec",
"manifests": [],
"moniker": {
"app": "helloworldwebapp"
},
"name": "Blue/Green Deploy Replicaset to Production",
"refId": "1",
"relationships": {
"loadBalancers": [],
"securityGroups": []
},
"requiredArtifactIds": [
"4f4d38de-80c3-4bc1-a807-c565bc4024ee"
],
"requisiteStageRefIds": [
"8"
],
"skipExpressionEvaluation": false,
"source": "artifact",
"trafficManagement": {
"enabled": true,
"options": {
"enableTraffic": true,
"namespace": "helloworldwebapp-prod",
"services": [
"service helloworldwebapp-service"
],
"strategy": "redblack"
}
},
"type": "deployManifest"
},
{
"failPipeline": true,
"instructions": "Continue with deployment?",
"judgmentInputs": [],
"name": "Manual Judgment",
"notifications": [],
"refId": "2",
"requisiteStageRefIds": [],
"type": "manualJudgment"
},
{
"account": "spinnaker-install-account",
"app": "helloworldwebapp",
"cloudProvider": "kubernetes",
"completeOtherBranchesThenFail": false,
"continuePipeline": true,
"failPipeline": false,
"location": "helloworldwebapp-prod",
"manifestName": "job validate-deployment",
"mode": "static",
"name": "Clean up Validation Job",
"options": {
"cascading": true
},
"refId": "4",
"requisiteStageRefIds": [
"5"
],
"type": "deleteManifest"
},
{
"account": "spinnaker-install-account",
"cloudProvider": "kubernetes",
"completeOtherBranchesThenFail": false,
"continuePipeline": true,
"failPipeline": false,
"manifestArtifactAccount": "gcs-install-account",
"manifests": [
{
"apiVersion": "batch/v1",
"kind": "Job",
"metadata": {
"name": "validate-deployment",
"namespace": "helloworldwebapp-prod"
},
"spec": {
"backoffLimit": 0,
"template": {
"spec": {
"containers": [
{
"command": [
"/bin/sh",
"-c",
"curl --max-time 120 helloworldwebapp-service"
],
"image": "appropriate/curl:latest",
"name": "validate-deployment"
}
],
"restartPolicy": "Never"
}
}
}
}
],
"moniker": {
"app": "helloworldwebapp"
},
"name": "Validate New Prod",
"refId": "5",
"relationships": {
"loadBalancers": [],
"securityGroups": []
},
"requisiteStageRefIds": [
"1"
],
"skipExpressionEvaluation": false,
"source": "text",
"trafficManagement": {
"enabled": false,
"options": {
"enableTraffic": false,
"services": []
}
},
"type": "deployManifest"
},
{
"account": "spinnaker-install-account",
"cloudProvider": "kubernetes",
"manifestArtifactAccount": "gcs-install-account",
"manifestArtifactId": "730af75f-50ad-49ab-b754-ec8ae75938f8",
"moniker": {
"app": "helloworldwebapp"
},
"name": "Deploy Namespace",
"refId": "7",
"relationships": {
"loadBalancers": [],
"securityGroups": []
},
"requiredArtifactIds": [],
"requisiteStageRefIds": [
"2"
],
"skipExpressionEvaluation": false,
"source": "artifact",
"trafficManagement": {
"enabled": false,
"options": {
"enableTraffic": false,
"services": []
}
},
"type": "deployManifest"
},
{
"account": "spinnaker-install-account",
"cloudProvider": "kubernetes",
"manifestArtifactAccount": "gcs-install-account",
"manifestArtifactId": "76641671-4d6b-4aee-94b4-dba73fbabfcd",
"moniker": {
"app": "helloworldwebapp"
},
"name": "Deploy Service",
"refId": "8",
"relationships": {
"loadBalancers": [],
"securityGroups": []
},
"requiredArtifactIds": [],
"requisiteStageRefIds": [
"7"
],
"skipExpressionEvaluation": false,
"source": "artifact",
"trafficManagement": {
"enabled": false,
"options": {
"enableTraffic": false,
"services": []
}
},
"type": "deployManifest"
},
{
"name": "Wait before Scale Down",
"refId": "9",
"requisiteStageRefIds": [
"17"
],
"type": "wait",
"waitTime": 300
},
{
"account": "spinnaker-install-account",
"app": "helloworldwebapp",
"cloudProvider": "kubernetes",
"cluster": "replicaSet helloworldwebapp-frontend",
"criteria": "second_newest",
"kind": "replicaSet",
"location": "helloworldwebapp-prod",
"mode": "dynamic",
"name": "Scale Down Old Prod",
"refId": "10",
"replicas": "0",
"requisiteStageRefIds": [
"9"
],
"type": "scaleManifest"
},
{
"failPipeline": true,
"instructions": "Validation Failed - Rollback to Old Prod?",
"judgmentInputs": [],
"name": "Rollback on Failure",
"notifications": [],
"refId": "11",
"requisiteStageRefIds": [
"16"
],
"type": "manualJudgment"
},
{
"account": "spinnaker-install-account",
"app": "helloworldwebapp",
"cloudProvider": "kubernetes",
"cluster": "replicaSet helloworldwebapp-frontend",
"criteria": "second_newest",
"kind": "replicaSet",
"location": "helloworldwebapp-prod",
"mode": "dynamic",
"name": "Enable Old Prod",
"refId": "12",
"requisiteStageRefIds": [
"11"
],
"type": "enableManifest"
},
{
"account": "spinnaker-install-account",
"app": "helloworldwebapp",
"cloudProvider": "kubernetes",
"cluster": "replicaSet helloworldwebapp-frontend",
"criteria": "newest",
"kind": "replicaSet",
"location": "helloworldwebapp-prod",
"mode": "dynamic",
"name": "Disable New Prod",
"refId": "13",
"requisiteStageRefIds": [
"12"
],
"type": "disableManifest"
},
{
"name": "Validation Failed - Fail Pipeline",
"preconditions": [
{
"context": {
"expression": "${ #stage(\"Validate New Prod\")['status'].toString() == 'SUCCEEDED'}"
},
"failPipeline": true,
"type": "expression"
}
],
"refId": "14",
"requisiteStageRefIds": [
"13"
],
"stageEnabled": {
"expression": "${ #stage(\"Validate New Prod\")['status'].toString() != 'SUCCEEDED'}",
"type": "expression"
},
"type": "checkPreconditions"
},
{
"completeOtherBranchesThenFail": false,
"continuePipeline": false,
"failPipeline": false,
"name": "Validation Succeeded",
"preconditions": [
{
"context": {
"expression": "${ #stage(\"Validate New Prod\")['status'].toString() == 'SUCCEEDED'}"
},
"failPipeline": false,
"type": "expression"
}
],
"refId": "15",
"requisiteStageRefIds": [
"4"
],
"type": "checkPreconditions"
},
{
"completeOtherBranchesThenFail": false,
"continuePipeline": false,
"failPipeline": false,
"name": "Validation Failed",
"preconditions": [
{
"context": {
"expression": "${ #stage(\"Validate New Prod\")['status'].toString() != 'SUCCEEDED'}"
},
"failPipeline": true,
"type": "expression"
}
],
"refId": "16",
"requisiteStageRefIds": [
"4"
],
"type": "checkPreconditions"
},
{
"name": "Old Prod Version Present",
"preconditions": [
{
"cloudProvider": "kubernetes",
"context": {
"cluster": "replicaSet helloworldwebapp-frontend",
"comparison": ">",
"credentials": "spinnaker-install-account",
"expected": 1,
"moniker": {
"app": "helloworldwebapp",
"cluster": "replicaSet helloworldwebapp-frontend"
},
"regions": [
"helloworldwebapp-prod"
]
},
"failPipeline": false,
"type": "clusterSize"
}
],
"refId": "17",
"requisiteStageRefIds": [
"15"
],
"type": "checkPreconditions"
}
],
"triggers": [
{
"application": "helloworldwebapp",
"enabled": true,
"pipeline": "$DEPLOY_STAGING_PIPELINE_ID",
"status": [
"successful"
],
"type": "pipeline"
}
]
}
================================================
FILE: samples/helloworldwebapp/templates/pipelines/deploystaging_json.template
================================================
{
"application": "helloworldwebapp",
"description": "On GCB build completion, deploy new image to staging environment and validate.",
"expectedArtifacts": [
{
"displayName": "Staging Replicaset",
"id": "04429e2c-b48c-48e7-ac9b-6fdc4e3d7f59",
"matchArtifact": {
"id": "f52080c9-c9ce-4406-a869-e04af6c01389",
"name": "gs://$BUCKET_NAME/helloworldwebapp-manifests/.*/staging-replicaset.yaml",
"type": "gcs/object"
},
"useDefaultArtifact": false,
"usePriorArtifact": true
},
{
"displayName": "Hello World WebApp Image",
"id": "4f4d38de-80c3-4bc1-a807-c565bc4024ee",
"matchArtifact": {
"id": "9aa4d777-1d4e-44f9-8f62-baa33a6a8040",
"name": "gcr.io/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp",
"type": "docker/image"
},
"useDefaultArtifact": false,
"usePriorArtifact": true
},
{
"displayName": "Staging Namespace",
"id": "097c2e8e-295f-4020-85d9-18ed18f6133b",
"matchArtifact": {
"id": "4be2cb5b-bfee-43cc-a2e8-c544777f3436",
"name": "gs://$BUCKET_NAME/helloworldwebapp-manifests/.*/staging-namespace.yaml",
"type": "gcs/object"
},
"useDefaultArtifact": false,
"usePriorArtifact": true
},
{
"displayName": "Staging Service",
"id": "24f64da7-9eac-46df-8b0d-f7092b6a03e4",
"matchArtifact": {
"id": "09755b68-001a-4d61-bb17-985546618e5f",
"name": "gs://$BUCKET_NAME/helloworldwebapp-manifests/.*/staging-service.yaml",
"type": "gcs/object"
},
"useDefaultArtifact": false,
"usePriorArtifact": true
}
],
"keepWaitingPipelines": false,
"limitConcurrent": true,
"name": "Deploy to Staging",
"parameterConfig": [],
"stages": [
{
"account": "spinnaker-install-account",
"cloudProvider": "kubernetes",
"expectedArtifacts": [],
"manifestArtifactAccount": "gcs-install-account",
"manifestArtifactId": "04429e2c-b48c-48e7-ac9b-6fdc4e3d7f59",
"manifests": [],
"moniker": {
"app": "helloworldwebapp"
},
"name": "Deploy Replicaset to Staging",
"refId": "1",
"relationships": {
"loadBalancers": [],
"securityGroups": []
},
"requiredArtifactIds": [
"4f4d38de-80c3-4bc1-a807-c565bc4024ee"
],
"requisiteStageRefIds": [
"8"
],
"skipExpressionEvaluation": false,
"source": "artifact",
"trafficManagement": {
"enabled": true,
"options": {
"enableTraffic": true,
"namespace": "helloworldwebapp-staging",
"services": [
"service helloworldwebapp-service"
],
"strategy": "highlander"
}
},
"type": "deployManifest"
},
{
"account": "spinnaker-install-account",
"app": "helloworldwebapp",
"cloudProvider": "kubernetes",
"completeOtherBranchesThenFail": false,
"continuePipeline": true,
"failPipeline": false,
"location": "helloworldwebapp-staging",
"manifestName": "job validate-deployment",
"mode": "static",
"name": "Clean up Validation Job",
"options": {
"cascading": true
},
"refId": "2",
"requisiteStageRefIds": [
"5"
],
"type": "deleteManifest"
},
{
"account": "spinnaker-install-account",
"cloudProvider": "kubernetes",
"completeOtherBranchesThenFail": false,
"continuePipeline": true,
"failPipeline": false,
"manifestArtifactAccount": "gcs-install-account",
"manifests": [
{
"apiVersion": "batch/v1",
"kind": "Job",
"metadata": {
"name": "validate-deployment",
"namespace": "helloworldwebapp-staging"
},
"spec": {
"backoffLimit": 0,
"template": {
"spec": {
"containers": [
{
"command": [
"/bin/sh",
"-c",
"curl --max-time 120 helloworldwebapp-service"
],
"image": "appropriate/curl:latest",
"name": "validate-deployment"
}
],
"restartPolicy": "Never"
}
}
}
}
],
"moniker": {
"app": "helloworldwebapp"
},
"name": "Validate Staging",
"refId": "5",
"relationships": {
"loadBalancers": [],
"securityGroups": []
},
"requisiteStageRefIds": [
"1"
],
"skipExpressionEvaluation": false,
"source": "text",
"trafficManagement": {
"enabled": false,
"options": {
"enableTraffic": false,
"services": []
}
},
"type": "deployManifest"
},
{
"comments": "Fails the pipeline if the validation failed.",
"name": "Check Validation status",
"preconditions": [
{
"context": {
"expression": "${ #stage(\"Validate Staging\")['status'].toString() == 'SUCCEEDED'}"
},
"failPipeline": true,
"type": "expression"
}
],
"refId": "6",
"requisiteStageRefIds": [
"2"
],
"type": "checkPreconditions"
},
{
"account": "spinnaker-install-account",
"cloudProvider": "kubernetes",
"manifestArtifactAccount": "gcs-install-account",
"manifestArtifactId": "097c2e8e-295f-4020-85d9-18ed18f6133b",
"moniker": {
"app": "helloworldwebapp"
},
"name": "Deploy Namespace",
"refId": "7",
"relationships": {
"loadBalancers": [],
"securityGroups": []
},
"requiredArtifactIds": [],
"requisiteStageRefIds": [],
"skipExpressionEvaluation": false,
"source": "artifact",
"trafficManagement": {
"enabled": false,
"options": {
"enableTraffic": false,
"services": []
}
},
"type": "deployManifest"
},
{
"account": "spinnaker-install-account",
"cloudProvider": "kubernetes",
"manifestArtifactAccount": "gcs-install-account",
"manifestArtifactId": "24f64da7-9eac-46df-8b0d-f7092b6a03e4",
"moniker": {
"app": "helloworldwebapp"
},
"name": "Deploy Service",
"refId": "8",
"relationships": {
"loadBalancers": [],
"securityGroups": []
},
"requiredArtifactIds": [],
"requisiteStageRefIds": [
"7"
],
"skipExpressionEvaluation": false,
"source": "artifact",
"trafficManagement": {
"enabled": false,
"options": {
"enableTraffic": false,
"services": []
}
},
"type": "deployManifest"
}
],
"triggers": [
{
"attributeConstraints": {
"status": "SUCCESS"
},
"enabled": true,
"expectedArtifactIds": [
"4f4d38de-80c3-4bc1-a807-c565bc4024ee"
],
"payloadConstraints": {},
"pubsubSystem": "google",
"subscriptionName": "gcb-account",
"type": "pubsub"
}
]
}
================================================
FILE: samples/helloworldwebapp/templates/repo/Dockerfile
================================================
FROM alpine
COPY src/gopath/bin/helloworldwebapp /go/bin/helloworldwebapp
ENTRYPOINT /go/bin/helloworldwebapp
================================================
FILE: samples/helloworldwebapp/templates/repo/cloudbuild_yaml.template
================================================
steps:
- name: 'gcr.io/cloud-builders/go'
args: [ 'install', '$PROJECT_ID/helloworldwebapp' ]
env: [ 'PROJECT_ROOT=$PROJECT_ID/helloworldwebapp' ]
dir: 'src'
- name: 'ubuntu'
entrypoint: 'bash'
args:
- '-c'
- |
mkdir config-all
# rename config files to be appended with the environment, e.g. staging-service.yaml
for env in config/*; do
if [ -d $env ]; then
for file in $env/*; do
cp $file config-all/$(basename $env)-$(basename $file)
done
fi
done
- name: 'gcr.io/cloud-builders/docker'
args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:$SHORT_SHA', '.' ]
images:
- 'gcr.io/$PROJECT_ID/$REPO_NAME:$SHORT_SHA'
artifacts:
objects:
location: gs://$BUCKET_NAME/helloworldwebapp-manifests/$SHORT_SHA
paths: [ 'config-all/*' ]
================================================
FILE: samples/helloworldwebapp/templates/repo/config/prod/namespace.yaml
================================================
---
apiVersion: v1
kind: Namespace
metadata:
name: helloworldwebapp-prod
================================================
FILE: samples/helloworldwebapp/templates/repo/config/prod/replicaset_yaml.template
================================================
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
annotations:
traffic.spinnaker.io/load-balancers: '["service helloworldwebapp-service"]'
labels:
app: helloworldwebapp
name: helloworldwebapp-frontend
namespace: helloworldwebapp-prod
spec:
replicas: 3
selector:
matchLabels:
app: helloworldwebapp
template:
metadata:
labels:
app: helloworldwebapp
spec:
containers:
- image: gcr.io/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp # will be modified on deployment to point at a digest of an image
name: helloworldwebapp
================================================
FILE: samples/helloworldwebapp/templates/repo/config/prod/service.yaml
================================================
---
apiVersion: v1
kind: Service
metadata:
name: helloworldwebapp-service
namespace: helloworldwebapp-prod
spec:
ports:
- protocol: TCP
port: 80
selector:
frontedBy: helloworldwebapp-prod # will be applied to backends by Spinnaker
type: LoadBalancer
loadBalancerIP: ""
================================================
FILE: samples/helloworldwebapp/templates/repo/config/staging/namespace.yaml
================================================
---
apiVersion: v1
kind: Namespace
metadata:
name: helloworldwebapp-staging
================================================
FILE: samples/helloworldwebapp/templates/repo/config/staging/replicaset_yaml.template
================================================
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
annotations:
traffic.spinnaker.io/load-balancers: '["service helloworldwebapp-service"]'
labels:
app: helloworldwebapp
name: helloworldwebapp-frontend
namespace: helloworldwebapp-staging
spec:
replicas: 3
selector:
matchLabels:
app: helloworldwebapp
template:
metadata:
labels:
app: helloworldwebapp
spec:
containers:
- image: gcr.io/$PROJECT_ID/spinnaker-for-gcp-helloworldwebapp # will be modified on deployment to point at a digest of an image
name: helloworldwebapp
================================================
FILE: samples/helloworldwebapp/templates/repo/config/staging/service.yaml
================================================
---
apiVersion: v1
kind: Service
metadata:
name: helloworldwebapp-service
namespace: helloworldwebapp-staging
spec:
ports:
- protocol: TCP
port: 80
selector:
frontedBy: helloworldwebapp-staging # will be applied to backends by Spinnaker
type: LoadBalancer
loadBalancerIP: ""
================================================
FILE: samples/helloworldwebapp/templates/repo/src/main.go
================================================
package main
import (
"io"
"net/http"
)
func hello(w http.ResponseWriter, r *http.Request) {
io.WriteString(w, "<body style='background-color: green'><h1>Hello World</h1></body>")
}
func main() {
http.HandleFunc("/", hello)
http.ListenAndServe(":80", nil)
}
================================================
FILE: scripts/cli/install_hal.sh
================================================
#!/usr/bin/env bash
HALYARD_DAEMON_PID_FILE=~/hal/halyard/pid
function kill_daemon() {
pkill -F $HALYARD_DAEMON_PID_FILE
}
if [ -f "$HALYARD_DAEMON_PID_FILE" ]; then
HALYARD_DAEMON_PID=$(cat $HALYARD_DAEMON_PID_FILE)
set +e
ps $HALYARD_DAEMON_PID &> /dev/null
exit_code=$?
set -e
if [ "$exit_code" == "0" ]; then
kill_daemon
fi
fi
# Just in case the pid file doesn't match the daemon that's actually listening on the port.
pkill -f '/opt/halyard/lib/halyard-web' || true
pkill -f "$HOME/hal/halyard/lib/halyard-web" || true
curl -O https://raw.githubusercontent.com/spinnaker/halyard/master/install/debian/InstallHalyard.sh
sudo bash InstallHalyard.sh --user $USER -y $@
retVal=$?
if [ $retVal == 13 ]; then
exit 13
fi
mkdir -p ~/hal/log
sudo mv /etc/bash_completion.d/hal ~/hal/hal_completion
sudo mv /usr/local/bin/hal ~/hal
sudo mv /usr/local/bin/update-halyard ~/hal
sudo rm -rf ~/hal/halyard/ && sudo mv /opt/halyard ~/hal
sudo rm -rf ~/hal/spinnaker/ && sudo mv /opt/spinnaker ~/hal
sed -i 's:^. /etc/bash_completion.d/hal:# . /etc/bash_completion.d/hal\n. ~/hal/hal_completion\nalias hal=~/hal/hal:' ~/.bashrc
sed -i s:/opt/halyard:~/hal/halyard:g ~/hal/hal
sed -i s:/var/log/spinnaker/halyard:~/hal/log:g ~/hal/hal
sudo sed -i s:/opt/spinnaker:~/hal/spinnaker:g ~/hal/halyard/bin/halyard
sed -i 's:rm -rf /opt/halyard:rm -rf ~/hal/halyard:g' ~/hal/update-halyard
sed -i "s:^ HAL_USER=.*$: HAL_USER=$(cat ~/hal/spinnaker/config/halyard-user):g" ~/hal/update-halyard
sed -i s:/etc/bash_completion.d/hal:~/hal/hal_completion: ~/hal/update-halyard
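The block of `sed` edits above relocates Halyard's hard-coded system paths into the per-user `~/hal` tree. A minimal sketch of the same rewrite technique against a throwaway file (the wrapper contents and `/tmp` path here are illustrative, not the real `hal` launcher):

```shell
# Create a sample wrapper that references the default install paths.
cat > /tmp/hal-wrapper <<'EOF'
HALYARD_HOME=/opt/halyard
LOG_DIR=/var/log/spinnaker/halyard
EOF

# Rewrite both paths to their per-user equivalents, as install_hal.sh does.
# Using ':' as the sed delimiter avoids escaping every '/' in the paths.
sed -i 's:/opt/halyard:~/hal/halyard:g' /tmp/hal-wrapper
sed -i 's:/var/log/spinnaker/halyard:~/hal/log:g' /tmp/hal-wrapper

cat /tmp/hal-wrapper
```

Note the second `sed` must run against the log path before or after the first independently; since `/opt/halyard` and `/var/log/spinnaker/halyard` never overlap, the order does not matter.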
================================================
FILE: scripts/cli/install_spin.sh
================================================
#!/usr/bin/env bash
curl -LO https://storage.googleapis.com/spinnaker-artifacts/spin/$(curl -s https://storage.googleapis.com/spinnaker-artifacts/spin/latest)/linux/amd64/spin
chmod +x spin
mv spin ~
grep -q '^alias spin=~/spin' ~/.bashrc || echo 'alias spin=~/spin' >> ~/.bashrc
mkdir -p ~/.spin
# If there is no properties file, generate a new ~/.spin/config relying on port-forwarding.
if [ ! -f "$HOME/cloudshell_open/spinnaker-for-gcp/scripts/install/properties" ]; then
cat >~/.spin/config <<EOL
gate:
endpoint: http://localhost:8080/gate
EOL
exit 0
fi
source ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties
# Query for static ip address as a signal that the Spinnaker installation is exposed via a secured endpoint.
export IP_ADDR=$(gcloud compute addresses list --filter="name=$STATIC_IP_NAME" \
--format="value(address)" --global --project $PROJECT_ID)
# Only re-generate ~/.spin/config if the Spinnaker installation is unsecured. Otherwise, leave whatever is there.
# The ~/.spin/config will always be restored by pull_config.sh in any case.
if [ -z "$IP_ADDR" ]; then
cat >~/.spin/config <<EOL
gate:
endpoint: http://localhost:8080/gate
EOL
fi
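The fallback above writes a minimal `spin` config that assumes Gate is reachable over a local port-forward. The same heredoc pattern in isolation, writing to a temporary path instead of `~/.spin/config`:

```shell
# Write the port-forwarding spin config, as install_spin.sh does when no
# properties file (i.e. no secured endpoint) is present.
SPIN_CONFIG=/tmp/spin-config-demo
cat > "$SPIN_CONFIG" <<EOL
gate:
  endpoint: http://localhost:8080/gate
EOL
cat "$SPIN_CONFIG"
```

When the installation is later exposed behind IAP, `configure_iap.sh` overwrites this file with an `https://$DOMAIN_NAME/gate` endpoint and IAP auth settings instead.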
================================================
FILE: scripts/cli/update_hal.sh
================================================
#!/usr/bin/env bash
HALYARD_DAEMON_PID_FILE=~/hal/halyard/pid
function kill_daemon() {
pkill -F $HALYARD_DAEMON_PID_FILE
}
if [ -f "$HALYARD_DAEMON_PID_FILE" ]; then
HALYARD_DAEMON_PID=$(cat $HALYARD_DAEMON_PID_FILE)
set +e
ps $HALYARD_DAEMON_PID &> /dev/null
exit_code=$?
set -e
if [ "$exit_code" == "0" ]; then
kill_daemon
fi
fi
# Just in case the pid file doesn't match the daemon that's actually listening on the port.
pkill -f '/opt/halyard/lib/halyard-web' || true
pkill -f "$HOME/hal/halyard/lib/halyard-web" || true
HAL_USER=$(cat ~/hal/spinnaker/config/halyard-user)
if [ -z "$HAL_USER" ]; then
echo >&2 "Unable to derive halyard user, likely a corrupted install. Aborting."
exit 1
fi
sudo groupadd halyard || true
sudo groupadd spinnaker || true
sudo usermod -G halyard -a $HAL_USER || true
sudo usermod -G spinnaker -a $HAL_USER || true
sudo mkdir -p /var/log/spinnaker/halyard
sudo chown $HAL_USER:halyard /var/log/spinnaker/halyard
sudo chmod 755 /var/log/spinnaker /var/log/spinnaker/halyard
sudo HAL_USER=$HAL_USER ~/hal/update-halyard $@
retVal=$?
if [ $retVal == 13 ]; then
exit 13
fi
mkdir -p ~/hal/log
sudo mv /usr/local/bin/hal ~/hal
sudo rm -rf ~/hal/halyard/ && sudo mv /opt/halyard ~/hal
sudo mv /usr/local/bin/update-halyard ~/hal
sed -i 's:^. /etc/bash_completion.d/hal:# . /etc/bash_completion.d/hal\n. ~/hal/hal_completion\nalias hal=~/hal/hal:' ~/.bashrc
sed -i s:/opt/halyard:~/hal/halyard:g ~/hal/hal
sed -i s:/var/log/spinnaker/halyard:~/hal/log:g ~/hal/hal
sudo sed -i s:/opt/spinnaker:~/hal/spinnaker:g ~/hal/halyard/bin/halyard
sed -i 's:rm -rf /opt/halyard:rm -rf ~/hal/halyard:g' ~/hal/update-halyard
sed -i "s:^ HAL_USER=.*$: HAL_USER=$(cat ~/hal/spinnaker/config/halyard-user):g" ~/hal/update-halyard
sed -i s:/etc/bash_completion.d/hal:~/hal/hal_completion: ~/hal/update-halyard
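The pid-file handling at the top of this script (and of `install_hal.sh`) only kills the daemon when the recorded pid still refers to a live process. A self-contained sketch of that check, using a background `sleep` as a stand-in for the Halyard daemon (`ps -p` is the portable spelling of the bare `ps $PID` probe used above):

```shell
# Start a stand-in "daemon" and record its pid, mirroring Halyard's pid file.
PID_FILE=/tmp/demo-daemon-pid
sleep 60 &
echo $! > "$PID_FILE"

DEMO_PID=$(cat "$PID_FILE")
# Only kill if the recorded pid still refers to a live process.
if ps -p "$DEMO_PID" > /dev/null 2>&1; then
  kill "$DEMO_PID"
  wait "$DEMO_PID" 2>/dev/null || true   # reap so the pid disappears from ps
  echo "killed stale daemon $DEMO_PID"
fi
```

If the pid file is stale (the process already exited), the `ps` probe fails and the `kill` is skipped, which is why the real scripts also fall back to `pkill -f` on the known jar paths.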
================================================
FILE: scripts/experimental/configure_for_workload_identity.sh
================================================
#!/usr/bin/env bash
# Prior to running this script, please ensure that you are running these versions or later:
# export SPINNAKER_VERSION=release-1.17.x-latest-validated
# export HALYARD_VERSION=1.26.0
#
# This script is intended to be run after the initial setup.sh script completes and Spinnaker is up
# and running (without Workload Identity enabled).
#
# The expected workflow is as follows:
# - Generate the properties file by running the setup_properties.sh script
# - Modify the properties file to specify the above 2 Spinnaker/Halyard versions (or later versions)
# - Run setup.sh
# - Once Spinnaker is up and running, run this (configure_for_workload_identity.sh) script
#
# Note that this script results in each Spinnaker pod still using the default Kubernetes service account,
# and the default service account in the halyard and spinnaker namespaces being bound to one Google
# service account (spinnaker-wi-acct). If you want to specify a different Kubernetes service account
# for any service, you can do so via the `serviceAccountName` setting described here:
# https://www.spinnaker.io/reference/halyard/custom/#kubernetes
# You would also need to make the appropriate bindings between that Kubernetes service account and a
# Google service account.
#
# The roles assigned are sufficient for deployment to GKE. If you intend to deploy to GCE or GAE, you
# will need to assign the appropriate roles to the spinnaker-wi-acct Google service account, similar
# to what we do in these helper scripts for the non-Workload Identity setup:
# https://github.com/GoogleCloudPlatform/spinnaker-for-gcp/blob/master/scripts/manage/add_gce_account.sh
# https://github.com/GoogleCloudPlatform/spinnaker-for-gcp/blob/master/scripts/manage/add_gae_account.sh
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
PROPERTIES_FILE="$HOME/cloudshell_open/spinnaker-for-gcp/scripts/install/properties"
source "$PROPERTIES_FILE"
bold "Enabling workload identity on cluster $GKE_CLUSTER in project $PROJECT_ID..."
gcloud beta container clusters update $GKE_CLUSTER \
--zone=$ZONE \
--identity-namespace=$PROJECT_ID.svc.id.goog \
--project=$PROJECT_ID
unset CLUSTER_STATUS
while [ "$CLUSTER_STATUS" != "RUNNING" ]; do
CLUSTER_STATUS=$(gcloud container clusters describe $GKE_CLUSTER \
--zone=$ZONE \
--format="value(status)" \
--project=$PROJECT_ID)
sleep 5
echo -n .
done
echo
KSA_NAME=default
GSA_NAME=spinnaker-wi-acct
GSA_DISPLAY_NAME="Spinnaker Workload Identity service account"
bold "Creating Google service account $GSA_NAME..."
gcloud iam service-accounts create $GSA_NAME \
--display-name="$GSA_DISPLAY_NAME" \
--project=$PROJECT_ID
GSA_EMAIL=$(gcloud iam service-accounts list \
--filter="displayName:$GSA_DISPLAY_NAME" \
--format="value(email)" \
--project=$PROJECT_ID)
while [ -z "$GSA_EMAIL" ]; do
GSA_EMAIL=$(gcloud iam service-accounts list \
--filter="displayName:$GSA_DISPLAY_NAME" \
--format="value(email)" \
--project=$PROJECT_ID)
sleep 5
echo -n .
done
echo
bold "Assigning required roles to $GSA_DISPLAY_NAME..."
K8S_REQUIRED_ROLES=(cloudbuild.builds.editor container.admin logging.logWriter monitoring.admin pubsub.admin storage.admin)
EXISTING_ROLES=$(gcloud projects get-iam-policy $PROJECT_ID \
--filter="bindings.members:$GSA_EMAIL" \
--format="value(bindings.role)" \
--flatten="bindings[].members")
for r in "${K8S_REQUIRED_ROLES[@]}"; do
if [ -z "$(echo $EXISTING_ROLES | grep $r)" ]; then
bold "Assigning role $r..."
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:$GSA_EMAIL" \
--role="roles/$r" \
--format="none"
fi
done
bold "Creating Cloud IAM policy binding between Kubernetes service account halyard/$KSA_NAME and Google service account $GSA_NAME..."
gcloud iam service-accounts add-iam-policy-binding \
--role="roles/iam.workloadIdentityUser" \
--member="serviceAccount:$PROJECT_ID.svc.id.goog[halyard/$KSA_NAME]" \
--project=$PROJECT_ID \
$GSA_NAME@$PROJECT_ID.iam.gserviceaccount.com
bold "Creating Cloud IAM policy binding between Kubernetes service account spinnaker/$KSA_NAME and Google service account $GSA_NAME..."
gcloud iam service-accounts add-iam-policy-binding \
--role="roles/iam.workloadIdentityUser" \
--member="serviceAccount:$PROJECT_ID.svc.id.goog[spinnaker/$KSA_NAME]" \
--project=$PROJECT_ID \
$GSA_NAME@$PROJECT_ID.iam.gserviceaccount.com
bold "Annotating Kubernetes service account halyard/$KSA_NAME with Google service account to use ($GSA_NAME)..."
kubectl annotate serviceaccount \
--namespace halyard \
$KSA_NAME \
iam.gke.io/gcp-service-account=$GSA_NAME@$PROJECT_ID.iam.gserviceaccount.com
bold "Annotating Kubernetes service account spinnaker/$KSA_NAME with Google service account to use ($GSA_NAME)..."
kubectl annotate serviceaccount \
--namespace spinnaker \
$KSA_NAME \
iam.gke.io/gcp-service-account=$GSA_NAME@$PROJECT_ID.iam.gserviceaccount.com
NODE_POOL_NAME=$(gcloud container clusters describe $GKE_CLUSTER \
--zone=$ZONE \
--format="value(nodePools[0].name)" \
--project=$PROJECT_ID)
bold "Enabling GKE_METADATA_SERVER on node pool $NODE_POOL_NAME..."
gcloud beta container node-pools update $NODE_POOL_NAME \
--cluster=$GKE_CLUSTER \
--zone=$ZONE \
--workload-metadata-from-node=GKE_METADATA_SERVER \
--project=$PROJECT_ID
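The role-assignment loop above checks membership with a plain `grep`, where `.` is a regex metacharacter, so in principle a role name can over-match. A sketch of the same diff with fixed-string, whole-word matching; the bindings and role list here are illustrative stand-ins for the `gcloud projects get-iam-policy` output:

```shell
# Hypothetical existing bindings, as returned by `gcloud projects get-iam-policy`.
EXISTING_ROLES="roles/container.admin roles/storage.admin"

MISSING=""
for r in cloudbuild.builds.editor container.admin storage.admin; do
  # -F -w: fixed-string, whole-word match, so '.' is literal and partial
  # role names cannot over-match.
  if ! echo "$EXISTING_ROLES" | grep -Fqw "roles/$r"; then
    MISSING="$MISSING $r"
  fi
done
echo "missing:$MISSING"
```

Only roles absent from the existing bindings would then be passed to `gcloud projects add-iam-policy-binding`, keeping the script idempotent across re-runs.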
================================================
FILE: scripts/expose/backend-config.yml
================================================
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
name: config-default
namespace: spinnaker
spec:
iap:
enabled: true
oauthclientCredentials:
secretName: $SECRET_NAME
================================================
FILE: scripts/expose/configure_endpoint.sh
================================================
#!/usr/bin/env bash
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
pushd ~/cloudshell_open/spinnaker-for-gcp/scripts
source ./install/properties
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/check_project_mismatch.sh
DOMAIN_NAME_LENGTH=$(echo -n $DOMAIN_NAME | wc -m)
if [ "$DOMAIN_NAME_LENGTH" -gt "63" ]; then
echo "Domain name $DOMAIN_NAME is greater than 63 characters. Please specify a \
domain name not longer than 63 characters. The domain name is specified in the \
$HOME/cloudshell_open/spinnaker-for-gcp/scripts/install/properties file."
exit 1
fi
export IP_ADDR=$(gcloud compute addresses list --filter="name=$STATIC_IP_NAME" \
--format="value(address)" --global --project $PROJECT_ID)
if [ -z "$IP_ADDR" ]; then
bold "Creating static IP address $STATIC_IP_NAME..."
gcloud compute addresses create $STATIC_IP_NAME --global --project $PROJECT_ID
export IP_ADDR=$(gcloud compute addresses list --filter="name=$STATIC_IP_NAME" \
--format="value(address)" --global --project $PROJECT_ID)
else
bold "Using existing static IP address $STATIC_IP_NAME ($IP_ADDR)..."
fi
if [ $DOMAIN_NAME = "$DEPLOYMENT_NAME.endpoints.$PROJECT_ID.cloud.goog" ]; then
EXISTING_SERVICE_NAME=$(gcloud endpoints services list \
--filter="serviceName=$DOMAIN_NAME" --format="value(serviceName)" \
--project $PROJECT_ID)
if [ -z "$EXISTING_SERVICE_NAME" ]; then
gcurl() {
curl -s -H "Authorization:Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" -H "Accept: application/json" \
-H "X-Goog-User-Project: $PROJECT_ID" $*
}
bold "Creating service $DOMAIN_NAME..."
gcurl -X POST -d \
"{\"serviceName\":\"$DOMAIN_NAME\",\"producerProjectId\":\"$PROJECT_ID\"}" \
https://servicemanagement.googleapis.com/v1/services/
while [ -z "$SERVICE_NAME" ]; do
SERVICE_NAME=$(gcloud endpoints services list \
--filter="serviceName:$DOMAIN_NAME" \
--format="value(serviceName)")
sleep 5
echo -n .
done
echo
else
bold "Using existing service $EXISTING_SERVICE_NAME..."
fi
# The service can exist without an endpoint configuration. The presence of the
# service configuration title is sufficient to indicate that we have configured
# the endpoint.
EXISTING_SERVICE_CONFIGURATION_NAME=$(gcloud endpoints services list \
--filter="serviceName=$DOMAIN_NAME" --format="value(serviceConfig.title)" \
--project $PROJECT_ID)
if [ -z "$EXISTING_SERVICE_CONFIGURATION_NAME" ]; then
bold "Deploying service endpoint configuration for $DOMAIN_NAME..."
cat expose/openapi.yml | envsubst > expose/openapi_expanded.yml
gcloud endpoints services deploy expose/openapi_expanded.yml --project $PROJECT_ID
else
bold "Using existing service endpoint configuration for $DOMAIN_NAME..."
fi
else
CURRENT_IP_ADDR=$(dig +short $DOMAIN_NAME)
if [ -z "$CURRENT_IP_ADDR" ]; then
CURRENT_IP_ADDR="UNRESOLVABLE"
fi
bold "Using existing domain $DOMAIN_NAME ($CURRENT_IP_ADDR)..."
if [ "$CURRENT_IP_ADDR" != "$IP_ADDR" ]; then
bold "** This domain currently resolves to $CURRENT_IP_ADDR
** You must configure $DOMAIN_NAME's DNS settings such that it instead resolves to $IP_ADDR"
fi
fi
EXISTING_MANAGED_CERT=$(gcloud beta compute ssl-certificates list \
--filter="name=$MANAGED_CERT" --format="value(name)" --project $PROJECT_ID)
if [ -z "$EXISTING_MANAGED_CERT" ]; then
bold "Creating managed SSL certificate $MANAGED_CERT for domain $DOMAIN_NAME..."
gcloud beta compute ssl-certificates create $MANAGED_CERT --domains $DOMAIN_NAME --global \
--project $PROJECT_ID
else
bold "Using existing managed SSL certificate $EXISTING_MANAGED_CERT..."
fi
./expose/launch_configure_iap.sh
popd
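The 63-character cap enforced near the top of this script reflects the length limit on managed-certificate domains; the check itself is a small, testable unit. A sketch, with the function name and sample domains as illustrative choices:

```shell
# Returns 0 if the domain fits the 63-character limit, as the
# configure_endpoint.sh guard requires.
domain_length_ok() {
  local len
  len=$(echo -n "$1" | wc -m)
  [ "$len" -le 63 ]
}

domain_length_ok "spinnaker.endpoints.my-project.cloud.goog" && echo "ok"
domain_length_ok "$(printf 'a%.0s' $(seq 1 64))" || echo "too long"
```

The default `$DEPLOYMENT_NAME.endpoints.$PROJECT_ID.cloud.goog` form can exceed this limit for long project IDs, which is why the guard runs before any resources are created.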
================================================
FILE: scripts/expose/configure_hal_security.sh
================================================
#!/usr/bin/env bash
~/hal/hal config security api edit --override-base-url https://$DOMAIN_NAME/gate
~/hal/hal config security ui edit --override-base-url https://$DOMAIN_NAME
~/hal/hal config security authn iap edit --audience $AUD_CLAIM
~/hal/hal config security authn iap enable
================================================
FILE: scripts/expose/configure_iap.md
================================================
# Expose Spinnaker
### Configure OAuth consent screen
Go to the [OAuth consent screen](https://console.developers.google.com/apis/credentials/consent?project=$PROJECT_ID).
Enter an *Application name* (e.g. My Spinnaker) and your *Email address*, and click *Save*.
### Create OAuth credentials
Go to the [Credentials page](https://console.developers.google.com/apis/credentials/oauthclient?project=$PROJECT_ID) and create an *OAuth client ID*.
Select *Application type: Web application* and click *Create*.
Ensure that you note the generated *Client ID* and *Client secret* for your new credentials, as you will need to provide them to the script in the next step.
### Expose Spinnaker and allow for secure access via IAP
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/expose/configure_iap.sh
```
There will be one final IAP configuration step described in the terminal.
This phase could take 30-60 minutes. **Spinnaker will be inaccessible during this time.**
## Conclusion
Connect to your Spinnaker installation [here](https://$DOMAIN_NAME).
### View Spinnaker Audit Log
View the who, what, when and where of your Spinnaker installation
[here](https://console.developers.google.com/logs/viewer?project=$PROJECT_ID&resource=cloud_function&logName=projects%2F$PROJECT_ID%2Flogs%2F$CLOUD_FUNCTION_NAME&minLogLevel=200).
================================================
FILE: scripts/expose/configure_iap.sh
================================================
#!/usr/bin/env bash
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
pushd ~/cloudshell_open/spinnaker-for-gcp/scripts
source ./install/properties
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/check_project_mismatch.sh
EXISTING_SECRET_NAME=$(kubectl get secret -n spinnaker \
--field-selector metadata.name=="$SECRET_NAME" \
-o json | jq .items[0].metadata.name)
if [ $EXISTING_SECRET_NAME == 'null' ]; then
bold "Creating Kubernetes secret $SECRET_NAME..."
read -p 'Enter your OAuth credentials Client ID: ' CLIENT_ID
read -p 'Enter your OAuth credentials Client secret: ' CLIENT_SECRET
cat >~/.spin/config <<EOL
gate:
endpoint: https://$DOMAIN_NAME/gate
auth:
enabled: true
iap:
# check detailed config in https://cloud.google.com/iap/docs/authentication-howto#authenticating_from_a_desktop_app
iapClientId: $CLIENT_ID
serviceAccountKeyPath: "$HOME/.spin/key.json"
EOL
SA_EMAIL=$(gcloud iam service-accounts --project $PROJECT_ID list \
--filter="displayName:$SERVICE_ACCOUNT_NAME" \
--format='value(email)')
gcloud iam service-accounts keys create ~/.spin/key.json \
--iam-account $SA_EMAIL \
--project $PROJECT_ID
kubectl create secret generic $SECRET_NAME -n spinnaker --from-literal=client_id=$CLIENT_ID \
--from-literal=client_secret=$CLIENT_SECRET
else
bold "Using existing Kubernetes secret $SECRET_NAME..."
fi
envsubst < expose/backend-config.yml | kubectl apply -f -
# Associate deck service with backend config.
kubectl patch svc -n spinnaker spin-deck --patch \
"[{'op': 'add', 'path': '/metadata/annotations/beta.cloud.google.com~1backend-config', \
'value':'{\"default\": \"config-default\"}'}]" --type json
# Change spin-deck service to NodePort:
DECK_SERVICE_TYPE=$(kubectl get service -n spinnaker spin-deck \
--output=jsonpath={.spec.type})
if [ $DECK_SERVICE_TYPE != 'NodePort' ]; then
bold "Patching spin-deck service to be NodePort instead of $DECK_SERVICE_TYPE..."
kubectl patch service -n spinnaker spin-deck --patch \
"[{'op': 'replace', 'path': '/spec/type', \
'value':'NodePort'}]" --type json
else
bold "Service spin-deck is already NodePort..."
fi
# Create ingress:
bold $(envsubst < expose/deck-ingress.yml | kubectl apply -f -)
source expose/set_iap_properties.sh
gcurl() {
curl -s -H "Authorization:Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" -H "Accept: application/json" \
-H "X-Goog-User-Project: $PROJECT_ID" $*
}
export IAP_IAM_POLICY_ETAG=$(gcurl -X POST -d '{"options":{"requested_policy_version":3}}' \
https://iap.googleapis.com/v1beta1/projects/$PROJECT_NUMBER/iap_web/compute/services/$BACKEND_SERVICE_ID:getIamPolicy | jq .etag)
cat expose/iap_policy.json | envsubst | gcurl -X POST -d @- \
https://iap.googleapis.com/v1beta1/projects/$PROJECT_NUMBER/iap_web/compute/services/$BACKEND_SERVICE_ID:setIamPolicy
bold "Configuring Spinnaker security settings..."
cat expose/configure_hal_security.sh | envsubst | bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/update_landing_page.sh
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/push_and_apply.sh
bold "ACTION REQUIRED:"
bold " - Navigate to: https://console.developers.google.com/apis/credentials/oauthclient/$CLIENT_ID?project=$PROJECT_ID"
bold " - Add https://iap.googleapis.com/v1/oauth/clientIds/$CLIENT_ID:handleRedirect to your Web client ID as an Authorized redirect URI."
# # What about CORS?
# # Wait for services to come online again (steal logic from setup.sh):
popd
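The IAP IAM calls above send literal JSON request bodies, so the quoting matters: single quotes keep the inner double quotes intact, whereas a double-quoted form like `-d "{"options":…}"` would strip them before curl ever sees the body. A standalone check of the quoting:

```shell
# Single quotes preserve the inner double quotes required by the IAP API.
BODY='{"options":{"requested_policy_version":3}}'
echo "$BODY"
```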
================================================
FILE: scripts/expose/deck-ingress.yml
================================================
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: deck-ingress
namespace: spinnaker
annotations:
ingress.gcp.kubernetes.io/pre-shared-cert: $MANAGED_CERT
kubernetes.io/ingress.global-static-ip-name: $STATIC_IP_NAME
spec:
backend:
serviceName: spin-deck
servicePort: 9000
================================================
FILE: scripts/expose/iap_policy.json
================================================
{
"policy": {
"etag": $IAP_IAM_POLICY_ETAG,
"bindings": [
{
"role": "roles/iap.httpsResourceAccessor",
"members": [
"serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com",
"user:$IAP_USER"
]
}
]
}
}
================================================
FILE: scripts/expose/launch_configure_iap.sh
================================================
#!/usr/bin/env bash
pushd ~/cloudshell_open/spinnaker-for-gcp/scripts
source ./install/properties
cat expose/configure_iap.md | envsubst > expose/configure_iap_expanded.md
cloudshell launch-tutorial expose/configure_iap_expanded.md
popd
================================================
FILE: scripts/expose/openapi.yml
================================================
swagger: "2.0"
info:
title: Spinnaker for GCP - $PROJECT_ID
version: 1.0.0
host: $DOMAIN_NAME
x-google-endpoints:
- name: $DOMAIN_NAME
target: $IP_ADDR
x-google-allow: all
basePath: /
paths: {}
================================================
FILE: scripts/expose/set_iap_properties.sh
================================================
#!/usr/bin/env bash
if [ -z $CLIENT_ID ]; then
SECRET_JSON=$(kubectl get secret -n spinnaker $SECRET_NAME -o json)
export CLIENT_ID=$(echo $SECRET_JSON | jq -r .data.client_id | base64 -d)
export CLIENT_SECRET=$(echo $SECRET_JSON | jq -r .data.client_secret | base64 -d)
fi
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
bold "Querying for backend service id..."
export BACKEND_SERVICE_ID=$(gcloud compute backend-services list --project $PROJECT_ID \
--filter="iap.oauth2ClientId:$CLIENT_ID AND description:spinnaker/spin-deck" --format="value(id)")
while [ -z "$BACKEND_SERVICE_ID" ]; do
bold "Waiting for backend service to be provisioned..."
sleep 30
export BACKEND_SERVICE_ID=$(gcloud compute backend-services list --project $PROJECT_ID \
--filter="iap.oauth2ClientId:$CLIENT_ID AND description:spinnaker/spin-deck" --format="value(id)")
done
export AUD_CLAIM=/projects/$PROJECT_NUMBER/global/backendServices/$BACKEND_SERVICE_ID
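The decode step above relies on Kubernetes storing secret values base64-encoded in the `data` field. The round trip in isolation, with a sample value standing in for what the real script pulls from the `spinnaker` namespace via `kubectl` and `jq`:

```shell
# Kubernetes stores secret data base64-encoded; simulate the stored form.
ENCODED_ID=$(printf '%s' 'my-oauth-client-id' | base64)

# Decoding recovers the original value, as set_iap_properties.sh does.
CLIENT_ID=$(printf '%s' "$ENCODED_ID" | base64 -d)
echo "$CLIENT_ID"
```

Using `printf '%s'` rather than `echo` avoids a trailing newline sneaking into the encoded value, which would otherwise surface as a spurious character after decoding.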
================================================
FILE: scripts/install/instructions.txt
================================================
+-------------------------------------------------------------------------------------------------------+
| |
| To reopen the installation instructions in the right-hand pane at any time, enter: |
| |
| cloudshell launch-tutorial ~/cloudshell_open/spinnaker-for-gcp/scripts/install/provision-spinnaker.md |
| |
+-------------------------------------------------------------------------------------------------------+
================================================
FILE: scripts/install/provision-spinnaker.md
================================================
# Install Spinnaker
## Select GCP project
Select the project in which you'll install Spinnaker, then click **Start**, below.
<walkthrough-project-billing-setup>
</walkthrough-project-billing-setup>
## Spinnaker Installation
Click the **Copy to Cloud Shell** button for each command below, then press **Enter**
to run each command.
### Configure Git
If you haven't already configured Git, use the commands below to do so now.
Replace `[EMAIL_ADDRESS]` with your Git email address, and replace `[USERNAME]`
with your Git username.
```bash
git config --global user.email "[EMAIL_ADDRESS]"
git config --global user.name "[USERNAME]"
```
### Configure the environment
Now let's provision Spinnaker within your project {{project-id}}.
```bash
PROJECT_ID={{project-id}} ~/cloudshell_open/spinnaker-for-gcp/scripts/install/setup_properties.sh
```
After that script finishes, you can use the command below to open the properties file for your Spinnaker
installation. This is optional.
```bash
cloudshell edit ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties
```
**Proceed with caution**. If you edit this file, the installation might not work
as expected.
### Begin the installation
**This will take some time**
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/install/setup.sh
```
Watch the Cloud Shell command line to see when it completes, then click
**Next** to continue to the next step.
## Connect to Spinnaker
You'll now run commands to...
* connect to Spinnaker
* open the Spinnaker UI (Deck) in a browser window
You have two choices:
* forward port 8080 to tunnel to Spinnaker from your Cloud Shell
* expose Deck securely via a public IP
### Forward the port to Deck, and connect
Don't use the `hal deploy connect` command. Instead, use the following command
only.
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/connect_unsecured.sh
```
To connect to the Deck UI, click on the Preview button above and select "Preview on port 8080":

### Expose Spinnaker publicly
If you would like to connect to Spinnaker without relying on port forwarding, we can
expose it via a secure domain behind the [Identity-Aware Proxy](https://cloud.google.com/iap/).
Note that this phase could take 30-60 minutes. **Spinnaker will be inaccessible during this time.**
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/expose/configure_endpoint.sh
```
## Next steps: manage Spinnaker
Now that you've installed Spinnaker on Google Kubernetes Engine, and
accessed it via port forwarding or made it available over the public
internet, you'll use this same console to manage your Spinnaker instance.
You can open this console by navigating to the Kubernetes Application on the
[Applications](https://console.developers.google.com/kubernetes/application?project={{project-id}})
view. The application's *Next Steps* section contains the relevant links and
operator instructions.
You can...
* Use [Halyard](https://www.spinnaker.io/reference/halyard/) to further
configure Spinnaker
* Add provider accounts
* Upgrade Spinnaker
* Add more operators
To start managing Spinnaker:
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/update_console.sh
```
================================================
FILE: scripts/install/quick-install.yml
================================================
apiVersion: v1
kind: Namespace
metadata:
name: halyard
---
apiVersion: v1
kind: Namespace
metadata:
name: spinnaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: spinnaker-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: default
namespace: halyard
- kind: ServiceAccount
name: default
namespace: spinnaker
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: halyard-pv-claim
namespace: halyard
labels:
app: halyard-storage-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: standard
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: spin-halyard
namespace: halyard
labels:
app: spin
stack: halyard
spec:
serviceName: spin-halyard
replicas: 1
selector:
matchLabels:
app: spin
stack: halyard
template:
metadata:
labels:
app: spin
stack: halyard
spec:
securityContext:
runAsGroup: 1000
runAsUser: 1000
fsGroup: 1000
containers:
- name: halyard-daemon
image: us-docker.pkg.dev/spinnaker-community/docker/halyard:$HALYARD_VERSION
imagePullPolicy: Always
command:
- /bin/sh
args:
- -c
# We persist the files on a PersistentVolume. To have sane defaults,
# we initialise those files from a ConfigMap if they don't already exist.
- "test -f /home/spinnaker/.hal/config || cp -R /home/spinnaker/staging/.hal/. /home/spinnaker/.hal/ && /opt/halyard/bin/halyard"
readinessProbe:
exec:
command:
- wget
- -q
- --spider
- http://localhost:8064/health
resources:
requests:
cpu: 10m
memory: 256Mi
ports:
- containerPort: 8064
volumeMounts:
- name: persistentconfig
mountPath: /home/spinnaker/.hal
- name: halconfig
mountPath: /home/spinnaker/staging/.hal/config
subPath: config
- name: halconfig
mountPath: /home/spinnaker/staging/.hal/default/service-settings/deck.yml
subPath: deck.yml
- name: halconfig
mountPath: /home/spinnaker/staging/.hal/default/service-settings/gate.yml
subPath: gate.yml
- name: halconfig
mountPath: /home/spinnaker/staging/.hal/default/service-settings/fiat.yml
subPath: fiat.yml
- name: halconfig
mountPath: /home/spinnaker/staging/.hal/default/service-settings/redis.yml
subPath: redis.yml
- name: halconfig
mountPath: /home/spinnaker/staging/.hal/default/profiles/clouddriver-local.yml
subPath: clouddriver-local.yml
- name: halconfig
mountPath: /home/spinnaker/staging/.hal/default/profiles/echo-local.yml
subPath: echo-local.yml
- name: halconfig
mountPath: /home/spinnaker/staging/.hal/default/profiles/front50-local.yml
subPath: front50-local.yml
- name: halconfig
mountPath: /home/spinnaker/staging/.hal/default/profiles/gate-local.yml
subPath: gate-local.yml
- name: halconfig
mountPath: /home/spinnaker/staging/.hal/default/profiles/igor-local.yml
subPath: igor-local.yml
volumes:
- name: halconfig
configMap:
name: halconfig
- name: persistentconfig
persistentVolumeClaim:
claimName: halyard-pv-claim
---
apiVersion: v1
kind: Service
metadata:
name: spin-halyard
namespace: halyard
spec:
ports:
- port: 8064
targetPort: 8064
protocol: TCP
selector:
app: spin
stack: halyard
---
apiVersion: v1
kind: ConfigMap
metadata:
name: halconfig
namespace: halyard
data:
deck.yml: |
host: 0.0.0.0
env:
API_HOST: http://spin-gate.spinnaker:8084
fiat.yml: |
enabled: false
skipLifeCycleManagement: true
gate.yml: |
host: 0.0.0.0
clouddriver-local.yml: |
kubernetes.v2.managedBySuffix: for-gcp
echo-local.yml: |
rest:
enabled: true
endpoints:
- wrap: true
flatten: false
url: https://$REGION-$PROJECT_ID.cloudfunctions.net/$CLOUD_FUNCTION_NAME
username: $AUDIT_LOG_UNAME
password: $AUDIT_LOG_PW
eventName: spinnaker_events
front50-local.yml: |
spinnaker.s3.versioning: false
gate-local.yml: |
redis.configuration.secure: true
igor-local.yml: |
locking:
enabled: true
redis.yml: |
overrideBaseUrl: redis://$REDIS_INSTANCE_HOST:6379
skipLifeCycleManagement: true
config: |
currentDeployment: default
deploymentConfigurations:
- name: default
version: $SPINNAKER_VERSION
providers:
appengine:
enabled: false
accounts: []
aws:
enabled: false
accounts: []
bakeryDefaults:
baseImages: []
defaultKeyPairTemplate: '{{name}}-keypair'
defaultRegions:
- name: us-west-2
defaults:
iamRole: BaseIAMRole
ecs:
enabled: false
accounts: []
azure:
enabled: false
accounts: []
bakeryDefaults:
templateFile: azure-linux.json
baseImages: []
dcos:
enabled: false
accounts: []
clusters: []
dockerRegistry:
enabled: false
accounts: []
google:
enabled: false
accounts: []
bakeryDefaults:
templateFile: gce.json
baseImages: []
zone: us-central1-f
network: default
useInternalIp: false
kubernetes:
enabled: true
accounts:
- name: spinnaker-install-account
requiredGroupMembership: []
providerVersion: V2
permissions: {}
dockerRegistries: []
configureImagePullSecrets: true
serviceAccount: true
cacheThreads: 1
namespaces: []
omitNamespaces:
- halyard
- kube-public
- kube-system
- spinnaker
kinds: []
omitKinds: []
customResources: []
cachingPolicies: []
oAuthScopes: []
primaryAccount: spinnaker-install-account
oracle:
enabled: false
accounts: []
bakeryDefaults:
templateFile: oci.json
baseImages: []
cloudfoundry:
enabled: false
accounts: []
deploymentEnvironment:
size: SMALL
type: Distributed
accountName: spinnaker-install-account
imageVariant: SLIM
updateVersions: true
consul:
enabled: false
vault:
enabled: false
customSizing: {}
sidecars: {}
initContainers: {}
hostAliases: {}
affinity: {}
tolerations: {}
nodeSelectors: {}
gitConfig:
upstreamUser: spinnaker
livenessProbeConfig:
enabled: false
haServices:
clouddriver:
enabled: false
disableClouddriverRoDeck: false
echo:
enabled: false
persistentStorage:
persistentStoreType: gcs
azs: {}
gcs:
project: $PROJECT_ID
bucket: $BUCKET_NAME
rootFolder: front50
redis: {}
s3:
rootFolder: front50
oracle: {}
features:
auth: false
fiat: false
chaos: false
entityTags: false
artifacts: true
metricStores:
datadog:
enabled: false
tags: []
prometheus:
enabled: false
add_source_metalabels: true
stackdriver:
enabled: false
newrelic:
enabled: false
tags: []
period: 30
enabled: false
notifications:
slack:
enabled: false
twilio:
enabled: false
baseUrl: https://api.twilio.com/
github-status:
enabled: false
timezone: $TIMEZONE
ci:
jenkins:
enabled: false
masters: []
travis:
enabled: false
masters: []
wercker:
enabled: false
masters: []
concourse:
enabled: false
masters: []
gcb:
enabled: true
accounts:
- name: gcb-account
permissions: {}
project: $PROJECT_ID
subscriptionName: $GCB_PUBSUB_SUBSCRIPTION
repository:
artifactory:
enabled: false
searches: []
security:
apiSecurity:
ssl:
enabled: false
overrideBaseUrl: /gate
uiSecurity:
ssl:
enabled: false
authn:
oauth2:
enabled: false
client: {}
resource: {}
userInfoMapping: {}
saml:
enabled: false
userAttributeMapping: {}
ldap:
enabled: false
x509:
enabled: false
iap:
enabled: false
enabled: false
authz:
groupMembership:
service: EXTERNAL
google:
roleProviderType: GOOGLE
github:
roleProviderType: GITHUB
file:
roleProviderType: FILE
ldap:
roleProviderType: LDAP
enabled: false
artifacts:
bitbucket:
enabled: false
accounts: []
gcs:
enabled: true
accounts:
- name: gcs-install-account
oracle:
enabled: false
accounts: []
github:
enabled: false
accounts: []
gitlab:
enabled: false
accounts: []
http:
enabled: false
accounts: []
helm:
enabled: false
accounts: []
s3:
enabled: false
accounts: []
maven:
enabled: false
accounts: []
templates: []
pubsub:
enabled: true
google:
enabled: true
pubsubType: GOOGLE
subscriptions:
- name: gcr-pub-sub
project: $PROJECT_ID
subscriptionName: $GCR_PUBSUB_SUBSCRIPTION
ackDeadlineSeconds: 10
messageFormat: GCR
publishers:
- name: $PUBSUB_NOTIFICATION_PUBLISHER
project: $PROJECT_ID
topicName: $PUBSUB_NOTIFICATION_TOPIC
content: NOTIFICATIONS
canary:
enabled: true
serviceIntegrations:
- name: google
enabled: true
accounts:
- name: my-google-account
project: $PROJECT_ID
bucket: $BUCKET_NAME
rootFolder: kayenta
supportedTypes:
- METRICS_STORE
- CONFIGURATION_STORE
- OBJECT_STORE
gcsEnabled: true
stackdriverEnabled: true
- name: prometheus
enabled: false
accounts: []
- name: datadog
enabled: false
accounts: []
- name: signalfx
enabled: false
accounts: []
- name: aws
enabled: false
accounts: []
s3Enabled: false
reduxLoggerEnabled: true
defaultJudge: NetflixACAJudge-v1.0
stagesEnabled: true
templatesEnabled: true
showAllConfigsEnabled: true
plugins:
plugins: []
enabled: false
downloadingEnabled: false
pluginConfigurations:
plugins: {}
webhook:
trust:
enabled: false
telemetry:
enabled: false
endpoint: https://stats.spinnaker.io
instanceId:
connectionTimeoutMillis: 3000
readTimeoutMillis: 5000
---
apiVersion: batch/v1
kind: Job
metadata:
name: hal-deploy-apply
namespace: halyard
labels:
app: job
stack: hal-deploy
spec:
template:
metadata:
labels:
app: job
stack: hal-deploy
spec:
restartPolicy: OnFailure
containers:
- name: hal-deploy-apply
# TODO: Use a custom image.
image: us-docker.pkg.dev/spinnaker-community/docker/halyard:$HALYARD_VERSION
command:
- /bin/sh
args:
- -c
- "hal deploy apply --daemon-endpoint http://spin-halyard.halyard:8064"
================================================
FILE: scripts/install/setup.sh
================================================
#!/usr/bin/env bash
err() {
echo "$*" >&2;
}
[ -z "$PARENT_DIR" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)
source $PARENT_DIR/spinnaker-for-gcp/scripts/manage/service_utils.sh
check_for_required_binaries
PARENT_DIR=$PARENT_DIR $PARENT_DIR/spinnaker-for-gcp/scripts/manage/check_git_config.sh || exit 1
[ -z "$PROPERTIES_FILE" ] && PROPERTIES_FILE="$PARENT_DIR/spinnaker-for-gcp/scripts/install/properties"
source "$PROPERTIES_FILE"
check_for_shared_vpc $CI
PARENT_DIR=$PARENT_DIR PROPERTIES_FILE=$PROPERTIES_FILE $PARENT_DIR/spinnaker-for-gcp/scripts/manage/check_project_mismatch.sh
OPERATOR_SA_EMAIL=$(gcloud config list account --format "value(core.account)" --project $PROJECT_ID)
SETUP_EXISTING_ROLES=$(gcloud projects get-iam-policy --filter bindings.members:$OPERATOR_SA_EMAIL $PROJECT_ID \
--flatten bindings[].members --format="value(bindings.role)")
if [ -z "$SETUP_EXISTING_ROLES" ]; then
bold "Unable to verify that the service account \"$OPERATOR_SA_EMAIL\" has the required IAM roles."
bold "\"$OPERATOR_SA_EMAIL\" requires the IAM role \"Project IAM Admin\" to proceed."
exit 1
fi
if [ -z "$(echo $SETUP_EXISTING_ROLES | grep roles/owner)" ]; then
SETUP_REQUIRED_ROLES=(cloudfunctions.developer compute.networkViewer container.admin iam.serviceAccountCreator iam.serviceAccountUser pubsub.editor redis.admin serviceusage.serviceUsageAdmin source.admin storage.admin)
MISSING_ROLES=""
for r in "${SETUP_REQUIRED_ROLES[@]}"; do
if [ -z "$(echo $SETUP_EXISTING_ROLES | grep $r)" ]; then
if [ -z "$MISSING_ROLES" ]; then
MISSING_ROLES="$r"
else
MISSING_ROLES="$MISSING_ROLES, $r"
fi
fi
done
if [ -n "$MISSING_ROLES" ]; then
bold "The service account in use, \"$OPERATOR_SA_EMAIL\", is missing the following required role(s): $MISSING_ROLES."
bold "Add the required role(s) and try re-running the script."
exit 1
fi
fi
REQUIRED_APIS="cloudbuild.googleapis.com cloudfunctions.googleapis.com container.googleapis.com endpoints.googleapis.com iap.googleapis.com monitoring.googleapis.com redis.googleapis.com sourcerepo.googleapis.com"
NUM_REQUIRED_APIS=$(wc -w <<< "$REQUIRED_APIS")
NUM_ENABLED_APIS=$(gcloud services list --project $PROJECT_ID \
--filter="config.name:($REQUIRED_APIS)" \
--format="value(config.name)" | wc -l)
if [ $NUM_ENABLED_APIS != $NUM_REQUIRED_APIS ]; then
bold "Enabling required APIs ($REQUIRED_APIS) in $PROJECT_ID..."
bold "This phase will take a few minutes (progress will not be reported during this operation)."
bold
bold "Once the required APIs are enabled, the remaining components will be installed and configured. The entire installation may take 10 minutes or more."
gcloud services --project $PROJECT_ID enable $REQUIRED_APIS
fi
if [ "$PROJECT_ID" != "$NETWORK_PROJECT" ]; then
# Cloud Memorystore for Redis requires the Redis instance to be deployed in the Shared VPC
# host project: https://cloud.google.com/memorystore/docs/redis/networking#limited_and_unsupported_networks
if [ ! $(has_service_enabled $NETWORK_PROJECT redis.googleapis.com) ]; then
bold "Enabling redis.googleapis.com in $NETWORK_PROJECT..."
gcloud services --project $NETWORK_PROJECT enable redis.googleapis.com
fi
fi
source $PARENT_DIR/spinnaker-for-gcp/scripts/manage/cluster_utils.sh
CLUSTER_EXISTS=$(check_for_existing_cluster)
if [ -n "$CLUSTER_EXISTS" ]; then
check_existing_cluster_location
bold "Retrieving credentials for GKE cluster $GKE_CLUSTER..."
gcloud container clusters get-credentials $GKE_CLUSTER --zone $ZONE --project $PROJECT_ID
bold "Checking for Spinnaker application in cluster $GKE_CLUSTER..."
SPINNAKER_APPLICATION_LIST_JSON=$(kubectl get applications -n spinnaker -l app.kubernetes.io/name=spinnaker --output json)
SPINNAKER_APPLICATION_COUNT=$(echo $SPINNAKER_APPLICATION_LIST_JSON | jq '.items | length')
if [ -n "$SPINNAKER_APPLICATION_COUNT" ] && [ "$SPINNAKER_APPLICATION_COUNT" != "0" ]; then
bold "The GKE cluster $GKE_CLUSTER already contains an installed Spinnaker application."
if [ "$SPINNAKER_APPLICATION_COUNT" == "1" ]; then
EXISTING_SPINNAKER_APPLICATION_NAME=$(echo $SPINNAKER_APPLICATION_LIST_JSON | jq -r '.items[0].metadata.name')
if [ "$EXISTING_SPINNAKER_APPLICATION_NAME" == "$DEPLOYMENT_NAME" ]; then
bold "Name of existing Spinnaker application matches name specified in properties file; carrying on with installation..."
else
bold "Please choose another cluster."
exit 1
fi
else
# Should never be more than 1 deployment in a cluster, but protect against it just in case.
bold "Please choose another cluster."
exit 1
fi
fi
fi
NETWORK_SUBNET_MODE=$(gcloud compute networks list --project $NETWORK_PROJECT \
--filter "name=$NETWORK" \
--format "value(x_gcloud_subnet_mode)")
if [ -z "$NETWORK_SUBNET_MODE" ]; then
bold "Network $NETWORK was not found in project $NETWORK_PROJECT."
exit 1
elif [ "$NETWORK_SUBNET_MODE" = "LEGACY" ]; then
bold "Network $NETWORK is a legacy network. This installation requires a" \
"non-legacy network. Please specify a non-legacy network in" \
"$PROPERTIES_FILE and re-run this script."
exit 1
fi
# Verify that the subnet exists in the network.
SUBNET_CHECK=$(gcloud compute networks subnets list --project=$NETWORK_PROJECT \
--network=$NETWORK --filter "region: ($REGION) AND name: ($SUBNET)" \
--format "value(name)")
if [ -z "$SUBNET_CHECK" ]; then
bold "Subnet $SUBNET was not found in network $NETWORK" \
"in project $NETWORK_PROJECT. Please specify an existing subnet in" \
"$PROPERTIES_FILE and re-run this script. You can verify" \
"what subnetworks exist in this network by running:"
bold " gcloud compute networks subnets list --project $NETWORK_PROJECT --network=$NETWORK --filter \"region: ($REGION)\""
exit 1
fi
SA_EMAIL=$(gcloud iam service-accounts --project $PROJECT_ID list \
--filter="displayName:$SERVICE_ACCOUNT_NAME" \
--format='value(email)')
if [ -z "$SA_EMAIL" ]; then
bold "Creating service account $SERVICE_ACCOUNT_NAME..."
gcloud iam service-accounts --project $PROJECT_ID create \
$SERVICE_ACCOUNT_NAME \
--display-name $SERVICE_ACCOUNT_NAME
while [ -z "$SA_EMAIL" ]; do
SA_EMAIL=$(gcloud iam service-accounts --project $PROJECT_ID list \
--filter="displayName:$SERVICE_ACCOUNT_NAME" \
--format='value(email)')
sleep 5
done
else
bold "Using existing service account $SERVICE_ACCOUNT_NAME..."
fi
bold "Assigning required roles to $SERVICE_ACCOUNT_NAME..."
K8S_REQUIRED_ROLES=(cloudbuild.builds.editor container.admin logging.logWriter monitoring.admin pubsub.admin storage.admin)
EXISTING_ROLES=$(gcloud projects get-iam-policy --filter bindings.members:$SA_EMAIL $PROJECT_ID \
--flatten bindings[].members --format="value(bindings.role)")
for r in "${K8S_REQUIRED_ROLES[@]}"; do
if [ -z "$(echo $EXISTING_ROLES | grep $r)" ]; then
bold "Assigning role $r..."
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SA_EMAIL \
--role roles/$r \
--format=none
fi
done
export REDIS_INSTANCE_HOST=$(gcloud redis instances list \
--project $NETWORK_PROJECT --region $REGION \
--filter="name=projects/$NETWORK_PROJECT/locations/$REGION/instances/$REDIS_INSTANCE" \
--format="value(host)")
if [ -z "$REDIS_INSTANCE_HOST" ]; then
bold "Creating redis instance $REDIS_INSTANCE in project $NETWORK_PROJECT..."
gcloud redis instances create $REDIS_INSTANCE --project $NETWORK_PROJECT \
--region=$REGION --zone=$ZONE --network=$NETWORK_REFERENCE \
--redis-config=notify-keyspace-events=gxE
export REDIS_INSTANCE_HOST=$(gcloud redis instances list \
--project $NETWORK_PROJECT --region $REGION \
--filter="name=projects/$NETWORK_PROJECT/locations/$REGION/instances/$REDIS_INSTANCE" \
--format="value(host)")
else
bold "Using existing redis instance $REDIS_INSTANCE ($REDIS_INSTANCE_HOST)..."
fi
# TODO: Could verify ACLs here. In the meantime, error messages should suffice.
gsutil ls $BUCKET_URI
if [ $? != 0 ]; then
bold "Creating bucket $BUCKET_URI..."
gsutil mb -p $PROJECT_ID -l $REGION $BUCKET_URI
gsutil versioning set on $BUCKET_URI
else
bold "Using existing bucket $BUCKET_URI..."
fi
if [ -z "$CLUSTER_EXISTS" ]; then
bold "Creating GKE cluster $GKE_CLUSTER..."
# $GKE_RELEASE_CHANNEL is new as of 2021-08-13, so fall back to
# $GKE_CLUSTER_VERSION if it is not set.
if [ -z "$GKE_RELEASE_CHANNEL" ]; then
CLUSTER_VERSION_SPEC="--cluster-version $GKE_CLUSTER_VERSION"
else
CLUSTER_VERSION_SPEC="--release-channel $GKE_RELEASE_CHANNEL"
fi
# TODO: Move some of these config settings to properties file.
# TODO: Should this be regional instead?
eval gcloud beta container clusters create $GKE_CLUSTER --project $PROJECT_ID \
--zone $ZONE --network $NETWORK_REFERENCE --subnetwork $SUBNET_REFERENCE \
$CLUSTER_VERSION_SPEC --machine-type $GKE_MACHINE_TYPE \
--disk-type $GKE_DISK_TYPE --disk-size $GKE_DISK_SIZE --service-account $SA_EMAIL \
--num-nodes $GKE_NUM_NODES --enable-stackdriver-kubernetes --enable-autoupgrade \
--enable-autorepair --enable-ip-alias --addons HorizontalPodAutoscaling,HttpLoadBalancing \
"${CLUSTER_SECONDARY_RANGE_NAME:+'--cluster-secondary-range-name' $CLUSTER_SECONDARY_RANGE_NAME}" \
"${SERVICES_SECONDARY_RANGE_NAME:+'--services-secondary-range-name' $SERVICES_SECONDARY_RANGE_NAME}"
# If the cluster already existed, credentials were retrieved earlier in this script.
bold "Retrieving credentials for GKE cluster $GKE_CLUSTER..."
gcloud container clusters get-credentials $GKE_CLUSTER --zone $ZONE --project $PROJECT_ID
else
bold "Using existing GKE cluster $GKE_CLUSTER..."
check_existing_cluster_prereqs
fi
GCR_PUBSUB_TOPIC_NAME=projects/$PROJECT_ID/topics/gcr
EXISTING_GCR_PUBSUB_TOPIC_NAME=$(gcloud pubsub topics list --project $PROJECT_ID \
--filter="name=$GCR_PUBSUB_TOPIC_NAME" --format="value(name)")
if [ -z "$EXISTING_GCR_PUBSUB_TOPIC_NAME" ]; then
bold "Creating pubsub topic $GCR_PUBSUB_TOPIC_NAME for GCR..."
gcloud pubsub topics create --project $PROJECT_ID $GCR_PUBSUB_TOPIC_NAME
else
bold "Using existing pubsub topic $EXISTING_GCR_PUBSUB_TOPIC_NAME for GCR..."
fi
EXISTING_GCR_PUBSUB_SUBSCRIPTION_NAME=$(gcloud pubsub subscriptions list \
--project $PROJECT_ID \
--filter="name=projects/$PROJECT_ID/subscriptions/$GCR_PUBSUB_SUBSCRIPTION" \
--format="value(name)")
if [ -z "$EXISTING_GCR_PUBSUB_SUBSCRIPTION_NAME" ]; then
bold "Creating pubsub subscription $GCR_PUBSUB_SUBSCRIPTION for GCR..."
gcloud pubsub subscriptions create --project $PROJECT_ID $GCR_PUBSUB_SUBSCRIPTION \
--topic=gcr
else
bold "Using existing pubsub subscription $GCR_PUBSUB_SUBSCRIPTION for GCR..."
fi
GCB_PUBSUB_TOPIC_NAME=projects/$PROJECT_ID/topics/cloud-builds
EXISTING_GCB_PUBSUB_TOPIC_NAME=$(gcloud pubsub topics list --project $PROJECT_ID \
--filter="name=$GCB_PUBSUB_TOPIC_NAME" --format="value(name)")
if [ -z "$EXISTING_GCB_PUBSUB_TOPIC_NAME" ]; then
bold "Creating pubsub topic $GCB_PUBSUB_TOPIC_NAME for GCB..."
gcloud pubsub topics create --project $PROJECT_ID $GCB_PUBSUB_TOPIC_NAME
else
bold "Using existing pubsub topic $EXISTING_GCB_PUBSUB_TOPIC_NAME for GCB..."
fi
EXISTING_GCB_PUBSUB_SUBSCRIPTION_NAME=$(gcloud pubsub subscriptions list \
--project $PROJECT_ID \
--filter="name=projects/$PROJECT_ID/subscriptions/$GCB_PUBSUB_SUBSCRIPTION" \
--format="value(name)")
if [ -z "$EXISTING_GCB_PUBSUB_SUBSCRIPTION_NAME" ]; then
bold "Creating pubsub subscription $GCB_PUBSUB_SUBSCRIPTION for GCB..."
gcloud pubsub subscriptions create --project $PROJECT_ID $GCB_PUBSUB_SUBSCRIPTION \
--topic=projects/$PROJECT_ID/topics/cloud-builds
else
bold "Using existing pubsub subscription $GCB_PUBSUB_SUBSCRIPTION for GCB..."
fi
NOTIFICATION_PUBSUB_TOPIC_NAME=projects/$PROJECT_ID/topics/$PUBSUB_NOTIFICATION_TOPIC
EXISTING_NOTIFICATION_PUBSUB_TOPIC_NAME=$(gcloud pubsub topics list --project $PROJECT_ID \
--filter="name=$NOTIFICATION_PUBSUB_TOPIC_NAME" --format="value(name)")
if [ -z "$EXISTING_NOTIFICATION_PUBSUB_TOPIC_NAME" ]; then
bold "Creating pubsub topic $NOTIFICATION_PUBSUB_TOPIC_NAME for notifications..."
gcloud pubsub topics create --project $PROJECT_ID $NOTIFICATION_PUBSUB_TOPIC_NAME
else
bold "Using existing pubsub topic $EXISTING_NOTIFICATION_PUBSUB_TOPIC_NAME for notifications..."
fi
EXISTING_HAL_DEPLOY_APPLY_JOB_NAME=$(kubectl get job -n halyard \
--field-selector metadata.name=="hal-deploy-apply" \
-o json | jq -r .items[0].metadata.name)
if [ "$EXISTING_HAL_DEPLOY_APPLY_JOB_NAME" != 'null' ]; then
bold "Deleting earlier job $EXISTING_HAL_DEPLOY_APPLY_JOB_NAME..."
kubectl delete job hal-deploy-apply -n halyard
fi
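The existence check above leans on a jq idiom used throughout these scripts: indexing into an empty `items` list yields JSON `null`, which `jq -r` prints as the literal string `null`. A self-contained sketch of both cases, using a hypothetical job name instead of live `kubectl` output:

```shell
# With an empty list, .items[0] is null, so .metadata.name is also null
# and jq -r prints the string "null".
EMPTY_LIST='{"items": []}'
NAME=$(echo "$EMPTY_LIST" | jq -r '.items[0].metadata.name')
echo "empty list -> $NAME"

# With a populated list, the name comes through as a plain string.
NONEMPTY_LIST='{"items": [{"metadata": {"name": "hal-deploy-apply"}}]}'
NAME2=$(echo "$NONEMPTY_LIST" | jq -r '.items[0].metadata.name')
echo "populated list -> $NAME2"
```

Comparing against the string `'null'` is therefore how the script distinguishes "no such job" from "job exists".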
bold "Provisioning Spinnaker resources..."
envsubst < $PARENT_DIR/spinnaker-for-gcp/scripts/install/quick-install.yml | kubectl apply -f -
job_ready() {
printf "Waiting on job $1 to complete"
while [[ "$(kubectl get job $1 -n halyard -o \
jsonpath="{.status.succeeded}")" != "1" ]]; do
printf "."
sleep 5
done
echo ""
}
job_ready hal-deploy-apply
# Sourced to import $IP_ADDR.
# Used at the end of setup to check if installation is exposed via a secured endpoint.
source $PARENT_DIR/spinnaker-for-gcp/scripts/manage/update_landing_page.sh
PARENT_DIR=$PARENT_DIR PROPERTIES_FILE=$PROPERTIES_FILE $PARENT_DIR/spinnaker-for-gcp/scripts/manage/deploy_application_manifest.sh
# Delete any existing deployment config secret.
# It will be recreated with up-to-date contents during push_config.sh.
EXISTING_DEPLOYMENT_SECRET_NAME=$(kubectl get secret -n halyard \
--field-selector metadata.name=="spinnaker-deployment" \
-o json | jq .items[0].metadata.name)
if [ "$EXISTING_DEPLOYMENT_SECRET_NAME" != 'null' ]; then
bold "Deleting Kubernetes secret spinnaker-deployment..."
kubectl delete secret spinnaker-deployment -n halyard
fi
EXISTING_CLOUD_FUNCTION=$(gcloud functions list --project $PROJECT_ID \
--format="value(name)" --filter="entryPoint=$CLOUD_FUNCTION_NAME")
if [ -z "$EXISTING_CLOUD_FUNCTION" ]; then
bold "Deploying audit log cloud function $CLOUD_FUNCTION_NAME..."
cat $PARENT_DIR/spinnaker-for-gcp/scripts/install/spinnakerAuditLog/config_json.template | envsubst > $PARENT_DIR/spinnaker-for-gcp/scripts/install/spinnakerAuditLog/config.json
cat $PARENT_DIR/spinnaker-for-gcp/scripts/install/spinnakerAuditLog/index_js.template | envsubst > $PARENT_DIR/spinnaker-for-gcp/scripts/install/spinnakerAuditLog/index.js
gcloud functions deploy $CLOUD_FUNCTION_NAME --source $PARENT_DIR/spinnaker-for-gcp/scripts/install/spinnakerAuditLog \
--trigger-http --memory 2048MB --runtime nodejs8 --allow-unauthenticated --project $PROJECT_ID --region $REGION
gcloud alpha functions add-iam-policy-binding $CLOUD_FUNCTION_NAME --project $PROJECT_ID --region $REGION --member allUsers --role roles/cloudfunctions.invoker
else
bold "Using existing audit log cloud function $CLOUD_FUNCTION_NAME..."
fi
if [ "$USE_CLOUD_SHELL_HAL_CONFIG" = true ]; then
# Not passing $CI since the guard makes it clear we are running from Cloud Shell.
$PARENT_DIR/spinnaker-for-gcp/scripts/manage/push_and_apply.sh
else
# We want the local hal config to match what was deployed.
CI=$CI PARENT_DIR=$PARENT_DIR PROPERTIES_FILE=$PROPERTIES_FILE $PARENT_DIR/spinnaker-for-gcp/scripts/manage/pull_config.sh
# We want a full backup stored in the bucket and the full deployment config stored in a secret.
CI=$CI PARENT_DIR=$PARENT_DIR PROPERTIES_FILE=$PROPERTIES_FILE $PARENT_DIR/spinnaker-for-gcp/scripts/manage/push_config.sh
fi
deploy_ready() {
printf "Waiting on $2 to come online"
while [[ "$(kubectl get deploy $1 -n spinnaker -o \
jsonpath="{.status.readyReplicas}")" != \
"$(kubectl get deploy $1 -n spinnaker -o \
jsonpath="{.status.replicas}")" ]]; do
printf "."
sleep 5
done
echo ""
}
deploy_ready spin-gate "API server"
deploy_ready spin-front50 "storage server"
deploy_ready spin-orca "orchestration engine"
deploy_ready spin-kayenta "canary analysis engine"
deploy_ready spin-deck "UI server"
if [ "$CI" != true ]; then
$PARENT_DIR/spinnaker-for-gcp/scripts/cli/install_hal.sh --version $HALYARD_VERSION
$PARENT_DIR/spinnaker-for-gcp/scripts/cli/install_spin.sh
# We want a backup containing the newly-created ~/.spin/* files as well.
# Not passing $CI since the guard already ensures it is not true.
$PARENT_DIR/spinnaker-for-gcp/scripts/manage/push_config.sh
fi
# If restoring a secured endpoint, leave the user on the documentation for IAP configuration.
if [ "$USE_CLOUD_SHELL_HAL_CONFIG" = true -a -n "$IP_ADDR" -a "$CI" != true ]; then
$PARENT_DIR/spinnaker-for-gcp/scripts/expose/launch_configure_iap.sh
fi
echo
bold "Installation complete."
echo
bold "Sign up for Spinnaker for GCP updates and announcements:"
bold " https://groups.google.com/forum/#!forum/spinnaker-for-gcp-announce"
echo
================================================
FILE: scripts/install/setup_properties.sh
================================================
#!/usr/bin/env bash
# PROJECT_ID should be set; if not, we will try to determine it via gcloud config.
# DEPLOYMENT_NAME, GKE_CLUSTER and ZONE are optional.
# If GKE_CLUSTER is set, ZONE is required. (This indicates that we should install in an existing cluster.)
# If using a pre-existing cluster, that cluster must have:
# - IP aliases enabled (since we are using a hosted Redis instance)
# - Full Cloud Platform scope for its nodes (if using the default service account)
# ZONE can be set and GKE_CLUSTER left unset. (This indicates we should create a new cluster in $ZONE.)
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/check_duplicate_dirs.sh || exit 1
if [ -z "$PROJECT_ID" ]; then
PROJECT_ID=$(gcloud info --format='value(config.project)')
fi
if [ -z "$PROJECT_ID" ]; then
echo "PROJECT_ID must be specified."
exit 1
fi
PROPERTIES_FILE="$HOME/cloudshell_open/spinnaker-for-gcp/scripts/install/properties"
if [ -f "$PROPERTIES_FILE" ]; then
bold "The properties file already exists at $PROPERTIES_FILE. Please move it out of the way if you want to generate a new properties file."
exit 1
fi
if [ "$GKE_CLUSTER" ]; then
if [ -z "$ZONE" ]; then
echo "If GKE_CLUSTER is specified, ZONE must also be specified."
exit 1
fi
# Since the cluster already exists, we must resolve the service account from the cluster.
EXISTING_SA_EMAIL=$(gcloud beta container clusters describe --project $PROJECT_ID \
--zone $ZONE $GKE_CLUSTER --format="value(nodeConfig.serviceAccount)")
if [ -z "$EXISTING_SA_EMAIL" ]; then
echo "Unable to resolve service account from existing cluster $GKE_CLUSTER in zone $ZONE."
exit 1
fi
if [ "$EXISTING_SA_EMAIL" == "default" ]; then
SERVICE_ACCOUNT_NAME="Compute Engine default service account"
else
SERVICE_ACCOUNT_NAME=$(echo $EXISTING_SA_EMAIL | cut -d @ -f 1)
fi
fi
NETWORK="default"
SUBNET="default"
ZONE=${ZONE:-us-east1-c}
REGION=$(echo $ZONE | cut -d - -f 1,2)
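The line above derives the region from the zone by exploiting GCP's naming convention: a zone name is its region plus a one-letter suffix (e.g. `us-east1-c` is in region `us-east1`), so cutting the first two `-`-delimited fields recovers the region. A quick check of that derivation:

```shell
# A GCP zone is "<region>-<letter>"; keeping fields 1 and 2 of the
# hyphen-delimited name drops the zone letter and leaves the region.
ZONE="us-east1-c"
REGION=$(echo "$ZONE" | cut -d - -f 1,2)
echo "$REGION"
```

Note this works because GCP region names themselves always contain exactly one hyphen (e.g. `europe-west1`, `asia-southeast1`).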
source ~/cloudshell_open/spinnaker-for-gcp/scripts/manage/service_utils.sh
query_redis_instance_names() {
if [ $(has_service_enabled $1 redis.googleapis.com) ]; then
# TODO: Should really query redis instances across _all_ regions to ensure no deployment naming collision.
# TODO: Alternatively, could incorporate region in generated deployment name.
EXISTING_REDIS_NAMES=$(gcloud redis instances list --region $REGION --project $1 \
--filter="name:spinnaker-" \
--format="value(name)")
echo "$EXISTING_REDIS_NAMES"
fi
}
EXISTING_REDIS_NAMES=$(query_redis_instance_names $PROJECT_ID)
# Also avoid name collisions with potential Shared VPC host project.
if [ $(has_service_enabled $PROJECT_ID compute.googleapis.com) ]; then
SHARED_VPC_HOST_PROJECT=$(gcloud compute shared-vpc get-host-project $PROJECT_ID --format="value(name)")
fi
if [ "$SHARED_VPC_HOST_PROJECT" ]; then
SHARED_VPC_HOST_PROJECT_REDIS_NAMES=$(query_redis_instance_names $SHARED_VPC_HOST_PROJECT)
EXISTING_REDIS_NAMES="$EXISTING_REDIS_NAMES"$'\n'"$SHARED_VPC_HOST_PROJECT_REDIS_NAMES"
fi
EXISTING_DEPLOYMENT_COUNT=$(echo "$EXISTING_REDIS_NAMES" | sed '/^$/d' | wc -l)
NEW_DEPLOYMENT_SUFFIX=$(($EXISTING_DEPLOYMENT_COUNT + 1))
NEW_DEPLOYMENT_NAME="spinnaker-$NEW_DEPLOYMENT_SUFFIX"
while [[ "$(echo "$EXISTING_REDIS_NAMES" | grep ^$NEW_DEPLOYMENT_NAME$ | wc -l)" != "0" ]]; do
NEW_DEPLOYMENT_NAME="spinnaker-$((++NEW_DEPLOYMENT_SUFFIX))"
done
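The loop above picks a deployment name one past the count of existing `spinnaker-*` Redis instances, then keeps incrementing while the candidate collides with an existing name (which can happen when numbering has gaps). A standalone sketch against a fixed name list, using `grep -c` for the match count and a portable increment; the gap in numbering here is illustrative:

```shell
# Existing deployments: spinnaker-1 and spinnaker-3 (spinnaker-2 was deleted).
EXISTING_REDIS_NAMES="spinnaker-1
spinnaker-3"

# Count = 2, so the first candidate is spinnaker-3 -- which collides.
EXISTING_DEPLOYMENT_COUNT=$(echo "$EXISTING_REDIS_NAMES" | sed '/^$/d' | wc -l)
NEW_DEPLOYMENT_SUFFIX=$((EXISTING_DEPLOYMENT_COUNT + 1))
NEW_DEPLOYMENT_NAME="spinnaker-$NEW_DEPLOYMENT_SUFFIX"

# Bump the suffix until the candidate no longer appears in the list.
while [ "$(echo "$EXISTING_REDIS_NAMES" | grep -c "^$NEW_DEPLOYMENT_NAME\$")" -ne 0 ]; do
NEW_DEPLOYMENT_SUFFIX=$((NEW_DEPLOYMENT_SUFFIX + 1))
NEW_DEPLOYMENT_NAME="spinnaker-$NEW_DEPLOYMENT_SUFFIX"
done
echo "$NEW_DEPLOYMENT_NAME"
```

With the gap shown, the loop skips the colliding `spinnaker-3` and settles on `spinnaker-4`.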
cat > ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties <<EOL
#!/usr/bin/env bash
# This file is generated just once per Spinnaker installation, prior to running setup.sh.
# You can make changes to this file before running setup.sh for the first time.
# If setup.sh is interrupted, you can run it again at any point and it will finish any incomplete steps.
# Do not change this file once you have run setup.sh for the first time.
# If you want to provision a new Spinnaker installation, whether in the same project or a different project,
# simply wait until setup.sh completes and delete this file (or the entire cloned repo) from your
# Cloud Shell home directory. Then you can relaunch the provision-spinnaker.md tutorial and generate a new
# properties file for use in provisioning a new Spinnaker installation.
export PROJECT_ID=$PROJECT_ID
export DEPLOYMENT_NAME=${DEPLOYMENT_NAME:-$NEW_DEPLOYMENT_NAME}
export SPINNAKER_VERSION=1.19.3
export HALYARD_VERSION=1.33.0
export ZONE=$ZONE
export REGION=$REGION
# The specified network must exist, and it must not be a legacy network.
# More info on legacy networks can be found here: https://cloud.google.com/vpc/docs/legacy
export NETWORK=$NETWORK
export SUBNET=$SUBNET
EOL
if [ "$SHARED_VPC_HOST_PROJECT" ]; then
cat >> ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties <<EOL
# If you want to use a shared network/subnet from the Shared VPC host project, you'll need to perform
# these steps prior to running the setup.sh script:
# 1) Specify the name of the shared network in \$NETWORK up above.
# 2) Specify the name of the shared subnet in \$SUBNET up above.
# 3) Specify the Shared VPC host project id ($SHARED_VPC_HOST_PROJECT) in \$NETWORK_PROJECT below.
# 4) Ensure the subnet referenced by \$SUBNET defines 2 named secondary ranges (one for
# pods, and one for services).
# 5) Specify the names of the 2 secondary ranges in \$CLUSTER_SECONDARY_RANGE_NAME and
# \$SERVICES_SECONDARY_RANGE_NAME down below.
EOL
fi
cat >> ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties <<EOL
export NETWORK_PROJECT=\$PROJECT_ID
export NETWORK_REFERENCE=projects/\$NETWORK_PROJECT/global/networks/\$NETWORK
export SUBNET_REFERENCE=projects/\$NETWORK_PROJECT/regions/\$REGION/subnetworks/\$SUBNET
EOL
if [ "$SHARED_VPC_HOST_PROJECT" ]; then
cat >> ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties <<EOL
export CLUSTER_SECONDARY_RANGE_NAME=
export SERVICES_SECONDARY_RANGE_NAME=
EOL
fi
cat >> ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties <<EOL
# If cluster does not exist, it will be created.
export GKE_CLUSTER=${GKE_CLUSTER:-\$DEPLOYMENT_NAME}
# These are only considered if a new GKE cluster is being created.
export GKE_RELEASE_CHANNEL=stable
export GKE_MACHINE_TYPE=n1-highmem-4
export GKE_DISK_TYPE=pd-standard
export GKE_DISK_SIZE=100
export GKE_NUM_NODES=3
# See TZ column in https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
export TIMEZONE=$(cat /etc/timezone)
# If service account does not exist, it will be created.
export SERVICE_ACCOUNT_NAME="${SERVICE_ACCOUNT_NAME:-"\$DEPLOYMENT_NAME-acc-$(date +"%s")"}"
# If Cloud Memorystore Redis instance does not exist, it will be created.
export REDIS_INSTANCE=\$DEPLOYMENT_NAME
# If bucket does not exist, it will be created.
export BUCKET_NAME="\$DEPLOYMENT_NAME-$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 20 | head -n 1)-$(date +"%s")"
export BUCKET_URI="gs://\$BUCKET_NAME"
# If CSR repo does not exist, it will be created.
export CONFIG_CSR_REPO=\$DEPLOYMENT_NAME-config
# Used to authenticate calls to the audit log Cloud Function.
export AUDIT_LOG_UNAME="$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 20 | head -n 1)-$(date +"%s")"
export AUDIT_LOG_PW="$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 20 | head -n 1)-$(date +"%s")"
export CLOUD_FUNCTION_NAME="\${DEPLOYMENT_NAME//-}AuditLog"
export GCR_PUBSUB_SUBSCRIPTION=\$DEPLOYMENT_NAME-gcr-pubsub-subscription
export GCB_PUBSUB_SUBSCRIPTION=\$DEPLOYMENT_NAME-gcb-pubsub-subscription
export PUBSUB_NOTIFICATION_PUBLISHER=\$DEPLOYMENT_NAME-publisher
export PUBSUB_NOTIFICATION_TOPIC=\$DEPLOYMENT_NAME-notifications-topic
# The properties following this line are only relevant if you intend to expose your new Spinnaker instance.
export STATIC_IP_NAME=\$DEPLOYMENT_NAME-external-ip
export MANAGED_CERT=\$DEPLOYMENT_NAME-managed-cert
export SECRET_NAME=\$DEPLOYMENT_NAME-oauth-client-secret
# If you own a domain name and want to use that instead of this automatically-assigned one,
# specify it here (you must be able to configure the DNS settings).
export DOMAIN_NAME=\$DEPLOYMENT_NAME.endpoints.\$PROJECT_ID.cloud.goog
# This email address will be granted permissions as an IAP-Secured Web App User.
export IAP_USER=$(gcloud auth list --format="value(account)" --filter="status=ACTIVE")
EOL
if [ "$SHARED_VPC_HOST_PROJECT" ]; then
bold "If you want to use a shared network/subnet from the Shared VPC host project ($SHARED_VPC_HOST_PROJECT)," \
"there are additional instructions you must follow in the properties file. You must perform those steps" \
"prior to running the setup.sh script:"
bold " cloudshell edit ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties"
fi
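One detail of the generated properties file worth calling out: `CLOUD_FUNCTION_NAME` is built with the bash pattern substitution `${DEPLOYMENT_NAME//-}`, which deletes every hyphen. That matters because the value is later used as a JavaScript export name in `index_js.template` (`exports.$CLOUD_FUNCTION_NAME = ...`), and hyphens are not valid in JavaScript identifiers. A sketch with an illustrative deployment name:

```shell
# ${VAR//pattern} (bash) replaces ALL matches of pattern with nothing,
# i.e. it strips every hyphen, not just the first.
DEPLOYMENT_NAME="spinnaker-2"
CLOUD_FUNCTION_NAME="${DEPLOYMENT_NAME//-}AuditLog"
echo "$CLOUD_FUNCTION_NAME"
```

A single slash (`${VAR/-}`) would remove only the first hyphen; the double slash is what makes the result identifier-safe for multi-hyphen deployment names too.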
================================================
FILE: scripts/install/spinnakerAuditLog/config_json.template
================================================
{
"USERNAME": "$AUDIT_LOG_UNAME",
"PASSWORD": "$AUDIT_LOG_PW",
"TIMEZONE": "$TIMEZONE",
"PROJECT_ID": "$PROJECT_ID",
"AUDIT_LOG_NAME": "$CLOUD_FUNCTION_NAME"
}
================================================
FILE: scripts/install/spinnakerAuditLog/index_js.template
================================================
const config = require('./config.json');
const moment = require('moment-timezone');
const {Logging} = require('@google-cloud/logging');
const logging = new Logging({
projectId: config.PROJECT_ID,
keyFilename: config.CREDENTIALS_PATH
});
/**
* Logs Spinnaker events to Stackdriver Logging.
*
* @param {!Object} req Cloud Function request context.
* @param {!Object} res Cloud Function response context.
*/
exports.$CLOUD_FUNCTION_NAME = function spinnakerAuditLog (req, res) {
log('req.body.payload=' + JSON.stringify(req.body.payload), null, null, 'DEBUG');
try {
verifyWebhook(req.get('authorization') || '');
if (req.body.eventName !== 'spinnaker_events' || req.body.payload === undefined) {
res.status(400).send('Spinnaker audit log request body is malformed.');
} else {
var content = req.body.payload.content;
var eventSource = req.body.payload.details.source;
var eventType = req.body.payload.details.type;
var execution = content.execution;
var context = content.context;
var stageDetails = (execution && execution.stages && execution.stages.length > 0) ? execution.stages.find(stage => stage.status === 'RUNNING') : {};
var user = execution && execution.authentication && execution.authentication.user ? execution.authentication.user : 'n/a';
if (execution && execution.trigger) {
if (execution.trigger.runAsUser) {
user = execution.trigger.runAsUser;
} else if (execution.trigger.user) {
user = execution.trigger.user;
}
}
var creationTimestamp = moment.tz(Number(req.body.payload.details.created), config.TIMEZONE).format('ddd, DD MMM YYYY HH:mm:ss z');
var reasonSegment;
if (eventSource === 'igor') {
if (eventType === 'build') {
var lastBuild = content.project.lastBuild;
var jenkinsTimestamp = moment.tz(Number(lastBuild.timestamp), config.TIMEZONE).format('ddd, DD MMM YYYY HH:mm:ss z');
if (lastBuild.result === 'SUCCESS') {
log('Jenkins project ' + content.project.name + ' successfully completed build #' + lastBuild.number + ' at ' + jenkinsTimestamp + '.', null, null);
} else {
log('Jenkins project ' + content.project.name + ' completed build #' + lastBuild.number + ' with status ' + lastBuild.result + ' at ' + jenkinsTimestamp + '.', null, null, 'ERROR');
}
} else if (eventType === 'docker') {
log('Docker tag ' + content.tag + ' was pushed to repository ' + content.repository + ' in registry ' + content.registry + ' at ' + creationTimestamp + '.', null, null);
}
} else if (eventType === 'git') {
log('Received webhook for project ' + content.slug + ' in org ' + content.repoProject + ' from ' + eventSource + ' at commit ' + content.hash + ' on branch ' + content.branch + ' at ' + creationTimestamp + '.', null, null);
} else if (eventType === 'orca:stage:starting' && !stageDetails.syntheticStageOwner) {
if (!content.standalone) {
log('User ' + user + ' executed operation ' + stageDetails.name + ' (of type ' + stageDetails.type + ') via pipeline ' + execution.name + ' of application ' + execution.application + ' at ' + creationTimestamp + '.', execution.application, execution.name);
} else if (stageDetails.type === 'savePipeline') {
log('User ' + user + ' executed operation (' + execution.description + ') at ' + creationTimestamp + '.', null, null);
} else {
reasonSegment = context.reason ? ' for reason "' + context.reason + '"' : '';
log('User ' + user + ' executed ad-hoc operation ' + execution.stages[0].type + ' (' + execution.description + ')' + reasonSegment + ' at ' + creationTimestamp + '.', null, null);
}
} else if (eventType === 'orca:pipeline:starting') {
var parametersSegment = execution.trigger.parameters ? ' (with parameters ' + JSON.stringify(execution.trigger.parameters) + ')' : '';
log('User ' + user + ' executed pipeline ' + execution.name + ' of application ' + execution.application + ' via ' + execution.trigger.type + ' trigger' + parametersSegment + ' at ' + creationTimestamp + '.', execution.application, execution.name);
} else if (eventType === 'orca:pipeline:failed' && execution.canceled) {
var cancellationUser = execution.canceledBy ? execution.canceledBy : null;
if (cancellationUser) {
reasonSegment = execution.cancellationReason ? ' for reason "' + execution.cancellationReason + '"' : '';
log('User ' + cancellationUser + ' canceled pipeline ' + execution.name + ' of application ' + execution.application + reasonSegment + ' at ' + creationTimestamp + '.', execution.application, execution.name, 'WARNING');
} else {
log('Pipeline ' + execution.name + ' of application ' + execution.application + ' failed at ' + creationTimestamp + '.', execution.application, execution.name, 'ERROR');
}
} else if (eventType === 'orca:pipeline:complete') {
log('Pipeline ' + execution.name + ' of application ' + execution.application + ' completed at ' + creationTimestamp + '.', execution.application, execution.name);
} else if (!content.standalone && context && stageDetails && stageDetails.type === 'manualJudgment' && eventType === 'orca:task:failed') {
var judgmentInputSegment = context.judgmentInput ? ' (judgment "' + context.judgmentInput + '" was selected)' : '';
log('User ' + context.lastModifiedBy + ' judged stage ' + stageDetails.name + ' of pipeline ' + execution.name + ' of application ' + execution.application + ' to stop' + judgmentInputSegment + ' at ' + creationTimestamp + '.', execution.application, execution.name, 'WARNING');
} else if (!content.standalone && context && stageDetails && stageDetails.type === 'manualJudgment' && eventType === 'orca:task:complete') {
var judgmentInputSegment = context.judgmentInput ? ' (judgment "' + context.judgmentInput + '" was selected)' : '';
log('User ' + context.lastModifiedBy + ' judged stage ' + stageDetails.name + ' of pipeline ' + execution.name + ' of application ' + execution.application + ' to continue' + judgmentInputSegment + ' at ' + creationTimestamp + '.', execution.application, execution.name);
} else if (eventType === 'orca:task:failed') {
var failureReasonSegment = context.exception && context.exception.details && context.exception.details.errors && context.exception.details.errors[0] ? ' due to ' + JSON.stringify(context.exception.details.errors) : '';
if (!content.standalone) {
log('Operation ' + stageDetails.name + ' (of type ' + stageDetails.type + ') of pipeline ' + execution.name + ' of application ' + execution.application + ' failed' + failureReasonSegment + ' at ' + creationTimestamp + '.', execution.application, execution.name, 'ERROR');
} else {
log('Ad-hoc operation ' + stageDetails.type + ' failed' + failureReasonSegment + ' at ' + creationTimestamp + '.', null, null, 'ERROR');
}
}
res.status(200).send('Success: ' + req.body.eventName);
}
} catch (err) {
log(err, null, null, 'ERROR');
res.status(err.code || 500).send(err);
}
};
/**
* Verify that the webhook request came from spinnaker/echo.
*
* @param {string} authorization The authorization header of the request, e.g. "Basic ZmdvOhJhcg=="
*/
function verifyWebhook (authorization) {
const basicAuth = Buffer.from(authorization.replace('Basic ', ''), 'base64').toString();
const parts = basicAuth.split(':');
if (parts[0] !== config.USERNAME || parts[1] !== config.PASSWORD) {
const error = new Error('Invalid credentials');
error.code = 401;
throw error;
}
}
/**
 * Writes a message to Stackdriver Logging with the specified severity.
 *
 * @param {string} message The message to log.
 * @param {?string} application The Spinnaker application the event relates to, if any.
 * @param {?string} pipeline The Spinnaker pipeline the event relates to, if any.
 * @param {string} [severity='INFO'] The severity of the logged message: one of 'DEBUG',
 *     'INFO', 'NOTICE', 'WARNING', 'ERROR', 'CRITICAL', 'ALERT' or 'EMERGENCY'.
 */
function log(message, application, pipeline, severity = 'INFO') {
var log = logging.log(config.AUDIT_LOG_NAME);
var metadata = {resource: {type: 'cloud_function'}, severity: severity};
var jsonPayload = {message: message};
if (application) {
jsonPayload.application = application;
}
if (pipeline) {
jsonPayload.pipeline = pipeline;
}
var entry = log.entry(metadata, jsonPayload);
log.write(entry);
}
================================================
FILE: scripts/install/spinnakerAuditLog/package.json
================================================
{
"dependencies": {
"@google-cloud/logging": "4.1.1",
"moment-timezone": "^0.5.11"
}
}
================================================
FILE: scripts/manage/add_gae_account.sh
================================================
#!/usr/bin/env bash
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
source ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties
read -e -p "Please enter the id of the project within which you wish to manage GAE resources: " -i $PROJECT_ID MANAGED_PROJECT_ID
read -e -p "Please enter a name for the new Spinnaker account: " -i "$MANAGED_PROJECT_ID-acct" GAE_ACCOUNT_NAME
bold "Assigning required roles to $SERVICE_ACCOUNT_NAME..."
SA_EMAIL=$(gcloud iam service-accounts --project $PROJECT_ID list \
--filter="displayName:$SERVICE_ACCOUNT_NAME" \
--format='value(email)')
GAE_REQUIRED_ROLES=(storage.admin appengine.appAdmin cloudscheduler.admin cloudbuild.serviceAgent cloudtasks.queueAdmin)
EXISTING_ROLES=$(gcloud projects get-iam-policy --filter bindings.members:$SA_EMAIL $MANAGED_PROJECT_ID \
--flatten bindings[].members --format="value(bindings.role)")
if [ "$?" != "0" ]; then
bold "$USER does not have permission to query IAM policy on project $MANAGED_PROJECT_ID." \
"Please grant the necessary permissions and re-run this command."
exit 1
fi
for r in "${GAE_REQUIRED_ROLES[@]}"; do
if [ -z "$(echo $EXISTING_ROLES | grep $r)" ]; then
bold "Assigning role $r in project $MANAGED_PROJECT_ID to service account $SA_EMAIL..."
gcloud projects add-iam-policy-binding $MANAGED_PROJECT_ID \
--member serviceAccount:$SA_EMAIL \
--role roles/$r \
--format=none
if [ "$?" != "0" ]; then
bold "$USER does not have permission to assign role $r on project $MANAGED_PROJECT_ID." \
"Please grant the necessary permissions and re-run this command."
exit 1
fi
fi
done
~/hal/hal config provider appengine enable
~/hal/hal config provider appengine account add $GAE_ACCOUNT_NAME --project $MANAGED_PROJECT_ID
bold "Remember that your configuration changes have only been made locally."
bold "They must be pushed and applied to your deployment to take effect:"
bold " ~/cloudshell_open/spinnaker-for-gcp/scripts/manage/push_and_apply.sh"
================================================
FILE: scripts/manage/add_gce_account.sh
================================================
#!/usr/bin/env bash
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
source ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties
read -e -p "Please enter the id of the project within which you wish to manage GCE resources: " -i $PROJECT_ID MANAGED_PROJECT_ID
read -e -p "Please enter a name for the new Spinnaker account: " -i "$MANAGED_PROJECT_ID-acct" GCE_ACCOUNT_NAME
bold "Assigning required roles to $SERVICE_ACCOUNT_NAME..."
SA_EMAIL=$(gcloud iam service-accounts --project $PROJECT_ID list \
--filter="displayName:$SERVICE_ACCOUNT_NAME" \
--format='value(email)')
GCE_REQUIRED_ROLES=(compute.instanceAdmin compute.networkAdmin compute.securityAdmin compute.storageAdmin iam.serviceAccountUser)
EXISTING_ROLES=$(gcloud projects get-iam-policy --filter bindings.members:$SA_EMAIL $MANAGED_PROJECT_ID \
--flatten bindings[].members --format="value(bindings.role)")
if [ "$?" != "0" ]; then
bold "$USER does not have permission to query IAM policy on project $MANAGED_PROJECT_ID." \
"Please grant the necessary permissions and re-run this command."
exit 1
fi
for r in "${GCE_REQUIRED_ROLES[@]}"; do
if [ -z "$(echo $EXISTING_ROLES | grep $r)" ]; then
bold "Assigning role $r in project $MANAGED_PROJECT_ID to service account $SA_EMAIL..."
gcloud projects add-iam-policy-binding $MANAGED_PROJECT_ID \
--member serviceAccount:$SA_EMAIL \
--role roles/$r \
--format=none
if [ "$?" != "0" ]; then
bold "$USER does not have permission to assign role $r on project $MANAGED_PROJECT_ID." \
"Please grant the necessary permissions and re-run this command."
exit 1
fi
fi
done
~/hal/hal config provider google account add $GCE_ACCOUNT_NAME --project $MANAGED_PROJECT_ID
~/hal/hal config provider google enable
bold "Remember that your configuration changes have only been made locally."
bold "They must be pushed and applied to your deployment to take effect:"
bold " ~/cloudshell_open/spinnaker-for-gcp/scripts/manage/push_and_apply.sh"
================================================
FILE: scripts/manage/add_gke_account.sh
================================================
#!/usr/bin/env bash
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
source ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties
CURRENT_K8S_CONTEXT=$(kubectl config current-context 2> /dev/null)
AVAILABLE_K8S_CONTEXTS=$(kubectl config get-contexts -o name)
echo "Available contexts:"
echo "$AVAILABLE_K8S_CONTEXTS"
echo
if [ -z "$CURRENT_K8S_CONTEXT" ]; then
read -e -p "Please enter the context you wish to use to manage your GKE resources: " TARGET_K8S_CONTEXT
else
read -e -p "Please enter the context you wish to use to manage your GKE resources: " -i $CURRENT_K8S_CONTEXT TARGET_K8S_CONTEXT
fi
FOUND_CONTEXT=$(echo "$AVAILABLE_K8S_CONTEXTS" | grep "^$TARGET_K8S_CONTEXT$")
if [ -z "$FOUND_CONTEXT" ]; then
bold "$TARGET_K8S_CONTEXT not found in available contexts..."
exit 1
fi
MANAGED_PROJECT_ID=$(echo $TARGET_K8S_CONTEXT | cut -d _ -f 2)
read -e -p "Please enter the id of the project within which the referenced cluster lives: " -i $MANAGED_PROJECT_ID MANAGED_PROJECT_ID
read -e -p "Please enter a name for the new Spinnaker account: " -i "$(echo $TARGET_K8S_CONTEXT | cut -d _ -f 4)-acct" GKE_ACCOUNT_NAME
bold "Assigning required roles to $SERVICE_ACCOUNT_NAME..."
SA_EMAIL=$(gcloud iam service-accounts --project $PROJECT_ID list \
--filter="displayName:$SERVICE_ACCOUNT_NAME" \
--format='value(email)')
GKE_REQUIRED_ROLES=(container.admin)
EXISTING_ROLES=$(gcloud projects get-iam-policy --filter bindings.members:$SA_EMAIL $MANAGED_PROJECT_ID \
--flatten bindings[].members --format="value(bindings.role)")
if [ "$?" != "0" ]; then
bold "$USER does not have permission to query IAM policy on project $MANAGED_PROJECT_ID." \
"Please grant the necessary permissions and re-run this command."
exit 1
fi
for r in "${GKE_REQUIRED_ROLES[@]}"; do
if [ -z "$(echo $EXISTING_ROLES | grep $r)" ]; then
bold "Assigning role $r in project $MANAGED_PROJECT_ID to service account $SA_EMAIL..."
gcloud projects add-iam-policy-binding $MANAGED_PROJECT_ID \
--member serviceAccount:$SA_EMAIL \
--role roles/$r \
--format=none
if [ "$?" != "0" ]; then
bold "$USER does not have permission to assign role $r on project $MANAGED_PROJECT_ID." \
"Please grant the necessary permissions and re-run this command."
exit 1
fi
fi
done
mkdir -p ~/.hal/default/credentials
KUBECONFIG_FILENAME="kubeconfig-$(tr -dc 'a-z0-9' < /dev/urandom | head -c 9)"
bold "Copying ~/.kube/config into ~/.hal/default/credentials/$KUBECONFIG_FILENAME so it can be pushed to your halyard daemon's pod..."
cp ~/.kube/config ~/.hal/default/credentials/$KUBECONFIG_FILENAME
~/hal/hal config provider kubernetes account add $GKE_ACCOUNT_NAME \
--provider-version v2 \
--context $TARGET_K8S_CONTEXT \
--kubeconfig-file ~/.hal/default/credentials/$KUBECONFIG_FILENAME
bold "Remember that your configuration changes have only been made locally."
bold "They must be pushed and applied to your deployment to take effect:"
bold " ~/cloudshell_open/spinnaker-for-gcp/scripts/manage/push_and_apply.sh"
================================================
FILE: scripts/manage/add_missing_properties.sh
================================================
#!/usr/bin/env bash
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
[ -z "$PARENT_DIR" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)
PROPERTIES_FILE=$PARENT_DIR/spinnaker-for-gcp/scripts/install/properties
add_property_if_missing() {
if [ -z "$(grep "export $1=" $PROPERTIES_FILE)" ]; then
bold "Adding declaration of $1 to $PROPERTIES_FILE..."
echo >> $PROPERTIES_FILE
echo "$2" >> $PROPERTIES_FILE
fi
}
read -r -d '' CSR_PROPERTY_DECLARATION <<EOL
# If CSR repo does not exist, it will be created.
export CONFIG_CSR_REPO=\$DEPLOYMENT_NAME-config
EOL
add_property_if_missing CONFIG_CSR_REPO "$CSR_PROPERTY_DECLARATION"
add_property_if_missing NETWORK_PROJECT "export NETWORK_PROJECT=\$PROJECT_ID"
add_property_if_missing NETWORK_REFERENCE "export NETWORK_REFERENCE=projects/\$NETWORK_PROJECT/global/networks/\$NETWORK"
add_property_if_missing SUBNET_REFERENCE "export SUBNET_REFERENCE=projects/\$NETWORK_PROJECT/regions/\$REGION/subnetworks/\$SUBNET"
================================================
FILE: scripts/manage/apply_config.sh
================================================
#!/usr/bin/env bash
HALYARD_POD=spin-halyard-0
# TODO(duftler): Use --wait-for-completion?
kubectl exec $HALYARD_POD -n halyard -- bash -c 'hal deploy apply'
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/deploy_application_manifest.sh
================================================
FILE: scripts/manage/check_cluster_config.sh
================================================
#!/usr/bin/env bash
# The logic is roughly as follows:
# Check each configured context that is in the specified project for a Spinnaker deployment.
# If there are no matching contexts that contain a deployment, retrieve credentials for all
# of the specified project's clusters and check each of the newly-configured contexts for
# a Spinnaker deployment.
# If there is exactly one matching context, and it contains a deployment, set that as the current context.
# Otherwise, generate a 'kubectl config use-context' command for each matching context that contains a
# deployment (and indicate if one of them is already set as the current context).
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
if [ -z "$PROJECT_ID" ]; then
PROJECT_ID=$(gcloud info --format='value(config.project)')
fi
if [ -z "$PROJECT_ID" ]; then
echo "PROJECT_ID must be specified."
exit 1
fi
check_for_spinnaker_deployment() {
if [ "$1" == "$CURRENT_CONTEXT" ]; then
CURRENT_CONTEXT_MATCH=" (CURRENT CONTEXT)"
else
unset CURRENT_CONTEXT_MATCH
fi
PROJECT_CONTAINING_CLUSTER=$(echo $1 | cut -d _ -f 2)
if [ "$PROJECT_CONTAINING_CLUSTER" == "$PROJECT_ID" ]; then
bold "Checking for Spinnaker deployment in Kubernetes context $1..."
SPINNAKER_APPLICATION_LIST_JSON=$(kubectl get applications -n spinnaker -l app.kubernetes.io/name=spinnaker --output json --context $1)
SPINNAKER_APPLICATION_COUNT=$(echo $SPINNAKER_APPLICATION_LIST_JSON | jq '.items | length')
if [ "$SPINNAKER_APPLICATION_COUNT" == "0" ] || [ -z "$SPINNAKER_APPLICATION_COUNT" ]; then
bold "No Spinnaker deployment was found via context $1$CURRENT_CONTEXT_MATCH."
elif [ "$SPINNAKER_APPLICATION_COUNT" == "1" ]; then
bold "Found Spinnaker deployment $(echo $SPINNAKER_APPLICATION_LIST_JSON | jq -r .items[0].metadata.name) via context $1$CURRENT_CONTEXT_MATCH."
FOUND_MATCH_IN_PROJECT=true
if [ -n "$2" ] && [ -z "$CURRENT_CONTEXT_MATCH" ]; then
bold "You can select this context using: kubectl config use-context $1"
fi
if [ -n "$3" ]; then
kubectl config use-context $1
fi
else
bold "Multiple Spinnaker deployments were found via context $1. This should never be the case."
if [ "$CURRENT_CONTEXT_MATCH" ]; then
clear_current_context
fi
exit 1
fi
else
# The context is not from the specified project.
if [ "$CURRENT_CONTEXT_MATCH" ]; then
clear_current_context
fi
fi
}
clear_current_context() {
sed -i"" -e"s/^current-context:.*$/current-context:/" ~/.kube/config
}
check_all_contexts() {
if [ "$CONTEXT_COUNT" == "1" ]; then
# Since there is exactly one context configured, we'll set that as the current context (if a deployment is found).
check_for_spinnaker_deployment $CONTEXT_LIST "" "select_context"
else
CONTEXT_LIST=($CONTEXT_LIST)
# Since there are multiple contexts configured, we will just query for Spinnaker deployments and generate
# commands that can be used to select a current context.
for c in "${CONTEXT_LIST[@]}"; do
check_for_spinnaker_deployment $c "generate_command"
done
fi
}
query_configured_contexts() {
CONTEXT_LIST=$(kubectl config get-contexts -o name)
CONTEXT_COUNT=$(echo "$CONTEXT_LIST" | sed '/^$/d' | wc -l)
}
get_all_project_cluster_credentials() {
bold "Querying for GKE clusters in project $PROJECT_ID..."
CLUSTER_LIST=$(gcloud beta container clusters list --format json --project $PROJECT_ID)
CLUSTER_COUNT=$(echo $CLUSTER_LIST | jq '. | length')
if [ "$CLUSTER_COUNT" == "0" ]; then
bold "No GKE clusters were found in project $PROJECT_ID."
exit 1
fi
for (( i=0; i<$CLUSTER_COUNT; i++ )); do
# TODO: Determine implications of encountering non-zonal cluster here.
bold "Retrieving credentials from project $PROJECT_ID for cluster" \
"$(echo $CLUSTER_LIST | jq -r ".[$i].name")" \
"in zone $(echo $CLUSTER_LIST | jq -r ".[$i].zone")..."
gcloud container clusters get-credentials $(echo $CLUSTER_LIST | jq -r ".[$i].name") \
--zone $(echo $CLUSTER_LIST | jq -r ".[$i].zone") --project $PROJECT_ID
done
# Removing current context since gcloud container clusters get-credentials implicitly sets it.
# We want to avoid this behavior since there may be multiple clusters in this project.
# If there is exactly one cluster, the context will be automatically selected in the next steps anyway.
clear_current_context
}
bold "Querying for current Kubernetes context..."
CURRENT_CONTEXT=$(kubectl config current-context 2> /dev/null)
query_configured_contexts
if [ "$CONTEXT_COUNT" != "0" ]; then
check_all_contexts
fi
if [ -z "$FOUND_MATCH_IN_PROJECT" ]; then
get_all_project_cluster_credentials
query_configured_contexts
check_all_contexts
fi
================================================
FILE: scripts/manage/check_duplicate_dirs.sh
================================================
#!/usr/bin/env bash
bold() {
echo "$(tput bold)""$*" "$(tput sgr0)";
}
if [ -d "$HOME/cloudshell_open" ]; then
MATCHING_REPO_DIRS=$(find ~/cloudshell_open -maxdepth 1 -regex '.*/spinnaker-for-gcp-.+')
if [ "$MATCHING_REPO_DIRS" ]; then
NUM_EXTRANEOUS_DIRS=$(echo "$MATCHING_REPO_DIRS" | wc -l)
bold "It looks like you might have cloned the spinnaker-for-gcp repository into" \
"more than one directory. If you have any directories other than" \
"$HOME/cloudshell_open/spinnaker-for-gcp that contain the repo, delete" \
"them in order to avoid unwanted behavior."
bold "If you have any directory whose name starts with spinnaker-for-gcp-*, even if" \
"it doesn't contain a clone of the repo, you must delete or move it in order" \
"for this script to run."
bold "Conflicting directories:"
bold "$MATCHING_REPO_DIRS"
bold "All Spinnaker for GCP commands are required to be run within the" \
"~/cloudshell_open/spinnaker-for-gcp directory."
exit 1
fi
fi
if [ -d "$HOME/spinnaker-for-gcp" ]; then
bold "It looks like the spinnaker-for-gcp repository was cloned into" \
"~/spinnaker-for-gcp. The current target location for the cloned repo" \
"is ~/cloudshell_open/spinnaker-for-gcp. If you have any directories other" \
"than $HOME/cloudshell_open/spinnaker-for-gcp that contain the repo," \
"delete them in order to avoid unwanted behavior."
bold "All Spinnaker for GCP commands are required to be run within the" \
"~/cloudshell_open/spinnaker-for-gcp directory."
bold "The easiest way to resolve this is to:"
bold " - Delete the ~/spinnaker-for-gcp directory."
bold " - Exit out of Cloud Shell."
bold " - Click once more on the link you used to reach Cloud Shell (might be in" \
"GCP Marketplace, might be in the Application details view in GKE)."
exit 1
fi
================================================
FILE: scripts/manage/check_git_config.sh
================================================
#!/usr/bin/env bash
[ -z "$PARENT_DIR" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)
source $PARENT_DIR/spinnaker-for-gcp/scripts/manage/service_utils.sh
GIT_USERNAME=$(git config --global --get user.name)
GIT_EMAIL=$(git config --global --get user.email)
if [ -z "$GIT_USERNAME" ]; then
bold "Your Git account username is not set. Run 'git config --global user.name \"Your Name\"' and try again."
exit 1
fi
if [ -z "$GIT_EMAIL" ]; then
bold "Your Git account email is not set. Run 'git config --global user.email \"you@example.com\"' and try again."
exit 1
fi
================================================
FILE: scripts/manage/check_project_mismatch.sh
================================================
#!/usr/bin/env bash
[ -z "$PARENT_DIR" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)
source $PARENT_DIR/spinnaker-for-gcp/scripts/manage/service_utils.sh
[ -z "$PROPERTIES_FILE" ] && PROPERTIES_FILE="$PARENT_DIR/spinnaker-for-gcp/scripts/install/properties"
source "$PROPERTIES_FILE"
GCLOUD_PROJECT_ID=$(gcloud info --format='value(config.project)')
GCLOUD_PROJECT_ID=${GCLOUD_PROJECT_ID:-'not set'}
if [ "$GCLOUD_PROJECT_ID" != "$PROJECT_ID" ]; then
gcloud config set project $PROJECT_ID
bold "Your Spinnaker config references GCP project id $PROJECT_ID, but your gcloud default project id was $GCLOUD_PROJECT_ID."
bold "For safety when executing gcloud commands, 'gcloud config set project $PROJECT_ID' has been used to change the gcloud default."
fi
================================================
FILE: scripts/manage/cluster_utils.sh
================================================
#!/usr/bin/env bash
[ -z "$PARENT_DIR" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)
source $PARENT_DIR/spinnaker-for-gcp/scripts/manage/service_utils.sh
check_for_existing_cluster() {
bold "Checking for existing cluster $GKE_CLUSTER..." >&2
CLUSTER_EXISTS=$(gcloud container clusters list --project $PROJECT_ID \
--filter="name=$GKE_CLUSTER" \
--format="value(name)")
echo $CLUSTER_EXISTS
}
check_existing_cluster_location() {
bold "Verifying location of existing cluster $GKE_CLUSTER..."
# Query for cluster in specified zone, just in case there are multiple clusters with the same name.
CLUSTER_EXISTS_IN_SPECIFIED_ZONE=$(gcloud container clusters list --project $PROJECT_ID \
--zone=$ZONE \
--filter="name=$GKE_CLUSTER" \
--format="value(location)")
# If it's not in the specified zone, figure out where exactly it is.
if [ -z "$CLUSTER_EXISTS_IN_SPECIFIED_ZONE" ]; then
EXISTING_CLUSTER_LOCATION=$(gcloud container clusters list --project $PROJECT_ID \
--filter="name=$GKE_CLUSTER" \
--format="value(location)")
LOCATION_IS_REGION=$(gcloud compute regions list --project $PROJECT_ID \
--filter="name=$EXISTING_CLUSTER_LOCATION" \
--format="value(name)")
if [ -n "$LOCATION_IS_REGION" ]; then
bold "Your pre-existing cluster $GKE_CLUSTER is regional; we do not support regional clusters."
exit 1
fi
fi
}
check_existing_cluster_prereqs() {
EXISTING_CLUSTER_DESCRIPTION=$(gcloud container clusters describe $GKE_CLUSTER --zone $ZONE --format json)
IP_ALIASES_ENABLED=$(echo $EXISTING_CLUSTER_DESCRIPTION | jq .ipAllocationPolicy.useIpAliases)
if [ "$IP_ALIASES_ENABLED" != "true" ]; then
bold "Your pre-existing cluster must have IP Aliases enabled."
exit 1
fi
NODE_CONFIG_SERVICE_ACCOUNT=$(echo $EXISTING_CLUSTER_DESCRIPTION | jq -r .nodeConfig.serviceAccount)
# If using the "Compute Engine default service account", Full Cloud Platform scope is required for its nodes.
if [ "$NODE_CONFIG_SERVICE_ACCOUNT" == "default" ]; then
NODES_HAVE_CLOUD_PLATFORM_SCOPE=$(echo $EXISTING_CLUSTER_DESCRIPTION | \
jq '[.nodeConfig.oauthScopes[] == "https://www.googleapis.com/auth/cloud-platform"] | any')
if [ "$NODES_HAVE_CLOUD_PLATFORM_SCOPE" != "true" ]; then
bold "Your pre-existing cluster is using the \"Compute Engine default service account\". As such," \
"your nodes must have Full Cloud Platform scope."
bold "In general, we recommend using an IAM-backed service account instead. An IAM-backed service" \
"account will be assigned the required roles during the Spinnaker for GCP setup process."
exit 1
fi
fi
}
================================================
FILE: scripts/manage/connect_to_redis.sh
================================================
#!/usr/bin/env bash
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
source ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/check_project_mismatch.sh
bold "Resolving redis host..."
export REDIS_INSTANCE_HOST=$(gcloud redis instances list \
--project $PROJECT_ID --region $REGION \
--filter="name=projects/$PROJECT_ID/locations/$REGION/instances/$REDIS_INSTANCE" \
--format="value(host)")
bold "Locating redis-cli deployment..."
REDIS_CLI_DEPLOYMENT=$(kubectl get deployments -n spinnaker --field-selector metadata.name=redisbox \
--output name)
if [ -z "$REDIS_CLI_DEPLOYMENT" ]; then
bold "Deploying redis-cli..."
kubectl run redisbox --image=gcr.io/google_containers/redis:v1 -n spinnaker
fi
bold "Waiting for redis-cli deployment to become available..."
kubectl wait --for condition=available deployment redisbox -n spinnaker
bold "Locating redis-cli pod..."
REDIS_CLI_POD=$(kubectl get pods -n spinnaker -l run=redisbox \
-o=jsonpath='{.items[0].metadata.name}')
bold "Connecting to redis-cli pod and specifying redis host $REDIS_INSTANCE_HOST..."
kubectl exec -it $REDIS_CLI_POD -n spinnaker -- redis-cli -h $REDIS_INSTANCE_HOST
bold "Deleting redis-cli deployment..."
kubectl delete deployment redisbox -n spinnaker
================================================
FILE: scripts/manage/connect_unsecured.sh
================================================
#!/usr/bin/env bash
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
source ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/check_project_mismatch.sh
bold "Locating Deck pod..."
DECK_POD=$(kubectl -n spinnaker get pods -l cluster=spin-deck,app=spin \
-o=jsonpath='{.items[0].metadata.name}')
bold "Forwarding localhost port 8080 to 9000 on $DECK_POD..."
pkill -f 'kubectl -n spinnaker port-forward'
kubectl -n spinnaker port-forward $DECK_POD 8080:9000 > /dev/null 2>&1 &
# Query for static ip address as a signal that the Spinnaker installation is exposed via a secured endpoint.
export IP_ADDR=$(gcloud compute addresses list --filter="name=$STATIC_IP_NAME" \
--format="value(address)" --global --project $PROJECT_ID)
if [ "$IP_ADDR" ]; then
bold "Are you sure you don't intend to connect via the domain name instead? Asking since you have a static IP configured..."
fi
================================================
FILE: scripts/manage/deploy_application_manifest.sh
================================================
#!/usr/bin/env bash
[ -z "$PARENT_DIR" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)
source $PARENT_DIR/spinnaker-for-gcp/scripts/manage/service_utils.sh
[ -z "$PROPERTIES_FILE" ] && PROPERTIES_FILE="$PARENT_DIR/spinnaker-for-gcp/scripts/install/properties"
if [ ! -f "$PROPERTIES_FILE" ]; then
bold "No properties file was found. Not updating GKE Application details view."
git checkout -- $PARENT_DIR/spinnaker-for-gcp/scripts/manage/landing_page_expanded.md
exit 0
fi
source "$PROPERTIES_FILE"
# Query for static ip address as a signal that the Spinnaker installation is exposed via a secured endpoint.
export IP_ADDR=$(gcloud compute addresses list --filter="name=$STATIC_IP_NAME" \
--format="value(address)" --global --project $PROJECT_ID)
if [ -z "$IP_ADDR" ]; then
APP_MANIFEST_MIDDLE=spinnaker_application_manifest_middle_unsecured.yaml
else
APP_MANIFEST_MIDDLE=spinnaker_application_manifest_middle_secured.yaml
fi
kubectl apply -f "https://raw.githubusercontent.com/GoogleCloudPlatform/marketplace-k8s-app-tools/master/crd/app-crd.yaml"
cat $PARENT_DIR/spinnaker-for-gcp/templates/spinnaker_application_manifest_top.yaml \
$PARENT_DIR/spinnaker-for-gcp/templates/$APP_MANIFEST_MIDDLE \
$PARENT_DIR/spinnaker-for-gcp/templates/spinnaker_application_manifest_bottom.yaml \
| envsubst | kubectl apply -f -
bold "Labeling resources as components of application $DEPLOYMENT_NAME..."
kubectl label service --overwrite -n spinnaker spin-clouddriver app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label service --overwrite -n spinnaker spin-deck app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label service --overwrite -n spinnaker spin-echo app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label service --overwrite -n spinnaker spin-front50 app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label service --overwrite -n spinnaker spin-gate app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label service --overwrite -n spinnaker spin-igor app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label service --overwrite -n spinnaker spin-kayenta app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label service --overwrite -n spinnaker spin-orca app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label service --overwrite -n spinnaker spin-rosco app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label deployment --overwrite -n spinnaker spin-clouddriver app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label deployment --overwrite -n spinnaker spin-deck app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label deployment --overwrite -n spinnaker spin-echo app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label deployment --overwrite -n spinnaker spin-front50 app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label deployment --overwrite -n spinnaker spin-gate app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label deployment --overwrite -n spinnaker spin-igor app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label deployment --overwrite -n spinnaker spin-kayenta app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label deployment --overwrite -n spinnaker spin-orca app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
kubectl label deployment --overwrite -n spinnaker spin-rosco app.kubernetes.io/name=$DEPLOYMENT_NAME -o name
================================================
FILE: scripts/manage/generate_deletion_script.sh
================================================
#!/usr/bin/env bash
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
source ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties
bold "Generating deletion script for $DEPLOYMENT_NAME in cluster $GKE_CLUSTER of project $PROJECT_ID..."
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/check_project_mismatch.sh
DELETION_SCRIPT_FILENAME="$HOME/cloudshell_open/spinnaker-for-gcp/scripts/manage/delete-all_${PROJECT_ID}_${GKE_CLUSTER}_${DEPLOYMENT_NAME}.sh"
SA_EMAIL=$(gcloud iam service-accounts --project $PROJECT_ID list \
--filter="displayName:$SERVICE_ACCOUNT_NAME" \
--format='value(email)')
cat > $DELETION_SCRIPT_FILENAME <<EOL
#!/usr/bin/env bash
# Ensure that you comment out the deletion commands for resources you'd rather not delete.
bold() {
echo ". \$(tput bold)" "\$*" "\$(tput sgr0)";
}
bold "Deleting cluster $GKE_CLUSTER in $PROJECT_ID..."
gcloud container clusters delete $GKE_CLUSTER --zone $ZONE --project $PROJECT_ID
bold "Deleting bucket $BUCKET_URI..."
gsutil rm -r $BUCKET_URI
bold "Deleting Cloud Source Repository $CONFIG_CSR_REPO..."
gcloud source repos delete $CONFIG_CSR_REPO --project=$PROJECT_ID
bold "Deleting subscription $GCR_PUBSUB_SUBSCRIPTION in $PROJECT_ID..."
gcloud pubsub subscriptions delete $GCR_PUBSUB_SUBSCRIPTION --project $PROJECT_ID
bold "Deleting subscription $GCB_PUBSUB_SUBSCRIPTION in $PROJECT_ID..."
gcloud pubsub subscriptions delete $GCB_PUBSUB_SUBSCRIPTION --project $PROJECT_ID
bold "Deleting cloud function $CLOUD_FUNCTION_NAME in $PROJECT_ID..."
gcloud functions delete $CLOUD_FUNCTION_NAME --region $REGION --project $PROJECT_ID
bold "Deleting redis instance $REDIS_INSTANCE in $NETWORK_PROJECT..."
gcloud redis instances delete $REDIS_INSTANCE --region $REGION --project $NETWORK_PROJECT
EOL
if [ "$SA_EMAIL" ]; then
EXISTING_ROLES=($(gcloud projects get-iam-policy --filter bindings.members:$SA_EMAIL $PROJECT_ID \
--flatten bindings[].members --format="value(bindings.role)"))
for r in "${EXISTING_ROLES[@]}"; do
cat >> $DELETION_SCRIPT_FILENAME <<EOL
bold "Deleting IAM policy binding for role $r from $SA_EMAIL in $PROJECT_ID..."
gcloud projects remove-iam-policy-binding $PROJECT_ID --member serviceAccount:$SA_EMAIL --role $r
EOL
done
cat >> $DELETION_SCRIPT_FILENAME <<EOL
bold "Deleting service account $SA_EMAIL in $PROJECT_ID..."
gcloud iam service-accounts delete $SA_EMAIL --project $PROJECT_ID
EOL
fi
# Query for the static IP address as a signal that the Spinnaker installation is exposed via a secured endpoint.
export IP_ADDR=$(gcloud compute addresses list --filter="name=$STATIC_IP_NAME" \
--format="value(address)" --global --project $PROJECT_ID)
if [ "$IP_ADDR" ]; then
cat >> $DELETION_SCRIPT_FILENAME <<EOL
bold "Deleting static IP address $STATIC_IP_NAME in $PROJECT_ID..."
gcloud compute addresses delete $STATIC_IP_NAME --global --project $PROJECT_ID
bold "Deleting managed SSL certificate $MANAGED_CERT in project $PROJECT_ID..."
gcloud beta compute ssl-certificates delete $MANAGED_CERT --global --project $PROJECT_ID
bold "Deleting service endpoint $DOMAIN_NAME in project $PROJECT_ID..."
gcloud endpoints services delete $DOMAIN_NAME --project $PROJECT_ID
bold "Ensure that you manually delete your OAuth Client ID here: https://console.developers.google.com/apis/credentials?project=$PROJECT_ID"
EOL
fi
chmod +x $DELETION_SCRIPT_FILENAME
echo
bold "Use this command to delete all the resources that were provisioned as part of your Spinnaker installation:"
bold " $DELETION_SCRIPT_FILENAME"
echo
bold "Warning: If you installed Spinnaker on pre-existing infrastructure (GKE cluster, Redis, service accounts, ...)," \
"this script deletes them. If you want to keep them, edit the generated cleanup script $DELETION_SCRIPT_FILENAME" \
"to comment out the specific deletion commands for items you want to keep:"
bold " cloudshell edit $DELETION_SCRIPT_FILENAME"
================================================
FILE: scripts/manage/grant_iap_access.sh
================================================
#!/usr/bin/env bash
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
echo "Please enter the member you wish to grant the 'IAP-secured Web App User' role."
echo "Note that you must include the correct prefix depending on the type of member."
echo "These are the supported types: "
echo " user:some-user@somedomain.net, serviceAccount:some-service-account@some-project.iam.gserviceaccount.com, group:some-group@somedomain.net, domain:somedomain.net"
read -p "Member to add: " MEMBER_TO_ADD
echo
pushd ~/cloudshell_open/spinnaker-for-gcp/scripts/install
source ./properties
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/check_project_mismatch.sh
source ~/cloudshell_open/spinnaker-for-gcp/scripts/expose/set_iap_properties.sh
gcurl() {
curl -s -H "Authorization:Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" -H "Accept: application/json" \
-H "X-Goog-User-Project: $PROJECT_ID" $*
}
bold "Querying for existing IAM policy..."
export EXISTING_IAM_POLICY=$(gcurl -X POST -d '{"options":{"requested_policy_version":3}}' \
https://iap.googleapis.com/v1beta1/projects/$PROJECT_NUMBER/iap_web/compute/services/$BACKEND_SERVICE_ID:getIamPolicy)
if [ "$(echo $EXISTING_IAM_POLICY | grep "\"$MEMBER_TO_ADD\"")" ]; then
bold "Member $MEMBER_TO_ADD already has the 'IAP-secured Web App User' role."
exit 1
fi
UPDATED_IAM_POLICY=$(echo "{}" \
| jq --argjson existing_policy "$EXISTING_IAM_POLICY" '. += {"policy":$existing_policy}' \
| jq ".policy.bindings[0].members += [\"$MEMBER_TO_ADD\"]")
bold "Granting member $MEMBER_TO_ADD the 'IAP-secured Web App User' role..."
echo $UPDATED_IAM_POLICY | gcurl -X POST -d @- \
https://iap.googleapis.com/v1beta1/projects/$PROJECT_NUMBER/iap_web/compute/services/$BACKEND_SERVICE_ID:setIamPolicy
popd
================================================
FILE: scripts/manage/instructions.txt
================================================
+------------------------------------------------------------------------------------------+
| |
| To reopen the ongoing management instructions in the right-hand pane at any time, enter: |
| |
| ~/cloudshell_open/spinnaker-for-gcp/scripts/manage/update_console.sh |
| |
+------------------------------------------------------------------------------------------+
================================================
FILE: scripts/manage/landing_page_base.md
================================================
# Manage Spinnaker
Use this section to manage your Spinnaker deployment going forward.
## Select GCP project
Select the project in which your Spinnaker is installed, then click **Start**.
<walkthrough-project-billing-setup>
</walkthrough-project-billing-setup>
## Manage Spinnaker via Halyard from Cloud Shell
This management environment lets you run [Halyard
commands](https://www.spinnaker.io/reference/halyard/) to configure and manage
your Spinnaker installation.
### Ensure you are connected to the correct Kubernetes context
```bash
PROJECT_ID={{project-id}} ~/cloudshell_open/spinnaker-for-gcp/scripts/manage/check_cluster_config.sh
```
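If you're unsure what that script is checking: GKE context names encode the project, zone, and cluster, and the management scripts split the current context on `_` to compare each part against your `properties` file. A minimal sketch with hypothetical names:

```bash
# GKE contexts are named gke_<project>_<zone>_<cluster>; splitting on '_'
# recovers each part. The values below are hypothetical.
CONTEXT="gke_my-project_us-east1-b_my-spinnaker-cluster"
echo "project: $(echo $CONTEXT | cut -d '_' -f 2)"
echo "zone: $(echo $CONTEXT | cut -d '_' -f 3)"
echo "cluster: $(echo $CONTEXT | cut -d '_' -f 4)"
```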
### Pull Spinnaker config
Paste and run this command to pull the configuration from your Spinnaker
deployment into your Cloud Shell.
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/pull_config.sh
```
### Update this console
**This is required if you've just pulled config from a different Spinnaker deployment.**
This command refreshes the contents of the right-hand pane, including details on how
to connect to Spinnaker.
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/update_console.sh
```
### Configure Spinnaker via Halyard
All [halyard](https://www.spinnaker.io/reference/halyard/commands/) commands are available.
```bash
hal config
```
As with provisioning Spinnaker, don't use `hal deploy connect` when managing
Spinnaker. Also, don't use `hal deploy apply`. Instead, use the `push_and_apply.sh`
command shown below.
### Notes on Halyard commands that reference local files
If you add a Kubernetes account that references a kubeconfig file, that file must live within
the '`~/.hal/default/credentials`' directory on your Cloud Shell VM. The
kubeconfig is specified using the `--kubeconfig-file` argument to the
`hal config provider kubernetes account add` and ...`edit` commands.
A similar requirement applies for any other local file referenced from your halyard config,
including Google JSON key files specified via the `--json-path` argument to various commands.
These files must live within '`~/.hal/default/credentials`' or '`~/.hal/default/profiles`'.
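As a sketch of why the location matters: when you push your config, home-relative key paths are rewritten so they resolve for the `spinnaker` user on the Halyard daemon pod. With a hypothetical local user `alice` and file name:

```bash
# Hypothetical example: a kubeconfigFile entry recorded under local user
# 'alice' is rewritten to the 'spinnaker' user's home on the daemon pod.
LINE="kubeconfigFile: /home/alice/.hal/default/credentials/my-kubeconfig"
echo "$LINE" | sed "s|/home/alice|/home/spinnaker|"
# → kubeconfigFile: /home/spinnaker/.hal/default/credentials/my-kubeconfig
```

Files outside `~/.hal/default/credentials` or `~/.hal/default/profiles` are not copied to the pod, so the rewritten path would point at nothing.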
### Push and apply updated config to Spinnaker deployment
If you change any of the configuration, paste and run this command to push
and apply those changes to your Spinnaker deployment.
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/push_and_apply.sh
```
## Included command-line tools
### Halyard CLI
The [Halyard CLI](https://www.spinnaker.io/reference/halyard/) (`hal`) and
daemon are installed in your Cloud Shell.
If you want to use a specific version of Halyard, use:
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/cli/install_hal.sh --version $HALYARD_VERSION
```
If you want to upgrade to the latest version of Halyard, use:
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/cli/update_hal.sh
```
### Spinnaker CLI
The [Spinnaker CLI](https://www.spinnaker.io/guides/spin/app/)
(`spin`) is installed in your Cloud Shell.
If you want to upgrade to the latest version, use:
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/cli/install_spin.sh
```
## Scripts for Common Commands
Remember that any configuration changes you make locally (e.g. adding
accounts) must be pushed and applied to your deployment to take effect:
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/push_and_apply.sh
```
### Add Spinnaker account for GKE
This script grants the required
[IAM roles](https://cloud.google.com/kubernetes-engine/docs/how-to/iam) to the
Spinnaker instance's service account, in the GCP project containing the referenced
cluster.
Before you run this command, make sure you've configured the context you intend
to use to manage your GKE resources.
The public Spinnaker documentation contains details on [configuring GKE
clusters](https://www.spinnaker.io/setup/install/providers/kubernetes-v2/gke/).
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/add_gke_account.sh
```
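For example, to point your current context at the cluster you want Spinnaker to manage before running the script above (cluster, zone, and project names are hypothetical):

```bash
# Fetch credentials for the target cluster and make it the current context.
gcloud container clusters get-credentials my-app-cluster \
  --zone us-east1-b --project my-other-project

# Verify which context kubectl will use.
kubectl config current-context
```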
### Add Spinnaker account for GCE
This script grants the required
[IAM roles](https://cloud.google.com/compute/docs/access/) to the Spinnaker
instance's service account, in the GCP project within which you wish to manage
GCE resources.
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/add_gce_account.sh
```
### Add Spinnaker account for GAE
This script grants the required
[IAM roles](https://cloud.google.com/appengine/docs/admin-api/access-control)
to the Spinnaker instance's service account, in the GCP project within which you
wish to manage GAE resources.
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/add_gae_account.sh
```
### Upgrade Spinnaker
First, modify `SPINNAKER_VERSION` in your `properties` file to reflect the desired version of Spinnaker:
```bash
cloudshell edit ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties
```
Next, use Halyard to apply the changes:
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/update_spinnaker_version.sh
```
### Upgrade Halyard daemon running in cluster
First, modify `HALYARD_VERSION` in your `properties` file to reflect the desired version of Halyard:
```bash
cloudshell edit ~/cloudshell_open/spinnaker-for-gcp/scripts/install/properties
```
Next, apply this change to the StatefulSet managing the Halyard daemon:
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/update_halyard_daemon.sh
```
### Upgrade Management Environment
Update the commands and documentation in your management environment to the latest available version.
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/update_management_environment.sh
```
### Sign up for Spinnaker for GCP updates and announcements
Join the [mailing list](https://groups.google.com/forum/#!forum/spinnaker-for-gcp-announce) to keep informed about updates and other announcements.
### Connect to Redis
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/connect_to_redis.sh
```
### Restore a backup to Cloud Shell
Restore a backup of the halyard configuration and deployment configuration from Cloud Source Repositories to your Cloud Shell.
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/restore_backup_to_cloud_shell.sh -p $PROJECT_ID -r $CONFIG_CSR_REPO -h GIT_HASH
```
All backups can be viewed in this [Cloud Source Repository](https://source.cloud.google.com/$PROJECT_ID/$CONFIG_CSR_REPO).
## Configure Operator Access
To add additional operators, grant them the `Owner` role on GCP Project {{project-id}}: [IAM Permissions](https://console.developers.google.com/iam-admin/iam?project={{project-id}})
Once they have been added to the project, they can locate Spinnaker by navigating to the newly-registered [Kubernetes Application](https://console.developers.google.com/kubernetes/application/$ZONE/$DEPLOYMENT_NAME/spinnaker/$DEPLOYMENT_NAME?project={{project-id}}).
The application's *Next Steps* section contains the relevant links and operator instructions.
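The grant can also be made from the command line; a sketch, with a hypothetical member address:

```bash
# Grant a teammate the Owner role on the project hosting Spinnaker.
gcloud projects add-iam-policy-binding {{project-id}} \
  --member user:teammate@example.com --role roles/owner
```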
### If you have secured Spinnaker via IAP
Granting someone the `Owner` role does not implicitly grant them access as a user. For configuring user access, please continue on to the *Configure User Access (IAP)* section.
================================================
FILE: scripts/manage/landing_page_secured.md
================================================
## Configure User Access (IAP)
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/grant_iap_access.sh
```
Alternatively, you can manually grant the `IAP-secured Web App User` role on the `spinnaker/spin-deck` resource to the user you'd like to grant access to [here](https://console.developers.google.com/security/iap?project={{project-id}}).
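If you prefer the command line, a project-level grant is one option (the `IAP-secured Web App User` role ID is `roles/iap.httpsResourceAccessor`; the member address is hypothetical):

```bash
# Grant project-level IAP web access; for a resource-level grant scoped to
# spinnaker/spin-deck, use the Cloud Console link above instead.
gcloud projects add-iam-policy-binding {{project-id}} \
  --member user:teammate@example.com --role roles/iap.httpsResourceAccessor
```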
## Use Spinnaker
### Connect to Spinnaker
Connect to your Spinnaker installation [here](https://$DOMAIN_NAME).
### View Spinnaker Audit Log
View the who, what, when and where of your Spinnaker installation
[here](https://console.developers.google.com/logs/viewer?project={{project-id}}&resource=cloud_function&logName=projects%2F{{project-id}}%2Flogs%2F$CLOUD_FUNCTION_NAME&minLogLevel=200).
### View Spinnaker Container Logs
View the logging output of the individual components of your Spinnaker installation
[here](https://console.developers.google.com/logs/viewer?project={{project-id}}&resource=k8s_container%2Fcluster_name%2F$GKE_CLUSTER%2Fnamespace_name%2Fspinnaker).
### Install sample applications and pipelines
There are sample applications with example pipelines available to install and try out.
View and install the samples by running this command:
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/list_samples.sh
```
## Delete Spinnaker
### Generate a cleanup script
This command generates a script that deletes all the resources that were provisioned as part of your Spinnaker installation.
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/generate_deletion_script.sh
```
================================================
FILE: scripts/manage/landing_page_unsecured.md
================================================
## Use Spinnaker
### Forward the port to Deck, and connect
Don't use the `hal deploy connect` command. Instead, use the following command
only.
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/connect_unsecured.sh
```
To connect to the Deck UI, click on the Preview button above and select "Preview on port 8080":

### View Spinnaker Audit Log
View the who, what, when and where of your Spinnaker installation
[here](https://console.developers.google.com/logs/viewer?project={{project-id}}&resource=cloud_function&logName=projects%2F{{project-id}}%2Flogs%2F$CLOUD_FUNCTION_NAME&minLogLevel=200).
### View Spinnaker Container Logs
View the logging output of the individual components of your Spinnaker installation
[here](https://console.developers.google.com/logs/viewer?project={{project-id}}&resource=k8s_container%2Fcluster_name%2F$GKE_CLUSTER%2Fnamespace_name%2Fspinnaker).
### Expose Spinnaker
If you would like to connect to Spinnaker without relying on port forwarding, we can
expose it via a secure domain behind the [Identity-Aware Proxy](https://cloud.google.com/iap/).
Note that this phase could take 30-60 minutes. **Spinnaker will be inaccessible during this time.**
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/expose/configure_endpoint.sh
```
### Install sample applications and pipelines
There are sample applications with example pipelines available to install and try out.
View and install the samples by running this command:
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/list_samples.sh
```
## Delete Spinnaker
### Generate a cleanup script
This command generates a script that deletes all the resources that were provisioned as part of your Spinnaker installation.
```bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/generate_deletion_script.sh
```
================================================
FILE: scripts/manage/list_samples.sh
================================================
#!/usr/bin/env bash
bold() {
echo ". $(tput bold)" "$*" "$(tput sgr0)";
}
bold "Here is a list of sample applications available to install. Selecting one will launch" \
"a tutorial to install it."
PS3='Please enter your choice: '
tutorials=($(ls -d ~/cloudshell_open/spinnaker-for-gcp/samples/*/ | xargs -n 1 basename) "Quit")
select tutorial in "${tutorials[@]}"
do
case $tutorial in
"Quit")
break
;;
"")
bold "Please choose a valid entry (1-${#tutorials[@]})";;
*)
bold "Launching $tutorial tutorial..."
cloudshell launch-tutorial ~/cloudshell_open/spinnaker-for-gcp/samples/$tutorial/install.md
break
;;
esac
done
================================================
FILE: scripts/manage/pull_config.sh
================================================
#!/usr/bin/env bash
[ -z "$PARENT_DIR" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)
if [ "$CI" == true ]; then
HAL_PARENT_DIR=$PARENT_DIR
else
HAL_PARENT_DIR=$HOME
fi
source $PARENT_DIR/spinnaker-for-gcp/scripts/manage/service_utils.sh
[ -z "$PROPERTIES_FILE" ] && PROPERTIES_FILE="$PARENT_DIR/spinnaker-for-gcp/scripts/install/properties"
$PARENT_DIR/spinnaker-for-gcp/scripts/manage/check_duplicate_dirs.sh || exit 1
CURRENT_CONTEXT=$(kubectl config current-context)
if [ "$?" != "0" ]; then
bold "No current Kubernetes context is configured."
exit 1
fi
HALYARD_POD=spin-halyard-0
TEMP_DIR=$(mktemp -d -t halyard.XXXXX)
pushd $TEMP_DIR
mkdir .hal
# Remove local config so persistent config from Halyard Daemon pod can be copied into place.
bold "Removing $HAL_PARENT_DIR/.hal..."
rm -rf $HAL_PARENT_DIR/.hal
# Copy persistent config into place.
bold "Copying halyard/$HALYARD_POD:/home/spinnaker/.hal into $HAL_PARENT_DIR/.hal..."
kubectl cp halyard/$HALYARD_POD:/home/spinnaker/.hal .hal
source $PARENT_DIR/spinnaker-for-gcp/scripts/manage/restore_config_utils.sh
rewrite_hal_key_paths
# We want just these subdirs from the Halyard Daemon pod to be copied into place in $HAL_PARENT_DIR/.hal.
copy_hal_subdirs
cp .hal/config $HAL_PARENT_DIR/.hal
EXISTING_DEPLOYMENT_SECRET_NAME=$(kubectl get secret -n halyard \
--field-selector metadata.name=="spinnaker-deployment" \
-o json | jq .items[0].metadata.name)
if [ $EXISTING_DEPLOYMENT_SECRET_NAME != 'null' ]; then
bold "Restoring Spinnaker deployment config files from Kubernetes secret spinnaker-deployment..."
DEPLOYMENT_SECRET_DATA=$(kubectl get secret spinnaker-deployment -n halyard -o json)
extract_to_file_if_defined() {
DATA_ITEM_VALUE=$(echo $DEPLOYMENT_SECRET_DATA | jq -r ".data.\"$1\"")
if [ $DATA_ITEM_VALUE != 'null' ]; then
echo $DATA_ITEM_VALUE | base64 -d > $2
fi
}
extract_to_file_if_defined properties "$PROPERTIES_FILE"
extract_to_file_if_defined config.json $PARENT_DIR/spinnaker-for-gcp/scripts/install/spinnakerAuditLog/config.json
extract_to_file_if_defined index.js $PARENT_DIR/spinnaker-for-gcp/scripts/install/spinnakerAuditLog/index.js
extract_to_file_if_defined configure_iap_expanded.md $PARENT_DIR/spinnaker-for-gcp/scripts/expose/configure_iap_expanded.md
extract_to_file_if_defined openapi_expanded.yml $PARENT_DIR/spinnaker-for-gcp/scripts/expose/openapi_expanded.yml
mkdir -p ~/.spin
extract_to_file_if_defined config ~/.spin/config
extract_to_file_if_defined key.json ~/.spin/key.json
rewrite_spin_key_path
fi
popd
rm -rf $TEMP_DIR
if [ "$CI" != true ]; then
# Update the generated markdown pages.
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/update_landing_page.sh
fi
================================================
FILE: scripts/manage/push_and_apply.sh
================================================
#!/usr/bin/env bash
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/push_config.sh || exit 1
~/cloudshell_open/spinnaker-for-gcp/scripts/manage/apply_config.sh
================================================
FILE: scripts/manage/push_config.sh
================================================
#!/usr/bin/env bash
[ -z "$PARENT_DIR" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)
if [ "$CI" == true ]; then
HAL_PARENT_DIR=$PARENT_DIR
else
HAL_PARENT_DIR=$HOME
fi
source $PARENT_DIR/spinnaker-for-gcp/scripts/manage/service_utils.sh
[ -z "$PROPERTIES_FILE" ] && PROPERTIES_FILE="$PARENT_DIR/spinnaker-for-gcp/scripts/install/properties"
$PARENT_DIR/spinnaker-for-gcp/scripts/manage/check_duplicate_dirs.sh || exit 1
$PARENT_DIR/spinnaker-for-gcp/scripts/manage/check_git_config.sh || exit 1
source "$PROPERTIES_FILE"
# TODO(duftler): Add check to ensure that we are not overriding with older or empty config.
CURRENT_CONTEXT=$(kubectl config current-context)
if [ "$?" != "0" ]; then
bold "No current Kubernetes context is configured."
exit 1
fi
CURRENT_CONTEXT_PROJECT=$(echo $CURRENT_CONTEXT | cut -d '_' -f 2)
CURRENT_CONTEXT_ZONE=$(echo $CURRENT_CONTEXT | cut -d '_' -f 3)
CURRENT_CONTEXT_CLUSTER=$(echo $CURRENT_CONTEXT | cut -d '_' -f 4)
if [ $CURRENT_CONTEXT_PROJECT != $PROJECT_ID ]; then
bold "Your Spinnaker config references project $PROJECT_ID, but you are connected to a cluster in project $CURRENT_CONTEXT_PROJECT."
bold "Use 'kubectl config use-context' to connect to the correct cluster before pushing the config."
exit 1
fi
if [ $CURRENT_CONTEXT_ZONE != $ZONE ]; then
bold "Your Spinnaker config references zone $ZONE, but you are connected to a cluster in zone $CURRENT_CONTEXT_ZONE."
bold "Use 'kubectl config use-context' to connect to the correct cluster before pushing the config."
exit 1
fi
if [ $CURRENT_CONTEXT_CLUSTER != $GKE_CLUSTER ]; then
bold "Your Spinnaker config references cluster $GKE_CLUSTER, but you are connected to cluster $CURRENT_CONTEXT_CLUSTER."
bold "Use 'kubectl config use-context' to connect to the correct cluster before pushing the config."
exit 1
fi
source $PARENT_DIR/spinnaker-for-gcp/scripts/manage/cluster_utils.sh
CLUSTER_EXISTS=$(check_for_existing_cluster)
if [ -z "$CLUSTER_EXISTS" ]; then
bold "Cluster $GKE_CLUSTER cannot be found. It may not exist."
bold "To recreate your installation with this config, run:"
bold "USE_CLOUD_SHELL_HAL_CONFIG=true $PARENT_DIR/spinnaker-for-gcp/scripts/install/setup.sh"
exit 1
fi
if [ -z "$CONFIG_CSR_REPO" ]; then
bold "CONFIG_CSR_REPO was not set. Please run the $PARENT_DIR/spinnaker-for-gcp/scripts/manage/update_management_environment.sh" \
"command to ensure you have all the necessary properties declared."
exit 1
fi
HALYARD_POD=spin-halyard-0
TEMP_DIR=$(mktemp -d -t halyard.XXXXX)
pushd $TEMP_DIR
EXISTING_CSR_REPO=$(gcloud source repos list --format="value(name)" --filter="name=projects/$PROJECT_ID/repos/$CONFIG_CSR_REPO" --project=$PROJECT_ID)
if [ -z "$EXISTING_CSR_REPO" ]; then
bold "Creating Cloud Source Repository $CONFIG_CSR_REPO..."
gcloud source repos create $CONFIG_CSR_REPO --project=$PROJECT_ID
fi
gcloud source repos clone $CONFIG_CSR_REPO --project=$PROJECT_ID
cd $CONFIG_CSR_REPO
bold "Backing up $HAL_PARENT_DIR/.hal..."
rm -rf .hal
mkdir .hal
# We want just these subdirs within $HAL_PARENT_DIR/.hal to be copied into place on the Halyard Daemon pod.
DIRS=(credentials profiles service-settings)
for p in "${DIRS[@]}"; do
for f in $(find $HAL_PARENT_DIR/.hal/*/$p -prune 2> /dev/null); do
SUB_PATH=$(echo $f | rev | cut -d '/' -f 1,2 | rev)
mkdir -p .hal/$SUB_PATH
cp -RT $HAL_PARENT_DIR/.hal/$SUB_PATH .hal/$SUB_PATH
done
done
cp $HAL_PARENT_DIR/.hal/config .hal
# Please note, rewritable key paths are in both push_config.sh and restore_config_utils.sh
REWRITABLE_KEYS=(kubeconfigFile jsonPath jsonKey passwordFile path templatePath tokenFile \
usernamePasswordFile sshPrivateKeyFilePath sshKnownHostsFilePath trustStore credentialPath)
for k in "${REWRITABLE_KEYS[@]}"; do
grep $k .hal/config &> /dev/null
FOUND_TOKEN=$?
if [ "$FOUND_TOKEN" == "0" ]; then
bold "Rewriting $k path to reflect user 'spinnaker' on Halyard Daemon pod..."
sed -i "s/$k: \/home\/$USER/$k: \/home\/spinnaker/" .hal/config
fi
done
bold "Backing up Spinnaker deployment config files..."
rm -rf deployment_config_files
mkdir deployment_config_files
copy_if_exists() {
if [ -e $1 ]; then
# If a filter token was passed, only copy the file if the token is present in the source file.
if [ $3 ]; then
if [ "$(grep $3 $1)" ]; then
cp $1 $2
fi
else
cp $1 $2
fi
fi
}
copy_if_exists "$PROPERTIES_FILE" deployment_config_files
copy_if_exists $PARENT_DIR/spinnaker-for-gcp/scripts/install/spinnakerAuditLog/config.json deployment_config_files
copy_if_exists $PARENT_DIR/spinnaker-for-gcp/scripts/install/spinnakerAuditLog/index.js deployment_config_files
# These files are generated when Spinnaker is exposed via IAP.
# If the operator is managing more than one installation, we don't want to inadvertently back up files from the wrong installation.
copy_if_exists $PARENT_DIR/spinnaker-for-gcp/scripts/expose/configure_iap_expanded.md deployment_config_files "$PROJECT_ID\."
copy_if_exists $PARENT_DIR/spinnaker-for-gcp/scripts/expose/openapi_expanded.yml deployment_config_files "$PROJECT_ID\."
copy_if_exists ~/.spin/config deployment_config_files "$PROJECT_ID\."
copy_if_exists ~/.spin/config deployment_config_files "localhost\:"
copy_if_exists ~/.spin/key.json deployment_config_files "$PROJECT_ID\."
# Remove old persistent config so new config can be copied into place.
bold "Removing halyard/$HALYARD_POD:/home/spinnaker/.hal..."
kubectl -n halyard exec $HALYARD_POD -- bash -c "rm -rf ~/.hal/*"
# Copy new config into place.
bold "Copying $HAL_PARENT_DIR/.hal into halyard/$HALYARD_POD:/home/spinnaker/.hal..."
kubectl -n halyard cp $TEMP_DIR/$CONFIG_CSR_REPO/.hal spin-halyard-0:/home/spinnaker
EXISTING_DEPLOYMENT_SECRET_NAME=$(kubectl get secret -n halyard \
--field-selector metadata.name=="spinnaker-deployment" \
-o json | jq .items[0].metadata.name)
if [ $EXISTING_DEPLOYMENT_SECRET_NAME != 'null' ]; then
bold "Deleting Kubernetes secret spinnaker-deployment..."
kubectl delete secret spinnaker-deployment -n halyard
fi
gitextract_n7i9yuwk/
├── .gitignore
├── README.md
├── apptest/
│ └── tester/
│ ├── Dockerfile
│ ├── build-and-run-tests.sh
│ ├── spinnaker-test-job.yaml
│ ├── tester.sh
│ └── tests/
│ └── basic-suite.yaml
├── ci/
│ ├── CLOUD_BUILD.md
│ ├── Dockerfile
│ ├── JENKINS.md
│ ├── README.md
│ ├── cloudbuild.yaml
│ └── install.bash
├── samples/
│ └── helloworldwebapp/
│ ├── cleanup_app_and_pipelines.sh
│ ├── create_app_and_pipelines.sh
│ ├── install.md
│ └── templates/
│ ├── pipelines/
│ │ ├── deployprod_json.template
│ │ └── deploystaging_json.template
│ └── repo/
│ ├── Dockerfile
│ ├── cloudbuild_yaml.template
│ ├── config/
│ │ ├── prod/
│ │ │ ├── namespace.yaml
│ │ │ ├── replicaset_yaml.template
│ │ │ └── service.yaml
│ │ └── staging/
│ │ ├── namespace.yaml
│ │ ├── replicaset_yaml.template
│ │ └── service.yaml
│ └── src/
│ └── main.go
├── scripts/
│ ├── cli/
│ │ ├── install_hal.sh
│ │ ├── install_spin.sh
│ │ └── update_hal.sh
│ ├── experimental/
│ │ └── configure_for_workload_identity.sh
│ ├── expose/
│ │ ├── backend-config.yml
│ │ ├── configure_endpoint.sh
│ │ ├── configure_hal_security.sh
│ │ ├── configure_iap.md
│ │ ├── configure_iap.sh
│ │ ├── deck-ingress.yml
│ │ ├── iap_policy.json
│ │ ├── launch_configure_iap.sh
│ │ ├── openapi.yml
│ │ └── set_iap_properties.sh
│ ├── install/
│ │ ├── instructions.txt
│ │ ├── provision-spinnaker.md
│ │ ├── quick-install.yml
│ │ ├── setup.sh
│ │ ├── setup_properties.sh
│ │ └── spinnakerAuditLog/
│ │ ├── config_json.template
│ │ ├── index_js.template
│ │ └── package.json
│ └── manage/
│ ├── add_gae_account.sh
│ ├── add_gce_account.sh
│ ├── add_gke_account.sh
│ ├── add_missing_properties.sh
│ ├── apply_config.sh
│ ├── check_cluster_config.sh
│ ├── check_duplicate_dirs.sh
│ ├── check_git_config.sh
│ ├── check_project_mismatch.sh
│ ├── cluster_utils.sh
│ ├── connect_to_redis.sh
│ ├── connect_unsecured.sh
│ ├── deploy_application_manifest.sh
│ ├── generate_deletion_script.sh
│ ├── grant_iap_access.sh
│ ├── instructions.txt
│ ├── landing_page_base.md
│ ├── landing_page_secured.md
│ ├── landing_page_unsecured.md
│ ├── list_samples.sh
│ ├── pull_config.sh
│ ├── push_and_apply.sh
│ ├── push_config.sh
│ ├── restore_backup_to_cloud_shell.sh
│ ├── restore_config_utils.sh
│ ├── service_utils.sh
│ ├── update_console.sh
│ ├── update_halyard_daemon.sh
│ ├── update_landing_page.sh
│ ├── update_management_environment.sh
│ └── update_spinnaker_version.sh
└── templates/
├── spinnaker_application_manifest_bottom.yaml
├── spinnaker_application_manifest_middle_secured.yaml
├── spinnaker_application_manifest_middle_unsecured.yaml
└── spinnaker_application_manifest_top.yaml
SYMBOL INDEX (2 symbols across 1 files)
FILE: samples/helloworldwebapp/templates/repo/src/main.go
function hello (line 8) | func hello(w http.ResponseWriter, r *http.Request) {
function main (line 12) | func main() {
Condensed preview — 84 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (280K chars).
[
{
"path": ".gitignore",
"chars": 414,
"preview": "InstallHalyard.sh\nsamples/helloworldwebapp/templates/pipelines/deployprod.json\nsamples/helloworldwebapp/templates/pipeli"
},
{
"path": "README.md",
"chars": 19178,
"preview": "# Install and manage Spinnaker on Google Cloud Platform\n\nSpinnaker on Google Cloud Platform is a tool for easily install"
},
{
"path": "apptest/tester/Dockerfile",
"chars": 517,
"preview": "FROM gcr.io/cloud-marketplace-tools/testrunner:0.1.2\n\nRUN apt-get update && apt-get install -y --no-install-recommends \\"
},
{
"path": "apptest/tester/build-and-run-tests.sh",
"chars": 707,
"preview": "#!/bin/bash\n#\n# Would expect this to be deleted once these tests are properly integrated with GCP Marketplace Verificati"
},
{
"path": "apptest/tester/spinnaker-test-job.yaml",
"chars": 356,
"preview": "# Would expect this to be deleted once these tests are properly integrated with GCP Marketplace Verification Pipeline.\n\n"
},
{
"path": "apptest/tester/tester.sh",
"chars": 798,
"preview": "#!/bin/bash\n#\n# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may "
},
{
"path": "apptest/tester/tests/basic-suite.yaml",
"chars": 494,
"preview": "actions:\n- name: Clouddriver is up and healthy\n bashTest:\n script: curl -k \"http://{{ .Env.CLOUDDRIVER_ADDR }}:7002/"
},
{
"path": "ci/CLOUD_BUILD.md",
"chars": 3359,
"preview": "# Using Cloud Build to install Spinnaker for GCP\n\n## A note about Shared VPC support\n\nYou can't use Cloud Build to insta"
},
{
"path": "ci/Dockerfile",
"chars": 120,
"preview": "FROM gcr.io/cloud-builders/gcloud\n\nRUN apt-get -q update && apt-get install -qqy \\ \n jq \\\n gettext-base\n\nENTRYPOINT []"
},
{
"path": "ci/JENKINS.md",
"chars": 5870,
"preview": "# Using Jenkins to install Spinnaker for GCP\n\nYou can use Jenkins to install Spinnaker for GCP. The Jenkins agent executi"
},
{
"path": "ci/README.md",
"chars": 363,
"preview": "# Installing Spinnaker for GCP on a Continuous Integration Server\n\nYou can install Spinnaker for GCP using a continuous in"
},
{
"path": "ci/cloudbuild.yaml",
"chars": 510,
"preview": "steps:\n- name: 'gcr.io/cloud-builders/docker'\n args: ['build', '-f', 'Dockerfile', '-t', 'installer', '.']\n- name: gcr."
},
{
"path": "ci/install.bash",
"chars": 142,
"preview": "#!/bin/bash\n\nset -e\n\nPARENT_DIR=/workspace PROPERTIES_FILE=/workspace/properties CI=true /workspace/spinnaker-for-gcp/sc"
},
{
"path": "samples/helloworldwebapp/cleanup_app_and_pipelines.sh",
"chars": 1848,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\ncd ~/cloudshell_open/spinnaker-for-gcp/\n\ns"
},
{
"path": "samples/helloworldwebapp/create_app_and_pipelines.sh",
"chars": 3489,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\nsource ~/cloudshell_open/spinnaker-for-gcp"
},
{
"path": "samples/helloworldwebapp/install.md",
"chars": 4749,
"preview": "# Install and run sample application and pipelines\n\n## Introduction\n\nTry out Spinnaker using the sample application prov"
},
{
"path": "samples/helloworldwebapp/templates/pipelines/deployprod_json.template",
"chars": 11462,
"preview": "{\n \"application\": \"helloworldwebapp\",\n \"description\": \"When staging deployment and validation completes, Blue/Green de"
},
{
"path": "samples/helloworldwebapp/templates/pipelines/deploystaging_json.template",
"chars": 7310,
"preview": "{\n \"application\": \"helloworldwebapp\",\n \"description\": \"On GCB build completion, deploy new image to staging environmen"
},
{
"path": "samples/helloworldwebapp/templates/repo/Dockerfile",
"chars": 112,
"preview": "FROM alpine\n\nCOPY src/gopath/bin/helloworldwebapp /go/bin/helloworldwebapp\n\nENTRYPOINT /go/bin/helloworldwebapp\n"
},
{
"path": "samples/helloworldwebapp/templates/repo/cloudbuild_yaml.template",
"chars": 809,
"preview": "steps:\n- name: 'gcr.io/cloud-builders/go'\n args: [ 'install', '$PROJECT_ID/helloworldwebapp' ]\n env: [ 'PROJECT_ROOT=$"
},
{
"path": "samples/helloworldwebapp/templates/repo/config/prod/namespace.yaml",
"chars": 75,
"preview": "---\napiVersion: v1\nkind: Namespace\nmetadata:\n name: helloworldwebapp-prod\n"
},
{
"path": "samples/helloworldwebapp/templates/repo/config/prod/replicaset_yaml.template",
"chars": 596,
"preview": "---\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n annotations:\n traffic.spinnaker.io/load-balancers: '[\"service he"
},
{
"path": "samples/helloworldwebapp/templates/repo/config/prod/service.yaml",
"chars": 291,
"preview": "---\napiVersion: v1\nkind: Service\nmetadata:\n name: helloworldwebapp-service\n namespace: helloworldwebapp-prod\nspec:\n p"
},
{
"path": "samples/helloworldwebapp/templates/repo/config/staging/namespace.yaml",
"chars": 78,
"preview": "---\napiVersion: v1\nkind: Namespace\nmetadata:\n name: helloworldwebapp-staging\n"
},
{
"path": "samples/helloworldwebapp/templates/repo/config/staging/replicaset_yaml.template",
"chars": 599,
"preview": "---\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n annotations:\n traffic.spinnaker.io/load-balancers: '[\"service he"
},
{
"path": "samples/helloworldwebapp/templates/repo/config/staging/service.yaml",
"chars": 297,
"preview": "---\napiVersion: v1\nkind: Service\nmetadata:\n name: helloworldwebapp-service\n namespace: helloworldwebapp-staging\nspec:\n"
},
{
"path": "samples/helloworldwebapp/templates/repo/src/main.go",
"chars": 271,
"preview": "package main\n\nimport (\n \"io\"\n \"net/http\"\n)\n\nfunc hello(w http.ResponseWriter, r *http.Request) {\n io.WriteString(w, \""
},
{
"path": "scripts/cli/install_hal.sh",
"chars": 1606,
"preview": "#!/usr/bin/env bash\n\nHALYARD_DAEMON_PID_FILE=~/hal/halyard/pid\n\nfunction kill_daemon() {\n pkill -F $HALYARD_DAEMON_PI"
},
{
"path": "scripts/cli/install_spin.sh",
"chars": 1187,
"preview": "#!/usr/bin/env bash\n\ncurl -LO https://storage.googleapis.com/spinnaker-artifacts/spin/$(curl -s https://storage.googleap"
},
{
"path": "scripts/cli/update_hal.sh",
"chars": 1880,
"preview": "#!/usr/bin/env bash\n\nHALYARD_DAEMON_PID_FILE=~/hal/halyard/pid\n\nfunction kill_daemon() {\n pkill -F $HALYARD_DAEMON_PI"
},
{
"path": "scripts/experimental/configure_for_workload_identity.sh",
"chars": 5417,
"preview": "#!/usr/bin/env bash\n\n# Prior to running this script, please ensure that you are running these versions or later:\n# expor"
},
{
"path": "scripts/expose/backend-config.yml",
"chars": 203,
"preview": "apiVersion: cloud.google.com/v1beta1\nkind: BackendConfig\nmetadata:\n name: config-default\n namespace: spinnaker\nspec:\n "
},
{
"path": "scripts/expose/configure_endpoint.sh",
"chars": 3789,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\npushd ~/cloudshell_open/spinnaker-for-gcp/"
},
{
"path": "scripts/expose/configure_hal_security.sh",
"chars": 262,
"preview": "~/hal/hal config security api edit --override-base-url https://$DOMAIN_NAME/gate\n~/hal/hal config security ui edit --ove"
},
{
"path": "scripts/expose/configure_iap.md",
"chars": 1338,
"preview": "# Expose Spinnaker\n\n### Configure OAuth consent screen\n\nGo to the [OAuth consent screen](https://console.developers.goog"
},
{
"path": "scripts/expose/configure_iap.sh",
"chars": 3553,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\npushd ~/cloudshell_open/spinnaker-for-gcp/"
},
{
"path": "scripts/expose/deck-ingress.yml",
"chars": 306,
"preview": "apiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n name: deck-ingress\n namespace: spinnaker\n annotations:\n in"
},
{
"path": "scripts/expose/iap_policy.json",
"chars": 291,
"preview": "{\n \"policy\": {\n \"etag\": $IAP_IAM_POLICY_ETAG,\n \"bindings\": [\n {\n \"role\": \"roles/iap.httpsResourceAcce"
},
{
"path": "scripts/expose/launch_configure_iap.sh",
"chars": 242,
"preview": "#!/usr/bin/env bash\n\npushd ~/cloudshell_open/spinnaker-for-gcp/scripts\n\nsource ./install/properties\n\ncat expose/configur"
},
{
"path": "scripts/expose/openapi.yml",
"chars": 205,
"preview": "swagger: \"2.0\"\ninfo: \n title: Spinnaker for GCP - $PROJECT_ID\n version: 1.0.0\nhost: $DOMAIN_NAME\nx-google-endpoints:\n "
},
{
"path": "scripts/expose/set_iap_properties.sh",
"chars": 1007,
"preview": "#!/usr/bin/env bash\n\nif [ -z $CLIENT_ID ]; then\n SECRET_JSON=$(kubectl get secret -n spinnaker $SECRET_NAME -o json)\n\n "
},
{
"path": "scripts/install/instructions.txt",
"chars": 744,
"preview": "\n+-------------------------------------------------------------------------------------------------------+\n| "
},
{
"path": "scripts/install/provision-spinnaker.md",
"chars": 3308,
"preview": "# Install Spinnaker\n\n## Select GCP project\n\nSelect the project in which you'll install Spinnaker, then click **Start**, "
},
{
"path": "scripts/install/quick-install.yml",
"chars": 12856,
"preview": "apiVersion: v1\nkind: Namespace\nmetadata:\n name: halyard\n\n---\n\napiVersion: v1\nkind: Namespace\nmetadata:\n name: spinnake"
},
{
"path": "scripts/install/setup.sh",
"chars": 17157,
"preview": "#!/usr/bin/env bash\n\nerr() {\n echo \"$*\" >&2;\n}\n\n[ -z \"$PARENT_DIR\" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut"
},
{
"path": "scripts/install/setup_properties.sh",
"chars": 8830,
"preview": "#!/usr/bin/env bash\n\n# PROJECT_ID should be set, but we will try to determine via gcloud config if not set.\n# DEPLOYMENT"
},
{
"path": "scripts/install/spinnakerAuditLog/config_json.template",
"chars": 170,
"preview": "{\n \"USERNAME\": \"$AUDIT_LOG_UNAME\",\n \"PASSWORD\": \"$AUDIT_LOG_PW\",\n \"TIMEZONE\": \"$TIMEZONE\",\n \"PROJECT_ID\": \"$PROJECT_"
},
{
"path": "scripts/install/spinnakerAuditLog/index_js.template",
"chars": 8522,
"preview": "const config = require('./config.json');\nconst moment = require('moment-timezone');\nconst {Logging} = require('@google-c"
},
{
"path": "scripts/install/spinnakerAuditLog/package.json",
"chars": 99,
"preview": "{\n \"dependencies\": {\n \"@google-cloud/logging\": \"4.1.1\",\n \"moment-timezone\": \"^0.5.11\"\n }\n}\n"
},
{
"path": "scripts/manage/add_gae_account.sh",
"chars": 2041,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\nsource ~/cloudshell_open/spinnaker-for-gcp"
},
{
"path": "scripts/manage/add_gce_account.sh",
"chars": 2044,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\nsource ~/cloudshell_open/spinnaker-for-gcp"
},
{
"path": "scripts/manage/add_gke_account.sh",
"chars": 3093,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\nsource ~/cloudshell_open/spinnaker-for-gcp"
},
{
"path": "scripts/manage/add_missing_properties.sh",
"chars": 1013,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\n[ -z \"$PARENT_DIR\" ] && PARENT_DIR=$(dirna"
},
{
"path": "scripts/manage/apply_config.sh",
"chars": 243,
"preview": "#!/usr/bin/env bash\n\nHALYARD_POD=spin-halyard-0\n\n# TODO(duftler): Use --wait-for-completion?\nkubectl exec $HALYARD_POD -"
},
{
"path": "scripts/manage/check_cluster_config.sh",
"chars": 4821,
"preview": "#!/usr/bin/env bash\n\n# The logic is roughly as follows:\n# Check each configured context that is in the specified proje"
},
{
"path": "scripts/manage/check_duplicate_dirs.sh",
"chars": 1912,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \"$(tput bold)\"\"$*\" \"$(tput sgr0)\";\n}\n\nif [ -d \"$HOME/cloudshell_open\" ]; then\n MAT"
},
{
"path": "scripts/manage/check_git_config.sh",
"chars": 580,
"preview": "[ -z \"$PARENT_DIR\" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)\n\nsource $PARENT_DIR/spinnake"
},
{
"path": "scripts/manage/check_project_mismatch.sh",
"chars": 791,
"preview": "#!/usr/bin/env bash\n\n[ -z \"$PARENT_DIR\" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)\n\nsource"
},
{
"path": "scripts/manage/cluster_utils.sh",
"chars": 2719,
"preview": "#!/usr/bin/env bash\n\n[ -z \"$PARENT_DIR\" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)\n\nsource"
},
{
"path": "scripts/manage/connect_to_redis.sh",
"chars": 1324,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\nsource ~/cloudshell_open/spinnaker-for-gcp"
},
{
"path": "scripts/manage/connect_unsecured.sh",
"chars": 964,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\nsource ~/cloudshell_open/spinnaker-for-gcp"
},
{
"path": "scripts/manage/deploy_application_manifest.sh",
"chars": 3388,
"preview": "#!/usr/bin/env bash\n\n[ -z \"$PARENT_DIR\" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)\n\nsource"
},
{
"path": "scripts/manage/generate_deletion_script.sh",
"chars": 3931,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\nsource ~/cloudshell_open/spinnaker-for-gcp"
},
{
"path": "scripts/manage/grant_iap_access.sh",
"chars": 1811,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\necho \"Please enter the member you wish to "
},
{
"path": "scripts/manage/instructions.txt",
"chars": 653,
"preview": "\n+------------------------------------------------------------------------------------------+\n| "
},
{
"path": "scripts/manage/landing_page_base.md",
"chars": 7198,
"preview": "# Manage Spinnaker\n\nUse this section to manage your Spinnaker deployment going forward.\n\n## Select GCP project\n\nSelect t"
},
{
"path": "scripts/manage/landing_page_secured.md",
"chars": 1573,
"preview": "## Configure User Access (IAP)\n\n```bash\n~/cloudshell_open/spinnaker-for-gcp/scripts/manage/grant_iap_access.sh\n```\n\nAlte"
},
{
"path": "scripts/manage/landing_page_unsecured.md",
"chars": 1938,
"preview": "## Use Spinnaker\n\n### Forward the port to Deck, and connect\n\nDon't use the `hal deploy connect` command. Instead, use th"
},
{
"path": "scripts/manage/list_samples.sh",
"chars": 688,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\nbold \"Here is a list of sample application"
},
{
"path": "scripts/manage/pull_config.sh",
"chars": 2773,
"preview": "#!/usr/bin/env bash\n\n[ -z \"$PARENT_DIR\" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)\n\nif [ \""
},
{
"path": "scripts/manage/push_and_apply.sh",
"chars": 164,
"preview": "#!/usr/bin/env bash\n\n~/cloudshell_open/spinnaker-for-gcp/scripts/manage/push_config.sh || exit 1\n~/cloudshell_open/spinn"
},
{
"path": "scripts/manage/push_config.sh",
"chars": 6434,
"preview": "#!/usr/bin/env bash\n\n[ -z \"$PARENT_DIR\" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)\n\nif [ \""
},
{
"path": "scripts/manage/restore_backup_to_cloud_shell.sh",
"chars": 3657,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\n~/cloudshell_open/spinnaker-for-gcp/script"
},
{
"path": "scripts/manage/restore_config_utils.sh",
"chars": 1489,
"preview": "#!/usr/bin/env bash\n\n[ -z \"$PARENT_DIR\" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)\n\nif [ \""
},
{
"path": "scripts/manage/service_utils.sh",
"chars": 1345,
"preview": "#!/usr/bin/env bash\n\n[ -z \"$PARENT_DIR\" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)\n\n[ -z \""
},
{
"path": "scripts/manage/update_console.sh",
"chars": 297,
"preview": "#!/usr/bin/env bash\n\n[ -z \"$PARENT_DIR\" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)\n\n$PAREN"
},
{
"path": "scripts/manage/update_halyard_daemon.sh",
"chars": 405,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\nsource ~/cloudshell_open/spinnaker-for-gcp"
},
{
"path": "scripts/manage/update_landing_page.sh",
"chars": 1417,
"preview": "#!/usr/bin/env bash\n\n[ -z \"$PARENT_DIR\" ] && PARENT_DIR=$(dirname $(realpath $0) | rev | cut -d '/' -f 4- | rev)\n\nsource"
},
{
"path": "scripts/manage/update_management_environment.sh",
"chars": 1110,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\npushd ~/cloudshell_open/spinnaker-for-gcp/"
},
{
"path": "scripts/manage/update_spinnaker_version.sh",
"chars": 411,
"preview": "#!/usr/bin/env bash\n\nbold() {\n echo \". $(tput bold)\" \"$*\" \"$(tput sgr0)\";\n}\n\nsource ~/cloudshell_open/spinnaker-for-gcp"
},
{
"path": "templates/spinnaker_application_manifest_bottom.yaml",
"chars": 395,
"preview": " info:\n - name: Application Namespace\n value: spinnaker\n selector:\n matchLabels:\n app.kubernetes.io/name: "
},
{
"path": "templates/spinnaker_application_manifest_middle_secured.yaml",
"chars": 1439,
"preview": " notes: |-\n # Manage your Spinnaker\n [Open management console in Cloud Shell](https://console.cloud.google."
},
{
"path": "templates/spinnaker_application_manifest_middle_unsecured.yaml",
"chars": 1579,
"preview": " notes: |-\n # Manage your Spinnaker\n [Open management console in Cloud Shell](https://console.cloud.google."
},
{
"path": "templates/spinnaker_application_manifest_top.yaml",
"chars": 61933,
"preview": "---\napiVersion: app.k8s.io/v1beta1\nkind: Application\nmetadata:\n name: $DEPLOYMENT_NAME\n namespace: spinnaker\n annotat"
}
]
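The index above is a plain JSON array of `{path, chars, preview}` records, so it can be post-processed directly with standard tooling. A minimal sketch in Python (the two records below are copied from the listing; in practice the full array would be loaded from the downloaded `.json` file, and the variable names are illustrative):

```python
import json

# Two records copied verbatim from the index above; a real run would load
# the complete 84-entry array instead of this inline sample.
raw = '''[
  {"path": "ci/Dockerfile", "chars": 120, "preview": "FROM gcr.io/cloud-builders/gcloud"},
  {"path": "README.md", "chars": 19178, "preview": "# Install and manage Spinnaker on Google Cloud Platform"}
]'''

index = json.loads(raw)

# Aggregate statistics over the file index.
total_chars = sum(entry["chars"] for entry in index)
largest = max(index, key=lambda e: e["chars"])

print(total_chars)      # 19298
print(largest["path"])  # README.md
```

Note that `chars` counts the full file, while `preview` is a truncated snippet, so the two fields will not agree in length for larger files.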
About this extraction: this text contains the full source code of the GoogleCloudPlatform/spinnaker-for-gcp GitHub repository — 84 files (259.1 KB, approximately 95.7k tokens) plus a symbol index of extracted functions and types — formatted as plain text for AI tools that accept text input.